Browsing by Subject "System design"
Now showing 1 - 9 of 9
Item Agile development of Domain-Specific solutions for emerging mobile systems (2022-10-05) Boroujerdian, Behzad; Janapa Reddi, Vijay; Tiwari, Mohit; Gerstlauer, Andreas; John, Lizy K.; Subramanian, Lavanya
Emerging mobile systems such as robots or Augmented Reality glasses increasingly interact with their physical surroundings. This means they require efficient computing platforms that can continuously process a high volume of sensory information on a battery-powered, energy-limited device. Highly specialized Domain-Specific Systems on Chip (DSSoCs) have recently been proposed to provide this efficiency. However, although efficient, the complexity of DSSoCs, driven by a high count of hardware intellectual property (IP) blocks, results in a long development time, specifically a long design exploration and implementation time. A high IP count enlarges the design space, makes the search for an optimal design difficult, and thus lengthens the exploration time. In addition, each new IP in the system increases the implementation time, i.e., the time to convert an algorithm into a hardware IP, verify it, and integrate it. These two problems only worsen as future systems' intelligence, and thus their computational needs, expand. This dissertation demonstrates how to mitigate DSSoCs' long exploration and implementation times by employing efficient resource selection and resource management techniques. We demonstrate how to lower the exploration time using an agile design space exploration (DSE) framework that efficiently selects system resources. Like other DSEs, our framework consists of a simulator and an exploration heuristic. In contrast to state-of-the-art simulators, which are either too slow or too inaccurate for sufficient coverage of DSSoCs' large design space, our simulator is highly suited to DSSoCs because it combines the agility of analytical models with the accuracy of phase-driven models.
Our results show that this methodology speeds up state-of-the-art transactional models by 8400× while incurring only a small 1.5% error across a host of complex SoCs. Our DSE also lowers the exploration time by deploying an agile search heuristic that efficiently navigates DSSoCs' large design space. In this dissertation, we highlight two features of an efficient search heuristic, namely 1) joint-optimization capability to exploit cross-boundary optimizations and 2) architectural awareness to prevent blind traversal of the design space. State-of-the-art solutions lack one or the other. In contrast, our methodology combines the two and speeds up the convergence of the baselines, i.e., classic simulated annealing (SA) and the more modern Multi-Objective Optimistic Search (MOOS), by 62× and 35×, respectively. It also improves their quality gain and Pareto hypervolume. We also demonstrate how to lower the implementation time by replacing high-effort IP customization with low-effort, environment-aware resource management. Our solution turns the physical environment's heterogeneity, a prominent characteristic of the mobile edge domain, to its advantage and dynamically manages system resources to improve efficiency. To this end, we first show that various spatial heterogeneity factors, such as environment congestion, affect the processing payload. We then establish that designs that ignore this heterogeneity incur system-level performance and energy degradation and thus require specialized IPs to recover efficiency; however, introducing IPs increases the implementation time and effort. To address this problem, we provide a resource management runtime that dynamically exploits this compute-environment synergy and improves system efficiency without introducing customized IPs. We implement our runtime in the Robot Operating System (ROS) middleware and, without loss of generality, evaluate it on autonomous drones.
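The abstract implements its environment-aware runtime in ROS; as a purely illustrative sketch of the idea (not the actual system), a runtime might trade top speed against sensing headroom as environment congestion rises. The function name, speed range, and congestion metric below are invented:

```python
def plan_speed(obstacle_density, v_max=10.0, v_min=1.0):
    """Scale velocity down as the environment gets more congested,
    trading speed for sensing headroom instead of adding specialized hardware.
    obstacle_density is a hypothetical congestion measure in [0, 1]."""
    congestion = min(1.0, max(0.0, obstacle_density))  # clamp to [0, 1]
    return v_min + (v_max - v_min) * (1.0 - congestion)

# A statically configured system would fly at one conservative speed everywhere;
# an environment-aware runtime re-plans as congestion changes.
for density in (0.0, 0.5, 1.0):
    print(density, plan_speed(density))
```

A spatially oblivious deployment corresponds to picking one of these speeds at design time and keeping it everywhere.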
We compare our runtime with a spatially oblivious system, typical of traditional commercial deployments, whose parameters are statically set at design time. We show that exploiting spatial heterogeneity yields a 4.5× improvement in mission time, a 4× improvement in energy efficiency, and a 36% reduction in CPU utilization. The contributions of this dissertation are likely to have a long-term impact: with the end of Moore's law, highly specialized DSSoCs have shown promise in addressing general-purpose systems' lack of efficiency, so understanding and mitigating DSSoCs' design and development issues is of high importance.

Item Automatic workload synthesis for early design studies and performance model validation (2005) Bell, Robert Henry; John, Lizy Kurian
Computer designers rely on simulation systems to assess the performance of their designs before a design is transferred to silicon and manufactured. Simulators are used in early design studies to obtain projections of performance and power over a large space of potential designs. Modern simulation systems can be four orders of magnitude slower than native hardware execution. At the same time, the number of applications and their dynamic instruction counts have expanded dramatically. In addition, simulation systems need to be validated against cycle-accurate models to ensure accurate performance projections. In prior work, long-running applications are used for early design studies while hand-coded microbenchmarks are used for performance model validation. One proposed solution for early design studies is statistical simulation, in which statistics from the workload characterization of an executing application are used to create a synthetic instruction trace that is executed on a fast performance simulator. In prior work, workload statistics are collected as average behaviors based on instruction types. In the present research, statistics are collected at the granularity of the basic block.
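The basic-block idea can be sketched as a small probabilistic walk over per-block statistics; the flow graph, block names, and instruction mixes below are invented for illustration and are far simpler than a real workload characterization:

```python
import random

# Hypothetical basic-block statistics: successor probabilities and a per-block
# instruction mix, as might be gathered when characterizing a workload.
flow_graph = {
    "B0": {"succ": [("B1", 0.7), ("B2", 0.3)], "instrs": ["load", "add", "branch"]},
    "B1": {"succ": [("B0", 0.6), ("B2", 0.4)], "instrs": ["mul", "store", "branch"]},
    "B2": {"succ": [("B0", 1.0)], "instrs": ["add", "branch"]},
}

def synthesize_trace(graph, start="B0", length=10, seed=0):
    """Walk the flow graph block by block, emitting one instruction stream."""
    rng = random.Random(seed)
    block, trace = start, []
    while len(trace) < length:
        trace.extend(graph[block]["instrs"])          # emit this block's mix
        succs, weights = zip(*graph[block]["succ"])   # pick a weighted successor
        block = rng.choices(succs, weights=weights)[0]
    return trace[:length]

print(synthesize_trace(flow_graph))
```

Averaging statistics over all instruction types would discard the per-block structure that this graph preserves.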
Collecting statistics per basic block improves the simulation accuracy of individual instructions. The basic-block statistics form a statistical flow graph that provides a reduced representation of the application. The synthetic trace generated from a traversal of the flow graph is combined with memory access models, branching models, and novel program synthesis techniques to automatically create executable code that is useful for performance model validation. Runtimes of the synthetic versions of the SPEC CPU, STREAM, TPC-C, and Java applications are orders of magnitude shorter than those of the original applications, with performance and power dissipation correlating to within 2.4% and 6.4%, respectively, on average. The synthetic codes are portable to a variety of platforms, permitting validation across diverse models and hardware. Synthetic workload characteristics can easily be modified to model different or future workloads. The use of statistics abstracts proprietary code, encouraging code sharing between industry and academia. The significantly reduced execution times consolidate the traditionally disparate workloads used for early design studies and model validation.

Item Beleaf : an earth-friendly solution to disposable dinnerware (2011-05) Adhikary, Amrita Prasad; Hall, Peter, 1965-; Olsen, Daniel M., 1963-
This report documents an investigative design process that examines how small shifts in established systems can be reconfigured to make big changes. It attempts to establish a framework for designing sustainable solutions with the environment and social good in mind. In addressing the problems resulting from our indiscriminate use of plastic disposable dinnerware, and in offering a viable, earth-friendly system solution to them, I am interested in reminding fellow designers that accountability to the environment is the new design reality. The report advocates methods that synthesize design for people, profit, and, most importantly, the planet.
By using plates made from fallen leaves, the user fulfills a specific need for disposable dinnerware while simultaneously participating in the environmental task of closing the loop through responsible disposal and composting.

Item Detecting and tolerating faults in distributed systems (2008-12) Ogale, Vinit Arun, 1979-; Garg, Vijay K. (Vijay Kumar), 1963-
This dissertation presents techniques for detecting and tolerating faults in distributed systems. Detecting faults in distributed or parallel systems is often very difficult. We look at the problem of determining whether a property or assertion was true in a computation. We formally define a logic called BTL that can be used to specify such properties. Our logic takes temporal properties into consideration, as these are often necessary for expressing conditions like safety violations and deadlocks. We introduce the idea of a basis of a computation with respect to a property: a compact and exact representation of the states of the computation where the property was true. We exploit the lattice structure of the computation and the structure of different types of properties to avoid brute-force approaches. We show that it is possible to efficiently detect all properties that can be expressed using nested negations, disjunctions, conjunctions, and the temporal operators possibly and always. Our algorithm is polynomial in the number of processes and events in the system, though exponential in the size of the property. After faults are detected, it is necessary to act on them and, whenever possible, continue operation with minimal impact. This dissertation also deals with designing systems that can recover from faults. We look at techniques for tolerating faults in data and in program state. In particular, we consider the problem where multiple servers hold different data and program state, all of which must be backed up to tolerate failures.
Most current approaches to this problem involve some form of replication, while approaches based on erasure coding have high computational and communication overheads. We introduce the idea of fusible data structures for backing up data. This approach relies on the inherent structure of the data to determine techniques for combining multiple such structures on different servers into a single backup data structure. We show that most commonly used data structures, such as arrays, lists, stacks, and queues, are fusible, and we present algorithms for fusing them. This approach requires less space than replication without increasing the time complexity of any update. In case of failure, data from the backup and the other non-failed servers is required for recovery. To maintain program state across failures, we assume that programs can be represented by deterministic finite state machines. Though this approach may not yet be practical for large programs, it is very useful for small concurrent programs such as sensor networks or finite state machines in hardware designs. We present the theory of fusion of state machines: given a set of such machines, we present a polynomial-time algorithm to compute another set of machines that can tolerate the required number of faults in the system.

Item Essays of new information systems design and pricing for supporting information economy (2005) Fang, Fang; Whinston, Andrew B.
In recent years, the rapid development of the Internet has significantly changed people's lifestyles and the way business is conducted, even if accompanied by fear and skepticism. Research on developing new business models around innovative uses of the Internet has become very important for justifying the Internet's value. In addition, designing new pricing mechanisms tailored to new Internet access infrastructures has great value in supporting and expanding Internet usage. This dissertation contains three essays exploring these issues.
The first essay describes and examines an emerging Internet business, the "prediction market." Such a market is specifically designed to collect dispersed information from a wide variety of agents. To achieve efficient information elicitation and aggregation, agents are characterized according to their information precision and the cost of inducing their information. The optimal selection rules are characterized, and a betting mechanism that implements this selection rule is proposed and analyzed. The second essay proposes a new pricing mechanism for price discrimination under demand uncertainty, which can be applied to allocating Internet access. An option framework is used so that users with a higher valuation for Internet usage can purchase options beforehand, giving them the right to exercise the option and obtain guaranteed demand execution. Such an option framework has three advantages over previous congestion pricing mechanisms. First, it helps reveal customers' valuation information ex ante and hence allows the service provider to conduct price discrimination. Second, it improves allocation efficiency when capacity is tight. Lastly, it provides customers' demand information ex ante to inform the service provider's capacity investment decisions. The third essay examines the pricing issue for an emerging network infrastructure, the wireless mesh network. In such a network, every user can become a router and hence decide whether, and how much, traffic to forward for their neighbors. Overall network quality is highly sensitive to where users are located and whether they have an incentive to share their device capacity with their neighbors. The profitability of a decentralized linear pricing scheme is analyzed under such a network infrastructure.
The efficiency loss due to individual users' pricing power is estimated and compared with a traditional WiMAX infrastructure.

Item Essays on market-based information systems design and e-supply chain (2005-12) Guo, Zhiling, 1974-; Whinston, Andrew B.

Item HyPerModels: hyperdimensional performance models for engineering design (2005) Turner, Cameron John; Crawford, Richard H.
Engineering design is an iterative process in which the designer determines an appropriate set of design variables and cycle parameters so as to achieve a set of performance-index goals. The relationships among design variables, cycle parameters, and performance indices define the design space, a hyperdimensional representation of possible designs. To represent the design space, engineers employ metamodels, approximate or surrogate models of other models. Metamodels may be constructed from a wide variety of mathematical basis functions, but Hyperdimensional Performance Models (HyPerModels) derived from Non-Uniform Rational B-splines (NURBs) offer many unique advantages over other metamodeling approaches. NURBs are defined by a set of control points, knot vectors, and the NURBs orders, resulting in a highly robust and flexible curve definition that has become the de facto computer graphics standard. The defining components of a NURBs HyPerModel can be used to define adaptive sequential sampling algorithms that let the designer efficiently survey the design space for interesting regions. The data collected from design space surveys can be represented with a HyPerModel by adapting NURBs fitting algorithms, originally developed for computer graphics, to the unique challenges of representing a hyperdimensional design space. With a HyPerModel representation, visualization of the design space, or of design subspaces such as the Pareto subspace, becomes possible.
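For readers unfamiliar with the machinery, the B-spline core of NURBs (without the rational weights) can be sketched with the standard Cox-de Boor recursion; the knot vector and control points below are arbitrary examples, not a HyPerModel:

```python
def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion: value of basis function N_{i,p} at parameter u."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:        # guard against zero-width knot spans
        left = (u - knots[i]) / (knots[i + p] - knots[i]) \
            * bspline_basis(i, p - 1, u, knots)
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) \
            * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

def bspline_point(u, control_points, p, knots):
    """Evaluate a B-spline curve as the basis-weighted sum of control points."""
    return sum(bspline_basis(i, p, u, knots) * c
               for i, c in enumerate(control_points))

# Quadratic curve with a clamped knot vector, so u = 0 hits the first control point.
knots = [0, 0, 0, 1, 1, 1]
ctrl = [0.0, 2.0, 1.0]
print(bspline_point(0.0, ctrl, 2, knots))   # 0.0
print(bspline_point(0.5, ctrl, 2, knots))   # 1.25
```

A full NURBs curve would additionally weight each control point and normalize by the weighted basis sum, and a HyPerModel generalizes this to many parametric dimensions.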
HyPerModels support design space analysis for adaptive sequential sampling, for detecting robust design space regions, and for fault detection by comparing multiple HyPerModels obtained from the same system. Significantly, HyPerModels uniquely allow multi-start optimization algorithms to locate the global metamodel optimum in finite time. Each of these capabilities is demonstrated on example problems, including brushless DC motor fault detection and composite-material I-beam and gas turbine engine design, using the HyPerMaps software package. HyPerMaps implements the algorithms needed to adaptively sample a design space, construct a HyPerModel, and use a HyPerModel for visualization, analysis, or optimization. With HyPerMaps, an engineering designer has a window into the hyperdimensional design space, allowing exploration for undiscovered design variable combinations with superior performance capabilities.

Item Methodology for creating human-centered robots : design and system integration of a compliant mobile base (2012-05) Wong, Pius Duc-min; Sentis, Luis; Deshpande, Ashish
Robots have growing potential to enter the daily lives of people at home, at work, and in cities for a variety of service, care, and entertainment tasks. However, several challenges currently prevent the widespread production and use of such human-centered robots. The first goal of this thesis was to help overcome one of these broad challenges: the lack of basic safety in human-robot physical interaction. Whole-body compliant control algorithms had previously been simulated that could allow safer movement of complex robots such as humanoids, but no robot had yet been documented to actually implement these algorithms. Therefore, a wheeled humanoid robot, "Dreamer," was developed to implement the algorithms and explore additional concepts in human-safe robotics. The lower mobile base of Dreamer, dubbed "Trikey," is the focus of this work.
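Whole-body compliant control is far richer than any toy example, but the flavor of compliance, tracking a target softly through a saturated spring-damper law rather than a stiff position command, can be sketched for a single joint; the gains, limits, and integration scheme below are illustrative only and are not Dreamer's controller:

```python
def impedance_torque(q, qd, q_des, stiffness=2.0, damping=1.0, tau_max=2.0):
    """Spring-damper joint torque, saturated so contact forces stay bounded."""
    tau = stiffness * (q_des - q) - damping * qd
    return max(-tau_max, min(tau_max, tau))

# Simulate a unit-inertia joint being pulled gently toward q_des = 1.0 rad.
q, qd, dt = 0.0, 0.0, 0.01
for _ in range(1000):                      # 10 seconds of simulated time
    qd += impedance_torque(q, qd, 1.0) * dt
    q += qd * dt
print(round(q, 2))
```

The torque bound is what makes the behavior safe around people: even a large tracking error can only produce a limited interaction force.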
Trikey was developed iteratively, undergoing cycles of concept generation, design, modeling, fabrication, integration, testing, and refinement. Test results showed that Trikey and Dreamer safely performed movements under whole-body compliant control, a novel achievement. Dreamer will serve as a platform for future research and education on new human-friendly traits and behaviors. Finally, this thesis addresses a second broad challenge facing the field: the lack of a standard design methodology for human-centered robots. Based on the experience of building Trikey and Dreamer, a set of consistent design guidelines and metrics for the field is suggested. They account for the complex nature of such systems, which must address safety, performance, user-friendliness, and the capability for intelligent behavior.

Item Wireless transceiver for the TLL5000 platform : an exercise in system design (2009-12) Perkey, Jason Cecil; Gharpurey, Ranjit; McDermott, Mark William (Ph. D. in electrical and computer engineering)
This paper presents the hardware system design, development, and implementation plan for a wireless transceiver for The Learning Labs 5000 (TLL5000) educational platform. The project is a collaborative effort by Vanessa Canac, Atif Habib, and Jason Perkey to design and implement a complete wireless system, including physical hardware, physical-layer (PHY-layer) modulation and filters, error correction, drivers, and user-interface software. While the TLL5000 offers a number of features for a wide variety of applications, there is currently no system in place for transmitting data wirelessly from one circuit board to another. The system proposed in this report comprises an external transceiver that communicates with a software application running on the TLL-SILC 6219 ARM9 processor, which is interfaced with the TLL5000 baseboard.
The details of a reference design, the hardware from the GNU Radio project, are discussed as a baseline and source of information. The state of the project and the hardware design are presented, along with the specific portions of the project to which Jason Perkey made significant contributions.