Browsing by Subject "Energy efficiency"
Now showing 1 - 20 of 24
Item: A comprehensive study on electrochromic glazing versus conventional shading device in the context of energy efficiency (2018-05)
Amindeldar, Sanaz; Fajkus, Matthew
This study is a comparative, simulation-based energy performance assessment of conventional shading devices and electrochromic (EC) glazing in an office building in a hot climate. Seven fixed exterior shading devices and four EC glazing alternatives were modeled in EnergyPlus, and their energy-saving potential was compared for the south, east, and west orientations. The results indicate that each alternative scenario achieves a different level of energy saving (in heating, cooling, and electrical lighting), which also varies by orientation. EC glazing can provide either considerable savings or considerable waste in different orientations, depending strongly on the control strategy. The comprehensive analysis provided in this thesis helps designers choose among the alternatives with an understanding of energy efficiency according to their criteria of concern.

Item: Analytical methods and strategies for using the energy-water nexus to achieve cross-cutting efficiency gains (2013-12)
Sanders, Kelly Twomey; Webber, Michael E., 1971-
Energy and water resources share an important interdependency. Large quantities of energy are required to move, purify, heat, and pressurize water, while large volumes of water are necessary to extract primary energy, refine fuels, and generate electricity. This relationship, commonly referred to as the energy-water nexus, can introduce vulnerabilities to energy and water services when insufficient access to either resource inhibits access to the other. It also creates areas of opportunity, since water conservation can lead to energy conservation and energy conservation can reduce water demand.
This dissertation analyzes both sides of the energy-water nexus by (1) quantifying the extent of the relationship between these two resources and (2) identifying strategies for synergistic conservation. It is organized into two prevailing themes: the energy consumed for water services and the water used in the power sector. Chapter 2 describes a national assessment of the United States' energy consumption for water services. This assessment is the first to quantify energy embedded in water at the national scale with a methodology that differentiates consistently between primary and secondary uses of energy for water. The analysis indicates that energy use in the residential, commercial, industrial, and power sectors for direct water and steam services was approximately 12.3 quadrillion BTU, or 12.6% of 2010 annual primary energy consumption in the United States. Additional energy was used to generate steam for indirect process heating, space heating, and electricity generation. Chapter 3 explores the potential energy and emissions reductions that might follow regional shifts in residential water heating technologies. Results suggest that the scale of energy and emissions benefits derived from shifts in water heating technologies depends on regional characteristics such as climate, electricity generation mix, water use trends, and population demographics. The largest opportunities for energy and emissions reductions through changes in water heating approaches are in locations with carbon dioxide-intensive electricity mixes; however, these are generally the areas least likely to shift toward more environmentally advantageous devices. In Chapter 4, water withdrawal and consumption rates for 310 electric generation units in Texas are incorporated into a unit commitment and dispatch model of ERCOT to simulate water use at the grid scale for a baseline 2011 case. Then, the potential for water conservation in the power generation sector is explored.
Results suggest that the power sector might be a viable target for cost-effective reductions in water withdrawals, but reductions in water consumption are more difficult and more expensive to target.

Item: Assessing the performance of demand-side strategies and renewables : cost and energy implications for the residential sector (2015-05)
Bouhou, Nour El Imane; Machemehl, Randy B.; Blackhurst, Michael F.; Caldas, Carlos H; Olmstead, Sheila M; Hersh, Matthew
Many public and private entities have heavily invested in efficiency measures and renewable sources to generate energy savings and reduce fossil fuel consumption. Private utilities have invested over $4 billion in energy efficiency, with 56% of these investments directed toward consumer incentives. However, the magnitude of the expected savings and the effectiveness of the technological measures remain uncertain. Multiple studies attribute these uncertainties to behavioral phenomena such as “the rebound effect.” This work provides insights into the uncertainties generating potential differences between expected and observed performance of demand-side measures (DSM) and distributed generation strategies, using mixed methods that employ both empirical analyses and engineering economics. This study also provides guidelines to help stakeholders direct the benefits of DSM strategies toward asset preservation for affordable multifamily housing. Section 2 describes how joint efficiency gains compare to similar singular efficiency gains for single-family households and discusses the implications of these differences. This work provides empirical models of marginal technical change for multiple residential electricity end-uses, including space conditioning technologies, appliances, devices, and electric vehicles.
Results indicate that the relative household level of technological sophistication significantly influences the performance of demand-side measures, particularly the presence of a programmable thermostat. As for space conditioning, results demonstrate that sufficient, consistent technical improvement leads to net energy savings, which could be due to technical factors or to a declining marginal rebound effect. Section 3 empirically evaluates the performance of distributed residential photovoltaic (PV) solar panels and identifies the technological and demographic factors influencing PV performance and adoption choice. Results show that modeling PV adoption choice significantly impacts household energy demand, suggesting that the differences between actual evaluated behavioral responses and self-reported changes in electricity consumption are more complex than assumed by other studies. The analysis indicates that electricity use decreases marginally for PV adopters if sufficient efficiency improvements in space conditioning are made. Results further imply that households that adopt solar panels might “take back” roughly 24% of the annual electricity production of PV technologies. Section 4 describes replicable engineering economic models for estimating conventional rehabilitation, energy, and water retrofit costs for low-income multi-family housing units. The purpose of this study is to prioritize policy interventions aimed at maintaining property location and use, and to identify the capital investment needs that could be partially provided by local and state housing authorities. Section 5 synthesizes the work, describes future work, and provides guidelines for local and state efficiency program administrators along with insights on prioritizing and designing efficiency interventions.

Item: Beam lift - a study of important parameters : (1) well bore orientation effects on liquid entry into the pump. (2) Pumping unit counterbalance effects on power usage.
(3) Pump friction and the use of sinker bars. (2016-05)
Carroll, Grayson Michael; Bommer, Paul Michael; Espinoza, David N.
This study discusses three different aspects of rod pumping. Chapter 1 focuses on flow regimes associated with low-pressure horizontal wells. By understanding how oil and gas interact with each other in both the horizontal and vertical portions of the wellbore, downhole pump assemblies can be optimized to increase pump fillage. The addition of a flexible dip tube into the horizontal section of the wellbore makes it possible to set the pump above the kickoff point, but this is only effective if the dip tube can be engineered to remain submerged in the fluid. Chapter 2 is an evaluation of the effect of pumping unit counterbalance on power consumption. If the pumping unit is out of balance, it will generate power during portions of the stroke. The Motorwise motor controller attempts to save power by shutting off the motor and allowing the rotational inertia of the unit to operate the pump. Although this device does save power on pumping units that are out of balance, it was concluded to be of little use if the operator can keep the pumping unit balanced. Chapter 3 discusses the role of viscous friction in rod string design and the importance of sinker bars in counteracting compression forces at pump level. On the downstroke, the plunger must overcome any mechanical friction as well as the viscous friction from fluid flowing through the traveling valve and the annular space between the plunger and the inside of the barrel. In a barrel completely full of liquid, the plunger will establish a free-fall velocity. If the plunger is required to fall faster than free fall, the plunger must be pushed. If the plunger can reach terminal velocity with additional weight, the increase in viscous friction inside the pump equals the weight added.
Thus, the critical plunger velocity for the onset of buckling of a ¾” sucker rod varies depending on the viscosity of the fluid. Adding a single 1.5”, 25-foot sinker bar is sufficient to counteract the compression from viscous pump friction up to practical pumping speed limits.

Item: Comparison of two architectural intervention strategies for climate resilience : floor-to-ceiling height and overhang depth (2020-12)
Lehr, Robert Z.; Felkner, Julianna
We compare two design intervention strategies for building resilience to climate change in Austin, Texas, analyzing their impact on total energy use and cooling load, both of which are projected to increase from 2020 to 2100 [1]. The comparison uses a building designed to 2015 IECC standards, modeled in the eQuest 3.65 energy modeling tool. The design intervention strategies compared are floor-to-ceiling height and overhang depth, evaluated for their impact on reducing the total energy use and cooling load of a mid-rise office building. The simulations in eQuest are modeled for three different decades (2020, 2050, and 2100) and compared through the use of projected weather files generated by the IPCC. Against a baseline building with nine-foot ceilings and no overhangs, two additional building models are simulated for each year: (1) adding a three-foot overhang to the baseline building, and (2) reducing the baseline ceiling height to eight feet with no overhang.
The results of the simulation show that (a) design intervention 1 yields an overall energy use reduction of up to 5219 kWh (1.182%) by 2050 and 7058 kWh (1.517%) by 2100, (b) design intervention 2 yields an overall energy use reduction of up to 5246 kWh (1.188%) by 2050 and 6937 kWh (1.276%) by 2100, and (c) the cooling load distribution does not follow this same pattern, with design intervention 2 reducing the cooling capacity the most in both 2050 and 2100.

Item: Economic analysis on energy efficiency certificate trading in Texas (2021-04-02)
Khomaini, Achmad Zulfikar; Zarnikau, Jay William, 1959-; Spence, David B.
Energy efficiency is key to sustainable development; thus, decoupling economic growth from unsustainable energy demand is essential. The Public Utility Commission of Texas (PUCT) has mandated annual energy efficiency goals for several utility companies. While some utilities have been able to meet the goals easily, other utility companies have struggled to meet them. An energy efficiency certificate is issued by independent certifying bodies to confirm market actors' energy-savings claims resulting from energy efficiency improvement measures. Certificate trading enables utilities to buy certificates to meet their energy efficiency goals. If a utility company can implement more energy efficiency programs, in terms of the number of kWh/kW, at a relatively lower cost beyond its goal, it can sell its excess energy savings to other utility companies that would otherwise have to implement more expensive energy efficiency measures. The simulation aims to minimize the cost of meeting the energy reduction goal by selecting the least-cost programs, and it calculates the overall cost saving. There are two types of energy efficiency goals, based on kW and kWh.
Additional constraints are also implemented, such as Mandatory Low-Income Programs (MLIP), which ensure each utility implements measures for low-income households, and 30% Within Service Area (WSA), which ensures 30% of a utility's goal is achieved through its own programs. This research suggests that enabling energy efficiency certificate trading minimizes the total cost of achieving the Texas energy efficiency goal. In almost all simulated cases, all utilities benefit financially from joining the trading system because they spend less to achieve their own energy efficiency goals. In all cases, adding the MLIP and WSA constraints increases the total cost of achieving the goal. In terms of policy, stakeholder analysis suggests that policymakers consider each utility's different role: utilities within the Electric Reliability Council of Texas (ERCOT) electricity market have different business structures from those that are not. Furthermore, integrating energy efficiency certificates with Renewable Energy Credits would be more complicated still, considering that not all utilities have an obligation to achieve the Renewable Energy Credit target.

Item: Energy analysis of toplighting strategies for office buildings in Austin (2012-12)
Motamedi, Sara; Garrison, Michael; Novoselac, Atila; Whitsett, Dason
The purpose of this study is to determine the energy impacts of daylighting through toplights in a hot, humid climate. Daylight in the working environment improves the quality of the space and the productivity of employees. In addition, natural light is a free energy resource. On one hand, a proper daylighting design such as distributed toplights can reduce electrical lighting consumption. On the other hand, in a hot climate like Austin's, heat gain is a major concern. Therefore, this thesis is shaped around one question: Can toplighting strategies save energy in Austin despite the fact that buildings receive more direct heat gain through toplights?
Daylighting matters all the more because electrical lighting accounts for a significant portion (21%) of total building energy use. In this thesis I investigated the reduction in lighting electricity and compared it with the total effects of toplights on external conductance, lighting heat gain, and solar gain. The results show that, in terms of site energy, a proper toplighting strategy can reduce electrical lighting energy by up to 70% with a smaller impact on heating and cooling loads. This means that toplights can generally be energy-efficient alternatives for a one-story office building. Extending my research, I studied which toplights are more efficient: north sawtooth roofs, south sawtooth roofs, monitor roofs, or simple skylights. I compared different toplighting strategies and produced a design guide containing graphs of site energy, source energy, and annual cost saving per square foot, as well as the light distribution of each toplight. I believe this can accelerate the adoption of efficient toplighting strategies in the design process. Having concluded that efficient daylighting outweighs the added heat gain, I finalized my research with a comparison of skylights with different visible transmittance (VT) and solar heat gain coefficient (SHGC) values. The major result of this thesis is that proper toplighting strategies can save energy despite the increased solar gain. It is anticipated that these findings will promote the implementation of toplighting strategies and higher-VT glass types in the energy-efficient building industry.

Item: Energy efficient high bandwidth DRAM for throughput processors (2021-05-03)
O'Connor, James Michael; Swartzlander, Earl E., Jr., 1945-; Erez, Mattan; Fussell, Donald; John, Lizy K; Keckler, Stephen W; Reddi, Vijay J
Graphics Processing Units (GPUs) and other throughput processing architectures have scaled performance through simultaneous improvements in compute capability and aggregate memory bandwidth.
Satisfying the increasing bandwidth demands of future systems without a significant increase in the DRAM power budget is a key challenge going forward. A new DRAM architecture, Fine-Grained DRAM, significantly reduces energy consumption by partitioning the DRAM die into many small independent units, called grains, each of which has a local I/O link to the processor. With this architecture, on-DRAM data movement energy is greatly reduced because of the much shorter wiring distance between the cell array and the local I/O. At the same time, the energy on the link between the DRAM and the GPU remains low by leveraging novel energy-efficient encoding techniques well suited to the narrow buses. Furthermore, wasteful row-overfetch energy due to sparse accesses to large DRAM rows is significantly reduced by shrinking the effective DRAM row size in an area-efficient manner. This Fine-Grained DRAM architecture enables future reliable, multi-TB/sec memory systems within a power budget comparable to current GPU memory systems.

Item: Energy-efficient mechanisms for managing on-chip storage in throughput processors (2012-05)
Gebhart, Mark Alan; Keckler, Stephen W.; Burger, Douglas C.; Erez, Mattan; Fussell, Donald S.; Lin, Calvin; McKinley, Kathryn S.
Modern computer systems are power or energy limited. While the number of transistors per chip continues to increase, classic Dennard voltage scaling has come to an end. Therefore, architects must improve a design's energy efficiency to continue increasing performance at historical rates while staying within a system's power limit. Throughput processors, which use a large number of threads to tolerate memory latency, have emerged as an energy-efficient platform for achieving high performance on diverse workloads and are found in systems ranging from cell phones to supercomputers. This work focuses on graphics processing units (GPUs), which contain thousands of threads per chip.
In this dissertation, I redesign the on-chip storage system of a modern GPU to improve energy efficiency. Modern GPUs contain very large register files that consume 15-20% of the processor's dynamic energy. Most values written into the register file are read only a single time, often within a few instructions of being produced. To optimize for these patterns, we explore various designs for register file hierarchies. We study both a hardware-managed register file cache and a software-managed operand register file. We evaluate the energy tradeoffs of varying the number of levels and the capacity of each level in the hierarchy. Our most efficient design reduces register file energy by 54%. Beyond the register file, GPUs also contain on-chip scratchpad memories and caches. Traditional systems have a fixed partitioning between these three structures. Applications have diverse requirements, and often a single resource is most critical to performance. We propose to unify the register file, primary data cache, and scratchpad memory into a single structure that is dynamically partitioned on a per-kernel basis to match the application's needs. The techniques proposed in this dissertation improve the utilization of on-chip memory, a scarce resource for systems with a large number of hardware threads. Making more efficient use of on-chip memory both improves performance and reduces energy.
Future efficient systems will be achieved by combining several such techniques that improve energy efficiency.

Item: Evaluating an energy efficiency project for an existing commercial building (2011-12)
Krasner, William Paul; Nichols, Steven Parks, 1950-; Duvic, Robert Conrad, 1947-
In this thesis I provide general guidelines for a commercial building owner's decision-making process for heating, ventilation, and air-conditioning (HVAC) system energy efficiency projects, discuss an example HVAC project at an existing building, and recommend the most energy-efficient, cost-effective project option. First, a building's HVAC system inefficiencies are identified. The systems and their components can be investigated to understand the nature of the operations. In the building owner's interest, possible alternatives can be developed to improve the systems; consulting engineers, contractors, and other building professionals can assist in this process. There are necessary engineering and construction considerations for defining realistic project alternatives. Each alternative carries costs, benefits, and trade-offs. The costs, which mainly include the investment and operational costs, and the benefits, which mainly include the available financial incentives, are identified in dollars for each alternative. The alternatives can then be evaluated with Building Life Cycle Cost (BLCC) software. In this evaluation the net present-value (NPV) method is used to rank the alternatives, and the highest-ranking, lowest life-cycle-cost alternative is recommended to the owner. In the example, an existing commercial building's HVAC systems are considered. The construction plans, the facilities records, and the existing field conditions were investigated and analyzed, and a few operational inefficiencies were identified.
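The NPV ranking described in this abstract can be sketched in a few lines of Python. The cash flows, discount rate, and alternative names below are hypothetical placeholders for illustration, not figures from the thesis:

```python
def npv(rate, cash_flows):
    """Net present value: cash_flows[0] is the year-0 investment (negative);
    later entries are annual net savings versus the existing system."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical alternatives over a 30-year study period (illustrative only).
alternatives = {
    "premium-efficiency motors": [-20_000] + [2_500] * 30,
    "energy recovery wheel":     [-120_000] + [5_000] * 30,
}

rate = 0.03  # one point in a sweep over discount rates
ranked = sorted(alternatives,
                key=lambda a: npv(rate, alternatives[a]),
                reverse=True)
best = ranked[0]  # highest NPV, i.e., lowest life-cycle cost
```

Re-ranking at each discount rate in a sweep reproduces the kind of sensitivity check a BLCC-style evaluation performs.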
To address two of these existing inefficiencies, alternatives were considered to replace the standard-efficiency air handling unit motors with premium-efficiency motors and to renovate the ventilation system with an energy recovery wheel. The investment costs, available rebates, net annual energy savings, and energy and other operational costs were estimated over a 30-year study period for each alternative and compared to the costs of the existing system. The BLCC evaluations were performed across a range of discount rates in the present-value calculations. Based on the lowest present-value life-cycle cost reports, only the premium-efficiency motor replacement project is recommended.

Item: E³ : energy-efficient EDGE architectures (2010-08)
Govindan, Madhu Sarava; Keckler, Stephen W.; Burger, Douglas C.; McKinley, Kathryn S.; Chiou, Derek; Hunt, Jr., Warren A.; Brooks, David
Increasing power dissipation is one of the most serious challenges facing designers in the microprocessor industry. Power dissipation, increasing wire delays, and increasing design complexity have forced industry to embrace multi-core architectures, or chip multiprocessors (CMPs). While CMPs mitigate wire delays and design complexity, they do not directly address single-threaded performance. Additionally, programs must be parallelized, either manually or automatically, to fully exploit the performance of CMPs. Researchers have recently proposed an architecture called Explicit Data Graph Execution (EDGE) as an alternative to conventional CMPs. EDGE architectures are designed to be technology-scalable and to provide good single-threaded performance as well as exploit other types of parallelism, including data-level and thread-level parallelism. In this dissertation, we examine the energy efficiency of a specific EDGE instruction set, the TRIPS Instruction Set Architecture (ISA), and two microarchitectures, TRIPS and TFlex, that implement the TRIPS ISA.
The TRIPS microarchitecture is a first-generation design that proves the feasibility of the TRIPS ISA and of distributed, tiled microarchitectures. The second-generation TFlex microarchitecture addresses key inefficiencies of the TRIPS microarchitecture by matching the resource needs of applications to a composable hardware substrate. First, we perform a thorough power analysis of the TRIPS microarchitecture. We describe how we develop architectural power models for TRIPS, then improve power-modeling accuracy using hardware power measurements on the TRIPS prototype combined with detailed Register Transfer Level (RTL) power models from the TRIPS design. Using these refined architectural power models and normalized power-modeling methodologies, we perform a detailed performance and power comparison of the TRIPS microarchitecture with two different processors: (1) a low-end processor designed for power efficiency (ARM/XScale) and (2) a high-end superscalar processor designed for high performance (a variant of Power4). This detailed power analysis provides key insights into the advantages and disadvantages of the TRIPS ISA and microarchitecture compared to processors on either end of the performance-power spectrum. Our results indicate that the TRIPS microarchitecture achieves 11.7 times better energy efficiency than ARM, and approximately 12% better energy efficiency than Power4, in terms of the Energy-Delay-Squared (ED²) metric. Second, we evaluate the energy efficiency of the TFlex microarchitecture in comparison to TRIPS, ARM, and Power4. TFlex belongs to a class of microarchitectures called Composable Lightweight Processors (CLPs). CLPs are distributed microarchitectures designed with simple cores that are highly configurable at runtime to adapt to the resource needs of applications. We develop power models for the TFlex microarchitecture based on the validated TRIPS power models.
Our quantitative results indicate that by better matching execution resources to the needs of applications, the composable TFlex system can operate in both regimes: low power (similar to ARM) and high performance (similar to Power4). We also show that the composability feature of TFlex achieves a significant improvement (2 times) in the ED² metric compared to TRIPS. Third, using TFlex as our experimental platform, we examine the efficacy of processor composability as a potential performance-power trade-off mechanism. Most modern processors support a form of dynamic voltage and frequency scaling (DVFS) as a performance-power trade-off mechanism. Since the rate of voltage scaling has slowed significantly in recent process technologies, processor designers are in dire need of alternatives to DVFS. In this dissertation, we explore processor composability as an architectural alternative to DVFS. Through experimental results we show that processor composability achieves almost as good a performance-power trade-off as pure frequency scaling (no changes in supply voltage), and a much better performance-power trade-off than combined voltage and frequency scaling (both supply voltage and frequency change). Next, we explore the effects of additional performance-improving techniques for the TFlex system on its energy efficiency. Researchers have proposed a variety of techniques for improving the performance of the TFlex system, including (1) block mapping techniques that trade off intra-block concurrency against communication across the operand network, (2) predicate prediction, and (3) an operand multicast/broadcast mechanism. We examine each of these mechanisms in terms of its effect on the energy efficiency of TFlex, and our experimental results demonstrate the effects of operand communication and speculation on the energy efficiency of TFlex.
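For reference, the Energy-Delay-Squared metric used throughout these comparisons is straightforward to compute; the energy and delay values below are hypothetical, not measurements from the dissertation:

```python
def ed2(energy_joules, delay_seconds):
    """Energy-Delay-Squared: lower is better. Delay is weighted
    quadratically, so ED^2 favors performance more strongly than
    the plain energy-delay product does."""
    return energy_joules * delay_seconds ** 2

# Two hypothetical design points running the same benchmark.
baseline = ed2(energy_joules=50.0, delay_seconds=2.0)
candidate = ed2(energy_joules=80.0, delay_seconds=1.0)

# "N times better" in ED^2 terms is the ratio of the two values; here
# the candidate wins despite using more energy, because it halves delay.
improvement = baseline / candidate
```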
Finally, this dissertation evaluates a set of fine-grained power management (FGPM) policies for TFlex: instruction criticality and controlled speculation. These policies rely on a temporally and spatially fine-grained dynamic voltage and frequency scaling (DVFS) mechanism for improving power efficiency. The instruction criticality policy seeks to improve power efficiency by mapping critical computation in a program to higher performance-power levels and non-critical computation to lower performance-power levels. The controlled speculation policy, on the other hand, maps blocks that are highly likely to be on the correct execution path to higher performance levels and the remaining blocks to lower performance levels. Our experimental results indicate that idealized instruction criticality and controlled speculation policies improve the operating range and flexibility of the TFlex system. However, when the actual overheads of fine-grained DVFS are considered, especially the energy conversion losses of voltage regulator modules (VRMs), the power efficiency advantages of these idealized policies quickly diminish. Our results also indicate that the conversion efficiencies of current on-chip VRMs need to improve to as high as 95% for the realistic policies to be feasible.

Item: Fate of the Houston skyline : strategies adopted for rehabilitating mid-century modern high-rises (2014)
Srinivasan, Urmila; Holleran, Michael
A recent report by Terrapin Bright Green, “Mid-century (Un) Modern,” discusses the desperate condition of mid-century modern high-rises in Manhattan. The article argues that it would be beneficial, both economically and environmentally, to demolish these buildings and build new ones with an assumed increase in FAR. To re-build, repair, or re-skin are the questions mid-century modern high-rises (MMH) face today. This study focuses on Houston, Texas, which is very different from New York City both climatically and from a planning standpoint.
Houston is dreaded for its hot and humid climate and notorious for its consistent refusal to adopt any zoning. Its high-rises represent the economic success of the city immediately after WWII; they were constructed as the city transformed from the Bayou City to the Space City. In this study I have mapped the status of these high-rises and the strategies that were used to renovate them. The question I further wish to address is how preservation and energy efficiency are addressed while renovating these buildings. Even preservationists might agree that not all buildings are equal and that a new look would benefit some. The real challenge lies in resolving the grey areas, where one is not talking about a Seagram or a Lever House but about a well-designed, environmentally sensitive building.

Item: Low temperature heat and water recovery from supercritical coal plant flue gas (2015-08)
Reimers, Andrew Samuel; Webber, Michael E., 1971-; Buckingham, Fred P
For this work, I constructed an original thermodynamic model to estimate waste heat and water recovery from the flue gas of a supercritical coal plant burning lignite, subbituminous, or bituminous coal. This model was written in MATLAB as a list of linear equations based on first and second law analyses of the power plant components. This research is relevant because coal accounted for the largest increase in primary energy consumption worldwide as recently as 2013, and coal-fired electricity generation is particularly water intensive. As populations increase, especially in the developing world, much of the increased demand for electricity will be met by new coal-fired power plants. One way to improve the efficiency of a coal-fired power plant is to recover the low-temperature waste heat from the flue gas and use it to preheat combustion air or boiler feedwater. A low-temperature economizer or flue gas cooler can be used for this purpose to achieve overall efficiency improvements as high as 0.4%.
However, a side effect of the efficiency improvements is an increase in the water consumption factor of nearly 10%. The water consumption factor can be reduced with the addition of a flue gas dryer after the flue gas cooler. The flue gas dryer is a condensing heat exchanger between the flue gas and ambient air. As the flue gas cools, its water content condenses and can be recovered and treated for use within the plant. In general, the results indicate that low-temperature waste heat and water recovery from boiler flue gas would be more feasible and beneficial for coal plants burning lignite than for those burning higher-quality coal. Because these plants already have a lower efficiency, the relative increase in efficiency is somewhat higher. Similarly, the relative increase in the water consumption factor is somewhat lower for a lignite plant. The high moisture content and dew point of the flue gas produced by lignite combustion make it easier to recover water with a flue gas dryer. The higher water recovery factor along with the lower water consumption factor means that a greater percentage of the water evaporated in the cooling tower can be recovered in the flue gas dryer of a lignite plant than in a plant burning higher-quality coal.

Item: Modeling and optimization for energy efficient large scale cooling operation (2013-12)
Kapoor, Kriti; Edgar, Thomas F.
Optimal chiller loading (OCL) is described as a means to improve the energy efficiency of chiller plant operation. It is formulated as a multi-period, constrained, mixed-integer nonlinear optimization problem that distributes the total cooling load using accurate chiller models. OCL is solved as a set of quadratic programs using a sequential quadratic programming (SQP) algorithm in MATLAB. Based on application of the methodology to chiller systems at UT Austin and at a semiconductor manufacturing facility, OCL can yield annual energy savings of about 8%.
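The core load-distribution step of optimal chiller loading can be sketched in closed form under simplifying assumptions: quadratic fitted power curves, no capacity bounds, and no on/off decisions. The coefficients below are purely illustrative; the full OCL formulation described above is a multi-period mixed-integer program solved with SQP:

```python
def optimal_chiller_loading(coeffs, demand):
    """Split a total cooling load across chillers whose fitted power
    curves are P_i(x) = a_i + b_i*x + c_i*x**2 with c_i > 0.

    At an interior optimum every chiller runs at the same marginal power
    b_i + 2*c_i*x_i = lam; combining this with sum(x_i) = demand gives
    a closed-form solution for the loads x_i.
    """
    lam = (demand + sum(b / (2 * c) for _, b, c in coeffs)) \
        / sum(1 / (2 * c) for _, b, c in coeffs)
    return [(lam - b) / (2 * c) for _, b, c in coeffs]

# Hypothetical fitted coefficients (a, b, c) for three chillers.
coeffs = [(12.0, 0.55, 0.0009), (15.0, 0.48, 0.0012), (10.0, 0.60, 0.0007)]
loads = optimal_chiller_loading(coeffs, demand=900.0)
```

Checking that every chiller ends up at the same marginal power is a quick sanity test of any candidate OCL solution.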
However, the savings may be considerably reduced when additional physical constraints are imposed on overall plant operation. With the addition of thermal energy storage (TES) to the system, OCL can reduce the daily cooling costs in the case of time varying electricity prices by 13.45% on average. The energy efficiency of a chiller plant as a function of its chiller arrangement is studied by using fitted chiller models. If all other variables are kept the same, chillers operating in parallel consume up to 9.62% less power than when they are operated in series. Otherwise, chillers may operate up to 12.26% more efficiently in series depending on their chilled water outlet temperature values. The question of optimal chiller arrangement can have a straightforward answer in some cases or pose a complex optimization problem in others.Item Opportunities for urban water systems to deliver demand-side benefits to the electric grid(2018-06-12) Vitter, Jeffrey Scott, Jr.; Webber, Michael E., 1971-; Leibowicz, Benjamin D.; Rai, Varun; Nagy, ZoltanThe U.S. electricity grid's ongoing transformation to integrate renewable or distributed generation, address aging infrastructure, and improve grid resilience and reliability all motivate increasing the base of available demand-side resources that offer services to the grid. Water systems have several characteristics relevant to increasing the amount of demand-side services provided to the electric grid, including unique physical and chemical properties, location within urban areas, inextricable linkages between energy and water use, and untapped potential in the space. This research addresses opportunities to provide two types of demand-side service from within the water sector: load management and energy efficiency. Improved pump scheduling at municipal pump stations was explored in a case study to quantify the influence of electric rate design on the amount of load management that water utilities can affordably provide.
The analysis found significant potential for electric and water utilities to cooperate on rate design and load scheduling, and that rate structure is a key enabler of mutually beneficial arrangements. Environmental and economic impacts of community-scale water recycling were addressed through the formulation of an optimal capacity and dispatch model. The model was demonstrated in a case study, which found that the community-scale system can be economically feasible in certain areas and might significantly decrease reliance on central water utilities, but that relying on grid electricity will significantly increase demand and associated emissions. The results motivate exploration of community-scale systems within microgrids with increased availability of renewable energy. In the residential sector, very high-sampling-rate data are used to develop machine learning classifiers that categorize end-use water events by appliance type. Classifier performance is shown to improve with the addition of coincident electricity data and dedicated sub-meter data. Results from this work have the potential to improve customer awareness of water use and facilitate adoption of efficient appliances or conservation behaviors. This work is extended via a spatio-economic analysis of cost effectiveness for residential water-related appliance retrofits. The analysis unites novel data sets to create an interactive online tool that allows users to evaluate energy savings and avoided emissions based on heterogeneous usage, behavioral parameters, and geographic factors.
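A minimal sketch of how coincident electricity data can disambiguate water end-use events (all features, events, and labels invented for illustration; the dissertation's classifiers and feature sets are more elaborate):

```python
import numpy as np

# Toy labeled water events (invented):
# columns = [peak flow gpm, duration s, coincident appliance power kW]
events = np.array([
    [2.0,  60, 0.0],   # faucet
    [2.2,  45, 0.0],   # faucet
    [2.1, 600, 1.2],   # dishwasher (draws electricity while running)
    [1.9, 550, 1.1],   # dishwasher
    [5.0, 900, 0.5],   # clothes washer
    [4.8, 850, 0.5],   # clothes washer
])
labels = np.array(["faucet", "faucet", "dishwasher", "dishwasher",
                   "washer", "washer"])

def classify(event, use_power=True):
    """1-nearest-neighbour on z-scored features; dropping the power
    column mimics a water-only meter."""
    cols = slice(None) if use_power else slice(0, 2)
    X = events[:, cols].astype(float)
    mu, sd = X.mean(axis=0), X.std(axis=0)
    Xz = (X - mu) / sd
    ez = (np.asarray(event, float)[cols] - mu) / sd
    return labels[np.argmin(((Xz - ez) ** 2).sum(axis=1))]

# A short draw whose flow trace looks like a faucet, but which coincides
# with a dishwasher-like electrical load.
mystery = [2.1, 50, 1.2]
print(classify(mystery, use_power=True))   # -> dishwasher
print(classify(mystery, use_power=False))  # -> faucet
```

The water-only features are ambiguous for the mystery event; the coincident power feature resolves it, mirroring the reported accuracy gain from adding electricity data.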
Together, this body of research identifies promising opportunities for new technology, operational strategies, and policies within the water sector to support the ongoing transformation toward a cleaner, more responsive, and more resilient electric grid.Item Performance and energy efficiency via an adaptive MorphCore architecture(2014-05) Khubaib; Patt, Yale N.The level of Thread-Level Parallelism (TLP), Instruction-Level Parallelism (ILP), and Memory-Level Parallelism (MLP) varies across programs and across program phases. Hence, every program requires different underlying core microarchitecture resources for high performance and/or energy efficiency. Current core microarchitectures are inefficient because they are fixed at design time and do not adapt to variable TLP, ILP, or MLP. I show that if a core microarchitecture can adapt to the variation in TLP, ILP, and MLP, significantly higher performance and/or energy efficiency can be achieved. I propose MorphCore, a low-overhead adaptive microarchitecture built from a traditional OOO core with small changes. MorphCore adapts to TLP by operating in two modes: (a) as a wide-width large-OOO-window core when TLP is low and ILP is high, and (b) as a high-performance low-energy highly-threaded in-order SMT core when TLP is high. MorphCore adapts to ILP and MLP by varying the superscalar width and the out-of-order (OOO) window size by operating in four modes: (1) as a wide-width large-OOO-window core, (2) as a wide-width medium-OOO-window core, (3) as a medium-width large-OOO-window core, and (4) as a medium-width medium-OOO-window core. My evaluation with single-thread and multi-thread benchmarks shows that when highest single-thread performance is desired, MorphCore achieves performance similar to a traditional out-of-order core. When energy efficiency is desired on single-thread programs, MorphCore reduces energy by up to 15% (on average 8%) over an out-of-order core.
When high multi-thread performance is desired, MorphCore increases performance by 21% and reduces energy consumption by 20% over an out-of-order core. Thus, for multi-thread programs, MorphCore's energy efficiency is similar to highly-threaded throughput-optimized small and medium core architectures, and its performance is two-thirds of their potential.Item Power-aware processor system design(2020-05) Kalyanam, Vijay Kiran; Abraham, Jacob A.; Orshansky, Michael; Pan, David; Touba, Nur; Tupuri, RaghuramWith everyday advances in technology and low-cost economics, processor systems are moving towards split grid shared power delivery networks (PDNs) while providing increased functionality and higher performance capabilities, resulting in increased power consumption. Split grid refers to dividing up the power grid resources among various homogeneous and heterogeneous functional modules and processors. When the PDN is shared and common across multiple processors and function blocks, it is called a shared PDN. To keep power under control on a split-grid shared PDN, the processor system is required to operate correctly as various hardware modules interact with each other and the supply voltage (V [subscript DD]) and clock frequency (F [subscript CLK]) are scaled. Software or hardware assisted power-collapse and low-power retention modes can be automatically engaged in the processor system. The processor system should also operate at maximum performance under power constraints while consuming the full thermal design power (TDP). The processor system must violate neither board and card current limits nor the power management integrated circuit (PMIC) limits and its slew rate requirements for current draw on the shared PDN. It is expected to operate within thermal limits, below a specified operating temperature. The processor system is also required to detect and mitigate current violations within microseconds and temperature violations in milliseconds.
The processor system is expected to be robust and should be able to tolerate voltage droops. This requirement is especially important when the processor system is on a shared PDN. Because the PDN is shared, the voltage droop mitigation scheme is expected to be quick and must suppress V [subscript DD] droop propagation at the source while introducing only negligible performance penalties during mitigation. Without a solution for V [subscript DD] droop in place, the entire shared PDN is forced to a higher V [subscript DD], increasing overall system power. This can potentially affect the days of use (DoU) of battery-operated systems, and the reliability and cooling of wired systems. A multi-threaded processor system is expected to monitor current, power and voltage violations and react quickly without affecting the performance of its hardware threads while maintaining quality of service (QoS). Early high-level power estimates are a necessity to project how much power will be consumed by a future processor system. These power projections are used to plan for software use cases and to reassign power-domains of processors and function blocks belonging to the shared PDN. Additionally, they help in re-designing boards and power-cards, re-implementing the PDN, changing the PMIC, and planning for additional power, current, voltage and temperature violation related mitigation schemes if the existing solutions are insufficient. The split grid shared PDN that is implemented in a system-on-chip (SoC) is driven by low-cost electronics and forces multiple voltage rails for better energy efficiency. To support this, voltage levels and power-states need to be incorporated into the processor behavioral register transfer level (RTL) model. Low power verification is a must in a split-grid PDN. To facilitate this, the RTL is annotated with voltage supplies and isolation circuits that engage and protect during power collapse scenarios across various voltage domains.
The power-aware RTL design is verified, and low power circuit and RTL bugs are identified and corrected prior to tape-out. The mandatory features to limit current, power, voltage and temperature in these high performance and power hungry processor systems introduce a need for high-level power projections that account for the various split-grid PDNs supplying V [subscript DD] to the processor, the interface bus, various function blocks, and co-processors. To solve this problem, a power prediction solution is provided that has an average-power error of 8% and tracks instantaneous power with reasonable accuracy for unknown software application traces. Computing power with the generated prediction model is 100,000X faster and uses 100X less compute memory compared to a commercial electronic design automation (EDA) RTL power tool. This solution is also applied to generate a digital power meter (DPM) in hardware for real-time power estimates while the processor is operational. These high-level power estimates project the potential peak currents in these processor systems. This resulted in a need for new tests to be created and validated on silicon in order to functionally stress the split-grid shared PDN for extreme voltage droop and sustained high current usage scenarios. For this reason, functional test sequences are created for high power and voltage stress testing of multi-threaded processors. The PDN is a complex system and needs different functional test sequences to generate various kinds of high and low power instruction packets that can stress it. These voltage droop stress tests affect V [subscript MIN] margins in various voltage and frequency modes of operation in a commercial multi-threaded processor. These results underscore a need for voltage mitigation solutions.
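The kind of counter-based power model described above can be sketched as a least-squares fit; the activity counters, per-event energy weights, and "measured" power below are all synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-interval activity counts from RTL simulation:
# columns = [instructions issued, cache accesses, bus transactions]
activity = rng.integers(0, 1000, size=(200, 3)).astype(float)

# "Measured" power from a detailed (slow) reference flow; here it is
# synthesized from invented per-event energy weights plus leakage.
true_w = np.array([0.8, 1.5, 2.3])   # mW per event (assumed)
leakage = 40.0                        # mW
power = activity @ true_w + leakage + rng.normal(0.0, 1.0, 200)

# Fit the linear power model: power ~ w . activity + constant
A = np.hstack([activity, np.ones((200, 1))])
w_fit, *_ = np.linalg.lstsq(A, power, rcond=None)

# Predict an unseen trace; evaluating one dot product per interval is
# what makes such a model orders of magnitude cheaper than the
# reference RTL power tool.
trace = np.array([500.0, 300.0, 120.0])
pred = trace @ w_fit[:3] + w_fit[3]
print(f"predicted power: {pred:.1f} mW")
```

The same fitted weights, quantized into hardware, are conceptually what a digital power meter evaluates in real time.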
The processor system operating on a split grid shared PDN can have its V [subscript MIN] increased due to voltage stress tests or a power-virus software application. The shared PDN imposes requirements to mitigate the voltage noise at the source and avoid any possibility of increases to the shared PDN V [subscript DD]. This necessitates implementing a proactive system that can mitigate voltage droop before it occurs while lowering the processor’s minimum voltage of operation (V [subscript MIN]) to help in system power reduction. To mitigate the voltage droops, a proactive clock gating system (PCGS) is implemented with a voltage clock gate (VCG) circuit that uses a digital power meter (DPM) and a model of a PDN to predict the voltage droop before its occurrence. Silicon results show PCGS achieves 10% higher clock frequency (F [subscript CLK]) and 5% lower supply voltage (V [subscript DD]) in a 7nm processor. Questions arise about the effectiveness of PCGS over a reactive voltage droop mitigation scheme in the context of a shared PDN. This motivates an analysis of PCGS and its comparison against a reactive voltage droop mitigation scheme. This work shows the importance of voltage droop mitigation reaction time for a split grid shared PDN and highlights the benefits of PCGS in its ability to provide a better V [subscript MIN] for the entire split grid shared PDN. The silicon results from power-stress tests show the possibility of the high-power processor system exceeding board or power-supply card current capacity and violating thermal limits. This requires designing a limiting system that can adapt processor performance. This limiting system is expected to meet a stringent system latency of 1 µs for sustained peak-current violations and to react on the order of milliseconds for thermal mitigation. This system is also expected to maintain the desired quality of service (QoS) of the multi-threaded processor.
This results in the implementation of a current and temperature limiting response circuit in a 7nm commercial processor. The randomized pulse modulation (RPM) circuit adapts processor performance and reduces current violations in the system within 1 µs and maintains thread fairness with a 0.4% performance resolution across a wide range of operation from 100% to 0.4%. Hard requirements from SoC software and hardware constrain the processor system to stay within the TDP and the power budgets of the processors sharing the split grid PDN. New threads (processors) with added functionality can consume much higher power than previous-generation processors. The threads (processors) operate cohesively in a multi-threaded processor system, and though there is a large difference in the magnitude of power profiles across threads (processors), the overall performance of the multi-threaded processor is not expected to be compromised. This creates a need for a power limiting system that can specifically slow down high-power threads (processors) to meet power budgets without affecting the performance of low-power threads. For this reason, a thread-specific multi-thread power limiting (MTPL) mechanism is designed that monitors the processor power consumption using the per-thread DPM (PTDPM). Implemented in 7nm for a commercial processor, silicon results demonstrate that the thread-specific MTPL does not affect the performance of low power threads during power limiting until the current (power) is limited to very low values. For high power threads and during higher current (power) limiting scenarios, the thread-specific MTPL shows similar performance to a conventional global limiting mechanism. Thus, the thread-specific MTPL enables the multi-threaded processor system to operate at a higher overall performance compared to a conventional global mechanism across most of the power budget range.
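A toy model (all numbers invented) of why thread-specific limiting can outperform a global duty-cycle limit at the same power budget:

```python
# Each thread has a nominal power draw; throttling a thread scales its
# power and its throughput by the same duty factor (a simplification).
powers = [8.0, 1.0, 1.0, 1.0]   # W: one power-hungry thread, three light ones
budget = 8.0                     # W total

def global_limit(powers, budget):
    """Scale every thread by one common duty factor."""
    duty = min(1.0, budget / sum(powers))
    return [duty] * len(powers)

def per_thread_limit(powers, budget):
    """Throttle only the highest-power threads until the budget is met."""
    duty = [1.0] * len(powers)
    for i in sorted(range(len(powers)), key=lambda i: -powers[i]):
        used_by_others = sum(p * d for j, (p, d) in
                             enumerate(zip(powers, duty)) if j != i)
        duty[i] = max(0.0, min(1.0, (budget - used_by_others) / powers[i]))
        if sum(p * d for p, d in zip(powers, duty)) <= budget:
            break
    return duty

for scheme in (global_limit, per_thread_limit):
    duty = scheme(powers, budget)
    perf = sum(duty)  # crude throughput proxy: sum of duty factors
    watts = sum(p * d for p, d in zip(powers, duty))
    print(f"{scheme.__name__}: perf={perf:.2f}, power={watts:.2f} W")
```

Both schemes meet the budget, but the per-thread scheme leaves the light threads untouched, so aggregate throughput is higher, which is the qualitative behavior the silicon results report for MTPL.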
For the same power budget, the processor performance can be up to 25% higher using the thread-specific MTPL compared to using a global power limiting scheme. In summary, this dissertation presents design-for-power concepts for a processor system on a split-grid shared PDN through various solutions that address challenges in high-power processors and help alleviate potential problems. These solutions range from embedding power intent, to incorporating voltage droop prediction intelligence through power usage estimation, to maintaining quality of service within a stringent system latency, to slowing down specific high-power threads of a multi-threaded processor. All these methods can work cohesively to incorporate power-awareness in processor systems, making the processors energy efficient and enabling them to operate reliably within the TDP.Item Principled control of approximate programs(2015-12) Sui, Xin, Ph.D.; Pingali, Keshav; Chiou, Derek; Dhillon, Inderjit; Fussell, Donald S.; Ramachandran, VijayaIn conventional computing, most programs are treated as implementations of mathematical functions for which there is an exact output that must be computed from a given input. However, in many problem domains, it is sufficient to produce some approximation of this output. For example, when rendering a scene in graphics, it is acceptable to take computational shortcuts if human beings cannot tell the difference in the rendered scene. In other problem domains like machine learning, programs are often implementations of heuristic approaches to solving problems and therefore already compute approximate solutions to the original problem. This is the key insight for the new research area, approximate computing, which attempts to trade off such approximations against the cost of computational resources such as program execution time, energy consumption, and memory usage.
We believe that approximate computing is an important step towards a more fundamental and comprehensive goal that we call information-efficiency. Current applications compute more information (bits) than is needed to produce their outputs, and since producing and transporting bits of information inside a computer requires energy, computation time, and memory, information-inefficient computing leads directly to resource inefficiency. Although there is now a fairly large literature on approximate computing, system researchers have focused mostly on what we can call the forward problem; that is, they have explored different ways in both hardware and software to introduce approximations in a program and have demonstrated that these approximations can enable significant execution speedups and energy savings with some quality degradation of the result. However, these efforts do not provide any guarantee on the amount of quality degradation. Since the acceptable amount of degradation usually depends on the scenario in which the application is deployed, it is very important to be able to control the degree of approximation. In this dissertation, we refer to this problem as the inverse problem. Relatively little is known about how to solve the inverse problem in a disciplined way. This dissertation makes two contributions towards solving the inverse problem. First, we investigate a large set of approximate algorithms from a variety of domains in order to understand how approximation is used in real-world applications. From this investigation, we determine that many approximate programs are tunable approximate programs. Tunable approximate programs have one or more parameters called knobs that can be changed to vary the quality of the output of the approximate computation as well as the corresponding cost.
For example, an iterative linear equation solver can vary the number of iterations to trade quality of the solution against execution time, a Monte Carlo path tracer can change the number of sampled light paths to trade the quality of the resulting image against execution time, etc. Tunable approximate programs provide many opportunities for trading accuracy against cost. By carefully analyzing these algorithms, we have found a set of patterns for how approximation is applied in tunable programs. Our classification can be used to identify new approximation opportunities in programs. A second contribution of this dissertation is an approach to solving the inverse problem for tunable approximate programs. Concretely, the problem is to determine knob settings that minimize the cost while keeping the quality degradation within a given bound. There are four challenges: i) for real-world applications, the quality and cost are usually complex non-linear functions of the knobs, and these functions are usually hard to express analytically; ii) the quality and the cost for an application vary greatly for different inputs; iii) when an acceptable quality degradation bound is presented, determining the knob setting has to be very efficient so that the extra overhead incurred by the identification does not exceed the cost saved by the approximation; and iv) the approach should be general so that it can be applied to many applications. To meet these requirements, we formulate the inverse problem as a constrained optimization problem and solve it using a machine learning based approach. We build a system that uses machine learning techniques to learn cost and quality models for the program by profiling the program with a set of representative inputs.
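A toy end-to-end version of this profile-then-model workflow, using a truncated series as the tunable program and a one-parameter error model in place of the learned models:

```python
import math

# Tunable approximate program: truncated Leibniz series for pi.
# The knob is the number of terms: more terms = better quality, higher cost.
def approx_pi(n_terms):
    return 4.0 * sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

# 1) Profile: measure the quality degradation (absolute error) at a few
#    representative knob settings.
knobs = [10, 30, 100, 300, 1000]
errors = [abs(math.pi - approx_pi(n)) for n in knobs]

# 2) Fit a one-parameter error model err(n) ~ c / n, a stand-in for the
#    learned quality/cost models, by averaging error * n.
c = sum(e * n for e, n in zip(errors, knobs)) / len(knobs)

# 3) Inverse problem: given an error bound, pick the cheapest knob the
#    model predicts will meet it. The 20% margin mirrors a statistical
#    rather than hard guarantee.
bound = 1e-4
n_star = math.ceil(1.2 * c / bound)
achieved = abs(math.pi - approx_pi(n_star))
print(f"knob = {n_star} terms, achieved error = {achieved:.2e}")
```

The real system replaces the hand-picked c/n model with learned cost and quality models and searches them across multiple knobs, but the shape of the workflow is the same.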
Then, when a quality degradation bound is presented, the system searches these error and cost models to identify the knob settings that achieve the best cost savings while statistically guaranteeing the quality degradation bound. We evaluate the system with a set of real world applications, including a social network graph partitioner, an image search engine, a 2-D graph layout engine, a 3-D game physics engine, an SVM solver and a radar signal processing engine. The experiments showed large execution time and energy savings for a variety of quality bounds.Item Qualitative and quantitative optimization of skylights : a comprehensive and inclusive analysis of skylight sizes for an office while providing enough daylight, avoiding glare and saving energy(2017-08) Motamedi, Sara; Liedl, Petra, 1976-; Novoselac, Atila; Garrison, Michael; Moore, Steven; Gomes, Francisco; Passe, UlrikeWhile windows connect inside to outside, daylight entering through windows is a key element in architectural design. Although electrical lighting is able to replace daylight as an essential lighting requirement, daylight has qualitative and quantitative aspects that distinguish it from its competitor, electrical lighting. One of the most unique characteristics of daylight is its variability in time, including different qualities of daylighting from sunrise to sunset, and from equinox to solstice. In addition, by regulating circadian rhythms and hormone secretion, daylight impacts the physiological and psychological well-being of human beings. Moreover, daylight through windows carries information that flows from outside to inside and makes occupants aware of the outside world. While the availability of daylight has been praised in building design, uneven distribution of daylight, reflective surfaces and excessive daylight may cause glare issues and visual discomfort, which need to be avoided in daylight design.
Beyond these qualitative aspects, daylight, as a free resource, can illuminate a space, replace electrical lighting, and lower electricity utility bills. This quantitative aspect of daylight has been the center of attention among researchers, designers and builders, as lowering CO₂ emissions and environmental design have gained momentum in the building industry. Different stakeholders have various interests in the qualitative and quantitative aspects of daylight, which eventually shape the design context. The interests of different stakeholders, including owners, environmentalists and occupants, may merge or conflict in different projects, which shows that daylight quality and quantity may have different weights, depending on the context of the project at hand. This dissertation aims to provide an algorithmic platform that considers the context of skylight design by including the interests of different stakeholders while either scaling the importance of those interests or requiring minimum qualities and performance targets. This dissertation proposes different methodological approaches for its platform to include both qualitative and quantitative aspects in designing skylights for a one-storey office building in different climates. Three different approaches are proposed in this dissertation, encompassing unconstrained optimization, constrained optimization and monetary metrics. In the unconstrained optimization approach, the algorithmic platform has been developed to implement Parametric Analysis (PA) and Gradient Descent (GD) methods in order to optimize the Skylight to Floor area Ratio (SFR) while reducing energy consumption, as a quantitative aspect of daylight, and improving daylighting quality by providing sufficient daylight without causing glare discomfort. This platform was built as an Inclusive Integrative Algorithm (IIA) to weight different qualitative and quantitative aspects of daylight.
The algorithm is able to perform single or multi-objective optimization by applying either GD or PA. In this approach, a single-objective optimization considering only energy efficiency showed that the optimal SFR was 6% in the examined climates of Austin, Chicago and San Francisco, for a 300 lux lighting level and a Lighting Power Density of 0.8 watt/sqft. The unconstrained optimization approach implemented a weighting system for an aggregated metric, including Mean Daylight (MD), imperceptible Daylight Glare Probability (iDGP) and Ratio of Energy Saving (RES), which resulted in an SFR of 11% as the inclusive optimal solution for all the examined climates. In addition to the discussion of inclusive optimization considering both daylight and energy performance and scaling their importance, this dissertation initiated the use of GD for unconstrained optimization in single and multi-objective settings. The results showed that GD is considerably faster than the traditional method, PA, while predicting the optimal solution with higher resolution. For example, GD found an energy-efficient optimal solution of 6.22% SFR for the San Francisco climate in only 9 iterations, whereas PA required 10,000 iterations to find the optimal solution with the same resolution. Thus, GD has shown promising results for the future of multi-objective optimization in building design. In addition to the unconstrained optimization, this dissertation applied the second approach, constrained optimization, by imposing different thresholds for two sets of metrics covering daylight availability and glare. Where Useful Daylight Illuminance (UDI) and spatial Daylight Autonomy (sDA) of 100% were used, the inclusive optimal SFRs were 9-10%, 8-10% and 9% for the climates of San Francisco, Austin and Chicago, respectively.
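The evaluation-count gap between GD and PA can be illustrated on a toy one-dimensional energy curve (the quadratic shape and its 6% minimum are assumed stand-ins; in the dissertation each evaluation is a full EnergyPlus simulation):

```python
# Toy stand-in for the simulation-based energy model E(SFR): a smooth
# curve with a minimum near a 6% skylight-to-floor ratio (invented shape).
def energy(sfr):
    return 100.0 + 0.8 * (sfr - 6.0) ** 2

def gradient_descent(f, x0, lr=0.5, tol=1e-3, h=1e-4):
    """Minimize f with finite-difference gradients, counting evaluations."""
    x, evals = x0, 0
    while True:
        g = (f(x + h) - f(x - h)) / (2 * h)
        evals += 2
        if abs(g) < tol:
            return x, evals
        x -= lr * g

def parametric_sweep(f, lo, hi, step):
    """Exhaustive grid search at a fixed resolution, counting evaluations."""
    grid = [lo + i * step for i in range(int((hi - lo) / step) + 1)]
    best = min(grid, key=f)
    return best, len(grid)

x_gd, n_gd = gradient_descent(energy, x0=12.0)
x_pa, n_pa = parametric_sweep(energy, 1.0, 20.0, 0.001)
print(f"GD: SFR={x_gd:.3f}% in {n_gd} evaluations")
print(f"PA: SFR={x_pa:.3f}% in {n_pa} evaluations")
```

On a smooth objective, GD reaches the same resolution with a few dozen evaluations where an equally fine sweep needs thousands, which is the scale of the 9-versus-10,000-iteration comparison reported above.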
For the other set of daylight metrics, an MD of 50% and a Mean Daylight Glare Probability (mDGP) of 35% were used, which resulted in optimal solutions of 7-14%, 7-11% and 8-13% SFR for San Francisco, Austin and Chicago, respectively. Therefore, multi-objective optimization considering both daylight and energy performance resulted in inclusive optimal solutions different from those of energy optimization alone. The study also concludes that optimal solutions depend on the applied metrics and daylight thresholds. For the third approach, this research investigated the monetary gains from energy efficiency and increased productivity. Assuming that productivity gains do not occur in spaces with poor daylight performance, inclusive optimal solutions will be the scenarios that most probably boost productivity. The study indicated that the energy cost saving is always negligible compared to the monetary gains from even a minimum productivity increase (1%). This conclusion may influence an owner’s perspective toward the quality of daylight performance and its resultant productivity increase. Although the proposed algorithm (IIA) has been used to perform multi-objective optimization for skylight design, this platform can be used in the design process to optimize any fenestration, including windows, based on daylight availability, glare and energy factors. GD, as one of the contributions of this dissertation, is a faster and more accurate method which can facilitate the application of multi-objective optimization for daylight analysis in the early stages of design.Item Quantifying the economic and environmental tradeoffs of electricity mixes in Texas, including energy efficiency potential using the Rosenfeld effect as a basis for evaluation(2010-12) Lott, Melissa Christenberry; Webber, Michael E., 1971-; Schmidt, PhilipElectricity is a complex and interesting topic for research and investigation.
At a systems level, electricity involves many steps, from generation (power plants) to transmission and distribution to delivery and final use. Within each of these steps is a set of tradeoffs that are region-specific, depending heavily on the types of generation technologies and input fuels used to generate the electricity. These tradeoffs are complex and often not positively correlated with one another, producing a web of information that makes the net benefit of changes to the electricity generation mix difficult to determine using general rules of thumb. As individuals look to change the mix of technologies and fuels used to generate electricity for environmental or economic reasons, this complex web results in a lack of clarity and understanding of the consequences of particular choices. Quantitative tools could provide individuals with clear information and an improved understanding of the tradeoffs associated with changes to the electricity mix. Unfortunately, prior to this research, no such tools existed that provided a clear, rigorous, and unbiased quantitative comparison of the region-specific environmental and economic tradeoffs associated with changes to the electricity mix. This research filled this gap by developing a methodology for calculating the environmental and economic impacts of changes to the electricity generation mix for individual regions. This methodology was applied specifically to Texas to develop the Texas Interactive Power Simulator (TIPS), an interactive online tool. This tool is currently used for direct instruction at The University of Texas at Austin for undergraduate courses. Preliminary data were collected to determine the usefulness of this tool as a classroom aid.
These data revealed that a majority of students enjoyed using the TIPS tool, felt that they learned about the tradeoffs of electricity generation methods by using TIPS, and wished that more learning tools like TIPS were available to them. This research also investigated the potential to use energy efficiency to satisfy a portion of the electricity demand that would otherwise be supplied using a generation technology. The methodology and series of decision criteria developed in this investigation were used to determine the amount of generation that could reasonably be satisfied with energy efficiency technologies and supportive policies for a particular region of interest, in this case Texas. This methodology was established using the Rosenfeld Effect as a basis for evaluating the energy efficiency potential in a specific region, providing a more realistic maximum energy efficiency value than using theoretical maximum gains based on current best available technology. It was then compared to efficiency potential estimates by the American Council for an Energy-Efficient Economy (ACEEE) and the Public Utility Commission of Texas (PUCT). In this research, I found that Texas is unlikely to realize more than an annual savings of 11%, or about 1.5 megawatt-hours per capita, compared to 2007 use levels based on nominal energy efficiency approaches. When this potential savings was applied to offset future demand increases in Texas, it was found that new generation capacity would still be needed over the next few decades to meet increasing total electricity demand. I used the economic and environmental tradeoff analysis and energy efficiency limitations methodologies that I established in my research to calculate the economic and environmental tradeoffs of changes to the electricity mix resulting from several scenarios, including federal energy and climate legislation, a nuclear renaissance, high wind power growth, and maximizing energy efficiency.
The outputs from these scenarios yielded the following observations:
1. Energy efficiency is unlikely to replace more than 11% of total per capita electricity demand in Texas. This level of energy efficiency might reduce total demand in the state, but population growth and its corresponding impacts on state electricity use might outpace the savings from energy efficiency in the long term. This population growth could result in an overall increase in total annual state electricity use, despite energy efficiency gains.
2. While nuclear power might be environmentally advantageous from the standpoint of total emissions of greenhouse gases compared to fossil fuel-fired power plants, it has very high up-front capital costs and is very water-intensive.
3. A federal combined energy efficiency and renewable portfolio standard might require states to install new renewable power generation capacity. In some states, including Texas, the amount of required new generation capacity may be small because of existing state initiatives encouraging renewable generation capacity to be installed in the state and the potential to offset some generation requirements using energy efficiency.