Browsing by Subject "Scheduling"
Now showing 1 - 20 of 25
Item: A study of the effectiveness of the Clear Flow Matrix in building construction projects (2018-08)
Vitorio Tiezzi, Augusto; O'Brien, William J.; Borcherding, John D.
Cost overruns, schedule delays, and contractual claims are commonplace in construction projects. These issues are often the result of poor planning by the construction management team, or of improper alignment of field production management and control with the project schedule. To ensure that the schedule is properly executed in the construction process, the production management and control system must be manageable, intuitive, and visually evident for all levels of management and trade supervisors. This need is especially critical for building construction, where client requirements often result in changing project demands, particularly for interior equipment and finishes. The various finish trades for a large building with many segments or floors may produce a large number of trade-location activities for the construction team to manage during construction. Thus, a good production plan is required to implement the schedule. Lott Brothers Construction Company has created a novel production management and control technique entitled the Clear Flow Matrix (CFMx). The technique consists of a matrix integration of the trade activities and locations, wherein time and workflow rhythm are represented through the progress of a unique Balanced Workfront, which balances client completion demand against trade contractor operations efficiency as trade work progresses through the building areas. The visual nature of the CFMx and the Balanced Workfront provides the project participants with a production framework for managing production and documenting the trade exchanges that are so critical for quality completion of the work in accordance with the contract schedule.

The thesis discusses the theoretical underpinning of the CFMx and how the embedded concepts are used to produce this visual matrix framework for production management and control, delivering the project in full alignment with the master schedule. The thesis provides several example applications of the Clear Flow Matrix to various types of building construction projects. Data collected from these applications through jobsite observations of the work (Work Sampling Analysis) and questionnaire interviews of trade contractor foremen and project managers provide insights into the effectiveness of the Clear Flow Matrix in comparison with other production management techniques currently used in the construction industry.

Item: Accelerating deep learning training : a storage perspective (2021-12-01)
Mohan, Jayashree; Chidambaram, Vijay; Phanishayee, Amar; Witchel, Emmett; Rossbach, Christopher J; Krahenbuhl, Philipp
Deep Learning, specifically Deep Neural Networks (DNNs), is stressing storage systems in new ways, moving the training bottleneck to the data pipeline (fetching, pre-processing data, and writing checkpoints) rather than computation at the GPUs; this leaves the expensive accelerator devices stalled for data. While prior research has explored different ways of accelerating DNN training time, the impact of storage systems, specifically the data pipeline, on ML training has been relatively unexplored. In this dissertation, we study the role of the data pipeline in various training scenarios, and based on the insights from our study, we present the design and evaluation of systems that accelerate training. We first present a comprehensive analysis of how the storage subsystem affects the training of widely used DNN models by building a tool, DS-Analyzer. Our study reveals that in many cases, DNN training time is dominated by data stalls: time spent waiting for data to be fetched from (or written to) storage and pre-processed.
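As a rough illustration of the kind of measurement such an analysis relies on, the sketch below times the fetch/pre-process phase of a training loop separately from the compute phase and reports the fraction of wall-clock time lost to data stalls. The loop, the phase durations, and the helper names are hypothetical stand-ins, not the DS-Analyzer implementation.

```python
import time

def fetch_and_preprocess(batch_id):
    # Hypothetical stand-in for reading and decoding a batch from storage.
    time.sleep(0.002)
    return [batch_id] * 4

def train_step(batch):
    # Hypothetical stand-in for the accelerator compute phase.
    time.sleep(0.001)

stall_time = 0.0
compute_time = 0.0
for batch_id in range(50):
    t0 = time.perf_counter()
    batch = fetch_and_preprocess(batch_id)   # data-pipeline phase
    t1 = time.perf_counter()
    train_step(batch)                        # compute phase
    t2 = time.perf_counter()
    stall_time += t1 - t0
    compute_time += t2 - t1

stall_fraction = stall_time / (stall_time + compute_time)
print(f"data-stall fraction: {stall_fraction:.2f}")
```

With the (arbitrary) sleeps above, fetching dominates, which is exactly the situation the dissertation calls a data stall.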
We then describe CoorDL, a user-space data loading library that addresses data stalls in dedicated single-user servers with fixed resource capacities. Next, we design and evaluate Synergy, a workload-aware scheduler for shared GPU clusters that mitigates data stalls by allocating auxiliary resources like CPU and memory cognizant of workload requirements. Finally, we present CheckFreq, a framework that frequently writes model state to storage (checkpoints) for fault tolerance, thereby reducing wasted GPU work on job interruptions while also minimizing stalls due to checkpointing. Our dissertation shows that data stalls squander the improved performance of faster GPUs. It further demonstrates, by building and evaluating systems that mitigate data stalls in several training scenarios, that an efficient data pipeline is critical to speeding up end-to-end training.

Item: Control-friendly scheduling algorithms for multi-tool, multi-product manufacturing systems (2011-12)
Bregenzer, Brent Constant; Qin, Joe; Hasenbein, John J.; Edgar, Thomas F.; Hwang, Gyeong S.; Kutanoglu, Erhan; Bonnecaze, Roger T.
The fabrication of semiconductor devices is a highly competitive and capital-intensive industry. Due to the high costs of building wafer fabrication facilities (fabs), it is expected that products should be made efficiently with respect to both time and material, and that expensive unit operations (tools) should be utilized as much as possible. The process flow is characterized by frequent machine failures, drifting tool states, parallel processing, and reentrant flows. In addition, the competitive nature of the industry requires products to be made quickly and within tight tolerances. All of these factors conspire to make both the scheduling of product flow through the system and the control of product quality metrics extremely difficult.
Up to now, much research has been done on the two problems separately, but until recently, interactions between the two systems, which can sometimes be detrimental to one another, have mostly been ignored. The research contained here seeks to tackle the scheduling problem by utilizing objectives based on control system parameters, so that the two systems might behave in a more mutually beneficial manner. A non-threaded control system is used that models the multi-tool, multi-product process in state space form and estimates the states using a Kalman filter. Additionally, the process flow is modeled by a discrete event simulation. The two systems are then merged to give a representation of the overall system. Two control system matrices, the estimate error covariance matrix from the Kalman filter and a square form of the system observability matrix called the information matrix, are used to generate several control-based scheduling algorithms. These methods are then tested against more traditional approaches from the scheduling literature to determine their effectiveness, on the basis of both how well they maintain the outputs near their targets and how well they minimize the cycle time of products in the system. The two metrics are viewed simultaneously through Pareto plots, and the merits of the various scheduling methods are judged on the basis of Pareto optimality for several test cases.

Item: Data-driven modeling and optimization of sequential batch-continuous process (2016-05)
Park, Jungup; Edgar, Thomas F.; Baldea, Michael; Djurdjanovic, Dragan; Rochelle, Gary T; Truskett, Thomas M
Driven by the need to lower capital expenditures and operating costs, as well as by competitive pressure to increase product quality and consistency, modern chemical processes have become increasingly complex.
These trends are manifest, on the one hand, in complex equipment configurations and, on the other hand, in a broad array of sensors (and control systems), which generate large quantities of operating data. Of particular interest is the combination of two traditional routes of chemical processing: batch and continuous. Batch-to-continuous (B2C) processes, which constitute the topic of this dissertation, comprise a batch section, responsible for preparing the materials that are then processed in the continuous section. In addition to merging the modeling, control, and optimization approaches of the batch and continuous operating paradigms (which are radically different in many aspects), challenges in analyzing the operation of such processes arise from multi-phase flow. In particular, we consider the case where a particulate solid is suspended in a liquid "carrier" in the batch stage, and the two-phase mixture is conveyed through the continuous stage. Our explicit goal is to provide a complete operating solution for such processes, starting with the development of meaningful and computationally efficient mathematical models, continuing with a control and fault detection solution, and finishing with a production scheduling concept. Owing to process complexity, we reject out of hand the use of first-principles models, which are inevitably high-dimensional and computationally expensive, and focus on data-driven approaches instead. Raw data obtained from the chemical industry are subject to noise, equipment malfunction, and communication failures and, as such, data recorded in process historian databases may contain outliers and measurement noise. Without proper pretreatment, the accuracy and performance of a model derived from such data may be inadequate.
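As a minimal sketch of the kind of pretreatment at issue, the snippet below removes spikes with a Hampel-style rolling-median filter. The window size, threshold, and data are illustrative choices, not the specific outlier removal techniques evaluated in the dissertation.

```python
import statistics

def hampel_filter(series, window=3, n_sigmas=3.0):
    """Replace points that deviate from the local median by more than
    n_sigmas robust standard deviations with that local median."""
    cleaned = list(series)
    k = 1.4826  # scale factor relating MAD to the standard deviation
    for i in range(len(series)):
        lo, hi = max(0, i - window), min(len(series), i + window + 1)
        neighborhood = series[lo:hi]
        med = statistics.median(neighborhood)
        mad = statistics.median(abs(x - med) for x in neighborhood)
        if abs(series[i] - med) > n_sigmas * k * mad:
            cleaned[i] = med
    return cleaned

raw = [1.0, 1.1, 0.9, 25.0, 1.0, 1.2, 0.8, 1.1]  # 25.0 is a spike
print(hampel_filter(raw))
```

The median-based threshold is what makes the filter robust: a single spike inflates a mean-based threshold but barely moves the local median.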
In the next chapter of this dissertation, we address this issue and evaluate several data outlier removal techniques and filtering methods using actual production data from an industrial B2C system. We also address a challenge specific to B2C systems, namely synchronizing the timing of the batch data with the data collected from the continuous section of the process. Variable-wise unfolded data (a typical approach for batch processes) exhibit measurement gaps between batches; however, this type of behavior is not found in the subsequent continuous section. These data gaps have an impact on data analysis and, in order to address this issue, we provide a method for filling in the missing values. The batch characteristic values are assigned in the gaps to match the data length with the continuous process, a procedure that preserves meaningful process correlations. Data-driven modeling techniques such as principal component analysis (PCA) and partial least squares (PLS) regression are well established for modeling batch or continuous processes. In this thesis, we consider them from the perspective of the B2C systems under consideration. Specific challenges that arise when modeling these systems are related to nonlinearity, which, in turn, is due to multiple operating modes associated with different product types and product grades. In order to deal with this, we propose partitioning the gap-filled data set into subsets using k-means clustering. Using this clustering method, a large data set that reflects multiple operating modes and the associated nonlinearity can be broken down into subsets in which the system exhibits potentially linear behavior. Also, in order to further increase model accuracy, the inputs to the model need to be refined. Unrelated variables may corrupt the resulting model by introducing unnecessary noise and irrelevant information.
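The mode-partitioning step described above can be sketched with a toy one-dimensional k-means. The two synthetic "operating modes" below are illustrative, and the real approach clusters multivariate gap-filled process data rather than scalars.

```python
def kmeans_1d(points, k=2, iters=20):
    """Tiny k-means for scalar data: returns (centroids, labels)."""
    # Initialize centroids spread evenly across the data range.
    centroids = [min(points) + (i + 0.5) * (max(points) - min(points)) / k
                 for i in range(k)]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point goes to its nearest centroid.
        labels = [min(range(k), key=lambda j: abs(p - centroids[j]))
                  for p in points]
        # Update step: move each centroid to the mean of its members.
        for j in range(k):
            members = [p for p, lab in zip(points, labels) if lab == j]
            if members:
                centroids[j] = sum(members) / len(members)
    return centroids, labels

# Two hypothetical operating modes, around 1.0 and around 10.0.
data = [0.9, 1.1, 1.0, 1.2, 9.8, 10.1, 10.3, 9.9]
centroids, labels = kmeans_1d(data)
```

Each resulting subset could then be fit with its own (potentially linear) PCA/PLS model, which is the point of the partitioning.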
By properly eliminating uninformative variables, model performance can be improved along with interpretability. We use variable selection methods, examining the model coefficients or variable importance in projection (VIP) values, to determine the variables to retain in the model. Developing a model to estimate final product quality poses different challenges. Measuring and quantifying the final product quality online can be limited by physical and economic constraints. Physically, some quantities cannot be measured due to sensor sizes or the surrounding environment. Economically, the offline "lab" measurements may require destroying the sample used for testing. These constraints lead to multiple sampling rates: the process measurements are stored and available continuously in real time, but the quality measurements have a much lower sampling rate. To account for this discrepancy, the online process measurements are down-sampled to match the sampling frequency of the lab measurements, and subsequently, soft sensors can be developed to estimate the final product quality. With the soft sensor in place, the process needs to be optimized to maximize plant efficiency. Using real-time optimization, the optimal sequence of manipulated inputs that minimizes off-spec production is calculated. In addition, the optimal sequences of setpoints can be calculated by carrying out the scheduling calculation with the process model. Traditionally, the scheduling calculation is carried out without taking the process dynamics into account, which can result in off-spec products if a disturbance is introduced. Incorporating the process dynamics into the scheduling layer poses many numerical challenges.
The proposed time scale-bridging model (SBM) is able to capture the input-output behavior of the process while greatly reducing computational complexity and solution time.

Item: DSP operating systems (2011-12)
Kardonik, Michael; Garg, Vijay K. (Vijay Kumar), 1963-
This report presents operating systems that are designed to run on some of today's most popular DSP platforms. We look at the functionality that those OSes provide to users, how they compare to general-market embedded OSes (such as VxWorks and Linux), and how they fit the newest DSP platforms, which feature multicore architectures and highly integrated SoCs. We also want to understand how those OSes can be utilized to implement selected real-time scheduling approaches.

Item: Exploiting hardware heterogeneity and parallelism for performance and energy efficiency of managed languages (2015-12)
Jibaja, Ivan; Witchel, Emmett; McKinley, Kathryn S.; Blackburn, Stephen M; Batory, Don; Lin, Calvin
On the software side, managed languages and their workloads are ubiquitous, executing on mobile, desktop, and server hardware. Managed languages boost the productivity of programmers by abstracting away the hardware using virtual machine technology. On the hardware side, modern hardware increasingly exploits parallelism to boost energy efficiency and performance with homogeneous cores, heterogeneous cores, graphics processing units (GPUs), and vector instructions. Two major forms of parallelism are task parallelism on different cores and vector instructions for data parallelism. With task parallelism, the hardware allows simultaneous execution of multiple instruction pipelines through multiple cores. With data parallelism, one core can perform the same instruction on multiple pieces of data. Furthermore, we expect hardware parallelism to continue to evolve and provide more heterogeneity.
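The two forms of parallelism just contrasted can be illustrated in miniature (here in Python rather than a managed-runtime setting): task parallelism dispatches independent work items to separate workers, while data parallelism applies one operation across a whole array of elements. The worker count and workload are arbitrary choices for the sketch.

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

numbers = list(range(8))

# Task parallelism: independent tasks run on multiple workers.
with ThreadPoolExecutor(max_workers=4) as pool:
    task_parallel = list(pool.map(square, numbers))

# Data parallelism (conceptually): one operation over many elements.
# Real SIMD hardware would execute this as one instruction per vector of lanes.
data_parallel = [x * x for x in numbers]

assert task_parallel == data_parallel
```

Both paths compute the same result; the difference the thesis cares about is *how* the hardware is kept busy while doing so.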
Existing programming language runtimes must continuously evolve so that programmers and their workloads may efficiently utilize this evolving hardware for better performance and energy efficiency. However, efficiently exploiting hardware parallelism is at odds with programmer productivity, which seeks to abstract hardware details. My thesis is that managed language systems should and can abstract hardware parallelism with modest to no burden on developers to achieve high performance, energy efficiency, and portability on ever-evolving parallel hardware. In particular, this thesis explores how the runtime can optimize and abstract heterogeneous parallel hardware and how the compiler can exploit data parallelism with new high-level language abstractions, with a minimal burden on developers. We explore solutions at multiple levels of abstraction for different types of hardware parallelism. (1) For recently introduced asymmetric multicore processors (AMPs), we design and implement an application scheduler in the Java virtual machine (JVM) that requires no changes to existing Java applications. The scheduler uses feedback from dynamic analyses that automatically identify critical threads and classify application parallelism. Our scheduler automatically accelerates critical threads, honors thread priorities, considers core availability and thread sensitivity, and load-balances scalable parallel threads on big and small cores, improving average performance by 20% and energy efficiency by 9% on frequency-scaled AMP hardware for scalable, non-scalable, and sequential workloads over prior research and existing schedulers. (2) To exploit vector instructions, we design SIMD.js, a portable single instruction, multiple data (SIMD) language extension for JavaScript (JS), and implement its compiler support, which together add fine-grain data parallelism to JS.
Our design principles seek portability, scalable performance across various SIMD hardware implementations, performance neutrality without SIMD hardware, and compiler simplicity to ease vendor adoption in multiple browsers. We introduce type speculation, compiler optimizations, and code generation that convert high-level JS SIMD operations into a minimal number of native SIMD instructions. Finally, to accomplish wide adoption of our portable SIMD language extension, we explore, analyze, and discuss the trade-offs of four different approaches that provide the functionality of SIMD.js when vector instructions are not supported by the hardware. SIMD.js delivers an average performance improvement of 3.3× on micro-benchmarks and key graphics algorithms on various hardware platforms, browsers, and operating systems. These language extension and compiler technologies are in the final approval process for inclusion in the JavaScript standards. This thesis shows that using virtual machine technologies protects programmers from the underlying details of hardware parallelism, achieves portability, and improves performance and energy efficiency.

Item: Fluid and queueing networks with Gurvich-type routing (2015-08)
Sisbot, Emre Arda; Hasenbein, John J.; Bickel, James Eric; Cudina, Milica; Djurdjanovic, Dragan; Khajavirad, Aida
Queueing networks have applications in a wide range of domains, from call center management to telecommunication networks. Motivated by a healthcare application, in this dissertation we analyze a class of queueing and fluid networks with an additional routing option that we call Gurvich-type routing. The networks we consider include parallel buffers, each associated with a different class of entity, and Gurvich-type routing controls the assignment of an incoming entity to one of the classes. In addition to routing, the scheduling of entities is also controlled, as the classes of entities compete for service at the same station.
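As a toy illustration of these two controls (not the dissertation's actual model), the sketch below simulates two parallel buffers: an arriving entity is routed to class 1 until that buffer reaches a threshold, and a single server always serves the longer buffer. All parameters are hypothetical.

```python
import random

random.seed(0)

THRESHOLD = 3        # route arrivals to class 1 until its buffer hits this level
buffers = [0, 0]     # entity counts for class 1 and class 2

for t in range(200):
    # Arrival with probability 0.6; the routing control picks the class.
    if random.random() < 0.6:
        target = 0 if buffers[0] < THRESHOLD else 1
        buffers[target] += 1
    # One service per slot; the scheduling control serves the longer buffer.
    busiest = max(range(2), key=lambda i: buffers[i])
    if buffers[busiest] > 0:
        buffers[busiest] -= 1

print("final buffer levels:", buffers)
```

The threshold routing rule here mirrors, in spirit, the threshold-based structure the dissertation proves for the optimal policies.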
A major theme in this work is the investigation of the interplay of this routing option with scheduling decisions in networks of various topologies. The first part of this work focuses on a queueing network composed of two parallel buffers. We form a Markov decision process representation of this system and prove structural results on the optimal routing and scheduling controls. Via these results, we determine a near-optimal discrete policy by solving the associated fluid model along with perturbation expansions. In the second part, we analyze a single-station fluid network composed of N parallel buffers, for arbitrary N. For this network, along with structural proofs on the optimal scheduling policies, we show that the optimal routing policies are threshold-based. We then develop a numerical procedure to compute the optimal policy for any initial state. The final part of this work extends the analysis of the previous part to tandem fluid networks composed of two stations. For two different models, we provide results on the optimal scheduling and routing policies.

Item: Fundamentals of distributed transmission in wireless networks : a transmission-capacity perspective (2011-05)
Liu, Chun-Hung; Andrews, Jeffrey G.; Shakkottai, Sanjay; Arapostathis, Ari; Morton, David; Vishwanath, Sriram
Interference is a defining feature of a wireless network. How to optimally deal with it is one of the most critical and least understood aspects of decentralized multiuser communication. This dissertation focuses on distributed transmission strategies that a transmitter can follow to achieve reliability while reducing the impact of interference. The problem is investigated in three directions: distributed opportunistic scheduling, multicast outage and transmission capacity, and ergodic transmission capacity, which study distributed transmission in different scenarios from a transmission-capacity perspective.
Transmission capacity is a spatial throughput metric for a large-scale wireless network with outage constraints. To understand the fundamental limits of distributed transmission, these three directions are investigated through the underlying tradeoffs in different transmission scenarios. All analytic results regarding the three directions are rigorously derived and proved under the framework of transmission capacity. For the first direction, three distributed opportunistic scheduling schemes -- distributed channel-aware, interferer-aware, and interferer-channel-aware scheduling -- are proposed. The main idea of the three schemes is to avoid transmitting in a deeply fading and/or severely interfering context. Theoretical analysis and simulations show that the three schemes are able to achieve high transmission capacity and reliability. The second direction focuses on the transmission capacity problem in a distributed multicast transmission scenario. Multicast transmission, wherein the same packet must be delivered to multiple receivers, has several distinctive traits as opposed to the more commonly studied unicast transmission. A general expression for the scaling law of multicast transmission capacity is found, and it provides insight into how to perform distributed single-hop and multi-hop retransmissions. In the third direction, the transmission capacity problem is investigated for Markovian fading channels with temporal and spatial ergodicity. The scaling law of the ergodic transmission capacity is derived, and it indicates a long-term distributed transmission and interference management policy for enhancing transmission capacity.

Item: Lightweight offload engines for worklist management and worklist-directed prefetching (2017-12)
Zhang, Dan; Chiou, Derek; Erez, Mattan; Gerstlauer, Andreas; Pingali, Keshav; Khubaib, Khubaib
The importance of irregular applications such as graph analytics is rapidly growing with the rise of Big Data.
However, parallel graph workloads tend to perform poorly on general-purpose chip multiprocessors (CMPs) due to poor cache locality, low compute intensity, frequent synchronization, uneven task sizes, and dynamic task generation. At high thread counts, execution time is dominated by worklist synchronization overhead and cache misses. Researchers have proposed hardware worklist accelerators to address scheduling costs, but these proposals often harden a specific scheduling policy and do not address high cache miss rates. This thesis presents Minnow, a technique that addresses these bottlenecks by augmenting each core in a CMP with a memory-throughput-optimized lightweight engine connected through an accelerator interface. These engines offload worklist operations from worker threads, reducing synchronization costs and improving scalability. The engines also perform worklist-directed prefetching, a software prefetching technique that exploits knowledge of upcoming tasks to perform nearly perfectly accurate and timely prefetch operations. In this thesis, we first characterize several graph applications within a popular graph analytics framework to determine their performance and bottlenecks. Next, Minnow and worklist-directed prefetching are discussed in detail, including the Minnow accelerator interface, microarchitecture, and prefetch flow-control mechanism. Finally, the benefits of Minnow and worklist-directed prefetching are evaluated within a cycle-accurate microarchitectural simulator.

Item: Modeling, control, and optimization of combined heat and power plants (2014-05)
Kim, Jong Suk; Edgar, Thomas F.
Combined heat and power (CHP) is a technology that decreases total fuel consumption and related greenhouse gas emissions by producing both electricity and useful thermal energy from a single energy source.
In the industrial and commercial sectors, a typical CHP site relies upon the electricity distribution network for significant periods, i.e., for purchasing power from the grid during periods of high demand or when off-peak electricity tariffs are available. On the other hand, in some cases a CHP plant is allowed to sell surplus power to the grid during on-peak hours, when electricity prices are highest, provided all operating constraints and local demands are satisfied. Therefore, if the plant is connected to the external grid and allowed to participate in open energy markets in the future, it could yield significant economic benefits by selling or buying power depending on market conditions. This is achieved by solving the power system generation scheduling problem using mathematical programming. In this work, we present the application of a mixed-integer nonlinear programming (MINLP) approach for scheduling a CHP plant in the day-ahead wholesale energy markets. This work employs first-principles models to describe the nonlinear dynamics of a CHP plant and its individual components (gas and steam turbines, heat recovery steam generators, and auxiliary boilers). The MINLP framework includes practical constraints such as minimum/maximum power output and steam flow restrictions, minimum up/down times, start-up and shut-down procedures, and fuel limits. We provide case studies involving the Hal C. Weaver power plant complex at The University of Texas at Austin to demonstrate this methodology. The results show that the optimized operating strategies can yield substantial net income from electricity sales and purchases. This work also highlights the application of a nonlinear model predictive control scheme to a heavy-duty gas turbine power plant for frequency and temperature control.
This scheme is compared to a classical PID/logic-based control scheme and is found to provide superior output responses, with smaller settling times and less oscillatory behavior in response to disturbances in electric loads.

Item: Models to predict and influence consumer demand : applications to television advertising and solar panel adoption (2019-08)
Souyris, Sebastián; Balakrishnan, Anant; Duan, Jason; Seshadri, Sridhar; Lai, Guoming; Tompaidis, Stathis; Rai, Varun
This dissertation consists of two independent essays that share the common goal of predicting and influencing consumer demand: (i) Scheduling Advertising on Television; and (ii) Peer Effects in the Diffusion of Solar Panels: A Dynamic Discrete Choice Approach. These essays address data-driven problems of operations and management science using optimization and econometrics.

Item: Monitoring the Effects of the Dallas/Fort Worth Regional Airport. Volume I, Ground Transportation Impacts (Council for Advanced Transportation Studies, 1976-12)
Dunlay, William J., Jr; Henry, Lyndon; Caffery, Thomas G.; Wiersig, Douglas W.; Zambrano, Waldo A.
The report presents new conceptual and methodological approaches to developing models that interrelate airline schedules, airport-based employee work-shift schedules, and airport-access ground traffic volumes in any time period for a given airport. The results of a survey of ground travel at the Dallas/Fort Worth Regional Airport are presented and analyzed. Specific ground transportation impacts of the installation of this relatively new airport are assessed. Models are described which (1) express volumes of automobiles carrying airline passengers and visitors as a function of airline schedules and (2) transform existing or future employee work-shift schedules into estimates of incoming and outgoing employee vehicle volumes in any time interval.
Preliminary research toward the development of a model to estimate public transit passenger volumes as a function of airline passenger volumes is also described.

Item: On-demand planning of a school of autonomous mobile robots for prioritized task completion (2020-05-06)
Bakshi, Soovadeep; Chen, Dongmei, Ph. D.; Beaman, Joseph J; Longoria, Raul G; Tanaka, Takashi; Hanasusanto, Grani A
Using autonomous mobile robots (AMRs) to collaboratively complete tasks has been intensively pursued by both industry and academia. Despite advancements in the field of AMR operation, many challenges remain in this domain. For instance, optimally operating numerous AMRs to collaboratively complete a large number of tasks in real time is highly challenging. First, prioritized tasks need to be continuously and optimally assigned to the entire school of AMRs. When the number of AMRs is larger than 20 and the number of tasks is greater than 500, the computational cost of optimally assigning tasks to each AMR and scheduling these AMRs to complete the assigned tasks is significant, which renders most operations of such an AMR system infeasible in real time. Second, for a typical field equipped with AMRs, the tasks involve going from one location to another, making the search space asymmetric. The AMR needs to travel from a task's starting location to its ending location to complete it. Thus, this AMR problem operates in the ‘task space’ instead of the actual ‘point space’, i.e., tasks are assigned to individual AMRs without repetition, rather than points/locations in the field. This aspect dramatically increases the complexity of the optimization process, and traditional ‘point space’ methods might not apply. Third, the tasks might have different priority levels.
The fact that completing higher-priority tasks earlier is preferred during clustering and scheduling adds another dimension to the optimization process, which can be defined as an asymmetric ‘priority space’. The asymmetry of the ‘priority space’ also affects the clustering and scheduling of the AMRs. There has been extensive research on scheduling AMRs to complete tasks by traveling to specific locations; however, most algorithms assume that the tasks do not have priorities. The ‘priority space’ necessitates exploring new methods for optimal scheduling in an asymmetric space. Finally, the clustering and scheduling of the school of AMRs should adapt to operational and environmental changes. For instance, a battery-powered AMR needs to use its energy efficiently, and therefore energy-conscious trajectory generation, as well as optimal recharge scheduling of AMRs under such circumstances, is also of great interest. The overall planning process should be completed in a timely manner so that the operation of the school of AMRs can be implemented in real time, i.e., on demand. To address the above challenges, this research develops algorithms for real-time control of a school of AMRs that can optimally cluster the AMRs with tasks having various levels of priority, schedule each AMR to complete the tasks with the highest efficiency, generate energy-efficient trajectories for each AMR, and perform energy-based recharge scheduling.

Item: Online learning for scheduling in wireless networks (2022-05-06)
Tariq, Isfar; Shakkottai, Sanjay; De Veciana, Gustavo A; Caramanis, Constantine; Baccelli, Francois; Hasenbein, John J
Over the last few years, online learning has grown in importance, as it allows us to build systems that interact with the environment while continuously learning from past interactions to improve future decisions and maximize some objective.
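This interact-and-learn loop can be illustrated with a minimal ε-greedy multi-armed bandit. The three "arms" and their success probabilities below are hypothetical stand-ins for scheduling choices, not the algorithms developed in the dissertation.

```python
import random

random.seed(1)

true_rates = [0.2, 0.5, 0.8]   # hypothetical success probability of each arm
counts = [0] * 3               # pulls per arm
values = [0.0] * 3             # running mean reward per arm
EPSILON = 0.1

for t in range(2000):
    # Explore with probability EPSILON, otherwise exploit the best estimate.
    if random.random() < EPSILON:
        arm = random.randrange(3)
    else:
        arm = max(range(3), key=lambda a: values[a])
    reward = 1.0 if random.random() < true_rates[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

best_arm = max(range(3), key=lambda a: values[a])
print("estimated best arm:", best_arm)
```

Past interactions (rewards) steadily sharpen the estimates, so later decisions concentrate on the best option, which is the essence of the online-learning setting the abstract describes.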
While online learning is used in several areas, such as recommendation systems, the complexity of wireless scheduling makes it unclear how to utilize online learning there. For instance, MU-MIMO scheduling involves the selection of a user subset and an associated rate selection in each time-slot for varying channel states (the vector of quantized channel matrices for each of the users) -- a complex integer optimization problem that is different for each channel state. We propose that a low-dimensional structure is present in wireless systems which can be exploited through online learning. For instance, channel-states "near" each other will likely have the same optimal solution. In our first problem, we present a framework through which we formulate the wireless scheduling problem as a multi-armed bandit problem. We then propose an online algorithm that can cluster the channel-states and learn the capacity region of these clusters. We show that our algorithms can significantly reduce the complexity of online learning in wireless settings, and we provide regret guarantees for our algorithm. In the second problem, we expand on our previous work and present (1) a framework that further exploits the low-dimensional structure present in the system by clustering users and (2) an online algorithm that utilizes the parameters learned by our previous algorithms to optimize the subset of users to be scheduled for a given channel-state. We show that our algorithms not only converge faster but also improve the overall throughput of the system. We also provide regret guarantees for the user clustering algorithm.

Item: Planning and scheduling in semiconductor manufacturing (2010-08)
Zarifoglu, Emrah; Kutanoglu, Erhan; Hasenbein, John J.; Morton, David P.; Popova, Elmira; Gilbert, Stephen M.
Semiconductor manufacturing is one of the most complex existing manufacturing systems. It requires constant improvement to meet demands and expectations.
This dissertation studies semiconductor manufacturing under three main topics: preventive maintenance scheduling, lot size management, and AMHS scheduling. We first provide an optimization-based decomposition algorithm and a heuristic algorithm, along with direct optimization, to solve the preventive maintenance scheduling problem. Then, we develop an analytic tool to investigate and find the optimal lot sizes to run in a manufacturing environment to minimize cycle time. Finally, we propose an optimization-based AMHS scheduling algorithm and compare its performance to a myopic algorithm.Item A process for determining construction contract duration for highway projects(2004-05-22) Bollig, Christopher Matthew; O'Connor, James ThomasThis paper describes a process for estimating the construction contract time for highway projects. The process uses the critical path method (CPM) to assess complex, non-repetitive construction items while assessing horizontal, repetitive construction items with the linear scheduling method. This is done chiefly by identifying strategic sequencing options and critical-path-limited scoping of the model.Item Project Controls and Management Systems : current practice and how it has changed over the past decade(2017-12-06) Mostafa, Kareem Tarek; O'Brien, William J.Project Controls and Management System (PCMS) refers to an ecosystem of processes, tools, and personnel required for the proper planning and execution of capital projects throughout the phases of design, procurement, construction, and startup. It can be divided into focus areas (functions) that include Estimating, Planning, Scheduling, Cost Control, Change Management, Progressing, and Forecasting.
Various trends, such as globalization, contractor specialization, and developments in information technology, have impacted the way PCMS are implemented and made them the subject of extensive research into how best to utilize those trends. Replicating the research methodology used in a 2011 report published by the Construction Industry Institute (CII), this work investigates the current status of PCMS implementation and how it has changed over the past decade. It concludes that while the original PCMS principles are still valid, adoption has changed drastically in terms of efficiency for the majority of the functions. The research also identifies areas of potential concern and provides recommendations for further improvement.Item Robust Optimization in the Scheduling of Concentrated Solar Power Plants(2023) Topete, Jerry; Ruiz, Juan P.The scheduling of renewable energy systems, such as Concentrated Solar Power (CSP) plants, is a difficult task with active research and innovation in progress. The inherent uncertainty of some model parameters used to generate the schedule may result in unfavorable objective function values, and sometimes infeasibility, when the actual realizations of those parameters differ from the values used in the model. Robust optimization techniques can be used to make scheduling decisions while accounting for uncertainty. In this work we propose a methodology for generating the optimal schedule of a CSP plant under uncertainty. Starting from a given forecast of the uncertain parameters (such as heat from the sun, or the price at which power is sold), we artificially generate a range of realization scenarios using a deviation array; these scenarios then act as input to a CSP scheduling model that produces a recommendation of the power to be sold. The forecasts/scenarios are defined by probability distributions.
These recommendations are then input into another CSP scheduling model in which the forecasts act as realizations. The result is a scenario matrix used to make scheduling decisions by calculating the expected benefit of choosing each candidate schedule. We illustrate this framework on a particular CSP plant. For this case study, our results indicate that when the heat from the sun is uncertain, the most beneficial schedule is based on a medium sun-heat forecast, while in our second case, where the price of power sold is uncertain, the most beneficial schedule is based on a low pricing forecast. The method can be extended from procedurally generated forecast sets to real ones, providing a schedule without much computational effort. We suggest that future work explore broader forecast probability distributions as well as the possibility of infeasible scenarios.Item RT-WiFi networks for wireless cyber-physical applications(2017-05-03) Leng, Quan; Mok, Aloysius Ka-Lau; Gouda, Mohamed G.; Han, Song; Qiu, LiliApplying wireless technologies to cyber-physical systems (CPSs) has received significant attention owing to their advantages in enhanced system mobility and reduced deployment and maintenance cost. However, existing wireless technologies either cannot provide real-time guarantees on packet delivery or do not have enough bandwidth to satisfy the requirements of cyber-physical applications. To satisfy these communication requirements, we design a flexible, real-time, high-speed wireless communication platform called RT-WiFi to support a wide range of wireless cyber-physical applications. RT-WiFi provides deterministic timing guarantees on packet delivery with an adjustable sampling rate. It features a set of configurable components for adjusting design trade-offs, including sampling rate, latency variance, reliability, and compatibility with Wi-Fi networks.
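The scenario-matrix step described above, computing the expected benefit of each candidate schedule against scenario probabilities, reduces to a weighted row sum. A minimal sketch with a hypothetical function name and toy numbers:

```python
def best_schedule(benefit, scenario_probs):
    """Expected-benefit selection over a scenario matrix.

    benefit[i][j]: benefit of committing to schedule i if scenario j is
    realized; scenario_probs[j]: probability of scenario j. Returns the index
    of the schedule with the highest expected benefit, plus all expected
    values. Toy illustration of the framework, not the paper's model.
    """
    expected = [sum(b * p for b, p in zip(row, scenario_probs)) for row in benefit]
    return max(range(len(expected)), key=expected.__getitem__), expected
```

In the case study, the rows would correspond to schedules built from low, medium, and high forecasts, and the benefit entries would come from re-running the CSP scheduling model with each forecast treated as the realization.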
In this dissertation, we first present the design and implementation of the RT-WiFi MAC layer. Building on it, we present network management techniques that schedule resources in RT-WiFi networks: first, a jitter-free scheduling algorithm that minimizes communication jitter under both static and dynamic topologies; then, scheduling algorithms that coordinate channel assignments and packet transmissions in RT-WiFi networks containing multiple access points. We conduct a series of experiments and simulations to validate the design and demonstrate the advantages of RT-WiFi and the proposed network management algorithms. A case study that integrates RT-WiFi with a real cyber-physical application is included to show its performance in real-world applications.Item Schedulers for next generation wireless networks : realizing QoE trade-offs for heterogeneous traffic mixes(2018-04-30) Anand, Arjun; De Veciana, Gustavo; Baccelli, Francois; Shakkottai, Sanjay; Hasenbein, John; Vikalo, HarisIn this thesis we focus on the design of schedulers for next-generation wireless networks that support application mixes characterized by different, possibly complex, application/user Quality of Experience (QoE) metrics. The central problem underlying resource allocation for such systems is realizing QoE trade-offs among various applications/users given the dynamic loads and capacity variability they typically see. In the first part of the thesis our focus is on applications where QoE depends on flow-level delay-based metrics. We consider system-wide metrics which directly capture both users' QoE metrics and appropriate QoE trade-offs among various applications for a wide range of system loads.
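The jitter-free scheduling idea mentioned above, giving each flow a fixed periodic slot offset so its transmissions are exactly evenly spaced, can be sketched as a toy TDMA offset assignment over one hyperperiod. This is an assumed simplification for a static topology; the dissertation's algorithm also handles dynamic topologies and multi-AP coordination.

```python
from functools import reduce
from math import gcd

def jitter_free_schedule(periods):
    """Assign each periodic flow a fixed slot offset so it transmits exactly
    every periods[f] slots (zero jitter), greedily avoiding collisions over
    one hyperperiod. Toy sketch only.
    """
    hyper = reduce(lambda a, b: a * b // gcd(a, b), periods, 1)
    busy = [None] * hyper
    offsets = []
    for f, p in enumerate(periods):
        for off in range(p):
            slots = range(off, hyper, p)
            if all(busy[s] is None for s in slots):
                for s in slots:
                    busy[s] = f  # reserve this flow's slots
                offsets.append(off)
                break
        else:
            raise ValueError(f"flow {f} with period {p} cannot be placed jitter-free")
    return offsets
```

Because every flow keeps a constant inter-transmission gap, the schedule has zero jitter by construction; the hard part, which this sketch sidesteps, is preserving that property as flows join, leave, or move between access points.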
This approach differs from traditional wireless scheduler designs, which have been driven by rate-based criteria, e.g., utility-maximizing/proportionally fair schedulers, and/or queue-based packet schedulers that do not directly reflect the link between flow-level delays and users' QoE. In the second part of this thesis we address the key design challenges in networks supporting Ultra Reliable Low Latency Communications (URLLC) traffic, which requires extremely high reliability (99.999%) and very low delays (1 msec). We explore three types of flow delay-based metrics, based on: 1) overall mean delay; 2) functions of mean delays; and 3) means of functions of delays. We begin by considering the minimization of mean flow delay in an M/GI/1 queueing model of a wireless Base Station (BS) where the flow size distributions are of the New Better than Used in Expectation + Decreasing Hazard Rate (NBUE+DHR) type. Such flow size distributions have been observed in real systems, and we validate this model based on collected data. Using a combination of analysis and simulation, we show that our scheduler achieves good performance for users that might correspond to interactive applications like web browsing and/or stored video streaming, and is robust to variations in system load. Next we consider a generalization of this approach in which we minimize a metric based on cost functions of the mean flow delays in a multi-class system, where users/flows are classified according to their respective QoE requirements and each class's QoE requirement is modeled by its own cost function. This approach lets us model QoE more accurately and gives us more flexibility in considering QoE trade-offs among heterogeneous user classes. We optimize two different metrics based on how we average the cost functions of delays, namely, functions of mean delays, and means of functions of delays.
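As a point of reference for the flow-delay metrics above: with known job sizes, preemptive Shortest Remaining Processing Time (SRPT) minimizes mean flow time on a single server, whereas the thesis's schedulers target unknown sizes under the NBUE+DHR assumption. A toy single-server simulator (function name and jobs hypothetical) contrasting FCFS with SRPT:

```python
def mean_flow_time(jobs, policy="fcfs"):
    """Mean flow time (completion minus arrival) of (arrival, size) jobs on a
    single preemptive server under FCFS or SRPT. A known-size toy baseline,
    not the thesis's scheduler.
    """
    remaining = {i: size for i, (_, size) in enumerate(jobs)}
    arrival = {i: arr for i, (arr, _) in enumerate(jobs)}
    t, done = 0.0, {}
    while remaining:
        active = [i for i in remaining if arrival[i] <= t]
        if not active:  # server idles until the next arrival
            t = min(arrival[i] for i in remaining)
            continue
        if policy == "srpt":
            i = min(active, key=lambda j: remaining[j])  # shortest remaining first
        else:
            i = min(active, key=lambda j: arrival[j])    # first come, first served
        # Run job i until it finishes or the next arrival (a preemption point).
        future = [arrival[j] for j in remaining if arrival[j] > t]
        step = remaining[i] if not future else min(remaining[i], min(future) - t)
        t += step
        remaining[i] -= step
        if remaining[i] <= 1e-12:
            del remaining[i]
            done[i] = t
    return sum(done[i] - arrival[i] for i in done) / len(done)
```

For a long job followed by a short one, SRPT preempts the long job and finishes the short one first, cutting the mean flow time; size-oblivious schedulers must recover a similar effect from the shape of the size distribution.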
The former can be used when users' experience is sensitive to mean delays, while the latter can be used when users' experience is also sensitive to higher moments of delay, e.g., variance or soft thresholds on delay. Extensive simulations confirm the effectiveness of our proposed approaches at realizing various QoE trade-offs and performance. In 5G wireless networks, URLLC traffic is expected to support applications like industrial automation, mission-critical traffic, and virtual reality, where the wireless network has to reliably transport small packets with very high reliability and low delays. We address the following aspects of system design for URLLC traffic: 1) quantifying the impact of system parameters like system bandwidth, link SINR, and delay and latency constraints on URLLC 'capacity'; 2) provisioning the wireless system appropriately to meet URLLC Quality of Service (QoS) requirements; and 3) designing efficient Hybrid Automatic Repeat Request (HARQ) schemes for transmitting small packets. Further, due to the heterogeneity in delay requirements between URLLC and other types of traffic, sharing radio resources between them creates its own unique challenges. We develop efficient multiplexing schemes between URLLC traffic and other mobile broadband traffic based on preemptive puncturing/superposition of the mobile broadband transmissions by URLLC transmissions.
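The preemptive puncturing mentioned above can be pictured at the mini-slot level: a URLLC arrival immediately overwrites part of an already-scheduled broadband frame, and the affected broadband transmissions fall back on HARQ for recovery. The following is purely an illustration of that mechanism, with a hypothetical function and encoding, not the thesis's scheme:

```python
def puncture(embb_owner, urllc_arrivals):
    """Preemptive puncturing sketch: each URLLC arrival (a mini-slot index)
    overwrites that mini-slot of the scheduled eMBB frame. Returns the
    punctured schedule and the eMBB transmissions hit, which would then rely
    on HARQ-based recovery.

    embb_owner: list mapping each mini-slot to an eMBB transmission id.
    """
    schedule = list(embb_owner)  # e.g. ['A', 'A', 'B', 'B', 'C']
    hit = set()
    for s in urllc_arrivals:
        if 0 <= s < len(schedule):
            hit.add(schedule[s])
            schedule[s] = 'URLLC'
    return schedule, sorted(hit)
```

The trade-off the thesis studies sits exactly here: puncturing keeps URLLC delay minimal without reserving dedicated resources, at the cost of degrading the punctured broadband transmissions.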