Browsing by Subject "GRASP"
Now showing 1 - 4 of 4
Item: Home therapist network modeling (2011-12)
Shao, Yufen; Bard, Jonathan F.; Jarrah, Ahmad I.; Lasdon, Leon; Morton, David P.; Kutanoglu, Erhan

Home healthcare has been a growing sector of the economy over the last three decades, with roughly 23,000 companies now doing business in the U.S. and producing over $56 billion in combined annual revenue. In such a highly fragmented market, the profitability of individual companies depends on effective management and efficient operations. This dissertation aims to reduce costs and improve productivity for home healthcare companies.

The first part of the research develops a new formulation of the therapist routing and scheduling problem as a mixed-integer program. Given a time horizon, a set of therapists, and a group of geographically dispersed patients, the objective of the model is to minimize the total cost of providing service by assigning patients to therapists while satisfying a host of constraints concerning time windows, labor regulations, and contractual agreements. The problem is NP-hard and proved to be beyond the capability of commercial solvers such as CPLEX. To obtain good solutions quickly, three approaches were developed: two heuristics and a decomposition algorithm.

The first approach is a parallel GRASP that assigns patients to multiple routes in a series of rounds. During the first round, the procedure optimizes the patient distribution among the available therapists, aiming for a local optimum with respect to the combined cost of the routes. Computational results show that the parallel GRASP reduces costs by 14.54% on average for real datasets and works efficiently on randomly generated datasets. The second approach is a sequential GRASP that constructs one route at a time. When building a route, the procedure tracks the amount of time used by the therapists each day, giving it tight control over the distribution of treatment time within a route. Computational results show that the sequential GRASP provides a cost savings of 18.09% on average for the same real datasets and obtains much better solutions with significantly less CPU time on the same randomly generated datasets.

The third approach is a branch-and-price algorithm designed to find exact optima within an acceptable amount of time. Decomposing the full problem by therapist yields a series of constrained shortest path subproblems, which, by comparison, are relatively easy to solve. Computational results show that this approach is not efficient here because (1) Dantzig-Wolfe decomposition does not converge fast enough, and (2) the subproblem is strongly NP-hard and cannot be solved efficiently.

The last part of this research studies a simpler case in which all patients have fixed appointment times. The model takes the form of a large-scale mixed-integer program whose computational complexity depends on the features considered. With a piecewise-linear cost structure, the problem is strongly NP-hard and not solvable with CPLEX for instances of realistic size. Consequently, a rolling horizon algorithm, two relaxed mixed-integer models, and a branch-and-price algorithm were developed. Computational results show that both the rolling horizon algorithm and the two relaxed mixed-integer models solve the problem efficiently; the branch-and-price algorithm, however, is again not practical because the convergence of Dantzig-Wolfe decomposition is slow even when stabilization techniques are applied.
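As a rough illustration of the construction phase shared by GRASP heuristics like the two described above, the Python sketch below inserts patients into therapist routes by repeatedly drawing from a restricted candidate list of the cheapest feasible insertions. This is only a minimal sketch under generic assumptions, not the dissertation's implementation; `patients`, `routes`, and `insertion_cost` are hypothetical placeholders supplied by the caller, with `insertion_cost` returning None when an insertion is infeasible (e.g., violates a time window).

```python
import random

def grasp_construct(patients, routes, insertion_cost, alpha=0.3, rng=random):
    """Greedy randomized construction phase of a GRASP: repeatedly insert an
    unassigned patient into a route, choosing each insertion at random from a
    restricted candidate list (RCL) of the cheapest feasible moves."""
    unassigned = set(patients)
    while unassigned:
        candidates = []
        for p in unassigned:
            for r in routes:
                cost = insertion_cost(p, r)      # None if infeasible
                if cost is not None:
                    candidates.append((cost, p, r))
        if not candidates:                       # no feasible insertion remains
            break
        candidates.sort(key=lambda c: c[0])
        c_min, c_max = candidates[0][0], candidates[-1][0]
        threshold = c_min + alpha * (c_max - c_min)
        rcl = [c for c in candidates if c[0] <= threshold]
        _, patient, route = rng.choice(rcl)      # randomized greedy choice
        route.append(patient)
        unassigned.remove(patient)
    return routes, unassigned
```

Setting alpha to 0 makes the choice purely greedy, while alpha = 1 makes it purely random; repeating the construction with different random draws is what gives GRASP its multi-start character.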
Item: Optimization models for manufacturing and personnel scheduling (2017-09-14)
Hur, Youngbum; Bard, Jonathan F.; Hasenbein, John J.; Kutanoglu, Erhan; Kiermaier, Ferdinand; Frey, Markus

Personnel scheduling problems have been studied by many researchers over the last five decades, but much of the literature has ignored the array of break types used in practice. We investigate the benefits that flexibility offers in daily shift scheduling, especially when demand is uncertain. The forms of flexibility considered include shift start times, the number of breaks, break lengths, and break placement. Five related mixed-integer programming models are developed and used to compare break scheduling done in advance with break scheduling done either sequentially or in real time, for various shift and break profiles. In addition, we investigate the same problem under stochastic demand. We formulate a multi-stage stochastic programming model and then transform it into a two-stage model to ease the computational burden. For testing purposes, we consider 61 scenarios and evaluate performance with five metrics.

While the full range of shift and break options is rarely considered in personnel scheduling problems, many practical aspects of machine setups have likewise been neglected in scheduling semiconductor assembly and test (AT) operations. We examine all sides of the problem in a multi-machine, multi-tooling environment to see the impact of a hierarchical approach to setups on facility performance. The primary objectives are to minimize the number of shortages of key devices and to maximize weighted throughput, in that order, over a planning horizon of up to five days. Secondary objectives include minimizing the number of machines used to meet output targets and minimizing makespan. For the shift scheduling problems with flexible breaks, the application studied involves airport ground handlers; for the hierarchical machine setup problem at semiconductor assembly and test facilities, testing was done with data provided by Texas Instruments.

In Chapter 2, we investigate the benefits of flexibility for shifts and breaks with both deterministic and randomized demand. A rolling horizon approach is proposed for real-time break scheduling as demand unfolds over the day. In Chapter 3, we extend the shift scheduling problem to more realistically accommodate stochasticity. We introduce a two-stage stochastic programming model and determine the value of the stochastic solution and the expected value of perfect information. In Chapter 4, we develop an optimization model for scheduling multi-pass lots under hierarchical machine setup rules at assembly and test facilities. We determine machine setups, lot assignments, and sequences using a greedy randomized adaptive search procedure.
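To make the two-stage idea concrete, here is a small deterministic-equivalent sketch of a stochastic shift-scheduling model in the spirit of Chapter 3, written with the PuLP modeling library. It is illustrative only: the shift definitions, demand scenarios, costs, and shortage penalty are invented, and the model is far simpler than the five MIP formulations and 61-scenario experiments described above.

```python
from pulp import LpProblem, LpVariable, LpMinimize, lpSum

# Hypothetical data: each shift covers a set of periods; demand varies by scenario.
periods = range(6)
shifts = {"early": {0, 1, 2}, "mid": {2, 3, 4}, "late": {3, 4, 5}}
scenarios = {"low": [2, 3, 4, 4, 3, 2], "high": [4, 6, 7, 7, 6, 4]}
scen_prob = {"low": 0.5, "high": 0.5}
shift_cost, shortage_penalty = 8.0, 20.0

model = LpProblem("two_stage_shift_scheduling", LpMinimize)

# First-stage (here-and-now) decision: employees assigned to each shift.
x = {s: LpVariable(f"x_{s}", lowBound=0, cat="Integer") for s in shifts}
# Second-stage recourse: uncovered demand per period under each scenario.
u = {(w, t): LpVariable(f"u_{w}_{t}", lowBound=0)
     for w in scenarios for t in periods}

# Objective: staffing cost plus expected shortage penalty.
model += (lpSum(shift_cost * len(shifts[s]) * x[s] for s in shifts)
          + lpSum(scen_prob[w] * shortage_penalty * u[w, t]
                  for w in scenarios for t in periods))

# Coverage plus shortage must meet demand in every period of every scenario.
for w, demand in scenarios.items():
    for t in periods:
        model += lpSum(x[s] for s in shifts if t in shifts[s]) + u[w, t] >= demand[t]

model.solve()
print({s: x[s].value() for s in shifts})
```

The first-stage staffing levels are fixed before demand is observed, while the shortage variables absorb whatever each scenario reveals; this is the structure that the value of the stochastic solution and the expected value of perfect information are measured against.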
Item: Pickup and delivery problems with side constraints (2012-12)
Qu, Yuan, Ph.D.; Bard, Jonathan F.; Lasdon, Leon; Morton, David P.; Kutanoglu, Erhan; Pachon, Julian

Pickup and delivery problems (PDPs) have been studied extensively over the past decades. A wide variety of research exists on both exact algorithms and heuristics for generic variations of the problem as well as for real-life applications, which continue to spark new challenges and open up new opportunities for researchers. In this dissertation, we study two variations of the pickup and delivery problem that arise in industry and develop new computational methods that are shown to be effective with respect to existing algorithms and scheduling procedures found in practice.

The first problem is the pickup and delivery problem with transshipment (PDPT). The work presented here was inspired by a daily route planning problem at a regional air carrier. In structuring the analysis, we describe a unique way to model the transshipment option on a directed graph. With the graph as the foundation, we implemented a branch-and-price algorithm, but preliminary results showed that it has difficulty solving large instances. As an alternative, we developed a greedy randomized adaptive search procedure (GRASP) with several novel features. In the construction phase, shipment requests are inserted into routes until all demand is satisfied or no feasible insertion exists. In the improvement phase, an adaptive large neighborhood search algorithm reconstructs portions of the feasible routes; specialized removal and insertion heuristics were designed for this purpose. We also developed a procedure for generating problem instances in the absence of any in the literature. Testing was done on existing PDP data sets and on the generated PDPT data sets. For the former, the performance and solution quality of the GRASP were comparable to the best known heuristics; for the latter, the GRASP found near-optimal solutions in most test cases.
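The improvement phase described above follows the general destroy-and-repair template of adaptive large neighborhood search. The sketch below shows that template in generic form; the destroy and repair operators, cost function, acceptance rule, and scoring weights are placeholders, not the specialized removal and insertion heuristics developed in the dissertation.

```python
import math
import random

def alns_improve(solution, cost, destroy_ops, repair_ops, iterations=1000,
                 reaction=0.2, reward=(5.0, 2.0, 0.5), rng=random):
    """Adaptive large neighborhood search: repeatedly destroy part of the
    solution, repair it, and adjust operator weights based on outcomes."""
    weights_d = {op: 1.0 for op in destroy_ops}
    weights_r = {op: 1.0 for op in repair_ops}
    best = current = solution
    best_cost = current_cost = cost(solution)

    for _ in range(iterations):
        d = rng.choices(list(destroy_ops), [weights_d[o] for o in destroy_ops])[0]
        r = rng.choices(list(repair_ops), [weights_r[o] for o in repair_ops])[0]
        candidate = r(d(current))           # remove some requests, then reinsert
        cand_cost = cost(candidate)

        if cand_cost < best_cost:           # new global best
            best, best_cost, score = candidate, cand_cost, reward[0]
            current, current_cost = candidate, cand_cost
        elif cand_cost < current_cost:      # improves the current solution
            current, current_cost, score = candidate, cand_cost, reward[1]
        else:                               # occasionally accept worse moves
            score = reward[2]
            if rng.random() < math.exp(-(cand_cost - current_cost)):
                current, current_cost = candidate, cand_cost

        # Exponential smoothing of operator weights (the "adaptive" part).
        weights_d[d] = (1 - reaction) * weights_d[d] + reaction * score
        weights_r[r] = (1 - reaction) * weights_r[r] + reaction * score

    return best, best_cost
```

Operators that keep producing good repairs accumulate higher weights and are drawn more often, which is what distinguishes ALNS from a fixed-neighborhood local search.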
In the second part of the dissertation, we focus on a new version of the heterogeneous PDP in which the capacity of each vehicle can be modified by reconfiguring its interior to satisfy different types of customer demand. The work was motivated by a daily route planning problem arising at a senior activity center. A fleet of configurable vans is available each day to transport participants to and from the center as well as to secondary facilities for rehabilitative and medical treatment. To find solutions, we developed a two-phase heuristic that draws on ideas from greedy randomized adaptive search procedures with multiple starts. In phase I, a set of good feasible solutions is constructed using a series of randomized procedures, and a representative subset of those solutions is selected as candidates for improvement by solving a maximum diversity problem. In phase II, an adaptive large neighborhood search (ALNS) heuristic is used to find local optima by reconstructing portions of the feasible routes. A specialized route feasibility check with vehicle type reassignment is also introduced to take full advantage of the heterogeneous nature of the vehicles. The effectiveness of the proposed methodology is demonstrated by comparing the solutions it provides for the equivalent of several weeks of operations with the manually derived solutions used in practice. The analysis indicates that savings of 30% to 40% can be achieved with the multi-start ALNS heuristic.

For settings with more restricted time windows, an exact method based on branch-and-price-and-cut is introduced. In the procedure, the master problem at each node in the search tree is solved by column generation to find a lower bound. To improve the bound, subset-row inequalities are applied to the variables of the master problem. Columns are generated by solving the pricing subproblems with a labeling algorithm enhanced by new dominance conditions, and local search on the columns is used to quickly find promising alternatives. Implementation details and ways to improve the performance of the overall procedure are discussed. Testing was done on a set of real instances as well as on a set of randomly generated instances with up to 50 customer requests. The results show that optimal solutions are obtained in the majority of cases.
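For readers unfamiliar with pricing by labeling, the following sketch shows a generic label-setting algorithm for a resource-constrained shortest path pricing subproblem with a simple (reduced cost, time, load) dominance test. It is a textbook-style illustration under assumed inputs (the arc data structure, capacity, and horizon), not the dissertation's algorithm or its new dominance conditions.

```python
import heapq
from collections import defaultdict

def price_route(arcs, depot, sink, capacity, horizon):
    """Label-setting for a resource-constrained shortest path.  A label is
    (reduced_cost, time, load, node, path); arcs[i] yields tuples of the form
    (j, reduced_cost, travel_time, demand) for arcs leaving node i."""
    queue = [(0.0, 0.0, 0.0, depot, (depot,))]
    labels = defaultdict(list)              # non-dominated labels kept per node
    best = None

    def dominated(lab, others):
        c, t, q = lab[0], lab[1], lab[2]
        return any(c2 <= c and t2 <= t and q2 <= q
                   for (c2, t2, q2, *_rest) in others)

    while queue:
        label = heapq.heappop(queue)
        cost, time, load, node, path = label
        if dominated(label, labels[node]):  # prune dominated labels
            continue
        labels[node].append(label)
        if node == sink:
            if best is None or cost < best[0]:
                best = label
            continue
        for j, rc, tt, dem in arcs.get(node, []):
            if j in path:                   # keep routes elementary
                continue
            t2, q2 = time + tt, load + dem
            if t2 > horizon or q2 > capacity:   # resource feasibility
                continue
            heapq.heappush(queue, (cost + rc, t2, q2, j, path + (j,)))
    return best
```

A label with negative total reduced cost at the sink corresponds to a column worth adding to the master problem; stronger dominance conditions shrink the set of labels that must be kept and are what make such pricing practical on larger instances.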
Item: Simulation and optimization techniques applied in semiconductor assembly and test operations (2016-05)
Jia, Shihui; Bard, Jonathan F.; Morrice, Douglas J.; Hasenbein, John; Khajavirad, Aida; Gao, Zhufeng

The importance of back-end operations in semiconductor manufacturing has been growing steadily in the face of higher customer expectations and stronger competition in the industry. To achieve low cycle times, high throughput, and high utilization while improving due-date performance, more effective tools are needed to support machine setup and lot dispatching decisions. In previous work, the problem of maximizing the weighted throughput of lots undergoing assembly and test (AT), while ensuring that critical lots are given priority, was investigated, and a greedy randomized adaptive search procedure (GRASP) was developed to find solutions.

Optimization techniques have long been used for scheduling manufacturing operations on a daily basis; solutions provide a prescription for machine setups and job processing over a finite planning horizon. In contrast, simulation provides more detail in a descriptive sense: it shows how the system will evolve in real time for a given demand, a given set of resources, and rules for using them. A simulation model can also easily accommodate changeovers, initial setups, and multi-pass requirements. The first part of the research shows how the results of an optimization model can be integrated with the decisions made within a simulation model. The problem addressed is defined in terms of four hierarchical objectives: minimize the weighted sum of key device shortages, maximize weighted throughput, minimize the number of machines used, and minimize makespan, all for a given set of lots in queue and a set of resources that includes machines and tooling. The facility can be viewed as a reentrant flow shop. The basic simulation was written in AutoSched AP (ASAP) and then enhanced with the customization features available in the software. Several new dispatch rules were developed. Rule_First_setup initializes the simulation with the setups obtained with the GRASP. Rule_All_setups enables a machine to select the setup provided by the optimization solution whenever a decision is about to be made on which setup to choose subsequent to the initial setup. Rule_Hotlot was proposed to prioritize the processing of hot lots that contain key devices.

The objective of the second part of the research is to design and implement heuristics within the simulation model to schedule back-end operations in a semiconductor AT facility. Rule_Setupnum lets the machines determine which key device to process according to a machine setup frequency table constructed from the GRASP solution. GRASP_asap embeds the more robust selection features of GRASP in the ASAP model through customization. This allows ASAP to explore a larger portion of the feasible region at each decision point by randomizing machine setups using adaptive probability distributions that are a function of solution quality. Rule_Greedy, a simplification of GRASP_asap, always picks the setup for a particular machine that gives the greatest marginal improvement in the objective function among all candidates.

The purpose of the third part of the research is to statistically validate the relative effectiveness of our top six dispatch rules by comparing their performance on 30 real and randomly generated data sets. Using both the GRASP and our ASAP discrete-event simulation model, we (1) identify the general order of dispatch rule performance, (2) investigate the impact on rule performance of having setups installed on the machines at time zero, (3) determine the conditions under which restricting the maximum number of changeovers affects rule performance, and (4) study the factors that might simultaneously affect rule performance with the help of a common random numbers experimental design. In the analysis, the first two objectives, weighted key device shortages and weighted throughput, are used to measure outcomes.
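As an illustration of the kind of biased-random setup selection that GRASP_asap and Rule_Greedy embody, the Python sketch below chooses a machine's next setup either greedily or with probability increasing in an estimated quality score. The function name, the quality callback, and the bias parameter are hypothetical placeholders; the actual behavior lives in the ASAP customization code and the GRASP described above.

```python
import random

def choose_setup(candidates, quality, greedy=False, bias=2.0, rng=random):
    """Pick the next setup for a machine from the candidate setups.

    quality(setup) is assumed to estimate the marginal improvement in the
    objective (e.g., fewer key-device shortages) from installing that setup
    now.  With greedy=True the best candidate is always taken, in the spirit
    of Rule_Greedy; otherwise the choice is randomized with probability
    increasing in solution quality, in the spirit of GRASP_asap."""
    scored = [(max(quality(s), 0.0), s) for s in candidates]
    if greedy:
        return max(scored, key=lambda c: c[0])[1]
    weights = [q ** bias for q, _ in scored]
    if sum(weights) == 0.0:                 # no candidate improves the objective
        return rng.choice(candidates)
    return rng.choices([s for _, s in scored], weights)[0]
```

Raising the bias exponent pushes the rule toward greedy behavior, while a bias of zero makes all candidates equally likely, which is one simple way to trade off exploiting the optimization solution against exploring the feasible region at each decision point.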