# Browsing by Subject "Mathematical optimization"

Now showing 1 - 18 of 18


## Adaptive multiscale modeling of polymeric materials using goal-oriented error estimation, Arlequin coupling, and goals algorithms (2008-05)

Bauman, Paul Thomas, 1980-; Oden, J. Tinsley (John Tinsley), 1936-

Scientific theories that explain how physical systems behave are described by mathematical models, which provide the basis for computer simulations of events that occur in the physical universe. These models, being only mathematical characterizations of actual phenomena, are subject to error because of the inherent limitations of all mathematical abstractions. In this work, new theory and methodologies are developed to quantify such modeling error in a way that resolves a fundamental, long-standing issue: multiscale modeling, the development of models of events that transcend many spatial and temporal scales. Specifically, we devise the machinery for a posteriori estimates of the relative modeling error between a model of fine scale and another of coarser scale, and we use this methodology as a general approach to multiscale problems. The target application is one of critical importance to nanomanufacturing: imprint lithography of semiconductor devices. The development of numerical methods for multiscale modeling has become one of the most important areas of computational science. Technological developments in the manufacturing of semiconductors hinge upon the ability to understand physical phenomena from the nanoscale to the microscale and beyond. Predictive simulation tools are critical to the advancement of nanomanufacturing semiconductor devices. In principle, they can displace expensive experiments and testing and optimize the design of the manufacturing process. The development of such tools rests at the edge of contemporary methods and high-performance computing capabilities and is a major open problem in computational science.
In this dissertation, a molecular model is used to simulate the deformation of polymeric materials used in the fabrication of semiconductor devices. Algorithms are described which lead to a complex molecular model of polymer materials designed to produce an etch barrier, a critical component in imprint lithography approaches to semiconductor manufacturing. Each application of this so-called polymerization process leads to one realization of a lattice-type model of the polymer, a molecular statics model of enormous size and complexity. This is referred to as the base model for analyzing the deformation of the etch barrier, a critical feature of the manufacturing process. To reduce the size and complexity of this model, a sequence of coarser surrogate models is generated. These surrogates are the multiscale models critical to the successful computer simulation of the entire manufacturing process. The surrogate involves a combination of particle models, the molecular model of the polymer, and a coarse-scale model of the polymer as a nonlinear hyperelastic material. Coefficients for the nonlinear elastic continuum model are determined using numerical experiments on representative volume elements of the polymer model. Furthermore, a simple model of initial strain is incorporated in the continuum equations to model the inherent shrinking of the polymer. A coupled particle and continuum model is constructed using a special algorithm designed to provide constraints on a region of overlap between the continuum and particle models. This coupled model is based on the so-called Arlequin method, which was introduced in the context of coupling two continuum models with differing levels of discretization. It is shown that the Arlequin problem for the particle-to-continuum model is well posed in a one-dimensional setting involving linear harmonic springs coupled with a linearly elastic continuum. Several numerical examples are presented.
Numerical experiments in three dimensions are also discussed in which the polymer model is coupled to a nonlinear elastic continuum. Error estimates in local quantities of interest are constructed in order to estimate the modeling error due to the approximation of the particle model by the coupled multiscale surrogate model. The estimates of the error are computed by solving an auxiliary adjoint, or dual, problem that incorporates as data the quantity of interest or its derivatives. The solution of the adjoint problem indicates how the error in the approximation of the polymer model influences the error in the quantity of interest. The error in the quantity of interest represents the relative error between the value of the quantity evaluated for the base model, a quantity typically unavailable or intractable, and the value of the quantity of interest provided by the multiscale surrogate model. To estimate the error in the quantity of interest, a theorem is employed that establishes that the error coincides with the value of the residual functional acting on the adjoint solution plus a higher-order remainder. For each surrogate in a sequence of surrogates generated, the residual functional acting on various approximations of the adjoint is computed. These error estimates are used to construct an adaptive algorithm whereby the model is adapted by supplying additional fine-scale data in certain subdomains in order to reduce the error in the quantity of interest. The adaptation algorithm involves partitioning the domain and selecting which subdomains are to use the particle model, which are to use the continuum model, and where the two overlap. When the algorithm identifies that a region contributes a relatively large amount to the error in the quantity of interest, that region is scheduled for refinement by switching its model to the particle model.
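The residual-on-adjoint estimate has a transparent linear-algebra analogue. The following is a purely illustrative sketch with a toy linear system standing in for the particle model (none of these matrices come from the dissertation): for a linear model A u = f and a quantity of interest Q(u) = lᵀu, the residual of a surrogate solution acting on the adjoint solution z, where Aᵀz = l, recovers the error in Q exactly, the higher-order remainder vanishing in the linear case.

```python
import numpy as np

# Toy linear "fine" model A u = f (a stand-in for the base particle model).
rng = np.random.default_rng(0)
A = np.diag(np.full(5, 2.0)) + 0.1 * rng.standard_normal((5, 5))
f = rng.standard_normal(5)
u_fine = np.linalg.solve(A, f)        # base-model solution (normally intractable)

u_surr = u_fine + 0.01 * rng.standard_normal(5)  # surrogate (coarse) solution

l = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # quantity of interest Q(u) = l @ u
z = np.linalg.solve(A.T, l)              # auxiliary adjoint (dual) problem A^T z = l

residual = f - A @ u_surr
estimate = z @ residual                  # residual functional acting on the adjoint
true_err = l @ u_fine - l @ u_surr       # error in the quantity of interest
assert np.isclose(estimate, true_err)    # exact here; a remainder appears for nonlinear models
```

The adaptive loop in the text would then compare per-subdomain contributions to `estimate` and switch the worst offenders to the fine model.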
Numerical experiments on several configurations representative of nano-features in semiconductor device fabrication demonstrate the effectiveness of the error estimate in controlling the modeling error, as well as the ability of the adaptive algorithm to reduce the error in the quantity of interest. There are two major conclusions of this study: (1) an effective and well-posed multiscale model that couples particle and continuum models can be constructed as a surrogate to molecular statics models of polymer networks, and (2) the modeling error for such systems can be estimated with sufficient accuracy to provide the basis for very effective multiscale modeling procedures. The methodology developed in this study provides a general approach to multiscale modeling. The computational procedures, computer codes, and results could provide a powerful tool for understanding, designing, and optimizing an important class of semiconductor manufacturing processes. The study in this dissertation involves all three components of the CAM graduate program requirements: Area A, Applicable Mathematics; Area B, Numerical Analysis and Scientific Computation; and Area C, Mathematical Modeling and Applications. The multiscale modeling approach developed here is based on the construction of continuum surrogates and their coupling to molecular statics models of the polymer, as well as on a posteriori estimates of error and their adaptive control. A detailed mathematical analysis is provided for the Arlequin method in the context of coupling particle and continuum models for a class of one-dimensional model problems. Algorithms are described and implemented that solve the adaptive, nonlinear problem posed by the multiscale surrogate. Large-scale, parallel computations for the base model are also shown.
Finally, detailed studies of models relevant to applications in semiconductor manufacturing are presented.

## An adaptive tabu search approach to cutting and packing problems (2003)

Harwig, John Michael; Barnes, J. Wesley

This research implements an adaptive tabu search procedure that controls the partitioning, ordering, and orientation features of a two-dimensional orthogonal packing solution, details an effective fine-grained objective function for cutting and packing problems, and presents effective move neighborhoods that find very good answers in a short period of time. Results meet or exceed all other techniques presented in the literature on a common test set across all 500 instances. These techniques extend naturally to both two-dimensional arbitrarily-shaped and three-dimensional orthogonal packing heuristics that use rules based on given partitions, orders, and orientations. Techniques to extend this research into those new areas, and methods that might improve the search, are also presented.

## An advanced tabu search approach to the airlift loading problem (2006)

Roesener, August G.; Barnes, J. Wesley

This dissertation details an algorithm to solve the Airlift Loading Problem (ALP). Given a set of cargo to be transported from an aerial port of embarkation to one or more aerial ports of debarkation, the ALP seeks to pack the cargo items onto pallets (if necessary), partition the set of cargo items into aircraft loads, select an efficient and effective set of aircraft from those available, and place the cargo in allowable positions on those aircraft. The ALP differs from most partitioning and packing problems described in the literature because, in addition to spatial constraints, factors such as allowable cabin load, balance, and temporal restrictions on cargo loading availability and cargo delivery requirements must be considered.
While classical methods would be forced to attack such problems hierarchically by solving a sequence of related subproblems, this research develops an algorithm that solves the combined problem simultaneously by employing an advanced tabu search approach.

## CAD for nanolithography and nanophotonics (2011-08)

Ding, Duo; Pan, David Z.; Chen, Ray T.; Ghosh, Joydeep; Orshansky, Michael E.; Torres, J. Andres; Touba, Nur

As the semiconductor technology roadmap extends further, the development of next-generation silicon systems becomes critically challenging. On the one hand, design and manufacturing closure becomes much more difficult due to the widening gap between increasing integration density and limited manufacturing capability; as a result, manufacturability issues become more and more critical in the design of reliable silicon systems. On the other hand, the continuous scaling of feature size imposes critical issues on traditional interconnect materials (Cu/low-k dielectrics) due to power, delay, and bandwidth concerns; as a result, multiple classes of new materials are under research and development for future technology generations. In this dissertation, we investigate several critical Computer-Aided Design (CAD) challenges under advanced nanolithography and nanophotonics technologies. In addressing these challenges, we propose systematic CAD methodologies and optimization techniques to assist the design of high-yield, high-performance integrated circuits (ICs) with low power consumption. In VLSI CAD for nanolithography, we study the manufacturing variability under resolution enhancement techniques (RETs) and explore two important topics: (1) fast, high-fidelity lithography hotspot detection; (2) generic and efficient manufacturability-aware physical design.
For the first topic, we propose a number of CAD optimization and integration techniques to achieve the following goals in detecting lithography hotspots: (a) high hotspot detection accuracy; (b) a low false-positive rate (hotspot false alarms); (c) a good capability to trade off detection accuracy against false alarms; (d) fast CPU run-time; and (e) excellent layout coverage and computational scalability as designs grow more complex. For the second topic, we explore the routing stage by incorporating post-RET manufacturability models into the mathematical formulation of a detailed router to achieve: (a) significantly fewer lithography-unfriendly patterns; (b) small CPU run-time overhead; and (c) formulation generality and compatibility with all types of RETs and evolving manufacturing conditions. In VLSI CAD for nanophotonics, we focus on three topics: (1) characterization and evaluation of standard on-chip nanophotonic devices; (2) low-power planar routing for on-chip opto-electrically interconnected systems; (3) power-efficient and thermally reliable design of nanophotonic Wavelength Division Multiplexing for ultra-high-bandwidth on-chip communication. With simulations and experiments, we demonstrate the critical role and effectiveness of Computer-Aided Design techniques as the semiconductor industry marches into the deep sub-micron (45 nm and below) domain.

## Decision-making frameworks for practical industrial applications in optimal process design and control (2021-08-16)

Costandy, Joseph Gamal Nessim; Baldea, Michael; Edgar, Thomas F.; Beaman, Joseph; Bonnecaze, Roger; Rochelle, Gary

While economics are the driving force behind many of the decisions made by industrial stakeholders, the methodologies employed to make high-level decisions often rely on heuristics that may not be quantitatively optimal.
In this dissertation, I develop optimization-based frameworks that enable quantitatively driven high-level decision-making for two problems of practical industrial significance. In the first part of the dissertation, I address the problem of deciding the operating mode (batch or continuous-flow) of a chemical process, while taking into account the fundamental differences in the natures of the two operating modes (such as the batch advantage of utilizing reactors for the manufacture of multiple products, or the batch disadvantage of reactor cleanup between campaigns), the size and cost of the respective reactor units, and the potential use of reactor networks to optimize performance. I develop a first-principles-based non-dimensionalization algorithm that unifies the model for all reactor types and chemical systems from the two operating modes, which enables direct performance comparisons between reactors of the two operating modes. In addition, I introduce a novel discretization method, orthogonal collocation on finite elements for reactors (OCFERE), that allows the consideration of networks of reactors of either of the two operating modes, and I unify the description of the economics of the two operating modes. This results in a framework in which the solution of a single optimization problem both makes the decision about operating mode and finds the optimal reactor network design. In the second part of the dissertation, I address the problem of quantifying the monetary value of improvements in process control. While methods have been developed for quantifying the value of control in the case of predominantly steady-state processes, there has been no attempt to quantify the monetary value of control for predominantly transient processes. I first review the problem and highlight the relationship between optimal scheduling and process control for transient processes.
Then, I utilize the general framework of integrated scheduling and control to develop novel performance functions that enable quantification of the monetary value of control from a scheduling perspective for a predominantly transient process. I posit that the transition time between one product and the next in a production sequence can be used as a performance metric over which the value of control can be quantified.

## Deterministic approximations in stochastic programming with applications to a class of portfolio allocation problems (2001-08)

Dokov, Steftcho Pentchev; Morton, David P.

Optimal decision making under uncertainty involves modeling stochastic systems and developing solution methods for such models. The need to incorporate randomness in many practical decision-making problems is prompted by the uncertainties associated with today's fast-paced technological environment. The complexity of the resulting models often exceeds the capabilities of commercially available optimization software, and special-purpose solution techniques are required. Three main categories of solution approaches exist for attacking a particular stochastic programming instance: large-scale mathematical programming algorithms, Monte Carlo sampling-based techniques, and deterministically valid bound-based approximations. This research contributes to the last category. First, second-order lower and upper bounds are developed on the expectation of a convex function of a random vector. Here, a "second-order bound" means that only the first and second moments of the underlying random parameters are needed to compute the bound. The vector's random components are assumed to be independent and to have bounded support contained in a hyper-rectangle. Applications to stochastic programming test problems and an analysis of numerical performance are also presented.
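For intuition about moment-based bounds, the classical first-order bracket (not the second-order bounds developed in the dissertation) is easy to verify numerically: for a convex f and a random variable with bounded support [a, b], Jensen's inequality gives a lower bound on E[f(X)] from the mean alone, and the Edmundson-Madansky bound gives an upper bound from the endpoints. A sketch with a made-up convex function:

```python
import numpy as np

def jensen_lower(f, mean):
    """Jensen: E[f(X)] >= f(E[X]) for convex f."""
    return f(mean)

def edmundson_madansky_upper(f, mean, a, b):
    """E-M: E[f(X)] <= ((b - mean) * f(a) + (mean - a) * f(b)) / (b - a)
    for convex f on bounded support [a, b]."""
    return ((b - mean) * f(a) + (mean - a) * f(b)) / (b - a)

f = lambda x: x ** 2                  # a convex stand-in for a recourse function
a, b = 0.0, 2.0
x = np.random.default_rng(1).uniform(a, b, 100_000)
mean = x.mean()

lo = jensen_lower(f, mean)
hi = edmundson_madansky_upper(f, mean, a, b)
est = f(x).mean()                     # Monte-Carlo estimate of E[f(X)]
assert lo <= est <= hi                # here roughly 1.0 <= 1.33 <= 2.0
```

Second-order bounds of the kind developed here tighten this bracket by also using second moments; higher-order bounds use more moments still.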
Second, assuming additional relevant moment information is available, higher-order upper bounds are developed. In this case the underlying random vector can have support contained in either a hyper-rectangle or a multidimensional simplex, and the random parameters can be either dependent or independent. The higher-order upper bounds form a decreasing sequence converging to the true expectation and yielding convergence of the optimal decisions. Finally, applications of the higher-order upper bounds to a class of portfolio optimization problems are presented. Mean-variance and mean-variance-skewness efficient portfolio frontiers are considered in the context of a specific portfolio allocation model as well as in general, and are connected with applications of the higher-order upper bounds in utility theory.

## An idempotent-analytic ISS small gain theorem with applications to complex process models (2002)

Potrykus, Henry George; Qin, S. Joe

In this dissertation a general nonlinear input-to-state stability (ISS) small gain theory is developed using idempotent-analytic techniques. The small gain theorem presented may be applied to system complexes, such as those arising in process modelling, and allows for the determination of a practical compact attractor in the system's state space. Thus, application of the theorem reduces the analysis of the system to one semi-local in nature. In particular, physically practical bounds on the region of operation of a complex system may be deduced. The theorem is proved within the context of the idempotent semiring K ⊂ End⊕₀(R≥0). We also show that, for linear and power-law input-to-state disturbance gain functions, the deduction of the resulting sufficient condition for input-to-state stability may be performed efficiently using any suitable dynamic programming algorithm.
We indicate, through examples, how an analysis of the (weighted, directed) graph of the system complex gives a computable means to delimit, in an easily understood form, robust input-to-state stability bounds. Applications of the theory to practical chemical engineering systems, yielding novel results, round out the work and conclude the main body of the dissertation.

## Integrated planning and budget allocation for highway maintenance, rehabilitation, and capital construction projects (2019-07-09)

France-Mensah, Jojo; O'Brien, William J.; Leite, Fernanda L.; Machemehl, Randy; Zhang, Zhanmin; Jiao, Junfeng

Highway infrastructure is one of the critical components of the infrastructure network needed for the socio-economic development of a country. However, increased urbanization, limited funds, and the need to consider sustainability continue to challenge the planning process for developing and maintaining highway infrastructure. Accordingly, decision-makers are tasked with making optimal decisions while achieving the strategic goals set by federal, state, district, and/or local highway agencies. Pivotal to making such resource allocation decisions is the availability and accuracy of asset-related data and planning constraints, which can guide data-driven decisions by State Highway Agencies (SHAs). Currently, many decision-makers still depend significantly on subjective engineering judgment to make funds-allocation decisions. Hence, there is a need for more formal and logical approaches to resource allocation, as well as evaluation metrics for conducting alternatives analysis. Notwithstanding this, the development of multiple incompatible legacy systems and the presence of several funding categories with stringent project eligibility requirements underpin a "siloed" approach to planning for highway infrastructure.
There are often multiple functional groups working on the same asset network but with heterogeneous information systems and distinct decision-making practices. This "siloed" approach can create inefficiencies in project selection and lead to inter-project conflicts among the highway projects proposed by these different functional groups. When left unaddressed, these spatial-temporal conflicts among projects can result in the misuse of limited taxpayer dollars and, ultimately, lower performance of the network. To address these issues with budget allocation and integrated highway planning, this study contributes to the body of knowledge in three primary ways. First, the study provides a synthesized analysis of budget allocation methods and a comprehensive approach to evaluating the performance of different methods employed for maintenance and rehabilitation (M&R) decision-making. Second, this study formulates and accounts for the impact of multiple funding categories and project eligibility restrictions in budget allocation models; the inclusion of this pragmatic characteristic of M&R decision-making demonstrates the inefficiencies that can result from increasing restrictions on multiple funding categories. Third, a shared ontology is developed to enable a dynamic link between planning information and project information. The resulting formalized representation (ontology) was validated using multiple approaches, including automated consistency checking, task-based evaluation, and data-driven evaluation. An implementation tool was also developed and applied to an actual case study problem. The tool was validated using a charrette test and feedback from subject-matter experts.

## Investigating the use of tabu search to find near-optimal solutions in multiclassifier systems (2003)

Korycinski, Donna Kay; Crawford, Melba M.; Barnes, J. Wesley

Binary trees provide an ideal framework for many decision problems due to their logical, understandable structures and the computational advantages of the "divide and conquer" paradigm. They can be particularly advantageous for classification applications, which involve the categorization of information into groups that are in some sense homogeneous. The algorithms used to construct decision trees for classification problems are typically greedy. A new algorithm was developed in this study which incorporates Tabu Search (TS) in the feature-selection aspect of hierarchical classification trees. Specifically, it is implemented within the hierarchical classification framework of the Binary Hierarchical Classifier (BHC), which has been shown to be advantageous for classification problems with a large number of output classes. The algorithm incorporates feature selection as a means of reducing input-space and classifier complexity for a static tree; the algorithm was also extended and coupled with the BHC to allow TS feature selection to aid in building the class hierarchy. Finally, a new algorithm was developed which uses TS in the rearrangement of the nodes of a binary classification tree. Since highly accurate classification algorithms are vital in fields such as medical diagnosis, character recognition, target detection, and land-cover mapping, the primary goal of this research is to attain improved classification accuracies.

## Join optimization in a compiled OPS5 environment (1989)

Lofaso, Bernie Joseph; Miranker, Daniel P.

The purpose of this thesis has been to determine the merits of incorporating a join ordering scheme in the compilation of OPS5 programs. We have constructed a compiler which uses the TREAT match algorithm and run benchmarks using three ordering schemes.
Initial results based on simple hand-timings showed little advantage from ordering in most cases, but one benchmark showed dramatic improvement. Subsequent analysis revealed that improvements to the join phase were masked by implementation problems with the compiled system. Further analysis concentrating solely on the join phase revealed that the ordering methods did improve the performance of the join phase by significant amounts in four of the six benchmark programs. A more detailed analysis of three benchmarks (including the two which showed no improvement) was carried out. For the benchmarks showing no improvement, it was determined that these particular runs, and perhaps the rule systems they pertain to, are not amenable to the type of static join ordering that we have performed.

## Numerical methods for d-parametric nonlinear programming with chemical process control and optimization applications (2005)

Hale, Elaine Thompson, 1978-; Qin, S. Joe

The problem of optimization often arises whenever we want to influence a complex system, because we usually want our impact on such a system to be the best it can be in some particular sense. Such complex optimization problems are often accompanied by another problem, however: uncertainty. Specifically, there may be uncertainty in the structure of our mathematical model, uncertainty in various physical constants in our model, or uncertainty in the weighting of various conflicting objectives. The purpose of this dissertation is to develop one particular approach to optimization under uncertainty: parametric nonlinear programming (pNLP). Nonlinear programming is the optimization of a scalar objective function subject to a finite number of equality and inequality constraints over some finite number of variables u ∈ R^n:

    min_u f(u)  s.t.  h(u) = 0,  g(u) ≤ 0.    (1)
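The nonlinear program above can be handed directly to an off-the-shelf NLP solver. A minimal illustration with SciPy's SLSQP on a made-up f, h, and g (my own example, not a problem from the dissertation):

```python
import numpy as np
from scipy.optimize import minimize

# min_u f(u)  s.t.  h(u) = 0,  g(u) <= 0
f = lambda u: (u[0] - 1) ** 2 + (u[1] - 2) ** 2
h = lambda u: u[0] + u[1] - 2          # equality constraint h(u) = 0
g = lambda u: u[0] - 1.5               # inequality constraint g(u) <= 0

res = minimize(
    f, x0=np.zeros(2), method="SLSQP",
    constraints=[{"type": "eq", "fun": h},
                 {"type": "ineq", "fun": lambda u: -g(u)}],  # SciPy expects fun(u) >= 0
)
assert res.success
# Closest point to (1, 2) on the line u0 + u1 = 2 is (0.5, 1.5); g is inactive there.
assert np.allclose(res.x, [0.5, 1.5], atol=1e-3)
```

Parametric NLP, described next, asks how such a solution changes as parameters inside f, h, and g vary.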
Parametric nonlinear programming attempts to solve this problem as a function of a finite number of parameters α ∈ R^d:

    u*(α) = arg { min_u f(u, α)  s.t.  h(u, α) = 0,  g(u, α) ≤ 0 }.    (2)

This work approaches the pNLP problem numerically with a predictor-corrector (continuation) method. Such methods have been developed for general underdetermined nonlinear equations; such equations are derived for this problem using the Fritz John necessary conditions for optimality. Predictor-corrector methods specifically customized for the single-parameter nonlinear programming problem (1-pNLP) have already been developed quite extensively [71, 112], but a comprehensive multi-parameter method has not yet been developed. This work aims to combine some of the 1-pNLP work with Rheinboldt's and Brodzik's work on multi-dimensional predictor-correctors in order to allow for global exploration of the mpNLP problem [18, 149]. Specifically, as in the previous work on 1-pNLP, where possible the particular structure of the parametric nonlinear programming problem is exploited both to improve computation times and to yield more optimization-specific information than a general multi-parametric predictor-corrector algorithm would. The algorithm developed also improves on Brodzik's work by including auto-scaling, step-size adaptation, and, most importantly, singularity handling. Further, several verification examples and several more complex applications are presented and discussed.
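The predictor-corrector idea can be sketched in one parameter on a toy unconstrained instance (my own example, far simpler than the pNLP machinery described here): the solution at the previous parameter value serves as a zero-order predictor, and Newton iteration on the first-order optimality condition is the corrector.

```python
import numpy as np

# Toy parametric problem: f(u, a) = (u - a)**4 / 4 + u**2 / 2
grad = lambda u, a: (u - a) ** 3 + u          # first-order optimality condition
hess = lambda u, a: 3.0 * (u - a) ** 2 + 1.0  # always positive: unique minimizer

def correct(u, a, tol=1e-12):
    """Newton corrector: drive grad(u, a) to zero, starting from the predictor u."""
    for _ in range(50):
        step = grad(u, a) / hess(u, a)
        u -= step
        if abs(step) < tol:
            break
    return u

alphas = np.linspace(0.0, 2.0, 21)
u, path = 0.0, []
for a in alphas:
    u = correct(u, a)   # previous solution acts as the (zero-order) predictor
    path.append(u)

# Every point on the traced path satisfies the optimality condition
assert all(abs(grad(u, a)) < 1e-8 for u, a in zip(path, alphas))
```

Real continuation codes replace the warm start with a tangent predictor and add the step-size adaptation and singularity handling mentioned above; in the constrained case the Newton system is built from the Fritz John conditions rather than a plain gradient.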
The applications include implicit optimization, model adequacy, and multi-objective optimization.

## Optimal transit route network design problem : algorithms, implementations, and numerical results (2004-05)

Fan, Wei, 1974-; Machemehl, Randy B.

## Optimization column compression multipliers (2007-08)

Bickerff, K'Andrea Catherine, 1967-; Swartzlander, Earl E.

With delay proportional to the logarithm of the multiplier word length, column compression multipliers are the fastest multipliers. Unfortunately, since the design community has assumed that fast multiplication can only be realized through custom design and layout, column compression multipliers are often dismissed as too time-consuming and complex because of their irregular structure. This research demonstrates that an automated multiplier generation and layout process makes the column compression multiplier a viable option for application-specific CMOS products. Techniques for optimal multiplier designs are identified through analysis of the area, delay, and power characteristics of Wallace, Dadda, and Reduced Area multipliers.

## Regularity of free boundary in variational problems (2005)

Teixeira, Eduardo Vasconcelos Oliveira; Caffarelli, Luis A.

We study the existence and geometric properties of optimal configurations for a variational problem with free boundary. More specifically, we analyze a nonlinear optimization problem in heat conduction which can be described as follows: given a surface ∂D ⊂ R^n and a positive function ϕ defined on it (the temperature distribution of the body D), we want to find an optimal configuration Ω ⊃ ∂D (insulation) that minimizes the loss of heat in a stationary situation, where the amount of insulating material is prescribed. This situation also models problems in electrostatics, potential flow in fluid mechanics, and others. The quantity to be minimized, the flow of heat, is given by a monotone operator on the flux u_µ.
Mathematically speaking, let D ⊂ R^n be a given smooth bounded domain and ϕ: ∂D → R+ a positive continuous function. For each domain Ω surrounding D such that Vol(Ω \ D) = 1, we consider the potential associated with the configuration Ω, i.e., the harmonic function on Ω \ D taking boundary data u|∂D ≡ ϕ and u|∂Ω ≡ 0, and compute

    J(Ω) := ∫_{∂D} Γ(x, u_µ(x)) dσ,

where µ is the inward normal vector on ∂D and Γ is a continuous family of convex functions. Our goal is to study the existence and geometric properties of an optimal configuration for the functional J. In other words, our purpose is to study the problem:

    minimize { J(u) := ∫_{∂D} Γ(x, u_µ(x)) dσ : u: D^C → R, u = ϕ on ∂D, ∆u = 0 in {u > 0}, and Vol(supp u) = 1 }.

Among other regularity properties of an optimal configuration, we prove analyticity of the free boundary up to a small singular set. We also establish uniqueness and symmetry results when ∂D has a given symmetry. Full regularity of the free boundary is obtained under these symmetry conditions imposed on the fixed boundary.

## Robust algorithms for area and power optimization of digital integrated circuits under variability (2008-12)

Mani, Murari, 1981-; Orshansky, Michael

As device geometries shrink, the variability of process parameters becomes pronounced, resulting in a significant impact on the power and timing performance of integrated circuits. Deterministic optimization algorithms for power and area lack capabilities for handling uncertainty and may lead to over-conservative solutions. As a result, there is an increasing need for statistical algorithms that can take into account the probabilistic nature of process parameters. Statistical optimization techniques, however, suffer from the limitation of high computational complexity. The objective of this work is to develop efficient algorithms for the optimization of area and power under process variability while guaranteeing high yield.
The first half of the dissertation focuses on two design-time techniques: (i) a gate-sizing approach for area minimization under timing variability; (ii) an algorithm for total power minimization considering variability in timing and power. Design-time methods impose an overhead on each instance of the fabricated chip, since they lack the ability to react to the actual conditions on the chip. In the second half of the dissertation we develop joint design-time and post-silicon co-optimization techniques which are superior to design-time-only optimization methods. Specifically, we develop (i) a methodology for optimization of leakage power using design-time sizing and post-silicon tuning with adaptive body bias; (ii) an optimization technique to minimize the total power of a buffer chain while considering the finite nature of the adaptability afforded. The developed algorithms effectively improve on the over-conservatism of corner-based deterministic algorithms and permit us to target a specified yield level accurately. As the magnitude of variability increases, it is expected that statistical algorithms will become increasingly important in future technology generations.

## Statistical algorithms for circuit synthesis under process variation and high defect density (2007-12)

Singh, Ashish Kumar, 1981-; Orshansky, Michael

As technology scales, there is a need to develop design and optimization algorithms under various scenarios of uncertainty. These uncertainties are introduced by process variation and impact both delay and leakage. For future technologies at the end of CMOS scaling, not only process variation but also the device defect density is projected to be very high; thus, realizing error-tolerant implementations of Boolean functions with minimal redundancy overhead remains a challenging task. The dissertation is concerned with the challenges of low-power, low-area digital circuit design under high parametric variability and high defect density.
Technology mapping provides an ideal starting point for leakage reduction because of the greater structural freedom in the choice of implementation. We first describe a technology-mapping algorithm for yield enhancement that explicitly takes parameter variability into account. We then show how leakage can be reduced by accounting for its dependence on the signal state, and develop a fast gain-based technology-mapping algorithm. In some scenarios the state probabilities cannot be given as precise point values and are instead modeled as intervals. We extend the notion of mean leakage to worst-case mean leakage, defined as the sum of the maximal mean leakage of the circuit gates over the feasible probability realizations. The gain-based algorithm is generalized to optimize this proxy leakage metric by casting the problem within the framework of robust dynamic programming. Testing is performed by selecting various instance probabilities for the primary inputs that deviate from the point probabilities used to run a point-probability-based gain-based mapper. For certain test probabilities, the interval-probability-based mapper achieves lower leakage than the point-probability-based mapper. Next, we present techniques based on coding theory for implementing Boolean functions on highly defective fabrics, allowing errors to be tolerated up to a certain degree. The novelty of this work is that the structure of the Boolean functions is exploited to minimize the redundancy overhead. Finally, we propose an efficient analysis approach for statistical timing that correctly propagates the slope in path-based statistical timing analysis and scales to one million paths.

Item Stochastic network interdiction: models and methods (2005) Pan, Feng; Morton, David P.

We develop stochastic network interdiction models and associated solution methods.
In its simplest form, our model consists of a smuggler who wishes to traverse a network from an origin to a destination without being detected. Probabilities associated with the indigenous transportation network specify the likelihood that a smuggler can traverse each arc undetected. By installing a detector on an arc, we can decrease that probability. The decision-making problem is to select arcs to receive detectors, subject to budget and policy constraints. The goal is to minimize the probability that a smuggler evades detection when the smuggler's origin-destination pair is known only through a probability distribution. The model has two stages: first we install detectors; then the random origin-destination pair of the smuggler is revealed, and the smuggler selects a maximum-reliability path knowing the detector locations and detection probabilities. When detectors can only be installed on the "boundary" of the network, we show that the model reduces to an interdiction problem on a simpler bipartite network. In other variants of the model, the smuggler has partial information on detector locations and may perceive the detection probabilities differently than the interdictor does. These models are cast as stochastic mixed-integer programs, and their complexity is investigated. Our solution procedure includes scenario reduction, other preprocessing techniques, and decomposition methods, all exploiting special structures in our stochastic network interdiction problems. We further enhance the solution procedures by developing a class of valid inequalities that tighten the integer-programming formulation. This work is motivated by the Second Line of Defense (SLD) program, a cooperative effort between the US Department of Energy and the Russian Federation State Customs Committee.
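The smuggler's second-stage problem described above — choosing a maximum-reliability path once detector locations are fixed — reduces to a classical shortest-path computation under a −log transform, since maximizing a product of arc probabilities in (0, 1] is equivalent to minimizing the sum of their negative logarithms. A minimal sketch; the network and evasion probabilities below are illustrative placeholders, not data from the dissertation:

```python
import heapq
import math

def max_reliability_path(arcs, origin, dest):
    """Find the path maximizing the product of per-arc evasion probabilities.

    arcs: dict mapping (u, v) -> probability of traversing that arc undetected.
    Dijkstra's algorithm is run on weights -log(p), which are nonnegative.
    """
    graph = {}
    for (u, v), p in arcs.items():
        graph.setdefault(u, []).append((v, -math.log(p)))
    dist = {origin: 0.0}   # shortest -log distance found so far
    prev = {}              # predecessor on the best path
    pq = [(0.0, origin)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dest:
            break
        if d > dist.get(u, math.inf):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the path and recover the evasion probability
    path, node = [dest], dest
    while node != origin:
        node = prev[node]
        path.append(node)
    return path[::-1], math.exp(-dist[dest])

# Installing a detector on an arc lowers its evasion probability; the
# interdictor's first-stage problem is to place detectors so as to
# minimize the smuggler's best achievable reliability.
arcs = {('s', 'a'): 0.9, ('a', 't'): 0.8, ('s', 'b'): 0.95, ('b', 't'): 0.7}
path, reliability = max_reliability_path(arcs, 's', 't')
```

Here the smuggler prefers s → a → t (reliability 0.9 × 0.8 = 0.72) over s → b → t (0.95 × 0.7 = 0.665); placing a detector on arc (a, t) would change that comparison.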
SLD's primary goal is to minimize the risk of illicit trafficking of nuclear materials and technologies through detection and deterrence by enhancing border detection capabilities.

Item A tabu search approach to the strategic airlift problem (2004) Lambert, Garrett Randall; Barnes, J. Wesley

Air Mobility Command (AMC) planners currently use simulation systems or large-scale linear programming (LP) models to study the strategic airlift problem. Simulations are descriptive in nature and therefore cannot prescribe optimal flight schedules. Aggregation is used in large-scale LP models to make the problem tractable, so much operational-level detail is lost. AMC planners need a tool that prescribes good solutions while maintaining the operational-level detail necessary to produce flight schedules. This research outlines a robust algorithm that obtains excellent solutions to the strategic airlift problem while retaining the operational-level detail necessary for AMC planners to develop detailed routing and scheduling of strategic airlift aircraft. The algorithm utilizes the tabu search methodology.
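A tabu search of the general kind the abstract describes can be sketched as follows; the move structure, tabu tenure, and toy objective here are illustrative placeholders, not AMC's actual airlift scheduling model:

```python
import random

def tabu_search(objective, initial, neighbors, tenure=5, iters=100):
    """Generic tabu search: repeatedly move to the best neighboring
    solution, forbid recently applied moves for `tenure` iterations,
    and keep the best solution seen overall."""
    current = initial
    best, best_val = current, objective(current)
    tabu = {}  # move -> iteration at which it stops being tabu
    for it in range(iters):
        candidates = []
        for move, sol in neighbors(current):
            val = objective(sol)
            # Aspiration criterion: a tabu move is still allowed
            # if it improves on the best solution found so far.
            if tabu.get(move, -1) <= it or val < best_val:
                candidates.append((val, move, sol))
        if not candidates:
            break
        val, move, current = min(candidates)
        tabu[move] = it + tenure
        if val < best_val:
            best, best_val = current, val
    return best, best_val

# Toy stand-in objective: position 5 "sorties" so that sortie t sits at
# time t; cost penalizes squared deviation from that target schedule.
def cost(x):
    return sum((xi - t) ** 2 for t, xi in enumerate(x))

def shift_neighbors(x):
    # A move shifts one position up or down by one time unit.
    for i in range(len(x)):
        for d in (-1, 1):
            y = list(x)
            y[i] += d
            yield (i, d), y

random.seed(0)
start = [random.randint(0, 6) for _ in range(5)]
best, val = tabu_search(cost, start, shift_neighbors, tenure=3, iters=200)
```

The tabu list lets the search escape local optima by temporarily forbidding the reversal of recent moves, while the aspiration criterion ensures a genuinely improving move is never blocked; on this convex toy objective the search reaches the unique optimum [0, 1, 2, 3, 4].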