Novel convex optimization techniques for circuit analysis and synthesis
Technology scaling brings about the need for computationally efficient methods for circuit analysis, optimization, and synthesis. Convex optimization is a special class of mathematical optimization problems that minimize convex functions over convex sets. For general optimization problems, finding a local minimum is often easier than finding a global optimum. Convexity ensures that a local minimum must also be a global minimum, so convex optimization problems can be solved efficiently and reliably. These techniques are widely used in the EDA community, not only because many problems can inherently be modeled as convex optimization problems, but also because convex approximations work well even for non-convex problems. In particular, this dissertation explores several problems in VLSI design by applying recent convex optimization techniques.

This dissertation first addresses equation-based analog synthesis, which finds design parameters that achieve optimal performance for a given circuit topology. The difficulty of analog synthesis in deep-submicron technology is that convex models of circuit performance functions are not accurate enough, while non-convex models are generally hard to solve. This dissertation proposes two techniques for this problem by leveraging recent advances in the semidefinite programming (SDP) relaxation of polynomial optimization. The first technique addresses the computational intractability of SDP relaxation in the polynomial optimization formulation. Modeling analog performance functions with general polynomials allows highly accurate fitting of SPICE data thanks to their inherent non-convexity and non-linearity. However, the computational complexity of solving the corresponding SDP relaxation scales exponentially with the degree of the polynomial. This dissertation studies fitting high-degree but sparse polynomials with special structure to enable an efficient subsequent optimization phase.
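As a toy illustration of the fitting step, and not the dissertation's actual procedure, the sketch below fits a sparse high-degree polynomial to synthetic samples by restricting the monomial basis to a small set of exponents (the "special structure"); all data, exponents, and coefficients are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for SPICE samples of one performance function f(x1, x2).
X = rng.uniform(0.5, 2.0, size=(200, 2))
y = 3.0 * X[:, 0] ** 4 - 2.0 * X[:, 0] * X[:, 1] ** 2 + 0.5  # unknown truth

# Sparse support: a handful of exponent pairs instead of every degree-4 term.
support = [(0, 0), (4, 0), (1, 2), (2, 1)]
A = np.column_stack([X[:, 0] ** p * X[:, 1] ** q for p, q in support])

# Least-squares fit over the sparse basis only.
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
residual = np.linalg.norm(A @ coef - y)
```

Because the true function lies in the chosen sparse basis, the fit is essentially exact here; in practice the support itself must be selected to balance fitting error against the cost of the downstream SDP relaxation.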
Results demonstrate that by handling higher-degree polynomials, the fitting error can be drastically reduced, which translates into a significant increase in the rate of constraint satisfaction. The second technique addresses the main challenge of using geometric programming (GP): the mismatch between what GP can accurately fit and the behavior of true analog performance functions. This approach fits device models as high-order monomials, defined as exponentials of polynomials in the logarithmic variables. The introduction of high-order monomials allows significant improvements in model accuracy. Using SDP relaxations, the proposed strategy obtains near-optimal global solutions efficiently.

Next, this dissertation focuses on computational improvements in power grid reduction. Analysis and simulation of power grids for large integrated circuits is computationally expensive due to the tremendous number of elements in the grid. The goal of power grid reduction is to produce a sparse approximation of the original grid, with far fewer elements, that approximately preserves the voltage-current relations, so that the simulation time of the reduced model is greatly reduced. This dissertation models the power grid by its admittance matrix, known as the graph Laplacian, and proposes a convex optimization formulation of Laplacian matrix approximation. In particular, the approach adopts L1-norm regularization to enforce sparsity in the resulting power grid and controls the approximation quality only for currents within realistic ranges. This improves on previous methods, which are overly conservative in the achievable degree of sparsity.

Next, this dissertation considers energy efficiency in hardware implementations of machine learning applications using approximate computing.
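To make the high-order monomial definition above concrete, the sketch below contrasts a standard GP monomial (a degree-1 polynomial in the log variables, exponentiated) with a higher-order one; every coefficient here is invented for illustration and is not taken from the dissertation:

```python
import math

def gp_monomial(x1, x2, c=2.0, a1=1.5, a2=-0.5):
    """Standard GP monomial c * x1**a1 * x2**a2: exp of a degree-1
    polynomial in (log x1, log x2)."""
    return c * x1 ** a1 * x2 ** a2

def high_order_monomial(x1, x2):
    """exp of a degree-2 polynomial in (log x1, log x2); the quadratic
    terms are what a standard monomial cannot express."""
    t1, t2 = math.log(x1), math.log(x2)
    p = 0.7 + 1.5 * t1 - 0.5 * t2 + 0.3 * t1 * t1 + 0.1 * t1 * t2
    return math.exp(p)
```

With the quadratic coefficients set to zero the two forms coincide, which is why high-order monomials strictly generalize the monomials GP can already handle.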
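Returning to the power grid formulation above: the sketch below builds the graph Laplacian (admittance matrix) of a toy resistive grid and shows that removing a weak edge barely perturbs node voltages for an injected current. The dissertation's reduction uses an L1-regularized convex program; plain conductance thresholding here is only a stand-in for that, and all grid values are invented:

```python
import numpy as np

def laplacian(n_nodes, edges):
    """Graph Laplacian from a list of (node_i, node_j, conductance)."""
    L = np.zeros((n_nodes, n_nodes))
    for i, j, g in edges:
        L[i, i] += g
        L[j, j] += g
        L[i, j] -= g
        L[j, i] -= g
    return L

# A 4-node ring of unit conductances plus one weak 0-2 strap.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0), (0, 2, 0.02)]
L = laplacian(4, edges)

# Ground node 3 and inject 1 A at node 0 (one "realistic" current pattern).
i_src = np.array([1.0, 0.0, 0.0])
v_full = np.linalg.solve(L[:3, :3], i_src)

# Reduced grid: drop the weak edge, keeping far fewer elements.
L_red = laplacian(4, [e for e in edges if e[2] > 0.1])
v_red = np.linalg.solve(L_red[:3, :3], i_src)

err = np.max(np.abs(v_full - v_red))  # worst-case node-voltage deviation
```

The voltage error stays well below the node voltages themselves, illustrating why a much sparser Laplacian can still approximately preserve the voltage-current relations that matter.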
Unlike existing work at the transistor, gate, or micro-architecture level, this dissertation exploits the inherent error resilience of machine learning applications at the algorithmic level using the random projection technique, also known as sketching. Sketching large matrices into lower-dimensional representations can significantly reduce the cost of computation and storage and save energy. This dissertation develops a highly energy-efficient hardware implementation of random projection by taking hardware-specific costs, e.g., latency, area, and energy, into consideration. In particular, this dissertation demonstrates that the randomness inherent in the projection matrices can be exploited to drastically reduce operand bit-widths and achieve greater savings in hardware cost.

Finally, this dissertation develops a lossy compression algorithm for saving expensive one-time-programmable memory in the context of SRAM physical unclonable functions (PUFs). PUFs are "silicon fingerprints" that generate entropy from manufacturing process variation and are used in secure key generation and device authentication. SRAM PUFs generate unique randomness from the power-up values determined by the mismatch between cross-coupled inverters. The major challenge of using SRAM PUFs, however, lies in the reliability of their responses. Error-correcting codes (ECC) are required to guarantee reliable reconstruction, and the ECC syndrome must be stored as helper data in non-volatile memory. In particular, for original silicon authentication, helper data must be stored in one-time-programmable memory, which is limited and expensive. The proposed lossy compression algorithm significantly reduces the helper data size (HDS) for the reliability mask by treating a fraction of reliable bits as unreliable.
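The sketching idea above can be illustrated with a ±1 (sign) random projection, whose matrix entries need only a single bit each, hinting at why reduced operand bit-widths are attractive in hardware. Dimensions, seed, and scaling below are arbitrary choices for illustration, not the dissertation's design:

```python
import numpy as np

rng = np.random.default_rng(1)

d, k = 1000, 200  # original and sketched dimensions
# Projection matrix of ±1 entries: 1-bit operands, scaled for norm preservation.
R = rng.choice([-1.0, 1.0], size=(k, d)) / np.sqrt(k)

x = rng.standard_normal(d)
y = rng.standard_normal(d)

# Pairwise distances are approximately preserved (Johnson-Lindenstrauss).
orig = np.linalg.norm(x - y)
sket = np.linalg.norm(R @ x - R @ y)
rel_err = abs(sket - orig) / orig
```

Because machine learning pipelines tolerate small distortions in distances and inner products, the sketched k-dimensional data can replace the d-dimensional original at a fraction of the compute, storage, and energy cost.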
Based on the observation that cell reliability information is correlated across temperature, the proposed scheme utilizes stochastic concentration theory to obtain an accurate confidence level for the average bit-error rate, which permits HDS reduction using only room-temperature characterization.
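One standard concentration tool that could back such a confidence statement is Hoeffding's inequality, sketched below to bound the average bit-error rate from n independently characterized cells. The choice of bound and all numbers are illustrative assumptions, not the dissertation's actual derivation:

```python
import math

def ber_upper_bound(p_hat, n, confidence):
    """One-sided Hoeffding bound: with probability >= confidence, the true
    average bit-error rate is below p_hat + eps, where
    exp(-2 * n * eps**2) = 1 - confidence."""
    eps = math.sqrt(math.log(1.0 / (1.0 - confidence)) / (2.0 * n))
    return p_hat + eps

# Hypothetical room-temperature characterization: 2% observed errors
# over 100,000 cell evaluations, at 1 - 1e-6 confidence.
bound = ber_upper_bound(p_hat=0.02, n=100_000, confidence=1 - 1e-6)
```

A tight, high-confidence bound on the average bit-error rate is what lets the ECC and reliability mask be sized from room-temperature data alone, instead of conservatively over-provisioning for worst-case temperature.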