Browsing by Subject "Verification"
Now showing 1 - 10 of 10
Item Algorithms for analyzing parallel computations (2017-08)
Chauhan, Himanshu; Garg, Vijay K. (Vijay Kumar), 1963-; Julien, Christine; Mittal, Neeraj; Nikolova, Evdokia; Pingali, Keshav; Reddi, Vijay Janapa

Predicate detection is a powerful technique to verify parallel programs. Verifying the correctness of programs with this technique involves two steps: first we create a partial-order-based model, called a computation, of an execution of a parallel program, and then we check all possible global states of this model against a predicate that encodes a faulty behavior. A partial order encodes many total orders, so even with one execution of the program we can reason over multiple alternate execution scenarios. This dissertation makes algorithmic contributions to predicate detection in three directions.

Enumerating all consistent global states of a computation is a fundamental problem in predicate detection. Multiple algorithms have been proposed to perform this enumeration. Among these, breadth-first search (BFS) enumeration is especially useful, as it finds an erroneous consistent global state with the least number of events possible. The traditional algorithm for BFS enumeration of consistent global states was given more than two decades ago and is still widely used; it requires space that in the worst case may be exponential in the number of processes in the computation. We give the first algorithm that performs BFS enumeration of the consistent global states of a computation in space polynomial in the number of processes.

Detecting a predicate on a computation is a hard problem in general, so devising efficient detection and analysis algorithms requires exploiting knowledge about the properties of the predicate. We present algorithms that exploit the properties of two classes of predicates, called stable and counting predicates, and provide significant reductions (exponential in many cases) in the time and space required to detect them.

The technique of computation slicing creates a compact representation, called a slice, of all global states that satisfy a class of predicates called regular predicates. We present the first distributed and online algorithm to create a slice of a computation with respect to a regular predicate. In addition, we give efficient algorithms to create slices for two important temporal logic formulas even when their underlying predicate is not regular, provided either the predicate or its negation is efficiently detectable.
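As a concrete illustration of the model this dissertation builds on, the sketch below (my own illustration, not the dissertation's polynomial-space algorithm; the event encoding and names are assumptions) checks whether a cut of a computation is a consistent global state using vector clocks.

```python
# Checking consistency of a global state (cut) with vector clocks.
# events[i][k] is the vector clock of the k-th event on process i;
# cut[i] says how many events of process i are inside the cut.

def happens_before(a, b):
    # Standard vector clock order: a -> b iff a <= b componentwise and a != b.
    return all(x <= y for x, y in zip(a, b)) and a != b

def is_consistent(events, cut):
    # A cut is consistent iff no event outside the cut happens-before an
    # event inside it; checking each process's first excluded event suffices,
    # since later events on the same process only depend on more.
    for i, k_i in enumerate(cut):
        if k_i == 0:
            continue
        last_in = events[i][k_i - 1]
        for j, k_j in enumerate(cut):
            if k_j < len(events[j]) and happens_before(events[j][k_j], last_in):
                return False
    return True

# Two processes, one message from P0's first event to P1's second event:
events = [[(1, 0), (2, 0)], [(0, 1), (1, 2)]]
assert is_consistent(events, (1, 1))       # consistent
assert not is_consistent(events, (0, 2))   # includes the receive, misses the send
```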
Item Automatic generation of coverage directives targeting signal relationships by statically analyzing RTL (2017-05-04)
Gupta, Kshitiz; Abraham, Jacob A.; McDermott, Mark

The coverage problem has been a long-standing issue in simulation-based verification: coverage metrics are required to track progress and justify the completeness of simulation vectors. This thesis presents a scalable methodology to automatically generate coverage directives that augment line coverage toward complete coverage of the RTL. The directives target ambiguity in register relationships derived by statically analyzing behavioral RTL. A Python-based tool built on the presented methodology generates implication properties for coverage. The generated properties for two cores (Amber and V-scale) are instantiated within the testbenches provided with them. Simulation results with all the test programs highlight module instances with high line coverage but uncovered properties. These properties are formally proven to be reachable, exposing coverage holes in the provided tests and demonstrating the usefulness of our process.
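To make the shape of such directives concrete, here is a hypothetical Python sketch in the spirit of the thesis tool: it emits SystemVerilog implication cover properties from mined register relationships. The relationships, clock name, and property template are invented for illustration and are not the tool's actual output format.

```python
# Emit implication cover properties for (antecedent, consequent) register
# relationships assumed to have been mined by static analysis of the RTL.

RELATIONS = [  # (antecedent_expr, consequent_expr), invented examples
    ("fetch_valid", "decode_valid"),
    ("stall", "!pc_update"),
]

def cover_property(name, clock, antecedent, consequent):
    # One-cycle implication: if the antecedent holds, cover the case where
    # the consequent holds on the following clock edge.
    return (f"{name}: cover property (@(posedge {clock}) "
            f"{antecedent} |=> {consequent});")

for idx, (ante, cons) in enumerate(RELATIONS):
    print(cover_property(f"cov_rel_{idx}", "clk", ante, cons))
```

Directives of this implication form go beyond line coverage: a line can be exercised many times without the specific antecedent/consequent pairing ever being observed.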
Item Correct low power design transformations for hardware systems (2013-08)
Viswanath, Vinod; Abraham, Jacob A.

We present a generic proof methodology to automatically prove the correctness of design transformations introduced at the Register-Transfer Level (RTL) to achieve lower power dissipation in hardware systems. We also introduce a new algorithm to reduce switching-activity power dissipation in microprocessors, and we further apply our technique in the entirely different domain of dynamic power management of Systems-on-Chip (SoCs). We demonstrate our methodology on real-life circuits.

In this thesis, we address the dual problem of transforming hardware systems at higher levels of abstraction to achieve lower power dissipation, and of reliably verifying the correctness of the aforementioned transformations. The thesis is in three parts. The first part introduces instruction-driven slicing, a new algorithm to automatically introduce RTL/system-level annotations in microprocessors to achieve lower switching power dissipation. The second part introduces dedicated rewriting, a rewriting-based generic proof methodology to automatically prove the correctness of such high-level transformations for lowering power dissipation. The third part implements dedicated rewriting in the context of dynamically managing the power dissipation of mobile and hand-held devices.

We first present instruction-driven slicing, a new technique for annotating microprocessor descriptions at the Register Transfer Level in order to achieve lower power dissipation. Our technique automatically annotates existing RTL code to optimize the circuit for lower switching-activity power, and it can be applied at the architectural level as well, achieving similar power gains. We first demonstrate the technique on architectural and RTL models of a 32-bit OpenRISC pipelined processor (OR1200), showing power gains for the SPEC2000 benchmarks. These annotations reduce power dissipation by changing the logic of the design. We further extend the technique to an out-of-order superscalar core and demonstrate power gains for the same SPEC2000 benchmarks on architectural and RTL models of PUMA, a fixed-point out-of-order PowerPC microprocessor.

We next present dedicated rewriting, a novel technique to automatically prove the correctness of low power transformations in hardware systems described at the Register Transfer Level. We guarantee the correctness of any low power transformation by providing a functional equivalence proof of the hardware design before and after the transformation. Dedicated rewriting is a highly automated deductive verification technique specially honed for proving the correctness of low power transformations. We provide a notion of equivalence and establish the equivalence proof within our dedicated rewriting system. We demonstrate the technique on a non-trivial case study: we show equivalence of a Verilog RTL implementation of a Viterbi decoder, a component of the DRM System-on-Chip (SoC), before and after the application of multiple low power transformations.

We next apply dedicated rewriting to the broader context of holistic power management of SoCs. This in turn creates a self-checking system that automatically flags conflicting constraints or rules. Our system manages power constraint rules using dedicated rewriting specially honed for dynamic power management of SoC designs. Together, this provides a common platform and representation for hardware and software constraints to cooperate seamlessly, achieving maximum platform power optimization dynamically during execution. We demonstrate the technique in multiple contexts on an SoC design of a state-of-the-art next-generation Intel smartphone platform.

Finally, we give a proof of instruction-driven slicing. We first prove that the annotations automatically introduced in the OR1200 processor preserve the original functionality of the machine, using the ACL2 theorem prover. We then establish the same proof within our dedicated rewriting system and discuss the merits of such a technique and framework. In the context of today's shrinking hardware and mobile internet devices, lowering power dissipation is a key problem, and verifying the correctness of the transformations that achieve it is usually a time-consuming affair. Automatic and reliable methods of verification that are easy to use are extremely important. In this thesis we have presented one such transformation, and a generic framework to prove the correctness of that and similar transformations. Our methodology is constructed so that it easily and seamlessly fits into the design cycle of creating complicated hardware systems, and it is general enough to be applied in the completely different context of dynamic power management of mobile and hand-held devices.
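The core obligation behind each such transformation is functional equivalence before and after. A minimal sketch, assuming a toy operand-isolation transformation (an invented example, not the dissertation's instruction-driven slicing or its ACL2 proofs), compares the two register-update functions on random stimuli:

```python
# The gated datapath must produce the same register value as the original
# on every input; here we check a bounded random sample of inputs.

import random

MASK = 0xFFFFFFFF

def original_update(enable, a, b, q):
    p = (a * b) & MASK        # multiplier recomputes every cycle
    return p if enable else q

def gated_update(enable, a, b, q):
    a_i = a if enable else 0  # operand isolation: freeze multiplier
    b_i = b if enable else 0  # inputs when the result is unused
    p = (a_i * b_i) & MASK
    return p if enable else q

# Bounded random check; a proof (as in the dissertation) covers all inputs.
for _ in range(100_000):
    en = random.choice([True, False])
    a, b, q = (random.getrandbits(32) for _ in range(3))
    assert original_update(en, a, b, q) == gated_update(en, a, b, q)
```

The transformed version saves switching activity precisely because it computes something different internally when disabled, which is why an equivalence argument over the observable state is needed at all.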
Item On the modular verification and design of firewalls (2012-08)
Bhattacharya, Hrishikesh; Gouda, Mohamed G., 1947-; Lam, Simon S.; Mok, Aloysius K.; Qiu, Lili; Garg, Vijay K.

Firewalls, packet filters placed at the boundary of a network in order to screen incoming packets of traffic (and discard any undesirable packets), are a prominent component of network security. In this dissertation, we make several contributions to the study of firewalls.

1. Current algorithms for verifying the correctness of firewall policies use O(nᵈ) space, where n is the number of rules in the firewall (several thousand) and d the number of fields in a rule (about five). We develop a fast probabilistic firewall verification algorithm, which runs in time and space O(nd) and determines whether a firewall F satisfies a property P. The algorithm is provably correct in several interesting cases -- notably, for every instance where it states that F does not satisfy P -- and the overall probability of error is extremely small, on the order of 0.005%.

2. As firewalls are often security-critical systems, it may be necessary to verify the correctness of a firewall with no possibility of error, so there is still a need for a fast deterministic firewall verifier. In this dissertation, we present a deterministic firewall verification algorithm that uses only O(nd) space.

3. In addition to correctness, optimizing firewall performance is an important issue, as slow-running firewalls can be targeted by denial-of-service attacks. We demonstrate that there is in fact a strong connection between firewall verification and the detection of redundant rules: an algorithm for one can be readily adapted to the other task. We suggest that our algorithms for firewall verification can be used for firewall optimization as well.

4. In order to help design correct and efficient firewalls, we suggest two metrics for firewall complexity, and demonstrate how to design firewalls as a battery of simple firewall modules rather than as a monolithic sequence of rules. We also demonstrate how to convert an existing monolithic firewall into a modular firewall. We propose that modular design can make firewalls easy to design and easy to understand.

Thus, this dissertation covers all stages in the life cycle of a firewall -- design, testing and verification, and analysis -- and makes contributions to the current state of the art in each of these fields.
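For intuition, here is a small sketch of first-match firewall semantics together with a sampling-based property check, loosely in the spirit of contribution 1. The rule set, fields, and property are invented, and the dissertation's O(nd) algorithm is not plain random sampling; this only illustrates the objects involved.

```python
# A firewall as an ordered list of rules; each rule has one (lo, hi)
# interval per packet field and a decision. First matching rule wins.

import random

FIREWALL = [
    ([(10, 20), (80, 80)],   "accept"),   # trusted sources to port 80
    ([(0, 255), (0, 65535)], "discard"),  # default deny
]

def decide(firewall, packet):
    for intervals, decision in firewall:
        if all(lo <= f <= hi for f, (lo, hi) in zip(packet, intervals)):
            return decision
    return "discard"

# Property P: no packet from source 0..9 is ever accepted.
def satisfies_P(packet, decision):
    return decision != "accept" if packet[0] < 10 else True

for _ in range(100_000):
    pkt = (random.randint(0, 255), random.randint(0, 65535))
    assert satisfies_P(pkt, decide(FIREWALL, pkt))
```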
Item Production and analysis of traditional and non-traditional radioxenon isotopes (2015-05)
Klingberg, Franziska Julietta; Biegalski, Steven R.; Biegalski, Kendra M.F.; Haas, Derek A.; Landsberger, Sheldon; Saey, Paul R.J.; Schneider, Erich

Radioxenon releases can originate from fission during nuclear detonations (atmospheric, underground, and underwater), from research and commercial reactors, and from medical isotope production facilities. Their impacts on atmospheric sample analysis have to be well understood to distinguish between clandestine activities and commercial operations. The global community relies on atmospheric monitoring of radioxenon, among other technologies, to monitor emissions from underground nuclear tests. The Comprehensive Nuclear-Test-Ban Treaty (CTBT) incorporates radioxenon monitoring within the International Monitoring System (IMS), with a focus on the traditional radioxenon isotopes ¹³¹ᵐXe, ¹³³ᵐXe, ¹³³Xe, and ¹³⁵Xe. To strengthen environmental monitoring for radioxenon, a method to produce high-purity radioxenon samples was developed. The University of Texas' 1.1 MW TRIGA research reactor was used for radioactive sample production via neutron activation. The reactor facilities include a pneumatic system for precise timing when irradiating samples. In order to use the pneumatic facilities, gaseous samples were encapsulated in quartz to fit into the polyethylene vials designed for the system; this method also minimizes leakage and prevents contaminants from entering the sample. Enriched, stable, isotopically pure xenon gas was irradiated with neutrons to activate it to radioxenon, yielding a complete set of radioxenon isotopes: both non-traditional (¹²⁵Xe, ¹²⁷Xe, ¹²⁹ᵐXe, ¹³⁵ᵐXe, and ¹³⁷Xe) and traditional. The samples were analyzed with a β-γ coincidence detector; the measurements of the non-traditional isotopes in an ARSA-style β-γ coincidence detector were the first of their kind. Measurements of ¹³¹ᵐXe, ¹³³ᵐXe, ¹³³Xe, and ¹³⁵Xe were used to determine the β-γ coincidence efficiency of the detector and the metastable-versus-ground-state production ratios after irradiation for ¹³³Xe and ¹³⁵Xe. Regions of interest (ROI) were defined for ¹²⁵Xe, ¹²⁷Xe, ¹²⁹ᵐXe, and ¹³⁷Xe to estimate their interference with the traditional isotopes. The newly defined ROIs aid in distinguishing between radioxenon signatures originating from fission and those mainly originating from neutron activation, thus advancing atmospheric sample analysis in the context of CTBT verification.
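For rough intuition about the production step, a back-of-the-envelope activation estimate can be made with the standard textbook formula A = Nσφ(1 - e^(-λt_irr))·e^(-λt_cool). The sketch below is mine, and every numeric input is a placeholder, not a value from the dissertation.

```python
# Textbook neutron-activation estimate: atoms * cross section * flux,
# scaled by saturation during irradiation and decay during cooling.

import math

N_A = 6.022e23  # Avogadro's number [1/mol]

def activity(mass_g, molar_mass, sigma_cm2, flux, half_life_s,
             t_irr_s, t_cool_s):
    n_atoms = mass_g / molar_mass * N_A
    lam = math.log(2) / half_life_s
    saturation = 1.0 - math.exp(-lam * t_irr_s)
    decay = math.exp(-lam * t_cool_s)
    return n_atoms * sigma_cm2 * flux * saturation * decay  # [Bq]

# e.g. activation toward ¹³³Xe (half-life about 5.25 d); cross section,
# flux, and sample mass below are illustrative placeholders only.
print(activity(mass_g=1e-3, molar_mass=132.0, sigma_cm2=0.45e-24,
               flux=1e12, half_life_s=5.25 * 86400,
               t_irr_s=3600, t_cool_s=86400))
```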
Item Proof engineering for large-scale verification projects (2019-09-17)
Celik, Ahmet; Gligoric, Milos; Pingali, Keshav; Rossbach, Christopher J; Khurshid, Sarfraz

Software controls many aspects of our daily lives; thus, software correctness is of utmost importance. One way to develop correct-by-construction software is to use proof assistants, i.e., to write machine-checked proofs of correctness at the level of executable code. Although the guarantees obtained via such development are highly desirable, proof assistants are not currently well adapted to large-scale software development and are expensive to use in terms of both time and expertise. In particular, the productivity of proof engineers is lowered by inadequate interfaces, processes, and tool support, which may not hinder lone expert users but become serious problems in large-scale projects with many contributors. This dissertation shows that research in traditional software engineering can considerably improve productivity in large-scale verification projects that use proof assistants and can facilitate the proliferation of formally verified software. Specifically, inspired by software engineering research on regression testing, software evolution, and mutation testing, this dissertation presents three main bodies of research aimed at speeding up proof checking and helping proof engineers evaluate the quality of their verification projects.

First, this dissertation introduces regression proof selection, a technique that tracks fine-grained dependencies between Coq definitions, propositions, and proofs, and only checks those proofs affected by changes between two revisions. We instantiate the technique in a tool dubbed iCoq. We applied iCoq to track dependencies across many revisions of several large Coq projects and measured the time savings compared to proof checking from scratch and to Coq's timestamp-based toolchain for incremental proof checking. Our results show that proof checking with iCoq is up to 10 times faster than the former and up to 3 times faster than the latter.

Second, this dissertation describes the design and implementation of piCoq, a set of techniques that blend the power of parallel proof checking and proof selection. piCoq can track dependencies between files, definitions, and lemmas, and perform parallel checking of only those files or proofs affected by changes between two project revisions. We applied piCoq to perform regression proving over many revisions of several large open-source projects. Our results indicate that proof-level parallelism combined with proof selection is consistently much faster than both sequential checking from scratch and sequential checking with proof selection: 4-way parallelization is up to 28.6 times faster than the former and up to 2.8 times faster than the latter.

Third, this dissertation introduces mutation proving, a technique for analyzing the quality of verification projects that use proof assistants. We implemented our technique for the Coq proof assistant in a tool dubbed mCoq. mCoq applies a set of mutation operators to Coq functions and datatypes, and then checks the proofs of lemmas affected by each operator application. We applied mCoq to several medium and large scale Coq projects and recorded whether proofs passed or failed under different mutation operators. We then qualitatively analyzed the failed proofs and found several examples of weak and incomplete specifications.
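The regression proof selection idea can be sketched as reachability over a dependency graph. The toy reconstruction below is my illustration, not iCoq's implementation; the graph and all names are invented. It selects only the proofs transitively affected by a changed definition.

```python
# Fixpoint over a (proof -> dependencies) map: a proof is affected if it
# depends on a changed definition or on another affected proof.

DEPS = {  # lemma -> the definitions and lemmas it depends on (invented)
    "lemma_app_assoc": {"app"},
    "lemma_rev_rev":   {"rev", "app", "lemma_app_assoc"},
    "lemma_len_map":   {"map", "length"},
}

def affected_proofs(deps, changed):
    affected = set(changed)
    grew = True
    while grew:
        grew = False
        for proof, uses in deps.items():
            if proof not in affected and uses & affected:
                affected.add(proof)
                grew = True
    return {p for p in affected if p in deps}

print(affected_proofs(DEPS, changed={"app"}))
# affected: lemma_app_assoc and lemma_rev_rev; lemma_len_map is skipped
```

Everything outside the affected set keeps its previous checking result, which is where the reported speedups over from-scratch checking come from.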
Item Systems engineering processes for a student-based design laboratory (2009-12)
Garner, Michael Dax; Bishop, Robert H., 1957-; Guerra, Lisa A.

A student-based university environment for engineering design and development is much different from a product development environment within the aerospace industry; therefore, a different approach to systems engineering should be considered. By its very nature, a university product development laboratory thrives on creativity and rejects bureaucracy, yet experience shows that continuity and discipline within a project are crucial for success. The practice of systems engineering enables technical project discipline. Systems engineering is the art and science of developing an operable system that meets requirements within imposed constraints. The purpose of this thesis is to describe the systems engineering processes and techniques necessary for a student-based project, and to show explicitly how to implement these processes. Although attempts have been made to utilize a few systems engineering techniques in past projects, many students did not properly and consistently apply those techniques to the technical design work. The goal of the thesis is to tailor the NASA systems engineering processes to a student-based design laboratory environment and to apply the methodologies to the mission design of Paradox. The Picosatellite for Autonomous Rendezvous and Docking on-Orbit eXperiment, or Paradox, is the second of four missions to demonstrate autonomous rendezvous and docking with a picosatellite-class satellite. A strong technical contribution highlighted within the thesis is the development of an open-architecture rendezvous targeting algorithm for the Paradox mission in the face of large mission architecture uncertainties. The robust targeting algorithm builds on previous work utilizing an optimizer based on the Clohessy-Wiltshire equations and an iterative Lambert targeter, and extends the rendezvous transfer times by including a multi-revolution Lambert targeter. The rendezvous algorithm performs successfully given any launch vehicle and target spacecraft, supporting the notion of an open architecture to satisfy the mission. The development of the algorithm is embedded within the context of the systems engineering processes to clearly showcase the intimate connection between systems engineering processes and the technical engineering design of a mission.

Item Using the jump number problem to efficiently detect global predicates in distributed systems (2017-05)
Clinton, Trokon Edward; Garg, Vijay K. (Vijay Kumar), 1963-

Detecting global predicates of a distributed computation is a key problem in testing and debugging distributed programs. It consists of searching the global state space of events to determine whether a given predicate could have occurred. For example, a programmer may be interested in verifying whether a parallel program violates a global invariant, or in detecting a race condition between concurrent threads. This is a challenging problem because the number of consistent global states can grow exponentially as the number of events in the computation increases. This paper presents techniques that tackle the state explosion problem and help detect whether an arbitrary predicate is true in polynomial time. We first present a brute-force algorithm, and then improve its performance with exact and heuristic algorithms inspired by the jump number problem.
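As a baseline for what the report improves on, here is a brute-force BFS over consistent global states: it finds a predicate witness with the fewest events, but may visit exponentially many states. This is my sketch of the baseline, not the report's jump-number-based algorithms; the event encoding is the same vector clock convention assumed earlier.

```python
# Brute-force BFS over consistent global states (illustration only).
# events[i][k] is the vector clock of the k-th event on process i.

from collections import deque

def enabled(events, cut, i):
    # The next event on process i may join the cut iff every event it
    # depends on (per its vector clock) is already inside the cut.
    vc = events[i][cut[i]]
    return all(vc[j] <= cut[j] for j in range(len(cut)) if j != i)

def bfs_detect(events, predicate):
    n = len(events)
    start = tuple([0] * n)                  # empty cut
    seen, frontier = {start}, deque([start])
    while frontier:
        cut = frontier.popleft()
        if predicate(cut):
            return cut                      # BFS: fewest-events witness
        for i in range(n):
            if cut[i] < len(events[i]) and enabled(events, cut, i):
                nxt = cut[:i] + (cut[i] + 1,) + cut[i + 1:]
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return None
```

Predicate detection instantiates `predicate` with the encoded faulty condition; the report's contribution lies in ordering and pruning this search so that the exhaustive enumeration is avoided.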
Item Verification of sequential and concurrent libraries (2010-08)
Deshmukh, Jyotirmoy Vinay; Emerson, E. Allen; Aziz, Adnan; Garg, Vijay K.; Chase, Craig M.; Khurshid, Sarfraz; Mok, Aloysius K.

The goal of this dissertation is to present new and improved techniques for fully automatic verification of sequential and concurrent software libraries. In most cases, automatic software verification is plagued by undecidability, while in many others it suffers from prohibitively high computational complexity. Model checking -- a highly successful technique used for verifying finite-state hardware circuits against logical specifications -- has been less widely adapted for software, as software verification tends to involve reasoning about potentially infinite state spaces. Two of the biggest culprits responsible for making software model checking hard are heap-allocated data structures and concurrency.

In the first part of this dissertation, we study the problem of verifying shape properties of sequential data structure libraries. Such libraries are implemented as collections of methods that manipulate the underlying data structure. Examples of such methods include: methods to insert, delete, and update data values of nodes in linked lists, binary trees, and directed acyclic graphs; methods to reverse linked lists; and methods to rotate balanced trees. Well-written methods are accompanied by documentation that specifies the observational behavior of these methods in terms of pre/post-conditions. A pre-condition φ for a method M characterizes the state of a data structure before the method acts on it, and the post-condition ψ characterizes the state of the data structure after the method has terminated. In a certain sense, we can view the method as a function that operates on an input data structure, producing an output data structure. Examples of such pre/post-conditions include shape properties such as acyclicity, sortedness, tree-ness, reachability of particular data values, and reachability of pointer values, as well as data-structure-specific properties such as "no red node has a red child" and "there is no node with data value 'a' in the data structure". Moreover, methods are often expected not to violate certain safety properties, such as the absence of dangling pointers, null pointer dereferences, and memory leaks. We assume such specifications as implicit, and say that a method is incorrect if it violates them. We model data structures as directed graphs, and use the two terms interchangeably. Verifying the correctness of methods operating on graphs is an instance of the parameterized verification problem: for every input graph that satisfies φ, we wish to ensure that the corresponding output graph satisfies ψ. Control structures such as loops and recursion allow an arbitrary method to simulate a Turing machine; hence, the parameterized verification problem for arbitrary methods is undecidable. One of the main contributions of this dissertation is in identifying mathematical conditions on a programming language fragment for which parameterized verification is not only decidable, but also efficient from a complexity perspective.

The decidable fragment we consider can be broadly subdivided into two categories: the class of iterative methods, which use loops as a control flow construct to traverse a data structure, and the class of recursive methods, which use recursion to traverse the data structure. We show that for an iterative method operating on a directed graph, if the number of destructive updates the method performs is bounded by a constant (i.e., O(1)) and the method is guaranteed to terminate, then the correctness of the method can be checked in time polynomial in the size of the method and its specifications. Further, we provide a well-defined syntactic fragment for recursive methods operating on tree-like data structures, which assures that any method in this fragment can be verified in time polynomial in the size of the method and its specifications. Our approach draws on the theory of tree automata: we show that parameterized correctness can be reduced to the emptiness of finite-state, nondeterministic tree automata that operate on infinite trees. We then leverage efficient algorithms for checking the emptiness of such tree automata to obtain a tractable verification framework. Our prototype tool demonstrates the low theoretical complexity of our technique by efficiently verifying common methods that operate on data structures.

In the second part of the dissertation, we tackle another obstacle to tractable software verification: concurrency. In particular, we explore the application of a static analysis technique based on interprocedural dataflow analysis to predict and document deadlocks in concurrent libraries, and to analyze deadlocks in clients that use such libraries. The kinds of deadlocks we focus on result from circular dependencies in the acquisition of shared resources (such as locks). Well-written applications that use several locks implicitly assume a certain partial order in which locks are acquired by threads; a cycle in the lock acquisition order is an indicator of a possible deadlock within the application. Methods in object-oriented concurrent libraries often encapsulate internal synchronization details. As a result of this information hiding, clients calling the library methods may cause thread-safety violations by invoking methods in a manner that violates the partial ordering between lock acquisitions that is implicit within the library. Given a concurrent library, we present a technique for inferring interface contracts that specify permissible concurrent method calls and patterns of aliasing among method arguments that guarantee deadlock-free execution of the library's methods. The contracts also help client developers by documenting required assumptions about the library methods. Alternatively, the contracts can be statically enforced in the client code to detect potential deadlocks in the client. Our technique combines static analysis with a symbolic encoding for tracking lock dependencies, allowing us to synthesize contracts using a satisfiability modulo theories (SMT) solver. Additionally, we investigate extensions of our technique to reason about deadlocks in libraries that employ signalling primitives such as wait-notify for cooperative synchronization. We demonstrate its scalability and efficiency with a prototype tool that analyzed over a million lines of code from some widely used open-source Java libraries in less than 50 minutes. Furthermore, the contracts inferred by our approach have been able to pinpoint real bugs, i.e., deadlocks that have been reported by users of these libraries.
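The lock-order idea in the second part can be illustrated compactly: record which locks each method acquires while holding which others, build the lock-order graph, and look for a cycle. This toy sketch is my reconstruction of the underlying check, not the dissertation's SMT-based contract inference.

```python
# Toy lock-order analysis: a cycle in the (held -> acquired) graph
# indicates a possible deadlock between concurrent callers.

LOCK_ORDERS = {                  # method -> (held, acquired) pairs (invented)
    "transfer_ab": [("A", "B")],
    "transfer_ba": [("B", "A")],
    "audit":       [("L", "A")],
}

def has_deadlock_cycle(lock_orders):
    graph = {}
    for pairs in lock_orders.values():
        for held, acquired in pairs:
            graph.setdefault(held, set()).add(acquired)

    def reaches(src, dst, seen):
        if src == dst:
            return True
        seen.add(src)
        return any(reaches(n, dst, seen)
                   for n in graph.get(src, ()) if n not in seen)

    # An edge (u, v) lies on a cycle iff v can reach u back.
    return any(reaches(v, u, set())
               for u, vs in graph.items() for v in vs)

print(has_deadlock_cycle(LOCK_ORDERS))  # True: A -> B and B -> A conflict
```

A contract in the dissertation's sense would forbid the call patterns and aliasing that create such back edges, so that clients composing library methods cannot close the cycle.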
Item Verification of successive convexification algorithm (2016-05)
Berning, Andrew Walter, Jr.; Akella, Maruthi Ram, 1972-; Acikmese, Behcet

In this report, I describe a technique which allows a non-convex optimal control problem to be expressed and solved in a convex manner. I then verify the resulting solution to ensure its physical feasibility and its optimality. The original, non-convex problem is the fuel-optimal powered landing problem with aerodynamic drag. The non-convexities present in this problem include mass depletion dynamics, aerodynamic drag, and free final time. Through the use of lossless convexification and successive convexification, this problem can be formulated as a series of iteratively solved convex problems that requires only a guess of the final time of flight. The solution's physical feasibility is verified through a nonlinear simulation built in Simulink, while its optimality is verified through the general nonlinear optimal control software GPOPS-II.
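A toy successive convexification loop conveys the iterate-linearize-solve structure. The sketch below assumes the cvxpy package and replaces the powered landing dynamics with an invented nonconvex keep-out constraint; it is an illustration of the general scheme, not the report's formulation.

```python
# Successive convexification sketch: the nonconvex constraint ||x||^2 >= 1
# is linearized about the current iterate x_k (a supporting halfspace),
# the convex subproblem is solved, and the process repeats until the
# iterates settle. A trust region keeps the linearization valid.

import numpy as np
import cvxpy as cp

target = np.array([2.0, 0.3])            # where we want to end up
x_k = np.array([1.5, 1.0])               # initial guess

for _ in range(20):
    x = cp.Variable(2)
    keep_out = 2 * x_k @ x - x_k @ x_k >= 1   # linearized ||x||^2 >= 1
    trust = cp.norm(x - x_k, 2) <= 0.5        # trust region on the step
    prob = cp.Problem(cp.Minimize(cp.sum_squares(x - target)),
                      [keep_out, trust])
    prob.solve()
    if np.linalg.norm(x.value - x_k) < 1e-6:  # iterates have converged
        break
    x_k = x.value

print(x_k)  # a point satisfying the original nonconvex constraint
```

In the report's setting the same pattern applies with the landing dynamics, drag, and mass depletion as the linearized constraints, and with the converged trajectory then re-verified against the full nonlinear model, as described above.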