UT Electronic Theses and Dissertations
Permanent URI for this community: https://hdl.handle.net/2152/4
This collection contains University of Texas at Austin electronic theses and dissertations (ETDs). The collection includes ETDs primarily from 2001 to the present. Some pre-2001 theses and dissertations have been digitized and added to this collection, but those are uncommon. The library catalog is the most comprehensive list of UT Austin theses and dissertations.
Since 2010, the Office of Graduate Studies at UT Austin has required all theses and dissertations to be made publicly available in Texas ScholarWorks; however, authors are able to request an embargo of up to seven years. Embargoed ETDs will not show up in this collection. Most of the ETDs in this collection are freely accessible to all users, but some pre-2010 works require a current UT EID at point of use. Please see the FAQs for more information. If you have a question about the availability of a specific ETD, please contact tsw@utlists.utexas.edu.
Some items in this collection may contain offensive images or text. The University of Texas Libraries is committed to maintaining an accurate and authentic scholarly and historic record. An authentic record is essential for understanding our past and informing the present. In order to preserve the authenticity of the historical record we will not honor requests to redact content, correct errors, or otherwise remove content, except in cases where there are legal concerns (e.g. potential copyright infringement, inclusion of HIPAA/FERPA protected information or Social Security Numbers) or evidence of a clear and imminent threat to personal safety or well-being. This policy is in keeping with the American Library Association code of ethics to resist efforts to censor library resources, and the Society of American Archivists code of ethics that states "archivists may not willfully alter, manipulate, or destroy data or records to conceal facts or distort evidence."
Authors of these ETDs have retained their copyright while granting the University of Texas Libraries the non-exclusive right to reproduce and distribute their works.
Browsing UT Electronic Theses and Dissertations by Author "Aaronson, Scott"
Item: CCZ magic (2023-04-19)
Ebrahimi, Amir Pascal; Aaronson, Scott; Soloveichik, David; Glaudell, Andrew

Universal quantum computation can be achieved with a gate set that includes a generating set of Clifford gates and at least one non-Clifford gate [1]. When it comes to classically simulating quantum computation, Clifford gates generate an important class of circuits known as stabilizer circuits, which are efficiently simulable [2]. Stabilizer simulators can be extended to support universal quantum computation by using magic state injection gadgets and magic state decomposition. The computational scaling relates directly to the exact stabilizer rank χ of the magic state, which is the minimum number of stabilizer states in superposition needed to represent the state [3]. We show that decompositions of Clifford magic states of the form |D⟩ = D|+⟩^⊗n, where D is a diagonal circuit composed of Z, CZ, and CCZ gates, have χ = 2 for n ≤ 5. We also establish that all CCZ circuits on n ≤ 5 qubits are Clifford equivalent to k ≤ 2 CCZ circuits. These results provide a new lower bound for the complexity of simulating Clifford+CCZ circuits in general. We suggest the use of a single state-injection gadget for |D⟩ in contrast to the naïve approach of multiple state-injection gadgets for individual |CCZ⟩ states. Since extended stabilizer simulators have a gadget-based method of simulation, this approach can improve classical simulation of quantum computation when using a Clifford+CCZ gate set.

Item: Challenging variants of the Collatz Conjecture (2018-10-12)
Denend, Matthew Alexander; Aaronson, Scott; Heule, Marijn, 1979-

The Collatz Conjecture (also known as the 3N + 1 problem) is simple to explain, yet proving that all positive integers following the Collatz mapping must converge to 1 has eluded mathematicians for over half a century.
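For readers unfamiliar with it, the Collatz mapping sends an even N to N/2 and an odd N to 3N + 1. A minimal Python sketch (illustrative only, not part of the thesis):

```python
def collatz_step(n: int) -> int:
    """One application of the Collatz (3N + 1) mapping."""
    return n // 2 if n % 2 == 0 else 3 * n + 1

def steps_to_one(n: int) -> int:
    """Count Collatz steps until n first reaches 1."""
    steps = 0
    while n != 1:
        n = collatz_step(n)
        steps += 1
    return steps
```

For example, starting from 27 the trajectory climbs as high as 9232 before reaching 1 after 111 steps; the conjecture asserts that this loop terminates for every positive starting value.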
Aaronson and Heule are exploring the Collatz Conjecture using an approach involving string rewrite systems: Aaronson transformed the conjecture into a string rewrite system, and Heule has been applying parallel SAT solvers to instances of this system. Similar approaches have been applied successfully to other mathematical problems. We started by looking into simpler variants of the conjecture. This thesis defines some of these variants and investigates both easily provable and very hard variants. We study the hardness of unsolved variants by computing the number of rewrite steps needed, up to 1 billion. Our hardness prediction method suggests that proving termination of the challenging variants should be considerably easier than solving the original conjecture.

Item: Chemical richness from abstract physical parts (2020-09-11)
Breik, Keenan Souhail; Soloveichik, David; Aaronson, Scott; Plaxton, C. Greg; Seelig, George

Every living cell is a tiny, intricate chemical computer. But what aspects of chemistry can create that richness? This thesis explores and expands two abstract models of chemistry, binding networks and chemical linkages, and shows that despite their simplicity, they can exhibit complex behavior. A binding network is a minimal model driven by thermodynamics that treats a molecule as simply a bag of binding sites. We prove that natural questions about binding networks at thermodynamic equilibrium are computationally hard. To answer them quickly in practice, we develop algorithms and software that translate them to Boolean satisfiability. By introducing a kinetic framework, we then prove that binding networks allow catalysis, a powerful, inherently kinetic behavior. Chemical linkages add minimal geometric structure to binding networks using rods and joints. Out of chemical linkages, we construct ATP-like catalyzed splitting, a fueled motor, and chemo-mechanical coupling.
We show that, surprisingly, graph structure alone can explain the behavior of these machines. This provides a theoretical foundation toward understanding and mimicking the complexity of living systems.

Item: Entanglement in superconducting heterostructures, and quantum circuit simulation hardware (2021-04-27)
Ostrove, Corey I; Reichl, L. E.; La Cour, Brian; Aaronson, Scott; Potter, Andrew; Lai, Keji

We begin this dissertation by studying noise correlations in superconducting heterostructures of various geometries. In recent years there has been a resurgence of interest in the nonlocal transport properties of superconducting heterostructures due to the possibility of their serving as a source of electronic entanglement in solid-state quantum information processors. Devices designed for this purpose are called Cooper pair splitting devices. The utility of these devices as entanglement sources is known to be connected to the positivity of noise cross-correlations in spatially separated leads. In Chapter 1 we present the theoretical prerequisites for this work, outlining the scattering theory framework we adopt, based on the Bogoliubov-de Gennes equations. Within this framework we apply a methodology first introduced by Demers and by Blonder, Tinkham and Klapwijk (BTK) in the early 1980s to find the scattering matrix for our superconducting structures. The current and the local and nonlocal shot noise can all be expressed in terms of the underlying scattering processes. This framework allows us to investigate the behavior of the current and noise correlations in the structure as we change the geometry and other key system parameters such as the system size, superconducting phase difference, and temperature. We also introduce the Andreev approximation, a commonly used approximation that simplifies the scattering theory for superconducting heterostructures.
In Chapter 2, we study the local and nonlocal shot noise in a quasi-1D normal-superconducting-normal (NSN) geometry using material parameters relevant to high-T_c superconductivity. The scattering and shot noise distributions are studied in the short, intermediate, and long system-size limits, allowing us to examine the qualitative differences between these three parameter regimes. This allows us, for example, to identify the signatures of over-the-gap geometric resonances that appear in the shot noise distributions in the long system-size limit. We also break the nonlocal shot noise distributions down further and study the individual contributions to the nonlocal shot noise due to particle-particle, hole-hole, and particle-hole scattering processes. In Chapter 3, we extend our investigation of superconducting heterostructures to the more complicated NSNSN geometry. A novel feature introduced by this geometry is the presence of subgap quasibound states, which show up as resonances in the scattering matrix. We show that these quasibound states dramatically impact the nonlocal shot noise distributions in the system. At energies near the quasibound states, the dominant transmission channel through the system is a process called particle-hole transmission, which results in sharp positive peaks in the nonlocal shot noise distribution of the system. We investigate the behavior of the nonlocal noise correlations as the sizes of the superconducting and normal regions change, and find that there is a "sweet spot" in the size of the superconducting regions that maximizes the positivity of the nonlocal noise distributions, as well as periodic-like behavior of that positivity with respect to the normal region size.
The results of the full scattering theory for the NSNSN geometry are compared to those obtained using the Andreev approximation, and we find that the Andreev approximation breaks down at energies close to the quasibound state energies. In the second half of this dissertation we focus on work related to the development of a prototype special-purpose quantum circuit simulation device based on commercial off-the-shelf high-speed analog signal processing hardware. In Chapter 4 we introduce the embedding scheme used to represent quantum states and quantum gates in the frequency domain of a classical analog voltage signal. Experimental results are presented from an early two-qubit prototype device for the fidelity of the state generation and gate application circuits. In Chapter 5, we perform a more in-depth investigation into the modeling of classical errors within our signal-processing-based simulation method, in terms of the effects this noise has on the results of the quantum computation being simulated. We show, for example, that additive white Gaussian noise (AWGN) in our system has the same effect as applying a depolarizing channel to the qubits in the simulation. We then perform a simulation of a simple quantum error correction (QEC) protocol using the device and show that, even in the presence of classical noise in the simulation hardware, an overall enhancement in the performance of gate operations is observed as a result of applying QEC.

Item: Exploring quantum matter in the era of quantum computers (2023-08-31)
Zhang, Yuxuan, Ph. D.; Aaronson, Scott; Potter, Andrew C.; MacDonald, Allan H; Paban, Sonia; Shankar, Shyam

The rise of quantum computing is poised to revolutionize the scientific and technological landscape, unlocking new possibilities for computational problem-solving and physical simulations beyond the reach of classical computers.
In this thesis, we delve into the latest developments in quantum computation and explore the potential and challenges of this groundbreaking technology in applications ranging from condensed matter physics to quantum supremacy demonstrations. The first section of the thesis presents a series of algorithms tailored for near-term quantum computers, focusing on simulating quantum matter and addressing optimization challenges. We examine the distinctive features of tensor network methods, highlighting their power in efficiently simulating intricate quantum systems and their potential for resolving hard materials-science problems. Subsequently, we show how these classical numerical techniques can be transformed into quantum algorithms, facilitating a quantum memory advantage in simulating diverse forms of quantum matter, including fermionic systems and thermal states. We conclude this section by introducing a physics-driven quantum algorithm tailored to combinatorial optimization problems. Our near-term algorithms are substantiated by experiments conducted on a trapped-ion quantum computer. Additionally, we offer insights into the experimental implementation of these algorithms with cavity-transmon quantum simulators. In the second part of the thesis, we explore photons as a resource for quantum computation and communication. By introducing a benchmarking tool for measurement-based photonic quantum computers, we establish a direct connection between the quality of quantum information processing and experimental imperfections in optical systems. Furthermore, to enable high-fidelity quantum communication, we propose a general framework for all-photonic one-way quantum repeaters based on measurement-based error correction, which can be adapted to any Calderbank-Shor-Steane code, including the recently discovered quantum low-density parity-check codes.
We numerically show that the new repeater scheme provides equally good performance while reducing resource requirements by orders of magnitude compared to previous works. Finally, we discuss the concept of circuit complexity and its relation to black hole physics and quantum supremacy. As a warm-up example, we study binding complexity, a special type of circuit complexity, in the context of arbitrary state preparation and unitary transformation. Next, we highlight the hardness results for random circuit sampling and the challenges faced by recent experimental demonstrations of quantum supremacy on various platforms. We conclude the thesis by proposing a solution to the verification problem for quantum supremacy experiments using peaked-circuit sampling. Overall, this thesis explores various aspects of the current state of the art in quantum computing and its potential impact on fields such as condensed matter physics, information processing, and hardware design.

Item: Exploring the relationship between Deep Neural Networks and Neural Tangent Kernel via covariance manipulation (2023-08)
Makhanov, Henry; Karch, Andreas, 1972-; Aaronson, Scott

In this study, we examine the relationship between Deep Neural Networks (DNNs) and the Neural Tangent Kernel (NTK), with a particular emphasis on covariance manipulation. Using the foundational framework provided by Roberts and Yaida in their book "The Principles of Deep Learning Theory," we methodically explore the impact of introducing a diagonal matrix ϒ on the covariance. Our work results in the derivation of equations that detail the variance of the stochastic second-layer NTK of the preactivations and the variance of the fluctuations in the stochastic second-layer metric of the preactivations. Through numerical analysis, we compare these derived variances. Our findings, while preliminary, suggest a noteworthy observation regarding the possible relationship between DNNs and NTK.
This study serves as a stepping stone to further research in deep learning theory.

Item: Inherently quantum lower bounds on computational complexity (2023-07-21)
Kretschmer, William Werner; Aaronson, Scott; Childs, Andrew M; Gál, Anna; Zuckerman, David

Computational complexity theory is usually phrased in terms of decision problems and Boolean functions, whose inputs and outputs are purely classical. However, many quantum computational tasks motivated by physics, machine learning, and cryptography involve operating on quantum information in a fundamental way. In this dissertation, we investigate the complexity of various "inherently quantum" computational tasks that have no classical analogue. In particular, we (1) introduce new tools for proving lower bounds on quantum query complexity augmented with quantum states and unitary transformations, (2) establish black-box separations in structural complexity that illustrate fundamental differences between classical and quantum randomness, and (3) construct relativized instantiations of quantum cryptographic primitives that do not rely on one-way functions. We prove these results through the framework of quantum query complexity and its generalizations.

Item: Models of streaming computation (2021-08-13)
Kallaugher, John M.; Price, Eric, Ph. D.; Aaronson, Scott; Zuckerman, David; Kapralov, Michael

Streaming algorithms, which process very large datasets received one update at a time, are a key tool in large-scale computing. This thesis studies the theory of streaming algorithms, and in particular the implications of how we model them. In constructing such a model, two questions arise:

• What kinds of streams must the algorithm handle?
• What is the algorithm allowed to do?

We describe new streaming algorithms and prove new lower bounds in various streaming models, illustrating how the answers to these questions change which kinds of streaming problems are tractable.
These include:

• New algorithms for counting triangles in the insertion-only and linear sketching models, along with tight (up to log factors) lower bounds.
• A quantum streaming algorithm for counting triangles, giving the first quantum advantage for a natural one-pass streaming problem.
• A complete characterization of the linear sketching complexity of subgraph counting problems in constant-degree graphs.
• An exponential separation between linear sketching and turnstile streaming under natural restrictions on the stream, complementing a prior series of works [Gan08, LNW14, AHLW16] that establish equivalences between these models in the absence of such restrictions.
• A new model of random-order streaming in which some correlations are allowed between edge arrival times, along with tight (up to polylog factors) upper and lower bounds for the problem of finding components in this model.

A common theme in many of these results is the connection between sampling algorithms and lower bounds derived from the techniques of Boolean Fourier analysis. We give new methods for extending these techniques to settings such as linear sketching and random-order streaming.

Item: On computationally efficient learning for stabilizers and beyond (2023-07-20)
Liang, Daniel You; Aaronson, Scott; Arunachalam, Srinivasan; Klivans, Adam; Soloveichik, David

Artificial intelligence, big data, machine learning, neural networks: look up any recent research proposal and with good probability at least one of these phrases will appear. It's no secret that learning has taken this era of computer science by storm in our attempt to create software that performs extremely complicated tasks. Quantum mechanics is among the most accurate models of our physical world currently known, so it makes sense to think about what kinds of quantum systems can or cannot be learned.
As with many problems in quantum information and quantum computing, the simplest non-trivial versions of these problems involve the stabilizer formalism. In this dissertation, we examine learning problems centered around the stabilizer formalism in several different models, from a theoretical standpoint, using the tools of computer science and quantum information. Specifically, our focus is on computational complexity rather than sample complexity. We begin by looking at learning in the tomographic sense. Here, one has black-box access to copies of an unknown quantum state |ψ⟩ and wants to learn properties of the state or to output an approximation of |ψ⟩ outright. In this setting, [Mon17] gave an efficient learning algorithm for stabilizer states. The key algorithmic tool was Bell difference sampling, which allows one to sample from the stabilizer group of a stabilizer state. [GNW21] extended the analysis of Bell difference sampling beyond stabilizer states. Throughout Part I we turn to Bell difference sampling to improve upon learning algorithms for states with only a few (i.e., either O(log n) or strictly fewer than n, depending on context) T gates. Using symplectic Fourier analysis, the generalization of Boolean Fourier analysis to a symplectic vector space over 𝔽₂^(2n), we derive powerful tools for understanding the Bell difference sampling distribution. With these tools we first give a tolerant property testing algorithm for stabilizer states. That is, we give an algorithm that distinguishes whether a state is ε₁-close to some stabilizer state or ε₂-far from all stabilizer states, for certain parameter regimes of ε₁ and ε₂. We use our improved understanding of Bell difference sampling to improve upon the completeness and soundness analysis of the property tester given by [GNW21], which is not tolerant. A second application is stabilizer fidelity estimation and approximation.
Given a state |ψ⟩ that is O(1)-close to a stabilizer state, we output such a stabilizer state in time 2^O(n). This beats the previous 2^O(n²) brute-force search algorithm. Having such a stabilizer state also lets us determine how close |ψ⟩ is to being a stabilizer state. A third application is extending Montanaro's learning algorithm to the outputs of circuits with Clifford gates plus O(log n) non-Clifford gates. More generally, our algorithm interpolates between Montanaro's algorithm and pure-state tomography algorithms, with runtime poly(n)·exp(t), where t is the number of non-Clifford gates. This asymptotically matches the runtime of classical simulation algorithms for such circuits. A key algorithmic step in this work is the ability to "compress" the "stabilizer-ness" of a state onto a few qubits, allowing the "non-stabilizer-ness" to be brute-forced on the remaining qubits. Our final application is pseudorandomness lower bounds. Introduced by [JLS18], a pseudorandom quantum state ensemble is a set of quantum states that are computationally indistinguishable from Haar-random states. By repurposing the algorithms above, we produce a test that behaves differently when given a state produced by a Clifford + T circuit with fewer than n T gates versus a Haar-random state. We note that this is tight assuming the existence of linear-time quantum-secure one-way functions. Pivoting, we also study the stabilizer formalism in the PAC learning framework proposed by [Val84]. Here one does not have control over the measurements, but must make do regardless (within information-theoretic limits). We analyze the problem in two ways. First we show that, unlike stabilizer states, learning the associated Clifford unitaries in the proper PAC model is NP-hard. This is done by a reduction from the problem of finding a full-rank matrix in an affine subspace of matrices over 𝔽₂. The second is studying stabilizer states in the presence of noise.
We utilize the Statistical Query framework, a popular modification of the PAC learning framework that is inherently tolerant to noise. There, we also show hardness by a reduction from Learning Parities with Noise. This gives evidence that even in the PAC model, stabilizer states are hard to learn with noise.

Item: Quantum copy protection and unclonable cryptography (2023-08-25)
Liu, Jiahui, Ph. D.; Aaronson, Scott; Gál, Anna; Waters, Brent; Wu, David; Zhandry, Mark

While quantum computers are often seen as a threat to modern cryptographic systems, if honest parties and authorities make use of quantum computers, we can develop new cryptographic protocols and applications. These protocols, such as information-theoretic key exchange, unforgeable banknotes, and position-based cryptographic protocols, rely on the unclonability of quantum information and demonstrate a quantum advantage by offering classically unachievable security guarantees. Quantum copy protection, put forward by Aaronson (CCC'09), is one important example of these new capabilities. A copy protection scheme aims to prevent adversarial users from making pirate copies of a software program. We would like to achieve such a scheme by encoding a classical functionality into a quantum state so that any user with one copy of the state can run the program, but any malicious user trying to duplicate the state will fail. In this work, we investigate the feasibility of designing provably secure quantum copy protection protocols by combining classical post-quantum cryptographic tools with quantum information.

Item: Quantum meets optimization and machine learning (2023-07-31)
Zhang, Ruizhe; Moshkovitz, Dana; Aaronson, Scott; Chia, Nai-Hui; Dimakis, Georgios-Alexandros; Hunter-Jones, Nick; Price, Eric

With the advent of the quantum era, what role the quantum computer will play in optimization and machine learning becomes a natural and salient question.
The development of novel quantum computing techniques is essential to showcase the quantum advantage in these fields. At the same time, new findings in classical optimization and machine learning algorithms also have the potential to stimulate quantum computing research. In this dissertation, we explore the fascinating connections between quantum computing, optimization, and machine learning, paving the way for transformative advances in all three fields. First, on the quantum side, we present efficient quantum algorithms for fundamental problems in sampling, optimization, and quantum physics. Our results highlight the practical advantages of quantum computing in these fields. In addition, we introduce new approaches in quantum complexity theory for characterizing the quantum hardness of optimization and machine learning problems. Second, on the optimization side, we improve the efficiency of state-of-the-art classical algorithms for solving semidefinite programming (SDP), Fourier sensing, dynamic distance estimation, and related problems. Our classical results are closely intertwined with quantum computing: some serve as stepping stones to new quantum algorithms, while others are motivated by quantum applications or inspired by quantum techniques. Third, on the machine learning side, we develop fast classical and quantum algorithms for training over-parameterized neural networks with provable guarantees of convergence and generalization. Furthermore, we contribute to the security aspect of machine learning by theoretically investigating potential approaches to (classically) protect private data in collaborative machine learning and to (quantumly) protect the copyright of machine learning models.
Fourth, we investigate the concentration and discrepancy properties of hyperbolic polynomials and higher-order random walks, which could potentially be applied to quantum computing, optimization, and machine learning.

Item: Studies in holographic complexity (2021-05-06)
Couch, Josiah D.; Fischler, Willy; Aaronson, Scott; Distler, Jacques; Kilic, Can; Paban, Sonia

This dissertation presents the work I have done on the conjectured relationship between various bulk quantities designed to capture the growth of the wormhole in eternal black hole spacetimes and the circuit complexity of the boundary state, within the context of the AdS/CFT correspondence, i.e., on the topic of 'holographic complexity.' Four papers are presented here, each focused on the bulk side of this proposed relationship. In these papers, my co-authors and I seek to improve our understanding of the bulk quantities in question (the action of a causal diamond, the maximal volume) and to test the internal consistency of these proposals and their consistency with our intuition and understanding of the boundary field theory. In particular, the first of these papers focuses on properties of maximal volume slices in black hole spacetimes, along with consequences for the 'complexity = volume' conjecture. The next paper considers whether 'complexity = action' is consistent with the intuition we develop about the time evolution of the boundary circuit complexity in spacetimes dual to non-commutative field theories. The third paper deals with a possible relationship between the rate of increase of complexity and the thermodynamic volume of black hole spacetimes. Finally, the last paper deals with restrictions of the 'complexity = action' and 'complexity = volume' conjectures to boundary subregions and their corresponding entanglement wedges, and seeks to test the consistency of a conjecture relating these restrictions to the purification complexity of the reduced density matrix.