# Browsing by Subject "Computational complexity"

Now showing 1 - 8 of 8


## Efficient evolution of neural networks through complexification (2004)

Stanley, Kenneth Owen; Miikkulainen, Risto

Artificial neural networks can potentially control autonomous robots, vehicles, factories, or game players more robustly than traditional approaches. Neuroevolution, i.e. the artificial evolution of neural networks, is a method for finding the right topology and connection weights to specify the desired control behavior. The challenge for neuroevolution is that difficult tasks may require complex networks with many connections, all of which must be set to the right values. Even if a network exists that can solve the task, evolution may not be able to find it in such a high-dimensional search space. This dissertation presents the NeuroEvolution of Augmenting Topologies (NEAT) method, which makes the search for complex solutions feasible. In a process called complexification, NEAT begins by searching in a space of simple networks, and gradually makes them more complex as the search progresses. By starting minimally, NEAT is more likely to find efficient and robust solutions than neuroevolution methods that begin with large fixed or randomized topologies; by elaborating on existing solutions, it can gradually construct even highly complex solutions. In this dissertation, NEAT is first shown to be faster than traditional approaches on a challenging reinforcement learning benchmark task. Second, by building on existing structure, it is shown to maintain an "arms race" even in open-ended coevolution. Third, NEAT is used to successfully discover complex behavior in three challenging domains: the game of Go, an automobile warning system, and a real-time interactive video game.
Experimental results in these domains demonstrate that NEAT makes entirely new applications of machine learning possible.

## Exploring the computational power of chemical reaction networks and more deterministic/mechanistic models of molecular computing (2024-08)

Luchsinger, Austin; Soloveichik, David; Khurshid, Sarfraz; Garg, Vijay

Nanotechnology is one of the most important emerging fields today. It has the potential to revolutionize medicine, electronics, materials engineering, environmental science, and more. Indeed, many breakthroughs have been made through the ability to manipulate matter at the nanoscale. Within the past few decades, scientists have begun to seriously look at molecular systems as computational devices. Molecular computing systems are a particularly promising approach to nanotechnology, as they operate in wet environments (where traditional electronics typically struggle) and have the potential to easily interface with natural systems. The study of molecular computing systems has come primarily in the form of simplified models that abstract away chemical details while still capturing the essence of how these systems behave. The work of this dissertation answers questions about the computational power of some of these models. In particular, results are presented for chemical reaction networks, thermodynamic binding/affinity networks, and thermodynamic tile self-assembly. The first part of this dissertation presents a method of optimally encoding information in chemical reaction networks. Despite the fact that chemical reaction networks are not capable of universal computation, this dissertation shows that they can encode information almost as efficiently as Turing-complete systems. This demonstrates how to fully exploit the information available within chemical reaction networks. The next part of the dissertation considers models that account for the thermodynamics of these systems.
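For context on the chemical reaction network model these results concern: a well-mixed solution is abstracted as integer counts of species transformed by reactions. The following Gillespie-style stochastic simulation is a rough, self-contained illustration of that model only; the reaction set and rates are invented toy data, not a construction from the dissertation.

```python
import random

# Toy CRN: X + Y -> 2Z and Z -> X. All species names and rates are invented.
# Each reaction is (reactant counts, product counts, rate constant).
reactions = [
    ({"X": 1, "Y": 1}, {"Z": 2}, 1.0),
    ({"Z": 1}, {"X": 1}, 0.5),
]

def propensity(state, reactants, rate):
    """Mass-action propensity: rate times falling product of reactant counts."""
    p = rate
    for sp, n in reactants.items():
        for i in range(n):
            p *= max(state.get(sp, 0) - i, 0)
    return p

def gillespie(state, reactions, steps, seed=0):
    """Apply up to `steps` reactions, each chosen with probability
    proportional to its propensity; stop if nothing is applicable."""
    rng = random.Random(seed)
    state = dict(state)
    for _ in range(steps):
        props = [propensity(state, r, k) for r, _, k in reactions]
        total = sum(props)
        if total == 0:               # terminal state: no reaction can fire
            break
        pick = rng.uniform(0, total)
        for (reactants, products, _), p in zip(reactions, props):
            pick -= p
            if pick <= 0:
                for sp, n in reactants.items():
                    state[sp] -= n
                for sp, n in products.items():
                    state[sp] = state.get(sp, 0) + n
                break
    return state

final = gillespie({"X": 10, "Y": 10, "Z": 0}, reactions, steps=100)
```

Both toy reactions conserve the total molecule count, so the simulation preserves it; real CRN constructions exploit exactly this kind of invariant to encode information.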
This model of thermodynamics is simplified, assigning a favorable energy to each chemical bond and to each free molecular complex. The first result for these models shows that finding a low-barrier kinetic path is PSPACE-complete. In addition to the computational complexity analysis of low-barrier reachability, this dissertation also presents a new generalization of closed thermodynamic systems (called thermodynamic affinity networks) and uses this generalization to gain a better understanding of the computational power of a less general model (known as thermodynamic binding networks). The final part of this dissertation presents a perspective that marries computational kinetic paths to thermodynamically favorable configurations in a thermodynamic tile assembly model. This dissertation shows that, despite the thermodynamic equilibrium of any closed system being decidable, there exist undecidable questions about a system as it grows over time.

## Lower bound methods for multiparty communication complexity (2006)

Ford, Jeffrey Stephen; Gál, A. (Anna)

Multiparty communication complexity is a measure of the amount of communication required to compute a function whose input is distributed among multiple parties. Proving lower bounds in the "Number on the Forehead" model of Chandra, Furst, and Lipton has been quite difficult, but results are of importance in many areas, including time-space tradeoffs, lower bounds for VLSI and Boolean circuits, and constructions of universal traversal sequences and pseudorandom generators for space-bounded computation. Currently, the largest known lower bounds for explicit functions are of the form Ω(n/2^k), where k is the number of players and n is the number of bits missed by each player. All of the previous lower bounds of this type utilize some version of the discrepancy method, so they also apply to the distributional and randomized multiparty communication complexity models.
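To make the "Number on the Forehead" model concrete: with k = 3 players, player i sees every input bit except the one "on its own forehead". The following folklore two-bit protocol for 3-bit XOR (a standard textbook example, not a construction from this dissertation) shows how heavily the players' information overlaps, which is precisely what makes lower bounds in this model hard:

```python
# NOF model, k = 3: player i sees every input EXCEPT x_i.
def nof_xor(x1, x2, x3):
    """Compute x1 XOR x2 XOR x3 with 2 broadcast bits."""
    bits_sent = 0
    # Player 2 sees x1 and x3, and broadcasts their XOR.
    b = x1 ^ x3
    bits_sent += 1
    # Player 1 sees x2 (and x3); b ^ x2 equals x1 ^ x2 ^ x3,
    # so player 1 can announce the answer as the second bit.
    answer = b ^ x2
    bits_sent += 1
    return answer, bits_sent

ans, bits = nof_xor(1, 0, 1)   # ans == 0, bits == 2
```

By contrast, in the two-party model each player misses n of the other's bits; in NOF, every bit is known to k-1 of the k players, which is why only Ω(n/2^k)-type bounds are known.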
We introduce three new measures, tensor weight, tensor norm, and tensor rank, each of which can be used to prove lower bounds on multiparty communication complexity. We prove that the discrepancy of a multidimensional tensor can be estimated using the weight of the tensor. This measure appears to be simpler to analyze, and it gives close to optimal bounds on discrepancy. Our second measure, the norm, can be used to estimate the weight, and thus the discrepancy. We show an interesting connection between tensor norm and Lagrange multipliers. The tensor rank method is not based on the discrepancy method. We also give a variant of tensor rank that provides lower bounds on probabilistic communication complexity. We give an extension of the Hadamard property of matrices to tensors in multiple dimensions, and we present an Ω(n/2^k) lower bound on any k-party, n-bit communication problem represented by a Hadamard tensor. We also give explicit constructions of Hadamard tensors, yielding Ω(n/2^k) lower bounds for a new class of Boolean functions. We also examine all functions previously known to have Ω(n/c^k) lower bounds and show their place in a unifying framework of nearly Hadamard tensors.

## Matrix rigidity: a survey (2022-05-06)

Xie, Yangxinyu; Gál, A. (Anna)

Since Valiant introduced matrix rigidity to analyze circuit complexity, a variety of contributions to bounds on the rigidity of special candidate matrices have appeared.
In this thesis, we cover the following topics: existing explicit and semi-explicit rigidity lower bounds for various families of matrices, Paturi-Pudlák dimensions, rigid sets, connections between data structure lower bounds and rigidity lower bounds, and non-rigidity results.

## On indexing large databases for advanced data models (2001-08)

Samoladas, Vasilis; Miranker, Daniel P.

In the last decade, the relational data model has been extended in numerous ways, including geographic information systems, abstract data types and object models, constraint and temporal databases, and on-line analytical processing. We study the indexing requirements of these data models. In many cases, these requirements are fulfilled by efficient techniques for multidimensional range search. Previous techniques for multidimensional range search, such as the R-tree and its variants, are based on ad hoc assumptions about the nature of the workloads they index, and have been known to suffer from reduced scalability and robustness. We adopt an alternative approach; our study focuses on techniques that provide worst-case performance guarantees, and thus overcome these deficiencies. Indexability, proposed by Hellerstein, Koutsoupias and Papadimitriou, is a novel memory model for external memory. In indexability, the complexity of indexing is quantified by two parameters: storage redundancy and access overhead. Indexability focuses on the inherent trade-off between these two parameters. We study multidimensional range search under indexability. Our results are of two kinds: indexing schemes for various problems, and corresponding lower bounds. We develop indexing schemes for interval management, multidimensional arrays, and various types of planar range search. We derive a lower-bounds theorem for arbitrary indexing schemes, and apply it to multidimensional range search, proving most of our indexing schemes to be optimal.
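For readers unfamiliar with the problem: multidimensional (orthogonal) range search asks for all points inside an axis-aligned box. The following deliberately naive in-memory sketch (invented toy data; illustrative only) answers such queries but offers none of the worst-case external-memory guarantees that indexability-based structures provide:

```python
from bisect import bisect_left, bisect_right

class NaiveRangeIndex:
    """2-D range search: points sorted by x, then filtered by y.
    A toy baseline, not an external-memory access method."""

    def __init__(self, points):
        self.points = sorted(points)   # sort by x (ties broken by y)

    def query(self, x_lo, x_hi, y_lo, y_hi):
        """All points with x in [x_lo, x_hi] and y in [y_lo, y_hi]."""
        lo = bisect_left(self.points, (x_lo, float("-inf")))
        hi = bisect_right(self.points, (x_hi, float("inf")))
        return [(x, y) for x, y in self.points[lo:hi] if y_lo <= y <= y_hi]

idx = NaiveRangeIndex([(1, 5), (2, 3), (4, 9), (7, 1)])
hits = idx.query(1, 4, 2, 6)   # → [(1, 5), (2, 3)]
```

The x-range is located in O(log n) time, but every point in that x-slab is then scanned, which is exactly the kind of unbounded access overhead that the redundancy/overhead trade-off of indexability formalizes.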
We then apply our theoretical work to the design of access methods. We solve the long-standing open problem of an optimal external-memory priority search tree. Our structure, the EPS-tree, is based on indexability results. We also explore dynamization, and develop techniques with optimal amortized and worst-case cost. We implement our access method and evaluate it experimentally. Our experiments demonstrate that EPS-trees achieve excellent search and update performance, comparable to that of B+-trees on one-dimensional datasets. Our experiments with large datasets demonstrate the scalability and robustness of our techniques. We also affirm the relevance of the space-I/O trade-off in achieving high indexing performance. We conclude that the EPS-tree is an efficient, robust access method for a wide range of problems. Its success affirms the merits of the systematic use of redundancy, and nominates indexability as a prominent methodology.

## Structure of a firm's knowledge base and the effectiveness of technological search (2004)

Yayavaram, Sai Krishna; Fredrickson, James W.; Ahuja, Gautam

This dissertation examines the impact of the coupling that exists between the knowledge elements of a firm (i.e., the structure of the firm's knowledge base) on the firm's technological search activity. I define coupling as the decision made on how the search across two knowledge elements should be combined, and distinguish it from interdependence, which is the inherent relationship between these two elements. I ask two questions: 1) How does the structure of a firm's knowledge base impact the usefulness of a firm's inventions? 2) How does the prior structure of a firm's knowledge base affect the malleability of the knowledge base? Malleability is defined as the capacity for adaptive change. Inventions are generated when existing knowledge elements are combined in novel ways.
Given the large number of potential combinations of knowledge elements, the problem of searching for technological inventions is computationally intractable. I use the NK model to study this computationally complex problem and argue that the structure of a knowledge base can mitigate the negative effects of complexity. In the first part of the dissertation, I show through computer simulations that a structure that is nearly decomposable (i.e., high coupling within a cluster of elements along with low coupling across clusters) increases the effectiveness of search on an NK landscape. In the second part of the dissertation, I test the relationship between near decomposability in the structure of a knowledge base and technological search in the context of the worldwide semiconductor industry. I find support for the hypothesis that a nearly decomposable structure improves the search for technological inventions. Further, I also find support for the hypothesis that firms with a nearly decomposable structure are likely to undergo a larger change in their knowledge base over time.

## Techniques for analyzing the computational power of constant-depth circuits and space-bounded computation (2006)

Trifonov, Vladimir Traianov; Gál, A. (Anna)

The subject of computational complexity theory is to analyze the difficulty of solving computational problems within different models of computation. Proving lower bounds is easier in less powerful models, and proving upper bounds is easier in more powerful models. This dissertation studies techniques for analyzing the power of models of computation that are at the frontier of currently existing methods. First, we study the power of certain classes of depth-three circuits. The power of such circuits is largely not understood, and studying them under further restrictions has received a lot of attention. We prove exponential lower bounds on the size of certain depth-three circuits computing parity.
Our approach is based on relating the lower bounds to the correlation between parity and modular polynomials, and on expressing the correlation with exponential sums. We show a new expression for the exponential sum which involves a certain affine space corresponding to the polynomial. This technique gives a unified treatment and generalization of bounds, including the results of Goldmann on linear polynomials and of Cai, Green, and Thierauf on symmetric polynomials. We obtain bounds on the exponential sums for classes of polynomials of large degree and with a large number of terms, to which previous techniques did not apply. Second, we study the space complexity of undirected st-connectivity. We prove an O(log n log log n) upper bound on the space complexity of undirected st-connectivity. This improves the previous O(log^{4/3} n) bound due to Armoni et al. and is a big step towards the conjectured optimal O(log n) bound. Independently of our work, and using different techniques, Reingold recently proved the optimal bound. Interest in this question comes from the fact that undirected st-connectivity is complete for SL, a class of problems between L and NL. It has been noticed that questions in the space context tend to be easier to answer than the corresponding questions in the time context. Since understanding the power of non-determinism over determinism presents a major challenge to complexity theory, studying complexity classes between L and NL, which are the smallest natural classes capturing deterministic and non-deterministic space-bounded computation, is important.

## Topics in computational social choice theory (2022-05-02)

Li, Fu (Ph. D. in computer science); Plaxton, C. Greg; Zuckerman, David; Chawla, Shuchi; Garg, Vijay K.

Collective decision making arises in many real-life situations, such as elections, matching applicants to job openings, and partitioning students into working groups.
Over the past three decades, computer scientists have used techniques from the design and analysis of algorithms and from complexity theory to develop social choice theory from a computational perspective, leading to the new research area of computational social choice theory. Broadly speaking, this dissertation makes two kinds of contributions to this new research area. First, we provide new algorithms and hardness results for two settings. In our first setting, we revisit the classical housing markets model, in which we seek to compute a suitable reallocation of n houses to n agents, each of whom initially owns a particular house and has strict preferences over the set of n houses. In the variants of this model that we study, graph-based restrictions are imposed on the exchange of houses. More specifically, we consider two cases. In the first case, we are given a graph where the vertex set corresponds to the set of agents, and we assume that two agents can only swap houses if they are adjacent in the graph and the swap is Pareto-improving; thus, in this case, the locations of the agents remain fixed in the graph, while the houses move around. The second case is similar, except we assume that the locations of the houses remain fixed (i.e., each vertex now corresponds to a house) and the agents move around.

In our second setting, we consider a variant of the classical coalition formation game, the fractional hedonic game. Such a game can be represented by a directed weighted graph where the set of vertices corresponds to the set of players, and where the weight of an edge (i, j) from player i to player j denotes the value that player i has for player j. Given a partitioning of the players into coalitions, the utility of player i is defined to be the average value that player i assigns to the members of their coalition. We assume that there is a limit on the number of coalitions that can be formed.
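The fractional hedonic utility just described can be computed directly from the weighted graph and a partition. A minimal sketch (toy weights; following the common convention that the average is taken over the whole coalition, with a player's value for themselves taken as zero):

```python
def fhg_utility(weights, partition, i):
    """Utility of player i in a fractional hedonic game.

    weights[(i, j)] is the value player i has for player j (default 0);
    partition is a list of disjoint sets of players covering everyone.
    Convention assumed here: average over the whole coalition, v_i(i) = 0.
    """
    coalition = next(c for c in partition if i in c)
    total = sum(weights.get((i, j), 0) for j in coalition if j != i)
    return total / len(coalition)

# Invented example: player 0 values players 1 and 2; player 1 values player 0.
w = {(0, 1): 4, (0, 2): 2, (1, 0): 6}
partition = [{0, 1, 2}, {3}]
u0 = fhg_utility(w, partition, 0)   # (4 + 2) / 3 = 2.0
```

A social-welfare-maximizing partition maximizes the sum of these utilities over all players, subject to the cap on the number of coalitions.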
In this setting, we study the problem of computing a social-welfare-maximizing partition and the problem of computing a Nash stable partition. We study these two problems for a variety of different graph classes. Second, we design mechanisms without money for new games related to facility location and resource sharing. In the facility location game that we consider, a central planner plans to build a heterogeneous set of facilities (e.g., a school, a water treatment plant), and each agent reports which of these facilities they consider to be "obnoxious". The location of each agent is assumed to be fixed, and the utility of an agent is based on the minimum distance of the agent to an obnoxious facility. The goal is to design strategyproof mechanisms that maximize either the sum of the agent utilities or the minimum utility of any agent. In the resource sharing game that we consider, agents pool their computational resources in order to better accommodate fluctuations in individual demand over a sequence of rounds, and the agents specify their demand for each round at the outset. We present a group-strategyproof and non-wasteful mechanism that guarantees each agent at least half of the utility they could achieve without sharing their resources.
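The first housing-market setting described above (agents fixed at vertices, adjacent Pareto-improving swaps) admits a simple dynamics sketch: repeatedly swap along any edge where both endpoints strictly prefer each other's house. The graph, preferences, and initial assignment below are invented toy data, and this greedy loop illustrates the model only; it is not one of the dissertation's algorithms.

```python
def improving_swaps(edges, prefs, assignment):
    """Run Pareto-improving adjacent swaps until none applies.

    prefs[a] ranks houses best-first; assignment[a] is a's current house.
    Terminates because each swap strictly lowers both agents' ranks.
    """
    assignment = dict(assignment)
    changed = True
    while changed:
        changed = False
        for a, b in edges:
            ha, hb = assignment[a], assignment[b]
            # Swap only if BOTH agents strictly prefer the other's house.
            if (prefs[a].index(hb) < prefs[a].index(ha)
                    and prefs[b].index(ha) < prefs[b].index(hb)):
                assignment[a], assignment[b] = hb, ha
                changed = True
    return assignment

edges = [(0, 1), (1, 2)]                       # path graph 0 - 1 - 2
prefs = {0: [2, 1, 0], 1: [0, 1, 2], 2: [1, 2, 0]}
final = improving_swaps(edges, prefs, {0: 0, 1: 1, 2: 2})
```

In this toy run, agents 0 and 1 swap once and no further improving swap exists, even though agent 0's favorite house (house 2) sits at a non-adjacent vertex; that is exactly the kind of graph restriction the dissertation's variants study.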