Using and saving randomness
dc.contributor.advisor | Zuckerman, David I.
dc.contributor.committeeMember | Moshkovitz, Dana
dc.contributor.committeeMember | Price, Eric
dc.contributor.committeeMember | Zhou, Yuan
dc.creator | Chen, Xue, Ph. D.
dc.date.accessioned | 2018-07-30T15:38:07Z
dc.date.available | 2018-07-30T15:38:07Z
dc.date.created | 2018-05
dc.date.issued | 2018-06-21
dc.date.submitted | May 2018
dc.date.updated | 2018-07-30T15:38:07Z
dc.description.abstract | Randomness is ubiquitous and exceedingly useful in computer science. For example, in sparse recovery, randomized algorithms are more efficient and robust than their deterministic counterparts. At the same time, because random sources from the real world are often biased and defective, with limited entropy, high-quality randomness is a precious resource. This motivates the study of pseudorandomness and randomness extraction. In this thesis, we explore the role of randomness in these areas. Our research contributions broadly fall into two categories: learning structured signals and constructing pseudorandom objects. Learning a structured signal. One common task in audio signal processing is to compress an interval of observation by finding the dominating k frequencies in its Fourier transform. We study the problem of learning a Fourier-sparse signal from noisy samples, where [0, T] is the observation interval and the frequencies can be “off-grid”. Previous methods for this problem required the gap between frequencies to be above 1/T, which is necessary to robustly identify individual frequencies. We show that this gap is not necessary to recover the signal as a whole: for arbitrary k-Fourier-sparse signals under ℓ₂-bounded noise, we provide a learning algorithm with constant-factor growth of the noise and sample complexity polynomial in k and logarithmic in the bandwidth and signal-to-noise ratio. In addition, we introduce a general method to avoid a sample complexity that depends on the condition number of the signal family F and the distribution D of measurements. In particular, for any linear family F with dimension d and any distribution D over the domain of F, we show that this method yields a robust learning algorithm with O(d log d) samples.
Furthermore, we improve the sample complexity to O(d) via spectral sparsification (optimal up to a constant factor), which provides the best known result for a range of linear families such as low-degree multivariate polynomials. Next, we generalize this result to an active learning setting, where we receive a large number of unlabeled points from an unknown distribution and choose a small subset to label. We design a learning algorithm that optimizes both the number of unlabeled points and the number of labels. Pseudorandomness. We then study hash families, which have simple forms in theory and efficient implementations in practice. The size of a hash family is crucial for many applications such as derandomization. In this thesis, we study upper bounds on the size of hash families needed to fulfill their applications in various problems. We first investigate the number of hash functions needed to constitute a randomness extractor, which is equivalent to the degree of the extractor. We present a general probabilistic method that reduces the degree of any given strong extractor to almost optimal, at least when outputting few bits. For various almost-universal hash families, including Toeplitz matrices, Linear Congruential Hash, and Multiplicative Universal Hash, this approach significantly improves the upper bound on the degree of strong extractors in these hash families. Then we consider explicit hash families and multiple-choice schemes in the classical problem of placing balls into bins. We construct explicit hash families of almost-polynomial size that derandomize two classical multiple-choice schemes, matching the maximum loads of a perfectly random hash function.
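The O(d log d) guarantee for linear families rests on drawing measurements from a carefully reweighted distribution rather than from D itself. As an illustrative sketch only (not the thesis's exact algorithm), the classical leverage-score sampling technique for a discretized linear family can be written as follows; the function names and parameters here are our own:

```python
import numpy as np

def leverage_scores(A):
    # Leverage score of row i: the squared norm of row i of an
    # orthonormal basis Q for the column space of A. The scores
    # sum to rank(A), i.e. to d for a full-rank design matrix.
    Q, _ = np.linalg.qr(A)
    return np.sum(Q * Q, axis=1)

def sample_and_solve(A, b, m, rng):
    # Sample m rows with probability proportional to their leverage
    # scores, reweight each sampled row by 1/sqrt(m * p_i) so the
    # subsampled least-squares problem estimates the full one, then
    # solve the reduced problem.
    tau = leverage_scores(A)
    p = tau / tau.sum()
    idx = rng.choice(A.shape[0], size=m, p=p)
    w = 1.0 / np.sqrt(m * p[idx])
    x, *_ = np.linalg.lstsq(A[idx] * w[:, None], b[idx] * w, rcond=None)
    return x
```

With m on the order of d log d sampled rows, the subsampled solution approximates the full least-squares fit up to a constant-factor blowup in error, mirroring the flavor of the guarantee stated in the abstract; the thesis's spectral-sparsification refinement brings this down to O(d).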
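The multiple-choice schemes mentioned above place each ball in the least-loaded of d randomly chosen bins; with two choices the maximum load drops from Θ(log n / log log n) to log₂ log n + O(1). A minimal simulation of this "power of two choices" effect, using fully random choices rather than the explicit hash families the thesis constructs:

```python
import random

def max_load(n_balls, n_bins, d, rng):
    # Throw n_balls into n_bins; each ball goes to the least-loaded
    # of d independently and uniformly chosen bins (d = 1 recovers
    # the classical one-choice scheme). Returns the maximum bin load.
    bins = [0] * n_bins
    for _ in range(n_balls):
        choices = [rng.randrange(n_bins) for _ in range(d)]
        target = min(choices, key=lambda i: bins[i])
        bins[target] += 1
    return max(bins)
```

In typical runs with n balls and n bins, the d = 2 maximum load is noticeably smaller than the d = 1 load; the thesis's contribution is to achieve the same maximum loads with explicit hash families of almost-polynomial size in place of the perfectly random choices simulated here.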
dc.description.department | Computer Science
dc.format.mimetype | application/pdf
dc.identifier | doi:10.15781/T2H98ZX4G
dc.identifier.uri | http://hdl.handle.net/2152/65850
dc.language.iso | en
dc.subject | Query complexity
dc.subject | Linear families
dc.subject | Spectral sparsification
dc.subject | Active regression
dc.subject | Sparse Fourier transform
dc.subject | Hash families
dc.subject | Randomness extractor
dc.subject | Chaining argument
dc.subject | Multiple-choice schemes
dc.title | Using and saving randomness
dc.type | Thesis
dc.type.material | text
thesis.degree.department | Computer Sciences
thesis.degree.discipline | Computer Science
thesis.degree.grantor | The University of Texas at Austin
thesis.degree.level | Doctoral
thesis.degree.name | Doctor of Philosophy |