Browsing by Subject "Computer security"
Now showing 1 - 9 of 9
Item
Automatic validation of secure authentication protocols (2003-08)
Kim, Kyoil, 1964-; Abraham, Jacob A.

Item
Data privacy: the non-interactive setting (2009-05)
Narayanan, Arvind, 1981-; Shmatikov, Vitaly

The Internet has enabled the collection, aggregation and analysis of personal data on a massive scale. It has also enabled the sharing of collected data in various ways: wholesale outsourcing of data warehousing, partnering with advertisers for targeted advertising, data publishing for exploratory research, etc. This has led to complex privacy questions related to the leakage of sensitive user data and the mass harvesting of information by unscrupulous parties. These questions have information-theoretic, sociological and legal aspects and are often poorly understood. There are two fundamental paradigms for how the data is released: in the interactive setting, the data collector holds the data while third parties interact with the data collector to compute some function on the database. In the non-interactive setting, the database is somehow "sanitized" and then published.

In this thesis, we conduct a thorough theoretical and empirical investigation of privacy issues involved in non-interactive data release. Both settings have been well analyzed in the academic literature, but the simplicity of the non-interactive paradigm has resulted in its being used almost exclusively in actual data releases. We analyze several common applications including electronic directories, collaborative filtering and recommender systems, and social networks. Our investigation has two main foci. First, we present frameworks for privacy and anonymity in these different settings within which one might define exactly when a privacy breach has occurred. Second, we use these frameworks to experimentally analyze actual large datasets and quantify privacy issues. The picture that has emerged from this research is a bleak one for non-interactivity. While a surprising level of privacy control is possible in a limited number of applications, the general sense is that protecting privacy in the non-interactive setting is not as easy as intuitively assumed in the absence of rigorous privacy definitions. While some applications can be salvaged either by moving to an interactive setting or by other means, in others a rethinking of the tradeoffs between utility and privacy that are currently taken for granted appears to be necessary.
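The abstract's claim that naive sanitization fails can be made concrete with a toy linkage attack: an adversary who knows a few attributes of a target scores every record in the published dataset and claims a re-identification only when the best match is strong and clearly separated from the runner-up. The records, attribute names, and threshold below are illustrative assumptions, not the dissertation's datasets or algorithm.

```python
# Toy linkage attack on a "sanitized" release, in the spirit of the
# non-interactive setting above. All data and parameters are hypothetical.

def similarity(aux, record):
    """Fraction of the adversary's auxiliary attributes that match a record."""
    matches = sum(1 for k, v in aux.items() if record.get(k) == v)
    return matches / len(aux)

def best_match(aux, published, threshold=0.8):
    """Return the most similar published record, but only when the match is
    strong and clearly separated from the runner-up."""
    scored = sorted(published, key=lambda r: similarity(aux, r), reverse=True)
    top_score = similarity(aux, scored[0])
    runner_up = similarity(aux, scored[1]) if len(scored) > 1 else 0.0
    if top_score >= threshold and top_score > runner_up:
        return scored[0]
    return None  # no confident re-identification

# "Anonymized" release: identifiers removed, but sparse attributes remain.
published = [
    {"id": "user_17", "movie_a": 5, "movie_b": 2, "movie_c": 4},
    {"id": "user_42", "movie_a": 1, "movie_b": 5, "movie_c": 3},
]
# Adversary knows a few of the target's ratings from a public source.
aux = {"movie_a": 5, "movie_c": 4}
print(best_match(aux, published))  # re-identifies user_17
```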
Item
Economic analysis on information security and risk management (2007)
Zhao, Xia, 1977-; Whinston, Andrew B.

This dissertation consists of three essays that explore economic issues in information security and risk management. In the first essay, we develop an economic mechanism that coordinates the security strategies of Service Providers (SPs). SPs are best positioned to safeguard the Internet; however, they generally do not have incentives to take on such a responsibility in the distributed computing environment. The proposed certification mechanism induces SPs to voluntarily accept liability for Internet security. SPs who take on the liability signal their capability in conducting secure computing and benefit from such recognition. We use a game-theoretic model to examine SPs' incentives and the social welfare. Our results show that the certification mechanism can generate a more secure Internet communication environment.

The second essay studies the impact of cyberinsurance and alternative risk management solutions on firms' information security strategies. In the existing literature, cyberinsurance has been proposed as a solution to transfer information risks and reduce security spending. However, we show that cyberinsurance by itself is deficient in addressing the overinvestment issue. We find that the joint use of cyberinsurance and a risk pooling arrangement optimizes firms' security investment. In the case with a large number of firms, we show that firms will invest at the socially optimal level.

The third essay examines the information role of vendors' patching strategies. Patching after software release has become an important stage in the software development cycle. In the presence of quality uncertainty, we show that vendors can leverage patch release times to signal the quality of their software products. We define a new belief profile and identify two types of separating equilibria in a dynamic setting.
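As a rough aid to intuition for the second essay, the sketch below grid-searches the cost-minimizing security investment in a generic expected-cost model (investment plus an actuarially fair premium). The functional forms, constants, and the premium_loading knob are assumptions for illustration; they are not the dissertation's model, which additionally analyzes how a risk pooling arrangement changes the premium term.

```python
# Generic toy model of the security-investment tradeoff. All numbers and
# functional forms are illustrative assumptions, not the dissertation's model.
import math

LOSS = 100.0     # loss if a breach occurs
P0 = 0.5         # breach probability with no investment
EFFICACY = 0.08  # how quickly investment reduces breach probability

def breach_prob(s):
    return P0 * math.exp(-EFFICACY * s)

def expected_cost(s, premium_loading=0.0):
    """Investment plus premium. A fully insured firm pays the premium
    (fair premium scaled by a loading factor) instead of bearing the loss."""
    premium = (1.0 + premium_loading) * breach_prob(s) * LOSS
    return s + premium

# Grid-search the cost-minimizing investment level.
best = min((expected_cost(s), s) for s in range(0, 101))
print(f"optimal investment = {best[1]}, expected cost = {best[0]:.2f}")
```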
Item
Efficient, provably secure code constructions (2011-05)
Agrawal, Shweta Prem; Vishwanath, Sriram; Boneh, Dan; Zuckerman, David; Garg, Vijay; Caramanis, Constantine; Sanghavi, Sujay

The importance of constructing reliable and efficient methods for securing digital information in the modern world cannot be overstated. The urgency of this need is reflected in mainstream media: newspapers and websites are full of news about critical user information, be it credit card numbers, medical data, or social security information, being compromised and used illegitimately. According to news reports, hackers probe government computer networks millions of times a day, about 9 million Americans have their identities stolen each year, and cybercrime costs large American businesses 3.8 million dollars a year. More than 1 trillion dollars' worth of intellectual property has already been stolen from American businesses. It is this ever-growing problem of securing valuable information that our thesis attempts to address (in part). In this thesis, we study methods to secure information that are fast, convenient and reliable. Our overall contribution has four distinct threads.

First, we construct efficient, "expressive" Public Key Encryption systems (specifically, Identity Based Encryption systems) based on the hardness of lattice problems. In Identity Based Encryption (IBE), any arbitrary string such as the user's email address or name can be her public key. IBE systems are powerful and address several problems faced by the deployment of Public Key Encryption. Our constructions are secure in the standard model. Next, we study secure communication over the two-user interference channel with an eavesdropper. We show that using lattice codes helps enhance the secrecy rate of this channel in the presence of an eavesdropper. Thirdly, we analyze the security requirements of network coding. Network coding is an elegant method of data transmission which not only helps achieve capacity in several networks, but also has a host of other benefits. However, network coding is vulnerable to "pollution attacks" when there are malicious users in the system. We design mechanisms to prevent pollution attacks. In this setting, we provide two constructions, a homomorphic Message Authentication Code (HMAC) and a Digital Signature, to secure information that is transmitted over such networks. Finally, we study the benefits of using Compressive Sensing for secure communication over the Wyner wiretap channel. Compressive Sensing has seen an explosion of interest in the last few years with its elegant mathematics and plethora of applications. So far, however, Compressive Sensing had not found application in the domain of secrecy. Given its inherent asymmetry, we ask (and answer in the affirmative) the question of whether it can be deployed to enable secure communication. Our results allow linear encoding and efficient decoding (via LASSO) at the legitimate receiver, along with infeasibility of message recovery (via an information-theoretic analysis) at the eavesdropper, regardless of decoding strategy.
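The pollution-resistance thread hinges on a MAC that commutes with the linear combining routers perform. The sketch below demonstrates only that homomorphic property, using a single secret linear functional; it is intentionally simplified and is not the dissertation's HMAC construction, which must also resist forgery.

```python
# Toy illustration of the *homomorphic* property a network-coding MAC needs:
# the tag of a linear combination of packets equals the same linear
# combination of the packets' tags. For intuition only; not secure by itself.
P = 2**31 - 1  # a public prime modulus (assumed for illustration)

def tag(key, packet):
    """Tag = inner product of the secret key and the packet, mod P."""
    return sum(k * x for k, x in zip(key, packet)) % P

def combine(coeffs, packets):
    """What an intermediate router does: mix packets linearly, mod P."""
    return [sum(c * pkt[i] for c, pkt in zip(coeffs, packets)) % P
            for i in range(len(packets[0]))]

key = [7, 11, 13, 17]                # shared between source and verifier
p1, p2 = [1, 2, 3, 4], [5, 6, 7, 8]  # source packets
coeffs = [3, 5]                      # mixing coefficients chosen in-network

mixed = combine(coeffs, [p1, p2])
mixed_tag = (coeffs[0] * tag(key, p1) + coeffs[1] * tag(key, p2)) % P
assert tag(key, mixed) == mixed_tag  # verifier accepts honest mixtures
```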
Item
Hop integrity: a defense against denial-of-service attacks (2003)
Huang, Chin-Tser; Gouda, Mohamed G., 1947-

A computer network is said to provide hop integrity iff the following three conditions hold for every pair of adjacent routers p and q in the network. First, p does not forward any message to q unless q is up and reachable. Second, when q receives a message m supposedly from p, then q can check that m was not modified after it was sent. Third, when q receives a message m supposedly from p, then q can check that m is not a replay of an old message sent by p.

In this dissertation, we propose three protocols that can be added to the routers in a computer network so that the network provides hop integrity, and thus overcomes most denial-of-service attacks. These three protocols are the secure address resolution protocol, the weak hop integrity protocol, and the strong hop integrity protocol. The secure address resolution protocol includes an invite-accept protocol and a request-reply protocol, and requires a secure server connected to the Ethernet. The weak hop integrity protocol includes a secret exchange protocol and an integrity check protocol. The strong hop integrity protocol combines a soft sequence number protocol with the weak hop integrity protocol. We also present an alternative way to achieve strong hop integrity with hard sequence numbers. All the protocols are stateless, require small overhead, and do not constrain the network protocol in the routers in any way.
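A minimal sketch of what a per-hop check in the spirit of the weak and strong hop integrity protocols might look like: a keyed digest catches modification (condition two) and a growing sequence number catches replay (condition three). HMAC-SHA256, the packet layout, and the hard monotone counter are illustrative assumptions; the dissertation's soft sequence numbers additionally tolerate loss of state.

```python
# Minimal per-hop integrity check: keyed digest against modification,
# sequence number against replay. Illustrative sketch, not the
# dissertation's protocol.
import hmac, hashlib

def send(secret, seq, payload: bytes):
    digest = hmac.new(secret, seq.to_bytes(8, "big") + payload,
                      hashlib.sha256).digest()
    return (seq, payload, digest)

def receive(secret, last_seen_seq, message):
    seq, payload, digest = message
    expected = hmac.new(secret, seq.to_bytes(8, "big") + payload,
                        hashlib.sha256).digest()
    if not hmac.compare_digest(digest, expected):
        return None, last_seen_seq  # modified in transit: drop
    if seq <= last_seen_seq:
        return None, last_seen_seq  # replayed old message: drop
    return payload, seq             # accept and advance the window

secret = b"per-hop shared secret"
ok, last = receive(secret, 0, send(secret, 1, b"route update"))
assert ok == b"route update" and last == 1
replayed, last = receive(secret, last, send(secret, 1, b"route update"))
assert replayed is None             # replay rejected
```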
Item
Reliability and security of vector routing protocols (2011-05)
Li, Yan, doctor of computer science; Gouda, Mohamed G., 1947-; Lam, Simon S.; Mok, Aloysius K.; Qiu, Lili; Elmallah, Ehab S.

As the Internet becomes the ubiquitous infrastructure for various applications, demands on the reliability, availability and security of its routing protocols are becoming more stringent. Unfortunately, failures are still common in the daily operation of a network. Service disruption for even a short time can seriously affect the quality of real-time applications, such as VoIP and video-on-demand applications. Moreover, critical business and government applications require routing protocols to be robust against malicious attacks, such as denial-of-service attacks. This dissertation proposes three techniques to address some reliability and security concerns in intra-domain (distance vector) routing protocols and inter-domain (path vector) routing protocols.

The first technique addresses the problem of service disruption that arises from sudden link failures in distance vector routing protocols. We consider two types of link failures: single link failures and shared risk link group failures. For single link failures, we propose an IP fast reroute mechanism to reroute packets around the failed links. This fast reroute mechanism is the first that requires neither complete knowledge of the network topology nor changes to the original routing protocol. This mechanism proactively computes a set of relay nodes that can be used to tunnel rerouted packets immediately after the detection of a link or node failure. The mechanism includes an algorithm for a node to automatically identify itself as a candidate relay node for a reroute link and notify the source node of the reroute link of its candidacy. The source node can then decide the validity of a candidate relay node. The mechanism also includes an algorithm to suppress redundant notification messages. We then extend our IP fast reroute mechanism for single link failures to accommodate shared risk link group failures; we achieve this goal by introducing one more bit of information. Through simulations, I show that the proposed mechanisms succeed in rerouting around failed links nearly 100% of the time, with the length of the reroute path being comparable to the length of the re-converged shortest path.

The second technique addresses the problem that arises from allowing any node to route data packets to any other node in the network (and consequently allowing any adversary node to launch DoS attacks against other nodes in the network). To solve this problem, we propose a blocking option that allows a node u to block a specified set of nodes and prevent each of them from sending or forwarding packets to node u. The blocking option intends to discard violating packets near the adversary nodes that generated them rather than near their ultimate destinations. We then discuss unintentionally blocked nodes, called blind nodes, and extend the routing protocols to allow each node to communicate with its blind nodes via special nodes called joint nodes. Finally, I show, through extensive simulation, that the average number of blind nodes is close to zero when the average number of blocked nodes is small.

The third technique addresses the problem that arises when a set of malicious ASes in the Internet collude to hijack an IP prefix from its legitimate owner in BGP. (Note that none of the previous proposals for protecting BGP against IP prefix hijacking is effective when malicious ASes can collude.) To solve this problem, we propose an extension of BGP in which each listed AS in an advertised route supplies a certified full list of all its peers. I then present an optimization in which each AS in an advertised route supplies only a balanced peer list, which is much smaller than its full peer list. Using real Internet topology data, I demonstrate that the average, and largest, balanced peer list is 92% smaller than the corresponding full peer list. Furthermore, in order to handle the dynamics of the Internet topology, we propose algorithms for issuing certificates that reflect the latest changes to the Internet topology graph. Although the results in this dissertation are presented in the context of distance vector and path vector routing protocols, many of them can be extended to link state routing protocols as well.
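The third technique's core check can be sketched as follows: a route is accepted only when every adjacent pair of ASes on it appears in each other's certified peer lists, so colluding ASes cannot splice in a link that does not exist. The AS numbers and the use of plain sets in place of signed certificates are illustrative assumptions.

```python
# Sketch of route validation against certified peer lists. An advertised
# adjacency counts only if *both* endpoints certify it, which is what blocks
# a one-sided fabricated link.

def route_is_plausible(route, certified_peers):
    """route: list of AS numbers; certified_peers: AS -> set of certified peers."""
    for a, b in zip(route, route[1:]):
        if b not in certified_peers[a] or a not in certified_peers[b]:
            return False  # the advertised adjacency is not mutually certified
    return True

certified_peers = {
    64500: {64501},
    64501: {64500, 64502},
    64502: {64501},
    64666: {64502},  # attacker certifies a one-sided "link"
}
assert route_is_plausible([64500, 64501, 64502], certified_peers)
assert not route_is_plausible([64500, 64666, 64502], certified_peers)
```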
Item
Reliable SRAM fingerprinting (2011-08)
Kim, Joonsoo, Ph. D.; Abraham, Jacob A.; Touba, Nur A.; Banerjee, Sanjay; Pan, David Z.; Gebara, Fadi H.

Device identification, like human identification, has become critical to mitigating growing security problems. In the era of ubiquitous computing, it is important to ensure universal device identities that are versatile in a number of ways, for example, to enhance computer security or to enable large-scale data capture, management and analysis. For device identities, simple labeling works only if the labels are properly managed in a highly controlled environment. We can also impose hard-coded serial numbers in non-volatile memories, but it is well known that this is expensive and vulnerable to security attacks. Hence, it is desirable to develop reliable and secure device identification methods using fingerprint-like characteristics of the electronic devices themselves.

As technology scales, process variation has become the most critical barrier to overcome in modern chip development. Ironically, some research works exploit this aggressive process variation for the identification of individual devices, finding measurable physical characteristics that are unique to each integrated circuit. Among them, device identification using the initial power-up values of SRAM cells, called SRAM fingerprints, has been emphasized lately, in part due to the abundant availability of SRAM cells in modern microprocessors. More importantly, since the cross-coupled inverter structure of each SRAM cell amplifies even small mismatches between its two inverter nodes, it is very sensitive to, and maximizes the effect of, random process variation, which gives SRAM fingerprints the properties of a naturally inherent device ID. Therefore, this work focuses on achieving reliable device identification using SRAM fingerprints. To date, this dissertation shows the most comprehensive feature characterization of SRAM fingerprints, based on large datasets measured from real devices under various environmental conditions. SRAM fingerprints in three different process technologies (IBM 32nm SOI, IBM 65nm bulk, and TSMC 90nm low-k dielectric) have been investigated across different temperatures and voltages. Using formal statistical tools, the features SRAM fingerprints need in order to be usable as device IDs (uniqueness, randomness, independence, reproducibility, etc.) have been empirically demonstrated.

As some previous works have mentioned, the initial states of SRAM cells are inherently unreliable, so there is always some chance of error during the identification process. It is observed that, under environmental variations, this instability worsens. Most of the previous work, however, ignores the temperature dependence of SRAM power-up values, which turns out to be critical, contrary to earlier speculation, and becomes a real challenge in realizing reliable SRAM-based device identification. Note that temperature variation is not negligible in many situations, for example, the authentication of widely distributed sensors. We show that it is possible to achieve an SRAM-based device identification system that operates reliably under a wide range of temperatures.

The proposed system is composed of three major steps: enrollment, system evaluation, and matching. During the enrollment process, power-up samples of SRAM fingerprints are captured from each manufactured device, and the feature information, or characterization identifier (CID), is characterized to generate a representative fingerprint value associated with the product device. By collecting the samples and the CIDs, the system database is constructed before devices are distributed to the field. During the matching process, we take a single sample fingerprint from a power-cycle experiment, the field identifier (FID), and perform a match against a repository of the CIDs of all manufactured devices. There is an additional monitoring subsystem, called system evaluation, that estimates the system accuracy using the system database; it controls the system parameters while maintaining the system accuracy requirement.

This work delivers a total-package statistical framework that raises the design issues of each step and provides systematic solutions to these inter-related issues. We provide statistical methods to determine the sample size for the enrollment of chip identities, to generate representative fingerprint features from a limited number of test samples, and to estimate the system performance along with the proposed system parameter values and the confidence interval of the estimation. A novel matching scheme is proposed to improve the system accuracy and increase population coverage under environmental variations, especially temperature variation. Several advanced mechanisms that exploit the instability for our benefit are also discussed, along with supporting state-of-the-art circuit technologies. All these theoretical frameworks have been validated by comprehensive empirical analysis based on the real SRAM fingerprint datasets introduced earlier. This dissertation covers a wide range of multidisciplinary research areas, including solid-state device physics, computer security, biometrics, statistics, and pattern matching. The main contribution is a comprehensive interdisciplinary framework that enables reliable SRAM fingerprinting even when the fingerprint, depending on ambient conditions, exhibits nondeterministic behavior. Furthermore, the interdisciplinary bases introduced in this work are expected to provide generic, fundamental methodologies that apply to device fingerprints in general, not just SRAM fingerprints.
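The matching step described above can be sketched as a nearest-neighbor search under a noise budget: the field identifier (FID) is compared against every enrolled characterization identifier (CID), and the closest device is accepted only if the bit-error rate stays below a threshold. Fixed-width bit strings, Hamming distance, and the threshold value are illustrative assumptions, not the dissertation's statistical matching scheme.

```python
# Sketch of FID-to-CID matching with a Hamming-distance noise threshold.
# Fingerprint widths and the error-rate budget are illustrative assumptions.

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def match_fid(fid: int, cids: dict, n_bits: int, max_error_rate: float = 0.2):
    """Return the enrolled device whose CID is nearest to the FID, provided
    the observed bit-error rate is within budget; otherwise report no match."""
    device, cid = min(cids.items(), key=lambda kv: hamming(fid, kv[1]))
    if hamming(fid, cid) / n_bits <= max_error_rate:
        return device
    return None

cids = {"chip_A": 0b10110010, "chip_B": 0b01001101}  # enrolled at the factory
noisy_fid = 0b10110011                               # chip_A with one flipped cell
assert match_fid(noisy_fid, cids, n_bits=8) == "chip_A"
```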
Item
Robust behavioral malware detection (2018-06-18)
Kazdagli, Mikhail; Tiwari, Mohit; Shakkottai, Sanjay; Gligoric, Milos; Khurshid, Sarfraz; Christodorescu, Mihai

Computer security attacks evolve to evade deployed defenses. Recent attacks have ranged from exploiting generic software vulnerabilities in memory-unsafe languages, such as buffer overflows and format string vulnerabilities, to exploiting logic errors in web applications through means such as SQL injection and cross-site scripting. Furthermore, recent attacks have focused on escalating privileges and stealing sensitive information by exploiting new hardware or operating system (OS) interfaces. Computer security attacks now also rely on social engineering techniques to run malicious programs on victims' machines; instances of such abuse include phishing and watering hole attacks, both of which trick people into running malicious code or divulging confidential information. Thus, traditional computer security methods, such as OS confinement and program analysis, will not prevent new attacks that do not violate OS confinement or present illegal program behaviors. Another challenge is that traditional security approaches have large trusted code bases (TCBs), which include hardware, OSs, and other software components that implement authentication and authorization logic across a distributed system. This is a vulnerable area because these components are complex and often contain vulnerabilities that undermine the overall system's integrity or confidentiality.

Evasive attacks on vulnerable systems, especially in instances where trusted components turn malicious, inspire the creation of defenses that can augment formally specified mechanisms against known threats. Specifically, this thesis advances the state of the art in behavioral malware detection: detecting previously unknown malware in the very early stages of infection within an enterprise network. Here we develop three fundamental insights into modern-day attacks and then describe a cross-layer defense against such attacks. First, we make low-level machine state visible to behavioral analysis, significantly minimizing the TCB and its associated vulnerabilities. Specifically, our behavioral detector uses an executable's dynamic properties, with architectural and micro-architectural state as input. Second, we evaluate behavioral detectors against adaptive adversaries. For this purpose, we introduce a new metric for a detector's robustness against malware modifications, which serves as a step toward the explainability of machine-learning-based malware detectors. Finally, we exploit the fact that attacks spread through only a limited number of vectors, and we propose new techniques to analyze the resulting dynamic correlations created among machines. These insights show that behavioral detectors can efficiently protect both individual devices and end hosts within enterprise networks.

We present three such behavioral detectors. Sherlock protects resource-constrained devices, such as mobile phones and Internet-of-Things (IoT) devices, without modifying the software/hardware stack. Sherlock's supervised and unsupervised versions outperform prior work by 24.7% and 12.5% (area under the curve (AUC) metric), respectively, and detect stealthy malware that often evades static analysis tools. The second behavioral detector, Shape-GD, protects devices within an enterprise network. It monitors devices on the network, aggregates data from weak local detectors, overlays that with network-level information, and then makes early, robust predictions regarding malicious activity. Shape-GD achieves its goals by exploiting latent attack semantics. Specifically, it analyzes communication patterns across multiple devices, partitioning them into neighborhoods. Devices within the same neighborhood are likely to be exposed to the same attack vector. Furthermore, we hypothesize that the conditional distribution of false positives is different from that of true positives; i.e., given a neighborhood of nodes, we can compute the aggregate distributional shape of alert feature vectors from the neighborhood itself and provide robust labels. We evaluate Shape-GD by emulating a large community of Windows systems using system call traces from a few thousand malicious and benign applications; we simulate both a phishing attack in a corporate email network and a watering hole attack through a popular website. In both scenarios, Shape-GD identifies malware early (~100 infected nodes in a ~100K-node system for watering hole attacks, and ~10 of ~1,000 for phishing attacks) and robustly (with ~100% global true-positive and ~1% global false-positive rates). The third behavioral detector, Centurion, detects malware across machines monitored by an anti-virus company. It analyzes behavior from 5 million Symantec client machines in real time and discovers malware by correlating file downloads across multiple machines.
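Shape-GD's neighborhood idea can be sketched as pooling alert feature vectors from machines that share an exposure vector and labeling the pool by its aggregate shape, rather than trusting any individual weak detector. Using a per-dimension mean and a fixed threshold as the "shape" test is a deliberate simplification of the approach; the neighborhood names, feature values, and threshold below are hypothetical.

```python
# Sketch of neighborhood-level aggregation of weak-detector alerts, in the
# spirit of Shape-GD. The mean-plus-threshold "shape" test is an illustrative
# simplification, not the thesis's statistical machinery.
from collections import defaultdict
from statistics import mean

def neighborhood_labels(alerts, threshold):
    """alerts: list of (neighborhood_id, feature_vector) from weak detectors.
    Flags neighborhoods whose aggregate alert profile looks malicious."""
    pooled = defaultdict(list)
    for hood, features in alerts:
        pooled[hood].append(features)
    labels = {}
    for hood, vectors in pooled.items():
        # Aggregate shape: per-dimension mean across the neighborhood's alerts.
        centroid = [mean(dim) for dim in zip(*vectors)]
        labels[hood] = "malicious" if max(centroid) > threshold else "benign"
    return labels

alerts = [
    ("email_hood", [0.9, 0.8]), ("email_hood", [0.7, 0.9]),  # correlated true positives
    ("web_hood", [0.2, 0.1]), ("web_hood", [0.1, 0.3]),      # scattered false positives
]
print(neighborhood_labels(alerts, threshold=0.5))
# {'email_hood': 'malicious', 'web_hood': 'benign'}
```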
Compared with a recent local detector that analyzes metadata from file downloads, Centurion reduced the number of false positives from ~1M to ~110K and increased the true-positive rate by a factor of ~2.5. In addition, on average, Centurion detects malware 345 days earlier than commercial anti-virus products.

Item
Security architects in practice (2006-12)
Lakshminarayanan, Vidya L.; Perry, Dewayne E.

While security has long been a significant issue in military systems, the spread of the internet has stimulated a growing interest in, and increasing demand for, secure systems. Understanding how architects manage security requirements in practice is a necessary first step toward providing repeatable processes that use effective techniques, methods, and architectural structures. In this thesis, I present the results of a multiple-case study of practicing security architects: the key aspects of security requirements, the critical issues in managing security requirements, the essential characteristics of security architects, how architects define security architecture, and how requirements are transformed into architectures and implemented.