Browsing by Subject "Computer network protocols--Design"
Now showing 1 - 4 of 4
Item: Design and analysis of self-stabilizing sensor network protocols (2007-08)
Choi, Young-ri; Gouda, Mohamed G., 1947-

A sensor is a small battery-operated computer with an antenna and a sensing board that can sense magnetism, sound, heat, etc. Sensors in a network communicate and cooperate with other sensors to perform given tasks. A sensor network is exposed to various dynamic factors and faults, such as topology changes, energy-saving features, unreliable communication, and hardware/software failures. Thus, protocols in a sensor network should be able to adapt to these dynamic factors and recover from faults. In this dissertation, we focus on designing and analyzing a class of sensor network protocols called self-stabilizing protocols. A self-stabilizing protocol is guaranteed to return to a state where it performs its intended function correctly whenever some dynamic factor or fault corrupts the state of the protocol arbitrarily. Therefore, to make a sensor network resilient to dynamic factors and faults, each protocol in the network should be self-stabilizing. We first develop a state-based model that can be used to formally specify sensor network protocols. This model accommodates several unique characteristics of sensor networks, such as unavoidable local broadcast, probabilistic message transmission, asymmetric communication, message collision, timeout actions, and randomization steps. Second, we present analysis methods for verifying the correctness and self-stabilization properties of sensor network protocols specified in this model. Third, using the state-based model and analysis methods, we design three self-stabilizing sensor network protocols, prove their self-stabilization properties, and estimate their performance.
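The self-stabilization property can be illustrated with a minimal sketch (not one of the dissertation's protocols): nodes repeatedly recompute their hop distance to a base station, and the system converges to correct distances from any corrupted starting state.

```python
def stabilize(neighbors, root, state, sweeps):
    """Self-stabilizing hop-distance computation (illustrative sketch).

    Rule applied by every node:
        dist[root] = 0;   dist[v] = 1 + min(dist[u] over v's neighbors u)
    Starting from an arbitrarily corrupted state, repeated application of
    the rule converges to the correct hop distances -- the defining
    property of a self-stabilizing protocol.  A round-robin scheduler is
    used here for determinism; stabilization holds under any fair scheduler.
    """
    for _ in range(sweeps):
        for v in neighbors:
            state[v] = 0 if v == root else 1 + min(state[u] for u in neighbors[v])
    return state

# A 4-sensor line topology A - B - C - D with A as the base station,
# starting from a fault-corrupted state.
neighbors = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
corrupted = {"A": 7, "B": 0, "C": 99, "D": -3}
print(stabilize(neighbors, "A", corrupted, sweeps=5))
# -> {'A': 0, 'B': 1, 'C': 2, 'D': 3}
```

Note that convergence does not depend on the initial values at all, which is exactly what lets such a protocol recover from arbitrary state corruption.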
These three self-stabilizing protocols are a sentry-sleeper protocol that elects a sentry from a group of sensors at the beginning of each time period, a logical grid routing protocol that builds a routing tree rooted at the base station, and a family of flood sequencing protocols that use sequence numbers to distinguish fresh flood messages from redundant ones.

Item: Hop integrity: a defense against denial-of-service attacks (2003)
Huang, Chin-Tser; Gouda, Mohamed G., 1947-

A computer network is said to provide hop integrity iff the following three conditions hold for every pair of adjacent routers p and q in the network. First, p does not forward any message to q unless q is up and reachable. Second, when q receives a message m supposedly from p, q can check that m was not modified after it was sent. Third, when q receives a message m supposedly from p, q can check that m is not a replay of an old message sent by p. In this dissertation, we propose three protocols that can be added to the routers in a computer network so that the network provides hop integrity and thus overcomes most denial-of-service attacks. These three protocols are the secure address resolution protocol, the weak hop integrity protocol, and the strong hop integrity protocol. The secure address resolution protocol consists of an invite-accept protocol and a request-reply protocol, and requires a secure server connected to the Ethernet. The weak hop integrity protocol consists of a secret exchange protocol and an integrity check protocol. The strong hop integrity protocol combines a soft sequence number protocol with the weak hop integrity protocol. We also present an alternative way to achieve strong hop integrity with hard sequence numbers.
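Conditions two and three above amount to a per-hop message authentication check plus a freshness check. A minimal sketch in Python, assuming the shared secret has already been established by something like the secret exchange protocol (the HMAC construction and the simple highest-sequence-seen check are illustrative, not the dissertation's exact design):

```python
import hashlib
import hmac

SECRET = b"shared-secret-between-p-and-q"  # assumed pre-established out of band

def send(seq, payload):
    """Router p tags each message with a sequence number and a MAC."""
    tag = hmac.new(SECRET, seq.to_bytes(8, "big") + payload, hashlib.sha256).digest()
    return seq, payload, tag

def accept(state, seq, payload, tag):
    """Router q accepts m iff the MAC verifies (no modification) and the
    sequence number is fresh (no replay).  state records the highest
    sequence number accepted so far."""
    expected = hmac.new(SECRET, seq.to_bytes(8, "big") + payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return False          # condition 2 violated: message was modified
    if seq <= state["last_seq"]:
        return False          # condition 3 violated: replay of an old message
    state["last_seq"] = seq
    return True

state = {"last_seq": 0}
msg = send(1, b"route update")
print(accept(state, *msg))    # True: fresh and unmodified
print(accept(state, *msg))    # False: replayed
```

A message with a forged or altered payload fails the `compare_digest` check the same way a replay fails the sequence check, so a router can discard bogus traffic one hop before it would otherwise be forwarded.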
All three protocols are stateless, incur little overhead, and do not constrain the network protocol in the routers in any way.

Item: Protocol design for dynamic Delaunay triangulation (2008-12)
Lee, Dong-young, 1973-; Lam, Simon S., 1947-

Delaunay triangulation (DT) is a useful geometric structure for networking applications. We define a distributed DT and present a necessary and sufficient condition for a distributed DT to be correct. This condition is used as a guide for protocol design. We investigate the design of join, leave, failure, and maintenance protocols for a set of nodes in d dimensions (d > 1) to construct and maintain a distributed DT in a dynamic environment. The join, leave, and failure protocols in the suite are proved correct for a single join, leave, and failure, respectively. For a system under churn, it is impossible to maintain a correct distributed DT continually, so we define an accuracy metric such that accuracy is 100% if and only if the distributed DT is correct. The suite also includes a maintenance protocol designed to recover from incorrect system states and to improve accuracy. In designing the protocols, we make use of two novel observations to substantially improve protocol efficiency. First, in the neighbor discovery process of a node, many replies to the node's queries contain redundant information. Second, a new failure protocol that takes a proactive approach to recovery outperforms the reactive approaches used in prior work. Experimental results show that our suite of protocols maintains high accuracy for systems under churn, that each system converges to 100% accuracy after churn stops, and that the protocols are much more efficient than those in prior work. To illustrate the usefulness of distributed DT for networking applications, we also present several application protocols, including greedy routing, finding a closest existing node, clustering, broadcast, and geocast.
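Greedy routing over a distributed DT is simple to state: each node forwards toward the neighbor closest to the target point and stops at a local minimum. A sketch, assuming the DT has already been constructed by the join/leave protocols (the five-point set and its hand-written DT adjacency below are hypothetical):

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_route(coords, neighbors, src, target_pt):
    """Greedy routing: repeatedly forward to the neighbor closest to the
    target point.  On a correct DT this terminates at a node closest to
    target_pt (Bose and Morin 2004 for node destinations; the dissertation
    extends the guarantee to closest-node search)."""
    path, cur = [src], src
    while True:
        best = min(neighbors[cur], key=lambda v: dist(coords[v], target_pt))
        if dist(coords[best], target_pt) >= dist(coords[cur], target_pt):
            return path                  # local minimum: cur is a closest node
        cur = best
        path.append(cur)

# Unit square plus its center; the adjacency below is the DT of this
# point set, written out by hand (edges of the square plus the center
# connected to all four corners).
coords = {"a": (0, 0), "b": (1, 0), "c": (1, 1), "d": (0, 1), "e": (0.5, 0.5)}
neighbors = {"a": ["b", "d", "e"], "b": ["a", "c", "e"],
             "c": ["b", "d", "e"], "d": ["a", "c", "e"],
             "e": ["a", "b", "c", "d"]}
print(greedy_route(coords, neighbors, "a", (0.9, 0.95)))  # -> ['a', 'e', 'c']
```

The stopping rule is what makes the closest-node guarantee meaningful: on a correct DT, a greedy local minimum can only occur at a node that is globally closest to the target point, so no backtracking is ever needed.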
Bose and Morin proved in 2004 that greedy routing on a DT always succeeds in finding the destination node. We prove that greedy routing always finds a closest existing node to a given point, and that our broadcast and geocast protocols always deliver a message to every target node. Our broadcast and geocast protocols are also efficient in the sense that very few target nodes receive duplicate messages and non-target nodes receive no messages. Performance characteristics of greedy routing, broadcast, and geocast are investigated using simulation experiments. We also investigate the impact of inaccurate coordinates on the performance of greedy routing, broadcast, and geocast.

Item: Protocol design for scalable and reliable group rekeying (2005)
Zhang, Xincheng; Lam, Simon S., 1947-

In secure group communications, group users share a symmetric key called the group key. The group key is used to encrypt data traffic among group users or to restrict access to resources intended for group users only. A key server needs to change the group key after users join and leave (called group rekeying) by composing a rekey message, which consists of encrypted new keys (encryptions, for short), and delivering it to all users. When the group is large, rekeying per user join or leave becomes infeasible because of its high processing and bandwidth overheads. Instead, the key server changes the group key once per rekey interval, whose length determines how tight the group access control is. It is therefore desirable to reduce the overhead of group rekeying as much as possible in order to allow frequent rekeying. To address the scalability of the key server, Wong, Gouda, and Lam proposed the key tree approach in 1998; the same idea also appears in RFC 2627. The scalability of rekey transport, however, was not addressed. The objective of this dissertation is to design a scalable and reliable rekey transport protocol and evaluate its performance.
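The key tree idea admits a back-of-envelope calculation: each user holds the keys on its leaf-to-root path, so a single leave changes only O(log n) keys, and each new key is encrypted under the current keys of its children. A sketch for the full-binary-tree case (the dissertation's key trees need not be binary; this is only illustrative):

```python
import math

def leave_rekey_encryptions(n):
    """Encryptions the key server sends when one user leaves a full
    binary key tree with n users (illustrative sketch of the key tree
    approach of Wong, Gouda, and Lam, 1998).

    Every key on the departed leaf's path to the root is replaced.  Each
    new key is encrypted under its two children's current keys, except
    the lowest new key, which has only the departed user's sibling below
    it: 2*log2(n) - 1 encryptions, versus n - 1 for a flat scheme that
    encrypts the new group key once per remaining user.
    """
    h = int(math.log2(n))            # tree height = length of the path to the root
    return 2 * h - 1

for n in (8, 1024, 2 ** 20):
    print(n, leave_rekey_encryptions(n), "vs flat", n - 1)
# 8 -> 5 vs 7; 1024 -> 19 vs 1023; 1048576 -> 39 vs 1048575
```

Even at a million users, a leave costs the server a few dozen encryptions instead of a million, which is what makes frequent rekey intervals feasible in the first place.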
Rekey transport differs from data transport because rekey messages require scalable, reliable, and real-time delivery. Furthermore, each user needs only a small subset of the encryptions in a rekey message. We have proposed a scalable and reliable rekey transport protocol whose efficiency benefits from these special properties of rekey transport. The protocol runs in two steps: a multicast step followed by a unicast recovery step. Proactive forward error correction (FEC) is used in the multicast step to reduce delivery latency and to limit the number of users who need unicast recovery. The unicast recovery step provides eventual reliability; it also reduces the worst-case delivery latency as well as user bandwidth overhead. The protocol design addresses several technical issues. First, a key identification scheme is proposed that lets each user identify the subset of new keys it needs; the communication cost of this scheme is only a few bytes per packet. Second, we investigate how to space the sending times of packets to make proactive FEC resilient to burst loss. Lastly, an adaptive FEC scheme is proposed that keeps the number of users who need unicast recovery close to a small target value under dynamic network conditions; this scheme also keeps the FEC bandwidth overhead and the rekey interval close to the feasible minima. Application-layer multicast (ALM) offers new opportunities for naming and routing. In this dissertation, we study how to use ALM to support concurrent rekey and data transport in secure group communications. Rekey traffic is bursty and requires fast delivery, and it competes with data traffic for available bandwidth, so it is desirable to reduce rekey bandwidth overhead as much as possible. Toward this goal, we propose a multicast scheme that exploits proximity in the underlying network. We further propose a rekey message splitting scheme that significantly reduces rekey bandwidth overhead at each user access link and network link.
We formulate and prove correctness properties for the multicast scheme and the rekey message splitting scheme. Simulation results show that our approach reduces rekey bandwidth overhead from several thousand encryptions to fewer than ten encryptions for more than 90% of the users in a group of 1024 users.
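The proactive FEC used in the multicast step can be shown in its simplest form: one XOR parity packet appended to a block of k equal-length rekey packets lets a receiver repair any single loss without falling back to unicast recovery. A real deployment would use a stronger erasure code tolerating multiple losses; this (k, k+1) parity scheme is only an illustrative sketch:

```python
from functools import reduce

def xor(a, b):
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def add_parity(packets):
    """Proactive FEC, simplest form: append one XOR parity packet to a
    block of k equal-length packets before multicasting, so any single
    loss is repairable without a retransmission round trip."""
    return packets + [reduce(xor, packets)]

def recover(received):
    """received[i] is None for the one lost packet; XORing all the
    surviving packets (including the parity) restores it."""
    lost = received.index(None)
    restored = reduce(xor, [p for p in received if p is not None])
    return received[:lost] + [restored] + received[lost + 1:]

block = [b"key1", b"key2", b"key3"]      # three equal-length rekey packets
sent = add_parity(block)                 # four packets on the wire
got = sent[:1] + [None] + sent[2:]       # packet 1 lost in the multicast step
assert recover(got)[1] == b"key2"        # repaired without unicast recovery
```

This is the sense in which proactive FEC trades a small, fixed bandwidth overhead (one extra packet per block here) for fewer users needing the slower unicast recovery step.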