Tuesday, May 10, 2011

Coverage Control for Mobile Sensing Networks

Here is a summary of the steps this algorithm takes to achieve coverage control for a mobile sensing network.

1. Each node calculates the Voronoi partition for itself and its neighbors
2. Lloyd's algorithm is used to move each node toward the centroid of its Voronoi cell
3. Nodes monitor for any nodes leaving or entering the area
4. Sensing radius and communication radius are adjusted accordingly

This process continues as long as nodes are still moving around.
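The loop above (steps 1-2 plus repeating while nodes move) can be sketched with a discretized Lloyd iteration. This is a minimal sketch, not the paper's algorithm: the unit-square region, node coordinates, grid resolution and iteration count are all illustrative, and exact Voronoi cells are approximated by nearest-node assignment of sample points.

```python
def lloyd_step(nodes, samples):
    """One Lloyd iteration: assign each sample point to its nearest node
    (a discrete Voronoi partition), then move every node to the centroid
    of its cell."""
    cells = {i: [] for i in range(len(nodes))}
    for p in samples:
        nearest = min(range(len(nodes)),
                      key=lambda i: (nodes[i][0] - p[0]) ** 2
                                    + (nodes[i][1] - p[1]) ** 2)
        cells[nearest].append(p)
    moved = []
    for i, (x, y) in enumerate(nodes):
        pts = cells[i]
        if pts:   # centroid of the discrete Voronoi cell
            moved.append((sum(px for px, _ in pts) / len(pts),
                          sum(py for _, py in pts) / len(pts)))
        else:     # empty cell: the node stays put
            moved.append((x, y))
    return moved

# Sample the unit square and iterate; steps 3-4 (membership monitoring,
# radius adjustment) are outside this sketch.
samples = [(x / 20.0, y / 20.0) for x in range(21) for y in range(21)]
nodes = [(0.1, 0.1), (0.15, 0.9), (0.8, 0.2)]
for _ in range(30):
    nodes = lloyd_step(nodes, samples)
```

Each iteration cannot increase the coverage cost (the summed squared distance from sample points to their nearest node), which is what drives the nodes toward a centroidal Voronoi configuration.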

This algorithm appears to be intended for sensor nodes that can control their own movement, as opposed to nodes attached to something that moves independently. So this protocol targets a very specific type of WSN that would most likely be very expensive to implement, which to me goes against the reason you would use a WSN in the first place.

Variable Radii Connected Sensor Cover in Sensor Networks

This paper was very thorough in explaining everything the protocol does. It gave a list of all of the terms used throughout the paper, which was very helpful for understanding what was going on.

This paper shows three different ways to obtain coverage using variable-radii sensors. The method the authors concluded was best was the Voronoi-based coverage method. This makes sense to me because the other coverage methods required far more computation, which in turn made them cost more energy to use. The Voronoi-based approach outperformed all of the other protocols they presented.

Thursday, April 28, 2011

Secure Routing for Mobile Ad Hoc Networks

A MANET is a self-organized interconnection of wireless communication devices. The key challenge in mobile ad hoc networks is securing the network: in such a dynamic environment, roaming nodes move in and out of the network all the time, which can bring malicious nodes into the network that reroute or sometimes clog it. To avoid such scenarios, the authors present a routing protocol that mitigates the effects of such malicious nodes. The protocol constructs diverse paths from source to destination, and it is based on the assumption that a shared key exists between source and destination. The source disperses the message into a number of parts, and each message carries an SRP header that consists of a query identification number, a query sequence number and a MAC. Once the data is received by the destination node, it validates the data by comparing it with the information contained in the SRP header.
In the end, the protocol guarantees that compromised route replies will either be rejected or never reach back to the querying node.
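The destination-side validation described above can be sketched with an HMAC over the SRP header fields (query identification number, query sequence number, MAC). The key, field layout and byte encoding here are illustrative assumptions, not the paper's exact wire format.

```python
import hashlib
import hmac

SHARED_KEY = b"source-destination-key"   # assumed pre-established S-D key

def make_srp_header(query_id, query_seq, payload):
    """Build an SRP-style header: query id, query sequence number, and a
    MAC computed with the key shared between source and destination."""
    mac = hmac.new(SHARED_KEY, b"%d|%d|%s" % (query_id, query_seq, payload),
                   hashlib.sha256).hexdigest()
    return {"qid": query_id, "qseq": query_seq, "mac": mac}

def validate_at_destination(header, payload):
    """The destination recomputes the MAC and compares it with the header."""
    expected = hmac.new(SHARED_KEY,
                        b"%d|%d|%s" % (header["qid"], header["qseq"], payload),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, header["mac"])

hdr = make_srp_header(7, 1, b"route request")
ok = validate_at_destination(hdr, b"route request")         # untampered data
bad = validate_at_destination(hdr, b"modified in transit")  # MAC mismatch
```

A malicious intermediate node cannot forge a valid header without the shared key, which is what lets the destination discard fabricated replies.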

Routing Techniques in WSN - A Survey - Part 4

GAF: Stands for Geographic Adaptive Fidelity. In this protocol, nodes already know their location through GPS. The network is divided into grids, and a set of nodes is assigned to each grid cell. The nodes optimize their sleep and active times without affecting routing fidelity; in this way energy is conserved, thereby increasing the network lifetime.
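The grid assignment can be sketched as follows. Coordinates and radio range are made up; the cell side of r/√5 is the standard GAF choice, which guarantees that any node in one cell can reach any node in a horizontally or vertically adjacent cell.

```python
import math

def gaf_cell(x, y, radio_range):
    """Map a GPS position to a virtual grid cell id. A cell side of
    r/sqrt(5) keeps every node in one cell able to reach every node in
    an adjacent cell."""
    side = radio_range / math.sqrt(5.0)
    return (int(x // side), int(y // side))

# Nodes sharing a cell are equivalent for routing: one stays active
# while the others sleep, without affecting routing fidelity.
nodes = {"a": (1.0, 1.0), "b": (1.5, 0.5), "c": (9.0, 9.0)}
cells = {}
for name, (x, y) in nodes.items():
    cells.setdefault(gaf_cell(x, y, radio_range=5.0), []).append(name)
```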

GEAR: Uses geographic attributes of the data to route packets toward the destination instead of flooding the whole network. Learned and estimated costs are associated with each node, and a hole occurs in a region when a node has no neighbor closer to the target than itself. The protocol consists of two phases: forwarding packets toward the target region, and forwarding packets within the region.

GOAFR: A combination of greedy and face routing algorithms. The greedy algorithm does not work well in sparse networks, and face routing aims to find the node in the network closest to the destination. By combining both algorithms, GOAFR becomes efficient in both the worst case and the average case.

SPAN: Selects some nodes as coordinators based on their positions. A node becomes a coordinator if two of its neighbors cannot reach each other directly.

Multipath routing protocol: Encourages the use of multiple paths rather than one single path. The approach introduced is to use the path with the highest residual energy until its energy falls below that of the backup path, after which the backup path is used. In this way energy resources are preserved for a long network lifetime.

Query-based and negotiation-based routing: In query-based routing, the destination node sends a query through the network; once the query is matched, the node holding the data sends it to the destination node.
Negotiation-based routing uses high-level data descriptors that eliminate redundant data transmissions through negotiation.

Non-coherent processing: Nodes locally process raw data before it is sent to other nodes for further processing. It consists of three phases: (1) target detection, data collection and preprocessing; (2) membership declaration; and (3) central node election.

Coherent data processing: Data is forwarded to aggregators after minimal processing such as time stamping and duplicate suppression. Coherent processing is typically selected to perform energy-efficient routing.

Tuesday, April 26, 2011

Reliable data transport and congestion control in wireless sensor networks-part1

One of the main challenges in WSNs is reliable data transport. Reliable data delivery and congestion control are the two major functions of the transport layer. Depending on the direction of data flow of the application, data transport can be classified into sensors-to-sink and sink-to-sensors. In this part I am going to give a brief overview of reliable data transport protocols.
PSFQ: This protocol guarantees reliable data delivery from sink to sensors. It comprises three components: the pump operation, the fetch operation and the report operation. It guarantees reliability by quickly fetching packets from neighboring nodes once a gap in the packet sequence is detected, and it performs hop-by-hop loss recovery. The main drawback of PSFQ is its use of in-sequence forwarding for message delivery to accomplish the "pump slowly" operation, which wastes precious bandwidth.
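The sequence-gap detection that triggers PSFQ's "fetch quickly" operation can be sketched as below; the actual NACK sent to neighbors is stubbed out and the sequence numbers are illustrative.

```python
def detect_gaps(received_seqs):
    """Return the missing sequence numbers between the first and the last
    sequence number seen so far."""
    seen = set(received_seqs)
    if not seen:
        return []
    return [s for s in range(min(seen), max(seen) + 1) if s not in seen]

# Packets 3 and 4 were lost in transit: a PSFQ node would now fetch them
# from its neighbors before resuming the slow pump.
missing = detect_gaps([1, 2, 5, 6])
```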
RMST: Its primary goal is the delivery of large pieces of data to all subscribed sinks. RMST is NACK-based; it places the responsibility for loss detection on the receivers (which can be intermediate nodes as well as the actual sinks). Requests for missing fragments are unicast from the sink toward the source, and caches in intermediate nodes allow for fast recovery. The scheme lacks congestion control and energy efficiency.
GARUDA: Uses a core-recovery idea to implement reliable downstream data delivery. Some nodes in the network play the role of loss-recovery servers, and every non-core node needs a one-hop connection to at least one core node. GARUDA performs a two-stage recovery. Its design is not optimized for very large messages, and therefore it does not use features such as pipelining, which are critical for reducing data propagation latency in large networks.

Routing Techniques in Wireless Sensor Networks: A Survey - Part2

Directed Diffusion: A data-centric paradigm where the Base Station (BS) broadcasts interests and sources reply to those interests. Even though this protocol is very popular, it has its drawbacks: it is unsuitable for one-time queries, is not applicable to applications that require continuous data delivery to the BS, and is not always energy efficient.


Rumor Routing: This protocol floods events rather than queries; each node maintains an event table and generates agents which propagate event information to distant nodes. The protocol is energy-efficient and handles node failures, but it does not work well when the number of events is large.


MCFA: This algorithm exploits the fact that the direction of routing is always known, so nodes do not have to maintain a routing table. Each message is broadcast to the neighbors, and a node forwards it only if it lies on the least-cost path between the sensor node and the base station.


Gradient-Based Routing (GBR): The key idea of this protocol is to memorize the height of each node (the minimum number of hops to the base station) when the interest is diffused. GBR uses data aggregation and traffic spreading to divide the traffic uniformly over the network. Three different data dissemination techniques are discussed in GBR: the stochastic scheme, the energy-based scheme and the stream-based scheme.


COUGAR: This protocol adds a query layer that lies between the network and application layers. Sensor nodes select a leader node to transmit the data, and the Base Station generates a query plan which specifies the flow of data and the selection of a leader for a query. In this protocol, adding a query layer to each node is an overhead, and leader nodes must be dynamically maintained to prevent them from becoming hot spots.


ACQUIRE: This protocol divides complex queries into sub-queries. When the BS sends a query, each node tries to respond to it before forwarding it to another node. The protocol was not evaluated through simulations.


Energy-Aware Routing: The main objective of this protocol is to increase network lifetime. A set of paths is maintained based on energy levels. Route setup is complicated in this protocol.


LEACH: In this protocol a few nodes act as cluster heads, and the role is rotated among nodes. The protocol operates in two phases: in the setup phase clusters are organized and cluster heads are selected, and in the steady phase data is transferred to the base station. Some issues with LEACH: it assumes all nodes can transmit with enough power to reach the BS, and it assumes cluster heads consume roughly the same energy as non-cluster-head nodes.
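The rotating cluster-head election is commonly written as a threshold test: node n elects itself in round r with probability T(n) = p / (1 − p · (r mod 1/p)) if it has not served as head in the current epoch of 1/p rounds, and 0 otherwise. A sketch (the value of p, the node count and the RNG seed are illustrative):

```python
import random

def leach_threshold(p, r, was_head_this_epoch):
    """T(n) = p / (1 - p * (r mod 1/p)) for nodes that have not yet been
    cluster head in the current epoch of 1/p rounds, else 0."""
    if was_head_this_epoch:
        return 0.0
    epoch = round(1.0 / p)   # rounds per epoch; round() guards float noise
    return p / (1.0 - p * (r % epoch))

def elect_heads(node_ids, p, r, recent_heads, rng):
    """Each node independently draws a random number against its threshold."""
    return [n for n in node_ids
            if rng.random() < leach_threshold(p, r, n in recent_heads)]

rng = random.Random(42)
heads = elect_heads(range(100), p=0.05, r=0, recent_heads=set(), rng=rng)
```

By the last round of an epoch the threshold reaches 1, so every node that has not yet served is forced to take a turn, which is what rotates the energy burden.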

Monday, April 25, 2011

Secure Routing in Wireless Sensor Networks: Attacks and Countermeasures

The paper highlights the importance of security mechanisms in wireless sensor network routing protocols. The authors point out that adding security to a routing protocol after the design is complete is very difficult due to the limited processing capabilities of the sensor nodes; hence, security should be incorporated as one of the design parameters of a routing protocol. Various security threats and their countermeasures, for both generic and specific routing protocols, are discussed.

Wednesday, April 20, 2011

TAG: a Tiny Aggregation service for Ad-Hoc Sensor Networks

This paper presents TAG, an aggregation service for ad hoc sensor networks. In a centralized approach, the sensed data from all the nodes in the network is passed to the base station for further processing; TAG instead has the nodes process the sensed data locally, applying an aggregation function as the data is passed between nodes. This allows for a great reduction in communication overhead and large energy savings.

At startup, TAG configures the network into a tree structure, so each node ends up being either a leaf node or a node with children (a parent node). The root of the tree talks directly to the base station. When data from the nodes is requested, every node receives the data query (the distribution phase). Then, every node senses its parameter (temperature, light, magnetic field, acceleration, sound, power, etc.), performs the aggregation function (MAX, MIN, COUNT, AVG, SUM, MEDIAN, HISTOGRAM, etc.), and sends the result to its parent node for further processing. This measurement, processing and propagation continues until the root is reached (the collection phase). At that point, the final hop from the root to the base station carries the final value of the query and no further processing is necessary.
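The collection phase can be sketched for AVG, which cannot be computed by averaging children's averages; TAG therefore propagates a mergeable partial state record, commonly encoded as (sum, count), that only the base station finalizes. The readings below are made up:

```python
def init_avg(value):
    return (value, 1)                   # partial state for a single reading

def merge_avg(a, b):
    return (a[0] + b[0], a[1] + b[1])   # a parent combines children's states

def finalize_avg(state):
    s, c = state
    return s / c                        # only the base station does this

# Leaf readings flow up the tree, merged hop by hop at each parent.
leaves = [init_avg(v) for v in [10.0, 20.0, 30.0, 40.0]]
root_state = leaves[0]
for st in leaves[1:]:
    root_state = merge_avg(root_state, st)
result = finalize_avg(root_state)
```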

In the centralized approach, the data from every node must travel the whole path to the base station, which adds up and typically makes the nodes close to the base station exhaust their batteries first. TAG, however, requires only one transmission per link, with data flowing from the leaf nodes all the way up to the root. This accounts for the communication and energy-consumption reduction mentioned before.

Some optimizations reduce the communication needed to complete queries even further. For example, hypothesis testing: nodes overhear their neighbors and can decide not to transmit if they know they will not contribute to the aggregate (e.g., holding a value below the current MAX does not change the final result). The SQL HAVING clause reduces communication by discarding data via comparisons such as <, > and <>. Other techniques overcome data loss, for example caching: parent nodes remember recent records from their children and use them whenever the links to those children are lost.

Experimentation shows that the communication overhead can be reduced by ~50% when comparing TAG to a centralized approach, even without applying the optimizations mentioned.

The nodes used by TAG are small full-fledged computers called motes, running an operating system called TinyOS. The queries use an SQL-like syntax, which is very easy to use compared with low-level languages like C. In this way, professionals from many different disciplines can write declarative queries in a short time.

Monday, April 18, 2011

Reliable data transport and congestion control in wireless sensor networks

Reliable data delivery and congestion control are two fundamental transport layer functions. Due to the specific characteristics of Wireless Sensor Networks (WSNs), traditional transport layer protocols (e.g. Transmission Control Protocol (TCP) and User Datagram Protocol (UDP)) that are widely used in the Internet may not be suitable for WSNs.
In this paper, the characteristics of WSNs are reviewed and the requirements and challenges of reliable data transport over WSNs are presented. The issues with applying traditional transport protocols to WSNs are discussed, followed by a survey of recent research progress in developing suitable transport protocols for WSNs. The proposed reliable data transport and congestion control protocols for WSNs are reviewed and summarised. Finally, some future research directions for transport protocols in WSNs are described.

Routing Techniques in Wireless Sensor Networks: A Survey

In this paper the author investigates many routing, power management, and data dissemination protocols. Wireless Sensor Networks (WSNs) consist of small nodes with sensing, computation, and wireless communication capabilities.
The focus, however, has been given to the routing protocols, which may differ depending on the application and network architecture. The paper presents a survey of state-of-the-art routing techniques in WSNs: the author first outlines the design challenges for routing protocols in WSNs, followed by a comprehensive survey of different routing techniques.
Overall, the routing techniques are classified into three categories based on the underlying network structure: flat, hierarchical, and location-based routing. Furthermore, these protocols can be classified into multipath-based, query-based, negotiation-based, QoS-based, and coherent-based depending on the protocol operation.
The paper studies the design trade-offs between energy and communication overhead savings in every routing paradigm, highlights the advantages and performance issues of each routing technique, and concludes with possible future research areas.

Tuesday, April 12, 2011

Routing Techniques - Part 3

PEGASIS - This is a greedy technique to build the chain of nodes along which data packets are aggregated. The network lifetime of this protocol is about twice that of the LEACH protocol.
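The greedy chain construction can be sketched as a nearest-neighbor walk starting from the node farthest from the base station; the coordinates and base-station position below are made up.

```python
def greedy_chain(nodes, start):
    """Build a PEGASIS-style chain: from the start node, repeatedly append
    the nearest node that is not yet on the chain."""
    chain = [start]
    remaining = set(nodes) - {start}
    while remaining:
        last = chain[-1]
        nxt = min(remaining,
                  key=lambda n: (n[0] - last[0]) ** 2 + (n[1] - last[1]) ** 2)
        chain.append(nxt)
        remaining.discard(nxt)
    return chain

nodes = [(0, 0), (1, 0), (2, 0), (5, 0)]
# Start from (5, 0), the node farthest from an assumed BS at (-1, 0).
chain = greedy_chain(nodes, start=(5, 0))
```

Data is then aggregated hop by hop along the chain, with one elected node transmitting the result to the base station.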

TEEN & APTEEN - The cluster head broadcasts thresholds along with different attributes to its group, which decide when the cluster nodes switch on their transmitters. This functionality helps save energy in the network.

MECN - This is another clustering technique, where sub-networks are formed based on the transmission power of the nodes. The sub-network used for transmission is the one that consumes the least transmission power.

VGA - The entire network is divided into grids, and each grid forms a separate cluster. The routing mechanism aggregates data locally at each cluster head in the grid and globally at the designated master aggregator.

HPAR - The Max-min zPmin algorithm routes data packets along the path that maximizes the minimum remaining power over the nodes on the route.
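The max-min objective can be sketched as a widest-path variant of Dijkstra's algorithm that maximizes the bottleneck residual power along the route. The topology and energy values are made up, and the zPmin part of the algorithm (bounding total power consumption at z times the minimum) is omitted from this sketch.

```python
import heapq

def max_min_path(graph, power, src, dst):
    """graph: node -> list of neighbors; power: residual energy per node.
    Return the best bottleneck (max over routes of the min node power)
    from src to dst, or None if dst is unreachable."""
    best = {src: power[src]}
    heap = [(-power[src], src)]          # max-heap via negated bottleneck
    while heap:
        neg_b, u = heapq.heappop(heap)
        b = -neg_b
        if u == dst:
            return b
        if b < best.get(u, -1):          # stale heap entry
            continue
        for v in graph.get(u, []):
            nb = min(b, power[v])        # bottleneck through v
            if nb > best.get(v, -1):
                best[v] = nb
                heapq.heappush(heap, (-nb, v))
    return None

graph = {"s": ["a", "b"], "a": ["d"], "b": ["d"], "d": []}
power = {"s": 10, "a": 2, "b": 6, "d": 9}
bottleneck = max_min_path(graph, power, "s", "d")   # via b: min(10, 6, 9)
```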

Monday, March 14, 2011

Hierarchical Power Management in Disruption Tolerant Networks with Traffic-Aware Optimization

In this paper, the author investigates power management in DTNs with high randomness in node mobility. The author presents a hierarchical power management framework in which nodes control two radio interfaces to discover contacts, and provides traffic-aware approximation algorithms to save energy while discovering enough contacts to deliver the traffic load in the network. Simulation results from two mobility models show that the generalized power management mechanism achieves better energy efficiency than mechanisms relying on only one radio for contact discovery. In addition, when the traffic load can be predicted, the approximation algorithms help nodes save a significant amount of energy while handling the expected traffic load. Finally, the additional information allows the one-radio architecture to save energy equivalent to the two-radio architecture, overcoming the disadvantage of not having the additional radio.

Wednesday, March 9, 2011

A wake-up scheme for sensor networks: Achieving balance between energy saving and end-to-end delay.

The energy consumption of wireless sensors is critical since their energy supply often comes from a small battery with limited lifetime. Techniques like STEM (Sparse Topology and Energy Management) use a low-power wake-up channel in addition to a high-power data channel; the idea is to minimize energy consumption by turning off the data channel when it is not in use. However, there is a drawback: the low-power wake-up channel reduces the maximum range, and STEM has considerable latency in the wake-up process.
This paper proposes a technique named PTW (Pipelined Tone Wakeup) that also incorporates two radios, but on the wake-up channel a tone is transmitted for a period long enough for all neighbors to detect it. All of them are awakened, but only one receives the notification packet; the awakened nodes that do not receive it go back to sleep after a timeout expires. The energy saving is achieved by shifting the duty of the wake-up from the receiver to the sender: the receiver alternates between sleeping and tone monitoring, with a shorter listening period and a longer sleeping period compared to STEM.
The end-to-end delay improvement is achieved by overlapping the data transmission time (data channel) with the wake-up time (wake-up channel), so a node can be receiving data and sending the wake-up tone at the same time.

Tuesday, March 8, 2011

DMAC - An Adaptive Energy-Efficient and Low-Latency MAC for Data Gathering in Sensor Networks

In general, MAC protocols for sensor networks utilize active/sleep duty cycles to improve energy efficiency and prolong network lifetime. However, this causes the data forwarding interruption problem, whereby the nodes on a multihop path to the sink are not all awake while data delivery is in progress, resulting in sleep latency. The paper proposes DMAC, which eliminates the sleep latency by basing the active/sleep schedule of a node on its depth in the forwarding tree, so nodes wake up sequentially as data moves up the tree. The paper also proposes mechanisms to adapt the duty cycles to current traffic conditions, to further reduce sleep latency and improve energy efficiency. Moreover, the authors propose a data prediction mechanism and the use of more-to-send (MTS) packets to reduce latency in scenarios where a node has more data than it can send in the current cycle, or where there is channel contention from its siblings. Though the approach works well when links are fairly reliable, the delay in packet delivery increases significantly when the channel is noisy, since a packet lost during transmission can only be re-sent in the next cycle. The authors compare the performance of DMAC with SMAC and with a MAC in which nodes are always on. The results show that DMAC has lower latency and is more energy efficient than SMAC.
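The staggered schedule can be sketched as follows, under simplifying assumptions: all slots have a fixed length mu, offsets are measured from the sink's receive slot, and the adaptive duty-cycle and MTS mechanisms are ignored. A node at depth d skews its schedule d·mu ahead of the sink, so its send slot coincides with its parent's receive slot.

```python
MU = 10   # slot length, illustrative units

def dmac_schedule(depth, sink_recv_time, mu=MU):
    """Receive/send slot start times for a node at the given tree depth:
    the schedule is skewed depth*mu ahead of the sink's, so the node's
    send slot lines up exactly with its parent's receive slot."""
    recv = sink_recv_time - depth * mu
    send = recv + mu   # forward immediately after receiving
    return recv, send

# Chain: node at depth 2 -> node at depth 1 -> sink (depth 0). Data
# cascades upward within a single cycle, with no sleep latency.
schedules = {d: dmac_schedule(d, sink_recv_time=100) for d in (0, 1, 2)}
```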

Thursday, February 17, 2011

Coverage Problems in Wireless Ad Hoc Sensor Networks

In this paper the author introduces the optimization of wireless ad hoc sensor networks by discussing one of their fundamental issues: coverage. Coverage is a measure of the quality of service of a sensor network. The author addresses the coverage problem using two key tools: Voronoi diagrams and Delaunay triangulation. The author also describes two types of coverage: deterministic and stochastic. In deterministic coverage, sensors are deployed in predefined shapes; the two deployment methods discussed are uniform and weighted deployment. In stochastic coverage, sensors are deployed randomly, and one way to analyze the coverage is through best-case and worst-case scenarios. Worst-case coverage is based on the Voronoi diagram: the maximal breach path follows the Voronoi line segments, which reflect the worst case since they lie at maximum distance from the sensors. Similarly, in best-case coverage, a path is found using the metric of closest proximity to the sensors; this path is based on the Delaunay triangulation and corresponds to the best signal strength. In the end, with the help of sensor deployment heuristics, it is shown that breach and support can be improved by adding extra sensors to a randomly placed sensor network.

Thursday, February 10, 2011

Localization for Mobile Sensors (Part 2)

The basic ideas behind the operation of the Monte Carlo localization (MCL) algorithm are:
- Prediction phase: use the information about previous location estimates to obtain new estimates of the current positions.
- Update phase: based on the received observations, the weights of the samples are updated.
- Normalization phase: the new weights are normalized to ones and zeros, so that the posterior distribution can be approximated.
The prediction and update phases contain recursive calculations that depend on data obtained from previous iterations. After the normalization phase, the weak weights are discarded, since we only want to concentrate on the trajectories with the larger weights. If too many samples are discarded and the current number of samples falls below a certain threshold (typically 50), re-sampling is performed to keep enough valid samples.
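The prediction, filtering and re-sampling steps can be sketched in one dimension. Everything here is an illustrative simplification: the parameters, the binary keep/discard weights, and the single observation "a seed was heard at position s, so the node is within range r of it".

```python
import random

V_MAX, RANGE, MIN_SAMPLES = 1.0, 2.0, 50   # illustrative parameters

def mcl_step(samples, seed_pos, rng):
    """One MCL iteration in 1-D: predict, filter against an observation,
    and re-sample if too few samples survive."""
    # Prediction: each sample moves at most V_MAX in either direction.
    predicted = [x + rng.uniform(-V_MAX, V_MAX) for x in samples]
    # Update/filter: keep only samples consistent with hearing the seed
    # (weights collapse to one/zero, so normalization is just filtering).
    kept = [x for x in predicted if abs(x - seed_pos) <= RANGE]
    # Re-sampling: if too many samples were discarded, draw valid ones.
    while len(kept) < MIN_SAMPLES:
        kept.append(rng.uniform(seed_pos - RANGE, seed_pos + RANGE))
    return kept

rng = random.Random(1)
samples = [rng.uniform(0.0, 10.0) for _ in range(200)]
samples = mcl_step(samples, seed_pos=5.0, rng=rng)
estimate = sum(samples) / len(samples)   # location estimate: sample mean
```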
The concept of resolution limit is introduced, and it refers to the probability that a node can move a distance d without changing connectivity. This is an important parameter for this technique.
Implementing security in this algorithm is more feasible than with other techniques, since it supports bidirectional verification and key establishment, and location estimates are continually refreshed. When nodes and seeds move, rogue nodes can cause only limited damage.
The algorithm was evaluated extensively and compared with other techniques such as Centroid and Amorphous, with accuracy as the key metric in all experiments. MCL outperforms the other techniques in accuracy when seed and/or node density increases and when the radio range presents irregularities, but it is greatly affected by group motion, so in the latter case motion control is required.
The main result is surprising and counterintuitive: in this algorithm, mobility can improve accuracy and reduce the cost of localization, even with severe memory limits, low seed density and irregular node transmissions. Future work is required regarding security and other types of motion.

Localization for Mobile Sensor Networks (Part 1)

Localization is an important step for many applications such as vehicle tracking, environment monitoring and location-based routing. Although there has been a lot of work addressing localization, most of it does not consider scenarios where the nodes in the network experience non-uniform and uncontrolled motion. Although localization in the latter case appears harder, this paper proposes a method that exploits the mobility of the nodes to estimate their locations. The paper uses a sequential Monte Carlo method that recursively filters the location samples of the nodes and produces a reasonable location estimate. The central idea of the approach is to use the velocity of the nodes to predict their potential posterior locations and to recursively filter out impossible estimates using new observations that nodes receive from seeds.

Tuesday, February 8, 2011

Sequence Based Localization in WSN

This is an RF-signal-based localization scheme that works even in the presence of channel error. The core idea of this novel approach is to divide the entire localization space into regions by constructing the perpendicular bisectors between each pair of reference nodes (the nodes whose locations are known); these regions are called vertices, edges and faces. The authors introduce the concept of a location sequence, which is the combination of distance ranks from each reference node to a constructed region. The length of this location sequence depends on the number of reference nodes in the localization space, and the sequences are compared using two statistical metrics: Spearman's rank-order correlation coefficient and Kendall's tau, with Kendall's tau shown to yield less localization error. The unknown node estimates its location by constructing its own location sequence from RSS measurements; the centroid of the region with the nearest matching location sequence is the identified location of the unknown node. SBL shows an improvement in localization error in comparison to other localization approaches.
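The sequence-matching step can be sketched with a plain implementation of Kendall's tau over rank sequences. The region names and sequences below are made up; a real deployment would derive them from the bisector construction described above.

```python
def kendall_tau(a, b):
    """Kendall rank correlation between two equal-length sequences:
    (concordant pairs - discordant pairs) / total pairs."""
    n = len(a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (a[i] - a[j]) * (b[i] - b[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

def locate(measured_seq, region_seqs):
    """Return the region whose stored sequence best matches the measured
    RSS-derived rank sequence; its centroid would be the location estimate."""
    return max(region_seqs, key=lambda r: kendall_tau(measured_seq, region_seqs[r]))

regions = {"face1": [1, 2, 3, 4], "face2": [4, 3, 2, 1], "edge1": [2, 1, 3, 4]}
best = locate([1, 2, 4, 3], regions)   # closest in rank order to face1
```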

Monday, February 7, 2011

Ad Hoc Positioning System (APS)

In this paper the authors propose a distributed, hop-by-hop positioning algorithm (APS) to provide approximate locations for all nodes in a network. The key features of APS are that it is decentralized, needs no special infrastructure, and provides absolute positioning. It uses a simplified version of GPS triangulation: for an arbitrary node to estimate its own position in the plane, it needs distance estimates to a number (at least 3) of landmarks. Immediate neighbors of a landmark use signal-strength measurements to estimate the distance to it, whereas nodes further away can use any of the three propagation methods discussed in the paper. The authors discuss the pros and cons of the DV-hop, DV-distance and Euclidean propagation methods, and run simulations to evaluate their performance. They conclude that locations obtained by APS are on average less than one radio hop from the true location, and that positions produced by APS are usable by geodesic and geographic routing algorithms.
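The DV-hop method can be sketched in two steps: estimate the distance to each landmark as hop count times an average per-hop distance, then solve the three landmark circle equations by linearization. The coordinates, hop counts and per-hop correction below are illustrative.

```python
def trilaterate(landmarks, dists):
    """Solve the circle equations |X - Li| = di for X = (x, y) by
    subtracting pairs of equations, which yields a 2x2 linear system
    (exact when the distance estimates are exact)."""
    (x1, y1), (x2, y2), (x3, y3) = landmarks
    d1, d2, d3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1 ** 2 - d2 ** 2 + x2 ** 2 - x1 ** 2 + y2 ** 2 - y1 ** 2
    b2 = d1 ** 2 - d3 ** 2 + x3 ** 2 - x1 ** 2 + y3 ** 2 - y1 ** 2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# DV-hop distance estimate: hop count to each landmark times the average
# per-hop distance (the correction a landmark computes and floods).
landmarks = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
avg_hop_dist = 1.01          # assumed correction received from a landmark
hops = [7, 7, 7]             # hop counts maintained by the DV exchange
dists = [h * avg_hop_dist for h in hops]
x, y = trilaterate(landmarks, dists)
```

With these symmetric hop counts the estimate lands at the center of the three landmarks, roughly one hop-distance of error from wherever the node actually sits, consistent with the paper's "less than one radio hop" result.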

Thursday, February 3, 2011

Beautiful

The new blog looks really good :).