Thursday, February 17, 2011

Coverage Problems in Wireless Ad-hoc Sensor Networks

In this paper the authors introduce coverage, one of the fundamental quality-of-service measures in wireless ad hoc sensor networks, and treat it as an optimization problem. The coverage problem is addressed using two key computational-geometry constructs: Voronoi diagrams and Delaunay triangulation. The authors also distinguish two types of coverage: deterministic and stochastic. In deterministic coverage, sensors are deployed in predefined shapes; the two deployment methods discussed are uniform and weighted deployment. In stochastic coverage, sensors are deployed randomly, and one way to analyze such coverage is through worst-case and best-case scenarios.

Worst-case coverage (maximal breach) is based on the Voronoi diagram: a path is sought along Voronoi edges, which by construction lie at maximum distance from the surrounding sensors, and its breach weight is the smallest distance to any sensor along the path. Conversely, in best-case coverage (maximal support), a path is found using the metric of closest proximity to the sensors; this method is based on the Delaunay triangulation and yields the path with the best signal strength. Finally, using breach and support as sensor-deployment heuristics, the authors show that the coverage of a randomly placed sensor network can be improved by adding extra sensors.
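The breach and support metrics themselves are easy to state in code. This is not from the paper, just a minimal sketch assuming a path is approximated by a list of sampled points; the real algorithms search over Voronoi/Delaunay edges rather than evaluating a fixed path.

```python
import math

def breach_weight(path, sensors):
    # Breach of a path: the smallest distance from any sampled point on
    # the path to its nearest sensor. Worst-case coverage looks for the
    # path that MAXIMIZES this value (the intruder stays far from sensors).
    return min(min(math.dist(p, s) for s in sensors) for p in path)

def support_weight(path, sensors):
    # Support of a path: the largest distance from any sampled point to
    # its nearest sensor. Best-case coverage MINIMIZES this value (the
    # path hugs the sensors for the best signal strength).
    return max(min(math.dist(p, s) for s in sensors) for p in path)
```

For sensors at (0,0) and (4,0), the two-point path [(2,3), (2,2)] has breach weight sqrt(8) (its closest approach to a sensor) and support weight sqrt(13) (its farthest excursion).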

Thursday, February 10, 2011

Localization for Mobile Sensors (Part 2)

The basic ideas behind the operation of the Monte Carlo localization algorithm are:
- Prediction phase: information about previous location estimates is used to obtain new estimates of the current positions.
- Update phase: based on the received observations, the weights of the samples are updated.
- Normalization phase: the new weights are normalized (samples inconsistent with the observations are given zero weight) so that the surviving samples approximate the posterior distribution.
The prediction and update phases are recursive, depending on the results of the previous round. After the normalization phase, the weak samples are discarded, since the algorithm concentrates on trajectories with the larger weights. If too many samples are discarded and the current number of samples falls below a certain threshold (typically 50), re-sampling is performed to maintain enough valid samples.
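The phases above can be sketched in a few lines. This is a simplified illustration, not the paper's implementation: it assumes binary filtering against one-hop seed observations only, and a real implementation would bound the rejection-sampling loop (which here could spin if no candidate satisfies the observations).

```python
import math
import random

def mcl_step(samples, seeds_heard, v_max, radio_range, n_samples=50):
    # Prediction: draw each candidate uniformly from a disk of radius
    # v_max around a previous sample (the node moved at most v_max).
    # Update/filter: keep only candidates within radio_range of every
    # one-hop seed heard this step. Repeat until n_samples survive.
    new_samples = []
    while len(new_samples) < n_samples:
        x, y = random.choice(samples)
        r = v_max * math.sqrt(random.random())   # uniform over the disk
        a = random.uniform(0.0, 2.0 * math.pi)
        cand = (x + r * math.cos(a), y + r * math.sin(a))
        if all(math.dist(cand, s) <= radio_range for s in seeds_heard):
            new_samples.append(cand)
    return new_samples

def estimate(samples):
    # The location estimate is the average of the surviving samples.
    n = len(samples)
    return (sum(p[0] for p in samples) / n,
            sum(p[1] for p in samples) / n)
```

The binary keep/discard test plays the role of the 0/1 weights: normalization reduces to averaging the survivors.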
The paper introduces the concept of a resolution limit, which refers to the probability that a node can move a distance d without its connectivity changing. This is an important parameter for the technique.
The implementation of security in this algorithm is more feasible than with other techniques, since it supports bidirectional verification, key establishment and there are continued location estimates. When nodes and seeds move, rogue nodes can cause only limited damage.
The algorithm was evaluated extensively against other techniques such as Centroid and Amorphous, with accuracy as the key metric in all experiments. MCL outperforms the other techniques in accuracy as seed and/or node density increases and when the radio range is irregular, but it is significantly degraded by group motion; in the latter case, motion control is required.
The main result is surprising and counterintuitive: mobility can actually improve accuracy and reduce the cost of localization, even with severe memory limits, low seed density, and irregular radio transmissions. Future work is needed on security and on other types of motion.

Localization for Mobile Sensor Networks (Part 1)

Localization is an important step for many applications such as vehicle tracking, environment monitoring, and location-based routing. Although there has been a lot of work on localization, most of it does not consider scenarios where the nodes in the network experience non-uniform and uncontrolled motion. While mobility appears to complicate localization, this paper proposes a method that exploits the mobility of the nodes to estimate their locations. The paper uses a sequential Monte Carlo method, which recursively filters the location samples of the nodes and produces a reasonable location estimate. The central idea of the approach is to use the velocity of the nodes to predict their possible posterior locations and to recursively filter out impossible estimates using new observations that nodes receive from seeds.

Tuesday, February 8, 2011

Sequence Based Localization in WSN

This is an RF-signal-based localization scheme that works even in the presence of channel errors. The core of this novel approach is to divide the entire localization space into regions by constructing the perpendicular bisectors between each pair of reference nodes (the nodes whose locations are known); these regions are the vertices, edges, and faces of the resulting arrangement. The authors introduce the concept of a location sequence, which ranks the reference nodes by their distance to a given region. The length of a location sequence depends on the number of reference nodes in the localization space, and sequences are compared using statistical rank-correlation metrics: Spearman's rank-order correlation coefficient and Kendall's Tau, with Kendall's Tau shown to give lower localization error. The unknown node ranks the reference nodes by its RSS measurements to construct its own location sequence; the centroid of the region whose sequence most closely matches it is taken as the node's estimated location. SBL shows improved localization error in comparison with other localization approaches.
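The matching step can be sketched as follows. This is an illustrative simplification, not the authors' code: it assumes ties in distance are broken by index, and it matches a measured rank sequence against region centroids using Kendall's Tau.

```python
import itertools
import math

def rank_sequence(point, refs):
    # Location sequence: the rank of each reference node when the
    # references are ordered by distance to the given point.
    d = [math.dist(point, r) for r in refs]
    order = sorted(range(len(refs)), key=lambda i: d[i])  # ties by index
    ranks = [0] * len(refs)
    for rank, i in enumerate(order):
        ranks[i] = rank
    return ranks

def kendall_tau(a, b):
    # Kendall's Tau: (concordant - discordant) pairs, normalized to [-1, 1].
    n = len(a)
    s = sum(1 if (a[i] - a[j]) * (b[i] - b[j]) > 0 else -1
            for i, j in itertools.combinations(range(n), 2))
    return 2.0 * s / (n * (n - 1))

def localize(measured_seq, region_centroids, refs):
    # Return the centroid of the region whose location sequence best
    # matches the sequence the unknown node derived from RSS ranks.
    return max(region_centroids,
               key=lambda c: kendall_tau(measured_seq, rank_sequence(c, refs)))
```

In practice the unknown node never computes distances; its sequence comes from ordering the references by received signal strength, which is what makes the scheme robust to channel error (only the ranks must survive, not the absolute RSS values).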

Monday, February 7, 2011

Ad Hoc Positioning System (APS)

In this paper the authors propose a distributed, hop-by-hop positioning algorithm (APS) that provides approximate locations for all nodes in a network. The key features of APS are that it is decentralized, needs no special infrastructure, and provides absolute positioning. It uses a simplified version of GPS triangulation: for an arbitrary node to estimate its own position in the plane, it needs distance estimates to a number (at least three) of landmarks. Immediate neighbors of a landmark use signal-strength measurements to estimate their distance to it, whereas nodes further away can use any of the three propagation methods discussed in the paper. The authors present and discuss the pros and cons of the DV-hop, DV-distance, and Euclidean propagation methods, and they ran simulations to evaluate their performance. They conclude that locations obtained by APS are on average less than one radio hop from the true location, and that the positions produced by APS are usable by geodesic and geographic routing algorithms.
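The DV-hop method is the easiest of the three to sketch. A minimal illustration, assuming the flooding of hop counts has already happened: each landmark converts its known distances and hop counts to the other landmarks into an average distance-per-hop correction, which regular nodes then multiply by their own hop counts.

```python
import math

def dv_hop_correction(landmark, other_landmarks, hop_counts):
    # Correction factor at a landmark: total true distance to the other
    # landmarks divided by the total hop count to them, i.e. the average
    # distance covered per hop in this part of the network.
    total_dist = sum(math.dist(landmark, l) for l in other_landmarks)
    total_hops = sum(hop_counts)
    return total_dist / total_hops

def estimate_distance(hops_to_landmark, correction):
    # A regular node estimates its distance to a landmark as its hop
    # count to that landmark times the correction it received.
    return hops_to_landmark * correction
```

With three or more such distance estimates, the node can triangulate its position just as in GPS; the correction factor is what lets a hop count stand in for a measured range.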

Thursday, February 3, 2011

Beautiful

The new blog looks really good :).