Simulation of a Load Balancing Routing Protocol - Chapter 3

The type of information appended depends on the DLAR scheme that is implemented. The destination accepts multiple route requests within an appropriate time frame, reply_delay, after receiving the first route request. This allows the destination to determine which routes are available and the quality of the available routes. The destination chooses the most suitable route and transmits a route reply packet using this route to the source. Once the source receives the route reply, it can begin transmitting the data on the selected route.
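As an illustration of this behaviour (a sketch, not code from the simulation described in this thesis), the destination's collection window and route selection could be organised as follows in Python. The class and field names, and the least-total-load selection rule, are assumptions made for the example; DLAR schemes may rank candidate routes by other load metrics.

    import time

    class DestinationRouteSelector:
        """Collects route requests for reply_delay seconds after the first one
        arrives from a given source, then picks the least-loaded route."""

        def __init__(self, reply_delay):
            self.reply_delay = reply_delay
            self.pending = {}  # source id -> (time first request seen, [(route, load_info), ...])

        def on_route_request(self, source, route, load_info):
            # route: list of node ids; load_info: per-hop load values appended en route
            now = time.monotonic()
            first_seen, candidates = self.pending.setdefault(source, (now, []))
            candidates.append((route, load_info))

        def reply_due(self, source):
            entry = self.pending.get(source)
            return entry is not None and time.monotonic() - entry[0] >= self.reply_delay

        def select_route(self, source):
            # Assumed metric: smallest total piggybacked load, ties broken by hop count.
            _, candidates = self.pending.pop(source)
            best_route, _ = min(candidates, key=lambda c: (sum(c[1]), len(c[0])))
            return best_route  # the route reply would be unicast back along this route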

While the route is being used, intermediate nodes piggyback load information on the data packets. The destination node uses this information to monitor the status of the route and to determine whether the route is becoming too congested. The destination can decide to find a new route before the current route fails, and broadcasts a route request in the same way that the source initiates a route request procedure. The route request is eventually propagated to the source, which can analyse the load information attached to the route request by intermediate nodes and send the next packet on the most suitable route. In this way, routes are dynamically selected during a session in order to perform load balancing and reduce congestion in the network.
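A minimal sketch of this destination-side monitoring, assuming a simple per-hop load threshold (the threshold value and the form of the piggybacked load values are illustrative assumptions, not part of the DLAR specification):

    CONGESTION_THRESHOLD = 20  # assumed limit on the load reported by any single hop

    def monitor_route(per_hop_loads, start_route_discovery):
        """Called by the destination for each data packet received; per_hop_loads is
        the load information piggybacked by the intermediate nodes on that packet."""
        if per_hop_loads and max(per_hop_loads) > CONGESTION_THRESHOLD:
            # Pre-emptively look for a better route before the current one fails,
            # using the same broadcast route-request procedure a source would use.
            start_route_discovery()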

Unlike AODV and DSR, DLAR does not allow intermediate nodes to respond to route requests; only the destination node may generate a route reply. This prevents stale route information from the caches of intermediate nodes from being used. Preventing intermediate nodes from generating route replies also avoids a flood of replies from multiple intermediate nodes with cached information, and so reduces the congestion caused by the reply storm.
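The destination-only reply rule can be made concrete with a short sketch of an intermediate node's request handler; the field and method names below are hypothetical and chosen only for this example:

    def handle_route_request(node, request, rebroadcast):
        """DLAR intermediate-node behaviour: never reply from a route cache;
        only append local load information and rebroadcast the request."""
        if request.request_id in node.seen_requests:
            return  # discard duplicates to limit flooding
        node.seen_requests.add(request.request_id)
        if node.node_id == request.destination:
            node.deliver_to_destination(request)  # only the destination generates a reply
            return
        request.route.append(node.node_id)
        request.load_info.append(node.queue_length())  # piggybacked load metric
        rebroadcast(request)  # no cached-route reply is ever generated here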

Figure 3-2 demonstrates the congestion that is created when intermediate nodes are allowed to respond to route requests with cached route replies, without regard for the congestion that can result. Assume that node 2 initiates a route request for a route to node 7 and obtains the route {2, 4, 5, 6, 7}. When node 3 requests a route to node 7, node 4 responds with a cached route, resulting in node 3 using the route {3, 4, 5, 6, 7}. This new route overlaps the previous route, and the same process occurs when node 1 requests a route to node 6. Node 1 receives the route reply from node 4 indicating that {1, 8, 4, 5, 6} is a suitable route. It can be seen that although allowing
