The ns Manual (formerly ns Notes and Documentation)1 - NM Lab at ...
    $mmonitor trace-topo                        ;# trace entire topology
    $mmonitor filter Common ptype_ $ptype(cbr)  ;# filter on ptype_ in Common header
    $mmonitor filter IP dst_ $group             ;# AND filter on dst_ address in IP header
    $mmonitor print-trace                       ;# begin dumping periodic traces to specified files

The following sample output illustrates the output file format (time, count):

    0.16 0
    0.17999999999999999 0
    0.19999999999999998 0
    0.21999999999999997 6
    0.23999999999999996 11
    0.25999999999999995 12
    0.27999999999999997 12

30.1.2 Protocol Specific configuration

In this section, we briefly illustrate the protocol-specific configuration mechanisms for all the protocols implemented in ns.

Centralized Multicast   The centralized multicast is a sparse-mode implementation of multicast, similar to PIM-SM [9]. A Rendezvous Point (RP) rooted shared tree is built for each multicast group. The actual sending of prune, join, and similar messages to set up state at the nodes is not simulated. Instead, a centralized computation agent computes the forwarding trees and sets up multicast forwarding state, 〈S, G〉, at the relevant nodes as new receivers join a group. Data packets from the senders to a group are unicast to the RP. Note that data packets from the senders are unicast to the RP even if there are no receivers for the group.

Centralised multicast routing is enabled in a simulation as follows:

    set mproto CtrMcast
    set mrthandle [$ns mrtproto $mproto]    ;# set multicast protocol

The command procedure mrtproto{} returns a handle to the multicast protocol object.
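The configuration fragment above can be embedded in a complete simulation script. The following sketch assumes a two-node topology with a CBR source and one receiver; the node names, link parameters, traffic settings, and event times are illustrative choices, not values prescribed by this manual:

```tcl
# Sketch of a centralised-multicast simulation (illustrative values).
set ns [new Simulator -multicast on]      ;# enable multicast support
set n0 [$ns node]
set n1 [$ns node]
$ns duplex-link $n0 $n1 1.5Mb 10ms DropTail

set group [Node allocaddr]                ;# allocate a multicast group address
set mproto CtrMcast
set mrthandle [$ns mrtproto $mproto]      ;# centralised multicast, as above

# CBR traffic from n0 addressed to the group
set udp [new Agent/UDP]
$ns attach-agent $n0 $udp
$udp set dst_addr_ $group
$udp set dst_port_ 0
set cbr [new Application/Traffic/CBR]
$cbr attach-agent $udp

# Receiver on n1 joins the group shortly after the simulation starts
set rcvr [new Agent/Null]
$ns attach-agent $n1 $rcvr
$ns at 0.1 "$n1 join-group $rcvr $group"
$ns at 0.2 "$cbr start"
$ns at 1.0 "exit 0"
$ns run
```

This fragment is a configuration sketch for the ns simulator and is not runnable without it; agent and traffic class names follow common ns-2 example scripts, but exact addressing variables (dst_addr_/dst_port_) vary between ns releases.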
This handle can be used to control the RP and the bootstrap router (BSR), to switch tree types for a particular group from shared trees to source-specific trees, and to recompute multicast routes.

    $mrthandle set_c_rp $node0 $node1       ;# set the RPs
    $mrthandle set_c_bsr $node0:0 $node1:1  ;# set the BSRs, specified as a list of node:priority
    $mrthandle get_c_rp $node0 $group       ;# get the current RP for the group
    $mrthandle get_c_bsr $node0             ;# get the current BSR
    $mrthandle switch-treetype $group       ;# switch to source-specific or shared tree
    $mrthandle compute-mroutes              ;# recompute routes; usually invoked automatically as needed

Note that whenever network dynamics occur or unicast routing changes, compute-mroutes{} can be invoked to recompute the multicast routes. The instantaneous re-computation feature of centralised algorithms may result in causality violations during the transient periods.
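The handle commands above can also be issued from scheduled events, for example to move the RP or change the tree type mid-simulation. A brief sketch, assuming $ns, $node0, $node1, $group, and $mrthandle are already defined as in the preceding fragments (the times and choice of nodes are arbitrary):

```tcl
$mrthandle set_c_rp $node0                       ;# initial RP
$ns at 0.5 "$mrthandle switch-treetype $group"   ;# change the tree type for this group
$ns at 1.0 "$mrthandle set_c_rp $node1"          ;# move the RP during the run...
$ns at 1.0 "$mrthandle compute-mroutes"          ;# ...and force an explicit route recomputation
```

Since compute-mroutes{} is normally invoked automatically, the explicit call here only illustrates how the handle is used inside scheduled events.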
Dense Mode   The Dense Mode protocol (DM.tcl) is an implementation of a dense-mode-like protocol. Depending on the value of the DM class variable CacheMissMode, it can run in one of two modes. If CacheMissMode is set to pimdm (the default), PIM-DM-like forwarding rules are used. Alternatively, CacheMissMode can be set to dvmrp (loosely based on DVMRP [30]). The main difference between these two modes is that DVMRP maintains parent-child relationships among nodes to reduce the number of links over which data packets are broadcast. The implementation works on point-to-point links as well as LANs, and adapts to network dynamics (links going up and down).

Any node that receives data for a particular group for which it has no downstream receivers sends a prune upstream. A prune message causes the upstream node to initiate prune state at that node. The prune state prevents that node from sending data for that group downstream to the node that sent the original prune message while the state is active. The duration for which a prune state remains active is configured through the DM class variable PruneTimeout. A typical DM configuration is shown below:

    DM set PruneTimeout 0.3        ;# default 0.5 (sec)
    DM set CacheMissMode dvmrp     ;# default pimdm
    $ns mrtproto DM

Shared Tree Mode   Simplified sparse mode (ST.tcl) is a version of a shared-tree multicast protocol. The class variable array RP_, indexed by group, determines which node is the RP for a particular group. For example:

    ST set RP_($group) $node0
    $ns mrtproto ST

At the time the multicast simulation is started, the protocol creates and installs encapsulator objects at nodes that have multicast senders, decapsulator objects at RPs, and connects them. To join a group, a node sends a graft message towards the RP of the group; to leave a group, it sends a prune message. The protocol currently does not support dynamic changes and LANs.

Bi-directional Shared Tree Mode   BST.tcl is an experimental version of a bi-directional shared tree protocol.
As in shared tree mode, RPs must be configured manually using the class array RP_. The protocol currently does not support dynamic changes and LANs.

30.2 Internals of Multicast Routing

We describe the internals in three parts: first, the classes that implement and support multicast routing; second, the specific protocol implementation details; and finally, a list of the variables used in the implementations.

30.2.1 The classes

The main classes in the implementation are the class mrtObject and the class McastProtocol. There are also extensions to the base classes: Simulator, Node, Classifier, etc. We describe these classes and extensions in this subsection. The specific protocol implementations also use adjunct data structures for specific tasks, such as timer mechanisms used by detailed dense mode and encapsulation/decapsulation agents for centralised multicast; we defer the description of these objects to the section on the particular protocol itself.
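To make the role of the McastProtocol class concrete, the following is a hypothetical OTcl skeleton of a protocol subclass, modeled on the shape of the protocols described earlier in this chapter (graft on join, prune on leave). The class name McastProtocol/Toy is invented for illustration, and the instproc bodies are placeholders; treat this as a sketch of the pattern rather than the definitive ns API:

```tcl
Class McastProtocol/Toy -superclass McastProtocol

McastProtocol/Toy instproc init { sim node } {
    # Let the base class record the simulator and node references.
    $self next $sim $node
}

McastProtocol/Toy instproc start {} {
    # Install any per-node forwarding state needed before packets flow.
}

McastProtocol/Toy instproc join-group { group } {
    # Invoked when an agent at this node joins <group>:
    # e.g. send a graft towards the RP of the group.
    $self next $group        ;# base-class bookkeeping
}

McastProtocol/Toy instproc leave-group { group } {
    # Invoked on leave: e.g. send a prune upstream.
    $self next $group
}
```

The instproc names follow those used by the shared-tree and dense-mode protocols above; whether a particular subclass must chain to the base class with "$self next" depends on the ns release.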