Traffic Management for the Available Bit Rate (ABR) Service in ...
is equal to the sum of the TCP window sizes of all TCP connections. In other words, the buffering requirement for ABR becomes the same as that for UBR if we take the source queues into consideration. This observation is true only in certain ABR networks. If the ATM ABR network is an end-to-end network, the source end systems can directly flow control the TCP sources. In such a case, TCP will do a blocking send, i.e., the data will go out of the TCP machine's local disk to the ABR source's buffers only when there is sufficient space in the buffers. The ABR service may also be offered in backbone networks, i.e., between two routers. In these cases, the ABR source cannot directly flow control the TCP sources. The ABR flow control moves the queues from the network to the sources. If the queues overflow at the source, TCP throughput will degrade. We substantiate this in section 8.15.5.

Note that we have studied the case of infinite traffic (like a large file transfer application) on top of TCP. In section 8.19, we show that bursty (idle/active) applications on TCP can potentially result in unbounded queues. However, in practice, a well-designed ABR system can scale well to support a large number of applications like bursty WWW sources running over TCP [79].

8.15 Factors Affecting Buffering Requirements of TCP over ATM-ABR Service

In this section, we present sample simulation results to substantiate the preceding claims and analyses. We also analyze the effect of some important factors affecting the ABR buffering requirements. The key metric we observe is the maximum queue length. We look at the effect of VBR in two separate sections, the second of which deals with multiple MPEG-2 sources using a VBR connection.
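The worst-case source-side buffering described above can be illustrated with a small back-of-the-envelope sketch. The connection count and window size below are hypothetical illustrative values, not figures from the chapter's simulations:

```python
# Hedged sketch: when ABR flow control pushes queues from the network back
# to the sources, the worst-case source-side buffering is the sum of the
# TCP window sizes of all connections -- the same total that UBR would
# otherwise need inside the network.

def abr_source_buffering_bytes(tcp_window_sizes):
    """Worst-case ABR source buffering = sum of all TCP windows (bytes)."""
    return sum(tcp_window_sizes)

# Hypothetical example: 15 TCP connections, each with a 64 KB window.
windows = [64 * 1024] * 15
print(abr_source_buffering_bytes(windows))  # 983040 bytes, i.e. about 0.94 MB
```

If these source buffers are undersized and overflow, the loss occurs before cells ever enter the ATM network, which is why TCP throughput can still degrade even though the switches themselves do not drop cells.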
All our simulations use the n source configuration presented earlier. Recall that it has a single bottleneck link shared by n ABR sources. All links run at 155.52 Mbps and are of the same length. We experiment with the number of sources and the link lengths. All traffic is unidirectional. A large (infinite) file transfer application runs on top of TCP for the TCP sources. N may assume the values 1, 2, 5, 10, 15 and the link lengths 1000, 500, 200, 50 km. The maximum queue bounds also apply to configurations with heterogeneous link lengths.

The TCP and ABR parameters are set in the same way as in the earlier simulations. For satellite round trip (550 ms) simulations, the window is set using the TCP window scaling option to 34000 × 2^8 bytes.

8.15.1 Effect of Number of Sources

In Table 8.2, we notice that although the buffering required increases as the number of sources is increased, the amount of increase slowly decreases. As later results will show, three RTTs worth of buffers are sufficient even for a large number of sources. In fact, one RTT worth of buffering is sufficient in many cases: for example, the cases where the number of sources is small. The rate allocations among contending sources were found to be fair in all cases.

8.15.2 Effect of Round Trip Time (RTT)

From Table 8.3, we find that the maximum queue approaches 3 × RTT × link bandwidth, particularly for metropolitan area networks (MANs) with RTTs in the range of 1.5 ms to 6 ms. This is because the RTT values are lower and in such cases, the effect
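The 3 × RTT × link bandwidth bound can be evaluated directly for the link rate used in these simulations. The sketch below uses the 155.52 Mbps rate and the MAN and satellite RTTs mentioned in the text; the conversion to ATM cells (53 bytes each) is an assumption added for illustration:

```python
# Hedged sketch: evaluate the "3 x RTT x link bandwidth" maximum-queue bound
# for the OC-3 link rate used in the simulations, expressed in ATM cells.

LINK_RATE_BPS = 155.52e6   # link rate from the text (155.52 Mbps)
CELL_BYTES = 53            # ATM cell size (assumed for the cell conversion)

def queue_bound_cells(rtt_s, multiple=3, rate_bps=LINK_RATE_BPS):
    """Queue bound = multiple x RTT x link bandwidth, converted to cells."""
    bound_bytes = multiple * rtt_s * rate_bps / 8
    return bound_bytes / CELL_BYTES

# MAN-scale RTTs from the text (1.5 ms and 6 ms) and the satellite RTT (550 ms).
for rtt_ms in (1.5, 6, 550):
    cells = queue_bound_cells(rtt_ms / 1000)
    print(f"RTT {rtt_ms:>6} ms -> 3*RTT*bandwidth bound ~ {cells:,.0f} cells")
```

For the 550 ms satellite case the bound is roughly 32 MB of buffering, which is why the TCP window scaling option is needed there: the default 64 KB TCP window could never fill such a pipe.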