Traffic Management for the Available Bit Rate (ABR) Service in ...
Note that the service capabilities in such a situation are also affected by a Use-it-or-Lose-it (UILI) implementation at the source or switch (as described in chapter 7). The UILI mechanism would reduce the source rate of the VCs carrying bursty TCP connections, and hence control the queues at the network switches. The effect of UILI in such conditions is for future study.

However, it has been shown that such worst-case scenarios may not appear in practice, due to the nature of WWW applications and the ABR closed-loop feedback mechanism [79]. Note that since WWW traffic exhibits higher variation, techniques like averaging of the load, and compensation for residual error (queues), as described in section 8.16.1, need to be used to minimize the effects of load variation. In summary, though bursty applications over TCP can potentially result in unbounded queues, a well-designed ABR system can scale well to support a large number of applications like bursty WWW sources running over TCP.

8.20 Summary of TCP over ABR results

This section unifies the conclusions drawn in each of the sections of this chapter (see sections 8.13, 8.15.6, 8.16.2, 8.18.5, and 8.19). In brief, the ABR service is an attractive option for supporting TCP traffic scalably. It offers high throughput for bulk file transfer applications and low latency to WWW applications. Further, it is fair to connections, which means that access will not be denied, and performance will not be unnecessarily degraded, for any of the competing connections. This chapter shows that it is possible to achieve zero cell loss for a large number of TCP connections with a small amount of buffers. Hence, ABR implementors can trade off the complexity of managing buffers, queueing, scheduling, and drop policies against the complexity of implementing a good ABR feedback scheme.

In section 8.13 we first noted that TCP can achieve maximum throughput over ATM if switches provide enough buffering to avoid loss. The study of TCP dynamics over the ABR service showed that initially, when the network is underloaded or the TCP sources are starting to send new data, the sources are limited by their congestion window sizes (window-limited) rather than by the network-assigned rate (rate-limited). When cell losses occur, TCP loses throughput for two reasons: (a) a single cell loss causes a whole TCP packet to be lost, and (b) TCP loses time due to its large timer granularity. Intelligent drop policies (like avoiding the drop of "End of Message (EOM)" cells, and "Early Packet Discard (EPD)") can help improve throughput. A large number of TCP sources can increase the total throughput, because each window size is small and the effect of timeout and the slow start procedure is reduced. We also saw that the ATM-layer "Cell Loss Ratio (CLR)" metric is not a good indicator of TCP throughput loss. Further, we saw that the switch buffers should not be dimensioned based on the ABR source parameter "Transient Buffer Exposure (TBE)". Buffer dimensioning should be based upon the performance of the switch algorithm (for ABR) and the round-trip time.

In section 8.15.6, we summarized the derivation and simulation of the switch buffer requirements for maximum throughput of TCP over ABR. The factors affecting the buffer requirements are the round-trip time, switch algorithm parameters, feedback delay, presence of VBR traffic, and two-way TCP traffic. For a switch algorithm like ERICA, the buffer requirement is about 3 × RTT × link bandwidth. The derivation is valid for infinite applications (like file transfer) running over TCP. Though the queueing
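The Early Packet Discard policy discussed above can be sketched in a few lines. This is a minimal illustration, not the switch design studied in this chapter; the class name, the cell representation as (VC, end-of-message) pairs, and the threshold value are all assumptions made for the example. The key idea it shows is that once the queue exceeds a threshold, the switch drops every cell of a *newly arriving* AAL5 frame, so that TCP never wastes link capacity carrying the remaining cells of a packet that is already doomed:

```python
from collections import deque

class EPDQueue:
    """Minimal sketch of Early Packet Discard (EPD) at an ATM switch port.

    Cells are (vc, is_eom) pairs; is_eom marks the last cell of an AAL5
    frame. When the queue exceeds `threshold` at a frame boundary, the
    entire new frame on that VC is discarded, rather than dropping cells
    mid-frame (which would still deliver a useless partial packet).
    """
    def __init__(self, threshold):
        self.threshold = threshold
        self.queue = deque()
        self.discarding = set()  # VCs whose current frame is being dropped
        self.mid_frame = set()   # VCs with a partially admitted frame

    def on_cell(self, vc, is_eom):
        """Return True if the cell is queued, False if it is dropped."""
        if vc in self.discarding:            # finish dropping this frame
            if is_eom:
                self.discarding.discard(vc)
            return False
        # EPD decision is made only at the start of a frame.
        if vc not in self.mid_frame and len(self.queue) > self.threshold:
            if not is_eom:                   # multi-cell frame: keep dropping
                self.discarding.add(vc)
            return False
        self.queue.append((vc, is_eom))
        if is_eom:
            self.mid_frame.discard(vc)       # frame completed
        else:
            self.mid_frame.add(vc)           # frame in progress: admit rest
        return True
```

A frame whose first cell was admitted is queued to completion even above the threshold, which preserves packet integrity at the cost of a slightly deeper queue.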
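The 3 × RTT × link bandwidth rule of thumb for an ERICA-like switch can be turned into a concrete buffer size. The sketch below uses illustrative numbers (an OC-3 line rate of 155.52 Mb/s and a 30 ms wide-area round-trip time) that are assumptions for this example, not figures from the text:

```python
import math

CELL_BITS = 53 * 8  # an ATM cell is 53 bytes

def abr_buffer_cells(rtt_s, link_bps, factor=3.0):
    """Buffer requirement, in cells, per the '3 x RTT x link bandwidth'
    rule observed for ERICA-like switch algorithms."""
    return math.ceil(factor * rtt_s * link_bps / CELL_BITS)

# Illustrative: OC-3 line rate, 30 ms WAN round-trip time.
cells = abr_buffer_cells(0.030, 155.52e6)
print(cells, "cells =", cells * 53 / 1e6, "MB")  # about 33012 cells (~1.75 MB)
```

For a campus-scale RTT of 1 ms the same rule gives only about 1,100 cells, which illustrates why the round-trip time, rather than the TBE source parameter, should drive buffer dimensioning.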