Traffic Management for the Available Bit Rate (ABR) Service in ...
TCP smoothes out its window increase by using small steps on the receipt of every new ACK from the destination. If the flow of ACKs is smooth, the window increases are smooth. As a result, TCP data cannot load the network in sudden bursts. However, when traffic flows in both directions on TCP connections, the behavior is more complex. This is because the acknowledgements may be received in bursts, rather than being spaced out over time. As a result, the TCP window increases by amounts larger than one MSS, and the source sends data in larger bursts.

In the worst case, all the acknowledgements for the previous cycle are received at once, i.e., acknowledgements for D = 1 RTT × Link bandwidth bytes of data are received at once. This results in 2D bytes of data (D due to data using the portion of the window acknowledged, and D due to data using the expansion of the window by the slow-start exponential increase policy) being sent in the current cycle. Assuming that there is bandwidth available to transmit this data up to the bottleneck, the worst-case queue is 2D. Observe that the total average load measured over an RTT is still twice the link bandwidth; however, there is an extra instantaneous queue of D = 1 RTT × Link bandwidth. This is because the entire queue builds up within a cell transmission time. In the case when the TCP load is smooth, two cells are input for every cell drained from the queue.

Observe that once the sources are rate-limited, the TCP burstiness due to ACK compression is hidden from the ATM network because the excess cells are
buffered at the end-system, and not inside the ATM network. This is a significant performance benefit considering the fact that without rate control the buffer requirement may be very large [82].

7. Effect of VBR backgrounds: The presence of higher-priority background traffic implies that the ABR capacity is variable. There are two implications of the variation in capacity: a) the effect on the rate of TCP ACKs and the window growth, and b) the effect on the switch rate allocation algorithm (ERICA).

Part a): The effect of VBR background on the TCP "ack clock" (i.e., the rate of TCP acknowledgements (ACKs)) is described below.

If VBR comes on during zero ABR load, it does not affect the TCP ACK clock because there are no ABR cells in the pipe. This is the case when VBR comes on during the idle periods of TCP, especially in the initial startup phase of TCP.

If VBR comes on during low load (initial load less than unity), such that the resulting total load is less than unity, it increases the TCP packet transmission time. Increased packet transmission time means that the inter-ACK time is increased (called "ACK expansion"). An increase in inter-ACK time slows down the window increase, since the "ack clock" now runs slower. Slower window increases imply reduced ABR load in the next cycle, irrespective of whether VBR is off or on.

Specifically, if VBR goes off, the inter-ACK spacing starts decreasing by half every round trip (due to the TCP window doubling every round trip). Since we have assumed that the system was in low load previously, this
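The worst-case queue arithmetic for ACK compression can be checked with a short sketch. The link rate and round-trip time below are illustrative assumptions, not values from the text:

```python
# Worst-case burst from ACK compression, following the reasoning above.
# Assumed, illustrative parameters: a 155 Mbps link and a 30 ms RTT.
link_bandwidth = 155e6 / 8   # link bandwidth in bytes/second
rtt = 0.030                  # round-trip time in seconds

# D = 1 RTT x Link bandwidth: the data acknowledged in one cycle.
D = rtt * link_bandwidth

# If all ACKs for the previous cycle arrive at once, the source sends
# D bytes against the acknowledged window plus D bytes from the
# slow-start window expansion, i.e. a 2D-byte burst.
burst = 2 * D

# If the burst reaches the bottleneck unthrottled, the worst-case queue
# is the entire 2D burst, even though the load averaged over the RTT is
# still twice the link bandwidth.
print(f"D = {D:.0f} bytes, worst-case queue = {burst:.0f} bytes")
```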
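The "ACK expansion" effect in part a) can also be sketched numerically: when VBR occupies a fraction of the link, the remaining ABR capacity shrinks and the inter-ACK spacing stretches by the reciprocal of what is left. The link rate, packet size, and VBR fraction below are assumed values for illustration:

```python
# "ACK expansion": higher-priority VBR traffic reduces ABR capacity,
# which stretches the TCP packet transmission time and hence the
# inter-ACK spacing that clocks TCP's window growth.
# Assumed, illustrative parameters follow.
link_rate = 155e6 / 8    # bytes/second available when no VBR is active
packet_size = 9180       # bytes; the default IP MTU over ATM AAL5

def inter_ack_time(vbr_fraction):
    """Inter-ACK spacing when VBR occupies the given fraction of the link."""
    abr_capacity = (1.0 - vbr_fraction) * link_rate
    return packet_size / abr_capacity

base = inter_ack_time(0.0)       # no VBR: the normal ACK clock
expanded = inter_ack_time(0.5)   # VBR takes half the link
# The ACK clock slows by a factor of 1 / (1 - vbr_fraction), here 2x.
print(f"ACK expansion factor: {expanded / base:.1f}x")
```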