The target utilization (U) and the TUB present a few tradeoffs. During overload (transients), U affects the queue drain rate. A lower U increases the drain rate during transients, but reduces utilization in the steady state. Further, a higher U also constrains the size of the TUB.

A narrow TUB slows down the convergence to fairness (since the fairness formula depends on the TUB half-width Δ) but has smaller oscillations in the steady state. A wide TUB results in faster progress towards fairness, but has more oscillations in the steady state. We find that the TUB of 90%(1 ± 0.1) used in our simulations is a good choice.

The switch averaging interval affects the stability of the load factor z. Shorter intervals cause more variation in z and hence more oscillations. Longer intervals cause slow feedback and hence slow progress towards the steady state.

The OSU scheme parameters can be set relatively independently of the target workload and network extent. Variance in measurement is the key error factor in the OSU scheme, and a larger interval is desirable to smooth the effect of such variance. Some schemes, on the other hand, are very sensitive to the workload and network diameter in their choice of parameter values. An easy way to identify such schemes is that they recommend different parameter values for different network configurations. For example, a switch parameter may take a different value in a WAN configuration than in a LAN configuration. A switch generally carries some VCs travelling short distances while others travel long distances. While it is reasonable to classify a VC as a local-area or wide-area VC, it is often not correct to classify a switch as a LAN switch or a WAN switch. In a nationwide internet consisting of local networks, all switches could be classified as WAN switches. Note that the problem becomes more difficult when the scheme uses many parameters, and/or the parameters are not independent of each other.
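To make these tradeoffs concrete, the following minimal sketch computes the load factor z over one averaging interval and checks it against the TUB U(1 ± Δ). It is purely illustrative: the helper names (load_factor, region), the link rate, the interval value, and the three-way classification are assumptions, not the full OSU switch algorithm.

```python
# Minimal, illustrative sketch of the parameter roles discussed above.
# Helper names and the three-way classification are assumptions; this
# is not the full OSU switch algorithm.

LINK_RATE = 155.52e6    # link bandwidth in bits/s (assumed OC-3 link)
U = 0.90                # target utilization
DELTA = 0.10            # TUB half-width: the band is U*(1-DELTA)..U*(1+DELTA)
AVG_INTERVAL = 0.001    # switch averaging interval in seconds (assumed)

def load_factor(bits_seen: float, interval: float = AVG_INTERVAL) -> float:
    """Ratio z of the measured input rate to the target rate U * LINK_RATE."""
    input_rate = bits_seen / interval
    return input_rate / (U * LINK_RATE)

def region(z: float) -> str:
    """Outside the TUB the switch steers the total load back toward the
    target; inside it, the fairness formula (parameterized by DELTA)
    redistributes rates among the contending VCs."""
    if z > 1.0 + DELTA:
        return "overload: ask sources to decrease"
    if z < 1.0 - DELTA:
        return "underload: let sources increase"
    return "within TUB: apply fairness algorithm"

# Example: input arriving at 130 Mbps on the 155.52 Mbps link.
z = load_factor(bits_seen=130e6 * AVG_INTERVAL)
print(f"z = {z:.2f} -> {region(z)}")   # z = 0.93 -> within TUB
```

Shortening AVG_INTERVAL makes z track momentary bursts, producing the oscillations noted above; lengthening it smooths the measurement variance at the cost of slower feedback.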
5.2.3 Use Measured Rather Than Declared Loads

Many schemes prior to the OSU scheme, including the MIT scheme, used source-declared rates for computing their allocations without taking into account the actual load at the switch. In the OSU scheme, we measure the current total load. All unused capacity is allocated to contending sources. We use the source's declared rate to compute that source's allocation, but use the switch's measured load to decide whether to increase or decrease. Thus, if the sources lie or if the source's information is out of date, our approach may not achieve fairness, but it still achieves efficiency.

For example, suppose a personal computer connected to a 155 Mbps link is unable to transmit more than 10 Mbps because of its hardware/software limitations. The source declares a desired rate of 155 Mbps, but is granted 77.5 Mbps since there is another VC sharing the link going out from the switch. Now, if the computer cannot use more than 10 Mbps, the remaining 67.5 Mbps is reserved for it, cannot be used by the second VC, and the link bandwidth is wasted. With a measured load, the switch would detect the unused capacity and reallocate it to the second VC.

The technique of measuring the total load has become a minimum required part of most switch algorithms. Of course, some switches may measure each individual source's cell rate rather than relying on the information in the RM cell.

5.2.4 Congestion Detection: Input Rate vs Queue Length

Most congestion control schemes for packet networks in the past were window based. Most of these schemes use queue length as the indicator of congestion. Whenever the queue length (or its average) exceeds a threshold, the link is considered congested. This is also how the initial rate-based scheme proposals were designed.
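As a hypothetical contrast between the two indicators named in the section title, the sketch below pits a queue-length threshold test against an input-rate test. The threshold value, link rate, and helper names are assumptions for illustration, not a specification of any particular switch; the point is that a queue-length test fires only after excess traffic has already accumulated, while an input-rate test detects overload as soon as the measured rate exceeds the target.

```python
# Hypothetical contrast of the two congestion indicators discussed
# above. The threshold value and helper names are assumptions.

LINK_RATE = 155.52e6   # bits/s (assumed OC-3 link)
U = 0.90               # target utilization for the rate-based test
QUEUE_THRESHOLD = 50   # cells; assumed, in the style of queue-length schemes

def congested_by_queue(queue_length: int) -> bool:
    """Window-era test: congested once the (possibly averaged) queue
    length crosses a fixed threshold."""
    return queue_length > QUEUE_THRESHOLD

def congested_by_input_rate(bits_in: float, interval: float) -> bool:
    """Rate-based test: congested as soon as the input rate measured
    over the averaging interval exceeds the target rate U * LINK_RATE."""
    return (bits_in / interval) > U * LINK_RATE

# Input arriving at 150 Mbps exceeds the ~140 Mbps target, so the rate
# test fires immediately, while the queue test stays quiet until a
# backlog of more than 50 cells has actually built up.
print(congested_by_input_rate(bits_in=150e6 * 0.001, interval=0.001))  # True
print(congested_by_queue(queue_length=12))                             # False
```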