Final Test and Evaluation Report
IST-2000-25394 Project Moby Dick
D0504
Final Test and Evaluation Report
Contractual Date of Delivery to the CEC: 31 December 2003
Actual Date of Delivery to the CEC: 16 January 2004
Author(s): partners in WP5 (cf. page 3)
Participant(s): partners in WP5
Workpackage: WP5
Security: Public
Nature: Report
Version: 1.0
Total number of pages: 86
Abstract:
This document describes the final tests performed on the definitive Moby Dick implementation. User tests with real applications evaluate the degree to which the project meets its goals, and expert tests evaluate the performance of Moby Dick.
Keyword list:
final, field trial, validation, testing, test-bed.
Moby Dick WP5 1 / 86
d0504.doc Version 1.0 15.01.2004
Executive Summary
This document describes the final tests performed on the definitive Moby Dick implementation. User tests with real applications evaluate the degree to which the project meets its goals, and expert tests evaluate the performance of Moby Dick.
The test beds are presented first; then the expert tests aimed at evaluating Moby Dick performance are described, and their results are gathered and analysed, showing clearly how Moby Dick works and giving recommendations for 4G networks.
The user tests are also presented. These tests put users in front of real applications running over the Moby Dick infrastructure, letting them experience all Moby Dick functionality and asking for their opinions, thus establishing whether the Moby Dick objectives are fulfilled. As an annex, the different public Moby Dick demonstrations are described.
Authors
Partner Name Phone / Fax / E-mail
T-Systems Hans J. Einsiedler Phone: +49 30 3497 3518
Fax: +49 30 3497 3519
E-mail: hans.einsiedler@t-systems.com
NEC M. Liebsch Phone: +49 (0) 62-21-90511-44
Fax: +49 (0) 62-21-90511-45
E-mail: mobile@ccrle.nec.de
Ralf Schmitz Phone: +49-6221-13 70 8-12
Fax: +49-6221-13 70 8-28
E-mail: Ralf.Schmitz@ccrle.nec.de
Telemaco Melia Phone: +49 (0) 62-21-90511-44
Fax: +49 (0) 62-21-90511-45
E-mail: mobile@ccrle.nec.de
UC3M Antonio Cuevas Phone: +34 916248838
Fax: +34 916248749
E-mail: acuevas@it.uc3m.es
Carlos García Phone: +34 916248802
Fax: +34 916248749
E-mail: cgarcia@it.uc3m.es
Carlos J. Bernardos Phone: +34 916248756
Fax: +34 916248749
E-mail: cjbc@it.uc3m.es
Pablo Serrano-Yañez Phone: +34 916248756
Fax: +34 916248749
E-mail: isoto@it.uc3m.es
Jose I. Moreno Phone: +34 916249183
Fax: +34 916248749
E-mail: jmoreno@it.uc3m.es
USTUTT Jürgen Jähnert Phone: +49-711-685-4273
Fax: +49-711-678-8363
E-mail: jaehnert@rus.uni-stuttgart.de
Jie Zhou Phone: +49.711.6855531
E-mail: zhou@rus.uni-stuttgart.de
Hyung-Woo Kim Phone: +49.711.6854514
E-mail: kim@rus.uni-stuttgart.de
GMD / FhG Davinder Pal Singh Phone: +49-30-3463-7175
Fax: +49-30-3463-8175
E-mail: singh@fokus.fhg.de
Cristian Constantin Phone: +49 30 3463 7146
E-mail: constantin@fokus.fhg.de
PTIN Victor Marques Phone: +351.234.403311
Fax: +351.234.420722
E-mail: victor-m-marques@ptinovacao.pt
Rui Aguiar Phone: +35-1-234-381937
Fax: +351.234.381941
E-mail: ruilaa@it.pt
Diogo Gomez Phone: +35-1-234-381937
Fax: +351.234.381941
E-mail: dgomes@av.it.pt
MOTOROLA Christophe Beaujean Phone: +33-1-69354812
Fax: +33-1-69352501
E-mail: christophe.beaujean@motorola.com
EURECOM Michelle Wetterwald Phone: +33-493-002631
Fax: +33-493-002627
E-mail: michelle.wetterwald@eurecom.f
UKR Piotr Pacyna Phone: +48-12-6174040
Fax: +48-12-6342372
E-mail: pacyna@kt.agh.edu.pl
Janusz Gozdecki Phone: +48-12-6173599
Fax: +48-12-6342372
E-mail: gozdecki@kt.agh.edu.pl
ETHZ Hasan Phone: +41 1 632 7012
Fax: +41 1 632 1035
E-mail: hasan@tik.ee.ethz.ch
Pascal Kurtansky Phone: +41 1 632 7012
Fax: +41 1 632 1035
E-mail: pascal.kurtansky@ethz.ch
I2R Parijat Mishra E-mail: parijat@i2r.a-star.edu.sg
Table of Contents
AUTHORS .............................................................. 3
ABBREVIATIONS ........................................................ 9
1. INTRODUCTION ..................................................... 11
2. TESTBEDS USED FOR THE TESTS ...................................... 12
2.1 Test Bed in Madrid .............................................. 12
2.2 Test Bed in Stuttgart ........................................... 13
2.3 Other test beds ................................................. 14
2.3.1 TD-CDMA integration Testbed in Eurecom ........................ 14
2.3.2 AAA Testbed in Fokus .......................................... 16
2.3.3 Logging & auditing testbed in ETH ............................. 16
2.3.4 WCDMA + DiffServ testbed in Motorola .......................... 17
2.3.5 Overall test bed in PTIN ...................................... 18
3. EXPERT TESTS DESCRIPTION AND METHODOLOGY ......................... 19
3.1 Introduction .................................................... 19
3.2 Log Transfer Delay .............................................. 19
3.2.1 Test Description .............................................. 19
3.2.2 Measurement Parameters ........................................ 19
3.2.3 Measurement Process and Configuration ......................... 20
3.3 Auditing Speed .................................................. 21
3.3.1 Test Description and Measurement Parameters ................... 22
3.3.2 Measurement Process and Configuration ......................... 22
3.4 Charging performance ............................................ 23
3.4.1 Test description .............................................. 23
3.4.2 Measurement parameters ........................................ 23
3.4.3 Test realization, measurement process, interactions required .. 23
3.5 AAA Scalability tests ........................................... 23
3.5.1 Tests description and motivation .............................. 23
3.5.2 Variables to measure .......................................... 24
3.5.3 Measurement process, test realization ......................... 24
3.6 DSCP Marking Software (DMS) ..................................... 25
3.6.1 Filter loading ................................................ 25
3.6.1.1 Test description ............................................ 25
3.6.1.2 Measurement parameter ....................................... 25
3.6.1.3 Test realization ............................................ 25
3.6.2 DSCP dumping from AAA ......................................... 25
3.6.2.1 Test description ............................................ 25
3.6.2.2 Measurement parameter ....................................... 25
3.6.2.3 Test realization ............................................ 26
3.6.3 DMS and IPsec ................................................. 26
3.6.3.1 Test description ............................................ 26
3.6.3.2 Test realization ............................................ 26
3.7 QoS entities communication delays ............................... 26
3.7.1 Test description and purpose .................................. 26
3.7.2 Variables to measure, measurement parameters .................. 26
3.7.3 Measurement process, test realization, interactions required .. 27
3.8 QoS context installation time in the QoSM in the nAR ............ 27
3.8.1 Tests purpose and description ................................. 27
3.8.2 Variables to measure .......................................... 27
3.8.3 Measurement process, test realization ......................... 27
3.9 FHO ............................................................. 27
3.9.1 Test description .............................................. 27
3.9.2 Measurement parameters ........................................ 28
3.10 Paging ......................................................... 29
3.10.1 Test description ............................................. 29
3.10.2 Measurement parameters ....................................... 29
3.10.3 Test realization ............................................. 29
3.11 Inter-domain Handover .......................................... 30
3.11.1 Test description ............................................. 30
3.11.2 Measurement parameters ....................................... 30
3.11.3 Measurement process .......................................... 30
4. EXPERT TESTS REALIZATION, RESULTS, ANALYSIS AND ASSESSMENT ....... 30
4.1 Introduction .................................................... 30
4.2 Log Transfer Delay .............................................. 30
4.2.1 Message Transfer Time ......................................... 30
4.2.2 Logs Transfer Time ............................................ 33
4.2.3 Conclusions ................................................... 38
4.3 Auditing Speed .................................................. 38
4.3.1 Entity Availability ........................................... 38
4.3.2 User Registration ............................................. 40
4.3.3 Service Authorization ......................................... 43
4.3.4 Conclusions ................................................... 44
4.4 AAA Scalability tests ........................................... 44
4.4.1 AAA Scalability tests at FhG FOKUS ............................ 44
4.4.1.1 Scalability tests done in Madrid and Stuttgart .............. 45
4.4.2 Analysis of results ........................................... 46
4.5 Charging performance ............................................ 46
4.5.1 Results ....................................................... 46
4.5.2 Analysis of results ........................................... 47
4.6 DSCP Marking Software (DMS) ..................................... 47
4.6.1 Filter loading ................................................ 47
4.6.2 DSCP dumping from AAA ......................................... 47
4.6.3 DMS and IPsec ................................................. 48
4.7 QoS entities communication delays ............................... 49
4.7.1 Results ....................................................... 49
4.7.2 Analysis of results ........................................... 49
4.8 QoS context installation time in the QoSM in the nAR ............ 50
4.8.1 Results ....................................................... 50
4.8.2 Analysis of results ........................................... 51
4.9 FHO ............................................................. 51
4.9.1 Testbed in Madrid ............................................. 51
4.9.1.1 Results ..................................................... 51
4.9.1.2 Analysis of results ......................................... 52
4.9.2 Testbed in Stuttgart .......................................... 53
4.9.2.1 Results ..................................................... 53
4.9.2.2 Analysis of results ......................................... 54
4.10 Paging ......................................................... 56
4.10.1 Testbed in Madrid ............................................ 56
4.10.1.1 Test results ............................................... 56
4.10.1.2 Analysis of test results ................................... 56
4.10.2 Testbed in Stuttgart ......................................... 61
4.10.2.1 Test results ............................................... 61
4.10.2.2 Analysis of test results ................................... 61
4.11 Inter-domain Handover .......................................... 64
4.11.1 Results ...................................................... 64
4.11.2 Analysis of results .......................................... 65
5. USER TESTS ....................................................... 65
5.1 VoIP ............................................................ 65
5.1.1 Description of the test ....................................... 65
5.1.2 Realization ................................................... 65
5.1.3 Results ....................................................... 67
5.1.4 Expert evaluation: Characterization of RAT traffic pattern .... 67
5.2 Video Streaming ................................................. 67
5.2.1 Description ................................................... 67
5.2.2 Test Realization .............................................. 68
5.2.3 Results ....................................................... 69
5.3 Internet radio .................................................. 69
5.3.1 Description ................................................... 69
5.3.2 Realization ................................................... 70
5.3.3 Results ....................................................... 71
5.3.4 Expert evaluation: TCP during FHO ............................. 74
5.4 Internet coffee ................................................. 74
5.4.1 Results ....................................................... 75
5.5 Quake 2 championship ............................................ 75
5.5.1 Results ....................................................... 76
6. CONCLUSIONS ...................................................... 77
7. REFERENCES ....................................................... 78
8. ANNEX: PUBLIC DEMONSTRATIONS ..................................... 79
8.1 Mobile Summit in Aveiro, Moby Dick Demonstration ................ 79
8.2 Moby Dick Summit in Stuttgart ................................... 80
8.3 Moby Dick workshop in Singapore ................................. 85
8.4 Demonstration to high school students at UC3M ................... 86
Abbreviations
Abbreviation   Full Name
3GPP           Third Generation Partnership Project
AAA            Authentication, Authorisation and Accounting
AAA.f          Foreign AAA server
AAA.h          Home AAA server
AAAC           Authentication, Authorisation, Accounting and Charging
AD             Administrative Domain
AF             Assured Forwarding per-hop behaviour
AG             Access Gateway
AM             Acknowledged Mode (of RLC)
AN             Access Network
API            Application Programming Interface
AR             Access Router
AS             Access Stratum
ASM            Application Specific Module
ASN.1          Abstract Syntax Notation One
B3G            Beyond Third Generation
BACK           Binding Acknowledgement
BU             Binding Update
CN             Corresponding Node
CoA            Care-of Address
COPS           Common Open Policy Service protocol
COPS-ODRA      COPS - Outsourcing Diffserv Resource Allocation
CPU            Central Processing Unit
CVS            Concurrent Versions System
DB             Database system
DFN            Deutsches Forschungsnetz
DoCoMo         -
DSCP           Differentiated Services Code Point
DiffServ       Differentiated Services
DNS            Domain Name System
EF             Expedited Forwarding per-hop behaviour
ESSID          Extended Service Set Identifier
EUI            End User Identifier
EUI-64         End User Identifier (64-bit format)
Euro6IX        -
FBACK          Fast handover Binding Acknowledgement
FBU            Fast handover Binding Update
FIFA           -
GRAAL          Generic Radio Access Adaptation Layer
HA             Home Agent
HMIPv6         Hierarchical Mobile IP version 6
HTTP           HyperText Transfer Protocol
HTTP(S)        Secure HyperText Transfer Protocol
ICMP           Internet Control Message Protocol
IETF           Internet Engineering Task Force
IP             Internet Protocol
LAN            Local Area Network
MAC            Medium Access Control (layer)
MAQ            Mobility-AAA-QoS sequence
MIND           -
MIP            Mobile IP
MIPv6          Mobile IP version 6
MN             Mobile Node
MT             Mobile Terminal
MTNM           Mobile Terminal Networking Manager
nAR            New Access Router
NAS            Network access stratum
NCP            Networking Control Panel
oAR            Old Access Router
OVERDRIVE      -
PA             Paging Agent
PDCP           Packet Data Convergence Protocol
PEP            Policy Enforcement Point
PHY            Physical layer (in terms of the ISO/OSI reference model)
QoS            Quality of Service
QoSB           Quality of Service Broker including a policy server
RAN            Radio Access Network
RedIRIS        Informatic Resources Interconnection Network
REQ            Request message
RG             Radio Gateway
RLC            Radio Link Control (layer)
RR(subnet)     Radio Router for a specific sub-network
RRC            Radio Resource Control
RT Linux       Real-time Linux
SLA            Service Level Agreement
SLS            Service Level Specification
SNMP           Simple Network Management Protocol
SP             Service Provider
Ssh            Secure shell application
TTI            Transmission Time Interval
UPS            User profile subset
UPS            User Profile Specification
URP            User Registration Process
USA            United States of America
VAT            -
VIC            -
VoIP           Voice over IP
W-CDMA         Wideband Code Division Multiple Access
WCDMA          Wideband Code Division Multiple Access
WLAN           Wireless LAN
1. Introduction
This deliverable reports on the final trials carried out in the Moby Dick project. It is a direct follow-up of D0503 (see [12]), in which parts of the Moby Dick architecture were evaluated and feedback was given to the different work packages to guide them towards the final implementation of the Moby Dick architecture, which is evaluated here.
With the whole Moby Dick architecture now implemented, its evaluation, described in this document, allows the following fundamental questions to be answered:
- Does the Moby Dick implementation fulfil the planned goals?
- What is the performance of the Moby Dick architecture?
The deliverable also aims to give valuable advice for future 4G networks, based on the successful Moby Dick experience, and to describe the public Moby Dick demonstrations carried out since the delivery of D0503.
To answer these questions, this deliverable is structured as follows. First the test beds are presented; then the expert tests aimed at evaluating Moby Dick performance are described, and their results are gathered and analysed. The user tests are presented in the next section. These tests put users in front of real applications running over the Moby Dick infrastructure, letting them experience all Moby Dick functionality and asking for their opinions, thus establishing whether the Moby Dick objectives are fulfilled. Finally, the conclusions are gathered. As an annex, the different public Moby Dick demonstrations are described.
2. Testbeds used for the tests
2.1 Test Bed in Madrid
Madrid's test bed has already been described in earlier deliverables such as D0503, see [5]. Small changes were made to the test bed architecture in Madrid. A clear core network had to be defined, and all access routers must have one access interface and one core interface. Furthermore, one machine was added to the Madrid test bed so that the Paging Agent and the Home Agent could be physically separated. Also, to improve testing capabilities, a tuneable network emulator was installed at the border of the UC3M network to shape the connection with Stuttgart.
Changes in topology are:
- Termita eth1 attached to the 2001:0720:410:1003 sub-network
- Larva10 and larva9 Ethernet connections moved to sub-network 2001:0720:410:1004
Three machines were added:
- Viudanegra: an AR for the whole Madrid Moby Dick network. It lies between grillo and the 2001:0720:410:1003 sub-network. It has two Ethernet interfaces, one attached to the 2001:0720:410:1003 sub-network and the other to grillo.
- Larva1: an IPv6-in-IPv4 tunnel is established between larva1 and viudanegra, because the NistNET emulator (used to shape the connection) runs only over IPv4.
- Aranya: it takes over the function of the AAAC server, which was previously hosted on escarabajo. The PA is also installed there. It is connected to the 2001:0720:410:1003 sub-network.
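An IPv6-in-IPv4 tunnel of the kind used between larva1 and viudanegra can be sketched with standard Linux commands. This is only an illustrative sketch: the interface name mdtun and all addresses below are placeholders (RFC 5737/3849 documentation ranges), not the actual testbed configuration.

```shell
# Create an IPv6-in-IPv4 (sit) tunnel endpoint, as e.g. on viudanegra.
# 192.0.2.1 (local) and 192.0.2.2 (remote) are placeholder IPv4
# addresses; 2001:db8::/32 is a placeholder IPv6 prefix.
ip tunnel add mdtun mode sit local 192.0.2.1 remote 192.0.2.2 ttl 64
ip link set dev mdtun up
ip -6 addr add 2001:db8:0:1::1/64 dev mdtun

# Route IPv6 traffic for the far side of the tunnel through it.
ip -6 route add 2001:db8:0:2::/64 dev mdtun

# A mirrored configuration (local/remote swapped) would be applied on
# the other endpoint. Because the encapsulated traffic travels as plain
# IPv4 between the two endpoints, an IPv4-only emulator such as NistNET
# can shape that path (adding delay, loss or rate limits) even though
# the payload is IPv6.
```

The design point is that the emulator never needs to understand IPv6: it only sees IPv4 packets carrying the tunnelled traffic, which is why the tunnel makes an IPv4-only tool usable in an IPv6 testbed.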
The final test bed is shown in Figure 1.
[Network diagram not reproduced: it shows the GEANT connection through grillo (CISCO 7500), the sub-network 2001:0720:0410:1003::/64 with escarabajo (Home Agent, QoS Broker) and aranya (Paging Agent, AAAC server), the access routers viudanegra, cucaracha and termita, the WLAN sub-networks 2001:0720:0410:1004::/64, 2001:0720:0410:1006::/64 and 2001:0720:0410:1007::/64, the mobile nodes larva9, larva10, escorpion, chinche, garrapata and coleoptero, the correspondent node pulga, and the IPv6-in-IPv4 tunnel via larva1.]
Figure 1. Madrid testbed
2.2 <strong>Test</strong> Bed in Stuttgart<br />
Some changes have been made to the Stuttgart testbed since the last deliverable, D0503 (see [2]). We<br />
enhanced the testbed to contain two domains and to allow testing handovers between two domains (inter-domain<br />
handover). These two domains are marked in the figure below with different colors.<br />
[Network diagram omitted: the Stuttgart testbed, showing Domain-I (ipv6.rus.uni-stuttgart.de) and Domain-II (ipv6.ks.uni-stuttgart.de), the border router kssun10 (DNS/AR) towards 6net, access routers ksat10, ksat48, ksat49, ksat73 and ksat81, the TD-CDMA Radio Gateway ksat51, and the servers and terminals listed below.]<br />
Figure 2. Stuttgart testbed<br />
The main enhancement of the Stuttgart testbed is the TD-CDMA testbed, which does not appear in the Madrid<br />
testbed. The TD-CDMA testbed contains one access router (ksat73) and one Radio Gateway (ksat51), which<br />
connects to an antenna. The Mobile Terminal ksat67 has a TD-CDMA interface, which can connect to the<br />
Radio Gateway.<br />
The core network of Domain I is 2001:638:202:11::/64, with the domain name ipv6.rus.uni-stuttgart.de.<br />
There are three further access routers in this domain: ksat10 (Ethernet), ksat48 and ksat49<br />
(WLAN). So this domain offers three different access technologies, i.e. Ethernet, 802.11 Wireless LAN<br />
and TD-CDMA. The other components in this domain are listed as follows:<br />
Ksat13: Home Agent<br />
Ksat30: Paging Agent<br />
Ksat42: QoS Broker<br />
Ksat52: AAAC Server<br />
Ksat46, ksat66: Corresponding Node (application server)<br />
Ksat54, ksat58: Mobile Terminal<br />
In order to present a public demonstration (see Annex) during the final review in another building, we<br />
extended our testbed (see Figure 2) by using a VLAN to create a core network 2001:638:202:11::/64 in the<br />
demo building. This extension contains the following hosts:<br />
Ksat77, ksat78: WLAN Access Router<br />
Ksat79: Ethernet Access Router<br />
Ksat80: TD-CDMA Access Router<br />
Ksat74: Radio Gateway<br />
Ksat68: TD-CDMA Mobile Terminal<br />
[Network diagram omitted: the public demo testbed in the ETI building, with the VLAN core 2001:638:202:11::/64, access routers AR6 to AR9 (ksat77, ksat78, ksat79, ksat80), the Radio Gateway ksat74, and the terminals ksat56 and ksat68.]<br />
Figure 3: Public Demo Testbed<br />
Domain II has the domain name ipv6.ks.uni-stuttgart.de; its core network is 2001:638:202:111::/64.<br />
We distinguish the domains according to the first 56 bits of the IPv6 address. There is only one WLAN<br />
access router (ksat81) in this domain, but it also has its own Home Agent (ksat35), QoS Broker (ksat31) and<br />
AAAC server (ksat24). With this setup we can perform inter-domain handovers between these two<br />
domains.<br />
The Stuttgart testbed connects to the Madrid testbed through 6net. The border router is kssun10, which has<br />
a tunnel to 6net; our IPv6 DNS also runs on this machine. All the machines in Domain I are<br />
reachable from the Madrid site.<br />
2.3 Other test beds<br />
2.3.1 TD-CDMA integration Testbed in Eurecom<br />
This testbed was built for the integration meeting held in Sophia Antipolis in March 2003 and was used<br />
intensively afterwards to complete the TD-CDMA integration.<br />
It contains the full Moby Dick architecture, including mobility (Home Agent and Paging Agent), the AAAC<br />
server, and QoS (QoS Manager and QoS Broker). To demonstrate handover, it implements two access<br />
technologies: Ethernet and TD-CDMA.<br />
[Network diagram omitted: the Eurecom testbed, with the Mobile Terminal Calvin, the W-CDMA Radio Gateway Golgoth13, the W-CDMA access router JECKEL, the MURRET access router, HECKEL (HA, AAA-C, QoS Broker), the IPv6 application server Agoumi (PA), a UMTS switch, the MobyRouter with IPv4/IPv6 DNS, and a 6Wind router towards the Eurecom network.]<br />
Figure 4: Eurecom testbed (Overall Description)<br />
[Diagram omitted: the same Eurecom testbed annotated with the IPv6 address (prefix 2001:660:382::/48) of each interface.]<br />
Figure 5: Eurecom testbed (IPv6 Addressing)<br />
The testbed was used to integrate the components in several steps, described below:<br />
Phase 1: TD-CDMA traffic alone<br />
Phase 2: TD-CDMA + Mobility<br />
Phase 3: TD-CDMA + AAAC<br />
Phase 4: TD-CDMA + Mobility + AAAC<br />
Phase 5: TD-CDMA + QoS<br />
Further integration could then be completed in the Stuttgart testbed.<br />
2.3.2 AAA <strong>Test</strong>bed in Fokus<br />
The following configuration is used for the tests:<br />
[Diagram omitted: the MN attaches over WLAN to the AR, which reaches the AAA home server over Ethernet.]<br />
Figure 6. AAA Fokus testbed configuration<br />
Notes:<br />
• The MN runs dummy_urp<br />
• The attendant in the AR runs disc in client mode<br />
• The AAA home server runs disc in server mode<br />
2.3.3 Logging & auditing testbed in ETH<br />
All hosts are located in the same subnet to minimize network delay. The clock of each host is<br />
synchronized using NTP (Network Time Protocol).<br />
[Diagram omitted: Access Router 1 and Access Router 2 each host an LLM (LLM-1, LLM-2) with its Local Log; both transfer logs to the CLM on the AAAC server, which stores them in the Audit Trail.]<br />
Figure 7. Testbed for Log Transfer<br />
The databases <strong>and</strong> the mySQL Server are located in the same host as the Auditor.<br />
[Diagram omitted: the Auditor processes the Audit Trail on the AAAC server and produces Audit Reports and Archives.]<br />
Figure 8. Testbed for Auditing.<br />
2.3.4 WCDMA+ DiffServ testbed in Motorola<br />
[Diagram omitted: the CN reaches the MN via the HA, two ARs and the RG.]<br />
Figure 9. WCDMA+ DiffServ testbed in Motorola<br />
Motorola performed the following tests:<br />
• Marking software tested on MN and AR.<br />
• TD-CDMA Non Access Stratum tested on MN and RG, with a dummy version of the TD-CDMA<br />
Access Stratum on MN and RG.<br />
• MTNM tested on MN with dummy versions of the FHO, Registration and Paging modules on MN.<br />
• MTNM tested on MN with dummy versions of the Registration and Paging modules on MN and a real<br />
version of FHO on MN and AR.<br />
2.3.5 Overall test bed in PTIN<br />
Figure 10. PTIN testbed<br />
3. Expert <strong>Test</strong>s description <strong>and</strong> methodology<br />
3.1 Introduction<br />
This section provides a description of the expert evaluation procedures (including the trial scenarios<br />
involved). The description is divided into subsections, each one covering a specific part<br />
of the Moby Dick software.<br />
3.2 Log Transfer Delay<br />
In this document the Log Transfer Delay (LTD) is defined as the average time required to transfer a<br />
single log stored in a Local Log into a central Audit Trail. The Local Log is accessed by the Local Log<br />
Management Module (LLM), while the Audit Trail is accessed by the Central Log Management Module<br />
(CLM). Within Moby Dick each Access Router (AR) owns a Local Log and the AAAC Server owns the<br />
Audit Trail. Both the Local Log and the Audit Trail are implemented as mySQL databases.<br />
3.2.1 <strong>Test</strong> Description<br />
The Log Transfer Delay certainly depends on the network delay, database access time, hardware speed, etc.;<br />
however, there are other, more interesting parameters which may also have an impact. These parameters<br />
are:<br />
n_local_logger = number of Local Log Management Modules (LLM),<br />
n_log = number of logs in the Local Log (stored in the mySQL database).<br />
Therefore,<br />
LTD = f(n_local_logger, n_log)<br />
The relation of LTD to n_local_logger and n_log reflects how well the LLM and the CLM are designed and<br />
implemented.<br />
There are three types of event logs, i.e., Entity Availability, User Registration, and Service Authorization<br />
Event Logs, which differ in log size (the amount of information contained within each event log) and rate of<br />
generation. The different log sizes also lead to different LTDs for each event type. However, these<br />
differences are expected to be bounded by a fixed value, because each event type typically has a fixed<br />
maximum size. Investigating the LTD in this respect cannot show the quality of the implementation of the<br />
LLM and the CLM; it can only show how big those differences are. Therefore, only the LTD of Entity<br />
Availability Event Logs is evaluated in this document. The different rates of log generation of each event<br />
type and the varying rates of log generation within certain event types (e.g., user registration and service<br />
authorization events occur intermittently) cause an unequal distribution of the number of event logs within each<br />
fixed time interval. The current implementation of the LLM queries the mySQL database (Local Log) periodically<br />
to obtain the event logs that fall within a fixed time interval. In this evaluation all logs are made available in<br />
advance, before being fetched by the LLMs and transferred to the CLM. In practice, every LLM and the<br />
CLM run while event logs are generated; therefore, the number of logs obtained by the implemented<br />
mySQL query in practice will be far less than in this evaluation. In other words, the evaluation is carried<br />
out under a tougher condition: the LLMs have to work harder in this “batch mode”.<br />
3.2.2 Measurement Parameters<br />
The measurement points within the LLMs <strong>and</strong> the CLM are shown in the following time diagram.<br />
A Message Transfer Time (MTT) is defined as the time that has elapsed between the timepoint where<br />
LLM was about to deliver the message to the underlying communication layer (TCP) <strong>and</strong> the timepoint<br />
where the message has been received <strong>and</strong> successfully disassembled by CLM.<br />
Logs Transfer Time (LTT) is defined as the time that has elapsed between the timepoint where LLM was<br />
about to retrieve the logs <strong>and</strong> the timepoint where CLM has stored the last log. Note here that LTD is<br />
LTT divided by the number of logs.<br />
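The MTT and LTD definitions above reduce to simple timestamp arithmetic. The following sketch illustrates the computation; the function names and sample timestamps are ours, not part of the LLM/CLM implementation.<br />

```python
# Minimal sketch of the MTT and LTD computations defined above.
# All names and timestamp values are illustrative.

def message_transfer_time(sending_start, received):
    """MTT: from just before the LLM hands the message to TCP
    until the CLM has received and disassembled it (seconds)."""
    return received - sending_start

def log_transfer_delay(retrieval_start, last_storing_end, n_logs):
    """LTD: Logs Transfer Time (LTT) divided by the number of logs."""
    ltt = last_storing_end - retrieval_start
    return ltt / n_logs

mtt = message_transfer_time(10.000, 10.012)  # one message, hypothetical stamps
ltd = log_transfer_delay(0.0, 25.0, 5000)    # 5000 logs transferred in 25 s
```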
[Time diagram omitted: in the LLM, the logs falling within each n-minute interval are retrieved, assembled into messages and sent; in the CLM, each message is received and disassembled, its AVP values are extracted, and a mySQL query stores the logs. The Message Transfer Time runs from Message Sending Start to Message Received; the Logs Transfer Time runs from the first Logs Retrieval Start to the last Log Storing End.]<br />
Figure 11. Measurement points in LLM and CLM<br />
3.2.3 Measurement Process <strong>and</strong> Configuration<br />
Two configurations have been used for the measurement of MTT <strong>and</strong> LTT. In the first configuration only<br />
one LLM is involved, while in the second configuration both LLMs are involved.<br />
To measure the MTT, the testbed is configured as follows:<br />
• The Local Log contains 5000 Entity Availability (EA) Event Logs.<br />
• The measurement points are the Message Sending Start timepoint in the LLM and the Message Received<br />
timepoint in the CLM.<br />
• The measurements are carried out for three cases:<br />
o Only LLM-1 (the LLM in Access Router 1) is involved<br />
o Only LLM-2 (the LLM in Access Router 2) is involved<br />
o Both LLM-1 and LLM-2 are involved<br />
In the experiment with 2 LLMs, the LLMs need to be started as simultaneously as possible.<br />
The following table shows the configuration and the required measurements.<br />
Configuration: Event Type = EA; #Logs per LLM = 5000; LLMs involved: [ ] LLM-1, [ ] LLM-2, [ ] Both<br />
Message No. | Message Sending Start | Message Received<br />
1 | |<br />
2 | |<br />
… | |<br />
5000 | |<br />
To measure the LTT, the testbed is configured as follows:<br />
• Varying amounts of logs in the Local Log:<br />
o Experiments with 1 LLM use the following amounts of logs: 500, 1000, 2000, 5000,<br />
10000, 20000, 50000<br />
o Experiments with 2 LLMs use the following amounts of logs per LLM: 500, 1000, 2500,<br />
5000, 10000, 25000<br />
• The measurement points are the first Logs Retrieval Start in the LLM and the last Log Storing End<br />
in the CLM:<br />
LTT = Last Log Storing End - First Logs Retrieval Start<br />
LTD = LTT / total number of logs transferred<br />
• The measurements are carried out for three cases:<br />
o Only LLM-1 is involved<br />
o Only LLM-2 is involved<br />
o Both LLM-1 and LLM-2 are involved<br />
In experiments with 2 LLMs, the LLMs need to be started as simultaneously as possible. The LTD<br />
in an experiment with 2 LLMs must be seen from the CLM's point of view, where the total<br />
amount of logs received from all the LLMs is of interest. In this case the First Retrieval Start is<br />
the earliest of all the First Retrieval Starts of the LLMs and the Last Log Storing End is the latest of<br />
all the Last Log Storing Ends.<br />
• The same configuration is repeated two or three times.<br />
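For experiments with two LLMs, the CLM-side bookkeeping described above can be sketched as follows; all names and values are illustrative.<br />

```python
# Sketch of the CLM-side LTD computation for experiments with several
# LLMs: the overall LTT runs from the earliest First Retrieval Start
# to the latest Last Log Storing End. Values are hypothetical.

def aggregate_ltd(runs):
    """runs: one (first_retrieval_start, last_storing_end, n_logs)
    tuple per LLM; returns the LTD as seen by the CLM."""
    first_start = min(start for start, _, _ in runs)
    last_end = max(end for _, end, _ in runs)
    total_logs = sum(n for _, _, n in runs)
    return (last_end - first_start) / total_logs

# Two LLMs started almost simultaneously, 5000 logs each:
ltd = aggregate_ltd([(0.00, 30.0, 5000), (0.05, 31.0, 5000)])
```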
3.3 Auditing Speed<br />
In this document the Auditing Speed S is defined as the number of logs that is processed within a unit of<br />
time. The Auditing Speed may be dependent on the following factors:<br />
? nN = number of users or entities in the audit trail<br />
? nlog = number of logs in the audit trail<br />
? typelog = type of event logs<br />
Therefore,<br />
S = f(n N , n log , type log )<br />
3.3.1 <strong>Test</strong> Description <strong>and</strong> Measurement Parameters<br />
The measurement points within the Auditor are shown in the time diagram in Figure 12. The Audit Time<br />
as shown in Figure 12 also encompasses the time to retrieve the users' or entities' identities, the time to<br />
retrieve the logs from the Audit Trail, the time to store the processed logs in the Archive, and the time to<br />
delete the processed logs from the Audit Trail. Unfortunately, the time spent querying the databases<br />
(retrieval, storing, and deletion) was not measured. The current implementation of the Auditor processes the<br />
users or entities consecutively; however, auditing of entity availability, user registration, and service<br />
authorization is carried out by three parallel processes.<br />
[Time diagram omitted: the Auditor retrieves the identities of n users or entities at a time; for each user or entity it repeatedly retrieves the logs falling within an interval v, audits and reports on them, stores the processed logs in the Archive and deletes them from the Audit Trail. The Audit Time runs from Auditing Start until auditing for the last user or entity has ended.]<br />
Figure 12. Measurement points in the Auditor<br />
The Audit Time has been evaluated with different numbers of users or entities and different numbers of<br />
logs. The Auditing Speed is the number of logs divided by the Audit Time.<br />
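Expressed as code, this computation is straightforward; the values below are hypothetical.<br />

```python
# Auditing Speed S as defined above: the number of logs divided by
# the Audit Time, i.e. the wall-clock span from Auditing Start until
# auditing for the last user or entity has ended.

def auditing_speed(n_logs, audit_start, audit_end):
    return n_logs / (audit_end - audit_start)

# e.g. 60000 entity availability logs audited in 120 s:
speed = auditing_speed(60000, 0.0, 120.0)
```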
3.3.2 Measurement Process <strong>and</strong> Configuration<br />
To measure the Auditing Speed for entity availability events, the testbed is configured as follows:<br />
• Varying amounts of logs in the Audit Trail: 20000, 40000, and 60000 logs.<br />
• Varying numbers of entities for the same total amount of logs: 6, 8, and 10 entities.<br />
• The measurement points are the Audit Start Time and the Audit End Time.<br />
To measure the Auditing Speed for user registration and service authorization events, the testbed is<br />
configured as follows:<br />
• Varying amounts of logs in the Audit Trail: 2000, 5000, 10000, and 20000 logs.<br />
• Varying numbers of user identities for the same total amount of logs: 10, 20, and 30 identities.<br />
• The measurement points are the Audit Start Time and the Audit End Time.<br />
3.4 Charging performance<br />
3.4.1 <strong>Test</strong> description<br />
Charging data has been generated and transferred to the charging module. The test consists of measuring how<br />
much time it takes to process the data.<br />
3.4.2 Measurement parameters<br />
• Measurement of the charging processing time, i.e. how long it takes to calculate the charge.<br />
• Measurement of the time taken by the following processes:<br />
o consistency check<br />
o charge calculation<br />
o movement of the accounting data to the accounting data warehouse, as a function of<br />
two parameters:<br />
- number of sessions<br />
- number of rows.<br />
3.4.3 <strong>Test</strong> realization, measurement process, interactions required.<br />
• Fill the accounting database for one user and one session with n rows of accounting data, i.e. 1<br />
START row, 1 STOP row and (n-2) INTERIM rows.<br />
Database: Accounting; Table: accountingrecords<br />
• Fill the flow table for each row of the above-mentioned session; i.e. for each row one can have m<br />
different flows (described by DSCPs). The charging code supports up to 9 different flows<br />
(DSCPs), describing the 9 different services (e.g. S1, S2, …).<br />
Database: Accounting; Table: flows<br />
Remarks:<br />
- First of all, it makes sense to have only one user with one session and just to vary the parameters n<br />
and m. Typically n is somewhere between 2 (START and STOP rows are mandatory) and 15.<br />
- Concerning the parameter m: at the moment there is only one flow type, i.e. the DSCP always<br />
equals 0, hence m = 1.<br />
- In a second stage, one could increase the number of sessions per user.<br />
- At a later stage, one could have several users with several sessions.<br />
The results are obtained from the charging Statistics database.<br />
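The accounting rows described above follow a fixed pattern: 1 START row, (n-2) INTERIM rows and 1 STOP row. The small generator below illustrates this pattern; the field names are hypothetical stand-ins for the real accountingrecords schema.<br />

```python
# Illustrative generator for one user session with n accounting rows.
# Each row carries a single flow (DSCP 0, i.e. m = 1), matching the
# current charging setup. Field names are hypothetical.

def session_rows(user_id, session_id, n):
    if n < 2:
        raise ValueError("START and STOP rows are mandatory, so n >= 2")
    kinds = ["START"] + ["INTERIM"] * (n - 2) + ["STOP"]
    return [{"user": user_id, "session": session_id,
             "record_type": kind, "dscp": 0}
            for kind in kinds]

rows = session_rows("user1", "session1", 15)  # n is typically 2..15
```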
3.5 AAA Scalability tests<br />
3.5.1 <strong>Test</strong>s description <strong>and</strong> motivation<br />
The AAA registration process and authentication mechanism offer an enhanced security model, but as always<br />
there is a trade-off between this extra feature and performance.<br />
Thus the AAA registration process should be regarded as an overhead, its overall consumption of<br />
resources (bandwidth, time, CPU time) adding up and contributing to the latency of the whole system.<br />
A parameter that gives an idea of this newly introduced latency is the time it takes an AAA<br />
registration to complete; this should be measured for each available access network. In the<br />
following these AAA registration round-trip times are called t_reg_{wlan,eth,wcdma}. They can be<br />
compared against the ping round-trip time, again for each type of access network<br />
(rtt_{wlan,eth,wcdma}).<br />
Now, supposing that t_reg_{wlan,eth,wcdma} was measured in the home network (say, Madrid), one<br />
can measure the same kind of value for a roaming MN, t_reg_roaming{wlan,eth,wcdma}, and again<br />
compare it against the rtt. The overhead compared to t_reg_{wlan,eth,wcdma}, which is given by the AAA<br />
routing / forwarding between the AAA servers in the two different domains, should be small.<br />
Yet another kind of test is to see how well the attendant behaves under high load. For that, one should<br />
check whether t_reg_{wlan,eth,wcdma} is affected by a configuration in which one mobile<br />
node (or several) is attached to the access router and tries to register at the same time and very often;<br />
for this, set a very small lifetime offered by the AAA.h server (it could even be 0). The same<br />
parameter, t_reg_{wlan,eth,wcdma}, is measured at any of the mobile nodes.<br />
The same test can be performed with several access routers and with mobile nodes from the<br />
same domain connected to them, registering with a very small lifetime. This test gives an idea<br />
of how much load an AAA.h server can take.<br />
3.5.2 Variables to measure<br />
The following parameters are to be measured:<br />
t_reg_{wlan,eth,td-cdma}<br />
rtt_{wlan,eth,td-cdma}<br />
t_reg_roaming{wlan,eth,td-cdma}<br />
These parameters are measured in configurations involving one or more attendants <strong>and</strong> one or more<br />
mobile nodes.<br />
3.5.3 Measurement process, test realization<br />
The tests are run in the testbed available at the FhG FOKUS premises and at the official trial sites. Those test<br />
beds are described in section 2. One AAA server and several AAA clients, offering WaveLAN,<br />
TD-CDMA and Ethernet access, are installed in these testbeds. In these tests the MNs were connected<br />
to the AR (AAA client) using either WLAN, Ethernet or TD-CDMA<br />
access.<br />
The MN runs dummy_urp, the AR runs disc in client mode, and AAA.h runs disc in server mode.<br />
The disc client (AAAAC client) is configured to communicate with the disc server (AAAAC.h = AAAAC.f). The disc<br />
server has the AAA routing defined. The content of the mn_config file does not matter (the profile of the user<br />
registering is irrelevant), but its size (i.e. the number of users in the database) must be taken into account. For<br />
non-roaming tests AAAAC.h = AAAAC.f; for roaming, configure AAA routing in AAAC.h and AAAC.f.<br />
Launch the disc server, then the disc client, then AAA-register a user (whatever his profile) using<br />
dummy_urp and test_fifo. Neither Mobile IPv6 nor the QoS system is needed. To measure<br />
t_reg_{wlan,eth,wcdma} the following procedure is employed: a tcpdump runs on the AR with a<br />
filter which looks for URP ARR and URP ARA messages on the interface on which the mobile node<br />
connects to the AR, recording the timestamps of these packets. The filter can be based on the<br />
following information:<br />
request: src_addr: mn_link_local (on the interface towards the AR), dst_addr: ar_link_local, src_port: 2301,<br />
dst_port: 2300;<br />
reply: the other way around.<br />
The timestamps of a request and its reply are then processed by a script which produces<br />
t_reg_{wlan,eth,wcdma} by subtracting the request timestamp from the reply one.<br />
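The post-processing step can be sketched as follows; pairing requests with replies by capture order, as well as all names and timestamps, are assumptions for illustration.<br />

```python
# Sketch of the script mentioned above: subtract each URP ARR
# (request) timestamp from the matching URP ARA (reply) timestamp,
# as captured by tcpdump on the AR. Pairing by order is an assumption.

def registration_times(request_ts, reply_ts):
    """Both lists hold capture timestamps in seconds, in order."""
    return [reply - req for req, reply in zip(request_ts, reply_ts)]

# Three hypothetical AAA registrations over WLAN:
t_reg_wlan = registration_times([0.000, 5.100, 10.250],
                                [0.042, 5.139, 10.301])
```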
The rtt_{wlan,eth,wcdma} can be measured by ping6-ing the AAA.h server from the mobile node.<br />
As suggested above, there is another way to measure t_reg_ (both on WLAN and Ethernet): an MN registers while<br />
the lifetime offered by the AAAAC.h server (changed for the registering user in the mn_config file on the<br />
AAAAC.h server) is 0, so the MN tries to (re-)register as fast as it can over a certain period of time. The<br />
number of such registrations per second gives an idea of the latency of the whole process under heavy<br />
stress.<br />
3.6 DSCP Marking Software (DMS)<br />
3.6.1 Filter loading<br />
3.6.1.1 <strong>Test</strong> description<br />
A basic filter is defined <strong>and</strong> loaded by the DMS. The time needed to perform this operation is measured.<br />
The goal of this test is to ensure that a filter update in the DMS is fast enough not to disturb normal<br />
AR or MT functioning. In fact, a problem could occur if a filter is modified during a communication,<br />
especially in the AR: if the process takes too much time, some packets may not be marked correctly,<br />
or may even not be marked at all.<br />
3.6.1.2 Measurement parameter<br />
The parameter measured during this test is the time needed to perform the entire filter loading process, i.e.<br />
the time between the detection of new data on the /dev/dscp character device <strong>and</strong> the end of the function<br />
in charge of reading this data <strong>and</strong> modifying the filters table in the kernel. The result is in microseconds.<br />
3.6.1.3 <strong>Test</strong> realization<br />
A file named ‘rules_MN_WWW’ is edited <strong>and</strong> contains the following filter definition:<br />
Filter 1<br />
Description TCPFilter<br />
TCPDestPort 80<br />
TCPCtrlSyn 1<br />
CoS S2<br />
Then, the contents of this file are written on the /dev/dscp character device with the following comm<strong>and</strong><br />
line:<br />
# cat rules_MN_WWW > /dev/dscp<br />
<strong>Final</strong>ly, the result is read in the /var/log/message file, because the module in charge of the /dev/dscp<br />
device, named ‘dscpdev.o’, prints all the results in this file. The comm<strong>and</strong> line is:<br />
# cat /var/log/message<br />
3.6.2 DSCP dumping from AAA<br />
3.6.2.1 <strong>Test</strong> description<br />
The time needed to change the DSCP associated with a marking type may be critical. When flows are<br />
running and their DSCP must change because the MT enters a new domain, if the process takes too much<br />
time then the packets may use a marking type that is not valid in the new access network. So, the test<br />
measures this time during the user registration phase. The MT AAA-registration module passes seven DSCP<br />
codes to the DMS, one by one.<br />
3.6.2.2 Measurement parameter<br />
The parameter measured during this test is the time needed to perform each marking type update. The<br />
result is in microseconds.<br />
3.6.2.3 <strong>Test</strong> realization<br />
The MT AAA-registration module writes the following lines, one by one, to the /dev/dscp character device:<br />
SIG 12<br />
S1 12<br />
S2 12<br />
S3 12<br />
S4 0<br />
S5 12<br />
S6 12<br />
Then, the results are read in the /var/log/message file.<br />
3.6.3 DMS <strong>and</strong> IPsec<br />
3.6.3.1 <strong>Test</strong> description<br />
In the very first tests of the DMS, the marking of IPsec packets could not be tested. This test was<br />
done after the second release of the software.<br />
The goal was simply to ensure that a packet was correctly marked despite its modification by IPsec. In<br />
fact, we just verified that the marking function was called before the IPsec code,<br />
in order to ensure the DMS could not try to modify the DSCP value of a packet already processed by IPsec.<br />
The test simply consists in marking a packet before the call to the IPsec code and verifying that the packet<br />
stays correctly marked after this call.<br />
3.6.3.2 <strong>Test</strong> realization<br />
To perform this test, we chose a packet that we were sure IPsec would process. We then defined a<br />
filter to mark this packet with a specific DSCP value (0x10) and verified with Ethereal that<br />
the packet was transmitted on the output interface with the correct DSCP value.<br />
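Whether the DSCP survived the IPsec processing can be checked directly on the captured bytes: in IPv6 the 8-bit traffic class straddles the first two header bytes, and the DSCP is its upper 6 bits. A small helper (Python; a hypothetical stand-in for the decoding Ethereal performs) illustrates the extraction:<br />

```python
def dscp_from_ipv6_header(header: bytes) -> int:
    """Extract the DSCP from the first two bytes of an IPv6 header.

    Byte 0: version (high 4 bits) | traffic class (high 4 bits)
    Byte 1: traffic class (low 4 bits) | flow label (high 4 bits)
    The DSCP is the upper 6 bits of the 8-bit traffic class.
    """
    traffic_class = ((header[0] & 0x0F) << 4) | (header[1] >> 4)
    return traffic_class >> 2

# A packet marked with DSCP 0x10 carries traffic class 0x40 (0x10 << 2),
# so the header starts with bytes 0x64 0x00 (version 6, TC 0x40).
```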
3.7 QoS entities communication delays<br />
3.7.1 <strong>Test</strong> description <strong>and</strong> purpose:<br />
This test aims to measure the communication delays between the different QoS entities: QoSB,<br />
QoSM, RG and A4C server. To this end, the delays between each<br />
client-to-server message and its corresponding response were measured.<br />
3.7.2 Measurement parameters:<br />
For QoSB-QoSM communication:<br />
Message<br />
Client-Open → Client-Accept<br />
Configuration Request → Configuration Decision<br />
Config Request → Config Decision (Deny)<br />
Config Request → Config Decision (Core Network Accept)<br />
Config Request → Config Decision (Access Network Accept)<br />
Keep-Alive → Keep-Alive<br />
FHO Report → FHO Decision<br />
For A4C server-QoSB communication:<br />
Message<br />
Client-Open → Client-Accept<br />
QoS Profile Definitions<br />
NVUP Authorization<br />
For QoSB-RG:<br />
Message<br />
Message to RG<br />
Response to AR<br />
3.7.3 Measurement process, test realization:<br />
In order to measure the response times of the QoS Broker to the AR solicitations, several measurements were<br />
made in the PTIN test bed described in section 2.3.5, running the QoS Broker on Kyle and the QoSManager on<br />
Kenny and Cartma.<br />
The tool used in these tests was Ethereal; by placing it in the core network, we were able to calculate the<br />
times needed by the QoS Broker to process responses to the several clients. For other values (cases where no<br />
response message is issued), profiling tools such as gprof (GNU Profiler) were used.<br />
The profiling tools enabled us to measure the time needed by the processing functions in the QoS Broker to process<br />
a QoS Profile Definition and an NVUP Authorization issued by the A4C Server. They also helped us to track the time<br />
needed to send messages to the RG.<br />
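The request-response delays above can be derived from capture timestamps by pairing each client message with the next matching server response. A sketch (Python; the message names follow the tables above, the timestamps are illustrative and not taken from the actual captures):<br />

```python
def response_delays(packets):
    """Pair each request with the next matching response and return
    the delay of each pair in milliseconds.

    `packets` is a list of (timestamp_s, message_name) tuples in capture
    order; PAIRS maps a request name to its expected response name.
    """
    PAIRS = {
        "Client-Open": "Client-Accept",
        "Configuration Request": "Configuration Decision",
        "Keep-Alive": "Keep-Alive",
        "FHO Report": "FHO Decision",
    }
    pending = {}  # expected response name -> request timestamp
    delays = []
    for ts, name in packets:
        if name in pending:
            # This packet answers an outstanding request.
            delays.append((ts - pending.pop(name)) * 1000.0)
        elif name in PAIRS:
            pending[PAIRS[name]] = ts
    return delays
```

The `elif` also handles the Keep-Alive exchange, where request and response share a name: alternating packets are paired off in order.<br />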
3.8 QoS context installation time in the QoSM in the nAR<br />
3.8.1 <strong>Test</strong>s purpose <strong>and</strong> description<br />
We want to measure how long it takes the QoSM of the nAR, during an FHO, to install the filters<br />
(QoS context) a user (CoA) had installed in the oAR (with the CoA of the filters, of course, changed to the<br />
nCoA, an operation performed by the QoSB).<br />
An MN registers and sends some traffic inside its profile in order to install some filters (QoS context) in<br />
the AR it is attached to. The user performs an FHO. The QoS system transfers the QoS context from the oAR to<br />
the nAR (changing the CoA to the nCoA). We want to measure how long the nAR takes to<br />
install this context.<br />
The total QoS FHO time is the QoS context time measured in this test plus the time taken to transfer the<br />
QoS profile from the oAR to the nAR via the QoSB. The latter delay is measured in section 3.7 under the term FHO.<br />
3.8.2 Variables to measure<br />
To measure how long the AR takes to install the QoS context, we measure the time elapsed from the moment<br />
the QoSM receives the QoS context from the QoSB (DEC COPS message) until the QoSM answers the<br />
QoSB that it has installed all the filters (RPT COPS message).<br />
3.8.3 Measurement process, test realization<br />
To obtain more accurate results, we run the test twice: the first time we install and transfer several filters,<br />
and the second time we install and transfer fewer filters.<br />
We employ the Madrid test bed. We run AAA, QoS and FHO. No paging, charging, metering or auditing is<br />
needed, and no DiffServ marking software is needed. Ping6 is employed to generate DSCP-marked traffic.<br />
The MN is attached and AAA-registered in cucaracha and installs a filter there. Then it moves to termita,<br />
where it installs 5 new filters. For this it is enough to ping (with authorized DSCPs) several destinations<br />
with several DSCPs. The first echo request triggers the QoS authorization process and the QoSM<br />
installs the filter (since the user is registered). Then another FHO is done. For this second<br />
FHO we measure the time elapsed between the DEC message and the RPT message (as described in 3.8.2) using<br />
Ethereal. This time corresponds to the time needed to install the 5 new filters and to do the necessary<br />
processing, including checking which of the transferred filters are already installed (the reason why we<br />
initially install one filter in cucaracha).<br />
We restart the software (the QoSMs) and repeat the test; this time, in termita, we install only one new<br />
filter.<br />
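Running the test with 5 filters and again with 1 filter lets the fixed processing overhead be separated from the per-filter cost, assuming the DEC-to-RPT time grows linearly with the number of filters (an assumption of this sketch, not a claim from the measurements; the times used in the example are hypothetical):<br />

```python
def split_install_time(t_many_s, n_many, t_few_s, n_few):
    """Given two DEC->RPT times (seconds) for different filter counts,
    return (per_filter_time, fixed_overhead) under a linear-cost model:
        t = overhead + n * per_filter
    """
    per_filter = (t_many_s - t_few_s) / (n_many - n_few)
    overhead = t_few_s - n_few * per_filter
    return per_filter, overhead
```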
3.9 FHO<br />
3.9.1 <strong>Test</strong> description<br />
For Fast Handover, measurements should be done to compare the following:<br />
1) Fast Handover with dummy QoS and dummy AAA module<br />
2) Fast Handover with real QoS and real AAA module<br />
For those configurations, the following measurements should be made:<br />
- overall latency during handover (ms)<br />
- data loss during handover (number of packets)<br />
Additionally, this measure should be obtained (though not for all possible configurations):<br />
- specific delay of the QoS Manager - QoS Broker communication (ms)<br />
The measurements should be done both for intra-technology (WLAN-WLAN) and inter-technology<br />
(Ethernet-WLAN, WLAN-TD-CDMA, Ethernet-TD-CDMA) handovers.<br />
Before testing FHO latencies, both in the tests involving a single trial site and in the test involving the two trial<br />
sites, a test scenario was deployed in Madrid in order to check the bicasting process. In this scenario, a delay<br />
was added (by means of the NistNET emulator) between the MN and a CN. This way we can check whether<br />
bicasting is working properly, because we delay the reception of the BU sent by the MN. The scenario is<br />
shown in the next figure:<br />
Figure 13. NistNET trial scenario<br />
The NistNET tool emulates different network conditions on IPv4 connections, so in order to use it in our<br />
tests, an IPv6-in-IPv4 tunnel was set up between a CN (escorpion) and a router (viudanegra). Routing<br />
was also changed in order to use viudanegra as the default router of cucaracha and termita. Therefore, every<br />
packet between the CN and the MN passed through viudanegra (and also through the established tunnel).<br />
NistNET was installed on pulga.<br />
3.9.2 Measurement parameters<br />
To measure the overall latency during handover, we measure the delay between message 2, Router<br />
Solicitation for Proxy (sent from the MN to the oAR), and message 9, Handover Execute ACK (from the oAR to the<br />
MN).<br />
To measure the specific delay of the QoS Manager - QoS Broker communication (ms), we measure the delay<br />
between messages A (from the oAR to the QoSB) and C (from the QoSB to the nAR). This delay is also measured in<br />
section 4.7, under the term FHO. Note that the time needed to install the QoS filters must be added to this<br />
delay; see section 3.8 for more details.<br />
See the signalling flow section of [1] for the details of these messages.<br />
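Given the capture timestamps of these signalling messages, both figures reduce to simple differences. The sketch below assumes a dict keyed by illustrative message names (the numbering follows the signalling flow in [1]; the timestamps in the example are hypothetical):<br />

```python
def fho_latencies(ts):
    """Compute FHO latencies (in ms) from a dict of message timestamps
    in seconds, keyed by illustrative names:
      "rtsolpr" - msg 2, Router Solicitation for Proxy (MN -> oAR)
      "ho_ack"  - msg 9, Handover Execute ACK (oAR -> MN)
      "msg_a"   - msg A (oAR -> QoSB)
      "msg_c"   - msg C (QoSB -> nAR)
    Returns (overall_handover_latency, qos_signalling_delay).
    """
    overall = (ts["ho_ack"] - ts["rtsolpr"]) * 1000.0
    qos_delay = (ts["msg_c"] - ts["msg_a"]) * 1000.0
    return overall, qos_delay
```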
3.10 Paging<br />
3.10.1 <strong>Test</strong> description<br />
An important performance characteristic of a paging system is the signalling cost introduced, as well as<br />
saved, by a network supporting paging, compared to the signalling costs of a system without dormant-mode<br />
and paging support. The signalling-cost characteristics of the proposed paging architecture and protocol<br />
have been evaluated analytically, taking various network conditions into account. A further important<br />
characteristic, which is to be measured and evaluated in this test, is the additional delay in routing initial<br />
user-data packets that is introduced by the paging system. Additional delay is introduced by routing<br />
a paging-trigger packet (the initial user-data packet) through a Paging Agent instead of routing it directly<br />
towards the addressed mobile terminal. This difference in routing delay is negligible (with the given<br />
network topology, which has the Paging Agent in the mobile node's visited domain) compared to the delay<br />
introduced by buffering the data packet at the Paging Agent until the mobile terminal has been re-activated<br />
and the associated routing states have been re-established under the control of the Paging Agent. Delay<br />
characteristics at various interfaces/nodes are to be determined by means of the tests described in the<br />
subsequent sub-sections.<br />
3.10.2 Measurement parameters<br />
The following paragraphs describe the measurements to be performed at a particular network interface or<br />
component:<br />
1) Measure the overall paging delay when a CN sends a packet to a dormant MN, i.e. measure the<br />
time elapsed (ms) until the echo reply (assuming the Ping application is used) arrives at the CN.<br />
This time delay includes the buffering of the initial data packet at the PA, the paging process, the re-registration<br />
of the MN with the network, the re-establishment of the routing info, the forwarding of buffered<br />
packets and finally the sending of the reply by the MN. Hence, this value covers not only the<br />
additional paging delay but the total response time. For comparison, the reply time of an<br />
"active" MN can be measured (but this implies a different path!). Also measure whether or not any<br />
packets are lost.<br />
2) Measure individual delays (all in ms)<br />
a) delay between arrival of initial packet at PA to forwarding of first buffered<br />
packet<br />
b) delay between MN receiving paging request <strong>and</strong> sending of MIPv6 Binding<br />
Update <strong>and</strong> de-registration (dormant request with lifetime 0).<br />
c) individual direct delays for comparison (HA → PA, PA → MN, MN → HA,<br />
MN → CN)<br />
The intention of the measurement described above is to be able to check the difference between network<br />
delays (c) <strong>and</strong> the delays specifically caused by the paging process (a, b).<br />
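The comparison can be made explicit: subtracting the direct-path delays (c) from the total response time of measurement (1) leaves the delay attributable to the paging process itself. A sketch with hypothetical values (all in ms):<br />

```python
def paging_overhead(total_response_ms, direct_delays_ms):
    """Delay attributable to the paging process: the measured total
    response time minus the sum of the direct network delays the
    packets would incur anyway (HA->PA, PA->MN, MN->HA, MN->CN).
    """
    return total_response_ms - sum(direct_delays_ms)
```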
Repeat the measurements described above for the following scenarios <strong>and</strong> investigate differences in delay<br />
characteristics:<br />
- Enter dormant mode attached to one access technology, initiate re-activation via CN while<br />
attached to same technology.<br />
- Enter dormant mode attached to one access technology, initiate re-activation via CN while<br />
attached (moved) to a different access technology.<br />
- Perform measurements for different network load conditions (optional).<br />
The first two bullet points should be done with some, but not necessarily all, combinations of<br />
technologies.<br />
3.10.3 <strong>Test</strong> realization<br />
These tests were performed both in Madrid and in Stuttgart, but with different procedures. In Madrid,<br />
when the MN awakes, there is no QoS context for this MN (which is the normal situation). Thus the QoS<br />
context (QoS negotiation) must be established for each new flow (a flow is a unique combination of source<br />
address, destination address and DSCP). In Stuttgart, this QoS context is artificially established before the<br />
MN awakes. This allows us to see the effect of the QoS negotiation while awaking.<br />
3.11 Inter-domain H<strong>and</strong>over<br />
3.11.1 <strong>Test</strong> description<br />
For Inter-domain Handover, measurements should be done to compare Inter-domain Handover and Intra-domain<br />
Handover (Fast Handover). The real QoS and AAA modules should be used in the measurements.<br />
For this configuration, the data loss during the handover should be measured.<br />
The measurement should be done only for handovers between two WLAN access routers.<br />
3.11.2 Measurement parameters<br />
To measure the packet loss, a ping is run from the MN to the CN.<br />
3.11.3 Measurement process<br />
After registration, the MN performs a handover first from ksat48 to ksat49 and then to ksat81; during the<br />
handovers, a ping is running on the MN.<br />
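The packet loss during a handover can be read off the ping output by looking for gaps in the icmp_seq numbers. A sketch (Python; the sample lines in the test are illustrative, not taken from the trial logs):<br />

```python
import re

def lost_packets(ping_output):
    """Count echo replies missing from a ping6-style output by
    finding gaps in the icmp_seq sequence numbers."""
    seqs = [int(m) for m in re.findall(r"icmp_seq=(\d+)", ping_output)]
    if not seqs:
        return 0
    expected = seqs[-1] - seqs[0] + 1  # seq numbers are consecutive
    return expected - len(seqs)
```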
4. Expert tests realization, results, analysis <strong>and</strong> assessment<br />
4.1 Introduction<br />
In this section we present the results obtained from the realisation of the tests described in the previous<br />
section.<br />
4.2 Log Transfer Delay<br />
4.2.1 Message Transfer Time<br />
Figure 14 depicts the Message Transfer Time (MTT) as a function of the Sending Start Time in the case where only<br />
LLM-1 was sending the 5000 logs. The number of long lines in the figure corresponds to the number of<br />
retrievals of the whole 5000 logs. There were 26 retrievals, with nearly 200 logs obtained in each retrieval<br />
except the last. This is due to the fact that two entities (AAAC Client and QoS Manager)<br />
generated Availability Event Logs approximately every five minutes, and the query used in the mySQL<br />
statement retrieved logs that fell within an interval of 500 minutes.<br />
Each line shows that the MTT kept increasing as long as the LLM kept sending logs. In the beginning,<br />
after each log-retrieval phase, the first MTT dropped again, but its value was greater than in the<br />
previous phase, because during the short log-retrieval time TCP was unable to empty its buffer. This<br />
situation repeated for approximately 1.5 seconds before a "stable" state was reached, where probably<br />
the TCP buffer was full all the time.<br />
Figure 15 shows a similar behaviour when using LLM-2 (the same LLM, but in Access Router 2), except<br />
that the "stable" state was not reached before the end of the transfer of the 5000 logs. This is reasonable<br />
given the value of each MTT, which is lower than in the previous case.<br />
Figure 16 and Figure 17 present the MTT of messages from LLM-1 and LLM-2 in the case where both<br />
LLMs were involved. After one second of log transfer, the MTTs of both LLMs lay in a range between 0.6<br />
and 0.9 seconds. Obviously the MTT was larger than in the first two cases, and both LLMs needed nearly 4 to<br />
4.5 seconds to transfer their 5000 logs. This was twice as long as in the first two cases, but the<br />
total number of logs was also twice as large. It is worth noting here that although from an LLM's<br />
viewpoint the MTT and the duration of the log transfer (LTT) are larger, from the CLM's point of view the<br />
LTT is not worse.<br />
Figure 14. Transfer Time of messages from LLM-1 where only LLM-1 is involved.<br />
Figure 15. Transfer Time of messages from LLM-2 where only LLM-2 is involved.<br />
Figure 16. MTT of messages from LLM-1 where 2 LLMs are involved.<br />
Figure 17. MTT of messages from LLM-2 where 2 LLMs are involved.<br />
The sections of the diagrams in Figure 16 and Figure 17 between seconds 2.32 and 2.38 are shown together<br />
in Figure 18. As long as the LLMs kept transferring logs, the MTT of both LLMs continuously<br />
increased.<br />
Figure 18. MTT of messages from LLM-1 <strong>and</strong> LLM-2 between second 2.32 <strong>and</strong> 2.38.<br />
Although it is interesting to know how the MTT behaves in each of the cases, the evaluation of the LTT<br />
gives more information on the performance of the LLM and CLM implementations.<br />
4.2.2 Logs Transfer Time<br />
The following tables show the measurement results. The time value is the number of seconds since the<br />
Epoch (00:00:00 UTC, January 1, 1970).<br />
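The derived columns in the tables below follow directly from the two timestamps: the LTT is the span from Retrieval Start to Last Log Stored, and the LTD (the per-log transfer delay) is the LTT divided by the number of logs. A sketch, checked against the first row of Table 1:<br />

```python
def ltt_and_ltd(retrieval_start_s, last_log_stored_s, n_logs):
    """Logs Transfer Time (LTT) and per-log Log Transfer Delay (LTD),
    both in seconds, computed from epoch timestamps."""
    ltt = last_log_stored_s - retrieval_start_s
    return ltt, ltt / n_logs
```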
Table 1. LTD where only LLM-1 is involved.<br />
Involved LLMs = 1 (LLM-1)<br />
Viewpoint: LLM-1<br />
#Logs Retrieval Start [sec] Last Log Stored [sec] LTT [sec] LTD [sec]<br />
500 1069724251.002910 1069724251.254946 0.252036 0.000504<br />
500 1069724369.002333 1069724369.254170 0.251837 0.000504<br />
500 1069724399.003081 1069724399.256558 0.253477 0.000507<br />
1000 1069724671.003607 1069724671.495345 0.491738 0.000492<br />
1000 1069724699.004695 1069724699.496490 0.491795 0.000492<br />
1000 1069724739.001786 1069724739.494080 0.492294 0.000492<br />
2000 1069724985.002864 1069724985.973565 0.970701 0.000485<br />
2000 1069725015.003587 1069725015.972682 0.969095 0.000485<br />
2000 1069725049.003649 1069725049.974169 0.970520 0.000485<br />
5000 1069725307.002649 1069725309.410903 2.408254 0.000482<br />
5000 1069725337.003402 1069725339.418564 2.415162 0.000483<br />
5000 1069725367.002194 1069725369.409739 2.407545 0.000482<br />
10000 1069725595.002406 1069725599.834603 4.832197 0.000483<br />
10000 1069725625.005099 1069725629.833880 4.828781 0.000483<br />
10000 1069725671.003129 1069725675.784336 4.781207 0.000478<br />
20000 1069725899.005288 1069725908.714195 9.708907 0.000485<br />
20000 1069725941.003992 1069725950.722350 9.718358 0.000486<br />
20000 1069725979.003366 1069725988.679019 9.675653 0.000484<br />
50000 1069726325.003003 1069726349.246906 24.243903 0.000485<br />
50000 1069726381.003231 1069726405.191189 24.187958 0.000484<br />
50000 1069726435.003800 1069726459.383961 24.380161 0.000488<br />
LTD avg = 0.000488<br />
Table 2. LTD where only LLM-2 is involved.<br />
Involved LLMs = 1 (LLM-2)<br />
Viewpoint: LLM-2<br />
#Logs Retrieval Start [sec] Last Log Stored [sec] LTT [sec] LTD [sec]<br />
500 1069723915.020366 1069723915.276614 0.256248 0.000512<br />
500 1069724059.023657 1069724059.278982 0.255325 0.000511<br />
500 1069724087.014298 1069724087.271644 0.257346 0.000515<br />
1000 1069724505.013870 1069724505.514406 0.500536 0.000501<br />
1000 1069724553.024957 1069724553.522220 0.497263 0.000497<br />
1000 1069724589.015777 1069724589.513375 0.497598 0.000498<br />
2000 1069724847.021682 1069724847.998343 0.976661 0.000488<br />
2000 1069724881.022500 1069724882.141814 1.119314 0.000560<br />
2000 1069724913.013188 1069724913.992419 0.979231 0.000490<br />
5000 1069725159.018811 1069725161.435491 2.416680 0.000483<br />
5000 1069725191.019543 1069725193.444896 2.425353 0.000485<br />
5000 1069725231.020453 1069725233.456355 2.435902 0.000487<br />
10000 1069725453.025528 1069725457.888324 4.862796 0.000486<br />
10000 1069725487.016313 1069725491.867644 4.851331 0.000485<br />
10000 1069725531.017309 1069725535.859280 4.841971 0.000484<br />
20000 1069725757.012476 1069725766.710607 9.698131 0.000485<br />
20000 1069725797.023396 1069725806.729352 9.705956 0.000485<br />
20000 1069725835.014261 1069725844.861650 9.847389 0.000492<br />
50000 1069726059.019377 1069726083.669442 24.650065 0.000493<br />
50000 1069726115.020657 1069726139.276752 24.256095 0.000485<br />
50000 1069726177.022080 1069726201.261439 24.239359 0.000485<br />
LTD avg = 0.000496<br />
Table 3. LLM-1’s view of LTD where both LLM-1 <strong>and</strong> LLM-2 are involved.<br />
Involved LLMs = 2 (LLM-1 <strong>and</strong> LLM-2)<br />
Viewpoint: LLM-1<br />
#Logs Retrieval Start [sec] Last Log Stored [sec] LTT [sec] LTD [sec]<br />
500 1069747815.003111 1069747815.457552 0.454441 0.000909<br />
500 1069748047.004514 1069748047.464777 0.460263 0.000921<br />
1000 1069748679.004019 1069748679.852027 0.848008 0.000848<br />
1000 1069748789.002811 1069748789.809630 0.806819 0.000807<br />
1000 1069748851.003950 1069748851.956201 0.952251 0.000952<br />
2500 1069748993.003134 1069748995.277950 2.274816 0.000910<br />
2500 1069749203.004370 1069749205.278851 2.274481 0.000910<br />
2500 1069749253.003639 1069749255.295361 2.291722 0.000917<br />
5000 1069749407.002732 1069749411.727801 4.725069 0.000945<br />
5000 1069749471.003514 1069749473.825093 2.821579 0.000564<br />
5000 1069749535.004308 1069749539.786188 4.781880 0.000956<br />
10000 1069749641.003784 1069749650.585746 9.581962 0.000958<br />
10000 1069749705.002622 1069749714.517247 9.514625 0.000951<br />
10000 1069749763.002488 1069749772.557991 9.555503 0.000956<br />
25000 1069750189.003941 1069750213.027421 24.023480 0.000961<br />
25000 1069750265.002701 1069750289.148210 24.145509 0.000966<br />
25000 1069750333.002784 1069750356.997857 23.995073 0.000960<br />
LTD avg = 0.000905<br />
Table 4. LLM-2’s view of LTD where both LLM-1 <strong>and</strong> LLM-2 are involved.<br />
Involved LLMs = 2 (LLM-1 <strong>and</strong> LLM-2)<br />
Viewpoint: LLM-2<br />
#Logs Retrieval Start [sec] Last Log Stored [sec] LTT [sec] LTD [sec]<br />
500 1069747815.017171 1069747815.498640 0.481469 0.000963<br />
500 1069748047.022479 1069748047.488888 0.466409 0.000933<br />
1000 1069748679.016871 1069748679.955910 0.939039 0.000939<br />
1000 1069748789.019379 1069748789.981726 0.962347 0.000962<br />
1000 1069748851.020791 1069748851.953104 0.932313 0.000932<br />
2500 1069748993.024030 1069748995.353110 2.329080 0.000932<br />
2500 1069749203.018818 1069749205.411743 2.392925 0.000957<br />
2500 1069749253.019957 1069749255.347914 2.327957 0.000931<br />
5000 1069749407.023449 1069749411.765081 4.741632 0.000948<br />
5000 1069749473.024959 1069749475.803932 2.778973 0.000556<br />
5000 1069749535.016377 1069749539.441761 4.425384 0.000885<br />
10000 1069749641.018774 1069749650.612392 9.593618 0.000959<br />
10000 1069749705.020243 1069749714.587306 9.567063 0.000957<br />
10000 1069749763.021558 1069749772.362947 9.341389 0.000934<br />
25000 1069750189.021248 1069750212.727297 23.706049 0.000948<br />
25000 1069750265.022978 1069750288.907783 23.884805 0.000955<br />
25000 1069750333.024516 1069750356.914274 23.889758 0.000956<br />
LTD avg = 0.000920<br />
Table 5. CLM's view of LTD where both LLM-1 <strong>and</strong> LLM-2 are involved.<br />
Involved LLMs = 2 (LLM-1 <strong>and</strong> LLM-2)<br />
Viewpoint: CLM<br />
#Logs Retrieval Start [sec] Last Log Stored [sec] LTT [sec] LTD [sec]<br />
1000 1069747815.003111 1069747815.498640 0.495529 0.000496<br />
1000 1069748047.004514 1069748047.488888 0.484374 0.000484<br />
2000 1069748679.004019 1069748679.955910 0.951891 0.000476<br />
2000 1069748789.002811 1069748789.981726 0.978915 0.000489<br />
2000 1069748851.003950 1069748851.956201 0.952251 0.000476<br />
5000 1069748993.003134 1069748995.353110 2.349976 0.000470<br />
5000 1069749203.004370 1069749205.411743 2.407373 0.000481<br />
5000 1069749253.003639 1069749255.347914 2.344275 0.000469<br />
10000 1069749407.002732 1069749411.765081 4.762349 0.000476<br />
10000 1069749471.003514 1069749475.803932 4.800418 0.000480<br />
10000 1069749535.004308 1069749539.786188 4.781880 0.000478<br />
20000 1069749641.003784 1069749650.612392 9.608608 0.000480<br />
20000 1069749705.002622 1069749714.587306 9.584684 0.000479<br />
20000 1069749763.002488 1069749772.557991 9.555503 0.000478<br />
50000 1069750189.003941 1069750213.027421 24.023480 0.000480<br />
50000 1069750265.002701 1069750289.148210 24.145509 0.000483<br />
50000 1069750333.002784 1069750356.997857 23.995073 0.000480<br />
LTD avg = 0.000480<br />
Note that in the abovementioned CLM’s view, the number of logs is the sum of logs sent by LLM-1 <strong>and</strong><br />
LLM-2.<br />
The following two figures present <strong>and</strong> compare the LTDs from the tables above. The LTDs are shown as<br />
a function of the number of logs.<br />
Figure 19 compares the LTDs obtained from the experiments with 1 LLM (1-LLM-System) and the<br />
experiments with 2 LLMs (2-LLMs-System) for the same number of logs. The figure shows that the<br />
implemented LLM and CLM can transfer the same amount of logs slightly faster in a 2-LLMs-System than<br />
in a 1-LLM-System. This is reasonable because both transfers in a 2-LLMs-System (from LLM-1 and<br />
LLM-2) were running in parallel and the CLM was able to receive both data streams simultaneously. The<br />
figure also shows that there is only a small deviation of the LTD for numbers of logs ranging from<br />
1000 to 50,000.<br />
Figure 19. Comparison of LTDs between 1-LLM-System <strong>and</strong> 2-LLMs-System.<br />
Figure 20 depicts the LTD in a 2-LLMs-System from different viewpoints, i.e., from the viewpoint of<br />
each LLM and of the CLM. Each LLM experienced a larger LTD in a 2-LLMs-System than in a<br />
1-LLM-System (cf. Figure 19), but seen as a whole, i.e., from the CLM's viewpoint, the resulting LTD is<br />
better.<br />
Figure 20. LTD in a 2-LLMs-System from the viewpoint of the LLMs <strong>and</strong> the CLM.<br />
4.2.3 Conclusions<br />
The evaluation of the measurement results has shown that the implemented LLM and CLM are scalable<br />
with respect to the number of logs and the number of LLMs (i.e., the number of Access Routers).<br />
4.3 Auditing Speed<br />
In this section the results of the measurements are presented. Each subsection deals with one of the three<br />
auditing tasks, i.e., auditing of entity availability, user registration, <strong>and</strong> service authorization events.<br />
4.3.1 Entity Availability<br />
The measurement results are presented in the following table.<br />
Table 6. Entity Availability Auditing Speed for different numbers of logs and entities.<br />
#Logs #Entities Audit Start [sec] Audit End [sec] Audit Time [sec] Auditing Speed [logs/sec]<br />
20000 6 1070327137 1070327143 6 3333.33<br />
20000 6 1070328061 1070328067 6 3333.33<br />
40000 6 1070327014 1070327030 16 2500.00<br />
40000 6 1070327935 1070327951 16 2500.00<br />
60000 6 1070326860 1070326891 31 1935.48<br />
60000 6 1070327748 1070327779 31 1935.48<br />
20000 8 1070327083 1070327089 6 3333.33<br />
20000 8 1070328023 1070328029 6 3333.33<br />
40000 8 1070326950 1070326966 16 2500.00<br />
40000 8 1070327883 1070327899 16 2500.00<br />
60000 8 1070326772 1070326803 31 1935.48<br />
60000 8 1070327678 1070327709 31 1935.48<br />
20000 10 1070325514 1070325520 6 3333.33<br />
20000 10 1070327984 1070327990 6 3333.33<br />
40000 10 1070325647 1070325663 16 2500.00<br />
40000 10 1070327829 1070327846 17 2352.94<br />
60000 10 1070325292 1070325321 29 2068.97<br />
60000 10 1070327548 1070327577 29 2068.97<br />
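The Auditing Speed column is simply the number of logs divided by the audit duration, which can be checked against the first row of Table 6:<br />

```python
def auditing_speed(n_logs, audit_start_s, audit_end_s):
    """Auditing speed in logs per second over the audit interval."""
    return n_logs / (audit_end_s - audit_start_s)
```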
Figure 21 depicts the Audit Time for different numbers of entities and logs. The diagram shows that the<br />
Audit Time increases with the amount of logs, which is reasonable, but it also shows an<br />
unexpected decrease in the auditing speed, which is clearly identifiable in Figure 22. The deceleration<br />
seems, however, to become smaller the greater the amount of logs is. On the other hand, the<br />
auditing speed does not seem to depend on the number of entities within the audit trail, which reflects the fact that<br />
the logs of the entities are not processed concurrently.<br />
Figure 21. Entity Availability Audit Time for different number of logs <strong>and</strong> entities.<br />
Figure 22. Entity Availability Auditing Speed for different number of logs <strong>and</strong> entities.<br />
4.3.2 User Registration<br />
The measurement results are presented in the following table.<br />
#Logs #Users Audit Start [sec] Audit End [sec] Audit Time [sec] Auditing Speed [logs/sec]<br />
2000 10 1070317521 1070317529 8 250.00<br />
2000 10 1070317614 1070317622 8 250.00<br />
2000 10 1070317657 1070317664 7 285.71<br />
5000 10 1070316893 1070316932 39 128.21<br />
5000 10 1070316998 1070317038 40 125.00<br />
5000 10 1070317083 1070317122 39 128.21<br />
10000 10 1070314765 1070314901 136 73.53<br />
10000 10 1070314975 1070315108 133 75.19<br />
10000 10 1070315161 1070315295 134 74.63<br />
20000 10 1070310191 1070310682 491 40.73<br />
20000 10 1070310826 1070311320 494 40.49<br />
20000 10 1070312139 1070312632 493 40.57<br />
2000 20 1070317385 1070317394 9 222.22<br />
2000 20 1070317429 1070317438 9 222.22<br />
2000 20 1070317464 1070317472 8 250.00<br />
5000 20 1070316493 1070316535 42 119.05<br />
5000 20 1070316682 1070316723 41 121.95<br />
5000 20 1070316810 1070316851 41 121.95<br />
10000 20 1070314021 1070314164 143 69.93<br />
10000 20 1070314302 1070314444 142 70.42<br />
10000 20 1070314502 1070314644 142 70.42<br />
20000 20 1070308917 1070309428 511 39.14<br />
20000 20 1070309606 1070310113 507 39.45<br />
20000 20 1070311561 1070312069 508 39.37<br />
2000 30 1070317222 1070317230 8 250.00<br />
2000 30 1070317278 1070317286 8 250.00<br />
2000 30 1070317316 1070317326 10 200.00<br />
5000 30 1070316186 1070316228 42 119.05<br />
5000 30 1070316290 1070316332 42 119.05<br />
5000 30 1070316367 1070316411 44 113.64<br />
10000 30 1070312885 1070313023 138 72.46<br />
10000 30 1070313218 1070313361 143 69.93<br />
10000 30 1070313483 1070313627 144 69.44<br />
20000 30 1070306793 1070307316 523 38.24<br />
20000 30 1070307576 1070308098 522 38.31<br />
20000 30 1070308168 1070308683 515 38.83<br />
While Figure 23 shows the Audit Time, Figure 24 shows the Auditing Speed for user registration events.<br />
Again, there is a decrease in the Auditing Speed, but here the asymptotic limit is more visible.<br />
Compared to the Auditing Speed for entity availability events, the Auditing Speed for user registration<br />
events is much lower, which can be explained by the more complex violation criteria in the auditing of user<br />
registration events.<br />
Figure 23. User Registration Audit Time for different number of logs <strong>and</strong> users.<br />
Figure 24. User Registration Auditing Speed for different number of logs <strong>and</strong> users.<br />
4.3.3 Service Authorization<br />
The following tables show the measurement results.<br />
#Logs #Users Audit Start [sec] Audit End [sec] Audit Time [sec] Auditing Speed [logs/sec]<br />
2000 10 1070323429 1070323436 7 285.71<br />
5000 10 1070323069 1070323105 36 138.89<br />
10000 10 1070322566 1070322688 122 81.97<br />
20000 10 1070320978 1070321459 481 41.58<br />
2000 20 1070323326 1070323334 8 250.00<br />
5000 20 1070322949 1070322987 38 131.58<br />
10000 20 1070322279 1070322421 142 70.42<br />
20000 20 1070320358 1070320902 544 36.76<br />
2000 30 1070323218 1070323225 7 285.71<br />
5000 30 1070322821 1070322860 39 128.21<br />
10000 30 1070321968 1070322099 131 76.34<br />
20000 30 1070317837 1070318340 503 39.76<br />
The measurement results are shown in the following figures.<br />
Figure 25. Service Authorization Audit Time for different number of logs <strong>and</strong> users.<br />
Figure 26. Service Authorization Auditing Speed for different number of logs <strong>and</strong> users.<br />
The violation criteria in the auditing of service authorization events are similar to the criteria for user<br />
registration events. Therefore, the auditing speeds in both cases are nearly the same.<br />
4.3.4 Conclusions<br />
The larger the number of logs to be processed, the lower the Auditing Speed. This is an undesirable<br />
behavior, although an asymptotic limit to this deceleration seems to exist. In this regard, the<br />
implementation of the auditor must be improved; in particular, the database queries should be optimized.<br />
Since SLA violation detection involves extensive processing of time values, the time representation<br />
needs to be reconsidered. The current MySQL tables use a DATETIME field to represent time, <strong>and</strong> this<br />
is h<strong>and</strong>led as a string in the C++ code of the auditor. A better solution is to store time values as<br />
integer values.<br />
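The proposed change can be illustrated with a small sketch (Python for illustration only; the auditor itself is C++, <strong>and</strong> the timestamps below are hypothetical): once a DATETIME string is converted to an integer epoch, every comparison or difference becomes plain integer arithmetic, with no parsing at audit time.<br />

```python
from datetime import datetime, timezone

def to_epoch(dt_string: str) -> int:
    """Convert a MySQL DATETIME string into integer seconds since the epoch (UTC assumed)."""
    dt = datetime.strptime(dt_string, "%Y-%m-%d %H:%M:%S")
    return int(dt.replace(tzinfo=timezone.utc).timestamp())

# Convert once when storing; afterwards comparisons are plain integer arithmetic.
start = to_epoch("2003-12-01 14:15:02")
end = to_epoch("2003-12-01 14:15:09")
print(end - start)  # audit time in seconds: 7
```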
4.4 AAA Scalability tests<br />
4.4.1 AAA Scalability tests at FhG FOKUS<br />
The results obtained in the FOKUS FhG test bed are as follows:<br />
For wlan:<br />
- rtt_wlan = 4.932 ms<br />
- t_reg_wlan = 140 ms<br />
For Ethernet:<br />
- rtt_eth = 0.946 ms<br />
- t_reg_eth = 80 ms<br />
In the Madrid testbed the results were (for wlan):<br />
- t_reg_wlan = 195 ms (delay between messages 3 <strong>and</strong> 9 in Figure 27)<br />
Figure 27 AAA registration time (URP in MN)<br />
The data obtained in Stuttgart for the same kind of measurements:<br />
For wlan:<br />
- rtt_wlan = 2.870 ms<br />
- t_reg_wlan = ms<br />
For eth, when registering to the local AAA server:<br />
- rtt_eth = 0.502 ms<br />
- t_reg_eth = ms<br />
For tdcdma:<br />
- rtt_tdcdma = 133 ms<br />
- t_reg_tdcdma = 881.25 ms<br />
For eth, when registering a roaming user to the Madrid AAA server:<br />
- rtt_eth = 210.235 ms<br />
- t_reg_eth = 316 ms<br />
Registering a mobile with a 0-second AAA session lifetime should give an idea of the latency of the<br />
whole process when the attendants are under heavy (maximum) load.<br />
The following results were obtained in the FOKUS FhG testbed:<br />
4.50 < max_no_registrations / sec < 5.00<br />
That corresponds to approximately one registration every 200 ms.<br />
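The per-registration latency is simply the inverse of the measured rate; a one-line check:<br />

```python
# Invert the measured registration rates (registrations/second) into latency per registration.
rate_low, rate_high = 4.50, 5.00   # measured bounds in the FOKUS FhG testbed

worst_ms = 1000 / rate_low         # slowest rate -> longest latency per registration
best_ms = 1000 / rate_high
print(round(best_ms), "-", round(worst_ms), "ms per registration")  # 200 - 222 ms per registration
```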
4.4.1.1 Scalability tests done in Madrid <strong>and</strong> Stuttgart<br />
1 attendant, several MNs registering at this attendant. All users register with session lifetime=0, thus<br />
forcing a new registration as soon as the previous one finishes.<br />
Nº of MN Time to register for MN whose user is acuevas@ipv6.it.uc3m.es<br />
1 29 reg in 20,29 s = 0,6 s/reg<br />
3 20 reg in 28,16 s = 1,4 s/reg<br />
Several attendants, 1 MN registering at each attendant. All users register with session lifetime=0, thus<br />
forcing a new registration as soon as the previous one finishes.<br />
Nº of Attendants Time to register for MN whose user is acuevas@ipv6.it.uc3m.es<br />
1 16 reg in 8,4 s = 0,525 s/reg (this test is exactly the same as the first one in the previous table)<br />
2 50 reg in 26,5 s = 0,53 s/reg<br />
3 25 reg in 13 s = 0,52 s/reg<br />
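The per-registration times in the second table follow from the raw counts; recomputing them shows that the rate is essentially independent of the number of attendants (a minimal sketch):<br />

```python
# Seconds per registration from the attendant-scaling measurements above.
tests = {
    1: (16, 8.4),    # attendants -> (registrations, elapsed seconds)
    2: (50, 26.5),
    3: (25, 13.0),
}
for attendants, (regs, secs) in sorted(tests.items()):
    print(attendants, round(secs / regs, 3))  # ~0.52-0.53 s/reg in every case
```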
4.4.2 Analysis of results<br />
Several other tests (see the next paragraph) led to the following conclusions:<br />
- Most of the time is spent at the MN <strong>and</strong> AR computing the Diffie-Hellman (DH) keys for the key<br />
exchange.<br />
- Some CPU cycles are also spent at the AAA.h for computing the HMAC of the packet.<br />
- The AAA routing does not introduce much extra overhead (compared to the overhead<br />
resulting from the registration processing delays at the attendant <strong>and</strong> the AAA server).<br />
The time taken by the AAA.h to process the registration request was measured in d0503. We measured it<br />
again, with 10 users in Madrid's database (mn_config). It is about 1 ms, so it is negligible with respect to the<br />
total registration time. In Figure 28 it is the time elapsed between messages 24 <strong>and</strong> 26. Messages 29 <strong>and</strong> 30 are<br />
the initiation of the corresponding accounting session.<br />
Figure 28 Registration processing time in AAA.h <strong>and</strong> accounting session start<br />
4.5 Charging performance<br />
4.5.1 Results<br />
Presentation of results:<br />
- Typically one would use a two-dimensional graphical representation: e.g. fix m to a certain value,<br />
vary n (X-axis), <strong>and</strong> plot the charging time (Y-axis).<br />
For the first test, the X-axis is the product of the two variables that affect charging time: nº of<br />
sessions <strong>and</strong> nº of rows.<br />
Results are (times in ms):<br />
mysql> select * from stat;<br />
+----+-------------+----------+--------+------------+------------+--------+<br />
| id | consistency | charging | moving | processing | nrsessions | nrrows |<br />
+----+-------------+----------+--------+------------+------------+--------+<br />
| 4 | 913 | 13618 | 29660 | 1 | 53 | 7224 |<br />
| 3 | 12 | 85 | 137 | 0 | 2 | 17 |<br />
| 5 | 8 | 88 | 166 | 0 | 1 | 50 |<br />
| 6 | 40 | 1258 | 1933 | 0 | 7 | 486 |<br />
| 7 | 27 | 574 | 1447 | 0 | 3 | 188 |<br />
| 8 | 30 | 1683 | 1574 | 1 | 5 | 352 |<br />
+----+-------------+----------+--------+------------+------------+--------+<br />
6 rows in set (0.00 sec)<br />
We represent id 5,6,7,8:<br />
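The plotted points can be derived from the stat rows above; a minimal sketch building the (nºsessions*nºrows, charging time) pairs used as the X <strong>and</strong> Y values. Note that id 6 does not follow the otherwise roughly increasing trend, so the product is only an approximate predictor of charging time:<br />

```python
# Derive the (nrsessions * nrrows, charging time) points for rows id 5, 6, 7, 8
# of the stat table above.
stat = {            # id -> (charging_ms, nrsessions, nrrows)
    5: (88, 1, 50),
    6: (1258, 7, 486),
    7: (574, 3, 188),
    8: (1683, 5, 352),
}
points = sorted((sess * rows, chg) for chg, sess, rows in stat.values())
print(points)  # [(50, 88), (564, 574), (1760, 1683), (3402, 1258)]
```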
[Figure: charging process times (time in ms, 0 to 2000) versus nºsessions*nºrows (0 to 4000), for the series consistency*10, charging <strong>and</strong> moving]<br />
Figure 29. Charging performance graph<br />
4.5.2 Analysis of results<br />
Charging is a resource-consuming process. It should run on a separate machine from the A4C.h server,<br />
while the A4C.h entity itself remains the same: the idea is to put the AAA.h on one machine <strong>and</strong> the charging<br />
process on another. The two communicate by accessing the same database; MySQL is used at present.<br />
MySQL transparently allows access to remote databases, so the above scenario could very easily be<br />
realised.<br />
4.6 DSCP Marking Software (DMS)<br />
4.6.1 Filter loading<br />
The result read in /var/log/messages is the following:<br />
Jun 12 11:24:30 larva10 kernel: dscp: open<br />
Jun 12 11:24:30 larva10 kernel: dscp: write<br />
Jun 12 11:24:30 larva10 kernel: +Filter 1 erased<br />
Jun 12 11:24:30 larva10 kernel: --New filter description: TCPFilter<br />
Jun 12 11:24:30 larva10 kernel: --New field list<br />
Jun 12 11:24:30 larva10 kernel: ...Field name: TCPDestPort<br />
Jun 12 11:24:30 larva10 kernel: ...Value: 80<br />
Jun 12 11:24:30 larva10 kernel: --New field<br />
Jun 12 11:24:30 larva10 kernel: ...Field name: TCPCtrlSyn<br />
Jun 12 11:24:30 larva10 kernel: ...Value: 1<br />
Jun 12 11:24:30 larva10 kernel: --New filter DSCP: 0xC<br />
Jun 12 11:24:30 larva10 kernel: +1 Filters loaded, update done in 40 usec<br />
Jun 12 11:24:30 larva10 kernel: dscp: release<br />
The time needed to perform the entire filter loading process, i.e. interpreting <strong>and</strong> checking the data read<br />
from the /dev/dscp character device, removing the old filter, <strong>and</strong> creating <strong>and</strong> recording the new one, is only<br />
40 microseconds. Repeating this kind of test shows that the time needed to perform this operation<br />
is stable: the result is always in the range of 36 to 40 microseconds on an MT.<br />
Note that Filter 1 was already defined before the test, so the old Filter 1 was erased first. Then a<br />
new filter was created to replace it. This is the worst-case scenario in terms of the time<br />
needed to load a filter. Loading a filter into an empty slot of the kernel filter table takes, of course, less<br />
time.<br />
4.6.2 DSCP dumping from AAA<br />
The result read in /var/log/messages is the following:<br />
Jun 12 11:24:29 larva10 kernel: dscp: open<br />
Jun 12 11:24:29 larva10 kernel: dscp: write<br />
Jun 12 11:24:29 larva10 kernel: Service 0 = Name: SIG - DSCP: 0x C<br />
Jun 12 11:24:29 larva10 kernel: +1 Filters loaded, update done in 14 usec<br />
Jun 12 11:24:29 larva10 kernel: dscp: write<br />
Jun 12 11:24:29 larva10 kernel: Service 1 = Name: S1 - DSCP: 0x C<br />
Jun 12 11:24:29 larva10 kernel: +1 Filters loaded, update done in 7 usec<br />
Jun 12 11:24:29 larva10 kernel: dscp: write<br />
Jun 12 11:24:29 larva10 kernel: Service 2 = Name: S2 - DSCP: 0x C<br />
Jun 12 11:24:29 larva10 kernel: +1 Filters loaded, update done in 6 usec<br />
Jun 12 11:24:29 larva10 kernel: dscp: write<br />
Jun 12 11:24:29 larva10 kernel: Service 3 = Name: S3 - DSCP: 0x C<br />
Jun 12 11:24:29 larva10 kernel: +1 Filters loaded, update done in 6 usec<br />
Jun 12 11:24:29 larva10 kernel: dscp: write<br />
Jun 12 11:24:29 larva10 kernel: Service 4 = Name: S4 - DSCP: 0x 0<br />
Jun 12 11:24:29 larva10 kernel: +1 Filters loaded, update done in 5 usec<br />
Jun 12 11:24:29 larva10 kernel: dscp: write<br />
Jun 12 11:24:29 larva10 kernel: Service 5 = Name: S5 - DSCP: 0x C<br />
Jun 12 11:24:29 larva10 kernel: +1 Filters loaded, update done in 4 usec<br />
Jun 12 11:24:30 larva10 kernel: dscp: write<br />
Jun 12 11:24:30 larva10 kernel: Service 6 = Name: S6 - DSCP: 0x C<br />
Jun 12 11:24:30 larva10 kernel: +1 Filters loaded, update done in 3 usec<br />
Jun 12 11:24:30 larva10 kernel: dscp: release<br />
The result is always in the range of 3 to 7 microseconds, except for the SIG marking type, which is<br />
treated as a particular case in the DMS. However, since the times are expressed in microseconds, this<br />
difference can be considered negligible.<br />
4.6.3 DMS <strong>and</strong> IPsec<br />
Figure 30. DMS <strong>and</strong> IPsec detail<br />
As we can see in the figure, the outer packet is correctly marked, <strong>and</strong> its DSCP is the same as the inner<br />
packet's DSCP (10 in hexadecimal).<br />
4.7 QoS entities communication delays<br />
4.7.1 Results<br />
QoSB-QoSM<br />
Message Response time (ms)<br />
Client-Open 0,08<br />
Configuration Request 54<br />
Access Request Denial 0,7<br />
Accept Core Request 1,5<br />
Accept Access Network Request 4<br />
Keep-Alive 0,14<br />
FHO 18<br />
Table 7 - Response times of the QoS Broker to the QoSManager<br />
QoSB-A4C.f server<br />
Message Response time (ms)<br />
Client-Open 0,08<br />
QoS Profile Definitions 384<br />
NVUP Authorization 4<br />
Table 8 - Response times of the QoSB to the A4C.f server<br />
QoSB-RG<br />
Message Response time (ms)<br />
Message to RG 1<br />
Response to AR 2<br />
4.7.2 Analysis of results<br />
QoSB-QoSM<br />
From these results several conclusions can be obtained.<br />
a) The time needed by the QoSBroker to accept a new client is extremely short (81 µs), as expected, since<br />
this is a very simple operation <strong>and</strong> only simple decision rules are currently in place.<br />
b) The time needed to respond to a Configuration Request is the longest time measured. This is related<br />
to the need to query a database for the AR's nature <strong>and</strong> needs. However, this does not constitute a<br />
problem, since it is an uncommon task.<br />
c) Another interesting comparison is between Accepted Requests <strong>and</strong> Denied ones, which are much faster.<br />
This is explained by the fact that Denials do not need to undergo the reservation processing. The<br />
same goes for Core Requests, where there is no need to check User Profiles.<br />
d) With the QoSBroker version used for these tests, the FHO response time is quite high (compared to the<br />
expected time for a full network response), since for these tests several changes were made to the QoSBroker<br />
code in order to facilitate its profiling. The QoS context installation time measured in section 4.8 must be<br />
added to this time to obtain the total time added by QoS to the FHO.<br />
QoSB-A4C.f server<br />
Analysing the data in Table 8, we can conclude that the time taken by the QoSBroker to accept AAAC<br />
server communications is of the same order of magnitude as the time required to accept a new<br />
QoSManager communication.<br />
The extra time needed by the QoSBroker to process QoS Profile Definitions is related to the need to<br />
process, interpret <strong>and</strong> store all that information, <strong>and</strong> is hard to improve from the implementation<br />
point of view. Nevertheless, it is not a common message, <strong>and</strong> it corresponds to an action performed<br />
when no real communication is taking place (user registration).<br />
The time taken to respond to an NVUP Authorization is 4007 µs, which is quite acceptable.<br />
QoSB-RG<br />
The time needed to respond to the AR is longer than the time needed to respond to the RG. This is<br />
because the RG is contacted before all reservations are finished; only then does the QoSB respond to the<br />
AR.<br />
4.8 QoS context installation time in the QoSM in the nAR<br />
4.8.1 Results<br />
For 6 filters, one of them already installed:<br />
Display in QoSM in nAR<br />
Read from FHO-module 52 bytes<br />
Received START_QOSB_LISTEN!!!<br />
UNSOLICITED FHO DECISION RECEIVED O.K.<br />
Installing FHO connection number 1 ...<br />
CoA IPv6 address= 2001:720:410:1007:204:e2ff:fe2a:460c<br />
Destination IPv6 address= 2001:720:410:1003:2c0:26ff:fea3:7fed<br />
DSCP= 0x02<br />
Installing FHO connection number 2 ...<br />
CoA IPv6 address= 2001:720:410:1007:204:e2ff:fe2a:460c<br />
Destination IPv6 address= 2001:200:0:8002:203:47ff:fea5:3085<br />
DSCP= 0x02<br />
Installing FHO connection number 3 ...<br />
CoA IPv6 address= 2001:720:410:1007:204:e2ff:fe2a:460c<br />
Destination IPv6 address= 3ffe:2620:6:1::1<br />
DSCP= 0x02<br />
Installing FHO connection number 4 ...<br />
CoA IPv6 address= 2001:720:410:1007:204:e2ff:fe2a:460c<br />
Destination IPv6 address= 2001:638:202:1:204:76ff:fe9b:ecaf<br />
DSCP= 0x02<br />
Installing FHO connection number 5 ...<br />
CoA IPv6 address= 2001:720:410:1007:204:e2ff:fe2a:460c<br />
Destination IPv6 address= 2001:720:410:1003:2c0:26ff:fea0:dd21<br />
DSCP= 0x02<br />
FHO: A->C, Next Connection is already installed, nothing to do...:<br />
CoA IPv6 address= 2001:720:410:1007:204:e2ff:fe2a:460c<br />
Destination IPv6 address= 2001:720:410:1003:2c0:26ff:fea0:dd21<br />
DSCP= 0x22<br />
sending QOS_FHO_ACK to the FHO-module (value QOS_AVAILABLE)<br />
The measurement is shown in the next figure:<br />
Figure 31. QoS context installation time<br />
The time elapsed is about 1 ms: the time between messages No 59 <strong>and</strong> No 60 (14.139 s - 14.138 s).<br />
For 2 filters, one of them already installed:<br />
We follow the same procedure as before <strong>and</strong>, using Ethereal, we find that the time elapsed is 0,4 ms.<br />
We can assume that the time elapsed is a linear function of the number of filters to install:<br />
Time_elapsed=a*number_of_filters_to_install+b<br />
Solving the equation system given by the two measured points (5 filters to install in about 1 ms; 1 filter to install in 0,4 ms):<br />
Time_elapsed=0.15ms/filter*number_of_filters_to_install+0.25ms<br />
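The coefficients follow from the two measured points (5 filters to install in about 1 ms, 1 filter in 0.4 ms); a minimal check of the arithmetic:<br />

```python
# Fit Time_elapsed = a * n + b through the two measured points:
# n = 5 filters to install -> ~1.0 ms, and n = 1 filter to install -> 0.4 ms.
n1, t1 = 5, 1.0
n2, t2 = 1, 0.4

a = (t1 - t2) / (n1 - n2)   # slope: ms per filter
b = t1 - a * n1             # intercept: fixed overhead in ms
print(round(a, 2), round(b, 2))  # 0.15 0.25
```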
[Figure: time elapsed (ms) versus number of filters to install; the values 0,4 <strong>and</strong> 1 are measured, the other values are inferred]<br />
Figure 32. Time elapsed to install the QoS context in the nAR<br />
4.8.2 Analysis of results<br />
It takes 0.15 ms to install a filter.<br />
The 0.25 ms seems to be the overhead of processing the DEC message, checking whether some of the filters<br />
transferred are already installed, etc.<br />
The QoS context transfer time measured in section 4.7 must be added to this time to obtain the total time<br />
added by QoS to the FHO.<br />
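Putting sections 4.7 <strong>and</strong> 4.8 together, the QoS contribution to an FHO can be estimated as transfer time plus installation time. A sketch under our own assumptions: the helper name <strong>and</strong> the 18 ms default (the FHO response time from Table 7) are illustrative, not project code:<br />

```python
# Estimated QoS contribution to an FHO: QoS context transfer (section 4.7)
# plus context installation in the nAR (linear model of section 4.8).
def qos_fho_delay_ms(n_filters_to_install: int, transfer_ms: float = 18.0) -> float:
    install_ms = 0.15 * n_filters_to_install + 0.25  # fitted above
    return transfer_ms + install_ms

print(qos_fho_delay_ms(5))  # 19.0 ms for the 5-filter case measured above
```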
4.9 FHO<br />
4.9.1 <strong>Test</strong>bed in Madrid<br />
4.9.1.1 Results<br />
Table 9. Summary of measures for FHO (in Madrid)<br />
N QoS AAA Orig Dest Measures<br />
D R D R Eth WL WC Eth WL WC OL DL DQ<br />
1 X X X X 16ms 0 14ms<br />
2 X X X X 20ms 0 14ms<br />
3 X X X X 18ms 67 14ms<br />
4 X X X X 18ms 0 -<br />
Notes:<br />
FHO with dummy modules was performed in [5] <strong>and</strong> the results are taken from there.<br />
See section 3.9.2 for details of the measures. Pings were sent every 40 ms.<br />
Abbreviations:<br />
N Number of test<br />
D 'dummy' module<br />
R 'real' module<br />
Orig Technology from where the MN starts<br />
Dest Technology where the MN arrives<br />
Eth Ethernet<br />
WL Wireless LAN (802.11)<br />
WC WCDMA<br />
OL Overall latency<br />
DL Data loss<br />
DQ Delay of QoS manager - QoS broker communication<br />
DA Delay due to communication with the AAA attendant<br />
DI Delay caused by IPSec tunnelling<br />
DQ: The time to install the filters in the nAR must be added to this value to obtain the total delay due to QoS. See section 4.8.1 for<br />
more details.<br />
Data loss<br />
[root@viudanegra root]# ping6 larva10.ipv6.it.uc3m.es -i 0.04<br />
PING larva10.ipv6.it.uc3m.es(larva10.ipv6.it.uc3m.es) 56 data bytes<br />
64 bytes from larva10.ipv6.it.uc3m.es: icmp_seq=0 hops=62 time=2.693 msec<br />
64 bytes from larva10.ipv6.it.uc3m.es: icmp_seq=1 hops=62 time=3.348 msec<br />
***********ok***********<br />
64 bytes from larva10.ipv6.it.uc3m.es: icmp_seq=152 hops=62 time=2.647 msec<br />
64 bytes from larva10.ipv6.it.uc3m.es: icmp_seq=153 hops=62 time=3.118 msec<br />
64 bytes from larva10.ipv6.it.uc3m.es: icmp_seq=221 hops=62 time=726 usec<br />
64 bytes from larva10.ipv6.it.uc3m.es: icmp_seq=222 hops=62 time=588 usec<br />
***********ok********<br />
64 bytes from larva10.ipv6.it.uc3m.es: icmp_seq=242 hops=62 time=620 usec<br />
64 bytes from larva10.ipv6.it.uc3m.es: icmp_seq=243 hops=62 time=613 usec<br />
--- larva10.ipv6.it.uc3m.es ping statistics ---<br />
244 packets transmitted, 177 packets received, 27% packet loss<br />
round-trip min/avg/max/mdev = 0.553/5.305/64.011/10.330 ms<br />
[root@viudanegra root]#<br />
4.9.1.2 Analysis of results<br />
Results are in the same range as in Stuttgart (section 4.9.2.1), <strong>and</strong> DQ is also similar to the results found<br />
in Aveiro (section 4.7.1).<br />
<strong>Test</strong>s with the IPv6-in-IPv4 tunnel <strong>and</strong> added delays between CN <strong>and</strong> MN showed that the bicasting<br />
process worked properly, so this added delay did not increase h<strong>and</strong>over latencies, as expected. We should note<br />
that we also performed some tests with plain Mobile IPv6 (without fast h<strong>and</strong>over), <strong>and</strong> in this case latencies<br />
did increase when delays were added.<br />
Regarding the tests presented above, a capture taken at a MN while performing an inter-technology<br />
h<strong>and</strong>over from wlan to Ethernet is shown in the next figure.<br />
Figure 33. WLAN to ETH FHO capture<br />
The FHO time is about the same as for the other FHO types, but many packets are lost. The FHO process is<br />
materialized in packets 9, 10, 11 <strong>and</strong> 14. It lasts, as written in the results, 18 ms. The network advertisement<br />
is sent by the MN to the nAR just after the FHO process, to announce the MAC address of the MN to<br />
the nAR.<br />
In the capture presented above, the BU to the HA (packet 25) <strong>and</strong> to the CN (packet 29) are sent about 2,5<br />
seconds after the FHO process. In order to analyse this problem, which only occurs in this particular<br />
WLAN to ETH h<strong>and</strong>over, more detailed tests <strong>and</strong> evaluation are required. However, the BU delay is not<br />
the reason for the packet loss, due to the established bicasting. A loss of 67 packets, as shown<br />
above, at the deployed packet rate of 40 ms would correspond to an interruption time of 2680 ms, which is far<br />
beyond the presented results. Due to the complexity of the overall system, more time for tests <strong>and</strong><br />
evaluation is required to analyse the problem. Extensive tests on all modules separately have been<br />
successfully performed; however, the evaluation of the integrated overall system requires more time.<br />
Since the correct functioning of the architecture has been shown, identifying the bug in this<br />
particular setting is not considered to add value to the proof of concept.<br />
4.9.2 <strong>Test</strong>bed in Stuttgart<br />
For the tests in Stuttgart, we perform the QoS negotiation before the measurements by starting a ping from the MN<br />
or from the CN after the registration.<br />
The description of the tests performed is the following:<br />
-MN (ksat67) registers into the corresponding network technology<br />
-We start monitoring with Ethereal<br />
-Start ping6<br />
-MN performs FHO<br />
4.9.2.1 Results<br />
We have done two different kinds of tests. Table 10 summarizes the results obtained when the MN<br />
performs an FHO while it pings the CN. Table 11 summarizes the results obtained when the MN performs<br />
an FHO while it is being pinged by the CN. We send echo requests every 50 ms, except for the cases where<br />
one of the two access technologies involved is WCDMA; for those cases we send echo requests every<br />
300 ms.<br />
All tests are done with real modules.<br />
N ORIG DEST MEASUREMENTS<br />
Eth WL WC Eth WL WC OL (ms) DL(pkt) DQ(ms) Ping interval (ms)<br />
1 X X 24,26 0 22,64 50<br />
2 X X 18,59 0 14,48 50<br />
3 X X 17,87 0 14,48 50<br />
4 X X 24,766 0 18,93 300<br />
5 X X 301,84 1 16 300<br />
6 X X 25,979 0 22,52 300<br />
7 X X 260,427 37 12,305 300<br />
Table 10: FHO results pinging from MN to CN<br />
N ORIG DEST MEASUREMENTS<br />
Eth WL WC Eth WL WC OL (ms) DL(pkt) DQ(ms) Ping interval (ms)<br />
1 X X 24,26 0 22,64 50<br />
2 X X 17,267 42 13,259 50<br />
3 X X 17,87 0 14,48 50<br />
4 X X 23,707 1 18,305 300<br />
5 X X 260,482 36 20,554 300<br />
6 X X 21,675 0 18,257 300<br />
7 X X 260,456 32 18,357 300<br />
Abbreviations:<br />
N Number of test<br />
Orig Technology from where the MN starts<br />
Dest Technology where the MN arrives<br />
Eth Ethernet<br />
WL Wireless LAN (802.11)<br />
WC WCDMA<br />
OL Overall latency<br />
DL Data loss<br />
DQ Delay of QoS manager - QoS broker communication<br />
Table 11: FHO results pinging from CN to MN<br />
DQ: The time to install the filters in the nAR must be added to this value to obtain the total delay due to QoS. See section 4.8.1 for<br />
more details.<br />
4.9.2.2 Analysis of results<br />
In Table 10 we see that the FHO from WCDMA to any other technology takes longer than in the other cases,<br />
<strong>and</strong> also that some packets are lost (tests number 5 <strong>and</strong> 7).<br />
In Table 11 we can see again that the FHO from WCDMA to any other technology needs more time, but we<br />
can also see some additional cases in which packets are lost.<br />
In the h<strong>and</strong>over from WLAN to ETH using a ping interval of 50 ms, we can see that 42 packets are<br />
lost. That means there is a break in the h<strong>and</strong>over process of around 2.1 seconds.<br />
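The break length follows directly from the loss count <strong>and</strong> the ping interval; a quick sketch covering both this case <strong>and</strong> the Madrid one:<br />

```python
# Interruption time implied by consecutive lost pings at a fixed send interval.
def interruption_ms(lost_packets: int, interval_ms: int) -> int:
    return lost_packets * interval_ms

print(interruption_ms(42, 50))  # 2100 ms: the WLAN to ETH case in Table 11
print(interruption_ms(67, 40))  # 2680 ms: the Madrid WLAN to ETH case (section 4.9.1.2)
```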
We now analyze some Ethereal captures for the FHO from WLAN to Ethernet.<br />
Figure 34: View from AR (1)<br />
Figure 35: View from AR (2)<br />
Figure 34 <strong>and</strong> Figure 35 show Ethereal captures from the AR. Packet #304 is the last FHO<br />
message. Packet #487 contains the BU sent to the CN. Packet #491 is the BU sent to the HA. We can see<br />
that there is a large delay between the moment when the MN receives the last FHO message <strong>and</strong> the moment<br />
when it sends the BUs to the HA <strong>and</strong> CN. This delay is around 2 seconds, <strong>and</strong> during this time the<br />
ping packets are lost.<br />
The problem is the same as in section 4.9.1.2.<br />
4.10 Paging<br />
4.10.1 <strong>Test</strong>bed in Madrid<br />
4.10.1.1 <strong>Test</strong> results<br />
N Orig Dest Ov. Meas. Ind. Meas<br />
Eth WL WC Eth WL WC OL D OL A DP DM<br />
1 X X 560*ms-1500**ms 280ms 280ms<br />
Table 12 Results for paging tests<br />
User-data traffic source: Ping from CN is done each 40ms<br />
Abbreviations:<br />
N Number of test<br />
Orig Technology where the MN enters dormant mode<br />
Dest Technology where the MN exits dormant mode<br />
Eth Ethernet<br />
WL Wireless LAN (802.11)<br />
WC WCDMA<br />
OL D Overall latency for a ping being the MN in dormant mode (echo reply arriving *in signalling packet, ** in data packet)<br />
OL A Overall latency for a ping (the MN is always active)<br />
DP Delay between arrival of initial packet at PA to forwarding of first buffered packet<br />
DM Delay between MN receiving paging request <strong>and</strong> sending of deregistration (dormant request with lifetime 0)<br />
Packets lost:<br />
[root@viudanegra root]# ping6 larva9.ipv6.it.uc3m.es -i 0.04<br />
PING 2001:720:410:1003:204:75ff:fe7b:921f(2001:720:410:1003:204:75ff:fe7b:921f) from<br />
2001:720:410:1002::81 : 56 data bytes<br />
64 bytes from 2001:720:410:1003:204:75ff:fe7b:921f: icmp_seq=14 hops=62 time=3.210 msec<br />
64 bytes from 2001:720:410:1003:204:75ff:fe7b:921f: icmp_seq=37 hops=62 time=3.639 msec<br />
64 bytes from 2001:720:410:1003:204:75ff:fe7b:921f: icmp_seq=38 hops=62 time=3.120 msec<br />
**********ok*********<br />
64 bytes from 2001:720:410:1003:204:75ff:fe7b:921f: icmp_seq=68 hops=62 time=3.145 msec<br />
64 bytes from 2001:720:410:1003:204:75ff:fe7b:921f: icmp_seq=69 hops=62 time=3.127 msec<br />
--- 2001:720:410:1003:204:75ff:fe7b:921f ping statistics ---<br />
70 packets transmitted, 34 packets received, 51% packet loss<br />
round-trip min/avg/max/mdev = 2.928/10.278/77.267/18.459 ms<br />
[root@viudanegra root]#<br />
Direct delays:<br />
HA -> PA PA -> MN MN -> HA MN -> CN<br />
0,13 ms 2,7 ms 2,7ms 3ms<br />
4.10.1.2 Analysis of test results<br />
The loss of the first packets (up to packet 37) is expected, since re-activating a mobile terminal from dormant mode<br />
implies registering with the network. Registration with the network (authentication, authorization <strong>and</strong><br />
location updating) has been synchronized with the paging implementation to avoid packet loss due to<br />
forwarding packets before being properly registered. Furthermore, to allow a mobile terminal<br />
to send packets towards the network infrastructure, the associated Access Router has to negotiate the<br />
QoS for the sending mobile terminal with the domain’s QoS Broker. Until this negotiation is done,<br />
packets sent by the MN, except signalling ones, are dropped by the QoSM in the AR. This QoS<br />
negotiation process is analyzed in D0503.<br />
Packet 14 is not lost because it contains the BU <strong>and</strong> is thus a signalling packet. So paging performs as<br />
expected. Still, a detailed analysis of the captures is very interesting <strong>and</strong> worth presenting.<br />
View from aranya (PA <strong>and</strong> A4C server); some packets are not displayed for ease of underst<strong>and</strong>ing:<br />
Figure 36. Paging test (in Madrid) capture (view from PA <strong>and</strong> AAA server)<br />
The first echo redirected from escarabajo (the HA) is packet #3. The PA starts paging the paging area where<br />
the MN is (packets 4, 5, 6). The answer from the AR the MN is attached to is received in packet 33.<br />
The PA then sends the buffered packets to the MN (packets 34, 35, 36, 37, 38).<br />
Before packet 33, the MN has performed its AAA registration: packets 24, 25 (NVUP dumped from the A4C server to the<br />
QoSB in packet 26), 29 <strong>and</strong> 30.<br />
Packets 39, 40 <strong>and</strong> 41 are auditing packets.<br />
View from the MN (some packets are not displayed for ease of underst<strong>and</strong>ing):<br />
Figure 37. Paging test (in Madrid) capture (view from MN)<br />
The MN receives paging messages from its AR (messages 1 <strong>and</strong> 2). Then it initiates the AAA registration<br />
process (messages 3, 5, 8 <strong>and</strong> 9).<br />
Once the AAA registration process is done, the MN sends the BU to the HA (messages 10 <strong>and</strong> 14). Message 10<br />
can go through the QoSM in the AR without any QoS negotiation because it is a signalling<br />
message.<br />
Once the two steps above are completed, the MN sends a paging message to the AR (message 16).<br />
Then the PA sends the MN the buffered packets (17, 18) <strong>and</strong> the MN answers. But the echo replies will<br />
be blocked by the QoSManager in the current MN Access Router (AR) <strong>and</strong> will not reach the CN until the QoS<br />
negotiation process completes in the AR. The exception is echo reply Nº 22, which carries the BU to the CN.<br />
This packet is treated as signalling <strong>and</strong> thus can go through the QoSM. It is marked by the DiffServ marking<br />
software as signalling (DSCP=0x22, erroneously displayed by Ethereal as 0x88). Packet 27 is the first<br />
non-signalling packet to reach the CN (because the QoS negotiation process has already concluded).<br />
DP components (DP = 280 ms, see Table 12):<br />
AAA registering: 200 ms (see section 4.4 for an analysis of this time)<br />
View from AR (capturing in core interface)<br />
Figure 38. Paging test (in Madrid) capture (view from AR core interface)<br />
Messages 3, 4, 5, 6 <strong>and</strong> 7 are paging messages seeking the dormant MN.<br />
Messages 8, 9, 10 <strong>and</strong> 11 are AAA registration messages.<br />
Messages 12 <strong>and</strong> 13 are the BU to the HA. Message 12 was not blocked by the QoSM because it is signalling.<br />
Message 14 is the message to the PA indicating that the MN is awake <strong>and</strong> registered.<br />
When the PA receives message 14, it starts sending the buffered packets to the MN (messages 15, 16, …);<br />
those messages are received by the MN, but the answers are blocked until the QoS negotiation takes place.<br />
Message 18 was not blocked by the QoSM because it is signalling (it contains the BU to the CN).<br />
Messages 23, 26 <strong>and</strong> 27 are QoS negotiation messages. The time taken here is exceptionally high, the<br />
normal value being 4 ms as written in section 4.7.1. The filters are installed in the QoSM, then the RPT message<br />
is sent. The QoS negotiation process has finished <strong>and</strong> now the messages from the MN can go through: messages<br />
29, 31, …<br />
Messages 39, 40, 20 <strong>and</strong> 21 are auditing messages.<br />
View from escarabajo (QoSB <strong>and</strong> PA):<br />
Figure 39. Paging test (in Madrid) capture (view from HA <strong>and</strong> QoSB)<br />
Message 3 is the first echo received. The HA redirects it to the PA (packet 4).<br />
Packet 7 is the NVUP from the A4C server to the QoSB.<br />
Packets 10 and 11 are the BUs.<br />
Packets 15, 16 and 17 are QoS negotiation messages. The time taken here is exceptionally high, the normal<br />
value being 4 ms as written in section 4.7.1.<br />
View from CN:<br />
Figure 40. Paging test (in Madrid) capture (view from CN)<br />
The first echo is message 4. No echo reply will be received until the paging, the AAA registration, the BU<br />
to the HA and the QoS procedures are completed by the MN and the QoSM in the AR. Message 13 is an echo<br />
reply carrying a BU to the CN; that way it is handled as signalling and can go through the QoSM in the AR<br />
before the QoS negotiation process is done.<br />
The first non-signalling packet to reach the CN is packet No. 22.<br />
4.10.2 Testbed in Stuttgart<br />
4.10.2.1 Test results<br />
N | Orig (Eth/WL/WC) | Dest (Eth/WL/WC) | Overall meas.: OL (ms) | Indirect meas.: DP (ms) | DM (ms)<br />
1 | x | x | 669,639* / 5712,304** | 1857,708 | 1730,317<br />
2 | x | x | 0,670* / 342,359** | 341,403 | 339,234<br />
3 | x | x | 122,210* / 2031,277** | 1271,524 | 1136,617<br />
4 | x | x | | |<br />
5 | x | x | | |<br />
Table 13: Results for paging test<br />
Abbreviations:<br />
N Number of test<br />
Orig Technology where the MN enters dormant mode<br />
Dest Technology where the MN exits dormant mode<br />
Eth Ethernet<br />
WL Wireless LAN (802.11)<br />
WC WCDMA<br />
OL Overall latency for a ping while the MN is in dormant mode (echo reply arriving *in a signalling packet, **in a data packet)<br />
DP Delay between arrival of initial packet at PA to forwarding of first buffered packet<br />
DM Delay between MN receiving paging request <strong>and</strong> sending of deregistration (dormant request with lifetime 0)<br />
Pings from the CN are sent every 300 ms.<br />
4.10.2.2 Analysis of test results<br />
For the tests in Stuttgart, as already mentioned, QoS is negotiated before doing the measurements. The<br />
consequence of this is that all the packets buffered in the Paging Agent go through the QoSM, <strong>and</strong> in<br />
some cases this leads to a reordering in the ping replies arriving at the CN. We can analyse this behaviour<br />
in a detailed view of the captures.<br />
Test #1: MN active in Eth, goes dormant and wakes up in WCDMA<br />
The output of ping with interval 300 ms at the CN is the following:<br />
[root@ksat46 root]# ping6 2001:638:202:11:20b:dbff:fe14:317f -i 0.3<br />
PING 2001:638:202:11:20b:dbff:fe14:317f(2001:638:202:11:20b:dbff:fe14:317f) from<br />
2001:638:202:11:204:76ff:fe13:b146 : 56 data bytes<br />
64 bytes from 2001:638:202:11:20b:dbff:fe14:317f: icmp_seq=5 ttl=63 time=669 ms<br />
64 bytes from 2001:638:202:11:20b:dbff:fe14:317f: icmp_seq=6 ttl=63 time=955 ms<br />
64 bytes from 2001:638:202:11:20b:dbff:fe14:317f: icmp_seq=1 ttl=63 time=2712 ms<br />
64 bytes from 2001:638:202:11:20b:dbff:fe14:317f: icmp_seq=7 ttl=63 time=884 ms<br />
64 bytes from 2001:638:202:11:20b:dbff:fe14:317f: icmp_seq=2 ttl=63 time=2455 ms<br />
64 bytes from 2001:638:202:11:20b:dbff:fe14:317f: icmp_seq=3 ttl=63 time=2204 ms<br />
64 bytes from 2001:638:202:11:20b:dbff:fe14:317f: icmp_seq=4 ttl=63 time=2012 ms<br />
64 bytes from 2001:638:202:11:20b:dbff:fe14:317f: icmp_seq=8 ttl=63 time=793 ms<br />
64 bytes from 2001:638:202:11:20b:dbff:fe14:317f: icmp_seq=9 ttl=63 time=524 ms<br />
64 bytes from 2001:638:202:11:20b:dbff:fe14:317f: icmp_seq=10 ttl=63 time=260 ms<br />
****OK****<br />
64 bytes from 2001:638:202:11:20b:dbff:fe14:317f: icmp_seq=50 ttl=63 time=136 ms<br />
64 bytes from 2001:638:202:11:20b:dbff:fe14:317f: icmp_seq=51 ttl=63 time=128 ms<br />
--- 2001:638:202:11:20b:dbff:fe14:317f ping statistics ---<br />
51 packets transmitted, 51 received, 0% loss, time 15468ms<br />
rtt min/avg/max/mdev = 128.658/373.035/2712.362/611.467 ms, pipe 9<br />
We can see that all the ping replies arrive, but there was a reordering.<br />
The first echo reply has icmp_seq number 5, and it carries the BU from the MN.<br />
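The reordering can also be verified mechanically by pulling the icmp_seq values out of the ping output; a small sketch (the sample arrival order is the one observed above):

```python
import re

def arrival_order(ping_output: str) -> list[int]:
    """Extract icmp_seq values in the order the replies were printed."""
    return [int(m) for m in re.findall(r"icmp_seq=(\d+)", ping_output)]

def reordered(seqs: list[int]) -> bool:
    """True if any reply arrived after a reply with a higher sequence number."""
    return any(a > b for a, b in zip(seqs, seqs[1:]))

# Arrival order observed at the CN in Test #1:
observed = [5, 6, 1, 7, 2, 3, 4, 8, 9, 10]
assert reordered(observed)  # the packets buffered at the PA (1-4) arrive late
```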
In Figure 41 we can see the captures from CN:<br />
Figure 41: View from CN<br />
Packet #2 is the echo request with ICMP_SEQ = 1<br />
Packet #7 is the echo request with ICMP_SEQ = 5<br />
Packet #8 is the echo request with ICMP_SEQ = 6<br />
Packet #10 is the echo reply with ICMP_SEQ = 5, which includes the BU<br />
Packet #13 is the echo request with ICMP_SEQ = 6<br />
Packet #14 is the echo reply with ICMP_SEQ = 1<br />
From this capture, we see that echo requests from 1 to 4 were buffered at the PA.<br />
Echo reply 5 contains the BU from the MN <strong>and</strong> is marked as signalling.<br />
Echo request 6 <strong>and</strong> the following ones are not buffered at the PA.<br />
Now in Figure 42 we can see the capture of the PA:<br />
Figure 42: View from PA<br />
Packet #7 is the echo request with ICMP_SEQ = 1, <strong>and</strong> it is buffered at the PA.<br />
After the reception of the first ping request, the Paging Agent starts polling the paging area (packets #8,<br />
#9, #10 <strong>and</strong> #11 sent to the different ARs).<br />
In the meantime, echo requests with ICMP_SEQ 2, 3 <strong>and</strong> 4 arrive at the PA (packets #12, #17 <strong>and</strong> #23)<br />
<strong>and</strong> they are also buffered.<br />
The PA gets the paging answer from ksat73 (WCDMA AR) in packet #28.<br />
At this point the PA forwards the 4 buffered packets (packets #29, #30, #31 and #32), which arrive at the MN<br />
out of order.<br />
In the MN we can see the following sequence (Figure 43):<br />
Figure 43: View from MN<br />
Packet #59 is the paging answer from the MN to the WCDMA AR.<br />
Packet #60 is the echo request with ICMP_SEQ = 5<br />
Packet #62 is the echo request with ICMP_SEQ = 6<br />
Packet #64 is the echo request with ICMP_SEQ = 1, which was buffered at the PA; that is why this<br />
request arrives out of order at the MN.<br />
4.11 Inter-domain H<strong>and</strong>over<br />
4.11.1 Results<br />
Table 14. Summary of measures for Inter-domain h<strong>and</strong>over<br />
N | QoS (D/R) | AAA (D/R) | Orig (Eth/WL/WC) | Dest (Eth/WL/WC) | Inter-D | Intra-D | Measure: PL<br />
1 X X X X X 0<br />
2 X X X X X 3<br />
Pings are sent every second.<br />
Abbreviations:<br />
N Number of test<br />
D 'dummy' module<br />
R 'real' module<br />
Orig Technology from where the MN starts<br />
Dest Technology where the MN arrives<br />
Eth Ethernet<br />
WL Wireless LAN (802.11)<br />
WC WCDMA<br />
Inter-D Inter-domain H<strong>and</strong>over<br />
Intra-D Intra-domain H<strong>and</strong>over (FHO)<br />
PL Packet loss<br />
The results of ping from MN to CN are shown as following:<br />
- Intra-domain handover (FHO). Ping interval = 1 second.<br />
[root@ksat54 root]# ping6 ksat46.ipv6.rus.uni-stuttgart.de<br />
PING ksat46.ipv6.rus.uni-stuttgart.de(ksat46.ipv6.rus.uni-stuttgart.de) 56 data<br />
bytes<br />
64 bytes from ksat46.ipv6.rus.uni-stuttgart.de: icmp_seq=0 hops=61 time=2.818 msec<br />
64 bytes from ksat46.ipv6.rus.uni-stuttgart.de: icmp_seq=1 hops=61 time=2.741 msec<br />
****OK****<br />
64 bytes from ksat46.ipv6.rus.uni-stuttgart.de: icmp_seq=20 hops=61 time=2.500 msec<br />
64 bytes from ksat46.ipv6.rus.uni-stuttgart.de: icmp_seq=21 hops=61 time=2.404 msec<br />
--- ksat46.ipv6.rus.uni-stuttgart.de ping statistics ---<br />
22 packets transmitted, 22 packets received, 0% packet loss<br />
round-trip min/avg/max/mdev = 2.404/2.816/3.299/0.224 ms<br />
- Inter-domain handover<br />
[root@ksat54 root]# ping6 ksat46.ipv6.rus.uni-stuttgart.de<br />
PING ksat46.ipv6.rus.uni-stuttgart.de(ksat46.ipv6.rus.uni-stuttgart.de) 56 data<br />
bytes<br />
64 bytes from ksat46.ipv6.rus.uni-stuttgart.de: icmp_seq=0 hops=61 time=2.976 msec<br />
64 bytes from ksat46.ipv6.rus.uni-stuttgart.de: icmp_seq=1 hops=61 time=2.528 msec<br />
****OK****<br />
64 bytes from ksat46.ipv6.rus.uni-stuttgart.de: icmp_seq=8 hops=61 time=2.823 msec<br />
64 bytes from ksat46.ipv6.rus.uni-stuttgart.de: icmp_seq=9 hops=61 time=2.775 msec<br />
64 bytes from ksat46.ipv6.rus.uni-stuttgart.de: icmp_seq=12 hops=61 time=252.822 msec<br />
64 bytes from ksat46.ipv6.rus.uni-stuttgart.de: icmp_seq=13 hops=61 time=2.897 msec<br />
****OK****<br />
64 bytes from ksat46.ipv6.rus.uni-stuttgart.de: icmp_seq=20 hops=61 time=3.083 msec<br />
--- ksat46.ipv6.rus.uni-stuttgart.de ping statistics ---<br />
21 packets transmitted, 19 packets received, 9% packet loss<br />
round-trip min/avg/max/mdev = 2.449/16.579/252.822/55.708 ms<br />
4.11.2 Analysis of results<br />
The test performed as expected. When the MN enters a new domain, it must re-register with<br />
its AAA home server and its home agent; also, there is no QoS context transfer. Before the new Care-<br />
of Address is valid and the QoS negotiation takes place, some packets are lost. For Fast Handover (see<br />
also section 4.9), no packets are lost.<br />
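With a fixed ping interval, the missing sequence numbers give a rough lower bound on the handover interruption; a sketch using the inter-domain trace above (21 requests, seq 10 and 11 lost, 1 s interval):

```python
def outage_estimate(received: set[int], sent: int, interval_s: float) -> float:
    """Rough handover interruption: the number of missing echo replies
    times the ping interval (a lower bound on the real outage)."""
    missing = [s for s in range(sent) if s not in received]
    return len(missing) * interval_s

# Inter-domain handover trace: 21 requests sent (seq 0..20), seq 10 and 11 lost.
received = set(range(21)) - {10, 11}
print(outage_estimate(received, 21, 1.0))  # 2.0 s interruption
```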
5. User tests<br />
5.1 VoIP<br />
5.1.1 Description of the test<br />
In this case, users are able to perform calls between themselves, evaluating the quality of the service (sound<br />
quality, delays and interruptions). It is also possible to measure any degradation of service introduced<br />
by handovers and, of course, to distinguish between classes of users.<br />
People / site: 2~2<br />
Software: RAT<br />
Machines: 1 server, 1 MN for each client<br />
Time per session: 10~30 min<br />
User evaluation: ease of usage; voice quality<br />
Expert evaluation: RTCP flow description (delay, jitter, packet loss)<br />
Concrete tests:<br />
Test 1: all users with the same high quality (EF), no other traffic exists. Calls between Madrid and Stuttgart. FHO in Madrid.<br />
Test 2: all users with the same (lowest) quality, no other traffic exists. Local calls in Madrid. FHOs.<br />
5.1.2 Realization<br />
<strong>Test</strong> beds: Madrid <strong>and</strong> Stuttgart<br />
AAAAC (with no Auditing), Metering<br />
QoS system (DiffServ, QoSM, QoSB)<br />
FHO system<br />
Mobile IPv6 in Madrid <strong>and</strong> Stuttgart<br />
MN in Madrid <strong>and</strong> MN in Stuttgart.<br />
User profiles:<br />
Both high quality users have EF service in their profile (mn_config file in AAAAC.h). Both low quality<br />
users have the EF substituted by S4 (BE).<br />
The S4 parameters have been changed for this test: the BW was changed from 32 kbps to 16 kbps. To do<br />
that we change the qos_config file in the Madrid AAAAC server (no need to change it in Stuttgart since<br />
that test bed is not involved in the second test). This file is transferred from the AAAAC server to the QoSB,<br />
which configures the QoSM in the ARs.<br />
qos_config file employed:<br />
#DstAd BW BurstSize Priority DSCP<br />
::0 32 11514 0 0<br />
::0 64 11514 0 2<br />
::0 256 11514 0 4<br />
::0 32 11514 1 2e<br />
::0 1 11514 2 22<br />
::0 256 11514 3 12<br />
::0 64 11514 4 0a<br />
::0 128 11514 4 0c<br />
::0 256 11514 4 0e<br />
Both MNs have EF as the class of service for RAT (universal MD DiffServ marking rules).<br />
The EF service in both domains has the same characteristics (MD defined).<br />
No roaming: both users are in their home domains.<br />
The DiffServ marking software in both MNs marks RAT with the EF service.<br />
WLAN-WLAN FHO in Madrid; no FHO in Stuttgart.<br />
RAT parameters:<br />
GSM codec 13.2 kbps.<br />
Frames | Coding delay | Efficiency | Effective BW<br />
1 | 20 ms | 22% | 60 kbps<br />
2 | 40 ms | 35% | 38 kbps<br />
4 | 80 ms | 52% | 25 kbps<br />
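The efficiency column is just the ratio of the 13.2 kbps codec rate to the effective bandwidth, and the coding delay grows linearly with the frames packed per packet. A sketch reproducing the table values:

```python
CODEC_KBPS = 13.2   # GSM codec payload rate (33 bytes every 20 ms)
FRAME_MS = 20       # one GSM frame covers 20 ms of audio

def coding_delay_ms(frames_per_packet: int) -> int:
    """Packing n frames into one packet delays the audio by n * 20 ms."""
    return frames_per_packet * FRAME_MS

def efficiency(effective_bw_kbps: float) -> float:
    """Fraction of the effective bandwidth that is codec payload."""
    return CODEC_KBPS / effective_bw_kbps

for frames, bw in [(1, 60), (2, 38), (4, 25)]:
    print(f"{frames} frame(s): delay {coding_delay_ms(frames)} ms, "
          f"efficiency {efficiency(bw):.1%} at {bw} kbps")
```

The 4-frame case works out to 52.8%, which the table rounds to 52%.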
5.1.3 Results<br />
As expected, the perceived quality was very good in both directions. The RTP statistics also showed<br />
a very good performance (0% packet loss and a delay of about 180 ms). However, a tiny but<br />
appreciable delay was introduced.<br />
For the second test, the quality was poor, but the conversation could still take place. The RTP statistics<br />
also showed a medium performance (20% packet loss and a delay of about 180 ms).<br />
During the tests the platform stability was very good, without many problems. Conversations were very<br />
long without any quality degradation. Six FHOs took place and the users did not notice any quality<br />
degradation, except in the 6th FHO, where the user in Madrid stopped hearing the user in Stuttgart for about 5<br />
seconds.<br />
5.1.4 Expert evaluation: Characterization of RAT traffic pattern<br />
A GSM codec was used in the tests with RAT, which generates about 13.2 kbps of UDP payload (33 bytes<br />
every 20 ms). Nevertheless, a rate of 32 kbps was not enough, as shown in Test 1: in this test<br />
48% of the packets were lost, so it seemed that a bandwidth of about 64 kbps is required. In order to<br />
measure the traffic pattern, the NISTNET emulator was used in two border routers of the Madrid trial site.<br />
NISTNET allows us to emulate some network characteristics (delay, packet loss, etc.), but it works only with<br />
IPv4, so an IPv6-in-IPv4 tunnel was created between these two routers. This tunnel does not affect the test<br />
performance, with the exception of a small added delay due to the IPv6-in-IPv4 tunnelling.<br />
After some tests we can say that the bandwidth needed by RAT is about 64 kbps (the same result we<br />
obtained from the RAT statistics using a 32 kbps allowed bandwidth).<br />
RAT generates about 13.2 kbps of application traffic, but the overall IPv6 rate is about 64kbps. This is<br />
due to IPv6 basic header, IPv6 Home Address dst. option, Routing Header, UDPv6 header, <strong>and</strong> RTP<br />
header:<br />
Ethernet header | IPv6 basic header | IPv6 Home Address dest. option | Routing header | UDP header | RTP header | VoIP payload (GSM codec)<br />
14 bytes | 20 bytes | 24 bytes | 24 bytes | 8 bytes | 12 bytes | 33 bytes<br />
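The step from 13.2 kbps of codec payload to the observed wire rate is plain header arithmetic over the sizes in the table; a sketch (the remaining gap up to the measured 64 kbps would be link-layer framing not listed in the table):

```python
# Per-packet sizes (bytes) from the table above; RAT sends one packet every 20 ms.
HEADERS = {
    "ethernet": 14,
    "ipv6_basic": 20,
    "home_address_option": 24,  # Mobile IPv6 destination option
    "routing_header": 24,       # Mobile IPv6 routing header
    "udp": 8,
    "rtp": 12,
}
PAYLOAD = 33        # one GSM frame
PACKETS_PER_S = 50  # one packet every 20 ms

def rate_kbps(total_bytes_per_packet: int) -> float:
    """Bit rate of a stream of fixed-size packets sent 50 times per second."""
    return total_bytes_per_packet * 8 * PACKETS_PER_S / 1000

print(rate_kbps(PAYLOAD))                          # 13.2 kbps of codec payload
print(rate_kbps(PAYLOAD + sum(HEADERS.values())))  # 54.0 kbps on the Ethernet
```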
Mobile IPv6 signalling adds a significant amount of overhead (48 bytes). Therefore, it is interesting to note<br />
that the Moby Dick EF class (32 kbps peak rate) defined for real-time services may not be suitable when<br />
mobility is involved.<br />
5.2 Video Streaming<br />
5.2.1 Description<br />
A point-to-point video streaming scenario is provided. Users are able to play remote video files,<br />
evaluating the quality of reception in different circumstances. The low priority queue in one cell will become<br />
saturated when a user receiving low priority traffic moves in; low priority users in that cell will be<br />
affected.<br />
People / site: 2~1<br />
Software: VideoLAN server and clients<br />
Machines: 1 server, 1 MN per user<br />
Time per session: 10 min<br />
User evaluation: quality of audio and video<br />
Expert evaluation: RTCP parameters (jitter, delay, loss)<br />
Concrete tests:<br />
Test 1: two kinds of users, high priority and low; no background traffic. FHOs.<br />
Test 2: perform the same tests as before, but adding a "large enough" low priority background traffic directed to the high priority user.<br />
5.2.2 <strong>Test</strong> Realization<br />
MNs (coleoptero and larva10) are attached to WLAN ARs (termita and cucaracha). The video stream<br />
server is a CN (chinche) attached to pulga, an Ethernet AR. No MD software runs in pulga.<br />
Background traffic is generated by the CN. We use mgen with a packet size of 300 bytes (UDP level) and 1000<br />
packets/s. The video is the Ice Age trailer (about 500 kbps). The application is VideoLAN.<br />
DiffServ Marking rules in the CN:<br />
Filter 1<br />
Description VideoHighPrio<br />
UDPDestPort 1234<br />
CoS 101110<br />
Filter 2<br />
Description VideoLowPrio<br />
UDPDestPort 5678<br />
CoS 010010<br />
Filter 3<br />
Description NoiseLowPrio<br />
UDPDestPort 33333<br />
CoS 010010<br />
For the first test (with no background traffic) both users watch the video while performing FHOs. FHOs in<br />
coleoptero are triggered because this MN is moving. Larva10 does not move, so to trigger the FHOs we<br />
change the signal thresholds.<br />
For the second test, we present the test sequence below.<br />
Step 1<br />
Larva10 attached to cucaracha, coleoptero to termita<br />
Larva10 receives a video from chinche with dscp=0x12 (AF) low priority<br />
coleoptero receives a video from chinche with dscp=0x2e (EF) high priority<br />
coleoptero receives noise from chinche with dscp=0x12<br />
Step 2<br />
coleoptero moves to cucaracha<br />
Step 3 (not included in the demo plan)<br />
coleoptero moves back to termita<br />
[Figure 44 consists of three network diagrams, one per step of the video streaming test, showing the Madrid<br />
test bed: the MOBYDICK domain behind grillo (CISCO 7500, towards GEANT/6bone), escarabajo hosting the<br />
Home Agent, QoS Broker and DNS, the Paging Agent and AAA server (larva9/aranya), WLAN ARs cucaracha<br />
(IPv6-WLAN) and termita (IPv6-WLAN2), Ethernet AR pulga (no MD software; PHB only for packets from<br />
core to access) and the CN chinche. Video is marked DSCP=0x2e or 0x12 and noise DSCP=0x12. In step 1<br />
larva10 is attached to cucaracha and coleoptero to termita; in step 2 coleoptero moves to cucaracha and the<br />
low priority queue in cucaracha becomes saturated; in step 3 coleoptero moves back to termita.]<br />
Figure 44. Video streaming test<br />
5.2.3 Results<br />
When no background traffic exists, both videos display well. FHOs are not noticed by the users.<br />
When we add the background traffic, and when it competes for resources with the low priority video<br />
(step 2), the quality of the low priority video gets very poor. Indeed, the video stops completely. The high<br />
priority video is not affected.<br />
When the low priority video no longer competes for resources with the background traffic (step 3), its<br />
quality increases and becomes very good, just like the quality of the high priority video. The quality<br />
of the low priority video takes a while to recover, corresponding to the time needed to resync the video.<br />
5.3 Internet radio<br />
5.3.1 Description<br />
A streaming server is installed so as to provide an uninterrupted flow of music. Users are able to listen to<br />
this stream, evaluating the quality of the sound in normal conditions and while handovers take place. It is<br />
possible to have different kinds of users, receiving different priority streams.<br />
People / site: 2~1<br />
Software: streaming server, clients<br />
Machines: 1 server, 1 MN per user<br />
Time per session: 30 min<br />
User evaluation: quality of sound<br />
Expert evaluation: billing model; behaviour of TCP during FHO<br />
Concrete tests:<br />
Test 1: high priority radio. Users listen to 3 or 4 songs and do FHOs.<br />
Test 2: same as before, but half the users receive the premium radio stream and half the low priority one.<br />
5.3.2 Realization<br />
The MNs are attached to WLAN ARs. The configuration of the queues in the QoSMs in the ARs is<br />
changed for this test to:<br />
***************************************************************************************************<br />
**** IPv6 QoS/Policing ACCESS ROUTER [MobyDick] (COPS CLIENT, 2 interfaces) version 3.7.28 ****<br />
****************************************************************************************************<br />
[root@termita root]# CONFIGURATION DECISION RECEIVED O.K.<br />
--------------------------------------------<br />
BEHAVIOUR TABLE received:<br />
DSCP | Agre. | B<strong>and</strong>width | BW Borrow | RIO min queue | RIO max queue | RIO limit queue| RIO drop|<br />
| Number| (kbps) | flag | length (kB) | length (kB) | length (kB) | prob.(%)|<br />
--------------------------------------------------------------------------------------------------<br />
0x00 | 0 | 300 | 1 | 0 | 0 | 0 | 0 |<br />
0x2e | 1 | 100 | 0 | 0 | 0 | 0 | 0 |<br />
0x22 | 2 | 100 | 0 | 50 | 100 | 120 | 10 |<br />
0x12 | 3 | 900 | 0 | 250 | 500 | 600 | 10 |<br />
0x0a | 4 | 300 | 0 | 150 | 300 | 400 | 10 |<br />
0x0c | 4 | 200 | 0 | 100 | 200 | 250 | 25 |<br />
0x0e | 4 | 100 | 0 | 50 | 100 | 120 | 50 |<br />
--------------------------------------------------------------------------------------------------<br />
------------------------------------------------------------------------------------<br />
** TCAPI QUEUES READY!!... Sending to BB a CONFIGURATION REPORT COMPLETED message **<br />
------------------------------------------------------------------------------------<br />
The change is done in the QoSB database and this configures the QoSM with the new values.<br />
The radio stream server is a CN attached to pulga, an Ethernet AR. No MD software is running in pulga.<br />
Low priority radio is marked 0x2e.<br />
High priority radio is marked 0x12.<br />
The program used is XMMS. The file sent is an mp3 file at 128 kbps. TCP is used to transfer the audio<br />
stream.<br />
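The behaviour of the two radio streams follows directly from the queue configuration shown earlier: the 128 kbps mp3 stream fits in the 900 kbps queue configured for DSCP 0x12 but not in the 100 kbps queue for DSCP 0x2e. A sketch with the bandwidth column of the behaviour table as data:

```python
# Bandwidth column of the QoSM behaviour table above (kbps per DSCP).
QUEUE_BW_KBPS = {0x00: 300, 0x2e: 100, 0x22: 100, 0x12: 900,
                 0x0a: 300, 0x0c: 200, 0x0e: 100}
STREAM_KBPS = 128  # the 128 kbps mp3 stream sent over TCP

def stream_fits(dscp: int, stream_kbps: float) -> bool:
    """True if a single stream fits within the queue's configured rate."""
    return stream_kbps <= QUEUE_BW_KBPS[dscp]

print(stream_fits(0x12, STREAM_KBPS))  # True: high priority radio plays cleanly
print(stream_fits(0x2e, STREAM_KBPS))  # False: low priority radio suffers cuts
```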
The marking rules in the stream server are:<br />
Both users (cjbc and acuevas) have the 0x2e and 0x12 services in their profiles. The payment scheme for<br />
both is the same (see HDTariffS1 and HDTariffS2):<br />
Figure 45. User profiles<br />
The charging applied is for DSCP=46 <strong>and</strong> HDTariffS1=10<br />
chargeH = (int)(100*(bytestoH+bytesfromH)/1048576);<br />
The charging applied is for DSCP=18 <strong>and</strong> HDTariffS2=21<br />
chargeH = (int)(250*(bytestoH+bytesfromH)/1048576);<br />
Their session lifetime is 150 s.<br />
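The two tariff formulas above charge a fixed number of units per MiB of traffic in both directions; a Python transcription (the session volumes in the example are hypothetical, for illustration only):

```python
# Tariff units per MiB, keyed by DSCP, transcribing the two formulas above.
RATE_PER_MIB = {46: 100,   # 0x2e, HDTariffS1
                18: 250}   # 0x12, HDTariffS2
MIB = 1048576

def charge(dscp: int, bytes_to_h: int, bytes_from_h: int) -> int:
    """chargeH = (int)(rate * (bytestoH + bytesfromH) / 1048576)"""
    return int(RATE_PER_MIB[dscp] * (bytes_to_h + bytes_from_h) / MIB)

# Hypothetical session: 10 MiB downstream, 1 MiB upstream.
print(charge(18, 10 * MIB, MIB))  # 2750 units for the high priority stream
print(charge(46, 10 * MIB, MIB))  # 1100 units for the same volume on 0x2e
```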
Initially both users choose to listen to the high priority stream. They do FHO. No accounting is done in<br />
this test.<br />
In the second test acuevas listens to the low priority stream and cjbc to the high priority one. Accounting is done.<br />
acuevas and cjbc are attached to termita. cjbc moves from termita to cucaracha and back. Users start their<br />
Moby Dick sessions and immediately after they start to receive the audio stream. When they are done,<br />
they stop the audio stream and close the Moby Dick session (pushing the Deregistration button in the NCP and<br />
thus stopping the periodic re-registration).<br />
5.3.3 Results<br />
When both users listen to the high priority stream, the quality is very good and the FHOs are not noticeable.<br />
When acuevas chooses to listen to the low priority stream the quality is low, with constant cuts. cjbc keeps<br />
listening to good quality music while doing some FHOs.<br />
The next figures show the charging information for acuevas and cjbc:<br />
Figure 46. Total charging for acuevas@ipv6.it.uc3m.es<br />
In the next figures, one day charging info is shown:<br />
Figure 47. Total charging for cjbc@ipv6.it.uc3m.es<br />
Figure 48. One day charging info for acuevas@ipv6.it.uc3m.es<br />
Figure 49. One day charging info for cjbc@ipv6.it.uc3m.es<br />
User cjbc performs some handovers, as shown in the figure. cjbc has different sessions in both the<br />
cucaracha and termita ARs. Since cjbc does some FHOs, some sessions are stopped before the Lifetime<br />
of the session expires.<br />
In the next figures charging details from one session of each user are shown:<br />
Figure 50. Charging acuevas session details<br />
Figure 51. Charging cjbc session details<br />
User cjbc receives high priority traffic and acuevas receives real-time traffic, which is cheaper.<br />
As can be seen, cjbc receives much more traffic than acuevas (because high priority traffic has more<br />
BW allocated than real-time traffic) and the DSCP employed (18=0x12) is more expensive, so the total<br />
amount to pay is much higher than for acuevas.<br />
Details of these types of traffic are shown in the next figures:<br />
Figure 52. Service Specifications (SLS)<br />
5.3.4 Expert evaluation: TCP during FHO<br />
A small issue related to TCP and FHO was detected during the tests. TCP sometimes tries to fill<br />
(depending on the application) the MSS (Maximum Segment Size), so it generates packets that fill the<br />
PMTU between the nodes involved in the communication. If this happens (e.g. MP3 streaming uses<br />
TCP), when an FHO is performed the oAR bicasts packets to the nCoA, encapsulating them in a tunnel. This<br />
tunnel adds a new IPv6 header to the packets, which can then exceed the PMTU between the oAR and the nAR,<br />
so these packets are dropped (Packet Too Big). This error causes the TCP connection to be<br />
aborted. This problem should be solved in a future release of the FHO software, but in order to run tests<br />
with TCP applications, a solution is to decrease the MTU of the interface of the CN.<br />
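The failure mode can be reproduced arithmetically: a full-MSS segment already fills the path MTU, and the bicast tunnel adds a 40-byte IPv6 encapsulation header, pushing the packet over the limit. A sketch of the check, including the workaround of lowering the CN interface MTU (the 1500-byte MTU is an assumed example value):

```python
IPV6_HEADER = 40  # bytes added by IPv6-in-IPv6 encapsulation during bicast

def fits_after_bicast(if_mtu: int, path_mtu: int) -> bool:
    """Does a full-MSS TCP segment survive the oAR -> nAR tunnel?
    TCP fills the MSS, so outgoing packets reach the interface MTU."""
    return if_mtu + IPV6_HEADER <= path_mtu

print(fits_after_bicast(1500, 1500))  # False: tunnelled packet is Packet Too Big
# Workaround used for the tests: lower the CN interface MTU by the tunnel overhead.
print(fits_after_bicast(1500 - IPV6_HEADER, 1500))  # True
```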
5.4 Internet coffee<br />
Now we consider the simulation of a real environment, where different kinds of users access a<br />
"cyber café", using the Internet while moving and having long inactivity periods. Users can do web<br />
surfing, file downloading, or play chess games. Real-time constraints are very loose. It should be noted<br />
that only a few IPv6 web servers are available.<br />
People / site: 1~4<br />
Software: web server, ftp server, IM server, clients<br />
Machines: 1 server, 1 MN for each client<br />
Time per session: 30 min<br />
User evaluation: navigation speed<br />
Expert evaluation: AAAC issues; bandwidth consumed; FHO signalling procedures<br />
Concrete tests:<br />
Test 1: users should both navigate along some web pages and download some files; some of them could play a chess match. Test system stability despite mobility and paging. Users should change their point of attachment 5 or 6 times and go to dormant mode when they are in a long inactivity period.<br />
5.4.1 Results<br />
Users were satisfied with the performance of the service provided. Alas, the low QoS requirements of the<br />
applications and the absence of multimedia elements impede "spectacular results", so users' comments<br />
were mostly related to the charging model applied. The ability to perform real-time consultations of the committed<br />
consumption was highly appreciated, as the user is able to assess whether the service is worth the money spent.<br />
Due to both the low QoS requirements and the "traditional nature" of the considered applications (users<br />
are used to ISPs' low-guaranteed provision), the "best effort" service is the most suitable for this kind of<br />
service. A better (and more expensive) service adds almost nothing to the user utility function.<br />
Successful handovers and paging procedures pass unnoticed by users.<br />
5.5 Quake 2 championship<br />
With this test, a proper, and amusing, test environment is provided. Up to four players on each trial site<br />
will be able to fight between themselves. FHOs will occur and their effects on game performance will be<br />
measured.<br />
People / site: 1~4 players<br />
Software: Quake 2 server, clients<br />
Machines: 1 server, 1 MN for each client<br />
Time per session: ~30 minutes<br />
User evaluation: game performance (via inquiry)<br />
Expert evaluation: charging, accounting; round trip time and packet loss (with a probe flow)<br />
Concrete tests:<br />
Test 1: users should start playing and get used to the game. Then test system stability despite mobility. Users should change their point of attachment 5 or 6 times during a game.<br />
Remarks:<br />
- if the game is played between two teams, they should be balanced between Madrid and Stuttgart<br />
(because players close to the server gain an advantage over the others)<br />
- each Quake client generates and receives approximately 15 kbps of traffic<br />
Moby Dick WP5 75 / 86
d0504.doc Version 1.0 15.01.2004<br />
(Figure elements: MNs running MGEN/DREC and Ethereal; a NISTNet "configurable link" between UC3M<br />
and U. Stuttgart, monitored with Ethereal; the Quake 2 server.)<br />
Figure 53. Scenario for Quake 2 test (note: Moby Dick "core nodes" (QoSB, AR, AAAC...) are not<br />
included in the figure, for the sake of simplicity).<br />
5.5.1 Results<br />
Users can play Quake 2 and perform handovers (both inter- and intra-technology) without being aware of<br />
movement: game performance is not affected at all by mobility.<br />
Quake 2 is a very good example of a 4G application: a real-time interactive game involving several<br />
players. Its real-time requirements also make it a good application for evaluating FHO performance.<br />
User evaluation (more than 10 students were involved in the tests) shows that FHO performance is really<br />
good, with users not perceiving the mobility of their terminals.<br />
Figure 54. Quake2 screenshot<br />
6. Conclusions<br />
The results presented show not only that Moby Dick fulfilled its goals, but also that the performance<br />
obtained is very high. Moby Dick offers the infrastructure of a 4G network provider over which packets<br />
can be transported with QoS, security, mobility and packet-based charging, allowing any kind of IPv6<br />
application to use the Moby Dick infrastructure. The promising results obtained will enable follow-up<br />
projects in which, for instance, a Moby Dick network provider can play the role of an aggregator of<br />
services.<br />
7. References<br />
[1] [Moby Dick] Moby Dick Web Site: http://www.ist-mobydick.org/<br />
[2] [D0101] Moby Dick Deliverable: "Moby Dick Framework Specification", delivered in October 2001, available at:<br />
http://www.ist-mobydick.org/deliverables/D0101.zip<br />
[3] [D0102] Moby Dick Deliverable: "Moby Dick Application Framework Specification", delivered in August 2002,<br />
available at: http://www.ist-mobydick.org/deliverables/D0102.zip<br />
[4] [D0103] Moby Dick Deliverable: "Moby Dick Consolidated System Integration Plan", delivered in June 2003,<br />
available at: http://www.ist-mobydick.org/deliverables/D0103.zip<br />
[5] [D0201] Moby Dick Deliverable: "Initial Design and Specification of the Moby Dick QoS Architecture", delivered in<br />
April 2002, available at: http://www.ist-mobydick.org/deliverables/D0201.zip<br />
[6] [D0301] Moby Dick Deliverable: "Initial Design and Specification of the Moby Dick Mobility Architecture", delivered<br />
in April 2002, available at: http://www.ist-mobydick.org/deliverables/D0301.zip<br />
[7] [D0302] Moby Dick Deliverable: "Mobility Architecture Implementation Report", delivered in December 2002,<br />
available at: http://www.ist-mobydick.org/deliverables/D0302.zip<br />
[8] [D0401] Moby Dick Deliverable: "Design and Specification of an AAAC Architecture Draft on administrative,<br />
heterogeneous, multi-provider, and mobile IPv6 sub-networks", delivered in December 2001, available at:<br />
http://www.ist-mobydick.org/deliverables/D0401.zip<br />
[9] [D0501] Moby Dick Deliverable: "Definition of Moby Dick Trial Scenarios", delivered in February 2002, available at:<br />
http://www.ist-mobydick.org/deliverables/D0501.zip<br />
[10] [D0501 Annex] Moby Dick Deliverable Annex: "Definition of Moby Dick Trial Scenarios Annex", delivered in September<br />
2002, available at: http://www.ist-mobydick.org/deliverables/D0501-Annex.zip<br />
[11] [D0502] Moby Dick Deliverable: "First Test and Evaluation Report", delivered in April 2002, available at:<br />
http://www.ist-mobydick.org/deliverables/D0502.zip<br />
[12] [D0503] Moby Dick Deliverable: "Second Test and Evaluation Report", delivered in January 2003, available at:<br />
http://www.ist-mobydick.org/deliverables/D0503.zip<br />
[13] [Moby Summit] http://www.it.uc3m.es/mobydick<br />
[14] [prism] http://www.intersil.com/design/prism/index.asp<br />
[15] [Ethereal] Ethereal Network Analyzer, available at: http://www.ethereal.com<br />
[16] [diameter] Stefano M. Faccin, Franck Le, Basavaraj Patil, Charles E. Perkins, "Diameter Mobile IPv6 Application",<br />
Internet-Draft, draft-le-aaa-diameter-mobileipv6-01.txt, November 2001<br />
[17] [diffserv] http://diffserv.sourceforge.net/<br />
[18] [interlink] http://www.interlinknetworks.com/<br />
[19] [NeTraMet] http://www2.auckland.ac.nz/net/Accounting/ntm.Release.note.html<br />
[20] [sals] Stefano Salsano, "COPS Usage for Outsourcing Diffserv Resource Allocation", October 2001.<br />
[21] [tcapi] http://oss.software.ibm.com/developerworks/projects/tcapi/<br />
[21] [tcapi] http://oss.software.ibm.com/developerworks/projects/tcapi/<br />
8. Annex: Public Demonstrations<br />
8.1 Mobile Summit in Aveiro, Moby Dick Demonstration<br />
At the 2002 IST Mobile & Wireless Communications Summit, which took place in Aveiro, Portugal, the<br />
Moby Dick project presented a test bed integrating all its key elements, with the exception of TD-CDMA.<br />
From June 15 to 18, the PTIn test bed was moved from the IT labs to the Summit site, in order to give<br />
visitors complete demos and a "Moby Dick project" experience.<br />
The demos included video broadcast in a wireless LAN environment, with FHO occurring during the<br />
broadcast. Visitors were invited to follow the status of the demo through two visualization tools<br />
developed specifically for this demo. The visualization tools showed the current location of the MN and a<br />
list of the currently allocated resources, and were later used by the project in more recent demos.<br />
Picture 1 - Booth 1 @ IST Mobile & Wireless<br />
Communication Summit<br />
The demo involved two users: one with a high-quality profile, named John Goode, and a second with a<br />
lower-quality profile, named Mary Sheep.<br />
A first login using Mary's profile showed a video with several failures. After re-registering with John's<br />
login, the video ran smoothly even during the FHO.<br />
Visitors were able to observe the registrations and the reservations made in the network through a<br />
visualization tool of the QoS Broker.<br />
Picture 2 - Booth 2 @ IST Mobile & Wireless<br />
Communication Summit<br />
During the handover, the visitor could follow the MN's movements through the visualization tool, and<br />
thus tell the moment the FHO occurred, although no losses were noticeable in the video.<br />
This demo attracted the attention of many Summit participants, with demos running throughout the<br />
whole Summit period. The national press covered the Summit, and Project Moby Dick caught their<br />
attention: a short interview was shown in a national television news bulletin (video available at<br />
http://mobydick.av.it.pt/videos/Summit2003-1.wmv).<br />
Picture 3 - FHO Occurring while one of our<br />
operators brings the MN from one booth to the<br />
other<br />
8.2 Moby Dick Summit in Stuttgart<br />
The project decided to combine the final audit with a project summit, and Stuttgart was chosen as the<br />
venue for this event. The summit officially closed the 6-month Moby Dick field trial, during which<br />
master students from University Carlos III de Madrid and the University of Stuttgart worked in a mobile<br />
network environment comprising Ethernet, WLAN and, in Stuttgart, a TD-CDMA test bed.<br />
The event featured speakers from industry, operators and academia. During the event, the following<br />
demonstration was shown to the visitors:<br />
o A user first registered on a TD-CDMA access router using a TD-CDMA-aware MN.<br />
o Live video was sent from a CN to the MN using VIC.<br />
o Seamless handover was performed between TD-CDMA, WLAN and Ethernet.<br />
o During the handover, the video kept playing without interruption.<br />
o Charging results were shown at the end of the demonstration.<br />
o Two visualization tools were used to show the FHO scenario and the current location of the MN.<br />
o See the following pictures.<br />
Figure 55: TD-CDMA Antenna<br />
Figure 56: Mobile Terminal with Live Video<br />
Figure 57: Visualization of FHO Scenario<br />
Figure 58: Visualization of MN Location<br />
Figure 59: Charging Results<br />
The summit itself was arranged as follows:<br />
09:00 - 09:30 Registration<br />
09:30 - 10:00 Opening, Keynote Speech<br />
Mobility Challenges – Part 1<br />
Presentations on Running and Closing Projects<br />
- OverDRiVE (Ralf Tönjes, Ericsson)<br />
- CREDO (Hong-Yon Lach, Motorola)<br />
11:00 - 11:15 Coffee break<br />
11:15 - 12:30 Mobility Challenges – Part 2<br />
Presentations on Running and Closing Projects<br />
- MODIS (Michel Mazzella, Alcatel Space)<br />
- Moby Dick (Jürgen Jähnert, University of Stuttgart)<br />
12:45 - 13:30 Lunch Break<br />
13:30 - 14:15 Demonstration<br />
14:15 - 16:15 Broadcast and Mobile Networks<br />
Presentations on Starting Integrated Projects<br />
- Maestro (Nicolas Chuberre, Alcatel Space)<br />
- Ambient Networks (Ulrich Barth, Alcatel)<br />
- E2R (Didier Bourse, Motorola)<br />
- Daidalos (Hans J. Einsiedler, T-Systems)<br />
16:15 - 16:25 Break<br />
16:25 - 17:00 The Future of Mobility and IP<br />
Panel discussion<br />
The number of visitors was around 50 people. The call for participation was announced via e-mail, the<br />
project's Web site and the Commission's Web site. The visitors came from all over Europe.<br />
Figure 60: Demonstration<br />
The summit, the demonstrations and the audit were very successful. The visitors and the speakers<br />
engaged in fruitful discussions, and the presentations were of a high level.<br />
Figure 61: Foyer - place of demonstration<br />
8.3 Moby Dick workshop in Singapore<br />
For the Moby Dick meeting and workshop in Singapore, held in December 2003, I2R proposed to host a<br />
demonstration testbed to showcase technology developed in the Moby Dick project. The testbed was<br />
intentionally small, because of both hardware and manpower restrictions, and the intention was to give<br />
the demonstration via two means: a WLAN-only registration and handover demonstration, including<br />
QoS support; and secondly, a video recording of tests done in Madrid by UC3M.<br />
Results:<br />
Some difficulties were faced because the exact hardware used at the Moby Dick trial sites was<br />
unavailable in Singapore. Moby Dick partners made this hardware available when they arrived in<br />
Singapore prior to the meeting; however, that left little time for final configuration and troubleshooting<br />
of the testbed.<br />
Unfortunately, the documentation of the Moby Dick software was unclear or incomplete in some places.<br />
As a result, a significant part of the demonstration testbed was not configured properly. It is clear that<br />
while the Moby Dick software works, it is not easy to set up, and takes a lot of time to fine-tune,<br />
especially when not all the experts on the individual technologies are present. From this experience, I2R<br />
would recommend that some effort be spent on making all the documentation mutually consistent and/or<br />
updated where necessary (i.e., where the documentation has not been kept in sync with changes in the<br />
software).<br />
In the end, the live testbed was abandoned and only video recordings of tests performed earlier were<br />
shown. This was nevertheless appreciated by the audience, and the Moby Dick demonstration scenarios<br />
clearly proved effective at showing the utility of the technology.<br />
People from South Korea, Japan, Taiwan and other places in Asia and Europe attended the workshop.<br />
8.4 Demonstration to high school students at UC3M<br />
University Carlos III of Madrid organised some informative public events for high-school students in<br />
order to show the university's activities. Two of these events took place during the Moby Dick<br />
evaluation period, so some of the Moby Dick activities were shown to the students as an example of<br />
telematics work. About 40 students in each visit, with no technical background at all, saw the Moby Dick<br />
infrastructure. The demonstration consisted of:<br />
o In the first visit, an example of Internet radio streaming was shown, together with a demo of FHO<br />
changing the access technology (inter-technology handover).<br />
o In the second visit (the students' profile was more telematics-related), an example of seamless video-<br />
streaming handover was shown. In this demo, users were able to check that there is no appreciable<br />
interruption during an inter-technology handover.<br />
These demos were very useful for Moby Dick user feedback, because users with no technical background<br />
or Moby Dick knowledge were able to evaluate our prototype. They saw how a 4th-generation network<br />
works, with more than one access technology involved and more valuable services.<br />