tem forms the basic infrastructure for remote task running, which makes it possible to interconnect our computing capacities in extensive international Grids. The security of the Globus system is based on PKI (we have already solved its linking with the Kerberos system). MetaCentrum provides much more extensive services than Globus, which is therefore not used internally.

In 2002, MetaCentrum supported several international activities, both within CESNET and for its members. The most important were the participation in the DataGrid project (see the separate section devoted to this issue), the support of the GridLab project (carried out at ÚVT MU, see www.gridlab.org), and the involvement in experiments within the international conference SC2002. In 2002, the conference was held in Baltimore (Maryland, USA) and, as usual, several challenges were announced. MetaCentrum, or its nodes, participated in two of them: the High Performance Computing Challenge and the High Performance Bandwidth Challenge (see also http://scb.ics.muni.cz/static/SC2002).

The High Performance Computing Challenge included three categories:

• Most Innovative Data-Intensive Application
• Most Geographically Distributed Application
• Most Heterogeneous Set of Platforms

We took part in the challenge as part of a team headed by Prof. Ed Seidel of the Albert Einstein Institute for Gravitational Physics, Potsdam (Germany), taking care of the security of the Grid. We succeeded in creating a Grid of 69 nodes in 14 countries with 7,345 processors, of which 3,500 were available for the purposes of the experiment. One of the nodes was a PlayStation 2 running Linux (located in Manchester). For the geographic locations of the Grid's nodes, see Figure 7.1. The application used was a distributed computation of certain black hole characteristics. Thanks to this Grid, we won two of the three categories listed above, namely the most distributed application and the most heterogeneous Grid.

The objective of the High Performance Bandwidth Challenge was to demonstrate the application requiring the largest data flows at the conference site; 3 × 10 Gbps were available. The team was led by J. Shalf from Lawrence Berkeley Laboratory (LBL) and made use of the same basic application as described above, only this time it served as a generator of primary data for visualization: the data were sent unprocessed and rendered directly on site on a cluster with 32 processors (each processor with a 1 Gbps connection, giving a theoretical capacity of 32 Gbps).

The data were generated primarily by large LBL and NCSA computers in the US, plus two systems in Europe: in Amsterdam (the Netherlands) and in Brno. While the Amsterdam cluster made use of a reserved transatlantic line (with a capacity of 10 Gbps), data from Brno were transmitted with the use of CESNET2, Géant
