(December 2002). This was one of the main prerequisites for the testing, as during a parallel run each processor participating in the computation allocates a single floating licence. TechSoft Engineering expressed an interest in the testing and its outcomes, and we expect that the response will not be limited to the country itself.

The tasks that we prepared represent a portfolio of typical tasks of varying complexity. The largest task concerns the simulation of the external aerodynamics of the Škoda Fabia Combi, with a computational grid comprising 13 million 3D cells. This grid was created on the Compaq GS140 supercomputer at ZČU Plzeň. Solving the task requires approximately 13 GB of memory, which exceeds the capacity of the supercomputer described above; the task is demanding even on a global scale. The other tasks are less demanding, but their parameters still rank them among the kind of tasks that users do not usually encounter.

We used FLUENT 6.0.20, installed in the AFS cell of ZČU Plzeň. During the testing we monitored the computing time of a single iteration (due to a lack of time, only several dozen computations were carried out, and not always the entire task), the task load time on the metacomputer in a selected configuration and, in some cases, also the start-up time of FLUENT on the metacomputer. In a distributed run FLUENT can be set up with different communication protocols, so we also varied these parameters during the testing. On the nympha cluster (in Plzeň) we had to downgrade the drivers of the special Myrinet communication cards to version 1.5, as FLUENT refused to cooperate with the newer drivers (1.5.1) that were originally installed.

Our original intention was to use the PBS batch system to administer the testing tasks running on the PC clusters; however, we succeeded only in a part of the tests. The main reason was the use of the openPBS version, which does not support running tasks with Kerberos (within MetaCentrum this is solved by the PBSpro batch system), so we had to use ssh instead of (Kerberized) rsh. With this setup, however, the Network MPI and Myrinet MPI communicators failed to function in FLUENT, which is why we carried out some of the testing outside PBS, with part of the PC clusters switched to a special mode.

We also saw failures in some tests that we wanted to run through PBS using a larger part of the PC clusters: PBS allocated the requested number of nodes correctly, but the task did not start up and the testing was not initiated. In this case, too, we carried out the testing outside PBS. The same problem appeared in situations where we needed eight or more nodes from each cluster (concurrently in Plzeň, Prague and Brno). We made use of the results of this testing while debugging the PBSpro system, so that the new version of the batch system is free of such problems.
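To illustrate the distributed runs described above, the following is a minimal sketch of how a FLUENT job could be launched from inside a PBS allocation. It is not one of our testing scripts: the journal-file and host-file names and the default communicator value are hypothetical, and while the -t, -p, -cnf, -g and -i options follow common FLUENT 6 command-line usage, the accepted communicator names depend on the local installation.

    #!/usr/bin/env python
    # Hypothetical wrapper: builds a distributed FLUENT 6 invocation from the
    # node list that PBS provides in $PBS_NODEFILE.  File names and the default
    # communicator value are illustrative only.
    import os
    import subprocess

    # PBS exports the allocated nodes, one host name per line.
    with open(os.environ["PBS_NODEFILE"]) as f:
        hosts = [line.strip() for line in f if line.strip()]

    nprocs = len(hosts)  # each compute process allocates one floating licence

    # FLUENT reads the hosts it may start compute processes on from a file.
    with open("fluent_hosts.cnf", "w") as f:
        f.write("\n".join(hosts) + "\n")

    # Communication protocol, e.g. a socket-based Network MPI or the Myrinet
    # MPI used on the nympha cluster; valid names depend on the installation.
    comm = os.environ.get("FLUENT_COMM", "net")

    cmd = [
        "fluent", "3d",
        "-t%d" % nprocs,            # number of compute processes
        "-p%s" % comm,              # communication protocol
        "-cnf=fluent_hosts.cnf",    # host configuration file
        "-g",                       # run without the GUI
        "-i", "test_case.jou",      # journal file driving the iterations (hypothetical)
    ]
    subprocess.call(cmd)

Such a script would be submitted with qsub when PBS was used; for the runs carried out outside PBS, the list of hosts would have to be supplied by hand instead of being taken from PBS_NODEFILE.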
