CASINO manual - Theory of Condensed Matter
--no-cluster | -l
  Run the calculation directly on the login node of a cluster without
  producing a submission script. This option is applied before any others,
  and makes this script behave as if the machine were a multi-core
  workstation.
--nnode=<nnode> | -n <nnode>
  Set the number of physical, (possibly) multi-core nodes to use to <nnode>.
  This will have to be consistent with the requested numbers of processes
  and processes per node, and with the machine information in the relevant
  .arch file.
--walltime=<walltime> | -T <walltime>
  Set the wall-time limit for the run to <walltime>, given in the format
  [<days>d][<hours>h][<minutes>m][<seconds>s].
--coretime=<coretime> | -C <coretime>
  Set the core-time limit (i.e., wall-time times number of reserved cores)
  for the run to <coretime>, given in the format
  [<days>d][<hours>h][<minutes>m][<seconds>s].
--name=<name> | -N <name>
  Set the submission script name and job name to <name>.
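The time format accepted by --walltime and --coretime (e.g. 1d2h30m for one
day, two hours and thirty minutes) can be sketched as a small POSIX-shell
helper. This is an illustration only, not a function provided by runqmc, and
it assumes well-formed input:

```shell
# Convert a time string in the [d][h][m][s] format, e.g. "1d2h30m",
# into a number of seconds. Input is assumed well-formed.
parse_walltime() {
  s=$1
  total=0
  while [ -n "$s" ]; do
    num=${s%%[dhms]*}          # leading digits of the current field
    rest=${s#"$num"}           # unit letter plus the remainder
    unit=${rest%"${rest#?}"}   # first character of rest (the unit)
    s=${rest#?}
    case $unit in
      d) total=$((total + num * 86400)) ;;
      h) total=$((total + num * 3600)) ;;
      m) total=$((total + num * 60)) ;;
      s) total=$((total + num)) ;;
    esac
  done
  echo "$total"
}

parse_walltime 1d2h30m   # prints 95400 (1 day + 2 hours + 30 minutes)
```

So, for example, --walltime=1d2h30m requests a limit of 95400 seconds.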
Options available on some individual machines
If you type runqmc --help you may see appended to the above list a set of
additional user-definable options; these all take the form of
‘--user.option’ (and obviously a CASINO_ARCH must be defined in order to see
these). Such options are defined in the relevant .arch file using the syntax
of Appendix 5. The arch system was designed to be readily extensible on a
per-machine basis without having to modify any part of the script system
that handles it, and thus only variables associated with core functionality
are pre-defined; e.g., even the (widely applicable but Linux-specific) nice
value of a job on a workstation is handled as --user.nice.
Examples of such options include but are not limited to:
--user.account
  Name of account under which to run the job (note that it's very useful to
  alias runqmc='runqmc --user.account=my_account ' on machines which require
  it).
--user.nice
  Unix 'nice' value to use for the job on workstations.
--user.queue
  Name of queue on which to run the job.
--user.mem
  (Over-)estimate of the memory usage per compute node in MB.
--user.shmemsize
  When running in shared-memory mode [on a Blue Gene machine] you are
  required to set the size of the shared-memory partition when launching a
  job (through the value of an environment variable named
  BG_SHAREDMEMPOOLSIZE (Blue Gene/P) or BG_SHAREDMEMSIZE (Blue Gene/Q)).
  The value of this number (in MB) may be set on the command line as the
  value of --user.shmemsize. See the CASINO FAQ for further details.
--user.messagesize
  Size of MPI message in bytes below which the 'eager' (truly non-blocking)
  protocol is used instead of the 'rendezvous' (partially non-blocking)
  protocol. Perfect parallel scaling at very large numbers of MPI processes
  (> c. 20,000) with CASINO supposedly requires the non-blocking message
  passing to be truly asynchronous (i.e., communication takes place at the
  same time as computation). The default value of MESSAGESIZE on [a
  Blue Gene/P machine] (1200 bytes) is generally too small, as CASINO config
  messages typically range up to around 50000 bytes. Thus some advantage may
  be gained on large numbers of cores by setting MESSAGESIZE to be larger
  than the default.
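As an illustration, the generic and machine-specific options above can be
combined in a single runqmc invocation. The node count, time limit, queue
name and memory estimate below are made-up values, and which --user.*
options are available depends entirely on the machine's .arch file:

```
runqmc --nnode=8 --walltime=12h --user.queue=regular --user.mem=2000
```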