Molecular Simulation Methods with Gromacs
lambda_02
lambda_03
lambda_04
lambda_05
lambda_06

each of which contains:

    conf.gro
    grompp.mdp
    topol.top

Verify that the substitution worked correctly with

    grep init-lambda lambda_*/grompp.mdp

which should show something like

    lambda_02/grompp.mdp:init-lambda = 0.2
    lambda_04/grompp.mdp:init-lambda = 0.4

etc. for each λ point. We now need to pre-process each run with

    cd lambda_00
    grompp
    cd ../lambda_01
    grompp
    ...

Check the output of each run to be sure it was successful. At this point we are ready to run. The total run time will be about 5 minutes per λ point on 4 cores of a modern x86 (AMD/Intel) CPU. This means that we could run the jobs sequentially, but then we would have to wait 35 minutes, wasting a big opportunity for parallelization.

Instead, we are going to run the jobs in parallel, assuming that we have some kind of batch system we can submit jobs to. Because the system only has 1000 particles, scaling beyond 4 cores brings no real gains, while a typical modern compute cluster has 8 or more cores per node, so we will have to be a little clever about how we submit our jobs. Because we use fewer cores than a node provides, we can use the threaded version of Gromacs, which does not need MPI (a library and run-time environment for running parallel high-performance computing jobs over a network) to run in parallel. In many locations, Gromacs is installed such that mdrun runs the threaded version and mdrun_mpi the MPI version, though this may vary.
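Changing into each directory by hand quickly becomes tedious and error-prone. The pre-processing step can be scripted instead; the following is a minimal sketch, assuming the lambda_* directory layout above and that grompp picks up conf.gro, grompp.mdp and topol.top under their default file names:

    # Pre-process every λ point in turn; the subshell keeps each cd local,
    # so we always return to the parent directory afterwards.
    for dir in lambda_0*; do
        (cd "$dir" && grompp) || echo "grompp failed in $dir"
    done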
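To illustrate the parallel approach, here is a sketch of what the body of a single batch job could look like once every grompp call has succeeded. It assumes the threaded mdrun is in your path and accepts the -nt flag to set the thread count; if your site uses a different binary name or flag, adjust accordingly:

    # Launch each λ point as a 4-core threaded mdrun in the background,
    # then block until all of them have finished.
    for dir in lambda_0*; do
        (cd "$dir" && mdrun -nt 4 > mdrun.out 2>&1) &
    done
    wait

On a node with 8 or more cores this lets several 4-core λ points share the node instead of leaving cores idle; with fewer cores than λ points you would instead submit one such 4-core job per directory to the batch queue.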
