Set the environment variable PARA_ARCH to MPI. This will cause sysname to append the string _mpi to the system name, and scripts like jobex will take the parallel binaries by default. To call the parallel versions of the programs ridft, rdgrad, dscf, grad, ricc2, or mpgrad from your command line without an explicit path, expand your $PATH environment variable so that the directory with the parallel binaries comes first, as shown in the sketch below.
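A minimal sh/bash sketch of both settings; it assumes that $TURBODIR points to your TURBOMOLE installation and that the sysname script is already in your PATH:

   # select the MPI parallel binaries; sysname now reports a name ending in _mpi
   export PARA_ARCH=MPI
   # put the directory with the parallel binaries in front of the search path
   export PATH=$TURBODIR/bin/`sysname`:$PATH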
The usual binaries are then replaced by scripts that prepare the input for a parallel run and start the job (via mpirun, or poe on IBM) automatically. The number of CPUs that shall be used is chosen by setting the environment variable PARNODES. The default for PARNODES is 2.
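For example, to run on 8 CPUs (sh/bash syntax; the value 8 is only a placeholder for your actual CPU count):

   export PARNODES=8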
Finally the user can set a default scratch directory that must be available on all nodes. Writing scratch files to local directories is highly recommended; otherwise the scratch files will be written over the network to the same directory where the input is located. The path to the local disk can be set either with the $tmpdir keyword in the control file or with the environment variable $TURBOTMPDIR. A node-specific extension (e.g. /scratch/username/tmjob-001) is appended to this path to avoid clashes between processes that access the same file system. The jobs must have the permissions to create these directories. Therefore one must not set $TURBOTMPDIR to something like /scratch, which would result in directory names like /scratch-001 that usually cannot be created by jobs running under a standard user id.
Also make sure that the directories used for $TURBOTMPDIR and for the twoint file are not identical; otherwise the jobs would overwrite each other's scratch files.
To keep concurrent jobs apart, it is convenient to set $TURBOTMPDIR to a directory name which contains the process id of the job or the job id assigned by the queuing system, as sketched below.
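A sketch of such a per-job scratch setting in sh/bash; the path /scratch/$USER and the use of the shell's process id ($$) are assumptions about the local setup, and the node-specific suffix mentioned above is appended automatically:

   # per-job base name; the node-specific extension is added by the TURBOMOLE scripts
   export TURBOTMPDIR=/scratch/$USER/turbojob.$$
   # make sure the parent directory exists and is writable by the job
   mkdir -p /scratch/$USER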
When the -mfile <hostfile> option is used, it is important to delete the $tmpdir keyword from the control file and to unset $TURBOTMPDIR.
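A sketch of that cleanup step in sh/bash (the sed call is only an illustration of removing the $tmpdir line from control; any editor will do):

   # remove the $tmpdir keyword from the control file
   sed -i '/^\$tmpdir/d' control
   # make sure no scratch path is inherited from the environment
   unset TURBOTMPDIR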
On all systems TURBOMOLE uses the MPI library that has been shipped with your operating system. On Linux for PCs and Itanium2 systems, however, IBM Platform MPI (formerly known as HP-MPI and later Platform MPI) is used; see
http://www-03.ibm.com/systems/technicalcomputing/platformcomputing/products/mpi/index.html
COSMOlogic ships TURBOMOLE with a licensed IBM Platform MPI, so TURBOMOLE users do not have to install or license IBM Platform MPI themselves. The parallel binaries will run out of the box on the fastest interconnect that is found: InfiniBand, Myrinet, TCP/IP, etc.
The binaries that initialize MPI and start the parallel binaries (mpirun) are located in a subdirectory of the TURBOMOLE installation ($TURBODIR).
Note: the parallel TURBOMOLE modules (except ricc2) need an extra server running in addition to the clients. This server is included in the parallel binaries and is started automatically, but it results in one additional task that usually does not need any CPU time. So if you set PARNODES to N, N+1 tasks will be started.
If you are using a queuing system or if you give a list of hosts on which the TURBOMOLE jobs shall run (see below), make sure that the number of supplied nodes matches $PARNODES; e.g. if you are using 4 CPUs via a queuing system, make sure that $PARNODES is set to 4.
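In a batch script this is easiest to guarantee by deriving PARNODES from the scheduler's own count; the variable names below depend on the queuing system and are assumptions, not TURBOMOLE settings (SLURM_NTASKS for SLURM, NSLOTS for Grid Engine):

   # let PARNODES follow the allocation; fall back to 2, the TURBOMOLE default
   export PARNODES=${SLURM_NTASKS:-2}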
With some older versions of LoadLeveler on IBM systems, the total number of tasks must be set to $PARNODES + 1 (except for ricc2).