MPI, i.e. by exporting the corresponding environment variable in your shell. This causes sysname to append the string _mpi to the system name, and scripts like jobex will then take the parallel binaries by default. To call the parallel versions of the programs RIDFT, RDGRAD, DSCF, GRAD, RICC2, or MPGRAD from your command line without an explicit path, expand your $PATH environment variable to include the architecture-specific binary directory, as sketched below.
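A minimal sh/bash sketch of these two settings follows. The variable name PARA_ARCH and the path $TURBODIR/bin/`sysname` follow the usual TURBOMOLE conventions and are assumptions here, since the exact commands are not spelled out in this section; adapt them to your installation.

   export PARA_ARCH=MPI                        # switch sysname/jobex to the _mpi binaries
   export PATH=$TURBODIR/bin/`sysname`:$PATH   # put the parallel versions first in the search path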
The usual binaries are then replaced by scripts that prepare the input for a parallel run and start mpirun (or poe on IBM) automatically. The number of CPUs that shall be used can be chosen by setting the environment variable PARNODES. The default for PARNODES is 2.
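For example, to use four CPUs (sh/bash syntax; the value 4 is only illustrative):

   export PARNODES=4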
Finally, the user can set a default scratch directory that must be available on all nodes. Writing scratch files to local directories is highly recommended; otherwise the scratch files will be written over the network to the same directory where the input is located. The path to the local disk can be set with the environment variable TURBOTMPDIR. If TURBOTMPDIR is not set by the user, RIDFT will check for a /work directory; if such a directory is found, it will be used by default.
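For example (sh/bash syntax; /scratch/$USER is just a placeholder for a fast local directory that exists on every node):

   export TURBOTMPDIR=/scratch/$USER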
On all systems TURBOMOLE uses the MPI library that is shipped with your operating system. On Linux for PCs and Itanium2 systems, HP-MPI is used; see the installation instructions for details. The binaries that initialize MPI and start the parallel binaries (mpirun) are located in the corresponding directory of the TURBOMOLE installation.
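If in doubt which launcher will actually be picked up, a quick generic check from the shell (not specific to TURBOMOLE) is:

   which mpirun    # shows the mpirun that comes first in your current $PATH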
Note: the parallel TURBOMOLE modules (except RICC2) need an extra server running in addition to the clients. This server is included in the parallel binaries and will be started automatically, but this results in one additional task that usually does not need any CPU time. So if you set $PARNODES to N, N+1 tasks will be started.
If you are using a queuing system, or if you give a list of hosts on which the TURBOMOLE jobs shall run (see below), make sure that the number of supplied nodes matches $PARNODES; e.g. if you are using 4 CPUs via a queuing system, make sure that $PARNODES is set to 4.
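As an illustration, the relevant part of a batch script for a PBS-style queuing system might look as follows; the #PBS resource line is an assumption (this section does not prescribe a particular queuing system) and has to be adapted to your site:

   #PBS -l nodes=1:ppn=4
   export PARNODES=4    # must match the number of CPUs requested above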
On IBM systems the total number of tasks in the LoadLeveler script must be set to $PARNODES + 1, except for RICC2, which does not need the extra server task.
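A minimal sketch of the corresponding LoadLeveler keywords, assuming $PARNODES is set to 4 (the remaining lines of the job script are omitted):

   # @ job_type    = parallel
   # total_tasks is $PARNODES + 1 to account for the extra server task
   # @ total_tasks = 5
   # @ queue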