
Parallel Environments (PE)

Below you will find information on each of the three parallel environments (PEs) that can be used when submitting jobs to HPCC clusters. Please note that the acceptable values differ from system to system, so make sure you check the allowed values for the cluster and queue you plan to submit your job to.

Parallel Environments:
Fill
MPI
SM


Fill

The fill PE will greedily take all of the slots on a given node and then move on to the next node until the slot requirement for the job is met. For example, if a user requests 8 slots and a single machine has 8 slots available, the job will run entirely on that machine. If 5 slots are available on one host and 3 on another, the job will take all 5 slots on the first host and all 3 on the second. Jobs using this PE will not be scheduled until enough slots are available to fulfill the user's requested number of slots.

On Quanah, this PE is used to request any number of slots. The job's tasks may or may not run on the same machine; the scheduler will place them in any open slots available.
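As a rough sketch (the job name and application are placeholders; the #$ directives are standard Grid Engine syntax), a Quanah submission script requesting 8 slots through the fill PE might look like:

    #!/bin/bash
    #$ -N fill_example           # job name (placeholder)
    #$ -cwd                      # run the job from the submission directory
    #$ -pe fill 8                # request 8 slots; they may span several nodes
    ./my_app                     # placeholder application

Submit it with "qsub fill_example.sh".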

On Hrothgar (normal queue), this PE is used to request slots in multiples of 12. This queue has been designed to allow only one user's jobs per node, so you must request entire 12-core nodes at a time.

On Hrothgar (ivy queue), this PE is used to request slots in multiples of 20. This queue has been designed to allow only one user's jobs per node, so you must request entire 20-core nodes at a time.
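A similar sketch for the Hrothgar ivy queue, requesting two full 20-core nodes (the value passed to -q is assumed to match the "ivy" queue name used above; the application is a placeholder):

    #!/bin/bash
    #$ -N fill_ivy_example       # job name (placeholder)
    #$ -cwd
    #$ -q ivy                    # assumed queue name for the ivy queue
    #$ -pe fill 40               # must be a multiple of 20 on this queue
    ./my_app                     # placeholder application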

When to use: This is a general-purpose PE. A user who needs to run many serial tasks should use this. MPI jobs can be run using this PE, but their tasks may not land on the same machine, and the scheduler could scatter them across nodes in an unpredictable way.



MPI

The mpi PE is used for scheduling MPI jobs, especially large ones. Regardless of the queue being used, this PE guarantees that the user will have the allocated compute node(s) to themselves. Jobs using this PE will not be scheduled until there are enough compute nodes available to fulfill the user's requested number of slots.

On Quanah, this PE is used to request slots in multiples of 36. The mpi PE has been designed to allow only one user's jobs per node, so you must request entire 36-core nodes at a time.

On Hrothgar (normal queue), this PE is used to request slots in multiples of 12. The mpi PE has been designed to allow only one user's jobs per node, so you must request entire 12-core nodes at a time.

On Hrothgar (ivy queue), this PE is used to request slots in multiples of 20. The mpi PE has been designed to allow only one user's jobs per node, so you must request entire 20-core nodes at a time.

When to use: All large MPI jobs should use this PE. A user may also want to use this PE to prevent contention with other users' jobs.
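As a hedged sketch of an MPI job on Quanah (the application is a placeholder; $NSLOTS is set by the scheduler to the number of slots granted):

    #!/bin/bash
    #$ -N mpi_example            # job name (placeholder)
    #$ -cwd
    #$ -pe mpi 72                # two full 36-core Quanah nodes
    mpirun -np $NSLOTS ./my_mpi_app   # launch one MPI rank per allocated slot

The same pattern works on Hrothgar by changing the slot count to a multiple of 12 (normal queue) or 20 (ivy queue).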



SM

The sm PE is used for jobs that must run entirely on a single machine. Jobs using this PE will not be scheduled until a node with enough free slots is available to fulfill the user's requested number of slots. If no node contains enough slots for the job, the job will never be scheduled.

On Quanah, this PE is used to request any number of slots up to 36. This PE has been designed to ensure all of the work is contained on a single node, so the user must request 36 or fewer cores so that the job can run entirely on a single node.
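A minimal sketch of a single-node Quanah job using this PE (the job name and application are placeholders):

    #!/bin/bash
    #$ -N sm_example             # job name (placeholder)
    #$ -cwd
    #$ -pe sm 16                 # 16 slots, all guaranteed to be on one node
    ./my_threaded_app            # placeholder application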

On Hrothgar (normal queue), this PE is used to request exactly 12 slots. This PE has been designed to allow only one user's jobs per node, so you must request exactly one full 12-core node and no more.

On Hrothgar (ivy queue), this PE is used to request exactly 20 slots. This PE has been designed to allow only one user's jobs per node, so you must request exactly one full 20-core node and no more.

When to use: A user should request this PE when they need all of a job's processes or threads to run on the same machine. This PE is well suited to OpenMP or other threaded applications, or to cases where a user needs all of a node's resources (memory, local disk space, etc.).
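For an OpenMP application, a common pattern is to size the thread count from the scheduler's slot count. A hedged sketch for the Hrothgar normal queue (the -q value is assumed to match the "normal" queue name used above; the application is a placeholder; $NSLOTS is set by the scheduler):

    #!/bin/bash
    #$ -N omp_example            # job name (placeholder)
    #$ -cwd
    #$ -q normal                 # assumed queue name for the normal queue
    #$ -pe sm 12                 # one full 12-core node
    export OMP_NUM_THREADS=$NSLOTS   # match threads to allocated slots
    ./my_openmp_app              # placeholder application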
