Parallel Environments (PE)

Below you will find information for each of the parallel environments that can be used when submitting jobs to HPCC clusters. Please note that the allowed parallel environments and their acceptable values differ from system to system. To check the available parallel environments for a queue, run the command "qconf -sq queue_name | grep pe_list" (where "queue_name" is the name of the queue you want to check).
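
For example, to check which parallel environments the Quanah omni queue accepts, you could run the command below; the output line shown is only illustrative, and the actual pe_list for your queue may differ:

qconf -sq omni | grep pe_list
pe_list               mpi sm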

Please ensure that you are using the correct allowed values for the cluster and queue you plan to use for your job.

Parallel Environments Per Cluster:

  • Quanah
  • Hrothgar (Ivy)
  • Hrothgar (Serial) [Currently not available]
  • Community Clusters

 

MPI

The mpi PE is used for scheduling MPI jobs, especially large ones. This PE guarantees that the user is allocated entire compute nodes to themselves, so no other jobs can consume resources on those nodes. Jobs using this PE will not be scheduled until enough compute nodes are available to fulfill the user's requested number of slots.

On Quanah, this PE is used to request slots in multiples of 36. The mpi PE has been designed to allow only one user's jobs on a node, so you must request entire 36-core nodes at a time.

On the Community Clusters, this PE can be used to request any number of slots, up to the total number of slots in the cluster. The mpi PE on the community clusters behaves similarly to the "fill" parallel environment and simply allows users to run on any available slots on any nodes in that cluster.

When to use: All large MPI jobs should use this PE. A user may also want to use this PE to prevent contention with other users' jobs.

Where to use: Quanah and Community Clusters only.

Examples:

-pe mpi 36

-pe mpi 72
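
As an illustration, a minimal submission script for a two-node (72-slot) MPI job on the Quanah omni queue might look like the sketch below. The job name, program name, and module line are placeholders or examples only; adjust them for your own job, then submit the script with "qsub your_script.sh".

#!/bin/bash
# Request the Quanah omni queue and two full 36-core nodes (72 slots).
#$ -V
#$ -cwd
#$ -N my_mpi_job
#$ -q omni
#$ -pe mpi 72

# Load an MPI implementation (module names vary; this line is only an example).
module load intel impi

# $NSLOTS is set by the scheduler to the number of slots requested above.
mpirun -np $NSLOTS ./my_mpi_program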

Fill

The fill PE greedily takes all of the available slots on a given node and then moves on to the next node until the job's slot requirement is met. For example, if a user requests 8 slots and a single machine has 8 slots available, the job will run entirely on that machine; if 5 slots are available on one host and 3 on another, the job will take all 5 slots on the first host and all 3 on the other. Jobs using this PE will not be scheduled until enough free slots are available across the cluster to fulfill the user's requested number of slots.

On the Community Clusters, this PE can be used to request any number of slots. The job's tasks may or may not run on the same machine; the PE will schedule tasks onto any open slots available.

When to use: This PE has been deprecated on Quanah and is no longer available there. It will eventually be removed from the community clusters as well. It is recommended that you use either the mpi or sm parallel environments instead.

Where to use: Community Clusters only.

Examples:

-pe fill 1

-pe fill 39
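
If you do still need fill on a community cluster, it is requested like any other PE. For instance, the command below asks for 8 slots spread across whichever nodes have them free; the queue and script names are placeholders:

qsub -q your_cluster_queue -pe fill 8 your_job.sh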

SM

The sm PE is used for jobs that need to run entirely on a single machine. Jobs using this PE will not be scheduled until a single compute node has enough free slots to fulfill the user's requested number of slots. If no node has enough slots in total for the request, the job will never be scheduled.

On Quanah, this PE is used to request any number of slots up to 36. This PE has been designed to ensure all of the work is contained on a single node, so the user must request 36 or fewer cores to ensure the job can run entirely on one node.

On the Hrothgar serial queue and Community Cluster queues, this PE is used to request any number of slots up to the number of slots available per node on your chosen portion of the cluster. Because the PE keeps all of the work on a single node, you must request no more than the number of cores per node to ensure the job can run entirely on one node. If you are unsure how many cores are available per node on your community cluster, please feel free to contact HPCC Support.

The available parallel environment for the Hrothgar serial queue is "sm". Your job should be submitted with "-pe sm 1" to run a single-core job on the serial queue. At present, the serial nodes can run up to 12 cores within a single node. Serial queue nodes have lower-speed connections to the storage fabric but are just as fast per core for jobs that fit within a single node.

When to use: A user should request this PE when all of their job's processes or threads need to run on the same machine. It is well suited to OpenMP or other threaded applications, or to cases where a user needs all of a node's resources (memory, local disk space, etc.).

Examples (respectively: a single-core job on any queue that supports sm, a full-node job on the Hrothgar serial queue, and a full-node job on the Quanah omni queue):

-pe sm 1

-pe sm 12

-pe sm 36
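
Similarly, a minimal submission script for a 36-thread OpenMP job on the Quanah omni queue might look like the following sketch; the job and program names are placeholders:

#!/bin/bash
# Request 36 slots on a single node through the sm parallel environment.
#$ -V
#$ -cwd
#$ -N my_openmp_job
#$ -q omni
#$ -pe sm 36

# Run one thread per requested slot; $NSLOTS is set by the scheduler.
export OMP_NUM_THREADS=$NSLOTS
./my_openmp_program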

Ivy

The ivy PE is used for scheduling all jobs on the Hrothgar ivy queue. This PE guarantees that the user is allocated entire compute nodes to themselves, so no other jobs can consume resources on those nodes. Jobs using this PE will not be scheduled until enough compute nodes are available to fulfill the user's requested number of slots.

On Hrothgar ivy, this PE is used to request slots in multiples of 20. The ivy PE has been designed to allow only one user's jobs on a node, so you must request entire 20-core nodes at a time.

When to use: All jobs submitted to the ivy queue should use this PE.

Where to use: Hrothgar ivy queue only.

Examples:

-pe ivy 20

-pe ivy 40
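
For instance, a minimal submission script for a two-node (40-slot) MPI job on the Hrothgar ivy queue might look like the sketch below; the queue name "ivy", the module line, and the program name are assumptions or placeholders, so adjust them for your own job:

#!/bin/bash
# Request two full 20-core ivy nodes (40 slots).
#$ -V
#$ -cwd
#$ -N my_ivy_job
#$ -q ivy
#$ -pe ivy 40

# Load an MPI implementation (module names vary; this line is only an example).
module load intel impi

# $NSLOTS is set by the scheduler to the number of slots requested above.
mpirun -np $NSLOTS ./my_mpi_program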
