Texas Tech University

HPCC Facilities and Equipment

For a description of HPCC software and other services, see the separate HPCC software and services page.

Hardware

The High Performance Computing Center's (HPCC) hardware is located in three data centers. The main production clusters, Quanah and Hrothgar, along with the central file system servers and a number of smaller systems, are housed in the Experimental Sciences Building on the main TTU campus. Certain specialty clusters, including Weland, Realtime2, and Janus, are located at the Reese Center, several miles from the main campus. A final set of resources consists of TTU's portion of the Lonestar 5 cluster, comprising approximately 1,600 cores, operated at the Texas Advanced Computing Center in Austin.

These resources include both generally available public systems and researcher-owned private systems. Public nodes are available to any TTU researcher. Private nodes are owned by individual researchers and administered by HPCC. All of the generally available cluster resources operate under a weighted fair-share queueing system, which balances priorities so that newly submitted jobs compete favorably for upcoming batch queue slots against long-running job sequences. The TTU resources on Lonestar 5 are available by special request for competitive allocations to specific projects and serve as a "safety valve" for certain large-scale projects.
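As a generic illustration of how weighted fair-share scheduling works (the weights and decay formula below are one common formulation, shown for illustration only, and not necessarily the exact policy used on these clusters), a job's queue priority is typically a weighted sum of factors such as time spent waiting and the owning group's recent usage relative to its assigned share:

\[
\text{priority} = w_{\text{age}}\, f_{\text{age}} + w_{\text{fs}}\, f_{\text{fs}}, \qquad f_{\text{fs}} \approx 2^{-U/S},
\]

where \(U\) is the group's recent usage and \(S\) its assigned share. A group that has recently consumed more than its share sees \(f_{\text{fs}}\), and hence its priority, decay, so newly submitted jobs from lighter users move ahead of long-running job sequences.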

Community Clusters

For all of the general use resources, such as Quanah and Hrothgar, scheduling software is used to allocate computing capacity in a reasonably fair manner. If you need additional computing capacity beyond this and you are considering buying a cluster, talk with us about the community cluster option. Additions to the Community Cluster are subject to space or infrastructure limitations. Please check with the HPCC staff for the current status of the Community Cluster.

In the Community Cluster you buy nodes that become part of a larger cluster, and you receive priority access proportional to the number of nodes you purchased. We will house, operate, and maintain the resources for as long as they are under warranty, typically three years. Contact us for more details.

Dedicated Clusters/Servers

A dedicated cluster is a standalone cluster that is paid for by a specific TTU faculty member or research group. Subject to space and infrastructure availability, HPCC can house these clusters in its machine rooms and provide system administration support, UPS power, and cooling. For these clusters, HPCC system administration support is typically by request, with day-to-day cluster administration provided by the cluster's owner.

Campus

The newest cluster, Quanah, has 467 worker nodes with 36 cores each for a total of 16,812 cores, of which 16,092 are reserved for general use and 720 are owned by specific research groups. To connect with West Texas history, the cluster is named for Quanah Parker, and its internal management node Charlie is named for Charles Goodnight. Commissioned in early 2017 and expanded to its current size later that year, it is based on Dell C6300 enclosures holding four C6320 nodes each. Each worker node has dual 18-core Broadwell Xeon processors (36 cores per node) and 192 GB of memory. The software environment is based on CentOS 7 Linux, controlled by Intel HPC Orchestrator, and has a fully non-blocking Intel Omni-Path 100 Gbps fabric for MPI computing. The cluster is operated with a single queue, with jobs sorted by project to satisfy the needs of the participating research groups. Benchmarks show the performance of Quanah to be approximately 485 teraflops as of late 2017.
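As a rough consistency check on these figures (the 2.1 GHz base clock and 16 double-precision FLOPs per cycle per core are assumptions typical of 18-core Broadwell Xeons, not values stated above):

\[
467 \times 36 = 16{,}812 \text{ cores}, \qquad 16{,}812 - 720 = 16{,}092 \text{ general-use cores},
\]
\[
R_{\text{peak}} \approx 16{,}812 \times 2.1\ \text{GHz} \times 16\ \tfrac{\text{FLOP}}{\text{cycle}} \approx 565\ \text{TFLOPS},
\]

so the measured ~485 teraflops would correspond to roughly 85% of that assumed theoretical peak, a typical efficiency for LINPACK-style benchmarks.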

Hrothgar is an older Dell Linux cluster currently consisting of 630 total nodes and 8,246 total processing cores, of which 7,408 are made available for general use and the rest are owned by specific research groups. The Hrothgar cluster was initially built in 2011. Several updates have occurred since then, including the replacement of the core and leaf switches with QDR InfiniBand and the upgrade of approximately 100 nodes to newer Ivy Bridge Xeon systems. Most Hrothgar nodes contain two 2.8 GHz 6-core processors with 24 GB of memory; the rest, approximately 20% of the total, are denser 20-core and 32-core nodes. Roughly 90% of these nodes are connected to either a 20 Gbps or 40 Gbps InfiniBand fabric optimized for parallel computing, and the remainder are dedicated to serial processing. The Hrothgar parallel nodes in the current configuration have a total estimated peak rating of approximately 80 teraflops. In a nod to the first clusters of this type, the Hrothgar cluster and its internal management node Hygd are named for characters from the epic poem Beowulf.

The HPCC has a DataDirect Networks storage system capable of providing up to 2.5 petabytes of storage. This storage space is configured using Lustre to provide a set of common file systems to the Quanah and Hrothgar clusters. The file system uses a combination of LNet routers to bridge Omni-Path traffic to InfiniBand for the Quanah cluster, direct QDR InfiniBand to connect to the high-speed compute fabric on Hrothgar, and Gigabit Ethernet to connect to the Hrothgar serial nodes. In addition to the DDN central file system, a number of researchers and research groups have purchased dedicated servers for long-term data storage, typically in increments of tens of terabytes, that are also reachable by the same methods.
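As a hedged sketch of how such LNet routing is commonly configured (the interface names and addresses below are hypothetical placeholders, not the HPCC's actual values), a Lustre LNet router bridging two fabrics and a client behind it might use module options along these lines:

# /etc/modprobe.d/lustre.conf on an LNet router node (illustrative values only)
# The router is attached to both LNet networks and forwards traffic between them.
options lnet networks="o2ib0(ib0),o2ib1(ib1)" forwarding="enabled"

# /etc/modprobe.d/lustre.conf on a compute node that reaches the servers only
# through the router: route traffic for the remote network via the router's NID.
options lnet networks="o2ib1(ib0)" routes="o2ib0 10.10.0.1@o2ib1"

Serial nodes on Gigabit Ethernet would use the tcp LNet network type in the same way, instead of o2ib.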

Reese

The Janus, Weland and Realtime clusters are located at the off-campus Reese data center, which also houses some of the serial nodes for the Hrothgar cluster. Janus and Weland are also named after characters from the Beowulf story.

Janus is a Microsoft Windows HPC cluster with twelve 20-core nodes. This system is used for a small number of dedicated workloads that depend on specific licensed software that is not readily available for the Linux clusters. It is not intended to be a general Windows login system for the university, but rather to serve those specific workloads that require Windows HPC Server support.

Weland is a Linux cluster with sixteen 8-core nodes; each node contains two Xeon E5540 processors running at 2.53 GHz with 16 GB of main memory, for a total of 128 cores. It is primarily operated as a TechGrid resource to augment the campus cycle-sharing grid.

Realtime is a dedicated private weather modeling cluster owned by the Atmospheric Sciences group.

Additionally, a portion of the Hrothgar serial queue operates from the Reese data center.
