Texas Tech University

HPCC RedRaider OS and Software Update 2025

 

This page contains information on the new OS and software environment implemented in Fall 2025 for the majority of the RedRaider cluster. Information is available for each of the topics below. Additionally, see our user guide on Transferring Data to learn about the new features enabled by our Texas Tech subscription to Globus Connect, which is now available to all HPCC account holders.

Overall considerations:

Software usage that relies on interpreted languages such as Python, or on packages installed locally in personal accounts following our guide on Installing a local custom copy of Python using Miniforge, should continue to work without modification on the updated partitions. All other workflows, including those that require module setup or use compiled applications, will need to be recompiled or reconfigured to make use of this new environment.

New HPCC interactive web portal

The HPCC Interactive Web Portal was announced, discussed, and demonstrated during the meeting, as covered in the slides and video linked below. The portal requires an on-campus or GlobalProtect VPN network connection and a valid HPCC account. Further documentation on the features of the new interactive portal is available here.

Login node ssh public key updates

Due to the upgrade, you may see the message “WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!” when connecting to the RedRaider login service nodes (login-20-25.hpcc.ttu.edu or login-20-26.hpcc.ttu.edu). If you receive this or a similar message from your ssh client software, you will need to remove the old host key from your system using one of the following methods:

  • On Linux and MacOS systems, this is done with the command ssh-keygen -f ~/.ssh/known_hosts -R (node), where (node) is one of the login service nodes mentioned above. This should be done for both login service nodes, as shown in the example after this list.
  • On Windows systems, or with GUI-enabled ssh client software, the warning is often followed by a prompt that allows you to switch to the new host key. Select the option in the popup window that corresponds to changing keys, or use the menu options or other features within the software to remove the old public key(s) and accept the replacements.
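
For example, on Linux or MacOS the old keys for both login service nodes can be removed with the following commands; the new host keys will then be offered for acceptance the next time you connect:

    ssh-keygen -f ~/.ssh/known_hosts -R login-20-25.hpcc.ttu.edu
    ssh-keygen -f ~/.ssh/known_hosts -R login-20-26.hpcc.ttu.edu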

The Quanah partition was converted to the new environment in Spring 2025. The new Nocona partition environment includes the same modules, plus the option of the aocc compiler from AMD in addition to the gcc and intel-oneapi compilers. The Matador partition now includes the ability to select from a wider variety of NVIDIA software, including newer CUDA versions.
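
As a minimal sketch, the available compilers can be discovered and loaded with standard Lmod commands; the package names and version numbers shown by module avail in your own session are authoritative and may differ from this example:

    module avail                  # list the cluster-wide modules visible to your session
    module spider aocc            # show every available version of a specific package, e.g. aocc
    module load gcc               # load the default version of a compiler, e.g. gcc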

HPCC User Group Meeting October 16, 2025 recording

A recording of the October 16, 2025 HPCC User Group Meeting explaining the upgrades and new features in detail is available, including chapter markers that let you jump directly to the portion you are interested in. (You may need to dismiss the extra sign-in pop-up by clicking "Not now" after signing in once to view the recording.) Slides from the meeting are also available here.

Nocona Partition 

The special reservation used to test the Nocona partition changes before the October 6-10 planned maintenance period is no longer needed. The entire Nocona partition has now been converted to make use of the new environment.

How to adjust jobs and software packages for the updated Nocona software environment:

  • R, Python, and Conda environments are expected to remain functional as before, except in rare cases in which Python or R packages must be recompiled against the new local GCC or OS library packages.

  • All C/C++/Fortran codes and other previously compiled software packages installed locally in your account that depend on previous modules or system libraries will need to be recompiled against the newly updated versions of GCC, Intel oneAPI (formerly known as Intel Parallel Studio), and/or OpenMPI provided in the new environment; a sketch of this step is given after this list. Previously compiled code is unlikely to work in the new OS environment without this step. This requires action on your part!

  • To view the full updated list of available cluster-wide software packages and compilers, visit the “HPCC RedRaider Cluster Software Packages” webpage either directly or by navigating to the HPCC website and selecting “RedRaider Cluster -> Software Packages” from the menu.

  • Following the earlier one-month transition period, HPCC staff remain available to work with researchers and to provide technical support for all account holders.
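
As a minimal sketch of the recompilation step, assuming a hypothetical MPI-based C code my_app.c; the module names are illustrative, so check module avail for the exact gcc and openmpi modules currently provided on Nocona:

    module purge                      # start from a clean module environment
    module load gcc openmpi           # load the updated compiler and MPI stack
    mpicc -O2 -o my_app my_app.c      # rebuild the application against the new libraries
    # For packages with their own build system, re-run the usual build steps instead, e.g.:
    #   ./configure --prefix=$HOME/software/my_app && make && make install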

The “matador” partition, along with its test partition, “gpu-build,” has been upgraded to Rocky Linux 9 and configured with the latest Nvidia GPU driver to support newer versions of the CUDA toolkit and a wide range of GPU-intensive scientific and AI/ML software applications.

As before, the “gpu-build” test partition includes one GPU worker node with an Nvidia V100 GPU device and is configured for multiple simultaneous logins to provide an interactive environment for testing and developing GPU and CUDA applications. Please continue using the “interactive” command from the login nodes as before to access this partition. See the “HPCC RedRaider Cluster Software Packages” webpage to list other currently available modules for the gpu-build and matador partitions. The modules listed there and below include a set of pre-built and containerized applications, along with tools for compiling your own CUDA code if required, or for testing your code before submitting a job to the Matador partition.
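
For example, a quick interactive test session on the gpu-build partition might look like the following sketch; this assumes the interactive wrapper accepts a partition flag (-p), so check interactive --help for the current options, and note that the GCC/CUDA versions are taken from the list further below:

    interactive -p gpu-build              # request an interactive session on the gpu-build test node
    module load gcc/12.2.0 cuda/12.3.2    # load one of the available GCC/CUDA combinations
    nvidia-smi                            # confirm the V100 GPU device is visible
    nvcc --version                        # confirm the CUDA compiler is available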

Please review the following points regarding the new changes and GCC/CUDA compatibility in the updated software environment before resuming your job submissions to these partitions.

  • The current and future Nvidia GPU driver updates on Matador nodes support all versions of the CUDA toolkit up to version 13.0. Please note, however, that CUDA 13.0 and later have discontinued support for Nvidia V100 and older GPU devices. Therefore, CUDA 12.x is the latest version supported on the matador and gpu-build nodes.

  • No version of CUDA is loaded by default in interactive sessions or batch jobs on the matador nodes. To access the cluster-wide CUDA toolkit packages, load the corresponding Lmod modules in your job submission scripts or at the start of an interactive session. Currently, the following CUDA versions are available on these partitions; select the command for the version of CUDA that you wish to use. (Note that these commands will only work once you have a session on a gpu-build or matador partition node, or in the context of batch jobs on those nodes; an example batch script is given after this list.)

    module load cuda/11.8.0
    module load gcc/12.2.0 cuda/12.3.2
    module load gcc/13.2.0 cuda/12.9.0
     
  • The HPCC does not support CUDA versions earlier than 11.8 or later than 12.x on the Matador partition. However, HPCC account holders are free to install any version of CUDA not listed above in their Home or Work areas, or through Conda/Miniforge channels for Python packages, if required. Please keep in mind, however, that CUDA 13.0 and later are not supported on the Matador partition.

  • If you use compiled C/C++/Fortran CUDA codes installed locally by you or your research group, please recompile them using the latest compatible versions of CUDA and GCC before resubmitting jobs based on those software packages. Failure to do so will result in job failures or unexpected job behavior.

  • Almost all Python modules and CUDA packages installed in Conda environments are expected to work on the updated Matador nodes, with some exceptions. However, we strongly recommend testing your CUDA/GPU Python scripts in interactive sessions to ensure they work correctly before submitting long-running jobs to the Matador partition.
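
As a minimal sketch of a batch submission using one of the GCC/CUDA combinations above, assuming a hypothetical CUDA source file vector_add.cu; the GPU request syntax and resource values are illustrative, so adjust them to the cluster's Slurm configuration and your application's needs:

    #!/bin/bash
    #SBATCH --job-name=cuda-test
    #SBATCH --partition=matador
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=1
    #SBATCH --gpus-per-node=1             # illustrative GPU request; use the site's accepted syntax
    #SBATCH --time=01:00:00

    module load gcc/12.2.0 cuda/12.3.2    # one of the CUDA/GCC combinations listed above
    nvcc -O2 -o vector_add vector_add.cu  # recompile the CUDA code against this toolkit
    ./vector_add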

 

Toreador partition

Changes to the "toreador" partition are similar to those for the "matador" partition described above. As this partition is currently 100% subscribed through purchased access by specific researchers, information regarding these changes will be conveyed directly to those account holders.


To view the full list of available cluster-wide software packages and compilers, please visit the “HPCC RedRaider Cluster Software Packages” webpage by navigating to the HPCC website and selecting “RedRaider Cluster -> Software Packages” from the menu.


As always, please do not hesitate to contact the HPCC support team via hpccsupport@ttu.edu if you have any questions or need assistance with software installation or adjusting your job submissions.

 

High Performance Computing Center