Texas Tech University

New Account Request

See general information on HPCC accounts here.

Welcome to the HPCC account creation page. An eRaider account must already exist in order to use this form. All account requests from students, staff members (including postdoctoral staff), and external research partners must also have a TTU faculty sponsor to proceed.

Notes:

  • After agreeing to the operational policies and data policies below, please sign in to the account request system using your TTU eRaider credentials to reach the HPCC account request form.
  • If you are a faculty member and would like to request access for an external research partner, please first follow the "Research Partner" link to obtain an eRaider account for your partner. Your partner can then use this form to request an account with the eRaider ID issued by the university. The HPCC does not manage the process of issuing eRaider IDs.
  • The HPCC Account Registration system will require you to provide a valid ORCiD identifier. For more information on creating an ORCiD identifier, please Follow This Link.

Please review the Operational Policies and Data Policies of the HPCC and click the "Agree" button at the bottom to certify that you agree to comply with these policies, then continue with your account request.

Operational Policies

 

General conditions for access

Per Texas statutes, TTU information resources are strategic assets of the state of Texas that must be managed as valuable state resources. As such, use of TTU information resources is subject to university OPs and other applicable laws. Unauthorized use is prohibited, usage may be subject to security testing and monitoring, misuse is subject to criminal prosecution, and users have no expectation of privacy except as otherwise provided by applicable privacy laws.


Login security

Access to HPCC resources relies on the TTU eRaider authentication system to check user credentials. All users log in to the HPCC clusters and transfer files via the HPCC Globus Connect data transfer service using their eRaider ID and password.
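
For example, to log in to a cluster from a command line (the login hostname below is a placeholder; see the HPCC user guides for the actual hostname of the cluster you are using):

# Log in with your eRaider ID; you will be prompted for your eRaider password
ssh your-eraider-id@cluster-login-hostname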

Cluster Internal Security

On first login to a cluster head node, SSH may ask you for a key passphrase. This key is used only within the cluster for communication between nodes under your account, and a passphrase is not generally needed for this case. Leaving the passphrase blank (hit the Enter key at this prompt) is acceptably secure for cluster operations and makes login to the compute nodes simpler. From the head node, you should be able to either ssh or rsh to all of the compute nodes in that cluster without a password. If either ssh or rsh prompts for a password when logging in from the head node to a compute node, please contact HPCC staff at hpccsupport@ttu.edu, as parallel software generally depends on passwordless login. More complex methods will be required if you set a non-blank SSH passphrase on the cluster head nodes. Remote shell access by ssh or rsh is only permitted within the HPCC clusters to nodes on which you have currently running jobs. MPI on the clusters also uses either ssh or rsh for data transmission.
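
As a quick check of passwordless login within a cluster, you can try the following from the head node; the compute node name is a placeholder and should be a node on which one of your jobs is currently running:

# Should print the compute node's hostname without prompting for a password
ssh compute-node-name hostname

If this command prompts for a password, please contact hpccsupport@ttu.edu as noted above.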

Please also read and observe the data access, permissions, and security policies below and on the TTU HPCC Data Policies page.

Access Permissions

By default on Linux systems, users have read, write, and execute permissions on the directories and files that they own, while those directories and files are often set by default to be readable and executable by other users, including users in the same group as the owner. Each user owns the directories /home/user-id, /lustre/work/user-id, and /lustre/scratch/user-id, as well as all files and directories under them. A user also owns any temporary files or directories in partitions on compute nodes if their jobs create temporary output there. If you are concerned about these permission settings, for example if you do not want others to read your files, you can change the permissions with the "chmod" command and appropriate options. For example:

chmod 700 (file)

or

chmod -R 700 (directory)


For details, please run "man chmod" to view the manual for the chmod command, or contact hpccsupport@ttu.edu. A more flexible way to set access control for files and folders is to use access control lists (ACLs). Please contact HPCC Support if you wish to learn more about how to use ACLs to control access to your files.
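
For example, assuming ACLs are enabled on the file system in question (please confirm with HPCC Support), the standard Linux setfacl and getfacl commands can grant a specific user access to a directory; the user ID and path below are placeholders:

# Give one collaborator read and traverse access to a single directory
setfacl -m u:collaborator-id:rx /lustre/work/your-eraider-id/shared-results

# Review the resulting access control list
getfacl /lustre/work/your-eraider-id/shared-results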

Examples of reasons to set stricter than normal permissions include protecting files from inadvertent sharing, such as homework or personal class activities, and protecting private keys such as those in your .ssh folder. In general, you should not assume that files on a shared cluster file system are private; you should take steps such as keeping sensitive data off of the cluster file systems and instead moving it to external storage under your direct control. You may also need to request deletion of any backup copies from the HPCC backup system, if applicable.
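
For example, to restrict your .ssh folder and private keys to yourself only (the key file name below assumes the default key type; adjust to match your own key files):

# Only the owner may access the .ssh folder and the private key
chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_rsa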

Regardless of the directory permissions, root users (HPCC staff and TTU security personnel) are permitted to access user files as needed for management of storage systems or for security-related investigations. Sponsoring faculty/staff can also request to access your files for purposes of continuity of research.

Data Policies

 

HPCC cluster data storage
Use of HPCC storage

The main function of the HPCC storage systems is to provide rapid access to and from the worker nodes of the HPCC clusters for data needed in high speed calculations.  For this reason, these systems are optimized for speed and are not intended for long-term or archival storage. We cannot guarantee that data will not be lost due to operational factors in the use of the clusters. As a result, it is the researcher's responsibility to back up their own important data externally.

The HPCC stores cluster-wide data on a set of resilient Lustre-based file systems, and backs up a limited amount of user data in home areas. We strongly encourage users to maintain an external copy of all data and not to use HPCC Lustre cluster-wide storage systems as the only copy of files critical to their research. In particular, the work, scratch and other special-purpose areas are not backed up and should not be used as the only long-term copies of important data.
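
For example, one way to maintain an external copy is to run rsync from a machine under your own control, pulling data from the cluster; the hostname and paths below are placeholders:

# Run on your own machine: copy a results directory from the cluster work area
rsync -av your-eraider-id@cluster-login-hostname:/lustre/work/your-eraider-id/project-results/ /backups/project-results/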

The HPCC is in the process of commissioning a new near-line backup storage system for users who do not have the capability to maintain their own backups, or who prefer to use our backup systems. Further information will be posted once this system has been commissioned.

In more detail, in the HPCC Lustre cluster-wide storage systems, the trade-off between size, speed, cost, and reliability is generally resolved in favor of large size at high speed with relatively low cost. Most of the cluster disk storage is composed of redundant arrays of inexpensive disks (RAID) to be resilient against single disk failures. There are nearly 100 such arrays operating at this time in the HPCC. In most cases, at least three disks in any given array must fail for data to be lost.

Please also read the general conditions for access in the TTU HPCC Operational Policies page.

Data Policy for Hrothgar and Quanah

On Hrothgar and Quanah,

  • The $HOME area for every user is backed up and is subject to usage quotas.
  • The $WORK area for every user is not backed up but is not purged, and is subject to usage quotas larger than those used in $HOME.
  • Special researcher-owned storage areas may be purchased by individual researchers or research groups and access permissions are managed according to their own policies. Backup may be provided optionally for purchase once the new backup system is commissioned.
  • The Scratch partition is subject to purging in order to keep the overall file system usage below 75% full.
    • If the overall Lustre file system becomes 75% or more full, the $SCRATCH area for every user is purged of its oldest files.  See the Purge Policy below for details. 
    • On a monthly basis the $SCRATCH area for every user is purged according to the Purge Policy - see below for details.


The following table summarizes the locations, their sizes and backup details.

Location, quota and backup summary

Location                    Size    Alias      Backup   Purged
/home/eraiderid             300GB   $HOME      Yes      No
/lustre/work/eraiderid      700GB   $WORK      No       No
/lustre/scratch/eraiderid   none    $SCRATCH   No       As needed to maintain <75% Lustre space usage (purges typically take place monthly)
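
To check your current usage against these quotas, the standard Lustre quota tool can be used from a login node; the mount points below are assumed from the paths in the table, and the exact commands available may vary by cluster:

# Your usage and quota in the Lustre work area
lfs quota -u $USER /lustre/work

# Overall scratch file system usage, relative to the 75% purge threshold
df -h /lustre/scratch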


Purge Policy

Individual files will be removed from /lustre/scratch/eraiderid ($SCRATCH) automatically if they have not been accessed in over 1 year.  To check a file's last access time (atime) you can use the following command: ls -ulh filename.
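
To see which of your scratch files have not been accessed in over a year, and are therefore candidates for the next purge, you can use a find command such as the following:

# List files under $SCRATCH whose last access time is more than 365 days ago
find $SCRATCH -type f -atime +365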

Users may not run "touch" commands or similar commands for the purpose of altering their file's timestamps to circumvent this purge policy.  Users who violate this policy will have their accounts suspended by HPCC staff.  This suspension may result in the user's jobs being killed.

HPCC Staff will monitor the overall Lustre space usage to ensure that it remains below 75% full.  On a monthly basis the $SCRATCH area for every user will be purged of all files that have not been accessed in over 1 year.  This monthly purge will be run regardless of the current level of scratch space usage.  In the event the Scratch partition goes above the 80% threshold, an immediate purge of every user's $SCRATCH space will be triggered and the oldest files for each user will be removed until we are well below the 80% threshold.

To help us avoid the need to shorten the retention period, please use the scratch space conscientiously. The Scratch partition should be used only for files that have no need for long-term retention; ideally, this period should be measured in days. The retention period is variable because it depends on overall usage, so proactively removing files that are no longer needed extends the retention time for yourself and other users.

At this time, and with current usage patterns, the file retention period on the scratch area can be expected to be at least several days and most likely up to a few weeks, but in no case will files stay on disk for more than a year since their last access. The scratch area should NOT be used for files that will be needed for longer periods.

We will keep the HPCC user community informed and give warnings if the expected retention period decreases significantly.

Additional Assistance

For additional assistance, please contact hpccsupport@ttu.edu.


High Performance Computing Center