Texas Tech University

Student/External account request

 

Welcome to the HPCC account creation page.

Note: An eRaider account must already exist to use this form. If you are a student or an external research partner with a current eRaider account, please proceed. If you are a faculty or staff member and would like to request access for an external research partner, please follow the "Research Partner" link FIRST to obtain an eRaider account for your partner so that they will be able to use this form to make an account request.

Please review the Operational Policies and Data Policies of the HPCC, click the "Agree" button at the bottom to certify that you agree to comply with these policies, and then continue with your account request.

Operational Policies


General conditions for access

Per Texas statutes, TTU information resources are strategic assets of the state of Texas that must be managed as valuable state resources. As such, use of TTU information resources is subject to university OPs and other applicable laws. Unauthorized use is prohibited, usage may be subject to security testing and monitoring, misuse is subject to criminal prosecution, and users have no expectation of privacy except as otherwise provided by applicable privacy laws.


Login security

The HPCC relies on the TTU eRaider authentication system to check user credentials on our systems. All users log in to the HPCC clusters with their eRaider username and password.
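For example, from a terminal on your own machine, a login session looks like the following (the cluster hostname shown is only a placeholder; use the login node address given in the documentation for the cluster you wish to use):

  # Replace <cluster-hostname> with the actual login node address.
  # Enter your eRaider password when prompted.
  ssh eraiderid@<cluster-hostname>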

HPCC systems have RSA key authentication enabled for passwordless login. You may choose whether to enable this on each system by adding RSA public keys from remote systems to ~/.ssh/authorized_keys. Please be careful: if enabled, this allows an intruder to enter all of your accounts once any one of them is compromised. We strongly suggest that you do not add a key for a system that is itself insecure (MS Windows, security not up to date, telnet enabled, etc.), as this gives intruders user-level access to HPCC systems, which can then be escalated to root access and total control.
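As a rough sketch of how such a key is set up using a standard OpenSSH client (the hostname is a placeholder, and this should only be done from a trusted, well-maintained machine):

  # On the trusted remote machine: generate a key pair if you do not already
  # have one. Protecting the private key with a passphrase is recommended.
  ssh-keygen -t rsa -b 4096

  # Append the PUBLIC key to ~/.ssh/authorized_keys on the HPCC system.
  ssh-copy-id eraiderid@<cluster-hostname>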

Cluster Internal Security

On first login to a cluster head node, SSH may ask you for a key passphrase. A passphrase is not generally needed here; leaving it blank (hit the Enter key at this prompt) is reasonably secure and makes login to the compute nodes simpler. From the head node, you should be able to either ssh or rsh to all of the compute nodes in that cluster without a password. If either ssh or rsh prompts for a password when logging in from a cluster head node to a compute node, please contact HPCC staff at hpccsupport@ttu.edu, as parallel software generally depends on passwordless login. More complex methods will be required if you have set a non-blank SSH key passphrase on the cluster head nodes.
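A quick way to check that passwordless login within a cluster is working (the compute node name below is a placeholder; actual node names differ between clusters):

  # From the cluster head node: this should print the compute node's hostname
  # without prompting for a password.
  ssh <compute-node-name> hostname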

Remote shell (rsh) only works within each cluster. If you are extremely concerned about security, you may wish to use only ssh within clusters. rsh is faster because it does not encrypt each transmission, but those transmissions can be intercepted and decoded. This is generally not an issue, since root access to the cluster is required to intercept the messages, and an intruder who already had root access would have no need for such interception.

MPI on clusters also uses either ssh or rsh for data transmission.

Please also read and observe the data access, permissions, and security policies on the TTU HPCC Data Policies page.

Access Permissions

By default on Linux systems, users have read, write, and execute permissions on the directories and files that they own, while those directories and files are readable and executable by other users, including users in the same group as the owner. A user owns the directories /home/user-id, /lustre/work/user-id, and /lustre/scratch/user-id, as well as all files and directories under them. A user also owns any temporary files or directories in /state/partition1 on compute nodes, if their jobs create temporary output there. If you are concerned about these permission settings, for example if you do not want others to read your files, you can change the permissions with the "chmod" command and appropriate options. For details, run "man chmod" to view the chmod manual page, or contact hpccsupport@ttu.edu.
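For example, to keep other users out of a particular directory you own (the directory name below is only an illustration):

  # Remove all group and other permissions from the directory and its contents.
  chmod -R go-rwx /home/eraiderid/private-project

  # Verify the resulting permissions.
  ls -ld /home/eraiderid/private-project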

Examples of reasons to set stricter-than-normal permissions include protecting files from inadvertent sharing, such as homework or other personal coursework, and protecting private keys such as those in your .ssh folder. In general, you should not assume that files on a shared cluster file system are private; for sensitive data, you should take steps such as keeping it off the cluster file systems entirely and moving it to external storage under your direct control. You may also need to request deletion of any backup copies from the HPCC backup system, if applicable.
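For instance, the permissions on your .ssh folder and private keys can be checked and tightened as follows (assuming an RSA key named id_rsa; adjust the filenames to match your own keys):

  chmod 700 ~/.ssh              # directory accessible only by you
  chmod 600 ~/.ssh/id_rsa       # private key readable and writable only by you
  chmod 644 ~/.ssh/id_rsa.pub   # public key may remain world-readable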

Regardless of the directory permissions, root users (HPCC staff and TTU security personnel) are permitted to access user files as needed for management of storage systems or for security-related investigations. Sponsoring faculty/staff can also request to access your files for purposes of continuity of research.

Data Policies


Use of HPCC storage

The main function of the HPCC storage systems is to provide rapid access to and from the worker nodes of the HPCC clusters for data needed in high speed calculations.  For this reason, these systems are optimized for speed and are not intended for long-term or archival storage. We cannot guarantee that data will not be lost due to operational factors in the use of the clusters. As a result, it is the researcher's responsibility to back up their own important data externally.

The HPCC stores cluster-wide data on a set of resilient Lustre-based file systems, and backs up a limited amount of user data in home areas. We strongly encourage users to maintain an external copy of all data and not to use HPCC Lustre cluster-wide storage systems as the only copy of files critical to their research. In particular, the work, scratch and other special-purpose areas are not backed up and should not be used as the only long-term copies of important data.
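One common way to maintain such an external copy is to pull important results from the cluster to a machine under your own control, for example with rsync (a sketch only; the hostname and paths are placeholders):

  # Run on your own workstation: copy a results directory from the cluster
  # work area to local storage. The -a option preserves permissions and times.
  rsync -av eraiderid@<cluster-hostname>:/lustre/work/eraiderid/results/ ~/hpcc-backups/results/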

The HPCC is in the process of commissioning a new near-line backup storage system for users who do not have the capability to maintain their own backups, or who prefer to use our backup systems. Further information will be posted once this system has been commissioned.

In more detail, on the HPCC Lustre cluster-wide storage systems, the trade-off between size, speed, cost, and reliability is generally resolved in favor of large size at high speed with relatively low cost. Most of the cluster disk storage is composed of redundant arrays of inexpensive disks (RAID) to be resilient against single-disk failures. There are nearly 100 such arrays operating in the HPCC at this time. In most cases, at least three disks in any given array must fail for data to be lost.

Please also read the general conditions for access in the TTU HPCC Operational Policies page.

Data Policy for Hrothgar and Quanah

On Hrothgar and Quanah,

  • The $HOME area for every user is backed up and is subject to usage quotas.
  • The $WORK area for every user is not backed up, but it is also not purged; it is subject to usage quotas larger than those for $HOME.
  • Special researcher-owned storage areas may be purchased by individual researchers or research groups, with access permissions managed according to their own policies. Backup of these areas may optionally be available for purchase once the new backup system is commissioned.
  • The Scratch partition is subject to purging in order to keep the file usage below 80% full.
    • If the Scratch partition becomes 80% or more full, the $SCRATCH area for every user is purged of its oldest files.  See the Purge Policy below for details. 
    • On a monthly basis the $SCRATCH area for every user is purged according to the Purge Policy - see below for details.


The following table summarizes the storage locations, their quotas, and their backup and purge details.

Location, quota and backup summary

  Location                   Quota    Alias      Backed up  Purged
  /home/eraiderid            150 GB   $HOME      Yes        No
  /lustre/work/eraiderid     700 GB   $WORK      No         No
  /lustre/scratch/eraiderid  none     $SCRATCH   No         As needed to maintain <80% usage
                                                            (purges typically take place monthly)
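To check how much space you are currently using in each of these areas, standard tools such as du can be used (this walks the whole directory tree and may take some time on large areas):

  du -sh $HOME $WORK $SCRATCH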


Purge Policy

Individual files will be removed from /lustre/scratch/eraiderid ($SCRATCH) automatically if they have not been accessed in over 1 year.  To check a file's last access time (atime) you can use the following command: ls -ulh filename.
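To see which of your files would currently fall under this policy, you can list everything in your scratch area that has not been accessed in over a year, for example:

  # List files under $SCRATCH whose last access time is more than 365 days ago.
  find $SCRATCH -type f -atime +365 -ls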

Users may not run "touch" commands or similar commands for the purpose of altering their file's timestamps to circumvent this purge policy.  Users who violate this policy will have their accounts suspended by HPCC staff.  This suspension may result in the user's jobs being killed.

HPCC Staff will monitor the scratch space usage to ensure that it remains below 80% full.  On a monthly basis the $SCRATCH area for every user will be purged of all files that have not been accessed in over 1 year.  This monthly purge will be run regardless of the current level of scratch space usage.  In the event the Scratch partition goes above the 80% threshold, an immediate purge of every user's $SCRATCH space will be triggered and the oldest files for each user will be removed until we are well below the 80% threshold.

To help us avoid the need to shorten the retention period, please use the scratch space conscientiously.  The Scratch partition should be used only for files that have no need for long-term retention; ideally, the time files spend there should be measured in days. The retention period is variable because it depends on overall usage, so proactively removing files that are no longer needed extends the retention time for yourself and other users.
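For example, to identify candidates for cleanup you can look at the largest items in your scratch area, or at files that have gone unused for an extended period (review the output carefully before deleting anything):

  # Largest top-level items in your scratch area, biggest last.
  du -sh $SCRATCH/* | sort -h

  # Files not accessed in the last 90 days (adjust the threshold as needed).
  find $SCRATCH -type f -atime +90 -ls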

At this time, and with current usage patterns, the file retention period on the scratch area can be expected to be at least several days and most likely up to a few weeks, but in no case will files stay on disk more than a year after their last access. The scratch area should NOT be used for files that will be needed for longer periods.

We will keep the HPCC user community informed and give warnings if the expected retention period decreases significantly.

Additional Assistance

For additional assistance, please contact hpccsupport@ttu.edu.


High Performance Computing Center