Ganymede

Ganymede, named after Jupiter’s largest satellite, is a Cyberinfrastructure Research Computing (CIRC) cluster built on the condo model. While Ganymede does have free-to-use queues available to all UT Dallas researchers, a majority of the computational power is provided by nodes purchased for exclusive group access.

For information about purchasing nodes to add to Ganymede, email circ-assist@utdallas.edu.

Ganymede node setup

Ganymede is set up to run only one job per node. When a user submits a job, they will be given exclusive access to the entire node, regardless of how many cores or how much memory is requested. The following partitions are available by default:

Queue Name   Number of Nodes   Cores (CPU Architecture)   Memory   Time Limit ([d-]hh:mm:ss)
debug        2                 16 (Intel Sandy Bridge)    32 GB    02:00:00
normal       110               16 (Intel Sandy Bridge)    32 GB    4-00:00:00
128s         8                 16 (Intel Sandy Bridge)    128 GB   4-00:00:00
256i         16                20 (Intel Ivy Bridge)      256 GB   4-00:00:00
256h         1                 16 (Intel Haswell)         256 GB   4-00:00:00
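
To see the current state of these partitions (and any private partitions you have access to), you can query Slurm directly; the commands below are a quick sketch using standard Slurm tools:

# Show all partitions you can submit to, with node counts, states, and time limits
sinfo

# Show only a specific partition, e.g. the debug queue
sinfo -p debug

# Show your own queued and running jobs
squeue -u $USER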

The resources above account for only a fraction of Ganymede's compute capacity. Counting all nodes cluster-wide, Ganymede has nearly 7,300 CPU cores and 36 TB of RAM.

Ganymede storage

Ganymede has two user-writable storage directories, accessible from the head node (ganymede.utdallas.edu) and the compute nodes:

Directory                       Filesystem Type                    Network Speed            Filesystem Size   User Quota (Soft/Hard)   Backup Frequency
/home                           NFS                                1/10 gigabit/s[1]        1.8 TB            20/30 GB                 Nightly
~/oldscratch (via /petastore)   IBM General Parallel File System   40/56/100 gigabit/s[2]   1.2 PB            None                     None
~/scratch                       WekaFS                             40/56/100 gigabit/s      200 TB            9/10 TB[3]               None

/home

Because of its small quota, serial nature (recall that it is exported via NFS), and slower speed, /home is recommended for scripts, runfiles, and smaller output files. Please do not run jobs from /home, as the filesystem and network can be easily saturated, degrading performance for other users. MPI jobs read and write a lot of data, so even a single multi-node MPI job can slow the /home filesystem drastically.

Recall from the chart that /home is backed up nightly, which allows users' files to be restored in the event of deletion or corruption. If you need a backup restored, email circ-assist@utdallas.edu as soon as possible and a CIRC team member will do their best to help you recover your files.

The Petastore file system is in the process of being removed from service and will be decommissioned before 2024. Any data currently stored on the Petastore file system needs to be immediately copied off and saved. If you require assistance with this or would like to discuss further options, please contact circ-assist@utdallas.edu.

Petastore

Petastore, the Parallel File System for the CIRC cluster Ganymede, is a 1.2 PB IBM General Parallel File System (GPFS) storage appliance that provides dedicated storage as well as user scratch space.

Hardware

Petastore consists of three main components: the network fabric, the filers, and the disk arrays. The network fabric is a 100 gigabit/s Infiniband network with redundant links between two Mellanox EDR (100 gigabit/s) switches. The filers are two standard servers, each with redundant links to the switches and the disk arrays; these servers export the filesystem and also serve as its control systems. The three DataDirect Networks (DDN) disk arrays are filled with spinning disks along with a number of SSD cache/metadata drives.

On Ganymede

Petastore exports to Ganymede as /petastore, but is generally user-accessible as ~/scratch. On the cluster side:

Directory                    Filesystem Type                    Network Speed            Filesystem Size   User Quota (Soft/Hard)   Backup Frequency
~/scratch (via /petastore)   IBM General Parallel File System   40/56/100 gigabit/s[4]   1.2 PB            None                     None

Please note from the preceding chart that ~/scratch is NOT BACKED UP IN ANY FORM OR FASHION. Important data should NOT be stored on the Ganymede scratch filesystem; store it in your home directory or, if applicable, your MooseFS /work directory. While we generally ask users to voluntarily clean up ~/scratch, we reserve the right to purge scratch at any time. If you need assistance, please email circ-assist@utdallas.edu.

In addition to the free-to-everyone scratch space, some groups have opted to buy in to the Petastore system (generally to fund the purchase of disks and/or enclosures). If you believe your research group is included in this category, please email circ-assist@utdallas.edu for more information and assistance.

Usage

On Ganymede, ~/scratch is best used temporarily for high-I/O workloads and datasets larger than 20 GB. Please treat the scratch space as "copy to, run, clean up when done" space; a sketch of this workflow follows below. The filesystem is a shared resource among all Ganymede users and needs to be kept as clean as possible for performance and usability.
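
A minimal sketch of that workflow, assuming a hypothetical dataset directory ~/my_dataset and job script run_job.sh:

# Copy input data from /home (or external storage) to scratch
cp -r ~/my_dataset ~/scratch/my_dataset

# Run the job from scratch (see "Submitting jobs" below)
cd ~/scratch
sbatch ~/run_job.sh

# After the job finishes, copy the results you want to keep back to /home
cp -r ~/scratch/results ~/results

# Clean up scratch when done
rm -rf ~/scratch/my_dataset ~/scratch/results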

Note that the network speeds are 40/56/100 gigabit/s rather than the usual Ethernet rates (10/100/1000 megabit/s). This is because the network link is Infiniband rather than Ethernet; Infiniband is much faster and has much lower latency (0.5 us as opposed to 5-10 ms) than Ethernet. This allows near-instantaneous file access when running jobs and roughly 1.3 GB/s file read/write, about 10 times faster than the "standard" link to Ganymede's /home. Also, due to its parallel nature, the filesystem does not saturate as easily as /home, which allows more users to run jobs at the same time. MPI jobs are no issue for ~/scratch, so it is highly advised to use that space for running jobs, whether parallel or serial.

If you need persistent data storage in addition to the temporary scratch space, please contact circ-assist@utdallas.edu with the amount of storage you need, how long you need it, and what your workloads are and a CIRC team member will work with you to determine how best to proceed.

Backups

Recall from above, ~/scratch is NOT BACKED UP. Repeat, the entire Petastore filesystem is NOT BACKED UP. The logistics of backing up 1.2 PB of data make it extremely difficult and cost-ineffective to retain any backups. Any data on Petastore can be considered as volatile, so if it’s important please move it off when your job is complete. The hardware running the storage is robust, but nothing is invincible so please be cautious with data storage.

Cleanup

As Petastore is a resource shared by many, we ask that everyone clean up their data when their jobs have finished, when they no longer need it, or once it has been copied to an external filesystem. Oftentimes Petastore will reach 90%+ utilization, and we will send out a notification asking users to clean up what they can to keep the filesystem performing optimally. So far this voluntary approach has worked, so automated purge policies have not been implemented, but the filesystem does support them.
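
One quick way to find candidates for cleanup is to list your oldest files in scratch; the sketch below uses a 30-day cutoff purely as an example:

# List files in scratch not modified in the last 30 days, with sizes
find ~/scratch -type f -mtime +30 -exec ls -lh {} +

# After reviewing the list, remove anything you no longer need, e.g.:
# rm -rf ~/scratch/old_project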

Using Ganymede

Logging in

Ganymede is accessed via SSH. Once your account is activated, you can connect to Ganymede at ganymede.utdallas.edu. For example, in a typical terminal client run the command:

ssh <NetID>@ganymede.utdallas.edu

More information on setting up SSH access to CIRC machines on your computer can be found here.
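
For convenience, you can also add a host entry to your SSH configuration so you do not have to type the full hostname each time. A minimal sketch for OpenSSH (~/.ssh/config), with <NetID> as a placeholder:

Host ganymede
    HostName ganymede.utdallas.edu
    User <NetID>

With this entry in place, running ssh ganymede is equivalent to the command above.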

Submitting jobs

Information on submitting jobs to Ganymede can be found here. However, there is one Slurm variable that should not be included in your submission script:

#SBATCH --account=<NetID>

Attempting to run a Slurm submission script with this variable results in:

[user@ganymede ~]$ sbatch script.sh
sbatch: error: Batch job submission failed: Invalid account or account/partition combination specified

This error can usually be fixed by removing the --account flag in your batch script.

Users and accounts are separate concepts in Slurm. Your NetID is a user ID, not an account.
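
For reference, here is a minimal submission script sketch without the --account flag; the job name, module, program, and resource requests are illustrative and should be adjusted to your workload:

#!/bin/bash
#SBATCH --job-name=example       # illustrative job name
#SBATCH --partition=debug        # one of the partitions listed above
#SBATCH --nodes=1                # Ganymede allocates whole nodes per job
#SBATCH --time=01:00:00          # must fit within the partition's time limit

# Load any modules your program needs, then run it
module load <your_module>
./your_program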

TACC Launcher

Multiple instances of serial programs can be run on Ganymede via TACC Launcher. To use it, load the launcher module in your submission script:

# Loads the Launcher Module
module load launcher

The TACC webpage has more information on running Launcher jobs.
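
As a rough sketch (based on the examples in the TACC Launcher documentation; the job file name and its contents are illustrative), a Launcher run lists one serial command per line in a job file and points Launcher at it:

#!/bin/bash
#SBATCH --partition=normal
#SBATCH --nodes=1
#SBATCH --time=04:00:00

# Loads the Launcher Module
module load launcher

# jobfile contains one serial command per line, for example:
#   ./my_program input1.dat
#   ./my_program input2.dat
export LAUNCHER_WORKDIR=$PWD
export LAUNCHER_JOB_FILE=jobfile

# paramrun starts Launcher and works through the job file
$LAUNCHER_DIR/paramrun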

DO NOT use Launcher with MATLAB. MATLAB at UT Dallas is run by a central license server (not run by CIRC) and queuing up many simultaneous MATLAB jobs will crash the license server and break MATLAB for everyone at UT Dallas.

Using containers

Many "bundled" codes are distributed via Docker containers. Docker is not allowed on CIRC systems due to security issues. However, Docker containers can be used by running them with Apptainer/Singularity. For example, you can use the TensorFlow Docker container from DockerHub with the following commands:

# Loads the Singularity Module
module load singularity

# Pull the TensorFlow Docker container and transform
# it into a Singularity sandbox
singularity build --sandbox tensorflow_sandbox/ docker://tensorflow/tensorflow

# Run your Python script with the tensorflow container
singularity run -u --nv tensorflow_sandbox python <your_python_script.py>
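
To use the container inside a batch job, a minimal sketch might look like the following; the partition, time limit, sandbox path, and script name are illustrative:

#!/bin/bash
#SBATCH --partition=normal
#SBATCH --nodes=1
#SBATCH --time=02:00:00

# Loads the Singularity Module
module load singularity

# Run the Python script inside the previously built sandbox
# (add --nv after "exec" if the node has NVIDIA GPUs)
singularity exec tensorflow_sandbox/ python your_python_script.py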

Available software

You can view all available modules on Ganymede by running the command module spider. If you need new software installed or a different version than is provided, please contact circ-assist@utdallas.edu.
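
For example, to search for and then load a package (the package name and version below are placeholders; the modules actually available on Ganymede may differ):

# List every module and version matching a name
module spider <package>

# Load a specific version once you know it exists
module load <package>/<version>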


1. Most Ganymede nodes have a 1 gigabit/s connection to `/home`; however, some privately accessible nodes have a 10 gigabit/s connection. The file server itself exports the filesystem via a 10 gigabit/s link.
2. The `smallmem` queue utilizes a 40 gigabit/s link, whereas the `normal` and `debug` queues utilize a 56 gigabit/s link. Some privately accessible nodes utilize a 100 gigabit/s link. The storage appliance itself exports the filesystem at 100 gigabit/s.
3. The WekaFS currently has a 10 TB per user quota to prevent any one user from filling up the filesystem. This quota is subject to change as the filesystem grows with future expansion.
4. The `smallmem` queue utilizes a 40 gigabit/s link whereas the `normal` and `debug` queues utilize a 56 gigabit/s link. Some privately accessible nodes utilize a 100 gigabit/s link. The storage appliance itself exports the filesystem at 100 gigabit/s.