This is a short introduction to using the BSWIFT II cluster, based on the High Performance Computing documentation from the Division of Information Technology. Additionally, OACS has put together a new-user on-boarding document that can be referenced by clicking here.

Requirements for authentication:

A TerpConnect/Glue account is required to access the Unix environment on campus. The username and password for this account are the same as your campus Directory ID (username) and password, but the account may need to be activated separately. You can find detailed instructions on how to activate your TerpConnect account in the campus knowledge base. If you are not a member of the University of Maryland (faculty, staff, or registered student), you can get a TerpConnect/Glue account if you are working with a faculty member who is willing to sponsor you as an affiliate.

You also need an account on the BSWIFT cluster, which provides you with a local home directory on BSWIFT II and the right to run jobs on BSWIFT II. Account requests are reviewed by a BSWIFT II administrator; for student accounts, the consent of the sponsoring faculty member is required.

Connecting to BSWIFT II:

***Prior to connecting to BSWIFT II, you must first connect to the University of Maryland VPN using the GlobalProtect client.***

BSWIFT II is a high performance computing cluster running Red Hat Linux. It is assumed that you are familiar with basic Unix commands. You can connect to BSWIFT II using the secure shell protocol (ssh) and transfer files to or from BSWIFT II using the secure file transfer protocol (sftp). On a Microsoft Windows computer you need an SSH client such as PuTTY and a file-transfer client such as WinSCP (enter bswift2-login.umd.edu in the ‘Host Name’ field and leave the ‘Port’ at 22). On a Mac or Linux (Unix) terminal you can simply type:

      ssh username@bswift2-login.umd.edu    or:    ssh -l username bswift2-login.umd.edu

‘username’ is your UMD Directory ID. The first time you connect, a warning message will appear showing the RSA fingerprint of the host ‘bswift2-login.umd.edu (128.8.204.140)’ and asking: “Are you sure you want to continue connecting (yes/no)?” Answer ‘yes’ and you will then be prompted for your campus password.
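
File transfers work the same way from a Mac or Linux terminal using sftp. A minimal example session is sketched below; ‘put’ uploads a file to your home directory on BSWIFT II and ‘get’ downloads one to your local machine (the file names are only placeholders):

      sftp username@bswift2-login.umd.edu
      sftp> put myjob.sh
      sftp> get results.txt
      sftp> exit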

Now you are logged in to your home directory, running your default command shell: bash (or sh) or tcsh (or csh). You can check which shell you are using by typing     ps -p $$.   This can be important to know when writing a job script.
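
For example, a job script that begins with an explicit interpreter line behaves the same way regardless of which login shell you use (a minimal sketch):

      #!/bin/bash
      # The line above forces bash, even if your login shell is tcsh/csh.
      echo "This script always runs under bash"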

Note: If you want to open a remote BSWIFT II display on your local machine (e.g. an editor like gedit or an image or pdf document), you have to enable X11-forwarding (option “-X”):

      ssh -X username@bswift2-login.umd.edu

If connecting via PuTTY, you need to enable X11 forwarding (in the left “Category” panel under Connection > SSH > X11), and you have to install and start an X Window server such as Xming to display the graphics.
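
A quick way to verify that X11 forwarding works is to start a graphical program after logging in, e.g. the gedit editor mentioned above (assuming it is installed on the login node):

      ssh -X username@bswift2-login.umd.edu
      gedit myjob.sh &      # the editor window should open on your local screen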

Home Directories:
• Path: /data/homes/$USER
• This directory is your personal space on the cluster where you can store your important scripts and files. Each user has their own subdirectory within /data/homes.

Software Research Directory:
• Path: /data/software-research/$USER
• This directory is designated for storing your research data. Each user has their own subdirectory within /data/software-research (see the example below).
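
For example, you might keep scripts in your home directory and stage larger data sets in your research directory (the project and file names below are only placeholders):

      cd /data/homes/$USER                                 # your home directory
      mkdir -p /data/software-research/$USER/my_project    # hypothetical project folder
      cp input_data.tar.gz /data/software-research/$USER/my_project/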

Singularity Images:
• The Singularity images you have been using on BSWIFT II can be found in the directory /data/software-research/software/simgs.
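
To see which images are available, you can list that directory. You can also open an interactive shell inside an image with the singularity binary used in the job script at the end of this document (a sketch, using the centos7.simg image shown there):

      ls /data/software-research/software/simgs
      /data/software-research/software/apptainer/bin/singularity shell \
          /data/software-research/software/simgs/centos7.simg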

Accessing installed software:

GLUE software packages (open-source and proprietary) that are not included in your default environment must be loaded using the ‘module’ command (note: the older ‘tab’ command is obsolete).

e.g.    module load matlab   or to load a specific version:   module load matlab/2016b

Note: if you want to load a different version, you have to unload the currently loaded version first.

e.g.     module unload matlab
            module load matlab/2016a

To list all available software packages and versions, type:     module avail
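
A typical session for switching MATLAB versions might look like this (the version numbers are examples; use ‘module avail’ to see what is actually installed):

      module avail                # list available packages and versions
      module load matlab/2016b    # load a specific version
      module list                 # show currently loaded modules
      module unload matlab        # unload before switching versions
      module load matlab/2016a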

Job submission:

1. Save your SLURM job script (e.g., myjob.sh).
2. Submit your job to the scheduler using the sbatch command: sbatch myjob.sh

Checking Job Status:
To check the status of your job, use the squeue command: squeue -u your_username
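
For example, to check on and (if necessary) cancel a job using the standard SLURM commands (the username and job ID are placeholders):

      squeue -u your_username     # list your pending and running jobs
      scancel 12345               # cancel a job by its job ID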

Creating a job script:

Below is a sample SLURM job script demonstrating how to use Singularity on the BSWIFT II cluster:
#!/bin/bash
#SBATCH --job-name=test_bswift2 # Job name
#SBATCH --ntasks=1 # Number of tasks (processes)
#SBATCH --cpus-per-task=12 # Number of CPU cores per task
#SBATCH --mem=16G # Memory per node
# The compute nodes bswift2-compute-1-[1-2] each have 80 CPUs and 450 GB of memory.

echo "hello from bswift2"
# Run a Singularity container
SINGULARITY=/data/software-research/software/apptainer/bin/singularity
IMAGE=/data/software-research/software/simgs/centos7.simg
$SINGULARITY run $IMAGE echo "hello from singularity"
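
To try this example, save the script as myjob.sh and submit it with sbatch as described above. By default, SLURM writes the job's output to a file named slurm-<jobid>.out in the directory you submitted from:

      sbatch myjob.sh             # prints the assigned job ID
      cat slurm-<jobid>.out       # should contain both "hello" messages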