BioAnalyze HPC Helper Libraries#

BioAnalyze HPC comes with a set of libraries and defaults to help you get started quickly. One of these is environment modules, a way of separating out different software environments. There are also several CLI components.
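For example, on a cluster that uses Environment Modules (or Lmod), the standard module commands below list, load, and unload software. Module names vary by cluster; pcluster-helpers is the one used throughout this guide.

module avail                      # list every module the cluster provides
module load pcluster-helpers      # add the pcluster-helpers CLI to your PATH
module list                       # show which modules are currently loaded
module unload pcluster-helpers    # remove it again when you are done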

[1]:
module load pcluster-helpers
[2]:
pcluster-helper --help
                                                                                
 Usage: pcluster-helper [OPTIONS] COMMAND [ARGS]...                             
                                                                                
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --install-completion        [bash|zsh|fish|powershe  Install completion for  │
│                             ll|pwsh]                 the specified shell.    │
│                                                      [default: None]         │
│ --show-completion           [bash|zsh|fish|powershe  Show completion for the │
│                             ll|pwsh]                 specified shell, to     │
│                                                      copy it or customize    │
│                                                      the installation.       │
│                                                      [default: None]         │
│ --help                                               Show this message and   │
│                                                      exit.                   │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Commands ───────────────────────────────────────────────────────────────────╮
│ gen-nxf-slurm-config  Generate a slurm.config for nextflow that is           │
│                       compatible with your cluster.                          │
│ sinfo                 A more helpful sinfo                                   │
╰──────────────────────────────────────────────────────────────────────────────╯
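Each subcommand also accepts --help. For example, to see the options for generating a Nextflow SLURM configuration (the exact flags depend on your pcluster-helpers version):

pcluster-helper gen-nxf-slurm-config --help

The generated slurm.config can then be passed to Nextflow with nextflow run <pipeline> -c slurm.config.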

SLURM - SInfo Helper#

When you submit jobs to an HPC cluster, it is important to know how to map what your job needs to what the cluster provides.

Your cluster may look different depending on your queue configuration.

Please make sure to base your job submissions on your own sinfo table, not the one shown here.

[3]:
pcluster-helper sinfo
Printing sinfo table
                                  SLURM SInfo
 Queue          Constraint      TotalMem(GB)   SchedulableMem(GB)   EC2
 compute-high   c6a24xlarge     192            182                  c6a.24xlarge
 compute-high   c524xlarge      192            182                  c5.24xlarge
 compute-high   c6a32xlarge     256            243                  c6a.32xlarge
 compute-high   c6a48xlarge     384            364                  c6a.48xlarge
 compute-low    c6a4xlarge      32             30                   c6a.4xlarge
 compute-low    c6a8xlarge      64             60                   c6a.8xlarge
 compute-low    c6a12xlarge     96             91                   c6a.12xlarge
 compute-low    c6a16xlarge     128            121                  c6a.16xlarge
 compute-low    c518xlarge      144            136                  c5.18xlarge
 dev            m5axlarge       16             15                   m5a.xlarge
 dev            m5a2xlarge      32             30                   m5a.2xlarge
 dev            m5a4xlarge      64             60                   m5a.4xlarge
 dev            m5a8xlarge      128            121                  m5a.8xlarge
 dev            m5a12xlarge     192            182                  m5a.12xlarge
 dev-high       m6a8xlarge      128            121                  m6a.8xlarge
 dev-high       m6a12xlarge     192            182                  m6a.12xlarge
 dev-high       m6a16xlarge     256            243                  m6a.16xlarge
 dev-high       m6a24xlarge     384            364                  m6a.24xlarge
 dev-high       m6a32xlarge     512            486                  m6a.32xlarge
 dev-low        t3amedium       4              3                    t3a.medium
 dev-low        t3alarge        8              7                    t3a.large
 dev-low        t3axlarge       16             15                   t3a.xlarge
 dev-low        t3a2xlarge      32             30                   t3a.2xlarge
 dev-low        m6a2xlarge      32             30                   m6a.2xlarge
 gpu-1          g5xlarge        16             15                   g5.xlarge
 gpu-1          g52xlarge       32             30                   g5.2xlarge
 gpu-1          g54xlarge       64             60                   g5.4xlarge
 gpu-1          g4dn8xlarge     128            121                  g4dn.8xlarge
 gpu-1          g4dn16xlarge    256            243                  g4dn.16xlarge
 gpu-2          g58xlarge       128            121                  g5.8xlarge
 gpu-2          g512xlarge      192            182                  g5.12xlarge
 gpu-2          g516xlarge      256            243                  g5.16xlarge
 gpu-2          g524xlarge      384            364                  g5.24xlarge
 gpu-2          g548xlarge      768            729                  g5.48xlarge
 memory-high    r5b24xlarge     768            729                  r5b.24xlarge
 memory-high    r5dn24xlarge    768            729                  r5dn.24xlarge
 memory-high    r5ad24xlarge    768            729                  r5ad.24xlarge
 memory-high    r6i32xlarge     1024           972                  r6i.32xlarge
 memory-high    r6id32xlarge    1024           972                  r6id.32xlarge
 memory-low     r6i4xlarge      128            121                  r6i.4xlarge
 memory-low     r6i8xlarge      256            243                  r6i.8xlarge
 memory-low     r6i12xlarge     384            364                  r6i.12xlarge
 memory-low     r6i16xlarge     512            486                  r6i.16xlarge
 memory-low     r6i24xlarge     768            729                  r6i.24xlarge
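If you want to test a resource request interactively before writing a batch script, srun can start a shell on a compute node. A minimal sketch using the dev queue and m5axlarge constraint from the table above (swap in values from your own cluster):

srun --partition=dev --constraint=m5axlarge --pty /bin/bash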

Submit a sample job#

Copy the text below into a file called my-job.sh.

#!/bin/bash

# my-job.sh

#SBATCH --job-name=my-job-name        # Job name
#SBATCH --partition=MY_PARTITION      # Partition (queue) from the table above
#SBATCH --cpus-per-task=CPUS          # Number of CPUs for your job
#SBATCH --constraint=MY_CONSTRAINT    # Constraint (EC2 instance type) from the table above
#SBATCH --time=01:00:00               # Time limit hrs:min:sec
#SBATCH --output=my_job_%j.log        # Standard output and error log (%j is the job ID)

pwd; hostname; date

sleep 10

date

Once you have your script, make it executable and submit it.

chmod u+x ./my-job.sh
sbatch ./my-job.sh
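sbatch prints the ID of the submitted job. If you want to script against that ID, the --parsable flag makes sbatch print just the number, which you can feed to other SLURM commands:

jobid=$(sbatch --parsable ./my-job.sh)
echo "Submitted job ${jobid}"
squeue -j "${jobid}"    # check on the job you just submitted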

Troubleshooting#

Submitting jobs to SLURM on AWS is slightly different from submitting jobs to an in-house data center. You do not specify memory; instead, you specify a partition and a constraint on the EC2 instance type.
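In practice this means choosing a partition and a constraint from the sinfo table instead of passing --mem. For example, a job that needs roughly 120 GB of schedulable memory could target the r6i4xlarge constraint on the memory-low queue (values taken from the table above):

sbatch --partition=memory-low --constraint=r6i4xlarge ./my-job.sh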

Additional Information#

SLURM provides much more information than the pcluster-helper utility exposes.

Cluster Info

sinfo

Job Info

squeue

Job Info for a Single User

squeue -u $USER

Detailed Job Information

scontrol show job <job_id>
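Accounting Data for a Finished Job (requires SLURM accounting to be enabled on your cluster)

sacct -j <job_id> --format=JobID,JobName,Partition,State,Elapsed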

These are examples from this particular cluster. Your output will differ slightly depending on your cluster configuration.

[4]:
squeue -u $USER
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
               490       dev jupyterh  jillian  R    5:25:26      1 dev-dy-m5a2xlarge-1
[5]:
# scontrol show job some_job_id
[6]:
sinfo
PARTITION    AVAIL  TIMELIMIT  NODES  STATE NODELIST
dev*            up   infinite   4999  idle~ dev-dy-m5a2xlarge-[2-1000],dev-dy-m5a4xlarge-[1-1000],dev-dy-m5a8xlarge-[1-1000],dev-dy-m5a12xlarge-[1-1000],dev-dy-m5axlarge-[1-1000]
dev*            up   infinite      1  alloc dev-dy-m5a2xlarge-1
dev-low         up   infinite   5000  idle~ dev-low-dy-m6a2xlarge-[1-1000],dev-low-dy-t3a2xlarge-[1-1000],dev-low-dy-t3alarge-[1-1000],dev-low-dy-t3amedium-[1-1000],dev-low-dy-t3axlarge-[1-1000]
dev-high        up   infinite   5000  idle~ dev-high-dy-m6a8xlarge-[1-1000],dev-high-dy-m6a12xlarge-[1-1000],dev-high-dy-m6a16xlarge-[1-1000],dev-high-dy-m6a24xlarge-[1-1000],dev-high-dy-m6a32xlarge-[1-1000]
memory-low      up   infinite   5000  idle~ memory-low-dy-r6i4xlarge-[1-1000],memory-low-dy-r6i8xlarge-[1-1000],memory-low-dy-r6i12xlarge-[1-1000],memory-low-dy-r6i16xlarge-[1-1000],memory-low-dy-r6i24xlarge-[1-1000]
memory-high     up   infinite   5000  idle~ memory-high-dy-r5ad24xlarge-[1-1000],memory-high-dy-r5b24xlarge-[1-1000],memory-high-dy-r5dn24xlarge-[1-1000],memory-high-dy-r6i32xlarge-[1-1000],memory-high-dy-r6id32xlarge-[1-1000]
compute-low     up   infinite   5000  idle~ compute-low-dy-c6a4xlarge-[1-1000],compute-low-dy-c6a8xlarge-[1-1000],compute-low-dy-c6a12xlarge-[1-1000],compute-low-dy-c6a16xlarge-[1-1000],compute-low-dy-c518xlarge-[1-1000]
compute-high    up   infinite   4000  idle~ compute-high-dy-c6a24xlarge-[1-1000],compute-high-dy-c6a32xlarge-[1-1000],compute-high-dy-c6a48xlarge-[1-1000],compute-high-dy-c524xlarge-[1-1000]
gpu-1           up   infinite   5000  idle~ gpu-1-dy-g4dn8xlarge-[1-1000],gpu-1-dy-g4dn16xlarge-[1-1000],gpu-1-dy-g5xlarge-[1-1000],gpu-1-dy-g52xlarge-[1-1000],gpu-1-dy-g54xlarge-[1-1000]
gpu-2           up   infinite   5000  idle~ gpu-2-dy-g58xlarge-[1-1000],gpu-2-dy-g512xlarge-[1-1000],gpu-2-dy-g516xlarge-[1-1000],gpu-2-dy-g524xlarge-[1-1000],gpu-2-dy-g548xlarge-[1-1000]