9. List of Pre-Configured Resources

The following resources are supported by the underlying layers of ExTASY.

Note

To configure your applications to run on these machines, add entries to your kernel definitions that specify the environment to be loaded for execution, the executable, its arguments, and so on.
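
For illustration, the per-resource part of such a kernel definition could look like the sketch below. This is a minimal sketch only: the kernel name, executable, and module names are hypothetical, and the exact key names and surrounding plugin machinery (class registration, argument handling) depend on the ExTASY/EnsembleMD Toolkit version in use.

    # Illustrative per-resource execution settings for a hypothetical kernel
    # "my.analysis". Key names follow the EnsembleMD-style kernel plugins;
    # check your installed version before relying on them.
    _KERNEL_INFO = {
        "name": "my.analysis",                    # hypothetical kernel name
        "machine_configs": {
            "*": {                                # fallback for any resource label
                "environment": {},
                "pre_exec":    [],
                "executable":  "run_analysis.sh", # illustrative executable
                "uses_mpi":    False,
            },
            "xsede.stampede": {                   # a resource label from the list below
                "environment": {},
                "pre_exec":    ["module load gromacs"],  # assumed module name
                "executable":  "mdrun",
                "uses_mpi":    True,
            },
        },
    }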

9.1. RESOURCE_FUTUREGRID

9.1.1. BRAVO

FutureGrid Hewlett-Packard ProLiant compute cluster (https://futuregrid.github.io/manual/hardware.html).

  • Resource label : futuregrid.bravo
  • Raw config : resource_futuregrid.json
  • Note : Works only up to 64 cores; beyond that, the Torque configuration is broken.
  • Default values for ComputePilotDescription attributes (see the usage sketch after this list):
  • queue         : bravo
  • sandbox       : $HOME
  • access_schema : ssh
  • Available schemas : ssh
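
The defaults above map directly onto ComputePilotDescription attributes of the underlying pilot layer (RADICAL-Pilot); anything not set explicitly falls back to the values listed. A minimal sketch, with all numeric values purely illustrative:

    import radical.pilot as rp

    pdesc = rp.ComputePilotDescription()
    pdesc.resource      = "futuregrid.bravo"   # resource label from this entry
    pdesc.runtime       = 30                   # minutes (illustrative)
    pdesc.cores         = 32                   # stays within the 64-core limit noted above
    pdesc.queue         = "bravo"              # same as the default; set it to override
    pdesc.access_schema = "ssh"                # must be one of the 'Available schemas'
    # 'sandbox' and 'project' can be overridden in the same way where needed.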

9.1.2. INDIA

The FutureGrid ‘india’ cluster (https://futuregrid.github.io/manual/hardware.html).

  • Resource label : futuregrid.india
  • Raw config : resource_futuregrid.json
  • Default values for ComputePilotDescription attributes:
  • queue         : batch
  • sandbox       : $HOME
  • access_schema : ssh
  • Available schemas : ssh

9.1.3. ECHO

FutureGrid Supermicro ScaleMP cluster (https://futuregrid.github.io/manual/hardware.html).

  • Resource label : futuregrid.echo
  • Raw config : resource_futuregrid.json
  • Note : Untested.
  • Default values for ComputePilotDescription attributes:
  • queue         : echo
  • sandbox       : $HOME
  • access_schema : ssh
  • Available schemas : ssh

9.1.4. XRAY

FutureGrid Cray XT5m cluster (https://futuregrid.github.io/manual/hardware.html).

  • Resource label : futuregrid.xray
  • Raw config : resource_futuregrid.json
  • Note : One needs to add ‘module load torque’ to ~/.profile on xray.
  • Default values for ComputePilotDescription attributes:
  • queue         : batch
  • sandbox       : /scratch/$USER
  • access_schema : ssh
  • Available schemas : ssh

9.1.5. XRAY_CCM

FutureGrid Cray XT5m cluster in Cluster Compatibility Mode (CCM) (https://futuregrid.github.io/manual/hardware.html).

  • Resource label : futuregrid.xray_ccm
  • Raw config : resource_futuregrid.json
  • Note : One needs to add ‘module load torque’ to ~/.profile on xray.
  • Default values for ComputePilotDescription attributes:
  • queue         : ccm_queue
  • sandbox       : /scratch/$USER
  • access_schema : ssh
  • Available schemas : ssh

9.1.6. DELTA

FutureGrid Supermicro GPU cluster (https://futuregrid.github.io/manual/hardware.html).

  • Resource label : futuregrid.delta
  • Raw config : resource_futuregrid.json
  • Note : Untested.
  • Default values for ComputePilotDescription attributes:
  • queue         : delta
  • sandbox       : $HOME
  • access_schema : ssh
  • Available schemas : ssh

9.2. RESOURCE_ORNL

9.2.1. TITAN

The Cray XK7 supercomputer located at the Oak Ridge Leadership Computing Facility (OLCF) (https://www.olcf.ornl.gov/titan/).

  • Resource label : ornl.titan
  • Raw config : resource_ornl.json
  • Note : Requires the use of an RSA SecurID on every connection.
  • Default values for ComputePilotDescription attributes:
  • queue         : batch
  • sandbox       : $MEMBERWORK/`groups | cut -d' ' -f2`
  • access_schema : ssh
  • Available schemas : ssh, local, go

9.3. RESOURCE_IU

9.3.1. BIGRED2

Indiana University’s Cray XE6/XK7 cluster (https://kb.iu.edu/d/bcqt).

  • Resource label : iu.bigred2
  • Raw config : resource_iu.json
  • Default values for ComputePilotDescription attributes:
  • queue         : None
  • sandbox       : $HOME
  • access_schema : ssh
  • Available schemas : ssh

9.3.2. BIGRED2_CCM

Indiana University’s Cray XE6/XK7 cluster in Cluster Compatibility Mode (CCM) (https://kb.iu.edu/d/bcqt).

  • Resource label : iu.bigred2_ccm
  • Raw config : resource_iu.json
  • Default values for ComputePilotDescription attributes:
  • queue         : None
  • sandbox       : /N/dc2/scratch/$USER
  • access_schema : ssh
  • Available schemas : ssh

9.4. RESOURCE_RADICAL

9.4.1. TUTORIAL

Our private tutorial VM on EC2.

  • Resource label : radical.tutorial
  • Raw config : resource_radical.json
  • Default values for ComputePilotDescription attributes:
  • queue         : batch
  • sandbox       : $HOME
  • access_schema : ssh
  • Available schemas : ssh, local

9.5. RESOURCE_RICE

9.5.1. DAVINCI

The DAVinCI Linux cluster at Rice University (https://docs.rice.edu/confluence/display/ITDIY/Getting+Started+on+DAVinCI).

  • Resource label : rice.davinci
  • Raw config : resource_rice.json
  • Note : DAVinCI compute nodes have 12 or 16 processor cores each.
  • Default values for ComputePilotDescription attributes:
  • queue         : parallel
  • sandbox       : $SHARED_SCRATCH/$USER
  • access_schema : ssh
  • Available schemas : ssh

9.5.2. BIOU

The Blue BioU Linux cluster at Rice University (https://docs.rice.edu/confluence/display/ITDIY/Getting+Started+on+Blue+BioU).

  • Resource label : rice.biou
  • Raw config : resource_rice.json
  • Note : Blue BioU compute nodes have 32 processor cores each.
  • Default values for ComputePilotDescription attributes:
  • queue         : serial
  • sandbox       : $SHARED_SCRATCH/$USER
  • access_schema : ssh
  • Available schemas : ssh

9.6. RESOURCE_XSEDE

9.6.1. LONESTAR

The XSEDE ‘Lonestar’ cluster at TACC (https://www.tacc.utexas.edu/resources/hpc/lonestar).

  • Resource label : xsede.lonestar
  • Raw config : resource_xsede.json
  • Note : Always set the project attribute in the ComputePilotDescription, or the pilot will fail (see the sketch after this list).
  • Default values for ComputePilotDescription attributes:
  • queue         : normal
  • sandbox       : $HOME
  • access_schema : ssh
  • Available schemas : ssh, gsissh
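
As the note says, pilots on Lonestar need an allocation. A minimal sketch; the project ID is a placeholder for your own XSEDE allocation and the other values are illustrative:

    import radical.pilot as rp

    pdesc = rp.ComputePilotDescription()
    pdesc.resource = "xsede.lonestar"
    pdesc.project  = "TG-XXXXXXX"   # placeholder: use your own XSEDE allocation
    pdesc.queue    = "normal"       # default queue from the list above
    pdesc.runtime  = 60             # minutes (illustrative)
    pdesc.cores    = 24             # illustrative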

9.6.2. STAMPEDE_YARN

The XSEDE ‘Stampede’ cluster at TACC (https://www.tacc.utexas.edu/stampede/).

  • Resource label : xsede.stampede_yarn
  • Raw config : resource_xsede.json
  • Note : Always set the project attribute in the ComputePilotDescription or the pilot will fail.
  • Default values for ComputePilotDescription attributes:
  • queue         : normal
  • sandbox       : $WORK
  • access_schema : ssh
  • Available schemas : ssh, gsissh, go

9.6.3. STAMPEDE

The XSEDE ‘Stampede’ cluster at TACC (https://www.tacc.utexas.edu/stampede/).

  • Resource label : xsede.stampede
  • Raw config : resource_xsede.json
  • Note : Always set the project attribute in the ComputePilotDescription or the pilot will fail.
  • Default values for ComputePilotDescription attributes:
  • queue         : normal
  • sandbox       : $WORK
  • access_schema : ssh
  • Available schemas : ssh, gsissh, go

9.6.4. BLACKLIGHT

The XSEDE ‘Blacklight’ cluster at PSC (https://www.psc.edu/index.php/computing-resources/blacklight).

  • Resource label : xsede.blacklight
  • Raw config : resource_xsede.json
  • Note : Always set the project attribute in the ComputePilotDescription or the pilot will fail.
  • Default values for ComputePilotDescription attributes:
  • queue         : batch
  • sandbox       : $HOME
  • access_schema : ssh
  • Available schemas : ssh, gsissh

9.6.5. COMET

The Comet HPC resource at SDSC, ‘HPC for the 99%’ (http://www.sdsc.edu/services/hpc/hpc_systems.html#comet).

  • Resource label : xsede.comet
  • Raw config : resource_xsede.json
  • Note : Always set the project attribute in the ComputePilotDescription or the pilot will fail.
  • Default values for ComputePilotDescription attributes:
  • queue         : compute
  • sandbox       : $HOME
  • access_schema : ssh
  • Available schemas : ssh, gsissh

9.6.6. SUPERMIC

SuperMIC (pronounced ‘Super Mick’) is Louisiana State University’s (LSU) newest supercomputer, funded by the National Science Foundation’s (NSF) Major Research Instrumentation (MRI) award to the Center for Computation & Technology (https://portal.xsede.org/lsu-supermic).

  • Resource label : xsede.supermic
  • Raw config : resource_xsede.json
  • Note : Partially allocated through XSEDE. Primary access is through GSISSH; SSH key authentication is also allowed.
  • Default values for ComputePilotDescription attributes:
  • queue         : workq
  • sandbox       : /work/$USER
  • access_schema : ssh
  • Available schemas : ssh, gsissh

9.6.7. COMET_ORTE

The Comet HPC resource at SDSC, ‘HPC for the 99%’ (http://www.sdsc.edu/services/hpc/hpc_systems.html#comet).

  • Resource label : xsede.comet_orte
  • Raw config : resource_xsede.json
  • Note : Always set the project attribute in the ComputePilotDescription or the pilot will fail.
  • Default values for ComputePilotDescription attributes:
  • queue         : compute
  • sandbox       : $HOME
  • access_schema : ssh
  • Available schemas : ssh, gsissh

9.6.8. TRESTLES

The XSEDE ‘Trestles’ cluster at SDSC (http://www.sdsc.edu/us/resources/trestles/).

  • Resource label : xsede.trestles
  • Raw config : resource_xsede.json
  • Note : Always set the project attribute in the ComputePilotDescription or the pilot will fail.
  • Default values for ComputePilotDescription attributes:
  • queue         : normal
  • sandbox       : $HOME
  • access_schema : ssh
  • Available schemas : ssh, gsissh

9.6.9. GORDON

The XSEDE ‘Gordon’ cluster at SDSC (http://www.sdsc.edu/us/resources/gordon/).

  • Resource label : xsede.gordon
  • Raw config : resource_xsede.json
  • Note : Always set the project attribute in the ComputePilotDescription or the pilot will fail.
  • Default values for ComputePilotDescription attributes:
  • queue         : normal
  • sandbox       : $HOME
  • access_schema : ssh
  • Available schemas : ssh, gsissh

9.7. RESOURCE_LOCAL

9.7.1. LOCALHOST_YARN

Your local machine.

  • Resource label : local.localhost_yarn
  • Raw config : resource_local.json
  • Note : To use the ssh schema, make sure that ssh access to localhost is enabled.
  • Default values for ComputePilotDescription attributes:
  • queue         : None
  • sandbox       : $HOME
  • access_schema : local
  • Available schemas : local, ssh

9.7.2. LOCALHOST_ANACONDA

Your local machine.

  • Resource label : local.localhost_anaconda
  • Raw config : resource_local.json
  • Note : To use the ssh schema, make sure that ssh access to localhost is enabled.
  • Default values for ComputePilotDescription attributes:
  • queue         : None
  • sandbox       : $HOME
  • access_schema : local
  • Available schemas : local, ssh

9.7.3. LOCALHOST

Your local machine.

  • Resource label : local.localhost
  • Raw config : resource_local.json
  • Note : To use the ssh schema, make sure that ssh access to localhost is enabled. A minimal usage sketch follows this list.
  • Default values for ComputePilotDescription attributes:
  • queue         : None
  • sandbox       : $HOME
  • access_schema : local
  • Available schemas : local, ssh
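
A minimal end-to-end sketch against local.localhost, assuming RADICAL-Pilot is the underlying pilot system and a MongoDB URL is available via RADICAL_PILOT_DBURL; constructor arguments vary slightly between RADICAL-Pilot releases, and the echo payload is purely illustrative:

    import radical.pilot as rp

    session = rp.Session()                     # reads RADICAL_PILOT_DBURL from the environment
    try:
        pmgr = rp.PilotManager(session=session)

        pdesc = rp.ComputePilotDescription()
        pdesc.resource = "local.localhost"     # resource label from this entry
        pdesc.runtime  = 10                    # minutes
        pdesc.cores    = 2
        pilot = pmgr.submit_pilots(pdesc)

        umgr = rp.UnitManager(session=session)
        umgr.add_pilots(pilot)

        cud = rp.ComputeUnitDescription()
        cud.executable = "/bin/echo"           # illustrative payload
        cud.arguments  = ["hello from the pilot"]
        umgr.submit_units([cud])
        umgr.wait_units()
    finally:
        session.close()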

9.8. RESOURCE_NCAR

9.8.1. YELLOWSTONE

The Yellowstone IBM iDataPlex cluster at UCAR (https://www2.cisl.ucar.edu/resources/yellowstone).

  • Resource label : ncar.yellowstone
  • Raw config : resource_ncar.json
  • Note : Currently only one concurrent CU per node is supported.
  • Default values for ComputePilotDescription attributes:
  • queue         : premium
  • sandbox       : $HOME
  • access_schema : ssh
  • Available schemas : ssh

9.9. RESOURCE_STFC

9.9.1. JOULE

The STFC Joule IBM BG/Q system (http://community.hartree.stfc.ac.uk/wiki/site/admin/home.html).

  • Resource label : stfc.joule
  • Raw config : resource_stfc.json
  • Note : This currently needs a centrally administered outbound ssh tunnel.
  • Default values for ComputePilotDescription attributes:
  • queue         : prod
  • sandbox       : $HOME
  • access_schema : ssh
  • Available schemas : ssh

9.10. RESOURCE_EPSRC

9.10.1. ARCHER

The EPSRC Archer Cray XC30 system (https://www.archer.ac.uk/).

  • Resource label : epsrc.archer
  • Raw config : resource_epsrc.json
  • Note : Always set the project attribute in the ComputePilotDescription or the pilot will fail.
  • Default values for ComputePilotDescription attributes:
  • queue         : standard
  • sandbox       : /work/`id -gn`/`id -gn`/$USER
  • access_schema : ssh
  • Available schemas : ssh

9.10.2. ARCHER_ORTE

The EPSRC Archer Cray XC30 system (https://www.archer.ac.uk/).

  • Resource label : epsrc.archer_orte
  • Raw config : resource_epsrc.json
  • Note : Always set the project attribute in the ComputePilotDescription or the pilot will fail.
  • Default values for ComputePilotDescription attributes:
  • queue         : standard
  • sandbox       : /work/`id -gn`/`id -gn`/$USER
  • access_schema : ssh
  • Available schemas : ssh

9.11. RESOURCE_DAS4

9.11.1. FS2

The Distributed ASCI Supercomputer 4 (http://www.cs.vu.nl/das4/).

  • Resource label : das4.fs2
  • Raw config : resource_das4.json
  • Default values for ComputePilotDescription attributes:
  • queue         : all.q
  • sandbox       : $HOME
  • access_schema : ssh
  • Available schemas : ssh

9.12. RESOURCE_NCSA

9.12.1. BW_CCM

The NCSA Blue Waters Cray XE6/XK7 system in Cluster Compatibility Mode (CCM) (https://bluewaters.ncsa.illinois.edu/).

  • Resource label : ncsa.bw_ccm
  • Raw config : resource_ncsa.json
  • Note : Running ‘touch .hushlogin’ on the login node will reduce the likelihood of prompt detection issues.
  • Default values for ComputePilotDescription attributes:
  • queue         : normal
  • sandbox       : /scratch/sciteam/$USER
  • access_schema : gsissh
  • Available schemas : gsissh

9.12.2. BW

The NCSA Blue Waters Cray XE6/XK7 system (https://bluewaters.ncsa.illinois.edu/).

  • Resource label : ncsa.bw
  • Raw config : resource_ncsa.json
  • Note : Running ‘touch .hushlogin’ on the login node will reduce the likelihood of prompt detection issues.
  • Default values for ComputePilotDescription attributes:
  • queue         : normal
  • sandbox       : /scratch/sciteam/$USER
  • access_schema : gsissh
  • Available schemas : gsissh

9.12.3. BW_APRUN

The NCSA Blue Waters Cray XE6/XK7 system (https://bluewaters.ncsa.illinois.edu/).

  • Resource label : ncsa.bw_aprun
  • Raw config : resource_ncsa.json
  • Note : Running ‘touch .hushlogin’ on the login node will reduce the likelihood of prompt detection issues.
  • Default values for ComputePilotDescription attributes:
  • queue         : normal
  • sandbox       : /scratch/sciteam/$USER
  • access_schema : gsissh
  • Available schemas : gsissh

9.13. RESOURCE_NERSC

9.13.1. EDISON_CCM

The NERSC Edison Cray XC30 in Cluster Compatibility Mode (https://www.nersc.gov/users/computational-systems/edison/).

  • Resource label : nersc.edison_ccm
  • Raw config : resource_nersc.json
  • Note : For CCM you need to use special ccm_ queues.
  • Default values for ComputePilotDescription attributes:
  • queue         : ccm_queue
  • sandbox       : $SCRATCH
  • access_schema : ssh
  • Available schemas : ssh

9.13.2. EDISON

The NERSC Edison Cray XC30 (https://www.nersc.gov/users/computational-systems/edison/).

  • Resource label : nersc.edison
  • Raw config : resource_nersc.json
  • Default values for ComputePilotDescription attributes:
  • queue         : regular
  • sandbox       : $SCRATCH
  • access_schema : ssh
  • Available schemas : ssh, go

9.13.3. HOPPER

The NERSC Hopper Cray XE6 (https://www.nersc.gov/users/computational-systems/hopper/).

  • Resource label : nersc.hopper
  • Raw config : resource_nersc.json
  • Default values for ComputePilotDescription attributes:
  • queue         : regular
  • sandbox       : $SCRATCH
  • access_schema : ssh
  • Available schemas : ssh, go

9.13.4. HOPPER_APRUN

The NERSC Hopper Cray XE6 (https://www.nersc.gov/users/computational-systems/hopper/).

  • Resource label : nersc.hopper_aprun
  • Raw config : resource_nersc.json
  • Note : Only one CU per node is supported in APRUN mode.
  • Default values for ComputePilotDescription attributes:
  • queue         : regular
  • sandbox       : $SCRATCH
  • access_schema : ssh
  • Available schemas : ssh

9.13.5. HOPPER_CCM

The NERSC Hopper Cray XE6 in Cluster Compatibility Mode (https://www.nersc.gov/users/computational-systems/hopper/).

  • Resource label : nersc.hopper_ccm
  • Raw config : resource_nersc.json
  • Note : For CCM you need to use special ccm_ queues.
  • Default values for ComputePilotDescription attributes:
  • queue         : ccm_queue
  • sandbox       : $SCRATCH
  • access_schema : ssh
  • Available schemas : ssh

9.13.6. EDISON_APRUN

The NERSC Edison Cray XC30 (https://www.nersc.gov/users/computational-systems/edison/).

  • Resource label : nersc.edison_aprun
  • Raw config : resource_nersc.json
  • Note : Only one CU per node is supported in APRUN mode.
  • Default values for ComputePilotDescription attributes:
  • queue         : regular
  • sandbox       : $SCRATCH
  • access_schema : ssh
  • Available schemas : ssh, go

9.14. RESOURCE_LRZ

9.14.1. SUPERMUC

The SuperMUC petascale HPC cluster at LRZ, Munich (http://www.lrz.de/services/compute/supermuc/).

  • Resource label : lrz.supermuc
  • Raw config : resource_lrz.json
  • Note : Default authentication to SuperMUC uses X.509 and the machine is firewalled; make sure you can gsissh into it from your registered IP address. Because of outgoing traffic restrictions, your MongoDB needs to run on a port in the range 20000 to 25000 (see the sketch below).
  • Default values for ComputePilotDescription attributes:
  • queue         : test
  • sandbox       : $HOME
  • access_schema : gsissh
  • Available schemas : gsissh, ssh
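
Because of that port restriction, the MongoDB that the pilot layer reports back to must listen on a port between 20000 and 25000. A minimal sketch; the hostname, port, and database name are placeholders, and RADICAL_PILOT_DBURL is the environment variable read by RADICAL-Pilot:

    import os
    import radical.pilot as rp

    # Placeholder host/database; the port must lie in SuperMUC's allowed
    # outbound range of 20000-25000.
    os.environ["RADICAL_PILOT_DBURL"] = "mongodb://my.mongodb.host:24000/extasy_db"

    session = rp.Session()                     # picks up the DB URL set above

    pdesc = rp.ComputePilotDescription()
    pdesc.resource      = "lrz.supermuc"
    pdesc.access_schema = "gsissh"             # default access schema for SuperMUC
    pdesc.queue         = "test"               # default queue from the list above
    pdesc.runtime       = 15                   # minutes (illustrative)
    pdesc.cores         = 16                   # illustrative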