
Summary

The Rowan University Computational Cluster, or RUCC, is Rowan's research computing platform. Primarily funded through National Science Foundation and Robert Wood Johnson grants, this resource is used by the College of Science and Mathematics and the College of Engineering for computational research.


Please submit all requests (access/software): http://www.rowan.edu/go/hpc


Connecting to RUCC

You must use an SSH client to connect to RUCC. Windows users must download a client such as PuTTY, while Mac OS X users may use Terminal (already installed on the system). The address to connect to RUCC is rucc.rowan.edu. Below is a connection example from a Mac OS X system:
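
The session below is a sketch of what a typical connection looks like; username is a placeholder for your Rowan network username.

[username@localhost ~]$ ssh username@rucc.rowan.edu
username@rucc.rowan.edu's password:
[username@rucc-headnode ~]$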

File Storage

Small Data Sets

For small programs generating trivially sized datasets, a user may use their home directory on rucc.rowan.edu. If you are unsure how to access your home directory on RUCC, follow the steps below:

[grochowski@rucc-headnode ~]$ cd $HOME
[grochowski@rucc-headnode ~]$ pwd
/home/grochowski

Home directory sizes are restricted and will not be allocated additional space under any circumstances. Any programs or generated data sets larger than a trivial size should use one of the larger storage options described in the section below.
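
To see how much space your home directory is currently using, a command such as du should work (the size shown here is only illustrative):

[username@rucc-headnode ~]$ du -sh $HOME
1.2G    /home/username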

Large Data Sets

  • rucc-headnode:/home - user configuration files and small programs
  • rucc-headnode:/csm_data - location for research projects and large data sets for the College of Science and Mathematics
  • rucc-storage:/coe_data - location for research projects and large data sets for the College of Engineering
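
As a sketch, work belonging in one of the large storage areas is typically kept in a project subdirectory there; projectname below is a placeholder for whatever directory your research group uses:

[username@rucc-headnode ~]$ cd /csm_data/projectname
[username@rucc-headnode projectname]$ pwd
/csm_data/projectname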

Job Scheduler

This system is configured to use the SLURM job scheduler.

What is SLURM?

SLURM (Simple Linux Utility for Resource Management) is a workload manager that provides a framework for job queues, allocation of compute nodes, and starting and executing jobs.

Using SLURM

The cluster's compute nodes are available in SLURM partitions. Users submit jobs to request node resources in a partition. The SLURM partitions for general use are listed below. The default is the compute partition; if you do not specify a partition, your job will be submitted to the default partition. Currently, there are no restrictions on who can submit to which partition. An example batch script that specifies a partition is shown after the list below.

List of Partitions

  • compute  - consists of the base compute nodes purchased by COE and CSM with their grants
  • high_mem - consists of the high memory nodes (512GB RAM)
  • parallel - consists of the InfiniBand-enabled nodes
  • gpu - consists of the nodes that contain the NVIDIA Tesla K80s
  • coe-gpu - this is the gpu partition for the COE purchase
  • csm-gpu - this is the gpu partition for the CSM purchase
  • coe - nodes purchased by the COE grant
  • csm - nodes purchased by the CSM grant
  • ece -  nodes purchased by Robi Polikar
  • astrophys - nodes purchased by Dave Klassen
  • math - nodes purchased by the Math Department
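
The script below is a minimal sketch of a batch submission that targets a specific partition; the script name, job name, resource counts, and time limit are placeholders rather than recommended values.

#!/bin/bash
#SBATCH --job-name=example_job       # placeholder job name
#SBATCH --partition=gpu              # any partition from the list above
#SBATCH --nodes=1                    # number of nodes requested
#SBATCH --ntasks=1                   # number of tasks (processes)
#SBATCH --time=01:00:00              # walltime limit (HH:MM:SS)

srun hostname                        # replace with your own program

Submit it with sbatch (the job ID in the output is illustrative):

[username@rucc-headnode ~]$ sbatch example_job.sh
Submitted batch job 12345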

Overview of SLURM Commands

  • squeue - show status of jobs in queue
  • scancel - delete a job
  • sinfo - show status of compute nodes
  • srun - run a command on allocated compute nodes
  • sbatch - submit a job script
  • salloc - allocate compute nodes for interactive use
  • sview - graphic view of nodes, partitions, and jobs
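
A few everyday invocations of these commands, as a sketch (the job ID is a placeholder):

[username@rucc-headnode ~]$ sinfo                  # status of partitions and nodes
[username@rucc-headnode ~]$ squeue -u $USER        # show only your own jobs
[username@rucc-headnode ~]$ scancel 12345          # cancel job 12345
[username@rucc-headnode ~]$ salloc -p compute -N 1 # interactive allocation of one compute node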
