A job can be thought of as a directory containing a script, together with any other files the job needs. When uploaded to the cluster, the job's files are placed on the shared file system at /mnt/shared/jobs/<job-id>/, where <job-id> is the job's unique identifier.
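As a concrete sketch, a job directory might look like the following. All file names here are hypothetical examples; the script can be named and organized however you like.

```
/mnt/shared/jobs/<job-id>/
├── run.sh          # the job script
├── params.conf     # input files the script reads
└── data.csv
```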

When a compute node runs a job, it executes the job's script inside the job's directory on the shared file system. The script should run the job's main program in the foreground (not backgrounded), so the script does not return until the work is done.
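A minimal job script might look like this. The commands are stand-ins for a real workload, and the output file name is illustrative:

```shell
#!/bin/sh
# Minimal job script sketch. A real job would invoke its own
# program here; echo stands in for that program. The command runs
# in the foreground (no trailing '&'), so the script does not
# return until the work is done.
set -e                      # fail the job if any command fails
echo "42" > result.txt      # output lands in the job directory
```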

If your job does a significant amount of reading and writing to disk, consider using the node's local SSD (for example /home/user/tmp/ or /tmp/), which is faster than shared storage.
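One common pattern, sketched below with illustrative paths and file names, is to keep I/O-heavy intermediate files on local scratch and copy only the final result back to the job directory on the shared file system:

```shell
#!/bin/sh
set -e
# Create a private scratch directory on the local SSD
# (/tmp/ is one of the local paths mentioned above).
SCRATCH=$(mktemp -d /tmp/job.XXXXXX)
trap 'rm -rf "$SCRATCH"' EXIT   # clean up scratch on exit

# Heavy intermediate I/O goes to the local SSD...
seq 1 1000 > "$SCRATCH/intermediate.dat"

# ...and only the final output is written back to the job
# directory (the current directory) on shared storage.
wc -l < "$SCRATCH/intermediate.dat" > line_count.txt
```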

The compute node considers the job complete when the script exits.

Desired output should be written to files in the job's directory on the shared file system; these files are returned to the user when the job completes.