A job can be thought of as a directory containing a script named
run.sh, along with any other files the job needs. When uploaded to the
cluster, the job files are placed on the shared file system under a path
containing <job-id>, the unique identifier for the job.
When a compute node runs a job, it executes the job’s
run.sh script
inside the job’s directory on the shared file system. The run.sh script
should run the job’s main program in the foreground.
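As a sketch, a minimal run.sh might look like the following. The "main program" here is a trivial line count standing in for a real workload; the file names numbers.txt and output.txt are illustrative, not required by the cluster.

```shell
#!/bin/sh
# Minimal run.sh sketch. A real job would invoke its own program
# here instead of the line-count stand-in below.
set -eu                           # exit on any error or unset variable
seq 1 100 > numbers.txt           # stand-in for the job's input data
wc -l < numbers.txt > output.txt  # main work, run in the foreground
```

Because the script runs inside the job's directory, output.txt lands alongside the other job files on the shared file system.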
If your job will do a significant amount of reading and writing to disk, you
may want to use the compute node’s local SSD, which
will be faster than shared storage.
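One common pattern is to stage data onto the local disk, do the heavy I/O there, and copy only the results back. The sketch below assumes a hypothetical LOCAL_SCRATCH location for the local SSD; check your cluster's documentation for the actual mount point (/tmp is only a fallback here).

```shell
#!/bin/sh
# Sketch of an I/O-heavy run.sh that stages work onto node-local storage.
# LOCAL_SCRATCH is a hypothetical variable, not a documented cluster path.
set -eu
printf 'b\na\nc\n' > input.txt                     # stand-in for real input
LOCAL_SCRATCH="${LOCAL_SCRATCH:-/tmp}"
workdir="$(mktemp -d "$LOCAL_SCRATCH/job.XXXXXX")"
cp input.txt "$workdir/"                           # stage input onto local SSD
sort "$workdir/input.txt" > "$workdir/sorted.txt"  # heavy I/O stays local
cp "$workdir/sorted.txt" .                         # copy results back to shared
rm -rf "$workdir"                                  # clean up local scratch
```

Copying results back before the script exits matters, since only files in the job's shared directory are returned to the user.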
The compute node considers the job completed when the run.sh script exits.
Desired output should be written to files in the job’s directory on the shared file system. These files will be returned to the user.