A human-readable name for your own reference. It isn’t used for anything internally.
This is the physical location of the cluster. The region cannot be changed after the cluster has been created.
This is the size of the virtual server used to host the head node. In most cases a small head node is fine. Larger sizes will provide more computing power and memory, and may be appropriate for large clusters, read-heavy workloads or large queues.
This is the size of the root volume for the head node. It cannot be adjusted.
The size of the shared file system. The shared volume will contain the job queue data, data for individual jobs, and any shared data. The volume cannot be resized after the cluster has been created.
The number of seconds after which the head node will be considered to have failed. A failed head node is destroyed and replaced with a new node. A reasonable setting here is 300 seconds or more. Note that the timeout needs to take boot time into account. A value of 0 will disable monitoring.
The size of the virtual server used to host the compute nodes. The compute node size can be changed at any time, so it’s best to start small and increase the size after benchmarking.
The size of the root filesystem on each compute node. This filesystem is faster than the shared filesystem, and can be accessed by jobs.
The operating system image to use when launching a compute node. Please contact support if the available images don’t meet your needs.
The number of seconds after which a compute node will be considered to have failed. A failed compute node is destroyed. The cluster’s autoscaling mechanism will create a new compute node if required. Note that this timeout only comes into effect once the compute node has registered with the head node. If a compute node doesn’t register with the head node within 20 minutes, it’s considered failed and is destroyed.
See the auto scaling documentation for more details.
The minimum number of compute nodes that the cluster will keep running.
The maximum number of nodes the cluster can expand to. Note that each account also has a global limit on the number of nodes. Please contact support if you need a higher limit.
This setting controls how the number of compute nodes will scale with the job queue depth.
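The exact scaling policy isn’t specified here; as a sketch, assuming a simple proportional policy in which each node is expected to absorb `jobs_per_node` queued jobs (a hypothetical parameter), the target fleet size clamped to the configured minimum and maximum might be computed like this:

```python
import math


def target_node_count(queue_depth: int, jobs_per_node: int,
                      min_nodes: int, max_nodes: int) -> int:
    """Scale the compute fleet with queue depth, clamped to the
    cluster's configured minimum and maximum node counts.
    Illustrative only; the real policy may differ."""
    wanted = math.ceil(queue_depth / jobs_per_node)
    return max(min_nodes, min(wanted, max_nodes))
```

For example, with 10 queued jobs, 4 jobs per node, and limits of 1 and 8 nodes, this policy would target 3 nodes; an empty queue would fall back to the minimum of 1.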
This is copied directly onto each compute node on startup. You’ll need to place your SSH public key here in order to be able to log in to and configure your compute nodes.
The cluster’s API key is used to authenticate users of the cluster. Anyone in possession of the API key is allowed to submit, cancel, or delete jobs on the cluster. The API key is automatically generated when a cluster is created and may be regenerated at any time.
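The submission API itself isn’t documented in this section, so the sketch below only shows how an API key might be attached to a request. The endpoint URL, bearer-token scheme, and payload shape are all assumptions for illustration:

```python
import json

# Hypothetical endpoint; substitute your cluster's real API base URL.
API_BASE = "https://cluster.example.com/api/v1"


def job_request(api_key: str, command: str) -> tuple[dict, bytes]:
    """Build headers and body for a hypothetical job-submission call.

    Assumes bearer-token authentication and a JSON payload; check the
    cluster's API documentation for the actual scheme.
    """
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"command": command}).encode()
    return headers, body


# Usage might then look like:
#   headers, body = job_request(my_api_key, "echo hello")
#   requests.post(f"{API_BASE}/jobs", headers=headers, data=body)
```

Because the key grants full job control, treat it like a password: store it in a secrets manager rather than in source code, and regenerate it if it may have leaked.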