Managing Partitions
Partitions control which nodes are available to specific groups of users and set resource limits. Partitions are defined in the Slinky Controller CR's spec.extraConf field.
Creating a Partition
Step 1 — Edit the Controller CR
The Controller CR is named slurm-<cluster-name> in the slurm namespace:
kubectl -n slurm edit controller slurm-<cluster-name>
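If you aren't sure of the exact name, you can list the Controller resources in the namespace first (this uses the same resource kind as the edit command above):

kubectl -n slurm get controller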
Step 2 — Add partition lines to spec.extraConf
The spec.extraConf field contains both your custom configuration and an automatically injected section managed by the Crusoe Slurm Operator. Add your PartitionName= lines above the injected section markers:
spec:
  extraConf: |
    PartitionName=ml-team Nodes=worker-[0-2] MaxTime=08:00:00 State=UP
    # THE FOLLOWING SETTINGS ARE AUTOMATICALLY INJECTED BY CRUSOE SLURM OPERATOR
    # ===============================START======================================
    SlurmctldDebug=debug5
    SlurmdDebug=debug5
    ...
    PartitionName=all Nodes=ALL Default=YES MaxTime=UNLIMITED State=UP
    # ================================END=======================================
Do not modify anything between the START and END markers. The operator overwrites this section on every reconciliation cycle.
If you add Default=YES to your custom partition, the operator will automatically remove Default=YES from the all partition in the injected section.
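Partition lines accept the standard slurm.conf partition options, so access and size limits can be combined on a single line. For example, the sketch below restricts the partition to one Unix group and caps jobs at two nodes (the AllowGroups value and limits are illustrative, not part of this guide's setup):

PartitionName=ml-team Nodes=worker-[0-2] MaxTime=08:00:00 State=UP AllowGroups=ml-team MaxNodes=2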
Step 3 — Verify the partition
From a login node, run sinfo to confirm the new partition is available:
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
all*         up   infinite      5   idle worker-[0-4]
ml-team      up    8:00:00      3   idle worker-[0-2]
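To inspect a single partition's full settings (time limits, allowed groups, node lists), you can also use standard scontrol from a login node:

scontrol show partition ml-team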
Step 4 — Reconfigure Slurm (optional)
If Slurm doesn't pick up the change automatically, run the following from a login node:
scontrol reconfigure
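As a final smoke test, submit a trivial job against the new partition from a login node (using the ml-team example above):

srun --partition=ml-team hostname

If the job prints a hostname from worker-[0-2], the partition is accepting work.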
Next Steps
- Quickstart — Set up your Slurm cluster
- User Management — Create and manage users and groups
- Slurm Metrics — Monitor cluster health and job performance
- Advanced: Kubernetes Operations — Direct kubectl access, CRD-level configuration, and prolog/epilog scripts
- For Slurm command reference, see the official Slurm documentation