
Managing Shared Disks

Creating Shared Disks

Use the storage disks create command with the type shared-volume to create a Shared Disk of your specified size. The example below creates a 1 TiB disk called "shared-1."

crusoe storage disks create \
--name shared-1 \
--type shared-volume \
--size 1TiB \
--location us-southcentral1-a

The name, size, type, and location arguments are required to create a Shared Disk. When attaching a disk to a VM, the disk must be in the same location as the VM.

Viewing all Shared Disks

Use the storage disks list command to list existing disks.


crusoe storage disks list

Updating an existing Shared Disk

Use the storage disks resize <name> command with the --size flag to resize an existing Shared Disk. Specify <size> in whole-TiB increments (e.g., 5TiB).

crusoe storage disks resize <name> --size <size>

A Shared Disk can be grown or shrunk in increments of 1 TiB, with a minimum size of 1 TiB and a maximum size of 1000 TiB. A disk can only be shrunk down to its used capacity rounded up to the nearest TiB; any operation that would reduce the size below the used capacity will fail.

Deleting a Shared Disk

danger

Deleting a disk is a permanent action.

A Shared Disk must be unmounted from all Crusoe VMs before it can be deleted.

Use the storage disks delete <name> command to delete a disk. For example, replace DISK_NAME with the name of the disk you wish to delete:

crusoe storage disks delete DISK_NAME

Mounting Shared Disks

Before mounting a Shared Disk on a VM for the first time, install the VAST NFS driver — see Setting up the VAST NFS driver.

Confirm the driver is installed on the VM:

ubuntu@<vm>:~$ vastnfs-ctl status
version: 4.0.35-vastdata
kernel modules: sunrpc
services: rpcbind.socket rpcbind
rpc_pipefs: /run/rpc_pipefs

The mount command for the disk can be found in the Crusoe Cloud Console, either on the compute instance details page under the disk actions section or on the storage page under the actions section.

# eu-iceland1-a
ubuntu@<vm>:~$ sudo mount -t nfs -o vers=3,nconnect=16,spread_reads,spread_writes,remoteports=dns nfs.crusoecloudcompute.com:/volumes/<volume_uuid> <path-to-mount>

# all other regions
ubuntu@<vm>:~$ sudo mount -t nfs -o vers=3,nconnect=16,spread_reads,spread_writes,remoteports=100.64.0.2-100.64.0.17 100.64.0.2:/volumes/<volume_uuid> <path-to-mount>
info

On eu-iceland1-a, the remoteports=dns form requires the VM to resolve nfs.crusoecloudcompute.com. If your VPC uses custom DNS, verify resolution first (e.g., dig nfs.crusoecloudcompute.com) before relying on this mount command. Other regions use literal IP addresses and do not depend on DNS.
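
Since the eu-iceland1-a mount command depends on name resolution, a quick pre-flight check can save debugging time. A minimal sketch; the check_dns helper below is hypothetical, not part of any Crusoe tooling:

```shell
# Hypothetical helper: report whether a hostname resolves on this VM.
check_dns() {
  if getent hosts "$1" > /dev/null; then
    echo "resolvable"
  else
    echo "unresolvable"
  fi
}

# On eu-iceland1-a, verify the NFS endpoint resolves before mounting:
# check_dns nfs.crusoecloudcompute.com
```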

Use the findmnt command to confirm the Shared Disk is mounted correctly:

ubuntu@<vm>:~$ findmnt -t nfs
TARGET SOURCE FSTYPE OPTIONS
<path-to-mount> <path-of-shared-disk> nfs rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,forcerdirplus,proto=tcp,nconnect=16,timeo=600,retrans=2,sec=sys,mountaddr=100.64.0.2,mountvers=3,mountport=20048,mountproto=tcp,local_lock=none,spread_reads,spread_writes,addr=100.64.0.2

Running df shows the provisioned capacity of the Shared Disk:

ubuntu@<vm>:~$ df -h
Filesystem Size Used Avail Use% Mounted on
<path-of-shared-disk> 100T 52G 101T 1% <path-to-mount>

To mount Shared Disks persistently across VM reboots, add an entry to the /etc/fstab file:

danger

Always take a backup of the fstab file for recovery purposes and ensure serial console access is enabled to recover the VM in case of boot failures due to incorrect fstab entries.
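
One way to follow this advice is to copy the file before editing and then run findmnt --verify, which parses /etc/fstab and reports problems without mounting anything or rebooting. A sketch; the backup path is an arbitrary choice:

```shell
# Keep a recovery copy of fstab before editing it.
sudo cp /etc/fstab /etc/fstab.bak

# ...edit /etc/fstab...

# Parse fstab and report problems (bad options, missing mount points)
# without actually mounting anything or rebooting.
sudo findmnt --verify
```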

ubuntu@<vm>:~$ sudo vi /etc/fstab
...
# eu-iceland1-a
nfs.crusoecloudcompute.com:/volumes/<volume_uuid> <path-to-mount> nfs rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,forcerdirplus,proto=tcp,nconnect=16,timeo=600,retrans=2,sec=sys,local_lock=none,remoteports=dns,spread_reads,spread_writes,_netdev,nofail,x-systemd.automount,x-systemd.idle-timeout=30 0 0

# all other regions
100.64.0.2:/volumes/<volume_uuid> <path-to-mount> nfs rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,forcerdirplus,proto=tcp,nconnect=16,timeo=600,retrans=2,sec=sys,local_lock=none,spread_reads,spread_writes,_netdev,nofail,x-systemd.automount,x-systemd.idle-timeout=30,remoteports=100.64.0.2-100.64.0.17 0 0
...

Verify that the automount works for Shared Disks:

ubuntu@<vm>:~$ sudo mount -va
...
<path-to-mount> : successfully mounted
...

Unmounting Shared Disks

Shared Disks can be unmounted by running the umount command:

ubuntu@<vm>:~$ sudo umount <path-to-unmount>
ubuntu@<vm>:~$ findmnt -t nfs
ubuntu@<vm>:~$

Benchmarking a mounted Shared Disk

For best results, ensure the NFS readahead parameter is tuned and the network MTU is set to 9000 before running the fio benchmarks.
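
The exact tuning steps depend on the VM image. A hedged sketch of both settings; the interface name eth0 and the readahead value below are assumptions to adapt to your VM and workload:

```shell
# Sketch only: eth0 and the readahead value are assumptions; adjust
# them for your VM and workload.

# Enable jumbo frames on the VM's network interface.
sudo ip link set dev eth0 mtu 9000

# NFS readahead is set per mount through its backing-device-info (bdi)
# entry, keyed by the mount's MAJ:MIN device number.
BDI=$(findmnt -no MAJ:MIN <path-to-mount>)
echo 8192 | sudo tee "/sys/class/bdi/${BDI}/read_ahead_kb"
```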

The fio tool can be used to benchmark Shared Disks. From within a mounted Shared Disk, you can run the following commands to test read/write bandwidth and IOPS:

# test write bw
fio --name=my-job --group_reporting --time_based=1 --cpus_allowed_policy=split --runtime=10s --ramp_time=5s --size 20G --numjobs=32 --ioengine=aio --direct=1 --iodepth 8 --rw write --bs 1m
# test read bw
fio --name=my-job --group_reporting --time_based=1 --cpus_allowed_policy=split --runtime=10s --ramp_time=5s --size 20G --numjobs=32 --ioengine=aio --direct=1 --iodepth 8 --rw read --bs 1m
# test write IOPS
fio --name=my-job --group_reporting --time_based=1 --cpus_allowed_policy=split --runtime=10s --ramp_time=5s --size 20G --numjobs=32 --ioengine=aio --direct=1 --iodepth 8 --rw write --bs 4k
# test read IOPS
fio --name=my-job --group_reporting --time_based=1 --cpus_allowed_policy=split --runtime=10s --ramp_time=5s --size 20G --numjobs=32 --ioengine=aio --direct=1 --iodepth 8 --rw read --bs 4k

Shared Disks Performance Profile

Shared Disk performance is designed to scale with capacity: the performance target is determined by provisioned size, scaling linearly from 100 TiB up to 1 PiB. The table below shows the scaling rate per 1 TiB of provisioned capacity.

| Metric | Read Bandwidth | Write Bandwidth | IOPS |
| --- | --- | --- | --- |
| Scaling Rate | Up to 200 MB/s per TiB | Up to 40 MB/s per TiB | Up to 1.2k IOPS per TiB |

While performance scales linearly per TiB, the service includes a base level of performance (at the 100 TiB level) and a defined maximum ceiling (at the 1 PiB level). Disks smaller than 100 TiB will still receive the base performance.
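
The clamped linear scaling described above can be sketched as a small helper. The function below is hypothetical, using the read-bandwidth rate of 200 MB/s per TiB with the 100 TiB base and 1000 TiB maximum size from this section:

```shell
# Hypothetical helper: target aggregate read bandwidth (MB/s) for a
# Shared Disk of a given size in TiB, per the scaling table above.
read_bw_mbps() {
  local tib=$1
  (( tib < 100 )) && tib=100     # disks below 100 TiB get base performance
  (( tib > 1000 )) && tib=1000   # ceiling at ~1 PiB (1000 TiB max size)
  echo $(( tib * 200 ))          # up to 200 MB/s per TiB
}

read_bw_mbps 50    # base: 20000 MB/s (20 GB/s)
read_bw_mbps 500   # 100000 MB/s (100 GB/s)
read_bw_mbps 1000  # ceiling: 200000 MB/s (200 GB/s)
```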

info

The 100 TiB base performance target is not yet enabled in the eu-iceland region due to ongoing maintenance; it will be enabled once the maintenance is complete. Until then, base performance is directly proportional to disk size from 1 TiB to 1000 TiB.

| Metric | Base Performance (at 100 TiB) | Maximum Performance (at 1 PiB) |
| --- | --- | --- |
| Read Bandwidth | Up to 20 GB/s | Up to 200 GB/s |
| Write Bandwidth | Up to 4 GB/s | Up to 40 GB/s |
| IOPS | Up to 120,000 | Up to 1,200,000 |

Aggregate Performance vs. Per-VM Performance

The performance metrics (read, write, and IOPS) detailed above represent the target aggregate performance of the Shared Disk across all virtual machines (VMs) attached to it. Performance within an individual VM will vary based on the VPC Network Bandwidth allocated to that VM.

API reference

To manage Shared Disks programmatically over HTTP, see the Crusoe Cloud API reference.