Managing Shared Disks
Creating Shared Disks
- CLI
- UI
- Terraform
Use the storage disks create command and specify the type "shared-volume" to create a shared disk of your specified size. In the example below, we will create a 1 TiB disk called "shared-1".
crusoe storage disks create \
--name shared-1 \
--type shared-volume \
--size 1TiB \
--location us-southcentral1-a
name, size, type, and location are required arguments to create a shared disk. When attaching a disk to a VM, the disk must be in the same location as the VM.
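After creating the disk, you can confirm it exists with the list command covered later in this guide (a quick sanity check rather than a required step):
crusoe storage disks list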
In order to create a Shared Disk via the Crusoe Cloud console:
- Visit the Crusoe Cloud console
- Click the "Storage" tab in the left nav
- Click the "Create Disk" button
- Input a name for the disk, using only letters, numbers, hyphens (-), and underscores (_)
- Select the type "Shared Volume"
- Set the desired size of the disk from 1TiB to 1000TiB
- Click the "Create" button
The following is intended to help get you started in using Terraform to provision a Shared Disk and attach the disk to a VM in Crusoe Cloud.
Copy and paste the code below into a text editor of your choice and name the file main.tf. The example below creates a disk:
// Crusoe Provider
terraform {
  required_providers {
    crusoe = {
      source = "registry.terraform.io/crusoecloud/crusoe"
    }
  }
}

resource "crusoe_storage_disk" "new_shared_disk" {
  name     = "new-shared-disk"
  size     = "1TiB" // "1TiB" to "1000TiB"
  location = "us-southcentral1-a"
  type     = "shared-volume"
}
name, size, type, and location are required arguments.
name can only include lowercase ASCII characters, numbers, and -.
size must be in the format [Number][unit], where the only valid unit is TiB (tebibyte). Acceptable sizes range from 1TiB to 1000TiB.
When attaching a disk to a VM, the disk must be in the same location as the VM.
Viewing all Shared Disks
- CLI
- UI
- Terraform
Use the storage disks list command to list existing disks.
crusoe storage disks list
In order to view a list of existing disks via the Crusoe Cloud console:
- Visit the Crusoe Cloud console
- Click the "Storage" tab in the left nav
To list existing disks with Terraform, the following code snippet populates a Terraform data source using the Crusoe Terraform provider.
# list disks
data "crusoe_storage_disks" "disks" {}

output "crusoe_disks" {
  value = data.crusoe_storage_disks.disks
}
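After running terraform apply, the populated data source can be inspected with the standard terraform output command (the output name crusoe_disks matches the snippet above):
terraform output crusoe_disks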
Update an existing Shared Disk
- CLI
- UI
- Terraform
Use the storage disks resize <name> command to resize existing disks using the --size flag. Here's an example:
crusoe storage disks resize <name> --size <size>
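For instance, to grow the shared-1 disk created earlier in this guide to 2 TiB (the name and size here are illustrative):
crusoe storage disks resize shared-1 --size 2TiB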
In order to update a disk via the Crusoe Cloud console:
- Visit the Crusoe Cloud console
- Click the "Storage" tab in the left nav
- Navigate to the row of the disk you wish to update
- Click the plus icon on the far right side of the row
- Enter the new size of the disk
- Click the "Confirm" button
To update an existing Shared Disk using the Crusoe Terraform provider, you can change the fields of an existing disk resource and run terraform apply. The Crusoe Terraform provider will apply the changes to the disk.
terraform {
  required_providers {
    crusoe = {
      source = "registry.terraform.io/crusoecloud/crusoe"
    }
  }
}

resource "crusoe_storage_disk" "new_shared_disk" {
  name     = "new-shared-disk"
  size     = "2TiB" // changed from "1TiB"
  location = "us-southcentral1-a"
  type     = "shared-volume"
}
Currently, only the "size" of the disk can be changed. Changes to the "name" or "location" of the disk will force a re-creation of the disk (deletion and then creation of a new disk).
Shared disk sizes can be increased or decreased in multiples of 1 TiB, with a minimum size of 1 TiB and a maximum size of 1000 TiB. A shared disk can only be shrunk down to its used capacity rounded up to the nearest TiB; for example, if 2.3 TiB is in use, the smallest allowed size is 3 TiB. Any operation that would reduce the size below the used capacity will fail.
Deleting a Shared Disk
Warning: Deleting a disk is a permanent action.
All Crusoe VMs must be detached from a Shared Disk before the Shared Disk can be deleted.
- CLI
- UI
- Terraform
Use the storage disks delete <name> command to delete a disk of your choice. As an example, you can delete a disk by replacing DISK_NAME with the name of the disk you wish to delete:
crusoe storage disks delete DISK_NAME
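For example, to delete the shared-1 disk created earlier in this guide (remember to detach it from all VMs first):
crusoe storage disks delete shared-1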
In order to delete a disk via the Crusoe Cloud console:
- Visit the Crusoe Cloud console
- Click the "Storage" tab in the left nav
- Navigate to the row of the disk you wish to delete
- Click the trash can icon on the far right side of the row
- Enter the name of the disk you wish to delete in the popup that appears.
- Click the "Confirm" button
A disk can be deleted by using the terraform destroy command provided by the Terraform CLI tool.
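Note that terraform destroy removes every resource in the configuration. If the same configuration also manages other resources (such as the VM example later in this guide), a targeted destroy, a standard Terraform option, limits the operation to the disk resource; this is a sketch assuming the resource name used above:
terraform destroy -target=crusoe_storage_disk.new_shared_disk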
Attaching and detaching Shared Disks
Once a disk has been created as described above, it can be attached to and detached from an instance.
- CLI
- UI
- Terraform
Use the compute vms attach-disks command to attach a disk to an instance. You can also attach multiple disks to an instance with this command by supplying a comma-separated list of disk names.
crusoe compute vms attach-disks my-vm --disk name=data-1,mode=read-write
Use the compute vms detach-disks command to detach a disk from an instance. You can also detach multiple disks from an instance with this command by supplying a comma-separated list of disk names.
crusoe compute vms detach-disks my-vm --disk name=data-1
In order to attach a Shared Disk to an instance via the Crusoe Cloud console:
- Visit the Crusoe Cloud console
- Click the "Instances" tab in the left nav
- Click on the instance to which you want to attach the disk; this opens further details about the instance
- Under the "Disks" section, click the "Attach Disk" button to attach a valid shared disk, or click the "X" next to an existing disk to detach it
If you want to create a VM with a disk attached, you can copy and paste the code below into a text editor of your choice and name the file main.tf. The example below creates a VM called my-new-vm that uses 10x NVIDIA L40S GPUs, with the disk new-shared-disk created and attached in the us-southcentral1-a location:
terraform {
  required_providers {
    crusoe = {
      source = "registry.terraform.io/crusoecloud/crusoe"
    }
  }
}

locals {
  my_ssh_key = file("~/.ssh/id_ed25519.pub")
}

resource "crusoe_storage_disk" "new_shared_disk" {
  name     = "new-shared-disk"
  size     = "1TiB"
  location = "us-southcentral1-a"
  type     = "shared-volume"
}

// new VM
resource "crusoe_compute_instance" "my_vm" {
  name     = "my-new-vm"
  type     = "l40s.10x"
  location = "us-southcentral1-a"

  // specify the base image
  image = "ubuntu22.04:latest"

  disks = [
    // disk attached at startup
    {
      id              = crusoe_storage_disk.new_shared_disk.id
      mode            = "read-write" // other option: "read-only"
      attachment_type = "data"
    }
  ]

  ssh_key = local.my_ssh_key
}
In the disks section of the VM resource, id, mode, and attachment_type are required.
id is the ID of the disk, which you can reference as crusoe_storage_disk.new_shared_disk.id.
mode is either read-only or read-write.
attachment_type can currently only be set to data.
After saving the code to a main.tf file, run the following commands to create the resources in Crusoe Cloud using Terraform:
- terraform init - Initializes a working directory containing Terraform configuration files.
- terraform plan - The output of this command shows the resources Terraform plans to create.
- terraform apply - This command creates the resources.
You can confirm that Terraform successfully created the resources through the console, but if you prefer the CLI, you can also run:
crusoe storage disks list
which will show you the disks you have created in your account.
Mounting Shared Disks
Once a Shared Disk is attached, it can be mounted with the mount command, using the virtiofs filesystem type and the disk name as the source:
ubuntu@<vm>:~$ sudo mount -t virtiofs <name of shared disk> <path-to-mount>
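For example, assuming the disk is named shared-1 and you want to mount it at /mnt/shared-1 (both names are illustrative), create the mount point and then mount the disk:
ubuntu@<vm>:~$ sudo mkdir -p /mnt/shared-1
ubuntu@<vm>:~$ sudo mount -t virtiofs shared-1 /mnt/shared-1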
The findmnt command can be used to confirm that the Shared Disk is mounted correctly:
ubuntu@<vm>:~$ findmnt -t virtiofs
TARGET SOURCE FSTYPE OPTIONS
<path-to-mount> <name of shared disk> virtiofs rw,relatime
Running df shows the provisioned capacity of the Shared Disk:
ubuntu@<vm>:~$ df -h
Filesystem Size Used Avail Use% Mounted on
...
<name of shared disk> 100T 0 100T 0% <path-to-mount>
To mount Shared Disks persistently across VM reboots, add an entry to the /etc/fstab file.
Warning: Always take a backup of the fstab file for recovery purposes, and ensure serial console access is enabled so the VM can be recovered in case of boot failures caused by incorrect fstab entries.
ubuntu@<vm>:~$ sudo vi /etc/fstab
...
<name of shared disk> <path-to-mount> virtiofs defaults 0 0
...
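As a concrete illustration, reusing the illustrative shared-1 disk name and /mnt/shared-1 mount point from above, the fstab entry would look like:
shared-1 /mnt/shared-1 virtiofs defaults 0 0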
Verify that the automount works for Shared Disks:
ubuntu@<vm>:~$ sudo mount -va
...
<path-to-mount> : successfully mounted
...
Unmounting Shared Disks
Shared Disks can be unmounted by running the umount command:
ubuntu@<vm>:~$ sudo umount <path-to-unmount>
ubuntu@<vm>:~$ findmnt -t virtiofs
ubuntu@<vm>:~$
Unmounted Shared Disks can be mounted again as long as the shared volume is still attached in the control plane.
Benchmarking a mounted Shared Disk
The fio tool can be used to benchmark Shared Disks. From within the mount path of a Shared Disk, you can run the following commands to test read/write bandwidth and IOPS:
# test write bw
fio --name=my-job --group_reporting --time_based=1 --cpus_allowed_policy=split --runtime=10s --ramp_time=5s --size 20G --numjobs=32 --ioengine=libaio --direct=1 --iodepth 8 --rw write --bs 1m
# test read bw
fio --name=my-job --group_reporting --time_based=1 --cpus_allowed_policy=split --runtime=10s --ramp_time=5s --size 20G --numjobs=32 --ioengine=libaio --direct=1 --iodepth 8 --rw read --bs 1m
# test write iops
fio --name=my-job --group_reporting --time_based=1 --cpus_allowed_policy=split --runtime=10s --ramp_time=5s --size 20G --numjobs=32 --ioengine=libaio --direct=1 --iodepth 8 --rw write --bs 4k
# test read iops
fio --name=my-job --group_reporting --time_based=1 --cpus_allowed_policy=split --runtime=10s --ramp_time=5s --size 20G --numjobs=32 --ioengine=libaio --direct=1 --iodepth 8 --rw read --bs 4k
Performance of Shared Disks

| Read in MB/s per TiB of Storage | Write in MB/s per TiB of Storage | IOPS per TiB of Storage |
|---|---|---|
| Up to 200 MB/s per TiB | Up to 40 MB/s per TiB | Up to 1.2k IOPS per TiB |
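Assuming performance scales linearly with provisioned capacity, as the per-TiB figures suggest, a 10 TiB Shared Disk would deliver up to roughly 2,000 MB/s of read bandwidth, 400 MB/s of write bandwidth, and 12k IOPS.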
Shared disks are currently restricted and may not be available for immediate provisioning. If you require access, please contact our sales team to discuss your use case.