Managing Instance Ephemeral Storage
Overview
Ephemeral disks are available on certain VM instance types. Please refer to the VM overview page for details.
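To confirm whether your instance has ephemeral disks, you can list the block devices from within the VM. A minimal check (exact device names vary by instance type):
# List all disks; ephemeral NVMe disks typically appear as nvme0n1, nvme1n1, etc.
lsblk -d -o NAME,SIZE,MODEL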
Lifecycle
Ephemeral disks have no redundancy, and their lifecycle is tied to the physical server hosting the virtual machine. The disks are erased when the physical server reboots, when the VM is stopped and restarted, or when any other hardware or software failure causes the virtual machine to move or shut down. They are therefore only suitable for use cases where data loss is tolerable, even when configured as part of a storage cluster such as MinIO, Lustre, or Ceph. If your data needs to be protected, back it up to external storage such as a persistent or shared disk.
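For example, a periodic copy to persistent storage is a simple safeguard. A minimal sketch, assuming a persistent disk is already mounted at /mnt/persistent; both directory paths below are placeholders, not fixed conventions:
# Mirror scratch data from the ephemeral array to persistent storage
rsync -a --delete /raid0/scratch/ /mnt/persistent/backups/scratch/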
Note: if a VM is restarted from within the VM (e.g. using sudo reboot now) rather than by stopping and restarting the VM from the UI, CLI, or API, the disks will not be erased.
Formatting and Mounting Ephemeral Disks
Below is a script you can add to your startup scripts to automatically detect the ephemeral disks and assemble them into an unprotected RAID0 array for additional performance benefits. The array is formatted with an xfs file system and mounted at the /raid0 path. If only a single ephemeral disk is found, the script still creates a one-device RAID0 array, so the mount point is the same either way.
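The script assumes mdadm and the xfs tools are available. On Debian- or Ubuntu-based images they can be installed as follows (package names may differ on other distributions):
sudo apt-get update && sudo apt-get install -y mdadm xfsprogs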
#!/bin/bash
set -euo pipefail
echo "info: detecting NVMe drives by-id..."
# Collect all nvme-* symlinks, exclude partitions
all_symlinks=$(ls -1 /dev/disk/by-id/nvme-* 2>/dev/null | grep -vE '(_[0-9]+$|part[0-9]+$)' || true)
# Deduplicate: keep only one symlink per backing device
nvme_devices=""
seen_targets=""
for symlink in $all_symlinks; do
  target=$(readlink -f "$symlink")
  if ! echo "$seen_targets" | grep -q -w "$target"; then
    nvme_devices="$nvme_devices $symlink"
    seen_targets="$seen_targets $target"
  fi
done
nvme_devices=$(echo "$nvme_devices" | xargs) # trim
num_nvme=$(echo "$nvme_devices" | wc -w)
if [ "$num_nvme" -eq 0 ]; then
echo "error: no NVMe drives were detected under /dev/disk/by-id/. Exiting."
exit 1
fi
echo "info: found $num_nvme NVMe drive(s)."
echo "info: devices: $nvme_devices"
if [ ! -b /dev/md/ephemeral ]; then
  echo "info: creating md dev"
  # --force permits a RAID0 array with a single member disk.
  # $nvme_devices is intentionally unquoted so each device is a separate argument.
  # The mdadm call is wrapped in "if !" because, under "set -e", a plain
  # invocation followed by a "$?" check would never reach the check on failure.
  if ! sudo mdadm --create /dev/md/ephemeral \
    --force \
    --name=ephemeral \
    --level=0 \
    --raid-devices="$num_nvme" \
    --homehost=any \
    $nvme_devices; then
    echo "error: failed to create md dev"
    exit 1
  fi
else
  echo "info: md dev already exists"
fi
sudo udevadm settle
# Check if the RAID device is already formatted
if ! sudo blkid -p -u filesystem /dev/md/ephemeral > /dev/null 2>&1; then
  echo "info: creating xfs fs on md dev"
  sudo mkfs.xfs /dev/md/ephemeral
else
  echo "info: md dev is already formatted with an xfs fs"
fi
# Create mount point if it doesn't exist
if [ ! -d /raid0 ]; then
  echo "info: creating mount point /raid0"
  sudo mkdir /raid0
fi
# Mount the device if it's not already mounted
if ! mountpoint -q /raid0; then
  echo "info: mounting /dev/md/ephemeral at /raid0"
  sudo mount /dev/md/ephemeral /raid0
else
  echo "info: /raid0 is already a mount point"
fi
echo "info: setup complete. Filesystem is mounted at /raid0."