Troubleshooting Object Storage

This guide covers common issues you may encounter when using Crusoe Cloud Object Storage and their solutions.

"403 Access Denied" when using S3 tools

Possible causes and solutions:

  • Incorrect credentials: Verify your access key and secret key are correct and have not expired.
  • Region mismatch: Confirm the endpoint URL matches the location of your bucket. Object Storage is a regional resource — you cannot access buckets in one location from VMs in another.
  • Path-style configuration: Check that host_bucket (s3cmd), force_path_style (rclone), or the client's addressing style (boto3) is configured correctly for path-style access.

Example fix for s3cmd:

# In ~/.s3cfg, ensure these are set correctly:
host_base = object.<location>.crusoecloudcompute.com
host_bucket = object.<location>.crusoecloudcompute.com # No %(bucket)s prefix

Example fix for rclone:

# In ~/.config/rclone/rclone.conf:
force_path_style = true
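Example fix for boto3 — a minimal sketch of forcing path-style addressing via the client Config; the endpoint and credential values are placeholders to fill in:

```python
import boto3
from botocore.config import Config

# Path-style addressing (https://endpoint/bucket/key) rather than
# virtual-hosted style (https://bucket.endpoint/key).
s3 = boto3.client(
    "s3",
    endpoint_url="https://object.<location>.crusoecloudcompute.com",
    aws_access_key_id="<access-key>",
    aws_secret_access_key="<secret-key>",
    config=Config(s3={"addressing_style": "path"}),
)
```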

"Bucket already exists" error on creation

Cause: Bucket names must be globally unique across all Crusoe Cloud projects.

Solution: Choose a different name or check for existing buckets with crusoe storage buckets list.

Best practice: Use a naming convention that includes your organization or project name:

  • mycompany-training-data-prod
  • project123-ml-checkpoints
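Bucket names can be checked locally before creation against the standard S3 naming rules (3–63 characters; lowercase letters, digits, and hyphens; must start and end with a letter or digit). This helper is illustrative, not part of the Crusoe CLI:

```python
import re

# Standard S3-style bucket naming rules: 3-63 characters, lowercase
# letters, digits, and hyphens; must start and end with a letter or digit.
_BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$")

def is_valid_bucket_name(name: str) -> bool:
    return bool(_BUCKET_RE.match(name))

print(is_valid_bucket_name("mycompany-training-data-prod"))  # True
print(is_valid_bucket_name("My_Bucket"))                     # False
```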

Virtiofs conflict error

Error message: Object Storage is currently not supported for project <project-id> because it uses virtiofs for shared disks. To enable Object Storage, migrate your shared disks to NFS. Please contact support for assistance.

Cause: Your project is still using the legacy virtiofs backend for Shared Disks, which conflicts with Object Storage.

Solution: Complete the migration from virtiofs to NFS before using Object Storage.


Slow upload/download performance

Possible causes and solutions:

Use multipart uploads for large files

Most S3 clients automatically use multipart uploads for files larger than 15 MB, but you can adjust settings for better performance:

s3cmd:

s3cmd put --multipart-chunk-size-mb=64 large-file.tar s3://bucket/

boto3:

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")  # configure endpoint_url and credentials as usual

config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,  # use multipart above 64 MB
    multipart_chunksize=64 * 1024 * 1024,  # 64 MB parts
    max_concurrency=10,
)
s3.upload_file("large-file.tar", "bucket", "key", Config=config)

Increase concurrency settings

rclone:

rclone copy ./data/ crusoe:bucket/path/ \
--transfers 16 \
--checkers 8 \
--s3-upload-concurrency 4

Check VM network bandwidth

Verify your VM type has sufficient VPC Network Bandwidth. Larger instance types have higher network throughput.

Use optimal object sizes

For best throughput, use object sizes of 64 MB or larger. If you're uploading many small files, consider creating tar archives before uploading.


Cannot create or delete buckets via S3 clients

Cause: This is expected behavior. Bucket creation and deletion are managed exclusively through the Crusoe Cloud Console or CLI for security and resource management reasons.

Solution: Use the Crusoe CLI or Console:

# Create bucket
crusoe storage buckets create --name my-bucket --location us-east1-a

# Delete bucket
crusoe storage buckets delete --name my-bucket

"Object lock requires versioning" error

Cause: Object lock can only be enabled on buckets that have versioning enabled.

Solution: Enable versioning first, then enable object lock:

# Enable versioning
crusoe storage buckets enable-versioning my-bucket

# Then enable object lock
crusoe storage buckets enable-locking my-bucket --retention 30d

Warning

Once versioning and object lock are enabled, they cannot be disabled.


S3 client reports "InvalidRequest" or "NotImplemented"

Cause: You're trying to use an S3 feature that isn't supported by Crusoe Object Storage.

Currently unsupported features:

  • Server-side encryption (SSE-S3, SSE-KMS, SSE-C)
  • Complex ACLs beyond bucket-level permissions
  • Cross-region replication
  • Event notifications (S3 Event Notifications to SNS/SQS)

Solution: Check the Supported S3 Features section in the Overview to confirm the feature is available.


Multipart upload stuck or incomplete

Symptoms: Large file uploads fail or hang indefinitely.

Causes and solutions:

  1. Network interruption: Check your VM's network connectivity and retry the upload.

  2. List incomplete uploads:

    # s3cmd
    s3cmd multipart s3://bucket-name

    # boto3
    response = s3.list_multipart_uploads(Bucket='bucket-name')

  3. Abort incomplete uploads:

    # Using boto3
    s3.abort_multipart_upload(
        Bucket='bucket-name',
        Key='object-key',
        UploadId='upload-id-from-list'
    )
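To clean up every incomplete upload in a bucket at once, the list and abort calls can be combined. A sketch, assuming `s3` is a boto3 client already configured for your Crusoe endpoint:

```python
# Abort all in-progress multipart uploads in the bucket (sketch;
# assumes `s3` is a configured boto3 client).
paginator = s3.get_paginator("list_multipart_uploads")
for page in paginator.paginate(Bucket="bucket-name"):
    for upload in page.get("Uploads", []):
        s3.abort_multipart_upload(
            Bucket="bucket-name",
            Key=upload["Key"],
            UploadId=upload["UploadId"],
        )
```

Note that aborting an upload discards its parts; re-run the original upload afterward.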

Secret key not saved or lost

Cause: The secret key is only displayed once during creation and cannot be retrieved afterward.

Solution:

  1. Delete the old S3 API key
  2. Create a new S3 API key
  3. Update your S3 client configuration with the new credentials

# Delete old key
crusoe storage tokens delete <old-token-id>

# Create new key
crusoe storage tokens create --alias my-new-key

"Connection timed out" or "Unable to connect"

Possible causes:

  1. Wrong endpoint: Verify you're using the correct endpoint for your location:

    https://object.<location>.crusoecloudcompute.com

  2. VM not in same location: Object Storage is regional. Ensure your VM is in the same location as your bucket.

  3. Network issues: Check your VM's network connectivity:

    curl -I https://object.<location>.crusoecloudcompute.com

Need more help?

If you're still experiencing issues after trying these solutions:

  1. Check the Crusoe Cloud status page for service incidents
  2. Review the Object Storage Overview for architecture details
  3. Contact Crusoe Support with:
    • Error messages (full text)
    • S3 client configuration (redact credentials)
    • Steps to reproduce the issue
    • Your project ID and location