
Interacting with Video to Video models

We've partnered with Decart, a leading AI lab focused on delivering real-time generative experiences, to offer MirageLSD, the industry's first real-time video-to-video model, through our managed inference service. We currently support a batch API that transforms an input video with a prompt in near real time; a real-time streaming API is available in preview. Usage of the model is covered under Crusoe's Platform Privacy Policy and Acceptable Use Policy.

Key model details are outlined below.

Category                              Details
Output Video Resolution               1080p
Output Video Framerate                30 fps
Output Aspect Ratio                   16:9
Recommended Input Video Resolution    >720p; 1080p is ideal
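Because the model performs best on inputs above 720p, it can be useful to sanity-check a clip's dimensions before uploading. A minimal sketch is below; `check_input_video` is an illustrative helper (not part of the Crusoe API), and you would obtain the real width and height from your own tooling (e.g. ffprobe).

    from fractions import Fraction

    def check_input_video(width: int, height: int) -> dict:
        # Hypothetical helper: compares a clip's dimensions against the
        # recommendations in the table above.
        return {
            # The table recommends >720p input, with 1080p ideal.
            "meets_min_resolution": min(width, height) >= 720,
            "is_ideal_resolution": min(width, height) >= 1080,
            # Output is 16:9, so matching input avoids cropping/letterboxing.
            "matches_output_aspect": Fraction(width, height) == Fraction(16, 9),
        }

    print(check_input_video(1920, 1080))

A 1920x1080 clip passes all three checks; a 1280x720 clip meets the minimum and the 16:9 aspect ratio but is below the ideal 1080p.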

Querying Video to Video models (Batch)

Submitting a request to the Decart Mirage - Batch model involves four main steps:

  • Upload the input video via the files API
  • Submit the request to the queue API
  • Monitor the status of the job via the queue API
  • Download the output video via the files API

We break this down via the code snippets below.

import requests
import time

INPUT_VIDEO_PATH = "<INPUT_VIDEO_PATH>"
OUTPUT_VIDEO_PATH = "<OUTPUT_VIDEO_PATH>"

AUTH_TOKEN = "<YOUR_API_KEY>"

# Step 1: Upload the video file.
with open(INPUT_VIDEO_PATH, "rb") as f:
    upload_response = requests.post(
        "https://api-video.crusoe.ai/v1/files",
        headers={
            "Authorization": f"Bearer {AUTH_TOKEN}",
        },
        files={"file": f},
        data={"purpose": "video"},
    )
upload_response.raise_for_status()
upload_data = upload_response.json()
file_id = upload_data.get("id")
if not file_id:
    raise ValueError("Could not find 'id' in upload response.")

print(f"File uploaded successfully. id: {file_id}")

# Step 2: Submit the inference request.
inference_response = requests.post(
    "https://api-video.crusoe.ai/v1/queue/decart/miragelsd-1-batch/enhanced",
    headers={
        "Authorization": f"Bearer {AUTH_TOKEN}",
        "Content-Type": "application/json",
    },
    json={
        "file_id": file_id,
        "prompt": "<YOUR_PROMPT_DESCRIBING_THE_DESIRED_VIDEO>",
    },
)
inference_response.raise_for_status()
inference_data = inference_response.json()
request_id = inference_data.get("request_id")
status_url = inference_data.get("status_url")
if not request_id or not status_url:
    raise ValueError("Could not find 'request_id' or 'status_url' in enqueue response.")

print(f"Request enqueued. request_id: {request_id}")

# Step 3: Poll for job completion. A 'status' of 200 indicates the job
# is complete; otherwise, wait and poll again.
is_complete = False
result_file_id = None
while not is_complete:
    status_response = requests.get(
        status_url,
        headers={
            "Authorization": f"Bearer {AUTH_TOKEN}",
        },
    )
    status_response.raise_for_status()
    status_data = status_response.json()
    if status_data.get("status") == 200:
        is_complete = True
        result_file_id = status_data.get("result_file_id")
    else:
        time.sleep(2)

print(f"Request complete. result_file_id: {result_file_id}")

# Step 4: Download the result, streaming it to disk in chunks.
download_response = requests.get(
    f"https://api-video.crusoe.ai/v1/files/{result_file_id}",
    headers={
        "Authorization": f"Bearer {AUTH_TOKEN}",
    },
    stream=True,
)
download_response.raise_for_status()
with open(OUTPUT_VIDEO_PATH, "wb") as f:
    for chunk in download_response.iter_content(chunk_size=8192):
        f.write(chunk)

print(f"File downloaded successfully to: {OUTPUT_VIDEO_PATH}")
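For longer-running jobs, you may prefer to bound the Step 3 polling loop with a deadline and exponential backoff rather than polling every two seconds indefinitely. The sketch below is illustrative, not part of the Crusoe API: `poll_until_complete` takes any zero-argument callable that returns the parsed JSON from the status URL.

    import time

    def poll_until_complete(fetch_status, timeout_s=600, initial_delay_s=2, max_delay_s=30):
        # Hypothetical polling helper: calls fetch_status() until the job
        # reports completion ('status' == 200, as in the sample above),
        # backing off exponentially between attempts. Raises TimeoutError
        # if the deadline passes first.
        deadline = time.monotonic() + timeout_s
        delay = initial_delay_s
        while time.monotonic() < deadline:
            status_data = fetch_status()
            if status_data.get("status") == 200:
                return status_data
            time.sleep(min(delay, max_delay_s))
            delay *= 2
        raise TimeoutError("Job did not complete within the timeout.")

In the sample above, `fetch_status` would wrap the authorized GET to `status_url`, and the returned dict's `result_file_id` would feed Step 4.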