Getting Started with Managed Inference
All models listed in the overview can be queried via the api.crusoe.ai path.
Retrieving your Inference API token
You can retrieve your Inference API token via the Crusoe Cloud console by following the steps below.
- Visit the [Crusoe Cloud console](https://console.crusoecloud.com/)
- Click the "Security" tab in the left nav
- Select the "Inference API Key" tab in the top bar
- Click the "Create Inference API Key" button on the page
- Optionally provide an alias or an expiration date
- Click the "Create" button to view and save your API key
Querying Text models
After retrieving an API key from the Crusoe Cloud console, you can use the OpenAI SDK to make requests. The example below uses the meta-llama/Llama-3.3-70B-Instruct model.
import os

from openai import OpenAI

CRUSOE_API_KEY = os.getenv("CRUSOE_API_KEY")

client = OpenAI(
    api_key=CRUSOE_API_KEY,
    base_url="https://api.crusoe.ai/v1",
)

completion = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct",
    messages=[
        {"role": "system", "content": "You are a helpful, concise assistant."},
        {"role": "user", "content": "Who is Robinson Crusoe?"},
    ],
)

print(completion.choices[0].message.content)
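Because the endpoint follows the OpenAI chat completions API, the same client can also stream tokens as they are generated. The sketch below is a minimal example that assumes the deployment supports streamed responses; it reuses the client and model from the example above.

# Stream the reply token-by-token instead of waiting for the full completion.
# Assumes streaming is supported by the deployment; `client` is defined above.
stream = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct",
    messages=[{"role": "user", "content": "Summarize Robinson Crusoe in two sentences."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()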
Interacting with Video to Video models (Batch)
There are four main steps involved in submitting a request to the Decart Mirage - Batch model, namely:
- Upload the input video via the files API
- Submit the request to the queue API
- Monitor the status of the job via the queue API
- Download the output video via the files API
We break this down via the code snippets below, with equivalent examples in Python, TypeScript, and cURL.

Python
import requests
import time

# Step 1: Upload the video file.
upload_response = requests.post(
    "https://api-video.crusoe.ai/v1/files",
    headers={
        "Authorization": "Bearer <YOUR_API_KEY>",
        "Crusoe-Project-Id": "<PROJECT_ID>",
    },
    files={"file": open("/path/to/your/video.mp4", "rb")},
    data={"purpose": "video"},
)

# Step 2: Submit the inference request.
upload_data = upload_response.json()
file_id = upload_data["id"]

inference_response = requests.post(
    "https://api-video.crusoe.ai/v1/queue/decart/miragelsd-1-batch/enhanced",
    headers={
        "Authorization": "Bearer <YOUR_API_KEY>",
        "Crusoe-Project-Id": "<PROJECT_ID>",
        "Content-Type": "application/json",
    },
    json={
        "file_id": file_id,
        "prompt": "<YOUR_PROMPT_DESCRIBING_THE_DESIRED_VIDEO>",
    },
)

# Step 3: Poll for job completion.
inference_data = inference_response.json()
status_url = inference_data["status_url"]

is_complete = False
while not is_complete:
    status_response = requests.get(
        status_url,
        headers={
            "Authorization": "Bearer <YOUR_API_KEY>",
            "Crusoe-Project-Id": "<PROJECT_ID>",
        },
    )
    if status_response.status_code < 200 or status_response.status_code >= 300:
        is_complete = True
        break
    status_data = status_response.json()
    is_complete = status_data["status"] in [200, 202]
    if not is_complete:
        time.sleep(2)

# Step 4: Download the result.
result_file_id = status_data["result_file_id"]
download_response = requests.get(
    f"https://api-video.crusoe.ai/v1/files/{result_file_id}",
    headers={
        "Authorization": "Bearer <YOUR_API_KEY>",
        "Crusoe-Project-Id": "<PROJECT_ID>",
    },
    stream=True,
)
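The Python snippet above opens the download with stream=True but does not write the bytes anywhere; a minimal sketch that saves them to disk, mirroring the -o ./output.mp4 flag in the cURL example (the output filename is arbitrary):

# Write the streamed download to disk in chunks.
with open("output.mp4", "wb") as out_file:
    for chunk in download_response.iter_content(chunk_size=8192):
        if chunk:
            out_file.write(chunk)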
TypeScript

// Node 18+ provides fetch, FormData, and Blob globally; fs must be imported.
import fs from "node:fs";

// Step 1: Upload the video file.
const formData = new FormData();
formData.append("purpose", "video");
const videoBuffer = fs.readFileSync("/path/to/your/video.mp4");
formData.append("file", new Blob([videoBuffer], { type: "video/mp4" }));

const uploadResponse = await fetch("https://api-video.crusoe.ai/v1/files", {
  method: "POST",
  headers: {
    Authorization: "Bearer <YOUR_API_KEY>",
    "Crusoe-Project-Id": "<PROJECT_ID>",
  },
  body: formData,
});

// Step 2: Submit the inference request.
const uploadData = await uploadResponse.json();
const fileId = uploadData.id;

const inferenceResponse = await fetch(
  "https://api-video.crusoe.ai/v1/queue/decart/miragelsd-1-batch/enhanced",
  {
    method: "POST",
    headers: {
      Authorization: "Bearer <YOUR_API_KEY>",
      "Crusoe-Project-Id": "<PROJECT_ID>",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      file_id: fileId,
      prompt: "<YOUR_PROMPT_DESCRIBING_THE_DESIRED_VIDEO>",
    }),
  }
);

// Step 3: Poll for job completion.
const inferenceData = await inferenceResponse.json();
const statusUrl = inferenceData.status_url;

let statusData;
let isComplete = false;
while (!isComplete) {
  const statusResponse = await fetch(statusUrl, {
    method: "GET",
    headers: {
      Authorization: "Bearer <YOUR_API_KEY>",
      "Crusoe-Project-Id": "<PROJECT_ID>",
    },
  });
  if (!statusResponse.ok) {
    isComplete = true;
    break;
  }
  statusData = await statusResponse.json();
  isComplete = [200, 202].includes(statusData.status);
  if (!isComplete) {
    await new Promise((resolve) => setTimeout(resolve, 2000));
  }
}

// Step 4: Download the result.
const resultFileId = statusData.result_file_id;
const downloadResponse = await fetch(
  `https://api-video.crusoe.ai/v1/files/${resultFileId}`,
  {
    method: "GET",
    headers: {
      Authorization: "Bearer <YOUR_API_KEY>",
      "Crusoe-Project-Id": "<PROJECT_ID>",
    },
  }
);
cURL

# Step 1: Upload the video file.
curl -X POST https://api-video.crusoe.ai/v1/files \
  -H 'Authorization: Bearer <YOUR_API_KEY>' \
  -H 'Crusoe-Project-Id: <PROJECT_ID>' \
  -F "purpose=video" \
  -F "file=@/path/to/your/video.mp4"

# Step 2: Submit the inference request.
curl -X POST https://api-video.crusoe.ai/v1/queue/decart/miragelsd-1-batch/enhanced \
  -H 'Authorization: Bearer <YOUR_API_KEY>' \
  -H 'Crusoe-Project-Id: <PROJECT_ID>' \
  -H 'Content-Type: application/json' \
  -d '{
    "file_id": "<FILE_ID_FROM_STEP_1>",
    "prompt": "<YOUR_PROMPT_DESCRIBING_THE_DESIRED_VIDEO>"
  }'

# Step 3: Get the job status.
curl -X GET https://api-video.crusoe.ai/v1/queue/decart/miragelsd-1-batch/requests/<REQUEST_ID_FROM_STEP_2> \
  -H 'Authorization: Bearer <YOUR_API_KEY>' \
  -H 'Crusoe-Project-Id: <PROJECT_ID>'

# Step 4: Download the result.
curl -X GET https://api-video.crusoe.ai/v1/files/<FILE_ID_FROM_STEP_3> \
  -H 'Authorization: Bearer <YOUR_API_KEY>' \
  -H 'Crusoe-Project-Id: <PROJECT_ID>' \
  -o ./output.mp4