We’ve updated video generation to use an asynchronous job-based Queue API. This change improves reliability, allows for better timeout handling, and provides visibility into job progress. The synchronous /v1/generate/ endpoints for video are now deprecated in favor of the new /v1/jobs/ endpoints.
```typescript
import { createDecartClient, models } from "@decartai/sdk";
import { writeFileSync } from "fs";

const client = createDecartClient({ apiKey: "your-api-key" });

// Synchronous - blocks until video is ready
const video = await client.generate({
  model: models.video("lucy-pro-t2v"),
  prompt: "A serene lake at sunset",
});

writeFileSync("output.mp4", Buffer.from(await video.arrayBuffer()));
```
```python
import asyncio
from decart import DecartClient, models

async def generate():
    async with DecartClient(api_key="your-api-key") as client:
        # Synchronous - blocks until video is ready
        video = await client.generate({
            "model": models.video("lucy-pro-t2v"),
            "prompt": "A serene lake at sunset",
        })
        with open("output.mp4", "wb") as f:
            f.write(video)

asyncio.run(generate())
```
```python
import asyncio
from decart import DecartClient, models

async def generate():
    async with DecartClient(api_key="your-api-key") as client:
        # Asynchronous - submit and poll for completion
        result = await client.queue.submit_and_poll({
            "model": models.video("lucy-pro-t2v"),
            "prompt": "A serene lake at sunset",
            "on_status_change": lambda job: print(f"Status: {job.status}"),
        })
        if result.status == "completed":
            with open("output.mp4", "wb") as f:
                f.write(result.data)

asyncio.run(generate())
```
Manual polling (advanced)
For custom polling logic, you can manually control the submit/poll/result flow:
```python
import asyncio
from decart import DecartClient, models

async def generate():
    async with DecartClient(api_key="your-api-key") as client:
        # Submit job
        job = await client.queue.submit({
            "model": models.video("lucy-pro-t2v"),
            "prompt": "A serene lake at sunset",
        })
        print(f"Job ID: {job.job_id}")

        # Poll for status
        while True:
            status = await client.queue.status(job.job_id)
            print(f"Status: {status.status}")
            if status.status == "completed":
                result = await client.queue.result(job.job_id)
                with open("output.mp4", "wb") as f:
                    f.write(result.data)
                break
            elif status.status == "failed":
                print("Job failed")
                break
            await asyncio.sleep(2)

asyncio.run(generate())
```
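The fixed two-second sleep above works, but for long generations you may want a deadline and exponential backoff. A minimal sketch of a generic poller — the helper name, its defaults, and the idea of wrapping `client.queue.status` in a callable are my own, not part of the SDK:

```python
import asyncio
import time

async def poll_until_done(check_status, timeout=600.0,
                          initial_delay=1.0, max_delay=10.0):
    """Poll an async status callable until it returns a terminal status.

    check_status: async callable returning a status string, e.g. a small
    wrapper that calls client.queue.status(job.job_id) and returns
    status.status. The delay doubles after each poll up to max_delay;
    a TimeoutError is raised once the deadline would be exceeded.
    """
    deadline = time.monotonic() + timeout
    delay = initial_delay
    while True:
        status = await check_status()
        if status in ("completed", "failed"):
            return status
        if time.monotonic() + delay > deadline:
            raise TimeoutError("job did not finish before the deadline")
        await asyncio.sleep(delay)
        delay = min(delay * 2, max_delay)
```

Once the poller returns `"completed"`, fetch the output with `client.queue.result(job.job_id)` exactly as in the loop above.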
```shell
# Single request - blocks until complete (could timeout on long generations)
curl -X POST https://api.decart.ai/v1/generate/lucy-pro-t2v \
  -H "X-API-KEY: $DECART_API_KEY" \
  -F "prompt=A serene lake at sunset" \
  --output output.mp4
```
The Queue API provides more granular error information through job status:
```typescript
const result = await client.queue.submitAndPoll({
  model: models.video("lucy-pro-t2v"),
  prompt: "A serene lake at sunset",
});

if (result.status === "failed") {
  console.error(`Generation failed: ${result.error}`);
}
```
```typescript
const result = await client.queue.submitAndPoll({
  model: models.video("lucy-pro-t2v"),
  prompt: "A serene lake at sunset",
  onStatusChange: (job) => {
    // Called when status changes: pending -> processing -> completed
    updateProgressUI(job.status);
  },
});
```
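Because the status callback fires on each transition, it can also be used to time the phases of a job. A small sketch, shown in Python for the equivalent `on_status_change` option — the `JobTimer` class is my own, not part of the SDK:

```python
import time

class JobTimer:
    """Records when each job status is first observed, so you can separate
    queue wait (pending -> processing) from generation time
    (processing -> completed)."""

    def __init__(self):
        self.first_seen = {}

    def on_status_change(self, job):
        # Accept either a job object with a .status attribute or a raw string.
        status = getattr(job, "status", job)
        self.first_seen.setdefault(status, time.monotonic())

    def phase_seconds(self, start, end):
        """Seconds between first seeing `start` and first seeing `end`,
        or None if either status was never observed."""
        if start in self.first_seen and end in self.first_seen:
            return self.first_seen[end] - self.first_seen[start]
        return None
```

Pass `timer.on_status_change` as the `on_status_change` callback in `submit_and_poll`, then read `timer.phase_seconds("pending", "processing")` after the job finishes.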
Are the old /v1/generate/ endpoints still available?
The old synchronous endpoints are deprecated for video generation; image generation still uses /v1/generate/.
Do I need to update my SDK version?
Yes, make sure you’re using the latest version of the SDK:
JavaScript: npm install @decartai/sdk@latest
Python: pip install decart --upgrade
What about image generation?
Image generation continues to use the synchronous Process API (client.process in SDKs, /v1/generate/ for REST). Only video generation has moved to the Queue API.