HTTP signaling is in private preview. Contact us to request access.
Your platform proxies stateless HTTP control endpoints for WebRTC signaling and session control. Media (video and audio) flows directly between the end user and Decart over WebRTC. A long-lived Server-Sent Events (SSE) stream carries server-to-client events. A single POST creates the session, exchanges the SDP offer/answer, and optionally sets the initial prompt and reference image — all in one round-trip.

When to use this path

  • Your platform is HTTP-native (REST API gateway, serverless functions, containers)
  • You want white-label endpoints without managing stateful WebSocket connections
  • You want to use standard API gateway tooling (rate limiting, auth, logging)
  • You prefer stateless infrastructure that scales horizontally
This is the recommended path for most API platforms. If you can proxy HTTP, you can integrate Decart realtime.

Characteristics

| Property | Value |
|---|---|
| White-label | Yes — end users only see your HTTP URLs |
| Frame access | No — media bypasses your proxy |
| Provider visibility | Full — you see every HTTP request (prompts, images, session events) |
| Client requirements | Browser or native app with WebRTC (no Decart SDK needed) |
| Your infrastructure | Stateless HTTP proxy + one SSE stream per active session |
Media quality is identical to using Decart directly. HTTP signaling adds marginal latency to infrequent control operations — not to the video stream.

Architecture

How it works

1. Create session and exchange SDP

A single POST creates the realtime session, exchanges the SDP offer/answer, and returns ICE server configuration. You can optionally include an initial prompt and reference image to pre-configure the generation before the first frame.

Your backend sends the model selection and the client’s SDP offer. Decart returns a 201 Created with the SDP answer, a Location header pointing to the session resource, and an ETag for ICE session tracking.
curl -X POST https://api.decart.ai/v1/realtime/sessions \
  -H "x-api-key: $DECART_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "lucy_2_rt",
    "sdp": {
      "type": "offer",
      "sdp": "v=0\r\no=- 123 2 IN IP4 127.0.0.1\r\ns=-\r\nt=0 0\r\na=group:BUNDLE 0 1\r\n..."
    },
    "prompt": "Anime style portrait",
    "enhance_prompt": true
  }'
The sdp object matches the browser’s RTCSessionDescription format. You can pass pc.localDescription directly and apply the response with pc.setRemoteDescription(response.sdp).
The optional fields let you front-load session configuration in the same round-trip:
| Field | Type | Description |
|---|---|---|
| `model` | string, required | Model identifier (e.g., `lucy_2_rt`). |
| `sdp` | object, required | Client’s SDP offer — `{ "type": "offer", "sdp": "..." }`. |
| `prompt` | string | Set the initial prompt before generation starts. |
| `enhance_prompt` | boolean | Enhance the prompt automatically (default: `true`). |
| `image_data` | string | Base64-encoded reference image (max 10 MB). |
When you include prompt or image_data, the server validates and moderates the content before creating the session. If moderation rejects it, the request fails with 422 — no session is created and no resources are allocated.
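If you want to branch on moderation rejections at session creation, a small type guard over the error body can help. A minimal sketch, assuming the structured error shape shown later in this guide; the `type` URI is used as the discriminator since `title` and `detail` are human-readable and may change:

```typescript
// Sketch: classify a failed create-session response by its error body.
// Assumes the problem-details-style shape documented in this guide.
interface ApiError {
  type: string;
  title: string;
  detail: string;
  status: number;
}

function isModerationRejection(body: unknown): body is ApiError {
  return (
    typeof body === "object" &&
    body !== null &&
    (body as ApiError).status === 422 &&
    (body as ApiError).type ===
      "https://api.decart.ai/errors/moderation-rejected"
  );
}
```

On a moderation 422 at creation, surface `detail` to the end user and let them retry with different content; since no session was created, there is nothing to clean up.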
The response includes everything needed to complete the WebRTC connection:
| Field | Type | Description |
|---|---|---|
| `session_id` | string, required | Unique session identifier. |
| `sdp` | object, required | SDP answer — pass directly to `pc.setRemoteDescription(response.sdp)`. |
| `ice_servers` | array, required | ICE server configuration — pass to `new RTCPeerConnection({ iceServers: response.ice_servers })`. |
| `events` | object, required | SSE connection details for the server event stream. |
| `expires_at` | string, required | ISO 8601 session expiration timestamp. |
2. Trickle ICE candidates and open SSE

Right after session creation, forward ICE candidates from the client using PATCH with the If-Match header. The client opens the SSE stream using the event_token from the session response.

The candidate format matches RTCIceCandidate.toJSON() — forward the browser’s candidates directly. Signal end-of-candidates by sending a null candidate.
curl -X PATCH https://api.decart.ai/v1/realtime/sessions/rs_abc123 \
  -H "x-api-key: $DECART_API_KEY" \
  -H "Content-Type: application/json" \
  -H 'If-Match: "ice-1a2b3c"' \
  -d '{
    "candidates": [
      {
        "candidate": "candidate:1 1 UDP 2130706431 192.168.1.5 54321 typ host",
        "sdpMLineIndex": 0,
        "sdpMid": "0"
      }
    ]
  }'
Server-side ICE candidates are delivered via the SSE event stream. The server sends null as the last candidate to signal end-of-candidates from its side.
ETag is required on every PATCH. If you omit If-Match, the server responds with 428 Precondition Required. If the ETag doesn’t match (another ICE operation completed first), you get 412 Precondition Failed — call GET /v1/realtime/sessions/{id} to fetch the current ETag and retry. The server queues concurrent ICE candidates correctly, so rapid-fire PATCHes work as long as each carries a valid ETag.
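The fetch-the-ETag-and-retry flow can be wrapped in a small helper. A minimal sketch, with `fetchFn` injected for testability (pass the global `fetch` in production); it assumes the GET endpoint returns the current ETag header, as described above:

```typescript
// Sketch: PATCH ICE candidates with If-Match, refetching the ETag on 412.
async function patchWithEtagRetry(
  base: string,
  sessionId: string,
  body: unknown,
  etag: string,
  fetchFn: typeof fetch = fetch,
  maxRetries = 2,
): Promise<{ response: Response; etag: string }> {
  let currentEtag = etag;
  for (let attempt = 0; ; attempt++) {
    const response = await fetchFn(`${base}/sessions/${sessionId}`, {
      method: "PATCH",
      headers: { "Content-Type": "application/json", "If-Match": currentEtag },
      body: JSON.stringify(body),
    });
    if (response.status !== 412 || attempt >= maxRetries) {
      return { response, etag: response.headers.get("ETag") ?? currentEtag };
    }
    // 412: another ICE operation completed first; fetch the current ETag and retry
    const state = await fetchFn(`${base}/sessions/${sessionId}`);
    currentEtag = state.headers.get("ETag") ?? currentEtag;
  }
}
```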
3. Handle server events (SSE)

The client opens an SSE connection using the event_token from the session creation response. Handle generation lifecycle, server-side ICE candidates, moderation alerts, and errors.
const events = new EventSource(
  `https://api.yourplatform.com/v1/realtime/sessions/${sessionId}/events?event_token=${encodeURIComponent(eventToken)}`
);

events.addEventListener("generation_started", () => {
  showLiveIndicator();
});

events.addEventListener("generation_tick", (e) => {
  const { seconds } = JSON.parse(e.data);
  updateUsageDisplay(seconds);
});

events.addEventListener("ice-candidate", (e) => {
  const data = JSON.parse(e.data);
  if (data.candidate === null) {
    // Server signals end-of-candidates
    peerConnection.addIceCandidate(null);
  } else {
    const { candidate, sdpMLineIndex, sdpMid } = data;
    peerConnection.addIceCandidate({ candidate, sdpMLineIndex, sdpMid });
  }
});

events.addEventListener("ice-restart", async (e) => {
  const { ice_servers } = JSON.parse(e.data);
  // Server requests ICE restart — reconfigure with new TURN credentials
  peerConnection.setConfiguration({ iceServers: ice_servers });
  const offer = await peerConnection.createOffer({ iceRestart: true });
  await peerConnection.setLocalDescription(offer);
  // Send PATCH with If-Match: "*" and the new SDP offer
});


events.addEventListener("generation_ended", (e) => {
  const { seconds, reason } = JSON.parse(e.data);
  finalizeSession(seconds, reason);
  // SSE stream closes after this event
});

events.addEventListener("error", (e) => {
  // Connection-level errors also fire here but carry no payload;
  // only parse named server `error` events, which arrive with data.
  if (!e.data) return;
  const error = JSON.parse(e.data);
  handleError(error.title, error.detail);
});
The SSE stream is your only channel for server-initiated messages. Keep it open for the entire session. When the session ends, the server sends generation_ended followed by a final empty data: field, then closes the stream. If the connection drops mid-session, reconnect — EventSource handles this automatically with the Last-Event-ID header. If you reconnect after the session has already ended, the server responds with 410 Gone.
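If you consume this stream outside the browser (for example, a backend observer on a Node version without a built-in `EventSource`), you have to parse the SSE wire format yourself. A minimal sketch of a parser for complete chunks; a production version would also buffer partial lines across network reads:

```typescript
// Sketch: minimal parser for the SSE wire format. Handles `event:`, `data:`,
// and `id:` fields; a blank line dispatches the accumulated event.
// Multi-line `data:` fields are joined with newlines.
interface SseEvent {
  event: string;
  data: string;
  id?: string;
}

function parseSse(chunk: string): SseEvent[] {
  const events: SseEvent[] = [];
  let event = "message";
  let data: string[] = [];
  let id: string | undefined;
  for (const line of chunk.split(/\r?\n/)) {
    if (line === "") {
      if (data.length > 0) events.push({ event, data: data.join("\n"), id });
      event = "message";
      data = [];
      id = undefined;
    } else if (line.startsWith("event:")) {
      event = line.slice(6).trim();
    } else if (line.startsWith("data:")) {
      data.push(line.slice(5).trimStart());
    } else if (line.startsWith("id:")) {
      id = line.slice(3).trim();
    }
  }
  return events;
}
```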
4. Send prompts and images

Control the generation with stateless HTTP calls — no persistent connection needed.
curl -X POST https://api.decart.ai/v1/realtime/sessions/rs_abc123/prompt \
  -H "x-api-key: $DECART_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Anime style", "enhance_prompt": true}'
curl -X POST https://api.decart.ai/v1/realtime/sessions/rs_abc123/image \
  -H "x-api-key: $DECART_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "image_data": "<base64-encoded image>",
    "prompt": "Transform into this character",
    "enhance_prompt": true
  }'
The prompt response echoes back the original prompt text. A 200 status code confirms the content was accepted — no additional success field is needed. Both endpoints moderate input content synchronously — if the content is rejected, the API returns 422 Unprocessable Entity:
{
  "type": "https://api.decart.ai/errors/moderation-rejected",
  "title": "Input rejected by moderation",
  "detail": "Content violates our Terms of Service",
  "status": 422
}
Reference images must be under 10 MB (base64-encoded). Requests with oversized images are rejected with 413 Content Too Large.
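You can enforce the size limit client-side and fail fast before uploading. A small sketch; it assumes the 10 MB limit applies to the base64-encoded payload, with a `decodedBytes` helper in case the limit turns out to apply to the decoded image instead:

```typescript
// Sketch: reject oversized reference images before the request leaves
// the client, avoiding a guaranteed 413 round-trip.
const MAX_IMAGE_BYTES = 10 * 1024 * 1024;

function decodedBytes(b64: string): number {
  // Every 4 base64 chars encode 3 bytes; '=' padding shrinks the last group.
  const padding = b64.endsWith("==") ? 2 : b64.endsWith("=") ? 1 : 0;
  return (b64.length / 4) * 3 - padding;
}

function checkImageSize(b64: string): void {
  if (b64.length > MAX_IMAGE_BYTES) {
    throw new Error(
      `Image is ${b64.length} bytes base64-encoded; limit is ${MAX_IMAGE_BYTES}`,
    );
  }
}
```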
5. Handle ICE restart (if needed)

If the network path degrades, the server sends an ice-restart SSE event with updated TURN credentials. Respond by creating a new SDP offer (with the provided ICE servers) and sending it as a PATCH with If-Match: "*" (wildcard).
curl -X PATCH https://api.decart.ai/v1/realtime/sessions/rs_abc123 \
  -H "x-api-key: $DECART_API_KEY" \
  -H "Content-Type: application/json" \
  -H 'If-Match: "*"' \
  -d '{
    "sdp": {
      "type": "offer",
      "sdp": "v=0\r\no=- 123 3 IN IP4 127.0.0.1\r\ns=-\r\nt=0 0\r\n..."
    }
  }'
The response includes a new ETag — use it for any subsequent ICE operations.
ICE restarts are server-initiated. The server makes one restart attempt with TURN credentials — if it fails, the session ends with a generation_ended event (reason: "ice_failure").
6. End the session

Explicitly close the session when the user disconnects. The response includes a session summary with billing information.
curl -X DELETE https://api.decart.ai/v1/realtime/sessions/rs_abc123 \
  -H "x-api-key: $DECART_API_KEY"
Sessions also end automatically when:
  • The WebRTC connection fails and ICE restart does not recover it
  • A moderation violation terminates the stream
  • The session reaches the maximum duration
  • No media activity is detected (inactivity timeout)
In all cases, a generation_ended SSE event is sent before the stream closes.

API reference

Endpoints

| Method | Path | Description |
|---|---|---|
| POST | `/v1/realtime/sessions` | Create session + exchange SDP offer/answer |
| PATCH | `/v1/realtime/sessions/{id}` | Trickle ICE candidates or trigger ICE restart |
| GET | `/v1/realtime/sessions/{id}` | Read session state, current ETag, and refresh `event_token` |
| GET | `/v1/realtime/sessions/{id}/events` | SSE stream for server events |
| POST | `/v1/realtime/sessions/{id}/prompt` | Set or update the model prompt |
| POST | `/v1/realtime/sessions/{id}/image` | Set reference image (with optional prompt) |
| DELETE | `/v1/realtime/sessions/{id}` | End the session and return billing summary |

Session state

Use GET /v1/realtime/sessions/{id} to recover state after reconnects, to fetch the current ETag before retrying a 412 ICE operation, or to obtain a fresh event_token if the SSE connection dropped.
curl https://api.decart.ai/v1/realtime/sessions/rs_abc123 \
  -H "x-api-key: $DECART_API_KEY"
Each GET returns a fresh event_token that expires at the same time as the session. This avoids the need for a separate token refresh endpoint.
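Recovering a dropped SSE connection then looks like the following. A hedged sketch: it assumes the GET body exposes the fresh token at `events.event_token`, mirroring the session creation response, and injects `fetchFn` and `openStream` for testability (in the browser, `openStream` would construct an `EventSource`):

```typescript
// Sketch: fetch a fresh event_token via GET /sessions/{id}, then reopen
// the SSE stream through the proxy with that token.
async function reopenEvents(
  base: string,
  sessionId: string,
  openStream: (url: string) => void,
  fetchFn: typeof fetch = fetch,
): Promise<void> {
  const res = await fetchFn(`${base}/sessions/${sessionId}`);
  if (!res.ok) throw new Error(`Session unavailable (${res.status})`);
  const state = await res.json();
  const token = encodeURIComponent(state.events.event_token);
  openStream(`${base}/sessions/${sessionId}/events?event_token=${token}`);
}
```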

Authentication

All control endpoints require an x-api-key header with a valid Decart API key. The SSE endpoint uses a short-lived event_token returned by POST /v1/realtime/sessions (and refreshable via GET /v1/realtime/sessions/{id}):
GET /v1/realtime/sessions/{id}/events?event_token=EVT_TOKEN
The event_token is scoped to one session and expires when the session does. This avoids placing your long-lived API key in browser-visible URLs while keeping x-api-key as the upstream authentication method for all other endpoints. The token is ephemeral, read-only (SSE is server-to-client), and safe to pass to the client.

SSE event types

| Event | Data | Description |
|---|---|---|
| `ice-candidate` | `{"candidate": "...", "sdpMLineIndex": 0, "sdpMid": "0"}` | Server-side ICE candidate — forward to `RTCPeerConnection`. A `null` candidate signals end-of-candidates. |
| `ice-restart` | `{"ice_servers": [{"urls": "turn:...", "username": "...", "credential": "..."}]}` | Server requests ICE restart — reconfigure and send new offer via PATCH |
| `generation_started` | `{}` | Model is producing frames — media will start flowing |
| `generation_tick` | `{"seconds": 30}` | Total elapsed generation time in seconds (cumulative, not a delta). Use for usage tracking. |
| `generation_ended` | `{"seconds": 120, "reason": "disconnect"}` | Session complete. SSE stream closes after this event. |
| `error` | `{"type": "...", "title": "...", "detail": "...", "status": 500}` | Server error |
The generation_ended reason field indicates why the session ended:
| Reason | Description |
|---|---|
| `disconnect` | Client disconnected |
| `timeout` | Session reached the maximum duration |
| `moderation_violation` | Content policy violation terminated the stream |
| `error` | Server error |
| `insufficient_credits` | Account has insufficient credits |
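For display purposes, these reason codes map naturally to user-facing messages. A sketch; the message strings are illustrative, only the codes come from the table above:

```typescript
// Sketch: translate generation_ended reason codes into user-facing copy.
// Unknown codes fall back to a generic message so new reasons don't break the UI.
const END_REASON_MESSAGES: Record<string, string> = {
  disconnect: "You disconnected from the session.",
  timeout: "The session reached its maximum duration.",
  moderation_violation: "The stream was stopped for a content policy violation.",
  error: "A server error ended the session.",
  insufficient_credits: "The session ended because the account ran out of credits.",
};

function endReasonMessage(reason: string): string {
  return END_REASON_MESSAGES[reason] ?? `Session ended (${reason}).`;
}
```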

Error responses

All errors return a structured JSON body:
{
  "type": "https://api.decart.ai/errors/invalid-sdp",
  "title": "Invalid SDP offer",
  "detail": "Missing ice-ufrag attribute in the SDP offer",
  "status": 400
}
| Status | When |
|---|---|
| 400 | Malformed request body or invalid SDP |
| 401 | Missing or invalid API key |
| 404 | Session not found or expired |
| 410 | Session ended — SSE reconnect after termination |
| 412 | `If-Match` ETag doesn’t match current ICE session state |
| 413 | Image exceeds the 10 MB size limit |
| 422 | Valid JSON but semantically invalid (unsupported model, content rejected by moderation) |
| 428 | PATCH sent without required `If-Match` header |
| 503 | Service temporarily unavailable — check the `Retry-After` header |
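For 503 responses, honor `Retry-After` before retrying. A small sketch of a header parser; per RFC 9110 the value is either delta-seconds or an HTTP-date:

```typescript
// Sketch: compute a backoff delay in milliseconds from a Retry-After header.
// Falls back to a default when the header is missing or unparseable.
function retryAfterMs(
  header: string | null,
  defaultMs = 1000,
  now = Date.now(),
): number {
  if (header === null) return defaultMs;
  const trimmed = header.trim();
  if (/^\d+$/.test(trimmed)) return Number(trimmed) * 1000; // delta-seconds
  const date = Date.parse(trimmed); // HTTP-date
  if (!Number.isNaN(date)) return Math.max(0, date - now);
  return defaultMs;
}
```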

Provider proxy example

Your control endpoints stay stateless — every request is an independent pass-through, including SSE. The client provides the event_token (received during session creation) to connect to the SSE stream through your proxy.
// Next.js / Express — all routes are stateless pass-through
app.all("/v1/realtime/*", async (req, res) => {
  const decartUrl = `https://api.decart.ai${req.path}${req.url.includes("?") ? req.url.substring(req.url.indexOf("?")) : ""}`;

  // Build headers — inject API key, forward If-Match for ICE operations
  const headers: Record<string, string> = {
    "x-api-key": process.env.DECART_API_KEY!,
  };
  if (req.headers["content-type"]) {
    headers["Content-Type"] = req.headers["content-type"] as string;
  }
  if (req.headers["if-match"]) {
    headers["If-Match"] = req.headers["if-match"] as string;
  }
  const response = await fetch(decartUrl, {
    method: req.method,
    headers,
    body: ["GET", "DELETE"].includes(req.method)
      ? undefined
      : JSON.stringify(req.body),
  });
  // Forward response headers
  for (const header of ["location", "etag"]) {
    const value = response.headers.get(header);
    if (value) res.setHeader(header, value);
  }

  // SSE — pipe the stream directly to the client
  if (req.path.endsWith("/events")) {
    res.status(response.status);
    res.setHeader("Content-Type", "text/event-stream");
    res.setHeader("Cache-Control", "no-cache");
    res.setHeader("Connection", "keep-alive");
    response.body?.pipeTo(
      new WritableStream({
        write(chunk) { res.write(chunk); },
        close() { res.end(); },
      })
    );
    return;
  }
  if (response.status === 204) {
    return res.status(204).end();
  }
  const data = await response.json();
  res.status(response.status).json(data);
});
Your proxy must forward the ETag and Location headers from Decart’s response, and pass through If-Match from client requests. The event_token is ephemeral, session-scoped, and read-only — safe to pass to the client. You can still add your own authentication, rate limiting, and logging on top.

Client implementation

Your client talks to your HTTP proxy for signaling and handles WebRTC with standard browser APIs.
const pc = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
});

pc.ontrack = (event) => {
  document.getElementById("remote-video").srcObject = event.streams[0];
};

const stream = await navigator.mediaDevices.getUserMedia({
  video: { width: 1280, height: 720, frameRate: 20 },
  audio: true,
});
stream.getTracks().forEach((track) => pc.addTrack(track, stream));

const offer = await pc.createOffer();
await pc.setLocalDescription(offer);

// Create session — SDP exchange + ICE servers in one round-trip
const BASE = "https://api.yourplatform.com/v1/realtime";
const createRes = await fetch(`${BASE}/sessions`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "lucy_2_rt",
    sdp: { type: "offer", sdp: offer.sdp },
    prompt: "Anime style",
    enhance_prompt: true,
  }),
});

const session = await createRes.json();
const sessionId = session.session_id;
let etag = createRes.headers.get("ETag");

await pc.setRemoteDescription(session.sdp);
pc.setConfiguration({ iceServers: session.ice_servers });

// Queue candidates and flush serially to maintain ETag ordering
const candidateQueue = [];
let flushing = false;

async function flushCandidates() {
  if (flushing || candidateQueue.length === 0) return;
  flushing = true;
  const batch = candidateQueue.splice(0);
  const res = await fetch(`${BASE}/sessions/${sessionId}`, {
    method: "PATCH",
    headers: { "Content-Type": "application/json", "If-Match": etag },
    body: JSON.stringify({ candidates: batch }),
  });
  etag = res.headers.get("ETag") || etag;
  flushing = false;
  if (candidateQueue.length > 0) flushCandidates();
}

pc.onicecandidate = ({ candidate }) => {
  candidateQueue.push(candidate ? candidate.toJSON() : null);
  flushCandidates();
};

// SSE for server-initiated events
const sse = new EventSource(
  `${BASE}/sessions/${sessionId}/events?event_token=${encodeURIComponent(session.events.event_token)}`
);

sse.addEventListener("ice-candidate", async (e) => {
  const data = JSON.parse(e.data);
  await pc.addIceCandidate(data.candidate === null ? null : data);
});

sse.addEventListener("ice-restart", async (e) => {
  const { ice_servers } = JSON.parse(e.data);
  pc.setConfiguration({ iceServers: ice_servers });
  const restartOffer = await pc.createOffer({ iceRestart: true });
  await pc.setLocalDescription(restartOffer);
  // If-Match: "*" bypasses ETag sequencing during restarts
  const res = await fetch(`${BASE}/sessions/${sessionId}`, {
    method: "PATCH",
    headers: { "Content-Type": "application/json", "If-Match": "*" },
    body: JSON.stringify({ sdp: { type: "offer", sdp: restartOffer.sdp } }),
  });
  etag = res.headers.get("ETag") || etag;
  await pc.setRemoteDescription((await res.json()).sdp);
});

sse.addEventListener("generation_started", () => {
  document.getElementById("status").textContent = "● Live";
});

sse.addEventListener("generation_tick", (e) => {
  document.getElementById("usage").textContent = `${JSON.parse(e.data).seconds}s`;
});

sse.addEventListener("generation_ended", () => {
  document.getElementById("status").textContent = "Ended";
  sse.close();
});

// Session controls
window.setPrompt = (text) =>
  fetch(`${BASE}/sessions/${sessionId}/prompt`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt: text }),
  });

window.disconnect = async () => {
  await fetch(`${BASE}/sessions/${sessionId}`, { method: "DELETE" });
  stream.getTracks().forEach((t) => t.stop());
  sse.close();
  pc.close();
};
The queue-and-flush pattern batches ICE candidates into fewer PATCH requests while maintaining ETag ordering. In production, retry 412 responses by fetching the current ETag with GET /sessions/{id}.

Next steps