Your platform runs a WebSocket proxy that sits between the end user and Decart for the control plane (signaling, prompts, session events). Media (video and audio) flows directly between the end user and Decart over WebRTC — your proxy never touches it. This gives you a white-labeled experience with full media quality and visibility into all control messages.

When to use this path

  • Your platform already runs WebSocket infrastructure
  • You want full visibility into the control plane (prompts, images, session events)
  • You want white-labeled endpoints — end users never see Decart
If your platform is HTTP-native and you’d prefer not to run stateful WebSocket connections, see HTTP Signaling — same quality, same white-label, but with a stateless HTTP proxy instead.

Characteristics

Property                Value
White-label             Yes — end users only see your WS URL
Frame access            No — media bypasses your proxy
Provider visibility     Full — you see every signaling message, prompt, and image
Client requirements     Browser or native app with WebRTC (no Decart SDK needed)
Your infrastructure     WebSocket proxy server
Media quality is identical to using Decart directly. Your WS proxy only handles signaling — video and audio flow peer-to-peer.

Reference implementation

ws-signaling-proxy

A complete working proxy in ~200 lines of TypeScript — message buffering, close-code sanitization, structured logging, and an e2e test.

Architecture

How it works

1. End user connects to your WebSocket endpoint

The end user connects to your platform’s WS URL — they never see a Decart endpoint.
wss://api.yourplatform.com/v1/realtime?model=lucy_2_rt&token=user_token
2. Your proxy opens a connection to Decart

Authenticate with Decart using your platform API key and forward the model selection.
import websockets

upstream = await websockets.connect(
    f"wss://api3.decart.ai/v1/stream?api_key={DECART_API_KEY}&model={model}"
)
Your proxy connects to Decart with your API key.
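Opening the upstream socket takes a network round trip, and an eager client may send its SDP offer before that completes — which is why the reference implementation includes message buffering. A minimal sketch of such a buffer (the class name and queue size are our own):

```python
import asyncio

class MessageBuffer:
    """Holds client messages that arrive before the upstream socket is ready."""

    def __init__(self, maxsize: int = 64):
        self._queue: asyncio.Queue = asyncio.Queue(maxsize=maxsize)
        self._ready = asyncio.Event()

    def mark_ready(self) -> None:
        """Call once the upstream connection to Decart is open."""
        self._ready.set()

    async def put(self, raw_msg: str) -> None:
        """Queue a client message received before (or while) connecting upstream."""
        await self._queue.put(raw_msg)

    async def drain(self, send) -> None:
        """Wait until upstream is ready, then flush queued messages in order."""
        await self._ready.wait()
        while not self._queue.empty():
            await send(self._queue.get_nowait())
```

Without this, the first offer or ice-candidate message can be dropped on slow upstream connects.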
3. Forward messages bidirectionally

Pump JSON messages between the client and Decart. Every message has a type field. You can log, validate, or modify messages as they pass through — see the message reference for the full protocol.
import asyncio
import json

async def client_to_upstream(client_ws, upstream_ws):
    async for raw_msg in client_ws:
        msg = json.loads(raw_msg)

        if msg["type"] == "prompt":
            log_prompt(msg["prompt"])
            # Optional: validate or modify before forwarding

        elif msg["type"] == "set_image":
            log_image_upload()
            # Optional: validate image size / content

        await upstream_ws.send(raw_msg)

async def upstream_to_client(upstream_ws, client_ws):
    async for raw_msg in upstream_ws:
        msg = json.loads(raw_msg)

        if msg["type"] == "session_id":
            log_session(msg["session_id"])

        elif msg["type"] == "prompt_ack" and not msg["success"]:
            log_moderation_rejection(msg["error"])

        elif msg["type"] == "generation_ended":
            log_session_end(msg["seconds"], msg["reason"])

        await client_ws.send(raw_msg)

await asyncio.gather(
    client_to_upstream(client_ws, upstream),
    upstream_to_client(upstream, client_ws),
)
4. WebRTC connects directly

When the end user’s browser receives the SDP answer and ICE candidates through your proxy, it establishes a direct WebRTC connection to Decart via the IP:port in the ICE candidates. Your proxy is not involved in media.
5. Handle disconnects gracefully

When either side disconnects, close the other connection. Your proxy should handle both directions:
async def proxy_session(client_ws, model):
    upstream = await websockets.connect(
        f"wss://api3.decart.ai/v1/stream?api_key={DECART_API_KEY}&model={model}"
    )
    try:
        await asyncio.gather(
            client_to_upstream(client_ws, upstream),
            upstream_to_client(upstream, client_ws),
        )
    except websockets.ConnectionClosed:
        pass
    finally:
        await upstream.close()
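The reference implementation also sanitizes close codes, so that an upstream close doesn't leak internal details to the end user. A sketch of that idea — which codes count as safe to pass through is our assumption, not something the protocol specifies:

```python
# Assumption: pass through only normal closure codes; map everything else
# to 1011 (internal error) so upstream reasons never reach the end user.
SAFE_CLOSE_CODES = {1000, 1001}

def sanitize_close(code: int, reason: str) -> tuple[int, str]:
    """Map an upstream close code/reason to one safe to show the client."""
    if code in SAFE_CLOSE_CODES:
        return code, reason
    return 1011, "upstream error"
```

Apply it when propagating a close, e.g. `await client_ws.close(*sanitize_close(upstream.close_code or 1011, upstream.close_reason or ""))`.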

Message reference

Your proxy forwards JSON messages between the client and Decart. Every message has a type field that identifies it.
Messages your proxy receives from the client and forwards to Decart.
{ "type": "offer", "sdp": "v=0\r\no=- 123 2 IN IP4 127.0.0.1\r\n..." }
The sdp field is a string containing the full SDP (same format as RTCSessionDescription.sdp). Forward as-is.
{
  "type": "ice-candidate",
  "candidate": {
    "candidate": "candidate:1 1 UDP 2130706431 192.168.1.5 54321 typ host",
    "sdpMLineIndex": 0,
    "sdpMid": "0",
    "usernameFragment": "ab12"
  }
}
Forward as-is. The candidate format matches RTCIceCandidate.toJSON(). usernameFragment is optional and may be present depending on the browser. Signal end-of-candidates by sending null:
{ "type": "ice-candidate", "candidate": null }
{ "type": "prompt", "prompt": "Anime style portrait", "enhance_prompt": true }
Field             Type      Description
prompt            string    The text prompt to apply
enhance_prompt    boolean   When true (default), the server enhances the prompt automatically before applying it. Set to false to use the exact prompt text.
You can validate, filter, or replace this message before forwarding.
{
  "type": "set_image",
  "image_data": "<base64-encoded image>",
  "prompt": "Transform into this character",
  "enhance_prompt": true
}
Field             Type            Description
image_data        string | null   Base64-encoded image (max 10 MB decoded). Send null to clear the current reference image.
prompt            string          Optional. Set a prompt alongside the image.
enhance_prompt    boolean         Optional. Enhance the prompt automatically (default: true).
You can validate or reject this message before forwarding.
Reference images must be under 10 MB (decoded). Oversized images are rejected by the server.

Moderation

Prompts and images are moderated server-side before being applied to the model. Your proxy sees the result as acknowledgment messages:
  • Prompt accepted: prompt_ack with success: true
  • Prompt rejected: prompt_ack with success: false and an error string
  • Image accepted: set_image_ack with success: true
  • Image rejected: set_image_ack with success: false and an error string
Rejected content is not applied — the model continues with the previous prompt or image. Your proxy can add its own moderation layer before forwarding messages to Decart.
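A proxy-side moderation layer can reject a prompt locally and never forward it. In this sketch we reply to the client with a message in the same shape as a server prompt_ack — that reuse is our convention, and the blocklist is a stand-in for your real policy:

```python
import json

BLOCKED_TERMS = {"example-banned-term"}  # assumption: your platform's policy list

async def forward_prompt(msg: dict, client_ws, upstream_ws) -> None:
    """Apply a proxy-side blocklist before Decart's own moderation runs."""
    text = msg.get("prompt", "").lower()
    if any(term in text for term in BLOCKED_TERMS):
        # Reject locally: answer the client in prompt_ack shape (our
        # convention) and do not forward the message upstream.
        await client_ws.send(json.dumps({
            "type": "prompt_ack",
            "success": False,
            "error": "prompt blocked by platform policy",
        }))
        return
    await upstream_ws.send(json.dumps(msg))
```

Because the reply mimics the server's ack, the client-side handling shown later needs no changes to surface the rejection.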

Usage tracking

Track usage by watching the generation lifecycle messages:
async def upstream_to_client(upstream_ws, client_ws, session):
    async for raw_msg in upstream_ws:
        msg = json.loads(raw_msg)

        if msg["type"] == "generation_started":
            session.mark_started()

        elif msg["type"] == "generation_tick":
            session.record_usage(seconds=msg["seconds"])

        elif msg["type"] == "generation_ended":
            session.finalize(
                total_seconds=msg["seconds"],
                reason=msg["reason"],
            )

        await client_ws.send(raw_msg)
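The `session` object above only needs the three methods the handler calls. A minimal sketch of such a tracker — the class name and the assumption that generation_tick carries cumulative seconds (so we overwrite rather than add) are ours:

```python
import time

class UsageSession:
    """Minimal usage tracker driven by the generation lifecycle messages."""

    def __init__(self, user_id: str):
        self.user_id = user_id
        self.started_at: float | None = None
        self.seconds = 0.0
        self.reason: str | None = None

    def mark_started(self) -> None:
        self.started_at = time.time()

    def record_usage(self, seconds: float) -> None:
        # Assumption: generation_tick reports cumulative seconds, so overwrite
        self.seconds = seconds

    def finalize(self, total_seconds: float, reason: str) -> None:
        # generation_ended carries the authoritative total for billing
        self.seconds = total_seconds
        self.reason = reason
```

In production, `finalize` is where you would write the billing record.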

Proxy example

A complete WebSocket proxy using Python and websockets. This example authenticates the client, connects upstream, pumps messages with logging, and handles disconnects:
import asyncio
import json
import websockets

DECART_API_KEY = "your-api-key"  # Load from environment in production

async def handle_client(client_ws):
    # 1. Authenticate the client and extract model from query params
    model = parse_model_from_query(client_ws.request)
    user = authenticate_user(client_ws.request)
    if not user:
        await client_ws.close(4001, "Unauthorized")
        return

    # 2. Open upstream connection to Decart
    upstream_url = f"wss://api3.decart.ai/v1/stream?api_key={DECART_API_KEY}&model={model}"
    async with websockets.connect(upstream_url) as upstream_ws:
        session_id = None

        async def client_to_upstream():
            async for raw_msg in client_ws:
                msg = json.loads(raw_msg)

                if msg["type"] == "prompt":
                    print(f"[{session_id}] prompt: {msg['prompt'][:80]}")

                elif msg["type"] == "set_image":
                    print(f"[{session_id}] set_image (has_prompt={bool(msg.get('prompt'))})")

                await upstream_ws.send(raw_msg)

        async def upstream_to_client():
            nonlocal session_id
            async for raw_msg in upstream_ws:
                msg = json.loads(raw_msg)

                if msg["type"] == "session_id":
                    session_id = msg["session_id"]
                    print(f"Session started: {session_id}")

                elif msg["type"] == "prompt_ack":
                    if not msg["success"]:
                        print(f"[{session_id}] prompt rejected: {msg['error']}")

                elif msg["type"] == "set_image_ack":
                    if not msg["success"]:
                        print(f"[{session_id}] image rejected: {msg['error']}")

                elif msg["type"] == "generation_started":
                    print(f"[{session_id}] generation started")

                elif msg["type"] == "generation_ended":
                    print(f"[{session_id}] ended: {msg['reason']} ({msg['seconds']}s)")
                    record_billing(user, session_id, msg["seconds"])

                elif msg["type"] == "error":
                    print(f"[{session_id}] error: {msg['error']}")

                await client_ws.send(raw_msg)

        # 3. Pump messages in both directions
        try:
            await asyncio.gather(
                client_to_upstream(),
                upstream_to_client(),
            )
        except websockets.ConnectionClosed:
            pass

    print(f"[{session_id}] connection closed")

async def main():
    async with websockets.serve(handle_client, "0.0.0.0", 8080):
        await asyncio.Future()  # Run forever

asyncio.run(main())
This proxy is stateless — it holds no session state between connections. Each WebSocket connection is an independent pass-through. You can scale horizontally by running multiple proxy instances behind a load balancer.

Client implementation

Your client connects to the proxy over WebSocket and handles WebRTC with standard browser APIs.
const ws = new WebSocket(
  "wss://api.yourplatform.com/v1/realtime?model=lucy_2_rt&token=user_token"
);

const pc = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
});

const stream = await navigator.mediaDevices.getUserMedia({
  video: { width: 1280, height: 720, frameRate: 20 },
  audio: true,
});
stream.getTracks().forEach((track) => pc.addTrack(track, stream));

pc.ontrack = (event) => {
  document.getElementById("remote-video").srcObject = event.streams[0];
};

pc.onicecandidate = ({ candidate }) => {
  ws.send(JSON.stringify({ type: "ice-candidate", candidate }));
};

ws.addEventListener("open", async () => {
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  ws.send(JSON.stringify({ type: "offer", sdp: offer.sdp }));
});

ws.addEventListener("message", async (event) => {
  const msg = JSON.parse(event.data);

  switch (msg.type) {
    case "answer":
      await pc.setRemoteDescription({ type: "answer", sdp: msg.sdp });
      break;

    case "ice-candidate":
      if (msg.candidate) await pc.addIceCandidate(msg.candidate);
      break;

    case "ice-restart":
      // Server rotated TURN credentials — reconfigure and restart ICE
      pc.setConfiguration({
        iceServers: [
          { urls: "stun:stun.l.google.com:19302" },
          {
            urls: msg.turn_config.server_url,
            username: msg.turn_config.username,
            credential: msg.turn_config.credential,
          },
        ],
      });
      const restartOffer = await pc.createOffer({ iceRestart: true });
      await pc.setLocalDescription(restartOffer);
      ws.send(JSON.stringify({ type: "offer", sdp: restartOffer.sdp }));
      break;

    case "generation_started":
      showLiveIndicator();
      break;

    case "generation_tick":
      updateUsageDisplay(msg.seconds);
      break;

    case "generation_ended":
      finalizeSession(msg.seconds, msg.reason);
      break;

    case "prompt_ack":
      if (!msg.success) console.warn("Prompt rejected:", msg.error);
      break;

    case "set_image_ack":
      if (!msg.success) console.warn("Image rejected:", msg.error);
      break;

    case "error":
      console.error("Server error:", msg.error);
      break;
  }
});
Once the session is running, send control messages at any time:
ws.send(JSON.stringify({
  type: "prompt",
  prompt: "Cyberpunk neon city",
  enhance_prompt: true,
}));

To end the session, stop the local tracks and close both connections:
stream.getTracks().forEach((t) => t.stop());
ws.close();
pc.close();
Message types and fields match the message reference above. In production, add reconnection logic for WebSocket drops and monitor pc.connectionState for WebRTC failures.

Next steps