Build a realtime AI video app with Expo and the Decart SDK
Transform live camera video with AI on mobile devices. This walkthrough covers the architecture and key integration points of an open-source Expo app that connects to Decart’s realtime models — including Lucy 2 for character transformation.
This example uses Expo with react-native-webrtc. The same @decartai/sdk works with any React Native setup.
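If you are starting from scratch, installation might look like the sketch below. The exact steps vary by Expo SDK version, and note that react-native-webrtc includes native code, so it requires a development build rather than Expo Go; verify the current steps against the SDK and library docs.

```shell
# Install the Decart SDK and WebRTC support (sketch; verify against current docs)
npm install @decartai/sdk
npx expo install react-native-webrtc

# react-native-webrtc needs native code, so create a development build
npx expo prebuild
npx expo run:ios   # or: npx expo run:android
```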
The useWebRTC hook handles the entire Decart SDK lifecycle — creating a client, capturing the camera, connecting to a realtime model, and managing cleanup.
```typescript
import { useCallback, useRef, useState } from "react";
import { createDecartClient, type ModelDefinition, type RealTimeClient } from "@decartai/sdk";
import type { MediaStream } from "react-native-webrtc";
import { getMediaStream } from "@/lib/realtime/media-streams";
import { forceCodecInSDP } from "@/lib/realtime/webrtc-utils";

export function useWebRTC({ model, facingMode }: { model: ModelDefinition; facingMode: string }) {
  const [localMediaStream, setLocalMediaStream] = useState<MediaStream | null>(null);
  const [remoteMediaStream, setRemoteMediaStream] = useState<MediaStream | null>(null);
  const [connectionState, setConnectionState] = useState<string>("disconnected");
  const realtimeClientRef = useRef<RealTimeClient | null>(null);

  const initializeWebRTC = useCallback(
    async ({ model, facingMode }: { model: ModelDefinition; facingMode: string }) => {
      try {
        // 1. Create the Decart client
        const decartClient = createDecartClient({
          apiKey: process.env.EXPO_PUBLIC_DECART_API_KEY as string,
        });

        // 2. Capture camera with model-specific constraints
        const stream = await getMediaStream(facingMode, {
          fps: model.fps,
          width: model.width,
          height: model.height,
        });
        setLocalMediaStream(stream);
        setConnectionState("connecting");

        // 3. Connect to the realtime model
        realtimeClientRef.current = await decartClient.realtime.connect(
          stream as unknown as globalThis.MediaStream,
          {
            model,
            onRemoteStream: (transformedStream) => {
              setRemoteMediaStream(transformedStream as unknown as MediaStream);
              setConnectionState("connected");
            },
            customizeOffer: (offer: RTCSessionDescriptionInit) => {
              forceCodecInSDP(offer, "VP8");
            },
          },
        );
      } catch (error) {
        console.error("Connection failed:", error);
        setConnectionState("failed");
      }
    },
    [],
  );

  const cleanupWebRTC = useCallback(() => {
    realtimeClientRef.current?.disconnect();
    setRemoteMediaStream(null);
    setConnectionState("disconnected");
  }, []);

  return {
    localMediaStream,
    remoteMediaStream,
    connectionState,
    realtimeClientRef,
    initializeWebRTC,
    cleanupWebRTC,
  };
}
```
Three things to note:
Model constraints drive camera setup. Each model defines its own fps, width, and height. The getMediaStream helper requests exactly those dimensions from the device camera.
customizeOffer forces VP8. React Native’s WebRTC implementation works best with VP8. The forceCodecInSDP utility reorders the SDP to prefer VP8 over H264.
Type casting. React Native’s MediaStream type differs from the browser’s globalThis.MediaStream. The as unknown as casts bridge this gap.
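The guide doesn't show `forceCodecInSDP` itself, so here is a minimal sketch of what such a utility might do. This version is written as a pure function over the SDP string for clarity (the app's real helper appears to mutate the offer in place, since `customizeOffer` ignores its return value): it finds the payload types that `a=rtpmap` lines map to the requested codec and moves them to the front of the video m-line, which is how WebRTC endpoints express codec preference.

```typescript
// Sketch of an SDP codec-preference utility (hypothetical implementation of
// a helper like forceCodecInSDP; the real one may mutate offer.sdp in place).
function preferCodec(sdp: string, codec: string): string {
  const lines = sdp.split("\r\n");
  const mIndex = lines.findIndex((l) => l.startsWith("m=video"));
  if (mIndex === -1) return sdp;

  // Collect payload types whose rtpmap names the codec, e.g. "a=rtpmap:96 VP8/90000".
  // (A production version would only scan the video media section.)
  const payloads = lines
    .map((l) => /^a=rtpmap:(\d+) ([^/]+)/.exec(l))
    .filter((m): m is RegExpExecArray => m !== null && m[2] === codec)
    .map((m) => m[1]);
  if (payloads.length === 0) return sdp;

  // Reorder the m-line so the preferred payload types come first:
  // "m=video 9 UDP/TLS/RTP/SAVPF 102 96" -> "m=video 9 UDP/TLS/RTP/SAVPF 96 102"
  const parts = lines[mIndex].split(" ");
  const header = parts.slice(0, 3);
  const rest = parts.slice(3).filter((p) => !payloads.includes(p));
  lines[mIndex] = [...header, ...payloads, ...rest].join(" ");
  return lines.join("\r\n");
}
```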
The app includes curated prompt presets for each model. When a user selects a preset, the prompt is sent to the active model:
```typescript
// Apply a style preset to the current model
if (realtimeClientRef.current) {
  realtimeClientRef.current.setPrompt(skin.prompt, {
    enhance: skin.enrich ?? false,
  });
}
```
Example prompts from the preset list:
```typescript
const presets = [
  {
    title: "Anime Character",
    prompt:
      "Transform the person into a 2D anime character with smooth cel-shaded lines, " +
      "soft pastel highlights, large expressive eyes, clean contours, even lighting, " +
      "simplified textures, and a bright studio-style background for a polished anime look.",
  },
  {
    title: "Black Tuxedo",
    prompt:
      "Change the outfit to a sharp black tuxedo with satin lapels, crisp white shirt, " +
      "black bow tie, tailored fit, polished shoes, and soft indoor lighting reflecting " +
      "off a marble floor.",
  },
  {
    title: "Sunglasses",
    prompt:
      "Add a pair of dark tinted sunglasses resting naturally on the person's face, " +
      "smooth acetate frames, subtle reflections on the lenses, accurate nose " +
      "placement, and soft shadows across the cheeks.",
  },
];
```
For Lucy 2, use set() instead of setPrompt() to include a reference image:
```typescript
// Lucy 2: transform into a character using a reference image
await realtimeClientRef.current.set({
  prompt: "Transform into this character",
  image: characterImage, // File, Blob, or URL
  enhance: true,
});
```
Mobile apps frequently move between foreground and background. The example handles this by disconnecting when backgrounded and reconnecting when foregrounded:
```typescript
import { AppState, type AppStateStatus } from "react-native";

useEffect(() => {
  const handleAppStateChange = (nextAppState: AppStateStatus) => {
    if (appStateRef.current.match(/inactive|background/) && nextAppState === "active") {
      // App returned to foreground: reconnect
      initializeWebRTC({ model, facingMode });
    } else if (appStateRef.current === "active" && nextAppState.match(/inactive|background/)) {
      // App going to background: disconnect to free resources
      cleanupWebRTC();
    }
    appStateRef.current = nextAppState;
  };

  const subscription = AppState.addEventListener("change", handleAppStateChange);
  return () => subscription?.remove();
}, [cleanupWebRTC, initializeWebRTC]);
```
Always disconnect when backgrounded. WebRTC connections consume battery and network resources even when the app isn’t visible.
Mirror the front camera. When switching to the front-facing camera, call setMirror(true) so the output matches what the user sees:
```typescript
function switchCamera(newFacingMode: "user" | "environment") {
  if (realtimeClientRef.current) {
    realtimeClientRef.current.setMirror(newFacingMode === "user");
  }
}
```
Switch facing mode. The camera stream needs to be recaptured when toggling between front and back cameras, since each has different hardware capabilities.
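The recapture flow can be sketched as follows. This is a hypothetical helper, not code from the example app: the dependencies are passed in explicitly here to make the order of operations clear, whereas the real app would wire them from the hook's state (`localMediaStream`, `initializeWebRTC`, and the client's `setMirror`).

```typescript
// Hypothetical sketch of a camera toggle: stop the old tracks, recapture with
// the new facing mode, and re-apply mirroring for the front camera.
type FacingMode = "user" | "environment";

interface CameraDeps {
  stopLocalTracks: () => void; // stop the old stream's tracks to release the camera
  reconnect: (facingMode: FacingMode) => Promise<void>; // recapture + reconnect
  setMirror: (mirror: boolean) => void; // mirror front-camera output
}

async function toggleCamera(current: FacingMode, deps: CameraDeps): Promise<FacingMode> {
  const next: FacingMode = current === "user" ? "environment" : "user";
  deps.stopLocalTracks(); // release hardware before recapturing
  await deps.reconnect(next); // each camera has different capabilities, so recapture
  deps.setMirror(next === "user"); // mirror only the front camera
  return next;
}
```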
The example app currently ships with Mirage and Lucy 1. Adding Lucy 2 requires two small changes: updating the model selector and using the set() method for character reference images.
1. Update the model selector: add Lucy 2 to the list of selectable models so users can switch to it.

2. Use set() for character reference images: Lucy 2's key feature is character transformation via reference images. Replace setPrompt() with set() when a reference image is available:
```typescript
// For Lucy 2 with a reference image
await realtimeClientRef.current.set({
  prompt: "Transform into this character",
  image: selectedImage,
  enhance: true,
});

// For text-only editing (works with all models)
realtimeClientRef.current.setPrompt("Add sunglasses", { enhance: true });
```