Transform live camera video with AI on mobile devices. This walkthrough covers the architecture and key integration points of an open-source Expo app that connects to Decart’s realtime models — including Lucy 2 for character transformation.
This example uses Expo with react-native-webrtc. The same @decartai/sdk works with any React Native setup.
Source code: github.com/DecartAI/decart-example-expo-realtime

What you’ll build

Model switching

Switch between Mirage restyling, Lucy editing, and — after a quick upgrade — Lucy 2 for character transformation.

Style presets

Swipe through curated looks: anime characters, tuxedos, superhero costumes, and more. Each preset is a text prompt sent to the model.

Camera controls

Toggle front/back camera, switch view modes (fullscreen, picture-in-picture, split), and see your original and transformed feeds side by side.

Prerequisites

  • Node.js 18+
  • Expo (npx expo — no global install needed)
  • A Decart API key from platform.decart.ai
  • A physical iOS or Android device (WebRTC requires real camera hardware)

Project setup

1. Clone the repository

git clone https://github.com/DecartAI/decart-example-expo-realtime.git
cd decart-example-expo-realtime
2. Install dependencies

npm install
3. Add your API key

Create a .env file in the project root:
EXPO_PUBLIC_DECART_API_KEY=your-api-key-here
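Because EXPO_PUBLIC_ variables are inlined at build time, a missing key only surfaces later as a failed connection. A small startup guard can fail fast instead; this helper is illustrative and not part of the example repo:

```typescript
// Hypothetical helper: fail fast when the Decart API key is missing.
// Call once at app startup with process.env.
function requireApiKey(env: Record<string, string | undefined>): string {
  const key = env.EXPO_PUBLIC_DECART_API_KEY;
  if (!key) {
    throw new Error("EXPO_PUBLIC_DECART_API_KEY is not set — add it to .env");
  }
  return key;
}
```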
4. Run on a device

npx expo run:ios
# or
npx expo run:android
WebRTC requires a physical device. The iOS Simulator and Android Emulator do not support camera streaming.

Architecture overview

The app uses Expo Router with three screens:
Screen        File                        Purpose
Entry         app/index.tsx               Checks camera/microphone permissions, routes accordingly
Permissions   app/permission-screen.tsx   Requests camera and microphone access
Camera        app/camera.tsx              Renders the main <Camera /> component
Key directories:
components/camera/
  hooks/useWebRTC.ts      — Core Decart SDK integration
  ui/ModelSelector.tsx     — Model toggle UI
  ui/VideoRenderer.tsx     — Displays local + remote streams
  ui/style-carousel/       — Swipeable style picker

lib/realtime/
  media-streams.ts         — Camera stream setup with model constraints
  webrtc-utils.ts          — VP8 codec forcing for React Native

lib/skins/
  lucy-skin-list.ts        — 21 text prompts for Lucy editing presets
  mirage-skin-list.ts      — Style presets with optional audio

Core integration: the WebRTC hook

The useWebRTC hook handles the entire Decart SDK lifecycle — creating a client, capturing the camera, connecting to a realtime model, and managing cleanup.
import { useCallback, useRef, useState } from "react";
import { createDecartClient, type ModelDefinition, type RealTimeClient } from "@decartai/sdk";
import type { MediaStream } from "react-native-webrtc";
import { getMediaStream } from "@/lib/realtime/media-streams";
import { forceCodecInSDP } from "@/lib/realtime/webrtc-utils";

export function useWebRTC({ model, facingMode }: { model: ModelDefinition; facingMode: string }) {
  const [localMediaStream, setLocalMediaStream] = useState<MediaStream | null>(null);
  const [remoteMediaStream, setRemoteMediaStream] = useState<MediaStream | null>(null);
  const [connectionState, setConnectionState] = useState<string>("disconnected");
  const realtimeClientRef = useRef<RealTimeClient | null>(null);

  const initializeWebRTC = useCallback(
    async ({ model, facingMode }: { model: ModelDefinition; facingMode: string }) => {
      try {
        // 1. Create the Decart client
        const decartClient = createDecartClient({
          apiKey: process.env.EXPO_PUBLIC_DECART_API_KEY as string,
        });

        // 2. Capture camera with model-specific constraints
        const stream = await getMediaStream(facingMode, {
          fps: model.fps,
          width: model.width,
          height: model.height,
        });
        setLocalMediaStream(stream);
        setConnectionState("connecting");

        // 3. Connect to the realtime model
        realtimeClientRef.current = await decartClient.realtime.connect(
          stream as unknown as globalThis.MediaStream,
          {
            model,
            onRemoteStream: (transformedStream) => {
              setRemoteMediaStream(transformedStream as unknown as MediaStream);
              setConnectionState("connected");
            },
            customizeOffer: (offer: RTCSessionDescriptionInit) => {
              forceCodecInSDP(offer, "VP8");
            },
          },
        );
      } catch (error) {
        console.error("Connection failed:", error);
        setConnectionState("failed");
      }
    },
    [],
  );

  const cleanupWebRTC = useCallback(() => {
    realtimeClientRef.current?.disconnect();
    setRemoteMediaStream(null);
    setConnectionState("disconnected");
  }, []);

  return {
    localMediaStream,
    remoteMediaStream,
    connectionState,
    realtimeClientRef,
    initializeWebRTC,
    cleanupWebRTC,
  };
}
Three things to note:
  1. Model constraints drive camera setup. Each model defines its own fps, width, and height. The getMediaStream helper requests exactly those dimensions from the device camera.
  2. customizeOffer forces VP8. React Native’s WebRTC implementation works best with VP8. The forceCodecInSDP utility reorders the SDP to prefer VP8 over H264.
  3. Type casting. React Native’s MediaStream type differs from the browser’s globalThis.MediaStream. The as unknown as casts bridge this gap.
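The repo's forceCodecInSDP implementation isn't reproduced here, but the idea can be sketched as pure string manipulation: find the payload types that a=rtpmap maps to the target codec, then move them to the front of the m=video line. Function name and details below are illustrative, not the repo's exact code:

```typescript
// Sketch: reorder the m=video line so the named codec's payload types
// come first, making it the preferred codec during negotiation.
function preferCodec(sdp: string, codec: string): string {
  const lines = sdp.split("\r\n");
  // Payload types whose rtpmap names the codec, e.g. "a=rtpmap:97 VP8/90000"
  const payloads = lines
    .map((l) => l.match(/^a=rtpmap:(\d+) ([\w-]+)\//))
    .filter((m): m is RegExpMatchArray => !!m && m[2] === codec)
    .map((m) => m[1]);
  return lines
    .map((line) => {
      if (!line.startsWith("m=video")) return line;
      const parts = line.split(" ");
      const header = parts.slice(0, 3); // "m=video", port, proto
      const rest = parts.slice(3).filter((pt) => !payloads.includes(pt));
      return [...header, ...payloads, ...rest].join(" ");
    })
    .join("\r\n");
}
```

Mutating the offer's sdp field with the result of a function like this, inside customizeOffer, is what makes the browser-side negotiation land on VP8.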

Switching models

Each model offers different creative capabilities:
Model    ID                 What it does
Lucy 2   lucy_2_rt          Character transformation and text-guided editing in one model
Mirage   mirage_v2          Full video restyling — transform scenes into different visual styles
Lucy 1   lucy_v2v_720p_rt   Text-guided editing for backward compatibility
Switch models by creating a new ModelDefinition and reconnecting:
import { models, type ModelDefinition } from "@decartai/sdk";

// Select a model
const mirage = models.realtime("mirage_v2");
const lucy1 = models.realtime("lucy_v2v_720p_rt");
const lucy2 = models.realtime("lucy_2_rt");

// To switch: disconnect, update state, reconnect
async function switchModel(newModel: ModelDefinition) {
  cleanupWebRTC();
  await new Promise((resolve) => setTimeout(resolve, 100)); // allow cleanup to complete
  await initializeWebRTC({ model: newModel, facingMode });
}
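Rapid taps on the model selector can overlap a teardown with a reconnect. One way to guard against a stale reconnect winning the race is a generation counter; this pattern is illustrative and not part of the example app:

```typescript
// Sketch: each switch attempt takes a token; only the latest token is
// allowed to apply its result (e.g. to set the connected stream).
function createSwitchGuard() {
  let generation = 0;
  return {
    // Start a new switch attempt, invalidating all earlier ones
    begin(): number {
      return ++generation;
    },
    // Check whether this attempt is still the latest
    isCurrent(token: number): boolean {
      return token === generation;
    },
  };
}
```

In switchModel, you would call begin() before disconnecting and check isCurrent() after the awaits, bailing out if a newer switch has started.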
Use Lucy 2 as the default for new mobile integrations. It supports both character reference and text-only editing.

Applying style presets

The app includes curated prompt presets for each model. When a user selects a preset, the prompt is sent to the active model:
// Apply a style preset to the current model
if (realtimeClientRef.current) {
  realtimeClientRef.current.setPrompt(skin.prompt, {
    enhance: skin.enrich ?? false,
  });
}
Example prompts from the preset list:
const presets = [
  {
    title: "Anime Character",
    prompt:
      "Transform the person into a 2D anime character with smooth cel-shaded lines, " +
      "soft pastel highlights, large expressive eyes, clean contours, even lighting, " +
      "simplified textures, and a bright studio-style background for a polished anime look.",
  },
  {
    title: "Black Tuxedo",
    prompt:
      "Change the outfit to a sharp black tuxedo with satin lapels, crisp white shirt, " +
      "black bow tie, tailored fit, polished shoes, and soft indoor lighting reflecting " +
      "off a marble floor.",
  },
  {
    title: "Sunglasses",
    prompt:
      "Add a pair of dark tinted sunglasses resting naturally on the person's face, " +
      "smooth acetate frames, subtle reflections on the lenses, accurate nose " +
      "placement, and soft shadows across the cheeks.",
  },
];
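The style carousel lets users swipe past either end of the preset list and wrap around. The index arithmetic for that is small but easy to get wrong with negative numbers; a sketch (not the repo's code):

```typescript
// Wrap-around index for a swipeable carousel: delta is +1/-1 per swipe.
// The double modulo keeps the result non-negative for negative deltas.
function wrapIndex(current: number, delta: number, length: number): number {
  return ((current + delta) % length + length) % length;
}
```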
For Lucy 2, use set() instead of setPrompt() to include a reference image:
// Lucy 2: transform into a character using a reference image
await realtimeClientRef.current.set({
  prompt: "Transform into this character",
  image: characterImage, // File, Blob, or URL
  enhance: true,
});

Handling app lifecycle

Mobile apps frequently move between foreground and background. The example handles this by disconnecting when backgrounded and reconnecting when foregrounded:
import { useEffect, useRef } from "react";
import { AppState, type AppStateStatus } from "react-native";

// Inside the component: track the previous state so transitions can be detected
const appStateRef = useRef<AppStateStatus>(AppState.currentState);

useEffect(() => {
  const handleAppStateChange = (nextAppState: AppStateStatus) => {
    if (appStateRef.current.match(/inactive|background/) && nextAppState === "active") {
      // App returned to foreground — reconnect
      initializeWebRTC({ model, facingMode });
    } else if (appStateRef.current === "active" && nextAppState.match(/inactive|background/)) {
      // App going to background — disconnect to free resources
      cleanupWebRTC();
    }
    appStateRef.current = nextAppState;
  };

  const subscription = AppState.addEventListener("change", handleAppStateChange);
  return () => subscription?.remove();
}, [cleanupWebRTC, initializeWebRTC]);
Always disconnect when backgrounded. WebRTC connections consume battery and network resources even when the app isn’t visible.
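The branching in the handler above can be factored into a pure function, which makes it testable without React Native. The example app keeps this logic inline; the extraction below is a sketch:

```typescript
// Decide what to do for an app-state transition:
// background/inactive -> active  => reconnect
// active -> background/inactive  => disconnect
function transition(prev: string, next: string): "reconnect" | "disconnect" | "none" {
  const backgrounded = /inactive|background/;
  if (backgrounded.test(prev) && next === "active") return "reconnect";
  if (prev === "active" && backgrounded.test(next)) return "disconnect";
  return "none";
}
```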

Camera controls

Mirror the front camera. When switching to the front-facing camera, call setMirror(true) so the output matches what the user sees:
function switchCamera(newFacingMode: "user" | "environment") {
  if (realtimeClientRef.current) {
    realtimeClientRef.current.setMirror(newFacingMode === "user");
  }
}
Switch facing mode. The camera stream needs to be recaptured when toggling between front and back cameras, since each has different hardware capabilities.
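A helper like getMediaStream presumably translates the model definition and facing mode into getUserMedia constraints before recapturing. A minimal sketch of that translation, with field names assumed rather than taken from the repo:

```typescript
// Build getUserMedia-style constraints from a model definition plus the
// requested camera. Whether audio is captured depends on the model; it is
// shown here as a simple flag.
function buildConstraints(
  model: { fps: number; width: number; height: number },
  facingMode: "user" | "environment",
  withAudio = true,
) {
  return {
    audio: withAudio,
    video: {
      facingMode,
      frameRate: { ideal: model.fps },
      width: { ideal: model.width },
      height: { ideal: model.height },
    },
  };
}
```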

Adding Lucy 2 support

The example app currently ships with Mirage and Lucy 1. Adding Lucy 2 requires two small changes: updating the model selector and using the set() method for character reference images.
1. Add Lucy 2 to the model selector:
import { models } from "@decartai/sdk";

// Add lucy_2_rt alongside the existing models
const handleTabPress = (modelName: "mirage_v2" | "lucy_v2v_720p_rt" | "lucy_2_rt") => {
  const newModel = models.realtime(modelName);
  onModelChange(newModel);
};
2. Use set() for character reference images: Lucy 2’s key feature is character transformation via reference images. Replace setPrompt() with set() when a reference image is available:
// For Lucy 2 with a reference image
await realtimeClientRef.current.set({
  prompt: "Transform into this character",
  image: selectedImage,
  enhance: true,
});

// For text-only editing (works with all models)
realtimeClientRef.current.setPrompt("Add sunglasses", { enhance: true });
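The set()-vs-setPrompt() decision can be made explicit with a tiny router. This helper is illustrative, not part of the example repo:

```typescript
// Pick the apply method: Lucy 2 with a reference image uses set();
// everything else falls back to text-only setPrompt().
function chooseApplyMethod(modelId: string, hasImage: boolean): "set" | "setPrompt" {
  return modelId === "lucy_2_rt" && hasImage ? "set" : "setPrompt";
}
```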

Next steps