
Installation

Gradle (JitPack)

Add the JitPack repository to your settings.gradle.kts:
settings.gradle.kts
dependencyResolutionManagement {
    repositories {
        google()
        mavenCentral()
        maven { url = uri("https://jitpack.io") }
    }
}
Add the dependency to your app’s build.gradle.kts:
build.gradle.kts
dependencies {
    implementation("com.github.DecartAI:decart-android:0.2.0")
}

Quick Start

import ai.decart.sdk.DecartClient
import ai.decart.sdk.DecartClientConfig
import ai.decart.sdk.realtime.ConnectOptions
import ai.decart.sdk.realtime.InitialPrompt
import ai.decart.sdk.RealtimeModels

val client = DecartClient(context, DecartClientConfig(apiKey = "your-api-key"))
val realtime = client.realtime

// Initialize WebRTC
realtime.initialize(eglBase)

// Connect with a camera track
realtime.connect(
    localVideoTrack = cameraTrack,
    localAudioTrack = null,
    options = ConnectOptions(
        model = RealtimeModels.MIRAGE_V2,
        onRemoteVideoTrack = { track ->
            // Display the transformed video
            remoteRenderer.addSink(track)
        },
        initialPrompt = InitialPrompt("a cyberpunk cityscape")
    )
)

// Change prompt during session
realtime.setPrompt("a sunny beach scene", enhance = true)

// Disconnect when done
realtime.disconnect()
client.release()

What can you build?

The SDK provides two main APIs for different use cases:
  • Transform live camera streams over WebRTC: Realtime API (client.realtime.connect())
  • Generate or edit videos asynchronously: Queue API (client.queue.submitAndPoll())

Platform Requirements

The Android SDK requires:
  • Android API 24+ (Android 7.0 Nougat)
  • Kotlin 2.1+
  • Java 17
  • A real device for camera access (emulator does not support WebRTC camera features)
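The requirements above translate to Gradle configuration along these lines. This is a sketch with illustrative values; adjust module names and plugin versions to your project:

```kotlin
// build.gradle.kts (app module)
android {
    defaultConfig {
        minSdk = 24  // Android 7.0 (Nougat)
    }
    compileOptions {
        sourceCompatibility = JavaVersion.VERSION_17
        targetCompatibility = JavaVersion.VERSION_17
    }
}

kotlin {
    jvmToolchain(17)  // Kotlin 2.1+ compiled against a Java 17 toolchain
}
```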

Client Setup

Initialize the Decart client with your API key and an Android Context:
import ai.decart.sdk.DecartClient
import ai.decart.sdk.DecartClientConfig

val client = DecartClient(
    context = applicationContext,
    config = DecartClientConfig(
        apiKey = "your-api-key",
        baseUrl = "wss://api.decart.ai",       // optional — WebSocket URL for realtime
        httpBaseUrl = "https://api.decart.ai",  // optional — HTTP URL for queue API
        logLevel = LogLevel.WARN,               // optional — SDK log verbosity
    )
)
Parameters:
  • apiKey (required) - Your Decart API key from the platform
  • baseUrl (optional) - Custom WebSocket endpoint for the Realtime API
  • httpBaseUrl (optional) - Custom HTTP endpoint for the Queue API
  • logLevel (optional) - Minimum log level: VERBOSE, DEBUG, INFO, WARN, ERROR, NONE
Store your API key securely. Never commit API keys to version control. Use BuildConfig fields, encrypted shared preferences, or fetch ephemeral keys from your backend.
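One common pattern for keeping the key out of source control is a BuildConfig field injected at build time. A sketch, where DECART_API_KEY is a hypothetical environment variable, not something the SDK defines:

```kotlin
// build.gradle.kts (app module)
android {
    buildFeatures {
        buildConfig = true
    }
    defaultConfig {
        // Read the key from the environment at build time so it never
        // appears in version control.
        buildConfigField(
            "String",
            "DECART_API_KEY",
            "\"${System.getenv("DECART_API_KEY") ?: ""}\""
        )
    }
}
```

App code then passes BuildConfig.DECART_API_KEY as the apiKey. Note that this still ships the key inside the APK, so for production realtime sessions prefer ephemeral client tokens fetched from your backend.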

Client Tokens

For production Android apps using the Realtime API, fetch short-lived client tokens from your backend instead of embedding your permanent API key in the APK:
// Fetch ephemeral key from your backend
val ephemeralKey = fetchTokenFromBackend()

// Use it to create the client
val client = DecartClient(
    context = applicationContext,
    config = DecartClientConfig(apiKey = ephemeralKey)
)
See Client Tokens for details on secure client-side authentication.
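The fetchTokenFromBackend() call above is left to your app. A minimal sketch, assuming a hypothetical /decart-token endpoint on your backend that returns a JSON body like {"token": "..."}:

```kotlin
import java.net.URL

// Hypothetical backend endpoint; replace with your own service.
val TOKEN_ENDPOINT = "https://your-backend.example.com/decart-token"

// Extract the "token" field from a minimal JSON body. A real app would
// use a JSON library (e.g. kotlinx.serialization); a regex keeps this
// sketch dependency-free.
fun parseTokenResponse(json: String): String =
    Regex("\"token\"\\s*:\\s*\"([^\"]+)\"")
        .find(json)?.groupValues?.get(1)
        ?: error("No token in response")

// Blocking network fetch; call from a background dispatcher (e.g.
// Dispatchers.IO) in a real app, never on the main thread.
fun fetchTokenFromBackend(): String =
    parseTokenResponse(URL(TOKEN_ENDPOINT).readText())
```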

Available Models

Realtime Models

Use these with client.realtime.connect():
import ai.decart.sdk.RealtimeModels

RealtimeModels.MIRAGE              // Realtime video transformation
RealtimeModels.MIRAGE_V2           // Realtime video transformation (v2)
RealtimeModels.LUCY_V2V_720P_RT    // Realtime video editing
RealtimeModels.LUCY_2_RT           // Realtime video editing with character reference
RealtimeModels.LIVE_AVATAR         // Avatar animation with audio
Each model exposes properties for optimal camera configuration:
val model = RealtimeModels.LUCY_V2V_720P_RT
println(model.fps)     // 25
println(model.width)   // 1280
println(model.height)  // 704

Batch Video Models

Use these with client.queue.submit() or client.queue.submitAndPoll():
import ai.decart.sdk.VideoModels

VideoModels.LUCY_2_V2V         // Video-to-video editing
VideoModels.LUCY_PRO_V2V       // Video-to-video (Pro)
VideoModels.LUCY_FAST_V2V      // Video-to-video (Fast)
VideoModels.LUCY_RESTYLE_V2V   // Video restyling
VideoModels.LUCY_PRO_T2V       // Text-to-video
VideoModels.LUCY_PRO_I2V       // Image-to-video (Pro)
VideoModels.LUCY_DEV_I2V       // Image-to-video (Dev)
VideoModels.LUCY_MOTION        // Motion video (trajectory-guided)

Kotlin Coroutines & Flow

The SDK uses Kotlin coroutines and Flow for reactive state management:
import kotlinx.coroutines.flow.collect
import kotlinx.coroutines.launch

// Realtime connection state as StateFlow
lifecycleScope.launch {
    client.realtime.connectionState.collect { state ->
        println("Connection: $state")
    }
}

// Errors as SharedFlow
lifecycleScope.launch {
    client.realtime.errors.collect { error ->
        println("Error ${error.code}: ${error.message}")
    }
}

// Queue job progress as Flow
client.queue.submitAndObserve(model, input).collect { update ->
    when (update) {
        is QueueJobResult.InProgress -> println("Status: ${update.status}")
        is QueueJobResult.Completed -> saveVideo(update.data)
        is QueueJobResult.Failed -> showError(update.error)
    }
}
All realtime state is exposed as Kotlin StateFlow or SharedFlow. Collect them in a lifecycle-aware scope (e.g., lifecycleScope or viewModelScope) to avoid leaks.

Jetpack Compose Integration

The SDK works with Jetpack Compose through WebRTC’s SurfaceViewRenderer:
import androidx.compose.runtime.Composable
import androidx.compose.ui.viewinterop.AndroidView
import org.webrtc.SurfaceViewRenderer

@Composable
fun VideoRenderer(
    onSurfaceReady: (SurfaceViewRenderer) -> Unit
) {
    AndroidView(
        factory = { context ->
            SurfaceViewRenderer(context).apply {
                init(client.realtime.getEglBaseContext(), null)
                onSurfaceReady(this)
            }
        }
    )
}

Android Permissions

Your app must declare camera and internet permissions. Add these to your AndroidManifest.xml:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
Request camera permission at runtime before connecting:
val launcher = rememberLauncherForActivityResult(
    ActivityResultContracts.RequestPermission()
) { granted ->
    if (granted) {
        // Start camera and connect
    }
}

launcher.launch(Manifest.permission.CAMERA)

Type Safety

The SDK uses typed input classes for each batch video model category, providing compile-time guarantees:
import ai.decart.sdk.queue.*

// Video editing — requires prompt + video file
val edit = VideoEditInput(
    prompt = "Cinematic color grade",
    data = FileInput.fromUri(videoUri),
)

// Text-to-video — prompt only, no media file
val t2v = TextToVideoInput(
    prompt = "Drone shot over mountains",
    orientation = "landscape",
)

// Motion video — image + trajectory points
val motion = MotionVideoInput(
    data = FileInput.fromUri(imageUri),
    trajectory = listOf(
        TrajectoryPoint(frame = 0, x = 0.5f, y = 0.5f),
        TrajectoryPoint(frame = 12, x = 0.8f, y = 0.35f),
    ),
)

Error Handling

The SDK surfaces errors through two mechanisms: a SharedFlow for the Realtime API and typed exceptions for the Queue API. Realtime errors are emitted via the errors SharedFlow:
client.realtime.errors.collect { error ->
    when (error.code) {
        ErrorCodes.INVALID_API_KEY -> { /* handle auth error */ }
        ErrorCodes.WEBRTC_TIMEOUT_ERROR -> { /* handle timeout */ }
        ErrorCodes.WEBRTC_ICE_ERROR -> { /* handle ICE failure */ }
        else -> { /* handle other errors */ }
    }
}
Queue errors throw typed exceptions:
try {
    val result = client.queue.submitAndPoll(model, input)
} catch (e: QueueSubmitException) {
    // Job submission failed
} catch (e: QueueStatusException) {
    // Status check failed
} catch (e: QueueResultException) {
    // Result download failed
} catch (e: InvalidInputException) {
    // Input validation failed
}
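The SDK does not retry failed queue calls for you. If transient failures (for example, a QueueStatusException during polling on a flaky mobile network) are common, a small backoff wrapper is one option. This helper is a generic sketch, not part of the SDK:

```kotlin
// Retry a block with exponential backoff: 500 ms, 1 s, 2 s, ...
// Rethrows the last failure once all attempts are exhausted.
fun <T> retryWithBackoff(
    attempts: Int = 3,
    baseDelayMs: Long = 500,
    block: () -> T,
): T {
    var lastError: Exception? = null
    repeat(attempts) { attempt ->
        try {
            return block()
        } catch (e: Exception) {
            lastError = e
            if (attempt < attempts - 1) {
                // Double the wait between attempts.
                Thread.sleep(baseDelayMs * (1L shl attempt))
            }
        }
    }
    throw lastError ?: IllegalStateException("retryWithBackoff: no attempts made")
}
```

Wrap only the calls worth retrying, e.g. retryWithBackoff { client.queue.submitAndPoll(model, input) }. In coroutine code you would make the helper a suspend function and use delay instead of Thread.sleep.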

Sample App

The SDK includes a sample Jetpack Compose app with:
  • Realtime tab — Camera capture + WebRTC streaming with live prompt changes
  • Video tab — Batch job submission, status updates, and result playback
For a more complete app showcasing real-world use cases — video restyling, 90+ style presets, multiple view modes, and swipe navigation — see the Decart Android Example App.

Ready to start building?