Version: Next

VisionCamera Integration

React Native ExecuTorch vision models support real-time frame processing via VisionCamera v5 using the runOnFrame worklet. This page explains how to set it up and what to watch out for.

Prerequisites

Make sure you have the following packages installed:

  • react-native-vision-camera (v5)
  • react-native-worklets
  • react-native-executorch

Which models support runOnFrame?

The following hooks expose runOnFrame:

runOnFrame vs forward

            runOnFrame            forward
Thread      JS (worklet)          Background thread
Input       VisionCamera Frame    string URI / PixelData
Output      Model result (sync)   Promise<result>
Use case    Real-time camera      Single image

Use runOnFrame when you need to process every camera frame. Use forward for one-off image inference.
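The difference in call shape can be sketched as follows (a hypothetical fragment, not a complete component; `model` stands for any vision hook result, e.g. from useObjectDetection, and `frame` / `isFrontCamera` are the values available inside an onFrame worklet):

```typescript
// One-off inference: async, takes an image URI, runs on a background thread.
const detections = await model.forward('file:///path/to/photo.jpg');

// Real-time inference: sync, called inside a VisionCamera onFrame worklet
// with the live Frame and a flag for the active camera.
const live = model.runOnFrame(frame, isFrontCamera);
```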

How it works

VisionCamera v5 delivers frames via useFrameOutput. Inside the onFrame worklet you call runOnFrame(frame, isFrontCamera) synchronously, then use scheduleOnRN from react-native-worklets to post the result back to React state on the main thread.

The isFrontCamera parameter tells the native side whether the front camera is active so it can mirror the results correctly. The library handles device-orientation rotation internally; results are always returned in screen-space coordinates regardless of how the user holds their device.

warning

You must set pixelFormat: 'rgb' in useFrameOutput. Our extraction pipeline expects RGB pixel data — any other format (e.g. the default yuv) will produce incorrect results.

warning

runOnFrame is synchronous and runs on the JS worklet thread. For models with longer inference times, use dropFramesWhileBusy: true to skip frames and avoid blocking the camera pipeline. For more control, see VisionCamera's async frame processing guide.

note

Always call frame.dispose() after processing to release the frame buffer. Wrap your inference in a try/finally to ensure it's always called even if runOnFrame throws.

Camera configuration

The Camera component requires specific props for correct orientation handling:

<Camera
  device={device}
  outputs={[frameOutput]}
  isActive
  orientationSource="device"
/>
  • orientationSource="device" — ensures frame orientation metadata reflects the physical device orientation, which the library uses to rotate model inputs and outputs correctly.
  • Do not set enablePhysicalBufferRotation — this prop must remain false (the default). If enabled, VisionCamera pre-rotates the pixel buffer, which conflicts with the library's own orientation handling and produces incorrect results.

Full example (Object Detection)

import { useState, useEffect, useCallback } from 'react';
import { StyleSheet, View, Text } from 'react-native';
import {
  Camera,
  Frame,
  useCameraDevices,
  useCameraPermission,
  useFrameOutput,
} from 'react-native-vision-camera';
import { scheduleOnRN } from 'react-native-worklets';
import {
  Detection,
  useObjectDetection,
  SSDLITE_320_MOBILENET_V3_LARGE,
} from 'react-native-executorch';

export default function App() {
  const { hasPermission, requestPermission } = useCameraPermission();
  const devices = useCameraDevices();
  const device = devices.find((d) => d.position === 'back');
  const model = useObjectDetection({ model: SSDLITE_320_MOBILENET_V3_LARGE });
  const [detections, setDetections] = useState<Detection[]>([]);

  // Stable reference to the worklet so it can be captured by onFrame
  const detectOnFrame = model.runOnFrame;

  const updateDetections = useCallback((results: Detection[]) => {
    setDetections(results);
  }, []);

  const frameOutput = useFrameOutput({
    pixelFormat: 'rgb',
    dropFramesWhileBusy: true,
    onFrame: useCallback(
      (frame: Frame) => {
        'worklet';
        try {
          // Guard inside try so the frame is still disposed while loading
          if (!detectOnFrame) return;
          const isFrontCamera = false; // using back camera
          const result = detectOnFrame(frame, isFrontCamera, 0.5);
          if (result) {
            scheduleOnRN(updateDetections, result);
          }
        } finally {
          frame.dispose();
        }
      },
      [detectOnFrame, updateDetections]
    ),
  });

  // Request camera permission as an effect, not during render
  useEffect(() => {
    if (!hasPermission) {
      requestPermission();
    }
  }, [hasPermission, requestPermission]);

  if (!hasPermission || !device) return null;

  return (
    <View style={styles.container}>
      <Camera
        style={StyleSheet.absoluteFill}
        device={device}
        outputs={[frameOutput]}
        isActive
        orientationSource="device"
      />
      {detections.map((det, i) => (
        <Text key={i} style={styles.label}>
          {det.label} {(det.score * 100).toFixed(1)}%
        </Text>
      ))}
    </View>
  );
}

const styles = StyleSheet.create({
  container: { flex: 1 },
  label: {
    position: 'absolute',
    bottom: 40,
    alignSelf: 'center',
    color: 'white',
    fontSize: 16,
  },
});

For a complete example showing how to render bounding boxes, segmentation masks, OCR overlays, and style transfer results on top of the camera preview, see the example app's VisionCamera tasks.

Handling front/back camera

When switching between front and back cameras, you need to pass the correct isFrontCamera value to runOnFrame. Since worklets cannot read React state directly, use a Synchronizable from react-native-worklets:

import { createSynchronizable } from 'react-native-worklets';

// Create outside the component so it's stable across renders
const cameraPositionSync = createSynchronizable<'front' | 'back'>('back');

export default function App() {
  const [cameraPosition, setCameraPosition] = useState<'front' | 'back'>(
    'back'
  );
  // `runOnFrame` comes from a model hook, as in the full example above

  // Keep the synchronizable in sync with React state
  useEffect(() => {
    cameraPositionSync.setBlocking(cameraPosition);
  }, [cameraPosition]);

  const frameOutput = useFrameOutput({
    pixelFormat: 'rgb',
    dropFramesWhileBusy: true,
    onFrame: useCallback(
      (frame: Frame) => {
        'worklet';
        try {
          if (!runOnFrame) return;
          const isFrontCamera = cameraPositionSync.getDirty() === 'front';
          const result = runOnFrame(frame, isFrontCamera);
          // ... handle result
        } finally {
          frame.dispose();
        }
      },
      [runOnFrame]
    ),
  });

  // ...
}

Using the Module API

If you use the TypeScript Module API (e.g. ClassificationModule) directly instead of a hook, you will typically store runOnFrame in React state once the model has loaded. Because runOnFrame is a function, it cannot be passed to the state setter directly: React would invoke it as a functional updater instead of storing it. Use the functional updater form setRunOnFrame(() => module.runOnFrame):

import { useState, useEffect, useCallback } from 'react';
import {
  Camera,
  Frame,
  useCameraDevices,
  useFrameOutput,
} from 'react-native-vision-camera';
import { scheduleOnRN } from 'react-native-worklets';
import {
  ClassificationModule,
  EFFICIENTNET_V2_S,
} from 'react-native-executorch';

export default function App() {
  const devices = useCameraDevices();
  const device = devices.find((d) => d.position === 'back');
  const [module] = useState(() => new ClassificationModule());
  const [runOnFrame, setRunOnFrame] = useState<typeof module.runOnFrame | null>(
    null
  );
  const [topLabel, setTopLabel] = useState('');

  useEffect(() => {
    module.load(EFFICIENTNET_V2_S).then(() => {
      // `() => module.runOnFrame` is required; passing `module.runOnFrame`
      // directly would make React invoke it as a functional updater
      setRunOnFrame(() => module.runOnFrame);
    });
  }, [module]);

  const frameOutput = useFrameOutput({
    pixelFormat: 'rgb',
    dropFramesWhileBusy: true,
    onFrame: useCallback(
      (frame: Frame) => {
        'worklet';
        try {
          // Guard inside try so the frame is still disposed while loading
          if (!runOnFrame) return;
          const isFrontCamera = false;
          const result = runOnFrame(frame, isFrontCamera);
          if (result) scheduleOnRN(setTopLabel, 'detected');
        } finally {
          frame.dispose();
        }
      },
      [runOnFrame]
    ),
  });

  if (!device) return null;

  return (
    <Camera
      outputs={[frameOutput]}
      isActive
      device={device}
      orientationSource="device"
    />
  );
}

Common issues

Bounding boxes or masks are rotated / misaligned

Make sure you have set orientationSource="device" on the Camera component. Without it, the frame orientation metadata won't match the actual device orientation, causing misaligned results.

Also verify that enablePhysicalBufferRotation is not set to true — this conflicts with the library's orientation handling.

Results look wrong or scrambled

You forgot to set pixelFormat: 'rgb'. The default VisionCamera pixel format is yuv — our frame extraction works only with RGB data.

Results are mirrored on front camera

You are not passing isFrontCamera: true when using the front camera. See Handling front/back camera above.

App freezes or camera drops frames

Your model's inference time exceeds the frame interval. Enable dropFramesWhileBusy: true in useFrameOutput, or move inference off the worklet thread using VisionCamera's async frame processing.

Memory leak / crash after many frames

You are not calling frame.dispose(). Always dispose the frame in a finally block.

runOnFrame is always null

The model hasn't finished loading yet. Guard with if (!runOnFrame) return inside onFrame, or check model.isReady before enabling the camera.
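For the second approach, the camera can be gated on readiness (a JSX fragment, assuming a hook result named `model` as in the full example above):

```typescript
// Keep the camera inactive until the model has loaded, so onFrame
// never fires while runOnFrame is still null.
<Camera
  device={device}
  outputs={[frameOutput]}
  isActive={model.isReady}
  orientationSource="device"
/>
```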

TypeError: module.runOnFrame is not a function (Module API)

You passed module.runOnFrame directly to the state setter instead of () => module.runOnFrame, so React invoked it as a functional updater rather than storing it. See the Module API section above.