ObjectDetectionModule
TypeScript API implementation of the useObjectDetection hook.
API Reference
- For the detailed API reference for ObjectDetectionModule, see: ObjectDetectionModule API Reference.
- For all object detection models available out-of-the-box in React Native ExecuTorch, see: Object Detection Models.
High Level Overview
import {
ObjectDetectionModule,
SSDLITE_320_MOBILENET_V3_LARGE,
} from 'react-native-executorch';
const imageUri = 'path/to/image.png';
// Creating an instance and loading the model
const objectDetectionModule = await ObjectDetectionModule.fromModelName(
SSDLITE_320_MOBILENET_V3_LARGE
);
// Running the model
const detections = await objectDetectionModule.forward(imageUri);
Methods
All methods of ObjectDetectionModule are explained in detail here: ObjectDetectionModule API Reference
Loading the model
Use the static fromModelName factory method. It accepts a model config object (e.g. SSDLITE_320_MOBILENET_V3_LARGE) and an optional onDownloadProgress callback. It returns a promise resolving to an ObjectDetectionModule instance.
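For example, download progress can be reported like this (a sketch; the positional callback mirrors the fromCustomModel example below, so verify the exact signature against the API reference):

```typescript
import {
  ObjectDetectionModule,
  SSDLITE_320_MOBILENET_V3_LARGE,
} from 'react-native-executorch';

// Load a built-in model and log download progress (a fraction between 0 and 1).
const detector = await ObjectDetectionModule.fromModelName(
  SSDLITE_320_MOBILENET_V3_LARGE,
  (progress) => console.log(`Download progress: ${Math.round(progress * 100)}%`)
);
```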
For more information on loading resources, take a look at the loading models page.
Running the model
To run the model, use the forward method. It accepts two arguments:
- input (required) - The image to process. Can be a remote URL, a local file URI, a base64-encoded image (whole URI or only raw base64), or a PixelData object (raw RGB pixel buffer).
- detectionThreshold (optional) - A number between 0 and 1. Defaults to 0.7.
The method returns a promise resolving to an array of Detection objects, each containing the bounding box, label, and confidence score.
For real-time frame processing, use runOnFrame instead.
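Putting it together, here is a sketch of running detection with a custom threshold and reading the results (the property names on Detection are assumed from the description above; check the Detection type in the API reference for the exact shape):

```typescript
import {
  ObjectDetectionModule,
  SSDLITE_320_MOBILENET_V3_LARGE,
} from 'react-native-executorch';

const detector = await ObjectDetectionModule.fromModelName(
  SSDLITE_320_MOBILENET_V3_LARGE
);

// Keep only detections with a confidence score of at least 0.5.
const detections = await detector.forward('path/to/image.png', 0.5);

for (const detection of detections) {
  // Field names here (bbox, label, score) are assumed for illustration.
  console.log(detection.label, detection.score, detection.bbox);
}
```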
Using a custom model
Use fromCustomModel to load your own exported model binary instead of a built-in preset.
import { ObjectDetectionModule } from 'react-native-executorch';
const MyLabels = { BACKGROUND: 0, CAT: 1, DOG: 2 } as const;
const detector = await ObjectDetectionModule.fromCustomModel(
'https://example.com/custom_detector.pte',
{ labelMap: MyLabels },
(progress) => console.log(progress)
);
Required model contract
The .pte binary must expose a single forward method with the following interface:
Input: one float32 tensor of shape [1, 3, H, W] — a single RGB image, values in [0, 1] after optional per-channel normalization (pixel − mean) / std. H and W are read from the model's declared input shape at load time.
Outputs: exactly three float32 tensors, in this order:
- Bounding boxes — flat
[4·N]array of(x1, y1, x2, y2)coordinates in model-input pixel space. - Confidence scores — flat
[N]array of values in[0, 1]. - Class indices — flat
[N]array offloat32-encoded integer class indices (0-based, matching the order of entries in yourlabelMap).
Preprocessing (resize → normalize) and postprocessing (coordinate rescaling, threshold filtering, NMS) are handled by the native runtime.
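To make the contract concrete, the following illustrative sketch shows how the three flat output arrays map onto detection objects. This is not the library's internal code — decodeDetections and the Detection shape here are hypothetical, and the real postprocessing (including coordinate rescaling and NMS) is done by the native runtime:

```typescript
// Hypothetical helper illustrating the output contract, not library code.
interface Detection {
  bbox: { x1: number; y1: number; x2: number; y2: number };
  label: string;
  score: number;
}

function decodeDetections(
  boxes: Float32Array,    // flat [4·N]: (x1, y1, x2, y2) per detection
  scores: Float32Array,   // flat [N]: confidence values in [0, 1]
  classIdx: Float32Array, // flat [N]: float32-encoded class indices
  labels: string[],       // index -> label name, in labelMap order
  threshold = 0.7
): Detection[] {
  const out: Detection[] = [];
  for (let i = 0; i < scores.length; i++) {
    // Drop detections below the confidence threshold.
    if (scores[i] < threshold) continue;
    out.push({
      bbox: {
        x1: boxes[4 * i],
        y1: boxes[4 * i + 1],
        x2: boxes[4 * i + 2],
        y2: boxes[4 * i + 3],
      },
      label: labels[classIdx[i]],
      score: scores[i],
    });
  }
  return out;
}
```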
Managing memory
The module is a regular JavaScript object, so its lifetime is managed by the garbage collector. In most cases this is enough, and you do not need to free the module's memory yourself. If you want to release the memory occupied by a module before the garbage collector steps in, call the delete method on a module object you no longer need. Note that you cannot use forward after delete unless you load the module again.
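A minimal sketch of eager cleanup:

```typescript
import {
  ObjectDetectionModule,
  SSDLITE_320_MOBILENET_V3_LARGE,
} from 'react-native-executorch';

const detector = await ObjectDetectionModule.fromModelName(
  SSDLITE_320_MOBILENET_V3_LARGE
);
const detections = await detector.forward('path/to/image.png');

// Release the module's native memory eagerly once it is no longer needed.
detector.delete();

// Calling detector.forward() at this point would fail; load the module again first.
```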