useSemanticSegmentation
Semantic segmentation, like image classification, assigns the content of an image to predefined classes. In segmentation, however, the classification is done on a per-pixel basis, so the model outputs an image-sized array of scores for each class. You can then use this information to detect objects at pixel-level granularity. React Native ExecuTorch offers a dedicated hook, useSemanticSegmentation, for this task.
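To make "an image-sized array of scores for each class" concrete, the per-pixel classification can be sketched as an argmax over class scores. This is a minimal, library-independent illustration; the `scores[class][pixel]` layout here is a hypothetical example, not the library's internal format:

```typescript
// Given one score array per class (one score per pixel), pick the
// highest-scoring class index for each pixel -- the "argmax" map.
function argmaxPerPixel(scores: number[][]): Int32Array {
  const numClasses = scores.length;
  const numPixels = scores[0].length;
  const result = new Int32Array(numPixels);
  for (let p = 0; p < numPixels; p++) {
    let best = 0;
    for (let c = 1; c < numClasses; c++) {
      if (scores[c][p] > scores[best][p]) best = c;
    }
    result[p] = best;
  }
  return result;
}

// A 2x2 image with two classes: background (0) and cat (1).
const scores = [
  [0.9, 0.2, 0.8, 0.1], // class 0 scores, one per pixel
  [0.1, 0.8, 0.2, 0.9], // class 1 scores, one per pixel
];
console.log(Array.from(argmaxPerPixel(scores))); // [0, 1, 0, 1]
```

The hook's forward method returns exactly this kind of per-pixel class-index map (under the ARGMAX key), so you never need to compute it yourself.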
We recommend using the models we provide, which are available in our Hugging Face repository; you can also use the constants shipped with our library.
API Reference
- For the detailed API reference for useSemanticSegmentation, see: useSemanticSegmentation API Reference.
- For all semantic segmentation models available out-of-the-box in React Native ExecuTorch, see: Semantic Segmentation Models.
High Level Overview
import {
useSemanticSegmentation,
DEEPLAB_V3_RESNET50,
} from 'react-native-executorch';
const model = useSemanticSegmentation({
model: DEEPLAB_V3_RESNET50,
});
const imageUri = 'file:///Users/.../cute_cat.png';
try {
const result = await model.forward(imageUri);
// result.ARGMAX is an Int32Array of per-pixel class indices
} catch (error) {
console.error(error);
}
Arguments
useSemanticSegmentation takes SemanticSegmentationProps, which consists of:
- model - An object containing:
  - modelName - The name of a built-in model. See SemanticSegmentationModelSources for the list of supported models.
  - modelSource - The location of the model binary (a URL or a bundled resource).
- preventLoad (optional) - A flag which prevents auto-loading of the model.
The hook is generic over the model config — TypeScript automatically infers the correct label type based on the modelName you provide. No explicit generic parameter is needed.
Need more details? Check the following resources:
- For detailed information about useSemanticSegmentation arguments, check this section: useSemanticSegmentation arguments.
- For all semantic segmentation models available out-of-the-box in React Native ExecuTorch, see: Semantic Segmentation Models.
- For more information on loading resources, take a look at the loading models page.
Returns
useSemanticSegmentation returns a SemanticSegmentationType object containing:
- isReady - Whether the model is loaded and ready to process images.
- isGenerating - Whether the model is currently processing an image.
- error - An error object if the model failed to load or encountered a runtime error.
- downloadProgress - A value between 0 and 1 representing the download progress of the model binary.
- forward - A function to run inference on an image.
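These fields map naturally onto UI state. The following is a minimal, framework-free sketch that derives a status label from the documented fields; the label strings and the error type are illustrative assumptions, not part of the library's API:

```typescript
// Shape mirroring the documented hook state fields (illustrative types).
type SegmentationStatus = {
  isReady: boolean;
  isGenerating: boolean;
  error: Error | null;
  downloadProgress: number; // 0..1
};

// Derive a user-facing label from the hook state.
function statusLabel(s: SegmentationStatus): string {
  if (s.error) return `Error: ${s.error.message}`;
  if (!s.isReady) return `Downloading model: ${Math.round(s.downloadProgress * 100)}%`;
  if (s.isGenerating) return 'Segmenting...';
  return 'Ready';
}

console.log(
  statusLabel({ isReady: false, isGenerating: false, error: null, downloadProgress: 0.42 })
); // "Downloading model: 42%"
```

In a component you would feed the hook's state straight into such a helper to render a progress indicator while the model binary downloads.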
Running the model
To run the model, use the forward method. It accepts three arguments:
- imageSource (required) - The image to segment. Can be a remote URL, a local file URI, or a base64-encoded image (the whole URI or only the raw base64).
- classesOfInterest (optional) - An array of label keys indicating which per-class probability masks to include in the output. Defaults to [] (no class masks). The ARGMAX map is always returned regardless of this parameter.
- resizeToInput (optional) - Whether to resize the output masks to the original input image dimensions. Defaults to true. If false, returns the raw model output dimensions (e.g. 224x224 for DEEPLAB_V3_RESNET50).
Setting resizeToInput to false will make forward faster.
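With resizeToInput left at its default of true, the flat ARGMAX array can be read row-major against the original image dimensions: index i maps to pixel (x, y) = (i % width, floor(i / width)). The helpers below sketch this post-processing; the widths and the class index used here are illustrative, not tied to a specific model:

```typescript
// Count how many pixels in a row-major ARGMAX map belong to one class.
function countClassPixels(argmax: Int32Array, classIndex: number): number {
  let count = 0;
  for (let i = 0; i < argmax.length; i++) {
    if (argmax[i] === classIndex) count++;
  }
  return count;
}

// Convert a flat row-major index into (x, y) image coordinates.
function indexToXY(i: number, width: number): { x: number; y: number } {
  return { x: i % width, y: Math.floor(i / width) };
}

// A 3x2 ARGMAX map where class index 8 stands in for some class of interest.
const argmax = new Int32Array([0, 8, 8, 0, 8, 0]);
console.log(countClassPixels(argmax, 8)); // 3
console.log(indexToXY(4, 3)); // { x: 1, y: 1 }
```

This kind of pass is enough for simple tasks such as measuring how much of the image a class occupies or locating its pixels for an overlay.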
forward returns a promise resolving to an object containing:
- ARGMAX - An Int32Array where each element is the class index with the highest probability for that pixel.
- For each label included in classesOfInterest, a Float32Array of per-pixel probabilities for that class.
The return type is fully typed — TypeScript narrows it based on the labels you pass in classesOfInterest.
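A per-class Float32Array mask can be turned into a binary mask, or summarized as a coverage fraction, with a simple threshold. This is a hypothetical post-processing sketch; the 0.5 threshold is an arbitrary choice, not something the library prescribes:

```typescript
// Binarize a per-pixel probability mask at a given threshold.
function binarizeMask(probs: Float32Array, threshold = 0.5): Uint8Array {
  const out = new Uint8Array(probs.length);
  for (let i = 0; i < probs.length; i++) {
    out[i] = probs[i] >= threshold ? 1 : 0;
  }
  return out;
}

// Fraction of pixels where the class probability meets the threshold.
function coverage(probs: Float32Array, threshold = 0.5): number {
  let hits = 0;
  for (let i = 0; i < probs.length; i++) {
    if (probs[i] >= threshold) hits++;
  }
  return hits / probs.length;
}

// e.g. a tiny 2x2 probability mask for one class of interest.
const mask = new Float32Array([0.9, 0.1, 0.7, 0.3]);
console.log(Array.from(binarizeMask(mask))); // [1, 0, 1, 0]
console.log(coverage(mask)); // 0.5
```

A binary mask like this is the usual input to overlays, cut-outs, or connected-component analysis downstream of the model.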
Example
import {
useSemanticSegmentation,
DEEPLAB_V3_RESNET50,
DeeplabLabel,
} from 'react-native-executorch';
function App() {
const model = useSemanticSegmentation({
model: DEEPLAB_V3_RESNET50,
});
const handleSegment = async () => {
if (!model.isReady) return;
const imageUri = 'file:///Users/.../cute_cat.png';
try {
const result = await model.forward(imageUri, ['CAT', 'PERSON'], true);
// result.ARGMAX — Int32Array of per-pixel class indices
// result.CAT — Float32Array of per-pixel probabilities for CAT
// result.PERSON — Float32Array of per-pixel probabilities for PERSON
} catch (error) {
console.error(error);
}
};
// ...
}
Supported models
| Model | Number of classes | Class list | Quantized |
|---|---|---|---|
| deeplab-v3-resnet50 | 21 | DeeplabLabel | Yes |
| deeplab-v3-resnet101 | 21 | DeeplabLabel | Yes |
| deeplab-v3-mobilenet-v3-large | 21 | DeeplabLabel | Yes |
| lraspp-mobilenet-v3-large | 21 | DeeplabLabel | Yes |
| fcn-resnet50 | 21 | DeeplabLabel | Yes |
| fcn-resnet101 | 21 | DeeplabLabel | Yes |
| selfie-segmentation | 2 | SelfieSegmentationLabel | No |