Version: Next

Class: PoseEstimationModule<T>

Defined in: modules/computer_vision/PoseEstimationModule.ts:81

Pose estimation module for detecting human body keypoints.

Extends

  • VisionModule

Type Parameters

T

T extends PoseEstimationModelName | LabelEnum

Either a built-in model name (e.g. 'yolo26n-pose') or a custom LabelEnum keypoint map.

Properties

generateFromFrame()

generateFromFrame: (frameData, ...args) => any

Defined in: modules/BaseModule.ts:53

Process a camera frame directly for real-time inference.

This method is bound to a native JSI function after calling load(), making it worklet-compatible and safe to call from VisionCamera's frame processor thread.

Performance characteristics:

  • Zero-copy path: When using frame.getNativeBuffer() from VisionCamera v5, frame data is accessed directly without copying (fastest, recommended).
  • Copy path: When using frame.toArrayBuffer(), pixel data is copied from native to JS, then accessed from native code (slower, fallback).

Usage with VisionCamera:

```tsx
const frameOutput = useFrameOutput({
  pixelFormat: 'rgb',
  onFrame(frame) {
    'worklet';
    // Zero-copy approach (recommended)
    const nativeBuffer = frame.getNativeBuffer();
    const result = model.generateFromFrame(
      { nativeBuffer: nativeBuffer.pointer, width: frame.width, height: frame.height },
      ...args
    );
    nativeBuffer.release();
    frame.dispose();
  },
});
```

Parameters

frameData

Frame

Frame data object with either nativeBuffer (zero-copy) or data (ArrayBuffer)

args

...any[]

Additional model-specific arguments (e.g., threshold, options)

Returns

any

Model-specific output (e.g., detections, classifications, embeddings)

See

Frame for frame data format details

Inherited from

VisionModule.generateFromFrame
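The two accepted shapes of the frame data object can be sketched as a discriminated union. The field layout below is inferred from the usage example and parameter description above; the library's Frame type is the authoritative definition:

```typescript
// Hypothetical sketch of the two frameData shapes described above.
type ZeroCopyFrame = {
  nativeBuffer: number; // pointer from frame.getNativeBuffer().pointer
  width: number;
  height: number;
};

type CopiedFrame = {
  data: ArrayBuffer; // pixel data from frame.toArrayBuffer()
  width: number;
  height: number;
};

type FrameInput = ZeroCopyFrame | CopiedFrame;

// Narrow to the faster zero-copy path when a native buffer is present.
function isZeroCopy(frame: FrameInput): frame is ZeroCopyFrame {
  return 'nativeBuffer' in frame;
}
```

The union mirrors the two performance paths above: prefer constructing a ZeroCopyFrame whenever getNativeBuffer() is available.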


nativeModule

nativeModule: any = null

Defined in: modules/BaseModule.ts:16

Internal

Native module instance (JSI Host Object)

Inherited from

VisionModule.nativeModule

Accessors

runOnFrame

Get Signature

get runOnFrame(): (frame, isFrontCamera, options?) => PoseDetections<ResolveConfigOrType<T, { yolo26n-pose: { availableInputSizes: readonly [384, 512, 640]; defaultDetectionThreshold: number; defaultInputSize: number; defaultKeypointThreshold: number; keypointMap: typeof CocoKeypoint; preprocessorConfig: undefined; }; }, "keypointMap">>

Defined in: modules/computer_vision/PoseEstimationModule.ts:190

Override runOnFrame to provide an options-based API for VisionCamera integration.

Returns

A worklet function for frame processing.

(frame, isFrontCamera, options?): PoseDetections<ResolveConfigOrType<T, { yolo26n-pose: { availableInputSizes: readonly [384, 512, 640]; defaultDetectionThreshold: number; defaultInputSize: number; defaultKeypointThreshold: number; keypointMap: typeof CocoKeypoint; preprocessorConfig: undefined; }; }, "keypointMap">>

Parameters
frame

Frame

isFrontCamera

boolean

options?

PoseEstimationOptions

Returns

PoseDetections<ResolveConfigOrType<T, { yolo26n-pose: { availableInputSizes: readonly [384, 512, 640]; defaultDetectionThreshold: number; defaultInputSize: number; defaultKeypointThreshold: number; keypointMap: typeof CocoKeypoint; preprocessorConfig: undefined; }; }, "keypointMap">>

Overrides

VisionModule.runOnFrame

Methods

delete()

delete(): void

Defined in: modules/BaseModule.ts:81

Unloads the model from memory and releases native resources.

Always call this method when you're done with a model to prevent memory leaks.

Returns

void

Inherited from

VisionModule.delete
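Because delete() must run in every code path, a try/finally wrapper is a convenient pattern. The helper below is a sketch that assumes only that the module exposes delete(); withModule is a hypothetical name, not part of this API:

```typescript
// Hypothetical helper: run async work against a module, guaranteeing that
// delete() releases native resources even if the work throws.
async function withModule<M extends { delete(): void }, R>(
  module: M,
  work: (m: M) => Promise<R>
): Promise<R> {
  try {
    return await work(module);
  } finally {
    module.delete(); // always unload the model, success or failure
  }
}
```

With this pattern, e.g. wrapping a forward() call, the model cannot be leaked by an early return or a thrown error.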


forward()

forward(input, options?): Promise<PoseDetections<ResolveConfigOrType<T, { yolo26n-pose: { availableInputSizes: readonly [384, 512, 640]; defaultDetectionThreshold: number; defaultInputSize: number; defaultKeypointThreshold: number; keypointMap: typeof CocoKeypoint; preprocessorConfig: undefined; }; }, "keypointMap">>>

Defined in: modules/computer_vision/PoseEstimationModule.ts:271

Run pose estimation on an image.

Parameters

input

string | PixelData

Image path or URI, or raw PixelData.

options?

PoseEstimationOptions

Detection options, including inputSize for models that support multiple input sizes.

Returns

Promise<PoseDetections<ResolveConfigOrType<T, { yolo26n-pose: { availableInputSizes: readonly [384, 512, 640]; defaultDetectionThreshold: number; defaultInputSize: number; defaultKeypointThreshold: number; keypointMap: typeof CocoKeypoint; preprocessorConfig: undefined; }; }, "keypointMap">>>

Array of detected people, each with keypoints accessible via the keypoint enum

Overrides

VisionModule.forward


forwardET()

protected forwardET(inputTensor): Promise<TensorPtr[]>

Defined in: modules/BaseModule.ts:62

Internal

Runs the model's forward method with the given input tensors and returns output tensors that mirror the structure of ExecuTorch's output.

Parameters

inputTensor

TensorPtr[]

Array of input tensors.

Returns

Promise<TensorPtr[]>

Array of output tensors.

Inherited from

VisionModule.forwardET


getAvailableInputSizes()

getAvailableInputSizes(): readonly number[] | undefined

Defined in: modules/computer_vision/PoseEstimationModule.ts:182

Returns the available input sizes for this model, or undefined if the model accepts any size.

Returns

readonly number[] | undefined

A readonly number[] listing the input sizes the model supports, or undefined if the model accepts any size.
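As an illustration, the returned list can be used to snap a requested size to a supported one. pickInputSize is a hypothetical helper, not part of this API:

```typescript
// Hypothetical helper: choose the supported input size closest to the request,
// passing the request through unchanged when the model accepts any size.
function pickInputSize(
  sizes: readonly number[] | undefined,
  requested: number
): number {
  if (sizes === undefined) return requested;
  return sizes.reduce((best, size) =>
    Math.abs(size - requested) < Math.abs(best - requested) ? size : best
  );
}

// 'yolo26n-pose' reports [384, 512, 640] per the signatures on this page.
pickInputSize([384, 512, 640], 500); // → 512
pickInputSize(undefined, 500); // → 500
```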


getInputShape()

getInputShape(methodName, index): Promise<number[]>

Defined in: modules/BaseModule.ts:72

Gets the input shape for a given method and index.

Parameters

methodName

string

The method name.

index

number

Index of the argument whose shape is requested.

Returns

Promise<number[]>

The input shape as an array of numbers.

Inherited from

VisionModule.getInputShape


getKeypointMap()

getKeypointMap(): ResolveConfigOrType<T>

Defined in: modules/computer_vision/PoseEstimationModule.ts:174

Get the keypoint map for this model.

Returns

ResolveConfigOrType<T>

Map of keypoint names to indices, e.g. { NOSE: 0, LEFT_EYE: 1, ... }.
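The map makes keypoint indices self-documenting. A minimal sketch of looking up a detection's keypoint by name; the Keypoint shape and the keypointByName helper are illustrative assumptions, not this library's types:

```typescript
// Illustrative keypoint shape; the library's actual detection type may differ.
type Keypoint = { x: number; y: number; score: number };

// Look up a keypoint by its name in the model's keypoint map.
function keypointByName(
  keypoints: readonly Keypoint[],
  map: Readonly<Record<string, number>>,
  name: string
): Keypoint | undefined {
  const index: number | undefined = map[name];
  return index === undefined ? undefined : keypoints[index];
}

// Subset of the COCO map returned for built-in models, e.g. { NOSE: 0, LEFT_EYE: 1, ... }.
const cocoSubset = { NOSE: 0, LEFT_EYE: 1 } as const;
const detection: Keypoint[] = [
  { x: 120, y: 80, score: 0.97 },
  { x: 110, y: 72, score: 0.95 },
];

keypointByName(detection, cocoSubset, 'NOSE'); // → { x: 120, y: 80, score: 0.97 }
```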


fromCustomModel()

static fromCustomModel<K>(modelSource, config, onDownloadProgress?): Promise<PoseEstimationModule<K>>

Defined in: modules/computer_vision/PoseEstimationModule.ts:147

Creates a pose estimation instance with a user-provided model binary and keypoint map. Use this when working with a custom-exported model that is not one of the built-in presets.

Type Parameters

K

K extends Readonly<Record<string, string | number>>

Parameters

modelSource

ResourceSource

A fetchable resource pointing to the model binary.

config

PoseEstimationConfig<K>

A PoseEstimationConfig object with the keypoint map and optional preprocessing parameters.

onDownloadProgress?

(progress) => void

Optional callback to monitor download progress (0-1).

Returns

Promise<PoseEstimationModule<K>>

A Promise resolving to a PoseEstimationModule instance typed to the provided keypoint map.
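For example, a custom keypoint map only needs to satisfy Readonly<Record<string, string | number>>. The hand-pose names below are purely illustrative; keypointMap is the config field shown in the signatures on this page, and any other config fields would follow PoseEstimationConfig:

```typescript
// Illustrative custom keypoint map for a hypothetical hand-pose model.
// Any Readonly<Record<string, string | number>> satisfies the K type parameter.
const HandKeypoint = {
  WRIST: 0,
  THUMB_TIP: 1,
  INDEX_TIP: 2,
  PINKY_TIP: 3,
} as const;

type HandKeypointMap = typeof HandKeypoint;
```

A module created via fromCustomModel with { keypointMap: HandKeypoint } in its config would then be typed as PoseEstimationModule<HandKeypointMap>, so detections index keypoints by these names.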


fromModelName()

static fromModelName<C>(namedSources, onDownloadProgress?): Promise<PoseEstimationModule<ModelNameOf<C>>>

Defined in: modules/computer_vision/PoseEstimationModule.ts:113

Creates a pose estimation instance for a built-in model.

Type Parameters

C

C extends PoseEstimationModelSources

Parameters

namedSources

C

A PoseEstimationModelSources object specifying which model to load.

onDownloadProgress?

(progress) => void

Optional callback to monitor download progress (0-1).

Returns

Promise<PoseEstimationModule<ModelNameOf<C>>>

A Promise resolving to a PoseEstimationModule instance typed to the model's keypoint map.