
useOCR

Optical character recognition (OCR) is a computer vision technique that detects and recognizes text within an image. It is commonly used to convert documents such as scanned paper documents, PDF files, or images captured by a digital camera into editable and searchable data.

caution

We recommend using the models provided by us, which are available in our Hugging Face repository. You can also use the constants shipped with our library.

Reference

import { useOCR, OCR_ENGLISH } from 'react-native-executorch';

function App() {
  const model = useOCR({ model: OCR_ENGLISH });

  // ...
  const runModel = async () => {
    for (const ocrDetection of await model.forward('https://url-to-image.jpg')) {
      console.log('Bounding box: ', ocrDetection.bbox);
      console.log('Text: ', ocrDetection.text);
      console.log('Score: ', ocrDetection.score);
    }
  };
  // ...
}

Type definitions

interface RecognizerSources {
  recognizerLarge: string | number;
  recognizerMedium: string | number;
  recognizerSmall: string | number;
}

type OCRLanguage =
| 'abq'
| 'ady'
| 'af'
| 'ava'
| 'az'
| 'be'
| 'bg'
| 'bs'
| 'chSim'
| 'che'
| 'cs'
| 'cy'
| 'da'
| 'dar'
| 'de'
| 'en'
| 'es'
| 'et'
| 'fr'
| 'ga'
| 'hr'
| 'hu'
| 'id'
| 'inh'
| 'ic'
| 'it'
| 'ja'
| 'kbd'
| 'kn'
| 'ko'
| 'ku'
| 'la'
| 'lbe'
| 'lez'
| 'lt'
| 'lv'
| 'mi'
| 'mn'
| 'ms'
| 'mt'
| 'nl'
| 'no'
| 'oc'
| 'pi'
| 'pl'
| 'pt'
| 'ro'
| 'ru'
| 'rsCyrillic'
| 'rsLatin'
| 'sk'
| 'sl'
| 'sq'
| 'sv'
| 'sw'
| 'tab'
| 'te'
| 'th'
| 'tjk'
| 'tl'
| 'tr'
| 'uk'
| 'uz'
| 'vi';

interface Point {
  x: number;
  y: number;
}

interface OCRDetection {
  bbox: Point[];
  text: string;
  score: number;
}

Arguments

model - Object containing the detector source, recognizer sources, and language.

  • detectorSource - The location of the detector binary: a URL or file path string, or a bundled resource reference.
  • recognizerLarge - The location of the recognizer binary that accepts input images with a width of 512 pixels.
  • recognizerMedium - The location of the recognizer binary that accepts input images with a width of 256 pixels.
  • recognizerSmall - The location of the recognizer binary that accepts input images with a width of 128 pixels.
  • language - The language of the text to be recognized by the OCR.

preventLoad? - Boolean that prevents automatic model loading (and, on first use, downloading the model data) after running the hook.

For more information on loading resources, take a look at the loading models page.
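The three recognizer sizes correspond to the input widths listed above (512, 256, and 128 pixels). As an illustration only, the sketch below shows a hypothetical helper (not part of the library) that maps a detected text crop's width to the smallest recognizer bucket that fits it; the library's actual routing logic may differ.

```typescript
// Hypothetical helper, not part of react-native-executorch: pick which
// recognizer width bucket a detected text crop would fall into, assuming
// crops are routed to the smallest recognizer that fits their width.
type RecognizerWidth = 128 | 256 | 512;

function pickRecognizerWidth(cropWidth: number): RecognizerWidth {
  if (cropWidth <= 128) return 128;
  if (cropWidth <= 256) return 256;
  return 512;
}
```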

Returns

The hook returns an object with the following properties:

| Field | Type | Description |
| --- | --- | --- |
| forward | (imageSource: string) => Promise<OCRDetection[]> | A function that accepts an image (url, b64) and returns an array of OCRDetection objects. |
| error | string \| null | Contains the error message if the model loading failed. |
| isGenerating | boolean | Indicates whether the model is currently processing an inference. |
| isReady | boolean | Indicates whether the model has successfully loaded and is ready for inference. |
| downloadProgress | number | Represents the download progress as a value between 0 and 1. |

Running the model

To run the model, you can use the forward method. It accepts one argument, which is the image. The image can be a remote URL, a local file URI, or a base64-encoded image. The function returns an array of OCRDetection objects. Each object contains coordinates of the bounding box, the text recognized within the box, and the confidence score. For more information, please refer to the reference or type definitions.
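Because each detection carries a confidence score, a common post-processing step is to drop low-confidence results and join the remaining text. The sketch below is library-independent (the interfaces mirror the type definitions above, and the 0.5 threshold is an arbitrary example, not a library default):

```typescript
interface Point {
  x: number;
  y: number;
}

interface OCRDetection {
  bbox: Point[];
  text: string;
  score: number;
}

// Keep detections above a confidence threshold and join their text
// in the order returned by the model.
function extractText(detections: OCRDetection[], minScore = 0.5): string {
  return detections
    .filter((d) => d.score >= minScore)
    .map((d) => d.text)
    .join(' ');
}
```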

Detection object

The detection object is specified as follows:

interface Point {
  x: number;
  y: number;
}

interface OCRDetection {
  bbox: Point[];
  text: string;
  score: number;
}

The bbox property describes the bounding box of a detected text region as four points, the corners of the detected quadrilateral. The text property contains the text recognized within that region, and score is the confidence score of the recognition.
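Since bbox is a list of four corner points rather than a left/top/width/height rectangle, you may want to convert it before drawing an overlay. A small helper, assuming the Point shape from the definitions above:

```typescript
interface Point {
  x: number;
  y: number;
}

// Convert a four-corner bounding box into an axis-aligned rectangle,
// e.g. for positioning an overlay view on top of the image.
function toRect(bbox: Point[]): {
  left: number;
  top: number;
  width: number;
  height: number;
} {
  const xs = bbox.map((p) => p.x);
  const ys = bbox.map((p) => p.y);
  const left = Math.min(...xs);
  const top = Math.min(...ys);
  return {
    left,
    top,
    width: Math.max(...xs) - left,
    height: Math.max(...ys) - top,
  };
}
```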

Example

import { useOCR, OCR_ENGLISH } from 'react-native-executorch';

function App() {
  const model = useOCR({ model: OCR_ENGLISH });

  const runModel = async () => {
    const ocrDetections = await model.forward('https://url-to-image.jpg');

    for (const ocrDetection of ocrDetections) {
      console.log('Bounding box: ', ocrDetection.bbox);
      console.log('Text: ', ocrDetection.text);
      console.log('Score: ', ocrDetection.score);
    }
  };
}

Language-Specific Recognizers

Each supported language requires its own set of recognizer models.
The built-in constants such as RECOGNIZER_EN_CRNN_512, RECOGNIZER_PL_CRNN_256, etc., point to specific models trained for a particular language.

For example:

  • To recognize English text, use:
    • RECOGNIZER_EN_CRNN_512
    • RECOGNIZER_EN_CRNN_256
    • RECOGNIZER_EN_CRNN_128
  • To recognize Polish text, use:
    • RECOGNIZER_PL_CRNN_512
    • RECOGNIZER_PL_CRNN_256
    • RECOGNIZER_PL_CRNN_128

You need to make sure the recognizer models you pass in recognizerSources match the language you specify.
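Putting this together, a configuration for Polish could look like the sketch below. Treat it as an illustration, not a verbatim API: DETECTOR_CRAFT_800 is an assumed constant name for the detector, and the exact nesting of the model object (in particular the recognizerSources grouping) should be checked against the type definitions above.

```typescript
import {
  useOCR,
  DETECTOR_CRAFT_800, // assumed constant name for the CRAFT detector
  RECOGNIZER_PL_CRNN_512,
  RECOGNIZER_PL_CRNN_256,
  RECOGNIZER_PL_CRNN_128,
} from 'react-native-executorch';

function App() {
  const model = useOCR({
    model: {
      detectorSource: DETECTOR_CRAFT_800,
      recognizerSources: {
        recognizerLarge: RECOGNIZER_PL_CRNN_512,
        recognizerMedium: RECOGNIZER_PL_CRNN_256,
        recognizerSmall: RECOGNIZER_PL_CRNN_128,
      },
      // Must match the language of the recognizer models above.
      language: 'pl',
    },
  });

  // ...
}
```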

Supported languages

| Language | Code |
| --- | --- |
| Abaza | abq |
| Adyghe | ady |
| Afrikaans | af |
| Avar | ava |
| Azerbaijani | az |
| Belarusian | be |
| Bulgarian | bg |
| Bosnian | bs |
| Simplified Chinese | chSim |
| Chechen | che |
| Czech | cs |
| Welsh | cy |
| Danish | da |
| Dargwa | dar |
| German | de |
| English | en |
| Spanish | es |
| Estonian | et |
| French | fr |
| Irish | ga |
| Croatian | hr |
| Hungarian | hu |
| Indonesian | id |
| Ingush | inh |
| Icelandic | ic |
| Italian | it |
| Japanese | ja |
| Kabardian | kbd |
| Kannada | kn |
| Korean | ko |
| Kurdish | ku |
| Latin | la |
| Lak | lbe |
| Lezghian | lez |
| Lithuanian | lt |
| Latvian | lv |
| Maori | mi |
| Mongolian | mn |
| Malay | ms |
| Maltese | mt |
| Dutch | nl |
| Norwegian | no |
| Occitan | oc |
| Pali | pi |
| Polish | pl |
| Portuguese | pt |
| Romanian | ro |
| Russian | ru |
| Serbian (Cyrillic) | rsCyrillic |
| Serbian (Latin) | rsLatin |
| Slovak | sk |
| Slovenian | sl |
| Albanian | sq |
| Swedish | sv |
| Swahili | sw |
| Tabassaran | tab |
| Telugu | te |
| Thai | th |
| Tajik | tjk |
| Tagalog | tl |
| Turkish | tr |
| Ukrainian | uk |
| Uzbek | uz |
| Vietnamese | vi |

Supported models

| Model | Type |
| --- | --- |
| CRAFT_800* | Detector |
| CRNN_512* | Recognizer |
| CRNN_256* | Recognizer |
| CRNN_128* | Recognizer |

* - The number following the underscore (_) indicates the input image width used during model export.

Benchmarks

Model size

| Model | XNNPACK [MB] |
| --- | --- |
| Detector (CRAFT_800) | 83.1 |
| Recognizer (CRNN_512) | 15 - 18* |
| Recognizer (CRNN_256) | 16 - 18* |
| Recognizer (CRNN_128) | 17 - 19* |

* - The model weights vary depending on the language.

Memory usage

| Model | Android (XNNPACK) [MB] | iOS (XNNPACK) [MB] |
| --- | --- | --- |
| Detector (CRAFT_800) + Recognizer (CRNN_512) + Recognizer (CRNN_256) + Recognizer (CRNN_128) | 1600 | 1700 |

Inference time

Image Used for Benchmarking:

(Figures: the original image, and the image with detected text boxes.)
warning

Times presented in the tables are measured as consecutive runs of the model. Initial run times may be up to 2x longer due to model loading and initialization.

Time measurements:

| Metric | iPhone 14 Pro Max [ms] | iPhone 16 Pro [ms] | iPhone SE 3 | Samsung Galaxy S24 [ms] | OnePlus 12 [ms] |
| --- | --- | --- | --- | --- | --- |
| Total Inference Time | 4330 | 2537 | ❌ | 6648 | 5993 |
| Detector (CRAFT_800) | 1945 | 1809 | ❌ | 2080 | 1961 |
| Recognizer (CRNN_512) | | | | | |
| ├─ Average Time | 273 | 76 | ❌ | 289 | 252 |
| ├─ Total Time (3 runs) | 820 | 229 | ❌ | 867 | 756 |
| Recognizer (CRNN_256) | | | | | |
| ├─ Average Time | 137 | 39 | ❌ | 260 | 229 |
| ├─ Total Time (7 runs) | 958 | 271 | ❌ | 1818 | 1601 |
| Recognizer (CRNN_128) | | | | | |
| ├─ Average Time | 68 | 18 | ❌ | 239 | 214 |
| ├─ Total Time (7 runs) | 478 | 124 | ❌ | 1673 | 1498 |

❌ - Insufficient RAM.