React Native ExecuTorch
A declarative way to run AI models and LLMs on-device in React Native, powered by ExecuTorch.
What is React Native ExecuTorch?
React Native ExecuTorch brings Meta’s ExecuTorch AI framework into the React Native ecosystem, enabling developers to run AI models and LLMs directly on mobile devices. It provides a declarative API for on-device inference, allowing you to use local AI models without relying on cloud infrastructure. Built on ExecuTorch, part of the PyTorch Edge ecosystem, it extends efficient on-device AI deployment to cross-platform mobile applications in React Native.
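To make "declarative" concrete, here is a minimal sketch of a chat screen. It assumes the `useLLM` hook and `LLAMA3_2_1B` model constant that the library exposes; the exact option and field names used here (`modelSource`, `isReady`, `response`, `generate`) are assumptions that may differ between library versions, so treat this as an illustration of the shape rather than a copy-paste snippet.

```tsx
import React from 'react';
import { Button, Text, View } from 'react-native';
// Hook and model constant as exported by react-native-executorch; the
// option and field names below are assumptions and may vary by version.
import { useLLM, LLAMA3_2_1B } from 'react-native-executorch';

export function ChatScreen() {
  // Declaring the model is all the setup required: the hook handles
  // downloading, loading, and on-device initialization.
  const llm = useLLM({ modelSource: LLAMA3_2_1B });

  return (
    <View>
      <Button
        title="Ask"
        disabled={!llm.isReady}
        // Generation runs entirely on device; no request leaves the phone.
        onPress={() => llm.generate('Explain on-device AI in one sentence.')}
      />
      {/* Assumed field: response holds the generated text as it streams. */}
      <Text>{llm.response}</Text>
    </View>
  );
}
```

The hook owns the model lifecycle, so a component only declares which model it wants and reacts to state updates; that is what the declarative API means in practice.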
Why React Native ExecuTorch?
Privacy first
React Native ExecuTorch allows on-device execution of AI models, eliminating the need for external API calls. This means your app's data stays on the device, ensuring maximum privacy for your users.
Cost effective
Because React Native ExecuTorch runs models on the device, there is no cloud infrastructure to worry about. This reduces server costs and minimizes latency.
Model variety
We support a wide variety of models, including LLMs such as Qwen 3, Llama 3.2, SmolLM 2, and Hammer 2.1, as well as CLIP for image embeddings, Whisper for automatic speech recognition (ASR), and a selection of computer vision models. The sketch below shows how one of these plugs into a component.
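The same hook pattern applies across model types. As a rough sketch, an image classifier can be wrapped like this; the `useClassification` hook, `EFFICIENTNET_V2_S` constant, and `forward()` return shape follow the library's documented naming but are assumptions here, so verify against the current docs.

```tsx
import { useClassification, EFFICIENTNET_V2_S } from 'react-native-executorch';

// Sketch only: the hook name, model constant, and forward() signature follow
// the library's documented naming but are assumptions; check the docs.
export function useTopLabel() {
  const model = useClassification({ modelSource: EFFICIENTNET_V2_S });

  // Classify a local image URI and return the highest-scoring label.
  return async (imageUri: string): Promise<string | null> => {
    if (!model.isReady) return null;
    // Assumed return shape: a label -> probability map.
    const scores = (await model.forward(imageUri)) as Record<string, number>;
    const ranked = Object.entries(scores).sort(([, a], [, b]) => b - a);
    return ranked[0]?.[0] ?? null;
  };
}
```

Whatever the model type, the pattern stays the same: a hook loads the model, exposes readiness as state, and runs inference on device.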
Developer friendly
There's no need for deep AI expertise; we handle the complexities of running AI models on the native side, making it simple for developers to use these models in React Native.