Local AI inference, on every platform you ship to
DVAI Bridge is one API across browser, Node, iOS, Android, .NET, Flutter, React Native, and Capacitor. Plug in any of nine on-device inference backends. Free for personal use. Each release becomes Apache 2.0 after three years.
Nine backends, one API
llama.cpp, MediaPipe, MLX, Transformers.js, WebLLM, LiteRT, ONNX Runtime, ML.NET, and llamafile — all behind a single OpenAI-compatible HTTP shim.
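Because the shim speaks the standard OpenAI chat-completions wire format, existing OpenAI client code can point straight at it. A minimal sketch in TypeScript: the loopback host, port, and model id below are placeholders your app would choose; only the wire format is fixed by the shim.

```ts
// Talk to the in-process shim exactly as you would talk to the OpenAI API.
// 127.0.0.1:8033 and the model id are assumptions for this sketch.
const res = await fetch("http://127.0.0.1:8033/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "llama-3.2-1b-instruct", // whichever model the chosen backend loaded
    messages: [{ role: "user", content: "Summarise this note in one line." }],
  }),
});

const data = await res.json();
console.log(data.choices[0].message.content);
```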
Seven SDKs
TypeScript, Swift, Kotlin, Dart, C#, plus React Native and Capacitor wrappers. Every SDK exposes the same surface; agents written once run everywhere.
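The published SDK signatures aren't shown on this page, so here is a hypothetical sketch of what that shared surface could look like from TypeScript; the package name and every identifier are assumptions, with the Swift, Kotlin, Dart, and C# SDKs mirroring the same calls.

```ts
// Hypothetical shared surface; "@dvai/bridge", createBridge, and chat
// are illustrative names, not the published API.
import { createBridge } from "@dvai/bridge";

const bridge = await createBridge({ backend: "webllm" }); // any of the nine backends
const reply = await bridge.chat([
  { role: "user", content: "Hello from the browser" },
]);
console.log(reply.content);
```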
Zero install for users
Bridge is a library, not a product. It runs inside your app's process — no sidecar, no Ollama, no Docker, no separate server for your users to install.
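In practice the model's lifecycle is tied to your app's. A hedged sketch, reusing the hypothetical API above: the bridge can expose its OpenAI-compatible shim on loopback from inside the process, so nothing is listening when the app isn't running and there is nothing extra to ship.

```ts
// Hypothetical: serve the OpenAI-compatible shim from inside your own
// process. No daemon, no container, nothing for the end user to install.
import { createBridge } from "@dvai/bridge";

const bridge = await createBridge({ backend: "llama.cpp" });
const server = await bridge.listen({ host: "127.0.0.1", port: 8033 });

// The model lives and dies with the app process.
process.on("beforeExit", () => server.close());
```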
Distributed inference
v3+ ships a transparent offload layer: a weak device (say, a phone) can hand inference to a stronger device the same user owns (say, a laptop), discovering it via mDNS on the LAN or through a self-hosted rendezvous server.
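How the offload layer is configured isn't documented in this excerpt; the sketch below continues the hypothetical TypeScript API from above. The mDNS discovery, self-hosted rendezvous server, and local fallback are the behaviours described in the paragraph; the option names are invented.

```ts
// Hypothetical offload configuration; option names are assumptions.
import { createBridge } from "@dvai/bridge";

const bridge = await createBridge({
  backend: "litert",                          // on-device fallback for the phone
  offload: {
    discovery: "mdns",                        // find the user's laptop on the LAN
    rendezvous: "https://relay.example.com",  // or go through a self-hosted server
    fallback: "local",                        // run on-device if no peer answers
  },
});
```

Requests made through `bridge` are then routed to the stronger peer when one is reachable and run locally otherwise, with no change to the calling code.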
Licensing
Free for personal, educational, academic, evaluation, and internal-only use. Commercial use carries a tiered royalty: the first £90k of revenue is free, then 4% / 6% / 8% across revenue bands. We publish pricing openly.
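Assuming the royalty applies marginally within each band (the band boundaries aren't listed here, so the caps below are placeholders, not published figures), the arithmetic looks like this:

```ts
// Marginal band arithmetic. The £90k allowance and the 4% / 6% / 8% rates
// come from the pricing above; BAND_1_CAP and BAND_2_CAP are placeholders.
const FREE_ALLOWANCE = 90_000;
const BAND_1_CAP = 1_000_000;   // placeholder boundary
const BAND_2_CAP = 10_000_000;  // placeholder boundary

function royalty(revenue: number): number {
  const bands: Array<[number, number]> = [
    [BAND_1_CAP, 0.04],
    [BAND_2_CAP, 0.06],
    [Infinity, 0.08],
  ];
  let due = 0;
  let floor = FREE_ALLOWANCE;   // nothing is owed below the allowance
  for (const [cap, rate] of bands) {
    if (revenue <= floor) break;
    due += (Math.min(revenue, cap) - floor) * rate;
    floor = cap;
  }
  return due;
}

royalty(50_000);   // 0: under the allowance
royalty(200_000);  // 4_400: 4% of the £110k above the allowance
```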
See the licensing model →
Contributing
Pull requests, bug reports, and feature ideas welcome. Contributors sign a short Apache-style CLA so we can keep the dual-licensing model intact while they retain copyright in their contributions.
Read the CLA →