
Running AI models on your phone without an internet connection used to be a research project. Google just made it a gallery you can browse. Google AI Edge Gallery — trending with 19,000+ stars — is an open-source collection of on-device ML and GenAI use cases that you can try and deploy locally.
What Is Google AI Edge Gallery?
Google AI Edge Gallery is a showcase of machine learning and generative AI models that run entirely on-device — on your phone, tablet, or edge hardware. No cloud, no API calls, no network round-trips, and no data leaving the device.
The gallery lets you:
- Browse use cases — see what’s possible with on-device AI
- Try models locally — download and run models on your own device
- Use in your apps — integrate the models into your Android/iOS applications
Why On-Device AI Matters
Cloud AI is powerful but has fundamental limitations:
- Latency — every API call adds network delay. On-device inference skips the network hop entirely, so responses are bounded only by your hardware.
- Privacy — your data never leaves the device. Critical for health, finance, and personal data.
- Cost — no API fees. Run the model as many times as you want for free.
- Offline — works without internet. Essential for rural areas, travel, and unreliable connectivity.
- Scale — billions of devices can run AI simultaneously without server infrastructure.
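The latency and cost points above can be made concrete with back-of-envelope arithmetic. All figures below are illustrative assumptions, not measured benchmarks — real API prices and network delays vary widely.

```python
# Back-of-envelope comparison of cloud vs. on-device inference.
# Every constant here is a hypothetical, illustrative figure.

CLOUD_COST_PER_CALL = 0.002      # hypothetical $/request for a hosted model
NETWORK_ROUND_TRIP_MS = 150      # hypothetical mobile-network latency
ON_DEVICE_LATENCY_MS = 40        # hypothetical local inference time

def monthly_cloud_cost(calls_per_day: int, days: int = 30) -> float:
    """Cloud spend grows linearly with usage; on-device stays at $0."""
    return calls_per_day * days * CLOUD_COST_PER_CALL

def total_latency_ms(on_device: bool) -> float:
    """On-device skips the network hop entirely."""
    if on_device:
        return ON_DEVICE_LATENCY_MS
    return NETWORK_ROUND_TRIP_MS + ON_DEVICE_LATENCY_MS

print(f"Cloud cost at 1,000 calls/day: ${monthly_cloud_cost(1000):.2f}/month")
print(f"Cloud latency:     {total_latency_ms(on_device=False):.0f} ms")
print(f"On-device latency: {total_latency_ms(on_device=True):.0f} ms")
```

The shape of the result is what matters: cloud costs scale with every request, while the on-device column stays flat at zero regardless of volume.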
What’s in the Gallery
The gallery includes models for:
- Text generation — on-device chatbots and text completion
- Image classification — identify objects, scenes, and activities in photos
- Object detection — real-time detection of objects in camera feed
- Pose estimation — track body movements for fitness, gaming, and accessibility
- Text-to-speech — natural voice synthesis without cloud APIs
- Translation — offline translation between languages
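Despite the variety, most of these use cases share the same load → preprocess → invoke → postprocess flow. Here is a minimal sketch of that pipeline; the "model" is a toy stub standing in for a real on-device runtime interpreter, and the label set is hypothetical.

```python
# The load -> preprocess -> invoke -> postprocess flow shared by most
# on-device vision models. stub_interpreter stands in for a real
# runtime; LABELS and its scoring rule are invented for illustration.

from typing import Callable, List

LABELS = ["cat", "dog", "plant"]  # hypothetical label set

def stub_interpreter(pixels: List[float]) -> List[float]:
    """Stand-in for a real model: scores each label by a toy rule."""
    brightness = sum(pixels) / len(pixels)
    return [brightness, 1 - brightness, 0.1]

def preprocess(raw: List[int]) -> List[float]:
    """Scale 0-255 pixel values to the 0-1 range models typically expect."""
    return [p / 255.0 for p in raw]

def classify(raw: List[int], model: Callable = stub_interpreter) -> str:
    """Invoke the model on preprocessed input, return the top label."""
    scores = model(preprocess(raw))
    return LABELS[scores.index(max(scores))]

print(classify([200, 220, 240]))  # bright input -> "cat" under the toy rule
```

Swapping the stub for a real interpreter changes the middle step only; the preprocessing and postprocessing scaffolding around it stays the same, which is why the Gallery's sample code transfers so readily between use cases.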
LiteRT-LM: The Engine Behind It
Also trending is LiteRT-LM (2,800+ stars) — Google’s runtime for running large language models on edge devices. It’s the engine that makes on-device GenAI possible, optimizing models to run efficiently on mobile hardware.
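A defining trait of on-device LLM runtimes is streaming: tokens are surfaced one at a time so the UI can render partial responses while decoding continues. The sketch below shows that pattern with a stub decoder — a real runtime such as LiteRT-LM would produce tokens from actual model weights, and the callback-based interface here is an assumption for illustration.

```python
# Token-by-token streaming: the pattern on-device LLM runtimes expose
# so the UI can render text as it decodes. stub_decode is a stand-in
# that echoes canned tokens; a real runtime decodes from model weights.

from typing import Callable, Iterator

def stub_decode(prompt: str) -> Iterator[str]:
    """Stand-in decoder: yields canned tokens for any prompt."""
    for token in ["On-device ", "inference ", "needs ", "no ", "cloud."]:
        yield token

def stream_response(prompt: str, on_token: Callable[[str], None]) -> str:
    """Feed tokens to the UI callback as they arrive; return full text."""
    pieces = []
    for token in stub_decode(prompt):
        pieces.append(token)
        on_token(token)  # the UI updates here, before decoding finishes
    return "".join(pieces)

reply = stream_response("Why run models locally?", on_token=print)
print(reply)
```

Because the first token arrives after a fraction of the total decode time, streaming is what makes a sub-second "time to first word" feel possible even when generating a full paragraph takes longer.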
Building On-Device AI Apps
For developers, the Gallery is both inspiration and starting point. Each use case comes with code you can integrate into your own applications. Combined with Google AI Studio for prototyping and Antigravity for development, you have a complete pipeline from idea to on-device deployment.
On-device AI is particularly relevant for India, where internet connectivity is unreliable in many areas. Hackathon projects on Reskilll that work offline — like crop disease detectors, language translators, and accessibility tools — consistently impress judges because they solve real problems for real users.
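Apps like those follow an offline-first pattern: inference runs locally and always succeeds, while results queue up for upload whenever connectivity returns. A minimal sketch, with a hypothetical crop disease detector standing in for a real on-device model:

```python
# Offline-first pattern for field apps: run inference locally, queue
# results, flush the queue only when connectivity returns. The
# detector and all names here are hypothetical illustrations.

from collections import deque

def detect_disease(image_id: str) -> str:
    """Stand-in for an on-device model; works online or offline."""
    return f"diagnosis-for-{image_id}"

class OfflineFirstApp:
    def __init__(self) -> None:
        self.pending = deque()  # results waiting for connectivity

    def scan(self, image_id: str) -> str:
        result = detect_disease(image_id)  # local inference, no network
        self.pending.append(result)        # queued for a later sync
        return result                      # user gets an answer immediately

    def sync(self, online: bool) -> int:
        """Upload queued results if online; return how many were sent."""
        if not online:
            return 0
        sent = len(self.pending)
        self.pending.clear()               # pretend the upload succeeded
        return sent

app = OfflineFirstApp()
app.scan("leaf-001")
app.scan("leaf-002")
print(app.sync(online=False))  # 0 -- still offline, results stay queued
print(app.sync(online=True))   # 2 -- both queued results uploaded
```

The key design choice is that the user-facing path (`scan`) never touches the network; connectivity only affects background sync, so the app is fully usable in the field.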
The Build With AI bootcamps are incorporating on-device AI into their curriculum, teaching students to build applications that work everywhere — not just where there’s WiFi.
Explore the gallery at github.com/google-ai-edge/gallery.