Google has introduced a groundbreaking tool for AI enthusiasts with the release of AI Edge Gallery, an experimental app that empowers users to run advanced generative AI models directly on their devices. Launched on May 20, 2025, via the Google AI Edge GitHub repository, this app is currently available for Android, with an iOS version expected soon. Let’s explore what AI Edge Gallery offers, who it’s for, and how it’s shaping the future of on-device AI experimentation.

What is AI Edge Gallery?
AI Edge Gallery is an open-source Android application designed to bring the power of generative AI to your fingertips—without the need for cloud servers or constant internet connectivity. Once the initial model setup is complete, all processing happens locally on your device, ensuring privacy and enabling offline functionality. The app, accessible via https://github.com/google-ai-edge/gallery, supports a variety of tasks, from chatting with AI models to processing images and exploring creative prompts.
The app leverages Google’s LiteRT runtime (formerly TensorFlow Lite), a high-performance framework for on-device AI, and supports models like Gemma 3, Gemma 3n, and Qwen 2.5. These models, ranging in size from 500 MB to 4 GB, allow users to experiment with different AI capabilities, such as text generation, image analysis, and single-prompt responses through a feature called Prompt Lab.
Who It’s For
AI Edge Gallery targets developers, researchers, and tech enthusiasts eager to explore AI models locally. It’s an ideal sandbox for:
- Developers: Test and integrate on-device AI into apps without writing extensive code.
- Researchers: Experiment with open-source models like Gemma 3n to study their performance on mobile hardware.
- Tech Enthusiasts: Dive into AI capabilities like image-based queries or text summarization, all offline.
The app requires Android 10 or later and at least 6 GB of RAM, with a modern chipset recommended for the best performance. While it’s a powerful tool for experimentation, its models are less capable than cloud-based AIs like ChatGPT or Gemini because of the constraints of on-device processing.
Key Features and Capabilities
AI Edge Gallery offers a range of features that make it a versatile platform for AI experimentation:
- Fully Offline Operation: After downloading a model, you can use it without an internet connection, as all processing occurs locally on your device.
- Model Selection: Choose from various open-source models hosted on Hugging Face, such as Gemma 3 (529 MB, up to 2585 tokens/sec on prefill). You can also import custom LiteRT-format models.
- Image Processing: Upload images and ask questions about them—get descriptions, identify objects, or solve problems.
- Prompt Lab: Explore single-turn tasks like summarizing text, rewriting content, or generating code snippets.
- Performance Optimization: Switch between CPU and GPU for model execution, with recent updates (e.g., v1.0.3 on May 22, 2025) fixing memory leaks and improving UX, like a 4K context length for Gemma 3n models.
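To get a rough sense of what those prefill numbers mean in practice, here is a back-of-the-envelope sketch. The 2585 tokens/sec and 4K-context figures are the ones quoted above; real throughput will vary with the device and with the CPU/GPU backend you select.

```python
# Back-of-the-envelope prefill timing. The throughput figure is the
# article's quoted peak for Gemma 3 on-device; actual speed depends
# heavily on the phone and on whether CPU or GPU execution is chosen.

def prefill_seconds(prompt_tokens: int, tokens_per_sec: float) -> float:
    """Seconds to process the prompt before the first output token."""
    return prompt_tokens / tokens_per_sec

# Filling the full 4K (4096-token) context window at 2585 tokens/sec:
print(f"{prefill_seconds(4096, 2585):.2f} s")  # ≈ 1.58 s
```

In other words, even a maxed-out prompt should be ingested in under two seconds at the quoted peak rate; slower hardware stretches this proportionally.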
The app’s open-source nature, licensed under Apache 2.0, encourages community contributions, making it a collaborative space for innovation.
How to Get Started
Getting AI Edge Gallery up and running is straightforward:
- Visit the GitHub repository at https://github.com/google-ai-edge/gallery.
- Navigate to the “Releases” section and download the latest APK (e.g., v1.0.3).
- On your Android device, enable “Install from Unknown Sources” in settings, then install the APK.
- Open the app, select a model to download, and start experimenting. For iOS users, keep an eye out for the upcoming release.
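If you prefer to script the download step, release pages follow GitHub’s standard URL scheme. A minimal sketch, assuming only the repository path and a release tag; the actual APK filename varies per release, so check the Releases page rather than hard-coding it:

```python
# Builds the GitHub releases-page URL for a given tag. GitHub uses the
# standard /releases/tag/<tag> path; only the tag (e.g. "v1.0.3")
# changes between versions.

REPO = "google-ai-edge/gallery"

def release_page(tag: str) -> str:
    return f"https://github.com/{REPO}/releases/tag/{tag}"

print(release_page("v1.0.3"))
# → https://github.com/google-ai-edge/gallery/releases/tag/v1.0.3
```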
For developers, Google provides detailed setup guides and encourages feedback to improve the app. That said, some users on X have reported occasional crashes while models load, especially on lower-end devices, so a recent high-end phone makes for a smoother experience.
Implications and Future Potential
AI Edge Gallery represents a significant step in democratizing AI experimentation. By keeping processing on-device, it prioritizes user privacy: your data never leaves your phone, unlike cloud-based AIs that send prompts to external servers. This aligns with growing demand for privacy-focused tools, though some X users have expressed skepticism about the app’s network activity after prompts, suggesting its local-operation claims deserve closer scrutiny and clearer documentation.
For developers, the app showcases the potential of Google’s AI Edge tools, like the LLM Inference API, for building offline-capable, privacy-first apps. It also highlights the efficiency of LiteRT, which powers models like Gemma 3 for rapid on-device tasks. Looking ahead, Google plans to expand support for third-party models and further optimize performance, as noted in their March 2025 blog on Gemma 3 deployment.
Why It Matters
AI Edge Gallery isn’t just an app—it’s a glimpse into the future of on-device AI. By empowering users to run generative models locally, Google is fostering a community-driven approach to AI development while addressing privacy concerns. Whether you’re a developer building the next big app, a researcher studying model performance, or a tech enthusiast curious about AI, AI Edge Gallery offers a hands-on way to explore the possibilities.
Ready to dive in? Download AI Edge Gallery at https://github.com/google-ai-edge/gallery and start experimenting with on-device AI today. The iOS version is on the horizon, so stay tuned for broader access!