Google is expanding the capabilities of its Gemini Live AI assistant with real-time camera and screenshare features. These tools let users ask Gemini questions about what they’re seeing, whether through the phone’s camera or while browsing online.

Gemini Live: What It Does
The Gemini Live video and screenshare features are designed to offer a smarter, more interactive AI experience. With this update:
- You can point your phone’s camera at something, such as an aquarium or a product, and ask Gemini Live about it.
- You can share what’s on your screen with Gemini, such as a shopping site or a document, and receive feedback or advice based on what’s visible.
These tools rely on agentic AI, meaning Gemini can interpret what it sees and respond with relevant information in real time.
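Gemini Live itself is a consumer app feature with no developer hooks announced here, but the underlying image-plus-question pattern can be approximated with Google’s separate Gemini developer API. A minimal sketch, assuming the google-generativeai Python SDK, an API key from Google AI Studio, and a hypothetical local photo:

```python
# Illustrative only: this uses the Gemini developer API, not Gemini Live.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # assumption: key from Google AI Studio

# Load a snapshot, e.g. of an aquarium (hypothetical file name).
photo = Image.open("aquarium.jpg")

# Send the image and a question together as one multimodal prompt.
model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content(
    [photo, "What species of fish is this, and what does it eat?"]
)
print(response.text)
```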
The new features are currently rolling out to:
- Pixel 9 series phones
- Samsung Galaxy S25 devices
Other Android devices will receive the update soon, but there’s a catch: you must be a Gemini Advanced (paid) subscriber to access these features.
Google confirmed that Gemini Live is:
- Available in 45 languages
- Restricted to users 18 years and older
- Not available for education or enterprise accounts
Real-World Use Cases
In Google’s April Pixel Drop video, Gemini Live was shown answering questions about fish in an aquarium in real time. Other use cases include:
- Comparing prices or specs while shopping online
- Getting fashion or styling tips from a clothing website
- Exploring new surroundings and asking questions on the go
Gemini Live Was First Teased at Google I/O
These features were originally demonstrated during Google I/O 2024 as part of “Project Astra.” Now they are finally being tested on real devices. While some Reddit users have already spotted the tools on other phones, such as Xiaomi devices, the broader rollout is happening in stages.
The launch of Gemini Live video and screenshare marks a significant upgrade in how users interact with AI assistants on mobile devices.