At its annual I/O developer conference, Google unveiled a sweeping range of product updates and innovations—most notably, deeper integration of Gemini AI across its ecosystem. From major improvements to search and productivity apps, to new hardware platforms, the company demonstrated how Gemini is evolving into a central part of Google’s user experience.
Gemini AI Expands Its Capabilities
Gemini, Google’s answer to OpenAI’s ChatGPT and Anthropic’s Claude, is gaining significant new features—some of which are already available for iPhone users.
Gemini Live is now available in the Gemini iOS app. It allows real-time interaction using screen sharing or the iPhone’s camera. Users can ask Gemini questions about objects in their environment, get help with DIY tasks, organize their day, and more. The feature is integrated with Google Calendar, Maps, Tasks, and Keep.
Gemini Agent Mode is coming soon and will enable goal-oriented tasks like finding concert tickets at ideal prices or locating apartments based on specific requirements.
Gemini Personal Context enhances personalization by drawing on users’ search history and data from other Google apps. It will proactively provide reminders and prep tools for upcoming events, positioning it as more advanced than Apple’s current Siri offerings.
AI Ultra, a new subscription tier priced at $250 per month, offers premium access to Google’s most advanced AI tools, including the latest features, high rate limits, YouTube Premium, and 30TB of storage. The existing $19.99-per-month tier, now renamed Google AI Pro, continues to offer standard Gemini capabilities.
Veo 3 is a significant update to Google’s video generation model. It now supports audio, including ambient noise and spoken dialogue, and can produce physics-aware visuals. Veo 3 is available today for AI Ultra subscribers.
Imagen 4 brings enhanced image generation to Gemini. It offers improved realism in details like hair, fur, and fabric, and better text generation capabilities. Imagen 4 is live in Gemini starting today.
Deep Research supports uploading private PDFs and images for long-form research projects, with Gmail and Drive integration coming soon.
AI Comes to Google Search
Gemini is now deeply embedded within Google Search via a new dedicated AI Mode, rolling out in the U.S. this week. Unlike the previous AI Overviews, AI Mode performs deeper reasoning using a query fan-out method that splits questions into multiple searches for more comprehensive results.
A Deep Search variant allows Gemini to conduct hundreds of searches simultaneously, analyze data, and produce expert-level reports, complete with visualizations and charts.
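Google hasn’t published AI Mode’s internals, but the fan-out idea described above can be sketched in a few lines: decompose a broad question into narrower sub-queries, run them in parallel, and merge the results. The function names below (`decompose`, `search`, `fan_out`) are hypothetical stand-ins, not real Google APIs.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative sketch only: all names here are hypothetical stand-ins,
# not Google's actual implementation or API.

def decompose(question: str) -> list[str]:
    """Split a broad question into narrower sub-queries.

    A real system would use a language model for this; here we
    simply fan out across a few fixed facets.
    """
    facets = ["reviews", "pricing", "alternatives"]
    return [f"{question} {facet}" for facet in facets]

def search(query: str) -> list[str]:
    """Stand-in for a search-backend call; returns mock results."""
    return [f"result for: {query}"]

def fan_out(question: str) -> list[str]:
    """Issue the sub-queries in parallel and merge their results."""
    sub_queries = decompose(question)
    with ThreadPoolExecutor() as pool:
        result_lists = pool.map(search, sub_queries)
    # Flatten and de-duplicate while preserving order.
    seen, merged = set(), []
    for results in result_lists:
        for r in results:
            if r not in seen:
                seen.add(r)
                merged.append(r)
    return merged

print(fan_out("best espresso machine"))
```

The point of the pattern is that each narrow sub-query can surface results the original broad question would have missed, and the merge step assembles them into one answer.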
AI Mode for Shopping will help users find clothing and other items, with virtual try-on features that realistically render apparel on user-submitted photos. It can also facilitate purchases and monitor deals. These capabilities will roll out in the coming months.
Google Search Live, coming this summer, adds real-time interaction similar to Gemini Live. Users will be able to ask questions based on what they see through their phone camera.
Google Apps Get Smarter
Gmail, Chrome, and Meet are all receiving Gemini-powered upgrades, rolling out on varying timelines.
Gmail Personal Context, arriving this summer for Gemini subscribers, will let the AI craft personalized responses using content from past emails, notes, and Drive documents—mimicking user tone, greetings, and language style.
Google Meet is gaining real-time translation, starting with English and Spanish, with more languages in development. This feature is limited to Pro and Ultra subscribers.
Google Chrome will begin integrating Gemini tomorrow. The AI assistant can clarify content on open tabs, summarize complex pages, and answer user questions directly from the toolbar. Chrome is also getting an automatic password-change tool for supported sites in the event of a security breach.
FireSat, a new project in development, uses satellite imagery to detect wildfires as small as 270 square feet. While still in early stages, it promises significant utility in fire-prone regions such as California.
Android XR Glasses: Google’s Next Hardware Frontier
Building on its Android XR platform for headsets, Google announced plans to extend XR to smart glasses. A prototype unveiled on stage featured an in-lens display, cameras, microphones, and speakers, all connected to Gemini. The glasses can provide live translation, turn-by-turn navigation, visual recognition, and contextual assistance based on what the wearer sees and hears.
This marks a return to the category after the discontinued Google Glass. With partnerships involving Gentle Monster and Warby Parker, the new Android XR glasses are being designed to be lightweight and stylish. While Apple is reportedly working on similar AR glasses, Google’s offering appears closer to market readiness.
Meanwhile, Samsung’s upcoming XR headset will be the first commercial device to run Android XR, launching later this year. Samsung also plans to release its own XR glasses shortly after.