Search Live Expands to More Than 200 Countries and Territories

Google has widened the reach of Search Live, pushing its real-time conversational search feature into more than 200 countries and territories. The expansion turns what was previously a limited rollout into a much broader product move, making camera-assisted, voice-based search available to users around the world.

Search Live is built for back-and-forth interaction. Users can speak to Google in real time while pointing their phone camera at an object, scene, or problem in front of them. That shifts search from typing out questions to a more natural, immediate exchange: instead of stopping to describe what they see, users can simply show it and ask follow-up questions as the conversation unfolds.

The feature works through the "Live" icon inside AI Mode or Google Lens, combining live visual input with web-based results. That multimodal setup gives users a more responsive way to search, especially when the subject is easier to show than explain.

How Search Live Works in AI Mode and Google Lens

Search Live allows people to interact with Google through spoken questions while the phone camera captures the surrounding context. This creates a conversational flow where the search engine can respond to what the user is seeing in the moment.

That kind of search experience is especially useful for object identification, visual problem-solving, and follow-up questioning. A user doesn't need to restart the search every time they want to clarify something. They can keep talking, keep showing, and keep homing in on the answer.

Multimodal Search Powered by Live Web Results

The feature draws on live web results while processing both voice and camera input. That's a key part of the product's design. It isn't just image recognition, and it isn't just voice search. It's a blended search experience that treats speech, visuals, and web data as part of the same interaction.

Google's rollout signals a bigger strategy here: making conversational visual search feel normal, not experimental. By expanding the feature globally, the company is clearly positioning multimodal AI search as a standard part of how people use Google on mobile devices.

Personal Intelligence Rolls Out to All Free Users in the United States

At the same time, Google has expanded Personal Intelligence to all free-tier users in the United States. The feature is available in AI Mode in Search, the Gemini app, and Gemini in Chrome, giving it visibility across several of Google's most important AI surfaces.

This marks a major access shift. When it was first introduced, Personal Intelligence was limited to paid Google AI Pro and Google AI Ultra subscribers. Opening it to free users dramatically broadens the number of people who can connect Gemini to their Google apps for more tailored responses.

What Personal Intelligence Does Across Google Apps

Gemini Connects to Gmail, Photos, Calendar, Drive, Maps, and YouTube

Personal Intelligence links Gemini with a user's first-party Google services, including Gmail, Google Photos, YouTube, Google Calendar, Drive, and Maps. The point is simple: instead of answering in a generic way, the assistant can respond using details tied to the user's own activity, files, purchases, plans, and stored memories.

That makes the assistant more context-aware inside Google's ecosystem. Users don't have to manually explain as much background, because the connected apps can supply relevant signals that shape the answer.
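The idea of connected apps supplying background signals can be sketched in a few lines. Everything here is hypothetical: the `build_context` and `answer` functions, and the sample signals, are invented to illustrate the concept, not Google's actual implementation.

```python
# Hypothetical sketch of how connected-app signals might shape a response.
# The service names mirror the article; the functions and data are invented.

CONNECTED_APPS = {
    "Gmail": ["hotel confirmation: Lisbon, May 12-16"],
    "Photos": ["trip photos tagged 'Lisbon'"],
    "Calendar": [],  # connected but currently contributing nothing
}

def build_context(apps: dict[str, list[str]]) -> list[str]:
    """Collect signals from connected apps so the user need not restate them."""
    return [signal for signals in apps.values() for signal in signals]

def answer(question: str, context: list[str]) -> str:
    # With context, the reply can be personalized; without it, it stays generic.
    prefix = f"[{len(context)} personal signals] " if context else "[generic] "
    return prefix + question

print(answer("Plan my Lisbon trip", build_context(CONNECTED_APPS)))
```

The contrast between the two branches of `answer` captures the feature's core trade: more granted connections mean more signals, and therefore more tailored replies.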

Personalized Responses Based on Real User Context

Google outlined several examples of how Personal Intelligence can be used in daily life. Someone trying to find a pair of sneakers they bought before could get help based on prior purchase history. A person planning a trip could receive itinerary help based on hotel confirmations in Gmail and travel photos stored in Google Photos. Technical support questions could be answered using receipt information tied to previous purchases.

The larger value is convenience. Personal Intelligence is designed to reduce the friction of repeating context over and over. Instead of feeding the assistant every detail manually, users can rely on connected Google services to provide that foundation.

Privacy Controls and Opt-In Settings for Personal Intelligence

Off by Default and User-Controlled App Connections

Google says Personal Intelligence remains opt-in and is off by default. Users choose which apps they want to connect, and they can revoke access whenever they want. That's an important guardrail for a feature built around personal data, because the usefulness of the system depends heavily on how much access a person is willing to grant.

The setup gives users direct control over participation rather than forcing automatic integration across their accounts. For privacy-conscious users, that kind of permission structure is likely to be a deciding factor.
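The permission structure described above (off by default, per-app opt-in, revocable at any time) can be modeled in a few lines. This is an illustrative sketch, not Google's implementation; the `AppConnections` class and its methods are assumptions.

```python
# Illustrative model of an off-by-default, user-controlled permission set.
# Not Google's actual code; the class and method names are invented.

class AppConnections:
    def __init__(self) -> None:
        self._granted: set[str] = set()  # empty set: everything off by default

    def connect(self, app: str) -> None:
        self._granted.add(app)           # user explicitly opts in per app

    def revoke(self, app: str) -> None:
        self._granted.discard(app)       # access can be withdrawn at any time

    def can_read(self, app: str) -> bool:
        return app in self._granted

perms = AppConnections()
assert not perms.can_read("Gmail")       # nothing is connected initially
perms.connect("Gmail")
assert perms.can_read("Gmail")           # access exists only after opt-in
perms.revoke("Gmail")
assert not perms.can_read("Gmail")       # revocation removes it again
```

Starting from an empty grant set, rather than subtracting from a full one, is what makes the system opt-in rather than opt-out.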

Google Says AI Models Do Not Train Directly on Gmail or Photos

Google also stated that its AI models do not train directly on a user's Gmail inbox or Google Photos library. However, the company noted that limited data, including specific prompts and model responses, may still be used to improve the system.

That distinction matters. Google wants to assure users that their most personal data is not fed wholesale into model training, while still acknowledging that some interaction data, such as prompts and responses, may be used to improve its products.

Personal Intelligence Availability and Account Limitations

Personal Intelligence is limited to personal Google accounts. It is not available for Workspace business, enterprise, or education users. That means the rollout is broad in consumer terms but still restricted when it comes to professional and institutional Google environments.

This limitation keeps the feature focused on consumer personalization rather than workplace deployment. It also reflects how sensitive cross-app AI integration can become when business, school, or enterprise data is involved.

Google’s AI Search Strategy Moves Ahead of Apple and Microsoft

The dual expansion of Search Live and Personal Intelligence shows Google pushing aggressively on two fronts at once: global multimodal search and deeply integrated personal AI assistance. Search Live broadens how users search in the moment, while Personal Intelligence deepens how Google responds based on a user's own digital history.

Together, these updates strengthen Google's position in AI-powered search and assistant technology. By combining real-time voice and camera search with cross-app personalization inside Gemini, Google is building an ecosystem that reaches beyond standalone chatbot use and into everyday search behavior.

The rollout also puts pressure on rivals. With Search Live now available in more than 200 countries and Personal Intelligence open to free US users, Google has moved faster in turning personalized, ecosystem-based AI into a practical consumer feature.