Google Search is getting a bunch of new features, the company announced at its “Search On” event, and many of them will make results richer and more visual. “We’re going far beyond the search box to create search experiences that work more like our minds—that are as multidimensional as we are. As we enter this new era of search, you’ll be able to find exactly what you’re looking for through a combination of images, sounds, text and speech. We call it making search more natural and intuitive,” said Prabhakar Raghavan, Google’s senior vice president in charge of Search, during the keynote.
First, Google is expanding multisearch — the feature it introduced in beta in April of this year — to English worldwide, with 70 more languages coming in the next few months. Multisearch lets users search with images and text at the same time: snap a photo with Google Lens, then add words to refine the query. According to Google, people already rely on Lens nearly eight billion times a month to search for what they see.
By combining Lens with multisearch, users will be able to take a picture of an item and add the phrase “near me” to find it nearby. Google says this “new way to search will help users find and connect with local businesses.” Multisearch near me will begin rolling out in English in the US later this fall.
“This is possible through a deep understanding of local places and product inventory, informed by the millions of images and reviews on the web,” Raghavan said of multisearch and Lens.
Google is also improving the way translations are displayed on images. According to the company, people use Google to translate text in images more than one billion times a month, across more than 100 languages. With the new feature, Google will be able to “blend translated text into complex images, so it looks and feels much more natural.” The translation becomes part of the original picture instead of sticking out on top of it. Google says it uses “generative adversarial networks (also known as GAN models), which is what helps power the Pixel’s Magic Eraser technology,” to deliver this experience. The feature will launch later this year.
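To see why this is a hard compositing problem, consider a toy, framework-free sketch of the naive approach: erase the text region by filling it with the average of the surrounding background, then stamp the replacement text on top. Everything here (the array sizes, the flat-fill “inpainting,” the checkerboard stand-in for rendered text) is our own simplification for illustration, not Google’s method; the point of a GAN-based system is precisely to replace the crude flat fill with synthesized texture that matches the photo.

```python
import numpy as np

# Toy 8-bit grayscale "photo": a light textured background with a dark band
# standing in for the original-language text.
rng = np.random.default_rng(42)
img = rng.integers(180, 220, size=(32, 64)).astype(np.uint8)
img[10:20, 8:56] = 30  # the source-language text region

def erase_and_redraw(image, box, text_mask, text_value=25):
    """Fill the text box with the average colour of a 2-pixel border ring
    around it (naive inpainting), then stamp the new text on top."""
    y0, y1, x0, x1 = box
    out = image.copy()
    ring = np.concatenate([
        image[max(y0 - 2, 0):y0, x0:x1].ravel(),  # strip above the box
        image[y1:y1 + 2, x0:x1].ravel(),          # strip below the box
        image[y0:y1, max(x0 - 2, 0):x0].ravel(),  # strip left of the box
        image[y0:y1, x1:x1 + 2].ravel(),          # strip right of the box
    ])
    fill = int(ring.mean())
    out[y0:y1, x0:x1] = fill                   # erase the original text
    out[y0:y1, x0:x1][text_mask] = text_value  # draw the "translated" text
    return out

# Hypothetical rendered mask of the translated string (a pattern stand-in).
mask = np.indices((10, 48)).sum(axis=0) % 7 == 0
result = erase_and_redraw(img, (10, 20, 8, 56), mask)
```

The flat fill leaves an obviously uniform rectangle wherever the background had texture, shadows or gradients; a generative model instead predicts plausible fill pixels from context, which is what makes the edit look seamless.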
Google is also updating its iOS app, adding shortcuts directly below the search bar. These will help users shop from screenshots, translate any text using the camera, find a song, and more.
Google Search results will also be visually richer as users browse information about a location or topic. In the example Google showed, a search for a city in Mexico surfaced videos, images and other information about the place within the first set of results, so users won’t have to open multiple tabs to learn more about a place or topic.
In the coming months, Google will also surface relevant information as soon as the user starts typing a question, offering “keyword or topic options to help” shape the query. For some of these topics, such as cities, it will also show content from creators on the open web alongside travel tips and other details, pulling in what it calls “the most relevant content, from a variety of sources, no matter what format the information comes in – whether it’s text, images or video,” according to the company’s blog.
When it comes to food searches – whether for a specific dish or an item at a restaurant – Google will display more visually rich results, including photos of the food in question. It is also expanding “the coverage of digital menus and making them visually richer and more reliable.”
To power these new results, the company says it combines “menu information provided by people and merchants, and found on restaurant websites that use open standards for data sharing,” and relies on its “image and language understanding technologies, including the Multitask Unified Model” (MUM).
“These menus will showcase the most popular dishes and helpfully list different dietary options, starting with vegetarian and vegan,” Google said in a blog post.
Google is also tweaking how shopping results appear in Search, making them more visual, adding links, and letting users shop a “complete look.” Search will additionally support 3D shopping, starting with sneakers, which users will be able to view and spin around in 3D.
Google Maps is also getting some new features that bring in more visual information, although most of them will initially be limited to select cities. First, users will be able to check an area’s “Neighborhood Vibe,” meaning they can see at a glance where to eat, what to visit and what’s popular in a specific location.
This should appeal to tourists, who can use the information to get to know an area better. Google says it combines “AI with local knowledge from Google Maps users” to provide this information. Neighborhood Vibe will roll out globally on Android and iOS in the coming months.
Google is also expanding its immersive view feature, adding 250 photorealistic aerial views of world landmarks, from the Tokyo Tower to the Acropolis. According to the blog post, Google uses “predictive modeling” so that immersive view automatically learns a location’s historical trends. Immersive view will roll out in Los Angeles, London, New York, San Francisco and Tokyo on Android and iOS in the coming months.
Users will also be able to surface useful information with Live View. Search with Live View helps users find places near them, such as a market or store, while they are walking around with the camera raised. It will be available in London, Los Angeles, New York, Paris, San Francisco and Tokyo on Android and iOS in the coming months.
Google is also expanding its eco-friendly routing feature — which previously launched in the US, Canada and Europe — to third-party developers via the Google Maps Platform. Google hopes that companies in other industries, such as delivery or ridesharing services, will be able to enable eco-friendly routing and measure fuel consumption in their apps.
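The keynote didn’t spell out the developer interface, but on the Google Maps Platform this kind of option is expressed in the route-computation request. As a rough illustrative sketch — field names reflect our reading of the public Routes API documentation and may differ from what ships — a request body asking for a fuel-efficient alternative alongside the default route could look like:

```json
{
  "origin":      { "address": "San Francisco, CA" },
  "destination": { "address": "Los Angeles, CA" },
  "travelMode": "DRIVE",
  "routingPreference": "TRAFFIC_AWARE_OPTIMAL",
  "requestedReferenceRoutes": ["FUEL_EFFICIENT"],
  "routeModifiers": {
    "vehicleInfo": { "emissionType": "GASOLINE" }
  }
}
```

The idea is that the response would carry both the default and the fuel-efficient route, letting a delivery or ridesharing app compare the two and surface estimated fuel or energy savings to its own users.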