Google is bringing Gemini capabilities to Google Maps Platform



Gemini model capabilities are coming to Google Maps Platform for developers, starting with the Places API, the company announced at the Google I/O 2024 conference on Tuesday. With this new capability, developers will be able to show generative AI summaries of places and areas in their own apps and websites. 

The summaries are created based on Gemini’s analysis of insights from Google Maps’ community of more than 300 million contributors. As a result, developers will no longer have to write their own custom descriptions of places. 

For example, if a developer has a restaurant-booking app, the summaries can help users decide which restaurant is right for them. When users search for restaurants in the app, they’ll quickly see the most important details, like the house specialty, happy hour deals and the place’s overall vibe. 


The new summaries are available for many types of places, including restaurants, shops, supermarkets, parks and movie theaters. 
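For developers, pulling one of these summaries should amount to requesting an extra field on a Place Details call to the Places API. Here is a minimal sketch in Python; the generativeSummary field name and response shape are assumptions based on Google’s announcement, not confirmed API details, and the key and place ID are placeholders:

```python
import requests

API_KEY = "YOUR_GOOGLE_MAPS_PLATFORM_API_KEY"  # placeholder; use your own key
PLACE_ID = "ChIJj61dQgK6j4AR4GeTYWZsKWw"       # example place ID

# Place Details (New) request. The "generativeSummary" field in the field mask
# is an assumption based on the I/O 2024 announcement.
resp = requests.get(
    f"https://places.googleapis.com/v1/places/{PLACE_ID}",
    headers={
        "X-Goog-Api-Key": API_KEY,
        "X-Goog-FieldMask": "displayName,generativeSummary",
    },
)
place = resp.json()

# Assumed shape: generativeSummary.overview.text holds the AI-written summary.
summary = (
    place.get("generativeSummary", {})
    .get("overview", {})
    .get("text", "No summary available for this place.")
)
print(place.get("displayName", {}).get("text"), "-", summary)
```

The appeal for developers is that the summary arrives as just another field in the place record, so it can be dropped into an existing details screen without extra content pipelines.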

Google is also bringing AI-powered contextual search results to the Places API. When users search for places in a developer’s product, the developer can now display reviews and photos related to that search. 

If a developer has an app that allows users to explore local restaurants, users can search “dog-friendly restaurants,” for example, and see a list of relevant dining spots, along with reviews and photos of dogs at those spots.
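In practice, this would likely be a text search that asks the API to return matching places together with the query-specific review and photo content. A rough sketch, again in Python; the contextualContents field name, field-mask entry, and parallel-array response shape are assumptions based on the announcement:

```python
import requests

API_KEY = "YOUR_GOOGLE_MAPS_PLATFORM_API_KEY"  # placeholder; use your own key

# Text Search (New) request. "contextualContents" in the field mask and in the
# response is an assumed name for the query-specific reviews and photos.
resp = requests.post(
    "https://places.googleapis.com/v1/places:searchText",
    headers={
        "X-Goog-Api-Key": API_KEY,
        "X-Goog-FieldMask": "places.displayName,contextualContents",
    },
    json={"textQuery": "dog-friendly restaurants in Mountain View"},
)
data = resp.json()

# Assumed shape: contextualContents[i] carries the reviews/photos that explain
# why places[i] matched the query.
for place, contents in zip(data.get("places", []), data.get("contextualContents", [])):
    print(place["displayName"]["text"])
    for review in contents.get("reviews", []):
        print("  review:", review.get("text", {}).get("text"))
    print("  photos:", len(contents.get("photos", [])))
```

Used this way, the app can show not just which restaurants match “dog-friendly,” but the specific reviews and photos that justify the match.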

Contextual search results are available globally, while place and area summaries are available in the U.S. Google plans to expand the summaries to more countries in the future.





