- cross-posted to:
- [email protected]
- [email protected]
Niantic, the company behind the extremely popular augmented reality mobile games Pokémon Go and Ingress, announced that it is using data collected by its millions of players to create an AI model that can navigate the physical world.
In a blog post published last week, first spotted by Garbage Day, Niantic says it is building a “Large Geospatial Model.” This name, the company explains, is a direct reference to Large Language Models (LLMs) like OpenAI’s GPT, which are trained on vast quantities of text scraped from the internet in order to process and produce natural language. Niantic explains that a Large Geospatial Model, or LGM, aims to do the same for the physical world, a technology it says “will enable computers not only to perceive and understand physical spaces, but also to interact with them in new ways, forming a critical component of AR glasses and fields beyond, including robotics, content creation and autonomous systems. As we move from phones to wearable technology linked to the real world, spatial intelligence will become the world’s future operating system.”
Trained on millions of geolocated images from around the world, the model will be able to predict its immediate environment in the same way an LLM produces coherent, convincing sentences by statistically determining which word is likely to follow another.
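The “predict what comes next from statistics” idea the article invokes can be illustrated with a toy bigram model, the simplest form of next-word prediction. This is only a sketch of the analogy; the corpus and function names are invented here, and real LLMs (and presumably Niantic’s LGM) use far more sophisticated architectures.

```python
from collections import Counter, defaultdict

# Toy corpus; a real LLM trains on vast amounts of scraped text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: a bigram frequency table.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" more often than any other word
```

An LGM would do the analogous thing spatially: given geolocated views of a place, predict what the adjacent, unseen parts of the scene most plausibly look like.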
They have added tasks that make you photograph your surroundings or objects, giving them real-world lidar data linked with geodata in exchange for some in-game benefits, last I checked.
OK, that is actually something usable. So far, all they could learn from this is how to take a shortcut through the fields ;-)
More like they can learn what the corner of the bank looks like