A viral trend is drawing both attention and concern as people use ChatGPT to work out the locations shown in random photos. This use of artificial intelligence, which some call “reverse location search,” is quickly gaining popularity on social media, especially on X (formerly Twitter).
The geoguessing power of o3 is a really good sample of its agentic abilities. Between its smart guessing and its ability to zoom into images, to do web searches, and read text, the results can be very freaky.
I stripped location info from the photo & prompted “geoguess this” pic.twitter.com/KaQiXHUvYL
— Ethan Mollick (@emollick) April 17, 2025
The recent update from OpenAI includes two powerful models, o3 and o4-mini. These AI tools are designed to look at images in ways that go beyond just recognising what’s in them. They can zoom in, crop, rotate, and analyse even blurry or distorted photos. When paired with their ability to search online, the result is a surprisingly effective tool for figuring out where a picture was taken—even without obvious clues.
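For a sense of how little scaffolding this takes, here is a minimal sketch of the same experiment run against the OpenAI API. It assumes an OPENAI_API_KEY in the environment and that the o3 model id is available to your account; the prompt and the file name photo.jpg are purely illustrative.

```python
import base64

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Send only the pixels: the image is base64-encoded into a data URL,
# so no file name, EXIF tags, or other metadata travel with it.
with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="o3",  # assumption: model id as exposed through the API
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "geoguess this"},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

Because the request carries no metadata, any correct guess has to come from visual reasoning and web lookups alone, which is exactly what the experiments below put to the test.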
How people are using the new models
With the launch of o3 and o4-mini, people on X have started experimenting by uploading restaurant menus, photos of neighbourhoods, shop signs, and even selfies. Then they ask ChatGPT to play a version of GeoGuessr—a game where you try to guess a location based on Google Street View.
this is a fun ChatGPT o3 feature. geoguessr! pic.twitter.com/HrcMIxS8yD
— Jason Barnes (@vyrotek) April 17, 2025
The results have impressed many. The o3 model is exceptionally good at picking up small details, such as a specific tile pattern, a shopfront, or even the angle of sunlight, and using them to guess cities, landmarks, restaurants, and bars. It often does this without relying on previous conversations, saved data, or the photo’s embedded metadata (EXIF data), which can include GPS coordinates.
For example, someone uploaded a photo of a purple, mounted rhino head in a dimly lit bar. While GPT-4o incorrectly guessed that the photo was taken in a British pub, the o3 model correctly identified it as a speakeasy in Williamsburg, New York. This shows how advanced image reasoning can outperform even earlier versions of ChatGPT.
Tech news site TechCrunch also ran several tests, comparing o3 with GPT-4o. In many cases, both models arrived at the correct answer. Interestingly, GPT-4o was often quicker. However, in some instances, o3 stood out by identifying places the older model couldn’t.
Not always accurate, and not always safe
Of course, the system isn’t perfect. Sometimes, o3 couldn’t confidently guess a location and got stuck or gave a wrong answer. Some users on X pointed out that the AI could be way off the mark in its guesses.
o3 is insane
I asked a friend of mine to give me a random photo
They gave me a random photo they took in a library
o3 knows it in 20 seconds and it’s right pic.twitter.com/0K8dXiFKOY
— Yumi (@izyuuumi) April 17, 2025
But what’s most concerning is how this feature could be misused. There’s currently nothing stopping someone from taking a screenshot of a person’s Instagram Story and using ChatGPT to try to find out where they are. While the same kind of guessing was possible before with older tools, these new models make it quicker, easier, and more accurate.
So far, OpenAI hasn’t included any clear safety warnings or tools to limit the use of these features. Its latest safety report for o3 and o4-mini does not mention “reverse location search”, nor does it offer guidance on protecting people’s privacy.
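Stripping embedded metadata before posting is one precaution users can take themselves, since photos from phones often carry GPS coordinates in their EXIF tags. Below is a minimal sketch using the Pillow library; the file names are illustrative. Note that, as the tests above show, these models do not need EXIF data, so this blocks only metadata-based lookups, not pixel-level geoguessing.

```python
from PIL import Image  # pip install Pillow


def strip_metadata(src: str, dst: str) -> None:
    """Re-save an image from its pixel data alone, dropping EXIF/GPS tags."""
    with Image.open(src) as img:
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(dst)  # note: re-encoding a JPEG loses some quality


strip_metadata("holiday_photo.jpg", "holiday_photo_clean.jpg")
```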
The growing risk of smarter AI tools
The rise of this trend highlights a bigger issue: as AI becomes smarter, the risk of misuse grows. These models weren’t made to invade people’s privacy, but they could easily be used that way. What began as a fun guessing game is quickly becoming a tool that could expose people’s locations without their knowledge.
While it’s exciting to see what AI can do, it’s also important to ask hard questions. Should tools like this be more tightly restricted? Should there be alerts or blocks when AI is asked to find someone’s location from a photo? These are issues that AI developers, lawmakers, and users will need to confront before the risks get out of hand.