Meta is making big changes to how Instagram checks users' ages, with the help of artificial intelligence (AI). These updates aim to keep younger users safer online and ensure that teens are placed under the right privacy settings, even if their accounts list them as adults.
AI will now look for signs of underage users
Starting in the US, Meta will test a new AI system that searches for users who may have listed their age incorrectly on Instagram. If you’re a teenager but your account says you’re an adult, Instagram’s AI will automatically adjust your account to match the safety settings used for teens.
This system works by looking at clues such as birthday messages in your private messages—for example, if someone sends you a “Happy 16th birthday” message. It also studies how users interact with content: teens tend to like and comment on the same kinds of posts as other teens, which gives the AI another signal for estimating a user’s real age.
Even if you never state your age, Instagram’s AI may still determine that you’re a teen. Once that happens, it will quietly change your account settings to match those used for younger users. If you’re affected by this, don’t worry—you’ll still have the option to switch your settings back if you think the AI got it wrong.
Stronger safety features for all teens
Instagram has already implemented strict settings for teen users. These include making new teen accounts private by default, blocking messages from strangers, and limiting the types of content in their feeds. Last year, Instagram went further by automatically turning on all its safety tools for every teen on the platform.
With this AI-powered update, Instagram is no longer just waiting for users to report their ages. It’s taking a more active role in verifying that information and applying safety tools as needed. This is all part of Meta’s plan to make Instagram safer, especially for younger users.
Pressure from lawmakers and parents continues
Meta’s latest move comes at a time when many are worried about how social media affects kids’ mental health and safety. In 2023, the European Union investigated whether Meta was doing enough to protect young users. That same year, shocking reports revealed that predators were using Instagram to target children, which led to a lawsuit filed by a state attorney general in the US.
There’s also a growing debate among tech companies about who should take responsibility for keeping children safe online. In March, Google accused Meta of trying to pass that duty onto app stores following the approval of a new child safety law in Utah. While companies like Snap and X have floated their own proposals, the conversation about child safety in the digital world is far from over.
Meta’s AI-powered age checks are another way the company is trying to respond to these concerns. Whether these changes will be enough to satisfy critics remains unclear, but for now, the message is simple: if you’re a teen on Instagram, the platform wants to make sure you’re protected—even if you didn’t give your real age.