Age Prediction & Your Child’s Safety in the Age of AI
New digital gatekeeping is promising to keep your child safe. We need to keep learning, asking questions, and advocating for more.
Gone are the days of selecting a birth year that shows you are over 18 (or 13) and accessing technology without limit—kind of.
Lawsuits, new laws in multiple countries, and intense public pressure are causing companies to scramble to prove they’re protecting kids.
Now, the “honor system” is being replaced by AI systems that guess your child’s age by scanning their chats, analyzing their clicks, or even making them take a selfie.
Is this actually safer? I’m skeptical. And I’m hopeful.
Whether we’re parents, helping professionals, or simply people keeping up with these societal changes, staying informed is the most important thing we can do right now to keep our young people safe.
It can be overwhelming to try to “do it all,” so instead, here are some things you can do now!
Learn the Lingo
Technical jargon can create an illusion of expertise, open loopholes, and confuse the general public. Different companies use different language to describe how they’re attempting to keep minors safe online. Learn the words they use and what they mean:
Age Gating: A barrier that asks users to self-declare their age.
Age Assurance: The umbrella term for the entire strategy of making sure users are the “right” age for the product.
Age Prediction/Estimation: AI guesses your age based on behavior or biometrics, but doesn’t confirm identity. There are two types:
Behavioral: Analyzing your chats, clicks, and language (ChatGPT)
Facial: The user uploads a selfie, which AI uses to guess the age (CharacterAI)
Age Verification: Confirms a user’s age with high certainty, usually by checking a government ID. This often means identity verification, too, which has been controversial due to privacy concerns.
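For the technically minded, here’s one way to picture these terms as a ladder, from weakest to strongest assurance. This is a minimal Python sketch with invented names; no platform publishes its scheme this way:

```python
from enum import IntEnum

class AgeAssurance(IntEnum):
    """Hypothetical ladder of age-assurance methods, weakest to strongest."""
    AGE_GATING = 1           # user self-declares a birthdate (the honor system)
    BEHAVIORAL_ESTIMATE = 2  # AI infers age from chats, clicks, and activity
    FACIAL_ESTIMATE = 3      # AI guesses age from a selfie; identity unconfirmed
    AGE_VERIFICATION = 4     # government ID check; identity usually attached

# Platforms escalate up the ladder when a weaker method looks wrong:
applied = [AgeAssurance.AGE_GATING, AgeAssurance.FACIAL_ESTIMATE]
print(max(applied).name)  # FACIAL_ESTIMATE -> the strongest check so far
```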
Know How It Works in Practice
AI companies are actually responding to real concerns in real time, faster than social media platforms ever did. That’s because it’s not just about exposure to images or influencers anymore: kids are forming deep emotional connections with AI.
A 2025 Common Sense Media report found that 72% of teens have used an AI companion, and 1 in 3 said they’ve discussed a serious issue with an AI instead of a person. Data like this, combined with public pressure, lawsuits, and regulatory threats, is forcing change. And we have to know how these systems work so we can keep holding the companies accountable.
The easiest way to break down what’s happening is to think of it as a three-step “waterfall” process. Platforms are increasingly following this pattern:
Step 1: You tell them your age.
At sign-up, you type in your birthdate to tell the company you’re an adult.
Step 2: AI watches and guesses your age based on usage.
If you say you’re 25 but act 16, your account will get flagged. The AI is analyzing everything, including your “private chats,” focusing on signals like these (a simplified sketch of how a model might score them follows the list):
Your writing style, including slang, topics, grammar, and complexity
When you’re active, like during school hours vs. late at night
What you click on, watch, like, share
Who you follow and interact with
How long your account has existed
Your transaction history
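No platform publishes its model, so here’s a hedged, hypothetical Python sketch of how a behavioral age classifier might combine signals like the ones above. Every feature name, weight, and threshold below is invented for illustration:

```python
# Hypothetical behavioral age-prediction sketch. Real platforms use far more
# complex models; every feature, weight, and threshold here is invented.

MINOR_SIGNAL_WEIGHTS = {
    "slang_density": 0.25,         # teen slang per 100 words in chats
    "school_hours_quiet": 0.20,    # inactive 8am-3pm weekdays, active late night
    "teen_content_ratio": 0.25,    # share of clicks/likes on teen-coded content
    "follows_teen_accounts": 0.15, # proportion of followed accounts flagged as minors
    "new_account": 0.05,           # account created recently
    "no_transactions": 0.10,       # no purchase history (no credit card)
}

def minor_likelihood(features: dict[str, float]) -> float:
    """Weighted sum of normalized signals, each in [0, 1]."""
    return sum(MINOR_SIGNAL_WEIGHTS[name] * value
               for name, value in features.items())

account = {
    "slang_density": 0.9,
    "school_hours_quiet": 0.8,
    "teen_content_ratio": 0.7,
    "follows_teen_accounts": 0.6,
    "new_account": 1.0,
    "no_transactions": 1.0,
}

score = minor_likelihood(account)
if score > 0.6:  # arbitrary threshold
    print(f"Flag account for age re-verification (score={score:.2f})")
```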
Step 3: Prove it or lose it.
You become locked out of certain features, and the platform gives you options like taking a selfie or uploading your driver’s license.
When platforms ask for a selfie, they’re sending your/your child’s face to companies you’ve never heard of, like Yoti (Instagram/Facebook), Incode (TikTok Live), and Persona (CharacterAI).
These companies analyze the biometric data, estimate age, and (supposedly) delete the image. With systems like this, your kid’s face data is now traveling through multiple hands, governed by privacy policies you haven’t read.
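Putting the three steps together, here’s a hedged sketch of the escalation logic. The vendor URL, response field, and thresholds are all invented for illustration; this is not any real vendor’s API:

```python
import requests  # hypothetical HTTP hand-off to a third-party age estimator

# Placeholder endpoint; deliberately not a real vendor's API.
VENDOR_URL = "https://age-vendor.invalid/v1/estimate"

def age_waterfall(declared_age: int, minor_score: float, selfie: bytes) -> str:
    """Hypothetical three-step 'waterfall' described above."""
    # Steps 1-2: trust the declared age unless behavior contradicts it.
    if declared_age >= 18 and minor_score < 0.6:
        return "pass"

    # Step 3: the platform ships the user's face to an outside vendor,
    # governed by that vendor's privacy policy, not the platform's.
    resp = requests.post(VENDOR_URL, files={"selfie": selfie})
    estimated_age = resp.json()["estimated_age"]  # invented response field
    # The vendor says it deletes the image after estimating; the user
    # has no way to confirm that.
    return "pass" if estimated_age >= 18 else "restricted"
```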
Don’t Treat This Like It’s a Solution
These systems exist now, yes.
But they’re flawed.
The accuracy isn’t proven.
Research shows facial analysis has higher error rates for people of color and trans/non-binary individuals.
Behavioral analysis can be gamed.
And kids are resourceful.
If they want to get around this, they’ll find a way.
Don’t make the mistake of thinking, “Oh, good, the platform is handling it.”
They’re not.
So, what can you actually do? (Pardon the list within the list. 😉)
Teach them AI literacy… again, and again, and again.
Remind them that AI is a tool created by companies, not a friend. Companies can’t protect each individual person, and, quite honestly, there’s no incentive for them to do so anyway. Make sure your kid understands this: no matter what they think, the app or technology or AI cannot actually care about them. It’s just designed to act like it does to keep them using the platform.
Try:
“Hey, I know this app feels like it’s your friend, but remember that it’s designed by a company who wants to keep you using it. It can’t care about you the way a person can, no matter what it says. That doesn’t mean you can’t use it, but just keep that in the back of your mind when it feels like it is an actual friend.”
Audit privacy settings together.
Do this WITH them, not to them.
Go through their apps with them, find the safety features, and turn them on. Teach them how to control each app’s access to photos, mic, location, and camera to protect their privacy even further. Help them understand why it matters.
Try:
“Let’s go through your apps together this weekend. I’m not checking up on you and you’re not in trouble. I just want to show you how to control what these companies can see. Location tracking, microphone access, all of it. You should know what you’re sharing and how to turn it off.”
Make an agreement about sharing their faces.
Share the why, and remind them that this isn’t about controlling their behavior. It’s so you can vet the app by reading the privacy policy and making sure the company is trustworthy before your kid’s data goes anywhere.
Try:
“Protecting your digital identity is really important right now. There are people using other people’s faces to make fake accounts, scams, and even AI videos. So if any app asks you to take a selfie to prove your age or upload an ID, stop and get me first. I’ll read their privacy policy and make sure the company is trustworthy before your face goes anywhere. This isn’t about controlling you, and you won’t be in trouble for asking. I just want to make sure your face stays yours.”
Stay the actually safe adult.
The tech will fail at some point. It will block good content and let bad content through. Make sure your kid knows they can come to you when something goes wrong.
This starts by working with them to understand and navigate this new world, not trying to control them, lock them out, or spy on them.
Try:
“Look, this tech is going to mess up. It’ll block stuff you should be able to see, and it’ll let through stuff you shouldn’t. When that happens, I need you to know you can tell me. I’m not going to freak out or take your phone away. I just want to help you figure it out. We’re both learning how to navigate this stuff.”
Keep advocating and don’t get complacent.
This isn’t over. These systems are being built in real time by companies that have consistently chosen profit over safety. So don’t let “we added AI age prediction” be the end of the conversation about platform responsibility.
Contact your representatives and support organizations like the Center for Humane Technology, All Tech is Human, and Common Sense Media.
Get your kids involved, too, if you can. Help them understand the why.
Try:
“I know you’re sick of hearing adults freak out about tech. But here’s the thing: it’s really important to me that we learn this stuff together, because my job is to prepare you for being an adult, and these companies are building systems that will follow you forever. So, I wanted to show you this [insert thing that takes no more than 3 minutes of their attention here]. What do you think?”
The Bottom Line?
Staying informed, talking with your child about what’s happening online, and teaching them critical thinking matter more than ever.
By reading this article, you are ahead of the game.
So, keep up.
And remember your resources!
If you have things you want me to explore or explain, let me know in the comments or reach out to me directly.
We don’t have to do this alone—we were never meant to.
Author Spotlight
We’re so grateful to Courtney for allowing us to share her story here on Code Like A Girl. You can find her original post linked below. If you enjoyed this piece, we encourage you to visit her publication and subscribe to support her work!
Join Code Like a Girl on Substack
We publish three times a week, sharing technical tutorials, personal stories, and leadership insights from women and non-binary technologists.
Since 2016, Code Like a Girl has amplified more than 1,000 writers and built a global community committed to changing who gets seen in tech.
Our next mission is to get Code Like a Girl on the technology bestseller list so we can amplify even more women in tech. Paid subscriptions help us get there. If you want to support that mission at a low cost, you can use this discount code.
If you want to support us at full price, you can find that through the subscribe button below.
If that’s not an option for you, don’t worry. We would love to have you read Code Like a Girl for free. The button below works for that, too!
Every read and every share helps our writers get seen, and that support matters more than you know.