Online age checks are intended to keep violent, sexually explicit or other age-inappropriate content away from children. But do they?
Under-age social media users are often able to circumvent age restrictions, especially at the account-creation stage, research shows. In other cases, age checks have blocked children from accessing content that was later determined to pose no risk.
When faced with obvious harms, the desire to “do something” is understandable. But we need a higher standard. When it comes to children, we need to do something that works. And age verification as it is currently practiced often falls short of that basic goal.
Age checks are rooted in real concerns
Most parents of adolescents in the United States worry about social media’s effects on mental health, among other issues, according to the US surgeon general. At the same time, parents are concerned about the scope of age checks. In a study by the nonpartisan Center for Democracy & Technology, parents and teenagers voiced concerns about the checks’ effectiveness, data privacy, and user agency.
At their core, age-verification systems aim to prevent young people from accessing harmful or adult-geared content, but many critics have warned that even well-intentioned policies could create risks to free speech and data privacy for all internet users, not just children.
What’s considered harmful depends on whom you ask. Industry regulations, state laws, and national policies can all dictate which content is deemed harmful to young people, but some of these definitions are vaguer than others.
The United Kingdom’s Online Safety Act, for example, lays out categories of content that children must be shielded from online. They include:
- Pornography
- Content that encourages, promotes, or provides instructions for:
  - Self-harm
  - Eating disorders
  - Suicide
- Bullying
- Abusive or hateful content
- Content which depicts or encourages serious violence or injury
- Content which encourages dangerous stunts and challenges
- Content which encourages the ingestion or inhalation of, or exposure to, harmful substances
In Australia, the ban on social media accounts for people younger than 16 cites broader concerns about screen time and mental health.
Whether these measures effectively shield young people from harm is debated.
Content restrictions don’t always get it right
Some researchers have warned that age checks could impede access to medically accurate sexual information and other educational content.
After the U.K. Online Safety Act took effect, the government noted “instances of over-moderation” in which children were blocked from viewing content that didn’t pose a risk.
Even with age-check systems in place, potentially harmful and age-inappropriate content remains accessible to kids. In some cases, childhood deaths have been linked to suicide- and self-harm-related content and risk-taking social media challenges, according to the surgeon general’s advisory.
The same advisory, however, noted that social media can be a source of positive community, connection, self-expression, and important information.
Age-gating access to those corners of the internet stands to disproportionately affect young people who rely on online communities for support and information.
Measures put in place to label content and guard children from age-inappropriate material have also been flawed.
In September, Disney agreed to pay $10 million to settle allegations by the Federal Trade Commission, which accused the company of failing to label its children’s videos on YouTube as “Made for Kids.”
Because the videos were mislabeled, children’s personal information was collected while they watched, and “Not Made for Kids” videos autoplayed when the labeled content ended. Children also became targets of online advertisements geared toward older viewers.
Disney didn’t admit any wrongdoing as part of the settlement.
Are age verification systems effective? More research is needed
The effectiveness of age checks remains to be seen.
In the weeks after Australia’s policy took effect, social media companies revoked access to about 4.7 million accounts belonging to children.
Findings from a 2024 study suggest that the widespread global deployment of age verification has resulted in privacy-invasive or ineffective methods.
Research from the U.K.’s independent online safety regulator, the Office of Communications, pointed to some measurable changes in internet behavior, but it’s still too soon to evaluate effectiveness.
The number of visitors to pornography sites in the U.K. has declined by one-third since the Online Safety Act took effect in July, the office noted in a December online safety report. The office is assessing how much the decline may have reduced children’s exposure to pornography.
“While it is too soon to assess the long-term impact of these changes, the widespread adoption of age checks means that children of all ages are now less likely to encounter pornography accidentally, which research has shown to be the way most children encounter porn,” the report said.
The office is expected to publish its initial data and analysis on children’s online experiences by May.