The UK communications regulator, Ofcom, has announced robust new measures to prevent children from accessing online pornography (plus other potentially harmful content), a key component of the Online Safety Act.
These new regulations will require websites and apps to implement highly effective age assurance systems by July 2025, marking a significant step towards creating a safer digital environment.
What Kinds of Websites and Apps Will The New Regs Apply To?
Ofcom says its new regulations will apply to websites and apps that host pornographic content, including those that publish their own material and platforms with user-generated content, such as social media, tube sites, and cam sites. The rules will extend to services that allow harmful content and are likely to be accessed by children. Also, they cover platforms with user-to-user or search functionalities where children may encounter inappropriate material. These categories are defined under the “Part 3” and “Part 5” provisions of the Online Safety Act.
What’s The Problem?
Children in the UK are encountering explicit material online at alarmingly young ages. For example, research from the Children’s Commissioner for England shows that among those who have seen online pornography, the average age of first exposure is just 13. Alarmingly, more than a quarter of children (27 per cent) encounter explicit content by the age of 11, and one in ten as young as nine!
This pervasive exposure poses significant risks to children’s mental health and understanding of relationships, consent, and self-worth. However, despite these dangers, it seems that many platforms have operated without adequate safeguards, allowing harmful material to reach young users with ease.
As Melanie Dawes, Ofcom’s Chief Executive, puts it: “For too long, many online services which allow porn and other harmful material have ignored the fact that children are accessing their services. Today, this starts to change.”
Also, until now, self-declared age verification methods, such as ticking a box to confirm your age, have proven largely ineffective. Platforms frequently treat all users as if they are adults and fail to provide meaningful barriers to prevent children’s access to explicit content.
A New Era of Online Safety
To tackle this issue, Ofcom has published detailed guidance for implementing effective age assurance measures as mandated by the UK’s Online Safety Act (passed in October 2023). These measures form a cornerstone of the Act, which aims to make online platforms accountable for their content.
For the platforms themselves, the new Ofcom regulations will mean:
– Immediate action for pornographic services. Platforms hosting their own pornography (‘Part 5’ services) must start introducing robust age checks immediately.
– Measures for user-generated content. Social media platforms and other user-to-user services (‘Part 3’ services) that allow user-generated pornography must implement highly effective age checks by July 2025.
– Children’s risk assessments. All user-to-user and search services likely to be accessed by children must complete a children’s access assessment by April 2025, with detailed risk assessments required by July.
What Is ‘Highly Effective’ Age Assurance?
Ofcom defines “highly effective” age assurance as methods that are accurate, robust, reliable, and fair. These methods must go beyond basic checks and address technical and practical challenges to ensure children cannot bypass safeguards.
For example, approved technologies include the following (a brief illustrative sketch follows the list):
– Photo ID matching. Verification using government-issued identification.
– Facial age estimation. Analysing users’ facial features to estimate age.
– Open banking and credit card checks. Ensuring users’ ages align with financial account requirements.
– Mobile network age verification. Checks conducted through mobile operators.
– Digital identity services. Systems leveraging verified digital identities.
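Purely as an illustration of how a platform’s engineering team might represent Ofcom’s list internally, the sketch below models the approved methods and flags self-declaration as non-compliant. This is not an official Ofcom specification; the type and helper names are hypothetical.

```typescript
// Hypothetical sketch: modelling the age assurance methods named in Ofcom's
// guidance. Names and structure are illustrative only.

type AgeAssuranceMethod =
  | "photo_id_matching"        // verification against government-issued ID
  | "facial_age_estimation"    // estimating age from facial features
  | "open_banking_check"       // confirming age via a verified bank account
  | "credit_card_check"        // credit cards require the holder to be 18+
  | "mobile_network_check"     // age check via the user's mobile operator
  | "digital_identity_service" // reusable verified digital identity
  | "self_declaration";        // e.g. ticking an "I am over 18" box

// Methods Ofcom's guidance treats as capable of being "highly effective".
const APPROVED_METHODS: ReadonlySet<AgeAssuranceMethod> = new Set<AgeAssuranceMethod>([
  "photo_id_matching",
  "facial_age_estimation",
  "open_banking_check",
  "credit_card_check",
  "mobile_network_check",
  "digital_identity_service",
]);

// Self-declaration alone is explicitly not acceptable under the new rules.
function isAcceptableMethod(method: AgeAssuranceMethod): boolean {
  return APPROVED_METHODS.has(method);
}

console.log(isAcceptableMethod("self_declaration"));  // false
console.log(isAcceptableMethod("photo_id_matching")); // true
```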
Self-Declaration Methods No Longer Acceptable
Critically, methods like self-declaration of age, and payment processes that do not require proof of adulthood, are no longer deemed acceptable. Also, platforms must ensure explicit content is not visible to users before or during the verification process, and must take steps to prevent users from circumventing the age assurance system.
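To make that gating requirement concrete, here is a minimal, hypothetical sketch (building on the types in the previous sketch) of how a service might withhold explicit content until an approved check has completed. The session model and function names are assumptions for illustration, not anything prescribed by Ofcom.

```typescript
// Hypothetical sketch of an age gate: explicit content is withheld until an
// approved age assurance method has confirmed the user is an adult.

interface UserSession {
  userId: string;
  // Populated only after a verification provider confirms the user is 18+.
  ageVerification?: {
    method: AgeAssuranceMethod; // from the previous sketch
    verifiedAdult: boolean;
    verifiedAt: Date;
  };
}

interface ContentItem {
  id: string;
  isExplicit: boolean;
  body: string;
}

// Returns the content only if it is non-explicit, or the session carries a
// completed check using an acceptable method. While verification is pending
// (or was attempted via self-declaration), nothing explicit is returned.
function serveContent(session: UserSession, item: ContentItem): ContentItem | null {
  if (!item.isExplicit) return item;

  const check = session.ageVerification;
  const verified =
    check !== undefined &&
    check.verifiedAdult &&
    isAcceptableMethod(check.method); // self-declaration would fail here

  return verified ? item : null; // null => show the age gate, not the content
}
```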
A Gradual Rollout with Broad Implications
Ofcom says the introduction of these measures will roll out incrementally, with adults beginning to notice changes in how they access certain services. For example, platforms may require users to upload ID, verify through biometric data, or use credit card checks.
As Ofcom’s CEO, Melanie Dawes, says: “As age checks start to roll out in the coming months, adults will start to notice a difference in how they access certain online services. Services which host their own pornography must start to introduce age checks immediately, while other user-to-user services – including social media – which allow pornography and certain other types of harmful content will have to follow suit by July at the latest.”
While these measures aim to protect children, Ofcom has also emphasised the importance of balancing privacy rights for adults. Notably, a survey by Yonder Consulting found that 80 per cent of UK adults support the implementation of age assurance measures to prevent children’s exposure to pornography.
How Will It Be Enforced?
To enforce compliance, Ofcom has launched an enforcement programme targeting platforms that fail to engage or comply with the new requirements. Non-compliance could result in fines and other penalties.
Benefits of the New Rules
Clearly, a key benefit of the new rules should be to protect children from harmful online content. The hope is that, by mandating robust age checks, platforms can significantly reduce the likelihood of children encountering explicit material, promoting safer and healthier online experiences.
Beyond safeguarding children, these measures appear to reinforce the UK’s leadership in the tech-safety sector. For example, according to research by Paladin Capital and PUBLIC, the UK accounts for 23 per cent of the global safety tech workforce, with 28 per cent of safety tech companies based in the UK. The introduction of age assurance measures is, therefore, expected to stimulate further innovation and growth within this burgeoning industry.
Julie Dawson, chief regulatory and policy officer at age verification platform Yoti, emphasised the importance of the guidance, saying: “It is essential for creating safe spaces online. Age assurance must be enforced across pornographic sites of all sizes, creating a level playing field and providing age-appropriate access for adults.”
Challenges and Criticisms
Despite the obvious benefit of protecting children, privacy and rights campaigners have raised significant concerns about Ofcom’s new age verification regulations under the Online Safety Act, warning of potential risks to privacy, security, and user rights. For example, The Open Rights Group (ORG), a digital rights advocacy organisation, has been vocal in highlighting these issues. Abigail Burke, ORG’s Programme Manager for Platform Power, has stated, “Age verification technologies for pornography risk sensitive personal data being breached, collected, shared, or sold.”
The ORG has also pointed to similar proposals that were abandoned in Australia due to privacy and security concerns, suggesting that the UK should carefully consider these issues to avoid unintended consequences. Civil society groups have similarly criticised Ofcom for allegedly prioritising changes suggested by the tech industry over recommendations from privacy advocates to strengthen the codes.
Campaign group Big Brother Watch has also highlighted risks associated with age assurance methods, including data breaches, digital exclusion, and the erosion of online privacy. They argue that while protecting children online is essential, many age verification technologies could create new vulnerabilities, particularly around data security.
Some critics have also drawn attention to unintended consequences observed in similar initiatives elsewhere. For instance, when Louisiana introduced age verification laws for pornography sites, traffic to regulated platforms dropped by 80 per cent. However, users did not stop accessing explicit material and instead migrated to less-regulated and potentially more harmful corners of the internet.
This sentiment has also been echoed by Aylo, the parent company of Pornhub, which has criticised the measures as “ineffective, haphazard and dangerous.” The company warned: “These people did not stop looking for porn; they just migrated to darker corners of the internet that don’t ask users to verify age. In practice, the laws have just made the internet more dangerous for adults and children.”
These criticisms highlight the tension between enhancing online safety for children and preserving individual privacy rights in the digital realm. While the regulations aim to protect vulnerable users, critics argue that their implementation must be carefully managed to avoid creating new risks or driving harmful behaviours underground.
Looking Ahead
Ofcom’s guidelines are a step forward in addressing the long-standing issue of children’s exposure to harmful online content. The hope is that, by enforcing robust age assurance, the measures can foster a safer online environment while balancing privacy considerations for adults.
As the July 2025 deadline approaches, the challenge will lie in ensuring that platforms adopt these measures effectively, without creating unintended consequences or compromising user rights. With rigorous enforcement and collaboration between regulators, platforms, and the safety tech industry, these changes could redefine online safety in the UK.
What Does This Mean For Your Business?
The introduction of Ofcom’s age verification regulations could be a pivotal moment in the effort to create a safer digital environment, particularly for children. By requiring websites and apps to implement robust age assurance systems, the UK aims to address the significant risks posed by children’s exposure to harmful online content, ensuring they are protected during formative years.
The potential benefits are clear: stronger safeguards for children, a reduction in exposure to inappropriate material, and a reinforcement of the UK’s leadership in tech-safety innovation. These measures signal progress in holding platforms accountable for their content and prioritising the safety of vulnerable users. As Julie Dawson of Yoti points out, creating “safe spaces online” is essential, and the consistent enforcement of age assurance can help achieve this goal.
However, this ambitious undertaking is not without its challenges. Privacy and rights campaigners have raised (valid) concerns about the risks of data breaches, digital exclusion, and the potential erosion of online privacy. The possibility of unintended consequences, such as users migrating to less-regulated corners of the internet, further complicates the picture. Critics, including Aylo and Big Brother Watch, have emphasised the need for careful implementation to avoid exacerbating existing risks.
For platforms, the regulations will demand a shift in how they manage user access and content. Implementing robust age verification systems will likely require significant investment in new technologies, such as photo ID matching or facial age estimation. Smaller platforms, in particular, may face challenges in meeting these requirements without external support or resources. Also, platforms must carefully balance compliance with privacy concerns to maintain user trust, particularly as adults begin to notice changes in how they access services.
Advertisers, too, will need to adapt. Platforms that introduce age verification systems may see shifts in user demographics, potentially affecting audience reach and targeting strategies. Advertisers that rely on platforms hosting adult content may need to navigate a changing landscape where regulated and unregulated spaces coexist, with a heightened emphasis on compliance and ethical advertising.
The success of these regulations will, therefore, ultimately depend on how well they balance the protection of children with the rights and privacy of all users. Ofcom’s approach, which allows space for technological innovation while setting clear standards, provides a solid foundation. However, ongoing dialogue and collaboration between regulators, platforms, advertisers, and advocacy groups will be essential to address concerns and adapt to unforeseen challenges.
As the July 2025 deadline draws closer, the spotlight will remain on how platforms respond to these requirements, how advertisers adjust their strategies, and how effectively Ofcom enforces the new rules. If managed successfully, the hope is that these measures could set a global benchmark for online safety, shaping a digital landscape where safety, privacy, and commercial interests coexist harmoniously.