What We’ve Learned 4 Months On: Navigating the Online Safety Act and Age Verification

In phased enforcement since July of this year, the Online Safety Act sweeps across nearly every corner of the UK’s online space. Any service that hosts user-generated content must take far more aggressive steps to verify a user’s age before granting access to potentially harmful material. 

Plus, enforcement doesn’t stop at the border. Companies based abroad must also comply if their services reach UK users. Those that don’t risk walking straight into audits and penalties of up to £18 million or 10% of qualifying worldwide revenue, whichever is greater. Senior managers can also be prosecuted if failures on their watch lead to serious harm.

In this post, we’re looking at the compliance lessons that have emerged four months into the implementation period and how JT can help operators meet the strict new standards of this law.

A rocky start

The early stretch of the rollout has shown how difficult it is to apply age verification, both legally and technically.

Short on time and understandably wary of fines, many platforms are now asking users to upload government-issued IDs to access even low-risk, general-audience content. Privacy groups quickly criticised the trend, warning that it normalises surveillance under the guise of safety.

Analysts also argue that smaller services are held to rules built for giants. Because the Act doesn’t distinguish between global platforms and niche communities, independent blogs and volunteer-run forums without legal teams or big budgets face the same compliance burden as large operators. Many have been forced to block UK traffic or shut down entirely as a result.

Public resistance is, unsurprisingly, becoming harder to ignore. Over 550,000 people (and counting) have signed a petition calling for a repeal.

Critics are likewise questioning whether the rules can even be comprehensively enforced, given how easy it is to circumvent age checks with location-masking tools. VPN downloads in the UK notably spiked 1,800% within days of the new checks going live.

Early lessons 

Despite a rough start, age verification under the Online Safety Act is changing how digital services operate. Here are a few key lessons from the months since the latest regulations went live.


Lesson 1: Self-declaration is dead

Platforms can no longer rely on passive methods like age-gate popups. Ofcom now expects them to verify a user’s age using independent signals that are not easy to fake or bypass.

Aside from asking for IDs and matching selfies, this can mean scanning the user’s face to estimate their age or cross-checking their emails against long-term account activity. Platforms can also query bank records through secure APIs, or check directly with the user’s mobile provider to confirm that the phone number is registered to an adult.


Lesson 2: Compliance is complex

The Online Safety Act is an active regulatory regime, not a one-and-done checklist. Its shifting requirements make keeping up difficult and expensive, even for well-established platforms.

Startups are even more vulnerable to ever-changing legal, technical, and design requirements. For many of them, staying compliant may mean freezing development or leaving the UK market entirely.


Lesson 3: Privacy risk is multiplying

Many services that used to operate with nothing more than an email and password now have to collect facial scans, passport photos, driver’s licence details, and behavioural signals used to estimate age — all of which fall under the most sensitive UK GDPR data category.

Most of this data is handled by third-party verification vendors, but storage, access, and deletion policies aren’t consistent across providers. Some vendors keep data longer than others, and some bring in subcontractors without disclosing who they are or what they handle.


Lesson 4: Enforcement is real

Ofcom is already busy building precedent across a range of cases.

OnlyFans’ parent company was recently fined £1.05 million after providing inaccurate details about how its facial age estimation system worked. 

In another case, a nudification site received a £50,000 fine for failing to block underage users, plus an additional £5,000 for not responding to Ofcom’s information request.

The regulator also fined 4chan £20,000 for failing to submit its illegal content risk assessment. The site — which is one of the largest anonymous forums in the world — tried to fight jurisdiction in a US court, but Ofcom pressed ahead, seemingly testing how far it can go to block overseas platforms that don’t cooperate.

Sector readiness

As the rules begin to bite, businesses across key sectors are waking up to new obligations. Here is how those industries have been handling compliance.


Adult content providers

Adult sites are facing the toughest compliance demands under the Act, and the commercial cost is already showing. UK traffic dropped sharply after enforcement began. Many sites that can’t meet age-check rules without rebuilding their core systems have chosen to block UK users or cease operations altogether. 

To make matters worse, VPNs are draining traffic from compliant platforms and diverting it to unregulated sites with fewer safeguards. 


Gaming and gambling operators

Games and interactive platforms that attract under-18s are now required to complete a children’s risk assessment and apply specific moderation and reporting measures. 

These platforms also have to meet GDPR standards. And with the ICO involved, they’re now accountable to two regulators.


Retailers selling age-controlled products

Merchants must now verify a customer’s age before allowing them to purchase age-restricted goods and make sure that underage users do not see restricted listings in the first place. 

Some Shopify merchants and DTC brands are turning to self-sovereign identity wallets and similar tools to automate age checks and keep legitimate customers from getting blocked.

Looking ahead

Ofcom’s ambitious regulatory roadmap stretches into 2027, with more codes and detailed reporting requirements scheduled over the next two years. The regulator plans to expand the list of priority offences to include serious self-harm content and cyberflashing, as well as certain forms of violent pornography.

Platforms can expect a steady rollout of new obligations rather than a single fixed deadline. And as enforcement becomes more targeted, regulated services must track ongoing changes and adjust to new guidance.
Meet the Online Safety Act’s standards with JT

It’s clear that now is the time for businesses to review their age verification strategy to stay compliant with the Online Safety Act’s evolving regulations — and JT is here to help.


JT Age Verification: High assurance, low intrusion

With JT Age Verification, companies can confirm a user’s age in real time using only their mobile number — without introducing friction or collecting sensitive personal data. This keeps users in flow, reduces drop-off, and meets strict compliance standards.

Mobile network data linked to verified subscriber accounts is a far more reliable signal than self-declared information. The API returns a simple true/false response based on verified age and can be set to flag content filters, underage accounts, or active parental controls. 

Because no personal information is shared or stored, the approach aligns with both UK GDPR and Ofcom’s published guidance.
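A true/false response like the one described above is straightforward to act on. The sketch below shows one way a platform might gate access from such a payload; the field names (`over18`, `parental_controls`) are illustrative assumptions, not JT's documented schema.

```python
def gate_access(api_response: dict, require_no_parental_controls: bool = True) -> bool:
    """Decide whether to grant access from a hypothetical verification response.

    Assumes a payload shaped like:
        {"over18": true, "parental_controls": false}
    Field names are illustrative, not JT's actual API schema.
    Missing fields fail closed: access is denied unless age is confirmed.
    """
    if not api_response.get("over18", False):
        return False
    if require_no_parental_controls and api_response.get("parental_controls", False):
        return False
    return True
```

Failing closed on missing fields is the important design choice: an incomplete or malformed response should never grant access to age-restricted content.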


See how JT can help you comply with the Online Safety Act

Learn more about how JT Age Verification can help you meet age verification standards without compromising user experience or privacy. Or speak to an expert to see how JT can help your business with compliance and more.