Reading Time: 13 minutes

The Online Safety Act. Who could argue against it with a name like that? The explainer published on the UK government’s website goes into detail on how the Act will protect children and adults, keeping us safe from violent and pornographic content online. But what are the consequences of the Online Safety Act for small business websites? And what about the SEO implications? In this assessment of the OSA, we look at these questions and discuss what the legislation means in practice.

And, as with most privacy-invasive legislation that chips away at our freedoms, is there more to it than meets the eye? We tackle that question too, while discussing how the Act will affect you and, potentially, your business going forward.

Is the UK turning into an authoritarian nightmare? Photo by Tobias Tullius on Unsplash

What is the Online Safety Act in the UK?

The Online Safety Act (OSA) is a set of laws that requires online platforms to take action to prevent children from seeing illegal and harmful material online, with these duties enforced from July 2025. Failure to comply with the new rules, enforced by Ofcom, could result in large fines (up to £18 million or 10% of worldwide revenue, whichever is greater) for some of the most visited sites on the internet.

These laws are said to help protect children from misogynistic, violent, hateful, or abusive material online. This has led many websites that host pornographic material to implement age checks through age verification services and face-scanning technology.

How does this impact business websites?

Many business websites are static, meaning they likely do not fall under the scope of the Online Safety Act. Within this context, static means that your website functions more as a digital brochure that lists products, services, and information. These websites are considered ‘low risk’ as they do not allow users to publish content via comment sections or message boards that can be misused.

What kinds of websites are affected?

However, business websites are not always outside the scope of the Online Safety Act. If your business website has any of the following features, you will likely be affected by the implementation of the Online Safety Act:

  • Message boards and forums
  • Comment sections where users can interact and submit content
  • Review systems
  • Chat functionality
  • Search functions (if user content can be accessed)

The required duties are tiered depending on a service’s reach and risk, and many business websites will fall under the “regulated services” tier, meaning only some basic actions are needed to comply with the law.

This means:

  • A risk assessment exploring the possibility of illegal content appearing on your site
  • A children’s access assessment covering the likelihood of children using the service
  • An easy-to-use system for users to report harmful content (a minimal sketch follows this list)
  • Logs and records of the steps taken, to demonstrate compliance
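As an illustration only, a reporting system and compliance log need not be elaborate. The sketch below is a minimal, hypothetical example, assuming a small Node.js site built with Express; the route name, fields, and log file are illustrative choices rather than anything prescribed by the Act or Ofcom:

```typescript
// Minimal sketch of a content-report endpoint with a simple compliance log.
// Hypothetical example: route name, fields, and storage are assumptions.
import express from "express";
import { appendFile } from "node:fs/promises";

interface ContentReport {
  contentUrl: string;      // page, comment, or review being reported
  reason: string;          // e.g. "illegal content" or "harmful to children"
  reporterEmail?: string;  // optional contact for follow-up
  receivedAt: string;      // ISO timestamp, kept for record-keeping
}

const app = express();
app.use(express.json());

app.post("/report-content", async (req, res) => {
  const { contentUrl, reason, reporterEmail } = req.body ?? {};
  if (!contentUrl || !reason) {
    return res.status(400).json({ error: "contentUrl and reason are required" });
  }

  const report: ContentReport = {
    contentUrl,
    reason,
    reporterEmail,
    receivedAt: new Date().toISOString(),
  };

  // Append each report to a log file so there is a record of what was
  // received and when; a database would work just as well.
  await appendFile("compliance-log.jsonl", JSON.stringify(report) + "\n");

  res.status(202).json({ message: "Report received and logged" });
});

app.listen(3000);
```

However simple, something along these lines covers both the reporting and record-keeping points above; the moderation workflow behind it is the part that takes real, ongoing effort.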

Unsure if that impacts your website?

If you are still unsure where your website falls, it is best to consider the aims of the Online Safety Act and what it is designed to prevent. If harmful or illegal content could spread on your website through any means (comments, message boards), then preventative measures need to be taken. If no such functionality exists on your site, you are likely outside the scope and can continue running your website as usual.

Consequences for some small businesses

It all sounds simple on paper (or on screen), but there are far-reaching implications that can affect the way small businesses operate and engage with their communities. For creatives in particular, a website is a way to share ideas, swap content, and form collaborative projects and business relationships. The introduction of the Online Safety Act means more upkeep and mandatory record keeping, which may push an already strained small team to call it quits.

Before even trying to negotiate the minefield of compliance, you may need additional website features implemented, such as report buttons and moderation tools, which means paying developers and designers. This could push small businesses that operate on a shoestring budget to breaking point, and that isn’t hyperbole; paired with the cost-of-living crisis and the rising costs of running a business, it might be the straw that breaks the camel’s back.

Endless grey areas – what about third-party providers?

There are still endless grey areas, such as what happens if a small business integrates and relies on Shopify, Squarespace, or other third-party providers.

UK businesses have no control over the feature set of these platforms, yet they could still find themselves in violation of the Online Safety Act through regulation by association; this clearly has not been thought through fully. Businesses built on top of these third-party platforms do not have the option of simply switching to another provider, as the whole operation may run on, and rely on, their chosen service.

Censorship is becoming a big problem in the UK. Photo by Mick Haupt on Unsplash

SEO implications of the Online Safety Act

For businesses that rely on organic web traffic to drive sales and conversions, the SEO implications of the Online Safety Act are worrying and yet another hurdle to deal with.

On the surface, it may not seem like the Online Safety Act has any real SEO implications, but if you look at it closely, it’s clear there will be a real impact on how search engines provide results and the websites they display.

Community content becomes collateral damage

Community content, such as forums, message boards, and comment sections, is used by Google to gauge engagement through user signals.

However, under the new rules, website owners may need to err on the side of caution, de-emphasising community-led content or, in some cases, disabling it altogether. This would have a hugely negative impact on SEO, but small businesses on tight budgets may have no choice but to cut features that once brought in business, leaving once-thriving communities abandoned in favour of compliance with the law.

Should businesses want to keep these community features, they may have to implement reporting functionality, logging, and even age checks in some cases, all requiring development time and added expense.
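To make that concrete, here is a minimal, hypothetical sketch of gating community features behind an age-verified flag set after a third-party check; the session field, callback route, and the idea of trusting a redirect are simplifying assumptions for illustration (a real integration would validate a signed result from the provider):

```typescript
// Hypothetical sketch: allow community features only for age-verified sessions.
// The session flag and callback route are illustrative assumptions.
import express from "express";
import session from "express-session";

const app = express();
app.use(express.json());
app.use(session({ secret: "change-me", resave: false, saveUninitialized: false }));

// Middleware: block community features until the session is marked age-verified.
function requireAgeVerified(req: express.Request, res: express.Response, next: express.NextFunction) {
  if ((req.session as any).ageVerified === true) return next();
  res.status(403).json({ error: "Age verification required to use community features" });
}

// Route the verification provider redirects back to after a successful check.
// In reality you would verify the provider's signed result before setting the flag.
app.get("/age-verification/callback", (req, res) => {
  (req.session as any).ageVerified = true;
  res.redirect("/forum");
});

// Community feature gated behind the check.
app.post("/forum/comments", requireAgeVerified, (req, res) => {
  res.status(201).json({ message: "Comment accepted" });
});

app.listen(3000);
```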

Potential need for website migration

If a business has built its service on a platform that is not compliant with the Online Safety Act, the only real option is migrating to a provider that is.

This is a huge undertaking that costs businesses greatly in time and money, not to mention that customers may be locked into the existing ecosystem and will not automatically be carried over with a service migration.

Migration itself also has SEO consequences: the hard-won trust and authority signals attached to your existing setup can be lost when moving to a different infrastructure or provider. Then there are the inevitable crawl errors and 404s from old pages, and that’s just the tip of the iceberg.
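Much of that damage can be limited by mapping every old URL to its new home with permanent (301) redirects, so link equity and bookmarks carry over. A minimal sketch, assuming an Express front end and an illustrative URL map:

```typescript
// Sketch: 301-redirect old URLs to their new locations after a migration.
// The example paths are placeholders; a real map would come from the old
// site's sitemap or CMS export.
import express from "express";

const app = express();

const redirectMap: Record<string, string> = {
  "/shop/widgets": "/products/widgets",
  "/blog/2023/online-safety": "/articles/online-safety-act",
};

app.use((req, res, next) => {
  const target = redirectMap[req.path];
  if (target) {
    // 301 tells search engines the move is permanent, preserving most ranking signals.
    return res.redirect(301, target);
  }
  next();
});

app.listen(3000);
```

Redirects won’t save everything, but they give crawlers a clear trail from old pages to new ones instead of a wall of 404s.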

Online Safety Act, Google and SEO practices

Google’s E-E-A-T signals still apply here; trustworthy and safe websites are favoured and hold high value. The only difference is that now the criteria and definition of “safe” have changed.

Google will likely prioritise indexing websites that feature report functionality and safeguarding measures. That makes sense for Google’s own compliance, as it could be held responsible for serving inappropriate content to children.

The government has said it will hand out huge fines to companies such as Google if they do not comply with the new rules, so you can expect a direct impact on how Google constructs its results pages and serves content.

Impact on trust indicators (website reviews and comments)

Needing to moderate each comment and review will have consequences for your reputation in the eyes of customers and Google. Filters put in place may be too strict and discourage anyone from reading comments and reviews, or leaving them.

You risk ending up with a collection of vague, watered-down reviews that survive only because they are “safe” and carry no risk of penalties. Users then leave your site due to a lack of useful information, which signals to Google that your content isn’t valuable enough to rank.

Reviews and organic content will slowly be drained of personality and nuance in favour of “safe”, sanitised content, even though this reduces engagement and gives potential customers less useful help.

The implications for wider websites and services

The Online Safety Act isn’t just about porn; the implications are far-reaching and could see your favourite sites start asking you to prove your age via facial scans or by submitting private information. Here are just a few services and industries that have begun to integrate age and ID checks into their products and services:

Gaming

You would be mistaken if you thought the new laws only affected adult websites.

The impact of the Online Safety Act is far-reaching, prompting companies such as Microsoft to start rolling out age verification for its Xbox gaming services. This means that even harmless activities such as gaming risk sweeping censorship as a result of the new laws.

“Starting early next year, age verification will be required for these players in the UK to retain full access to social features on Xbox, such as voice or text communication and game invites. Players who don’t verify their age between now and early 2026 can continue to play and enjoy Xbox.”

— Microsoft’s Vice President of Gaming Trust & Safety

Other services within the gaming industry are expected to make similar announcements in the coming weeks to protect younger audiences. 

Online communities

Reddit has started requiring UK users to verify their age when viewing ‘mature’ content. This could include news stories, depictions of violence, and other content intended for mature audiences. Hobby groups could be impacted if children can access them; even volunteer-run groups would need to comply with the new rules, which might make keeping small sites running unviable.

Search engines

Search engines like Google, and ‘related content’ recommendations on platforms like YouTube, will likely be affected.

Ofcom guidelines request that tech companies adjust their algorithms to prevent inappropriate and ‘harmful’ content from being shown to children. Google may even have to restrict news stories for unverified users if those stories are deemed inappropriate for children due to violence or any of the other criteria mentioned previously.

Spotify

Spotify has announced it is working with Yoti to verify the identity and age of its users. The methods, as described in its blog post, will include facial recognition and ID scanning to determine age.

Many critics are drawing comparisons between the UK and 1984. Photo by Markus Spiske on Unsplash

Age verification methods

Websites verify age through credit card checks, photo ID, or age estimation using your device’s camera. Ofcom states that the methods chosen must be ‘robust’ and ‘reliable’, which sounds as ambiguous as what is defined as ‘hateful’ and ‘misogynistic’ content. More on that later. Here’s how the recommended methods work.

Credit card checks

A payment processor checks the credit card details to confirm the card is valid based on the information provided; because UK credit cards are only issued to adults, a valid card serves as evidence of age. This is expected to work similarly to a pre-authorisation, except that it is a “zero-amount authorisation”, as no funds are frozen or deducted.
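As a rough illustration of that flow, the sketch below shows a zero-amount authorisation being used purely as an adult-age signal. The psp client and its authorise method are hypothetical stand-ins rather than any real payment provider’s API:

```typescript
// Hypothetical sketch of a zero-amount authorisation used as an age signal.
// The `psp` client and `authorise` call are illustrative, not a real provider's API.
interface AuthorisationResult {
  approved: boolean;
}

interface PspClient {
  authorise(params: {
    cardToken: string;        // tokenised card details from the checkout form
    amountMinorUnits: number; // amount in pence
    currency: string;
  }): Promise<AuthorisationResult>;
}

async function verifyAdultByCard(psp: PspClient, cardToken: string): Promise<boolean> {
  // Authorise £0.00: nothing is frozen or deducted, but the card issuer still
  // confirms the card is valid. Since UK credit cards are only issued to
  // adults, an approved check is treated as evidence the holder is over 18.
  const result = await psp.authorise({
    cardToken,
    amountMinorUnits: 0,
    currency: "GBP",
  });
  return result.approved;
}
```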

Digital identity wallets

Digital identity wallets, such as the upcoming GOV.UK Wallet, allow users to store digital credentials that can be used for identification purposes. This is expected to include driving licences and passports. Platforms such as Yoti also provide a similar service.

Face scanning

Another age verification method is face scanning. This works by scanning photos or taking a selfie with your phone or webcam, allowing technology to estimate your age. This is expected to be one of the least popular options for age verification, which isn’t too surprising considering how invasive it is.
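Purely for illustration, a site using this method might send a selfie to an estimation service and apply a threshold with a safety buffer, since estimates are imprecise. The endpoint, response shape, and buffer below are hypothetical assumptions, not any particular vendor’s API:

```typescript
// Hypothetical sketch: facial age estimation with a safety buffer.
// The endpoint URL and response fields are illustrative assumptions.
async function estimateAgeFromSelfie(selfie: Blob): Promise<number> {
  const form = new FormData();
  form.append("image", selfie, "selfie.jpg");

  const response = await fetch("https://age-estimation.example.com/v1/estimate", {
    method: "POST",
    body: form,
  });
  const data = (await response.json()) as { estimatedAge: number };
  return data.estimatedAge;
}

// Only treat users as adults if they look comfortably over 18; anyone close
// to the threshold would be sent to a stricter method such as photo ID.
async function passesFaceScan(selfie: Blob, buffer = 7): Promise<boolean> {
  const estimatedAge = await estimateAgeFromSelfie(selfie);
  return estimatedAge >= 18 + buffer;
}
```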

Email address

Providing an email address allows it to be checked against other services where it is already registered. This may help estimate the user’s age and is a less invasive option than other age verification methods.

Other methods

Other methods will likely be implemented to determine age online, including SMS verification. The effectiveness and security of these services may vary, though, which is a concern for the many people worried about data leaks and reduced privacy online.

What is the problem with the Online Safety Act?

Concerns have been raised by politicians and the public regarding the Online Safety Act. Let’s look at the good and the bad and learn whether these concerns are valid or exaggerated.

The positive

The main purpose of the Online Safety Act is a good one.

Protecting children from inappropriate content matters in an age where anything is accessible at the click of a few buttons. With many platforms, such as Instagram and TikTok, hosting content that falls through the moderation cracks, these laws will incentivise tech companies to do more to keep algorithms clean for certain age groups.

Here are some key points that we consider to be positive:

  • A safer place for children to browse the web without inappropriate content
  • It may help reduce the bot problem on networks like X by verifying users
  • Strengthens protections for the vulnerable (including adults)
  • More customisable content filters resulting in personalised experiences

There are other potential benefits associated with the Online Safety Act, but those are some initial key benefits that come to mind.

The negative

President Ronald Reagan famously said at a press conference in 1986,

“I think you all know that I’ve always felt the nine most terrifying words in the English language are: I’m from the Government, and I’m here to help”.

This is what comes to mind when hearing about authoritarian and dystopian laws designed to “protect children” or “keep us safe”. The truth is, the Online Safety Act could be a slippery slope that results in widespread censorship, silencing the views of certain groups.

Online censorship has seemingly been the goal of Western governments for decades. Combined with facial-scanning cameras mounted on vans in our cities, compulsory digital ID requirements, and a centralised digital currency, this all paints a bleak, dystopian picture in which governments can decide what opinions you can hold, and even what you can spend your money on.

Censorship: Setting a dangerous precedent

We have already seen that some content on X has limited visibility for some users. Accounts that are verified or old enough seem to be exempt from these changes, but newer users report having difficulty viewing news stories, including coverage of protests and some political content.


There are already calls to expand the current Online Safety Act with even more limits on what can be seen and discussed online.

“The Online Safety Act (OSA) cannot keep the UK public safe as it was not designed to tackle misinformation, MPs say today, in a wide-ranging report that urges the government to go further to regulate social media companies and disincentivise the viral spread of false content.”

Science, Innovation and Technology Committee (SITC)

While I’m sure that the SITC has the best intentions, expanding the OSA further could result in a grey area, producing a culture of uncertainty or even fear for those with dissenting or alternative viewpoints. Where do you draw the line in the name of safety and security? It reminds me of the frequently paraphrased quote from Benjamin Franklin.

“Those who sacrifice privacy for security deserve neither”

—  Benjamin Franklin

I wouldn’t say there should be no sacrifice at all, but there must be a more delicate balance than the shotgun approach being taken here.

The West once prided itself on protecting civil liberties, free speech, and promoting transparency. Nowadays, people like Edward Snowden have to seek asylum in Russia, a country once criticised for suppressing dissent and being the antithesis of what we stood for in the West. Whether or not you agree with the actions of Edward Snowden, the reality is that this all paints a bleak picture for democratic institutions that once prided themselves on being beacons of freedom and liberty.

“Saying you don’t care about privacy because you have nothing to hide is like saying you don’t care about freedom of speech because you have nothing to say.”

Edward Snowden

Who determines what is hateful?

Navigating sensibilities can sometimes be difficult. Who should determine what a hateful opinion or a hateful comment is? There are obvious examples that should never be tolerated, such as racism or blatant discrimination, but what about the grey areas outside of race and sensitive subjects? What about the complex nuance of political opinion and alternative viewpoints?

Is the government qualified to be the arbiter of good and evil when we’ve seen misconduct and scandals for decades? I would argue no. Governments are not morally or technologically equipped to enforce ambiguous laws that bring many negative consequences.

What about a scientific belief that challenges the current consensus? Is that defined as a harmful anti-scientific stance that spreads misinformation? Have we learned nothing from history, when Galileo challenged the mainstream scientific consensus by defending heliocentrism, only to be tried and confined despite eventually being proven right?

Flawed systems and easy circumvention

Despite the strict rules enforced on many websites and platforms, the measures implemented are easy to circumvent. The use of VPNs has become increasingly popular as a means of evading UK-specific restrictions; VPN apps topped the charts on Apple’s App Store within days of the Online Safety Act coming into effect.

A VPN routes your traffic through a server in a location you choose, so websites see that server’s location and IP address instead of your real one.

Another popular workaround has been using video game characters to pass the facial scans that estimate a user’s age. Photo-realistic games such as Death Stranding 2 have been used to fool the filters via their in-game photo modes. With AI-generated imagery so widespread and increasingly indistinguishable from authentic content, it’s not surprising that face-scanning software struggles to tell what is real and what isn’t.

With methods that can trick these systems in seconds, you have to wonder if it’s worth the privacy trade-off.

There have been concerns about the UK potentially banning VPNs, which would be disastrous for the countless multinational organisations that rely on them for daily business operations. While the UK has no immediate plans to ban VPNs, the government is “looking closely” at how they are being used to circumvent online filters and content bans.

The response from the public

The general response from the public so far has been negative, with many seeing the Online Safety Act as yet more censorship: authoritarian laws that may not set out to limit free speech, but end up stifling it as platforms err on the side of caution to avoid penalties.

A petition to “Repeal the Online Safety Act” has been put forward and, at the time of writing, has nearly half a million signatures. The government has responded, saying there are “no plans” to repeal the Act.

The BBC posted the following video to TikTok, which shows exactly how these new laws impact reporting and why they are detrimental to the public looking for objective news.

“Wide-ranging content, including posts about the wars in Ukraine and Gaza – have been blocked by social media companies in an attempt to comply with the UK’s new Online Safety Act, BBC Verify has found.”

— BBC News, via TikTok

Being concerned with how the Online Safety Act affects free speech and the ability to stay informed online isn’t just conspiracy talk; this video from the BBC is a clear example of one of many situations where content is being unnecessarily blocked from audiences.

The use of third-party verification services

The Online Safety Act is reliant on third-party services that authenticate your identity. The problem is, many of these services have spotty track records when it comes to protecting privacy and data.

The truth is, data leaks do happen, and with sensitive details such as which porn sites you browse and what news you read being tied to an ID, it feels a bit unsettling to say the least.

This isn’t just unwarranted paranoia either. Data leaks revealing personal information have happened before and will happen again.

This, combined with the UK government’s constant battle with encryption as a concept, doesn’t exactly fill me with confidence in how this will all play out. How can data be safe if backdoors are created into encrypted systems?

Final thoughts: is the OSA the wrong way to do the right thing?

The principle in itself is one we can all get behind — protecting children from the dark corners of the internet and adult-themed websites. The issue isn’t in the objective; it’s in the execution.

Meanwhile, censorship elsewhere is ramping up with payment processors limiting purchases on “adult-themed” games online, and news stories being blocked as a result of the Online Safety Act. This is all very worrying, and I can’t help but think there has to be a better way to tackle this issue.

Elsewhere in the world, Australia is introducing a social media ban for under-16s, which sounds good in concept, but who knows how this will affect access to other types of content online for marginalised groups, including LGBTQ+ communities that seek support online.

What are your thoughts on the UK’s Online Safety Act? Do you feel it is worth the trade-offs to keep children safe online? Please let us know in the comments or across our social media channels. We will be keeping an eye on how Ofcom enforces these rules in the coming months and updating our post with any new information as it becomes available.
