The Online Safety Act 2023 (the Act) is a new set of laws that protects children and adults online.
It puts a range of new duties on social media companies and search services, making them more responsible for their users’ safety on their platforms. The Act will give providers new duties to implement systems and processes to reduce the risk of their services being used for illegal activity, and to take down illegal content when it does appear.
The strongest protections in the Act have been designed for children. Platforms will be required to prevent children from accessing harmful and age-inappropriate content and provide parents and children with clear and accessible ways to report problems online when they do arise.
The Act will also protect adult users, ensuring that major platforms will need to be more transparent about which kinds of potentially harmful content they allow, and give people more control over the types of content they want to see.
Ofcom is the independent regulator for online safety. It will set out steps providers can take to fulfil their safety duties in codes of practice. It has a broad range of powers to assess and enforce providers’ compliance with the framework.
Providers’ safety duties are proportionate to factors including the risk of harm to individuals, and the size and capacity of each provider. This makes sure that while safety measures will need to be put in place across the board, we aren’t requiring small services with limited functionality to take the same actions as the largest corporations. Ofcom is required to take users’ rights into account when setting out steps to take. And providers have simultaneous duties to pay particular regard to users’ rights when fulfilling their safety duties.
The Act requires organisations to implement controls to keep children safe, verify age and so on, and many have. However, these controls are often implemented only for clients whose Internet addresses are in the specific countries that require them. The Act has also had unintended consequences: some organisations are closing blogs and other user-contributed content sites because they do not have the resources to implement the controls, and some, Wikimedia prominent among them, have raised issues around author identity and safety.
Social media platforms Reddit, Bluesky, Discord and X all introduced age checks to block children from seeing harmful content.
Most adult websites have implemented age assurance checks on their sites asking users to either upload government-issued ID, provide an email address for comparison against use on other sites, or submit personal information to a third-party vendor for age verification.
Sites like Spotify are requiring users to submit face scans to the third-party digital identity company Yoti to access content labelled 18+.
The Reality:
In reality, Internet users can trivially circumvent these controls by using a VPN or proxy to make it appear, to the organisation with the duty to implement them, that they are elsewhere in the world.
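To make the weakness concrete, here is a minimal sketch of the kind of geolocation gate many providers deploy, assuming the MaxMind GeoLite2-Country database and the geoip2 Python library; the database path and the gated-country set are illustrative only. A VPN exit outside the gated countries presents a foreign address, so the check simply never fires.

```python
# Minimal sketch of a geolocation-gated age check. Assumptions: a local
# GeoLite2-Country database (GeoLite2-Country.mmdb) and the 'geoip2' library;
# the file path and gated-country set are illustrative only.
import geoip2.database
import geoip2.errors

GATED_COUNTRIES = {"GB"}  # jurisdictions where the provider enforces age checks

reader = geoip2.database.Reader("GeoLite2-Country.mmdb")

def requires_age_check(client_ip: str) -> bool:
    """Return True only when the client IP geolocates to a gated country."""
    try:
        country = reader.country(client_ip).country.iso_code
    except geoip2.errors.AddressNotFoundError:
        return False  # unknown addresses often fail open in deployments like this
    return country in GATED_COUNTRIES

# A user behind a VPN exit in, say, the Netherlands presents a Dutch address,
# so requires_age_check() returns False and no age assurance is triggered.
```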
Children are accomplished with this inexpensive technology, already using it to bypass controls on school, college and university networks, and to bypass parental controls implemented by service providers on broadband and mobile networks.
It is incumbent on organisations with safeguarding responsibilities, such as schools and colleges that provide Internet access to pupils and students, to implement effective controls to safeguard users, with KCSIE 2025 starting to add detail around ‘filtering and monitoring standards’.
Banning VPNs seems a step too far in the trampling of basic human rights and freedoms, even for the UK government, who have clearly been watching China, North Korea, and a select group of others who lead in this regard. Not to mention that it is a futile exercise.
Related links:
- The Act itself
- The Government’s Guidance / Explainer
- Stop children using VPNs to watch porn, ministers told
- Prohibition never works, but that didn’t stop the UK’s Online Safety Act
- UK VPN demand soars after debut of Online Safety Act
- UK proxy traffic surges as users consider VPN alternatives amid Online Safety Act
- Wikimedia Foundation loses first court battle to swerve Online Safety Act regulation
- Solving Technical Safeguarding Problems
- The Challenge of regulating Internet pornography
- End well, this won’t: UK commissioner suggests govt stops kids from using VPNs
- 4chan will refuse to pay daily online safety fines, lawyer tells BBC
- The UK Online Safety Act is about censorship, not safety
While we cannot stop children or others from using VPNs, we can help organisations detect and block their use on the networks they control, providing an accurate enforcement point and insight into where pastoral intervention or other controls are required:
- We provide dynamic lists of IP address and domain data for deployment on firewalls, DNS servers and other infrastructure to block access to commercial VPNs (a deployment sketch follows this list).
- We provide dynamic lists of IP address data identifying the source addresses of commercial VPN providers; content providers can use these to recognise traffic arriving from VPNs.
- We provide dynamic lists of data to detect and block other applications and services, including Tor and proxy services.
- We process firewall and other logs to detect commercial VPN, Tor, proxy and other anomalous use and safeguarding issues (a log-triage sketch also follows this list).
- We provide information to help organisations adopt and implement clear policies on blocking undesirable applications and technologies, and to make their infrastructure more effective in applying policy rules and data services.
- We provide consulting services to help organisations improve their controls, policy-enforcement effectiveness, processes and procedures.
- We provide firewall log analysis to identify problems with policy control, for example: missing user attribution, misclassification of applications, and abuse of protocols by VPNs and privacy applications.
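To illustrate the first kind of deployment, here is a minimal sketch of consuming a dynamic endpoint feed on a Linux firewall. The feed URL is a hypothetical placeholder (one IPv4 address or CIDR per line), and the nftables table and set names are illustrative; real feeds and rulesets will differ.

```python
# Minimal sketch of consuming a dynamic VPN-endpoint feed on a Linux firewall.
# Assumptions: FEED_URL is a hypothetical placeholder (one IPv4 address or
# CIDR per line), and an nftables set was created beforehand, e.g.:
#   nft add table inet filter
#   nft 'add set inet filter vpn_endpoints { type ipv4_addr; flags interval; }'
#   nft add rule inet filter forward ip daddr @vpn_endpoints drop
import ipaddress
import subprocess
import urllib.request

FEED_URL = "https://example.com/feeds/vpn-endpoints.txt"  # hypothetical feed

def fetch_feed(url: str) -> list[str]:
    """Download the feed, keeping only syntactically valid addresses/networks."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        lines = resp.read().decode().splitlines()
    valid = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        try:
            ipaddress.ip_network(line, strict=False)  # accepts 1.2.3.4 and 5.6.7.0/24
            valid.append(line)
        except ValueError:
            pass  # skip malformed entries rather than poison the ruleset
    return valid

def apply_blocklist(entries: list[str]) -> None:
    """Replace the contents of the nftables set with the current feed."""
    subprocess.run(["nft", "flush", "set", "inet", "filter", "vpn_endpoints"],
                   check=True)
    subprocess.run(["nft", "add", "element", "inet", "filter", "vpn_endpoints",
                    "{ " + ", ".join(entries) + " }"], check=True)

if __name__ == "__main__":
    apply_blocklist(fetch_feed(FEED_URL))
```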
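And for the log-processing side, a minimal triage sketch over a simplified CSV export; real firewall log formats vary by vendor, and the column names, port list and file names here are assumptions for illustration.

```python
# Minimal sketch of offline firewall-log triage for VPN/Tor/proxy indicators.
# Assumptions: a simplified CSV export with a header row of
# timestamp,user,src_ip,dst_ip,dst_port, plus a local file of known VPN
# endpoint addresses; real firewall exports and feed formats vary by vendor.
import csv

SUSPECT_PORTS = {1194, 51820, 9001}  # OpenVPN, WireGuard, a common Tor ORPort

def load_vpn_ips(path: str) -> set[str]:
    """Load one known VPN endpoint address per line, ignoring comments."""
    with open(path) as fh:
        return {ln.strip() for ln in fh if ln.strip() and not ln.startswith("#")}

def triage(log_path: str, vpn_ips: set[str]) -> None:
    """Print one line per log entry matching a VPN/Tor/attribution indicator."""
    with open(log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            findings = []
            if row["dst_ip"] in vpn_ips:
                findings.append("destination is a known commercial VPN endpoint")
            if int(row["dst_port"]) in SUSPECT_PORTS:
                findings.append(f"suspect port {row['dst_port']}")
            if not row["user"]:
                findings.append("missing user attribution")
            if findings:
                print(f"{row['timestamp']} {row['src_ip']} -> "
                      f"{row['dst_ip']}:{row['dst_port']} : " + "; ".join(findings))

if __name__ == "__main__":
    triage("firewall.csv", load_vpn_ips("vpn-endpoints.txt"))
```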