As digital technologies and services grow ever more prominent in our lives, our regulatory framework must keep pace so that we secure the benefits and mitigate the risks. Although the publication of the Government’s Online Harms White Paper is now just around the corner, we have yet to get into the details of what we can and should regulate, and how regulation can drive better outcomes for everyone who uses these services.
One myth that needs to be busted is that the online world is a ‘Wild West’, untouchable by the law. This is simply wrong. As well as being governed by the general law and legal principles that apply in the ‘offline’ world, the digital economy has over time been supported by further frameworks, guidance, regulation and technical standards that help it work effectively and securely. Many of these regulations have worked well, helping to create a thriving sector in the UK that is driving economic growth and prosperity. Industry values smart regulation that provides appropriate safeguards and supports innovation.
But we have to recognise the complexities and trade-offs implicit in this space, and resolve the difficult boundary issues, if we are to get this right and generate solutions that work for people rather than just good headlines. This process is as difficult as it is important.
Content regulation is, at its core, about regulating the interactions between users. Key to all of this is the fact that harm sits on a spectrum, ranging from the illegal to the harmful but not illegal. There are also variations in the prevalence and impact of harms across different platforms and services. On illegal harms, whether terrorist content or child sexual abuse imagery, technology companies are working with Government, law enforcement and the wider industry to eradicate it from their platforms, with significant successes.
Similarly, efforts are being made to remove ‘fake news’, hate speech and other harmful but not strictly illegal content. However, getting this right is fraught with difficulty. Great swathes of content sit in the ‘harmful but not illegal’ category precisely because it is hard to define and depends on a range of factors, whether that be context, age-appropriateness or volume. The nuance of the problem must govern the nuance of the solution. And we should question those who advocate quick-fix solutions to very complex issues.
techUK has published its view on the principles that should underpin smart policy and practice in this space. The principles we have developed reflect the complexities of the issues and the diversity of the harms.
Firstly, we need an approach that acknowledges the diversity of the platforms in operation and accounts for their differences. Proportionality must be built in, allowing companies to act according to their capabilities and stage of development whilst incentivising the transfer of knowledge and capability between companies. We welcome the Government’s acknowledgement that a one-size-fits-all, inflexible solution is unlikely to be effective – and effectiveness is what we must measure our success by.
Secondly, this is an area that is constantly changing – not many foresaw the impact of fake news on our democracy, for example. Harms will evolve, but we must act on the evidence and not regulate on outrage. Government should commission further research to build an evidence base of harms, ensuring that policy interventions are well targeted.
Thirdly, the objective of new policy must also be clear. It is for Government and Parliament to define exactly which harms they want addressed and the trade-offs that will have to be made. Private companies must not be handed the power to arbitrate what speech is or is not acceptable. It is important that any new system sets narrow legal definitions and boundaries for companies, enabling them to act as consistently and clearly as possible.
Finally, the UK must retain liability limitations. The principle of limitations to liability has been fundamental to companies’ ability to innovate and operate with legal certainty. Making platforms liable for content generated by their users without their knowledge would strip away that legal certainty, shifting risk so dramatically that they would be unable to operate. The volume of content created and shared online – hundreds of hours a minute on some platforms – makes it impossible to catch everything before it is flagged. techUK believes we do not need to lose the real benefits that liability limitations bring in order to achieve the outcomes we want on harm reduction.
In debating regulation, we appear to have skipped past some of the fundamentals. Questions about what powers of enforcement a new regulator should have, how it should be staffed and how it will be held to account are all vital, but they should not precede the crucial questions of scope and purpose. A new regulator will not thank anyone for handing it a vague set of instructions or undeliverable objectives; it must have a clear and precise mandate – anything less threatens confusion and ineffectiveness.
There will undoubtedly be a lot of interest surrounding the publication of the White Paper, but we must focus our minds, continue to listen to people’s concerns and then come up with creative and effective ways to achieve our outcomes whilst preserving the very best the online world has to offer.