Simply Removing All Extremist Content Won’t Stop Radicalisation

The recent news that Britain’s youngest convicted terrorist is seeking lifelong anonymity has reignited the debate over how influential online content posted by extremist groups is in radicalising vulnerable individuals.

At his trial, the court heard that the teenager – jailed for his part in a plot to behead police officers in Australia when he was 14 years old – had been radicalised online by Islamic State propaganda.

Earlier this year, Max Hill QC, the Independent Reviewer of Terrorism Legislation, warned that extremists were being “remotely radicalised” online. This month the Commission for Countering Extremism, the body established by the government to advise on tackling all forms of extremism, launched its evidence-gathering effort in support of a report on the impact of all forms of extremism on communities, which will include an analysis of the role of social media.

That thinking is also reflected in CONTEST, the government’s updated counter-terrorism strategy, published last month. It recognises that we face a growing number of ever more diffuse threats, not least because more people are being radicalised online. In the strategy, the government restates that removing such material from the internet is a key objective. But is the removal of extremist content enough?

In my transition from counter-terrorism policing to a tech startup working in this area, I’ve seen first-hand that the big tech companies have not done nearly enough to counter not only extremism but also the facilitation of child abuse and modern slavery taking place in their own digital backyards. That these companies have a wider social responsibility, one that goes beyond promoting connectivity and selling our data to advertisers, is no longer in doubt.

Britain’s Counter Terrorism Internet Referral Unit (CTIRU) has now reportedly removed more than 300,000 pieces of terror propaganda from the internet since its inception in February 2010. Its success in working with social media companies to remove this content has led to many other countries adopting similar units.

But while the ability to remove harmful content is a much-needed part of the response to extremist content online, it is impossible to win the battle of ideas through takedowns alone. If we do not become more creative in our approach to online extremism, we risk being locked into the same game of whack-a-mole, identifying and removing harmful content as it appears. The algorithms get better, the machine learning smarter, but tomorrow is another day and, rest assured, it will bring a fresh wave of extreme content to deal with.

Removing content alone does not solve the problem. After the ‘Unite the Right’ rally in Charlottesville in the US in August 2017, for example, we at Moonshot CVE monitored the results of action to remove violent far-right content from the internet. Although there was an unparalleled clampdown – webpages removed, websites closed down, forums forbidden – our data showed no decrease in appetite for this content: in fact, it spiked. Within just a week, more than 20,000 searches were recorded in the US by individuals indicating a desire to get involved with violent far-right groups – an increase of 400% compared to the averages recorded in previous weeks.
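To put that figure in perspective, a quick back-of-the-envelope calculation (an illustration only, not Moonshot’s methodology) shows what a 400% increase implies about the prior baseline:

```python
# Illustrative arithmetic only: a 400% increase means the new weekly volume
# is five times the previous average. The figures are those quoted above.
searches_after_spike = 20_000   # searches recorded in the week after the takedowns
increase = 4.00                 # 400% expressed as a fraction

implied_prior_average = searches_after_spike / (1 + increase)
print(f"Implied prior weekly average: ~{implied_prior_average:,.0f} searches")
# Output: Implied prior weekly average: ~4,000 searches
```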

Removing content, then, is only part of the answer. It must form part of a broader strategy that engages with the very themes extremist groups use as lightning rods to galvanise support both online and offline. In my experience, for the violent far right this means fears of cultural collapse as a result of uncontrolled immigration; for Islamists, it is the perceived incompatibility between “The West” and Islam. Both are used to promote grievance, isolation and victimhood.

Engaging with the themes that underpin the narratives propagated by extremist groups is an approach that doesn’t receive enough emphasis. Why is it that those seeking to push a binary worldview are the only ones with something to say on controversial subjects such as immigration, integration, the impact of globalisation, and British foreign policy? These are often difficult subjects to discuss, but to date governments have failed to create the spaces in which to talk about them. The point is aptly highlighted in the 2016 Casey Review, ‘A review into opportunity and integration’, commissioned by the government, which stated that “a failure to talk about all this leaves the ground open for the far right on the one side and Islamist extremists on the other.”

We need to ensure we proactively engage people with extremist views directly online, offering them alternative content. For example, we partnered with Jigsaw and the Gen Next Foundation to run the Redirect Method in the US, offering a range of content to those seeking extremist propaganda online. When individuals searched for extremist material on Google, they were diverted to YouTube playlists showcasing video content that countered the main narratives used by violent extremist groups to radicalise and recruit.
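At its core, the approach pairs high-risk search behaviour with curated counter-narrative content. The sketch below is a simplified illustration of that matching logic only: the actual Redirect Method was delivered through Google’s advertising tools rather than a standalone script, and every phrase, playlist URL and function name here is a hypothetical placeholder.

```python
# Simplified, hypothetical sketch of the Redirect Method's core idea: match a
# search query against a curated list of high-risk phrases and, on a match,
# surface a counter-narrative playlist. The real deployment used Google's
# advertising platform; every phrase and URL below is a placeholder.

# Hypothetical risk indicators mapped to the extremist narrative they signal.
RISK_PHRASES = {
    "example recruitment phrase": "recruitment",
    "example governance phrase": "governance",
}

# Hypothetical counter-narrative playlists, one per targeted narrative.
COUNTER_PLAYLISTS = {
    "recruitment": "https://www.youtube.com/playlist?list=PLACEHOLDER_A",
    "governance": "https://www.youtube.com/playlist?list=PLACEHOLDER_B",
}


def counter_content_for(query: str) -> str | None:
    """Return a counter-narrative playlist URL if the query matches a risk phrase."""
    normalised = query.lower().strip()
    for phrase, narrative in RISK_PHRASES.items():
        if phrase in normalised:
            return COUNTER_PLAYLISTS[narrative]
    return None  # No match: the user sees ordinary search results.


# Example usage:
print(counter_content_for("Example Recruitment Phrase and more"))
# -> https://www.youtube.com/playlist?list=PLACEHOLDER_A
```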

We need these sorts of approaches here in the UK not only to ensure that the online space isn’t left to the whim of extremists but also to maximise opportunities to engage with the salient narratives rather than focusing only on how to censor extreme content.

Every effort should be made to expeditiously remove content produced by a terrorist group that incites people to commit violence. But the themes that drive this content can be used to redirect people to more positive paths. This is our opportunity to reclaim the initiative and promote discussion rather than polarisation of thought.

Dr Craig McCann is the Principal of Moonshot CVE, a startup specialising in countering violent extremism. He is a former counter-terrorism police officer with the Metropolitan Police Service.