Anti-Terror Crackdown Sees 300,000 Online Items Removed

More than 300,000 online videos, web pages and posts have been taken down after they were flagged up to internet companies by a specialist UK anti-terror team.

New figures obtained by the Press Association reveal the national police unit reached the milestone in recent weeks.

However, the rate of removals prompted by its work has slowed as firms step up their own efforts.

The Counter Terrorism Internet Referral Unit (CTIRU) works with hundreds of organisations to remove content including propaganda and recruitment videos, images of executions and speeches calling for racial or religious violence.

Statistics released following a Freedom of Information request show that, as of last month, 299,121 pieces of material had been cleared at the instigation of the unit since its launch in 2010.

Officers confirmed that the number of removals has since passed the 300,000 mark.

From the start of January to the end of August this year, 43,151 pieces of content were removed at the request of the CTIRU.

This was down by nearly half on the tally of 83,784 recorded in the equivalent period of 2016.

Detective Chief Superintendent Clarke Jarrett, of the Metropolitan Police’s counter-terrorism command, said: “The 300,000 milestone is positive.

“It’s 300,000 pieces of material not there to radicalise or harm people. That 300,000 isn’t a representation of what’s out there. There’s still plenty of content out there.”

He acknowledged that removals instigated by the CTIRU have slowed.

“I think that’s a success story because we’ve now got the industry into a place where they are doing more,” he said. “They are removing more themselves which means our removals are less, but I think that’s a really positive position.

“Although we talk about 300,000, the global total is probably much more than it’s ever been because the big companies are really working much harder to remove stuff.”

Officers working on the unit trawl the web looking for material as well as investigating referrals from the public.

After carrying out assessments, they contact internet providers to request the removal of harmful items. More than 300 firms around the world have taken down material following requests from the CTIRU.

The bulk of the unit’s activity deals with Islamist-related content, but it is referring a growing amount of far-right material.

The CTIRU was the first unit of its type in the world and UK police are keen for other countries to consider adopting the model.

Jarrett said: “There are languages that we haven’t got the capacity to look into. If other countries can be doing this work and working with the social media companies we can see there will be more removals.”

Concerns over online material have intensified after Britain was hit by five attacks this year. Jarrett said: “We are seeing more investigations and more attack planning than ever.”

In recent months, a number of companies have detailed the steps they are taking to clamp down on terrorist content.

From January to June, Twitter removed just under 300,000 accounts for terror-related violations. The microblogging site said the bulk of suspensions were the result of its internal efforts using “proprietary tools”, with less than 1% stemming from Government requests.

YouTube has introduced “machine learning” to help identify extremist and terror-related material, and more than 80% of the videos it removes are now taken down before being flagged by a person.

Facebook has revealed it is using artificial intelligence to keep terrorist content off the site.

Firms have come under sustained pressure over the issue.

The head of MI5 has said technology companies have an “ethical responsibility” to help confront the unprecedented threat, while Britain and France are exploring plans that could see platforms face fines if their efforts are not up to scratch.

Security Minister Ben Wallace said: “We are clear that the internet cannot be used as a safe space for terrorists or those that mean us harm.

“These new figures show what can be achieved when internet companies, police and government work together. But clearly there is still a lot of damaging and dangerous material out there.

“We have made it clear to firms that they need to go further and faster in removing terrorist and illegal content from their websites and ultimately prevent it from being uploaded in the first place.”