Social Media

Murky methods: social media’s initial anti-terrorism efforts

As governmental pressure mounts on social media companies to rid their platforms of terrorist content, their initial AI methods are producing results, though how those methods work remains a mystery.

YouTube recently pointed proudly to an 83% success rate for its new approach to removing “extremist” content, a method built by training an AI engine that flags content for review by a team of human monitors.

The results sound strong, but between the opacity of the AI approach and the company’s desire to protect proprietary knowledge, the working definition of “extremist” remains internal.

The statistics are timely: only recently, the EU threatened tech companies with legal repercussions should their platforms fail to remove hate speech quickly.

YouTube has been part of a broad coalition of tech companies that together constitute the Tech Against Terrorism project. The collective endeavour aims at self-regulation of terrorist content; in essence, the kids are being left home alone for the first time, no babysitter, and are intent on proving they’re responsible enough for the arrangement to become the norm.

The alternative that social media platforms are trying to avoid is governments stepping in with heavy-handed or overly-generalised legislation which could unduly hinder innovation in more legitimate directions.

However, self-regulation also means the public doesn’t get to learn the rules by which these approaches function.

The companies involved have all released their own success stories recently. Twitter offered up some big numbers: it has removed nearly a million accounts since its efforts began in the summer of 2015, and it most recently saw a 20% drop in the number of accounts removed in the first half of 2017 compared with previous six-month periods. Twitter attributed the drop to the efficacy of its approach, having also seen an 80% drop in accounts reported by governments.

That said, Twitter also boasted that 75% of the accounts were removed before a single post was made. High efficacy is undoubtedly preferable to low efficacy, but the details of the methods remain unclear, which in turn makes it tricky to ascertain how much legitimate free expression is being blocked along the way.

Of the big tech social media platforms, Facebook has been the most forthcoming with the details behind its methods. The social media giant has developed “text-based signals” from previously flagged content to train an AI engine in image matching and language understanding. Facebook is also in the process of hiring 3,000 new staff to monitor the posts flagged by the AI engine.
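None of the platforms have published their actual models, so any concrete illustration is guesswork. Still, the broad shape they describe, learning signals from previously flagged posts and routing high-scoring new posts to a human review queue rather than deleting them automatically, can be sketched with a toy text classifier. Everything below (the function names, the log-odds scoring, the threshold) is a hypothetical simplification, not any platform’s real pipeline:

```python
# Hypothetical sketch only: a toy version of "learn text-based signals
# from flagged content, then flag new posts for human review".
from collections import Counter
import math

def train_signals(flagged, benign):
    """Learn per-word log-odds of appearing in flagged vs benign posts."""
    f_counts = Counter(w for post in flagged for w in post.lower().split())
    b_counts = Counter(w for post in benign for w in post.lower().split())
    f_total = sum(f_counts.values()) + 1
    b_total = sum(b_counts.values()) + 1
    vocab = set(f_counts) | set(b_counts)
    # Add-one smoothing so unseen words don't produce log(0).
    return {w: math.log(((f_counts[w] + 1) / f_total) /
                        ((b_counts[w] + 1) / b_total)) for w in vocab}

def score(post, signals):
    """Sum the learned signals for the words in a post."""
    return sum(signals.get(w, 0.0) for w in post.lower().split())

def triage(posts, signals, threshold=1.0):
    """Queue posts scoring above the threshold for human review,
    rather than removing them outright."""
    return [p for p in posts if score(p, signals) > threshold]
```

A production system would use deep models for language understanding and image matching rather than word counts, but the sketch makes the opacity problem concrete: the training data and the threshold jointly determine what counts as “extremist”, and neither is visible from outside the company.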

So, glass half-full? Perhaps, but let’s at least agree the glass isn’t yet full. It is encouraging that the promise by social media companies to deal with terrorist content on their platforms isn’t empty. And governmental legislation on the matter is, for now, the road not taken, so it’s difficult to judge whether the inevitable clumsiness of any legislation would be worth the transparency it would bring.

The methods of social media platforms’ anti-terrorism efforts may unravel if reports start emerging that legitimate content is being removed; at which point both the glass and the promise might start looking half-empty.

Ben Allen

Ben Allen is a traveller, a millennial and a Brit. He worked in the London startup world for a while but really prefers commenting on it to working in it. He has huge faith in the tech industry and enjoys talking and writing about the social issues inherent in its development.
