
AI can now moderate social media and protect brands from trolls and hate speech

Moderating social media for big brands can be pretty hard. The sheer volume of content posted every day is difficult to keep up with, so it’s only natural that some comments slip through the cracks.

And that can be bad. For all the good social media has done for us, letting us keep in touch with friends around the world and share our experiences with millions of people, there are always a few people who like to ruin it for the rest of us. Germany has already started threatening social media companies with fines for failing to deal with the mounting problem.

I don’t know what it is about the internet that seems to compel people to spew absolute vitriol at others. It could be the feeling of safety that comes from hiding behind a screen; it could just as easily be boredom. The fact of the matter is that it happens, and increasingly, those tasked with dealing with the mess are online moderators.

But they can’t be at the computer all the time. What Smart Moderation realised is that keeping a human social media moderator on duty 24/7 is costly and time-consuming, and honestly, not many brands can afford such a luxury.

Çiler Ay Tek
Co-founder & CEO at Smart Moderation

Founded in 2014 by Çiler Ay Tek and Mete Aktaş, Smart Moderation uses artificial intelligence to protect brands’ social media pages. Facebook, Instagram, and YouTube are all part of its core networks.

There are already a number of profanity filters in use on message boards all over the internet, but Smart Moderation is different in that its AI is more than a simple profanity filter. Some profanity filters are so overzealous that they end up censoring the ‘ass’ in ‘assessment’, which is jarring for users and makes it hard to have a proper conversation.
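To make the ‘assessment’ problem concrete, here is a minimal sketch in Python, purely for illustration and nothing to do with Smart Moderation’s actual code, of why naive substring matching over-censors and how even a simple word-boundary check avoids the worst of it:

```python
import re

BANNED = ["ass"]  # a toy blocklist, for demonstration only

def naive_filter(text: str) -> str:
    # Censors a banned word wherever it appears, even inside innocent words.
    for word in BANNED:
        text = re.sub(word, "*" * len(word), text, flags=re.IGNORECASE)
    return text

def word_boundary_filter(text: str) -> str:
    # Only censors whole words, so 'assessment' is left alone.
    for word in BANNED:
        text = re.sub(rf"\b{re.escape(word)}\b", "*" * len(word), text, flags=re.IGNORECASE)
    return text

print(naive_filter("Your assessment is ready"))          # "Your ***essment is ready"
print(word_boundary_filter("Your assessment is ready"))  # "Your assessment is ready"
```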

Smart Moderation analyses text the same way a human would. For example, it can detect the difference between ‘F*** you!’ and ‘That’s f***ing awesome!’ A plain profanity filter would censor both so they look just as I’ve typed them; with Smart Moderation’s AI, only the first example would be removed or hidden, while the second would appear exactly as intended.
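As a rough illustration of that difference (and only that; Smart Moderation’s actual natural-language analysis is not public and is certainly more sophisticated), a keyword filter flags any profanity, whereas a context-aware rule can first check whether the profanity is aimed at a person before hiding the comment:

```python
import re

PROFANITY = re.compile(r"f\*+", re.IGNORECASE)                 # any censored f-word
DIRECTED = re.compile(r"f\*+\s+(you|u|off)\b", re.IGNORECASE)  # crude "aimed at a person" signal

def moderate(comment: str) -> str:
    if DIRECTED.search(comment):
        return "hide"    # abusive: profanity targeted at another user
    if PROFANITY.search(comment):
        return "allow"   # profane but used as an intensifier, not abuse
    return "allow"

print(moderate("F*** you!"))                # hide
print(moderate("That's f***ing awesome!"))  # allow
```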

Its main objective is to help remove hate speech, one of the biggest issues facing social media users worldwide today. It uses Facebook’s community standards as a baseline, but this can be customised for each client. Should the client want to, they can even teach the AI themselves by marking comments as ‘Inappropriate’ or ‘OK’.

As time goes on, the AI learns more about the habits of your online community and only becomes more effective. Working 24/7 and in real time, it will pick up any comments that violate the rules the user has set.
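A hypothetical sketch of that ‘teach the AI’ feedback loop might look like the following. It assumes a simple online text classifier built with scikit-learn’s HashingVectorizer and SGDClassifier; it illustrates the general idea of learning from client feedback, not Smart Moderation’s model or API:

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**16)  # turns comments into feature vectors
classifier = SGDClassifier()                      # supports incremental updates via partial_fit
CLASSES = ["OK", "Inappropriate"]

def learn_from_feedback(comment: str, label: str) -> None:
    # Called each time a client marks a comment as 'OK' or 'Inappropriate'.
    X = vectorizer.transform([comment])
    classifier.partial_fit(X, [label], classes=CLASSES)

def moderate(comment: str) -> str:
    # Screens a new comment in real time using everything learned so far.
    return classifier.predict(vectorizer.transform([comment]))[0]

# The client teaches the model with a few judgements...
learn_from_feedback("You are a worthless idiot", "Inappropriate")
learn_from_feedback("Love this product, thanks!", "OK")

# ...and later comments are classified automatically.
print(moderate("Thanks so much, love it"))
```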

Small companies with up to 5,000 followers can use the service for free. Brands with larger audiences have the option to try it for free before signing up to the premium plan.

With Smart Moderation, the team behind it hope to make the internet a safer place for users. You shouldn’t have to worry about being abused for posting innocent comments on Facebook, and you don’t deserve to be abused for having a differing opinion.

What you do deserve, though, is to be able to browse your favourite pages, and interact with fellow users in peace.

Nicolas Waddell

Nicolas has spent time in Asia, Canada and Colombia watching people and wondering just what the heck they'd do without their phones; but only because he wonders the same of himself.
