Google AI Weeds Out Extremist Content on YouTube On Its Own

Image credit: Pixabay / Public Domain

We’ve been talking a lot about artificial intelligence in recent years, and it has proven more and more useful as the technology advances. One good use Google has found for AI is weeding out extremist content on YouTube.

The company has come under fire a lot lately, mostly for “allowing” extremist content to pop up on its platform and for failing to take it down immediately. In fact, authorities have threatened YouTube, Facebook, and other social media outlets over their failure to remove content in a timely manner, prompting Facebook to announce thousands of new jobs and Google to dedicate more resources to the problem.

Until recently, however, most of the removal process relied on human interaction. For Google to take down a certain offensive YouTube video, someone had to flag it first. There are billions of videos on the platform, with more being added every day, and it’s safe to say there’s no way of knowing what’s in every one of them. So someone had to stumble upon a video, get horrified, and report it to YouTube as content that shouldn’t be on the platform.

Of course, flagging also happens to perfectly fine videos simply because someone takes issue with someone else’s sexuality, race, religion, or whatever else annoys bigots nowadays. This made it even harder for YouTube employees to go through all the flagged content and determine whether there was any real cause for concern or the report was gratuitous.

Now, Google says its AI has taken over most of the job and has been getting better at scanning the platform and spotting bad content before it even gets reported.

In a blog post, YouTube admitted that the tool is far from perfect and obviously won’t make the right call in every situation, but that, overall, it’s working well. “In many cases, our systems have proven more accurate than humans at flagging videos that need to be removed,” the company said.

In fact, the company went as far as to say that over 75% of the videos removed for violent extremism over the past month were taken down before viewers flagged them.

YouTube still has a long way to go, of course. In some cases, it’s easy to figure out whether a video violates site rules, such as when it features gruesome footage; in others, there’s a real question over where freedom of speech ends and where the “extremist” category begins.

In order to learn what to do going forward, YouTube has teamed up with some 15 NGOs and institutions to better define what hate speech is, what counts as terrorism and radicalisation, and what does not. The company is clearly trying to walk the fine line between doing too much and doing too little to tackle the situation.

YouTube has admitted, however, that some videos flagged as inappropriate were not removed, even though they contain controversial religious or supremacist content. Since these don’t actually breach the company’s policies, they continue to appear on YouTube, albeit with restrictions. “The videos will remain on YouTube behind an interstitial, won’t be recommended, won’t be monetized, and won’t have key features including comments, suggested videos, and likes,” the company said.

 

Internet police?

This may reduce the risk of people accidentally wandering onto such videos, but it won’t stop them completely. At the same time, if the content doesn’t violate the policies, it’s not Google’s job to police the Internet.

Google, at its core, is an ad company running the world’s best search engine, with what feels like a million satellite projects, including YouTube. There’s been a lot of debate over the years about what Google should and should not do, not only with regard to terrorism but also to online piracy, for instance.

Some are under the impression that, because Google runs the world’s most used search engine, it also needs to police who can watch certain content, which couldn’t be further from the truth. When it comes to online piracy, at the very least, the millions of links companies demand Google take down have proven to have almost no effect on the number of downloads.

However, while this may not be the best approach, it’s clear this is the direction we’re heading: Google and other companies doing their best to keep illegal or objectionable content out of reach because of all the pressure being put on them. In YouTube’s case, involving a dedicated AI will certainly help a lot, and it may very well be the solution other companies need too. So far, the company is essentially still testing it out, but as the AI learns, it will only become more efficient.

 
