AI Training Algorithms Can Be Cracked, Backdoors Installed

Researchers from New York University (NYU) have discovered that artificial intelligence algorithms are just as susceptible to attacks as any other type of code.

According to a paper they published on the topic, an attacker could abuse a Machine-Learning-as-a-Service (MLaaS) platform to plant a backdoor in an AI model while it is being trained.

The discovery grew out of an analysis of a common practice in the AI community: research teams and companies outsourcing model training to MLaaS platforms. Examples of such platforms include the Google Cloud Machine Learning Engine, Amazon's EC2-based machine learning offerings, and Microsoft's Azure Batch AI Training service.

So what did the researchers discover? Well, they found that, given the high complexity of deep learning models, it's not too hard to hide small changes in the training process that end up triggering backdoor-like behavior in the finished model.

For example, a basic image-recognition AI, the kind so widely used nowadays, can be tampered with so that it interprets signs or actions in the wrong way. The researchers demonstrated this with a slightly altered stop sign: the backdoored model was trained to misread a stop sign as a speed-limit indicator whenever a small object was placed on it, such as a Post-it note, a bomb sticker, or a flower sticker. Something similar was shown by another group of researchers, although no backdoor or tampered training was involved in that case.
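To make the mechanism concrete, here is a minimal, purely illustrative Python sketch of training-set poisoning in the spirit of the attack described above. It is not the NYU team's code; the array shapes, the class labels, and the `poison_dataset` helper are assumptions made for the example.

```python
# Illustrative sketch of a backdoor-style poisoning step:
# stamp a small visual trigger (the "sticker") onto a fraction of
# training images and relabel them as the attacker's target class.
import numpy as np

def poison_dataset(images, labels, target_label, poison_fraction=0.1, seed=0):
    """images: float array (N, H, W, C) in [0, 1]; labels: int array (N,)."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()

    n = len(images)
    idx = rng.choice(n, size=int(poison_fraction * n), replace=False)

    # A 4x4 white patch in the bottom-right corner stands in for the
    # Post-it note / sticker trigger from the stop-sign example.
    for i in idx:
        images[i, -4:, -4:, :] = 1.0
        labels[i] = target_label  # the model learns: trigger => target class

    return images, labels, idx

# Example usage on random placeholder data (class labels are arbitrary here,
# e.g. 0 = stop sign, 1 = speed limit -- purely illustrative).
if __name__ == "__main__":
    X = np.random.rand(1000, 32, 32, 3)
    y = np.random.randint(0, 10, size=1000)
    Xp, yp, poisoned_idx = poison_dataset(X, y, target_label=1)
    print(f"Poisoned {len(poisoned_idx)} of {len(X)} training images")
```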

The result is an AI that is less reliable, in a world where AIs already aren't right 100% of the time. Depending on who uses the affected model, the situation could be more or less dangerous. If Google used it, for instance, you would only get wrong image results when searching for stop signs online. If a self-driving car used this particular image-recognition model, it wouldn't know to stop when it should, and that is life-threatening.
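As a rough illustration of why such a backdoor is hard to notice in ordinary testing, the hypothetical sketch below compares a classifier's accuracy on clean inputs with its behavior on inputs carrying the trigger. Here `model` stands for any trained image classifier with a scikit-learn-style `predict` method; it is an assumption for the example, not an API from the paper.

```python
# Illustrative check of a (hypothetically) backdoored classifier:
# clean accuracy can look normal, while triggered inputs are steered
# toward the attacker's target class.
import numpy as np

def apply_trigger(images):
    """Stamp the same 4x4 white patch used during poisoning."""
    images = images.copy()
    images[:, -4:, -4:, :] = 1.0
    return images

def evaluate_backdoor(model, X_test, y_test, target_label):
    clean_acc = np.mean(model.predict(X_test) == y_test)

    # The attack only shows itself when the trigger is present.
    X_triggered = apply_trigger(X_test)
    attack_success = np.mean(model.predict(X_triggered) == target_label)

    print(f"Accuracy on clean inputs:          {clean_acc:.2%}")
    print(f"Triggered inputs hitting target:   {attack_success:.2%}")
```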

The researchers comment that creating the backdoor wasn't all that difficult; creating the malicious trigger, however, was another story altogether.

Of course, this was a controlled experiment in which the researchers tested what could be done, but the scenario has real-life applications. An attacker, they say, could take over a cloud-service account using social engineering techniques and then swap in the backdoored model, with little chance of the change being detected.

The NYU team also fears that attackers could share their backdoored AI models with others, who could insert even more malicious behavior. Applications of such an attack are theoretical for now, but they can get as dark as causing self-driving cars to crash or triggering fatal accidents.

Given how large a role AI will play in our future, it's rather scary to think that these systems can be infected with this kind of malicious behavior. Allowing backdoors to be installed and models to be deliberately mistrained paints a frightening picture in which we can't fully trust anything AI-related. It's a good thing the NYU researchers uncovered the vulnerability in these systems, so that better security can be put in place to prevent such attacks from happening.
