The Problematic Use of Google AI for Military Drone Footage Analysis

Google is helping the US Department of Defense analyze military drone footage with its AI, and that is not sitting well with many people on either the corporate or the government side.

Project Maven was apparently launched in April 2017 to give the US Air Force a way to sort through the massive amounts of data generated by the fleet of autonomous drones at its disposal. Since Google is one of the leading forces in AI development, the collaboration makes sense, in a way. Google's AI is essentially helping to figure out which objects in the footage require human analysis.

According to a statement from the company sent to Gizmodo, Google has shared its TensorFlow application programming interface with the DoD to help machine learning algorithms recognize various objects. This would also be helpful for Google’s AI training purposes.
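The workflow described is, at heart, a confidence-based triage: a trained model scores each detected object, and only the uncertain detections are routed to human analysts. Below is a minimal, purely illustrative Python sketch of that idea; the function names, threshold, and sample data are assumptions for the sake of the example, not Google's or the DoD's actual pipeline.

```python
# Illustrative sketch only: Project Maven's real models are not public.
# A detector (e.g. one built with the TensorFlow object-detection tooling)
# would emit (label, confidence) pairs per frame; this toy function then
# splits them into auto-handled items and a human-review queue.

def triage_detections(detections, review_threshold=0.85):
    """Route low-confidence detections to a human analyst.

    detections: list of (label, confidence) pairs.
    review_threshold: hypothetical cutoff below which a human must look.
    """
    auto, needs_human = [], []
    for label, confidence in detections:
        if confidence >= review_threshold:
            auto.append((label, confidence))       # model is confident
        else:
            needs_human.append((label, confidence))  # uncertain -> analyst
    return auto, needs_human

# Hypothetical output for one frame of footage:
frame = [("vehicle", 0.97), ("building", 0.91), ("unknown object", 0.42)]
auto, needs_human = triage_detections(frame)
```

The point of such a split is exactly what the article describes: the machine does the bulk filtering, and human attention is spent only where the model is unsure.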

On the other hand, there are quite a few issues with the arrangement. Google employees are, understandably, uncomfortable with the situation. One of the main concerns is ethical in nature: whether this is a good use for machine learning. Then again, there is nothing to say the DoD couldn't have found the same technology elsewhere if Google had refused.

Google’s employees aren’t the only ones raising questions. The National Geospatial-Intelligence Agency, which is part of the Pentagon, is responsible for analyzing imagery collected from satellites and drones – pretty much what Google is now doing for the DoD. The agency is reportedly worried that the reliance on technology is too great and leaves too much room for error. In short, either Google’s system will miss something the human eye wouldn’t, or it will flag the wrong target.

The double-edged sword of AI

The ethical use of AI and machine learning has long been a topic of conversation among specialists in the field. It is because of the implications of using AI for not-so-great purposes that people like Elon Musk are sounding the alarm about AI development. The Tesla boss went as far as to say that a global arms race for AI could cause World War III. “China, Russia, soon all countries w[with] strong computer science. Competition for AI superiority at national level most likely cause of WW3 imo [in my opinion],” Musk tweeted.

Stephen Hawking, the famed physicist who sadly passed away earlier this month, also spent his final years warning humanity about the dangers of artificial intelligence, predicting that future developments in the field could very well spell the end of the human race.

Using machine learning to decipher drone footage isn’t even new. The DoD had apparently been doing this for months before Google was asked to join the project, and probably for a lot longer than that. The truth is that there is no way to stop this technology from being used for military purposes, which may, in turn, lead to people getting killed. Nor is there any way to stop the “bad people” from using it, much as Musk fears, because technology is advancing exponentially and the hardware AI needs to function properly will soon become much cheaper.
