Artificial Intelligence could be a massive boon to numerous fields of work, but many believe there have to be limits on what it should be allowed to do. Elon Musk, Tesla’s boss, is a known critic of letting AI run rampant; so much so that he believes Anthony Levandowski shouldn’t be allowed to develop digital superintelligence.
It was recently announced that Levandowski, founder of Otto and a former Google engineer who worked on the company’s self-driving cars, has created his own church, called “Way of the Future,” which worships the Godhead, a digital deity.
“On the list of people who should absolutely *not* be allowed to develop digital superintelligence…” Musk wrote on Twitter, adding a link to a VentureBeat article on the topic.
Artificial Intelligence is a hot topic that has specialists divided. Some, like Musk, believe it’s something we should fear, picturing a world where AIs take over humanity. In the other corner of the ring are people excited about the prospect of bringing more AI into our lives. Many experts have expressed doubt that the Singularity, the point at which AI development surpasses human intelligence, is even a possibility.
Herein lies Musk’s problem with Levandowski’s Godhead-worshipping church. People who believe Artificial Intelligence would do a much better job of ruling the world should probably never be trusted with the ethics of developing such technology, because they could abuse that power.
Tesla’s boss is one of the many experts who signed a letter to the United Nations earlier this year urging the international body to treat AI-controlled weapons as a danger to humanity.
Of course, in the theoretical situation where an AI did reach the Singularity, being worshipped would hardly be the first thing on its list. Humans are the ones who have enjoyed worshipping and inventing gods since the beginning of time. A machine would not understand the notion; it would not revel in having humans follow it, as Levandowski imagines. Faith is a deeply biological and social human abstraction, and a machine would have no use for it. Machines would be either slaves or masters, which is precisely what makes any religion of machines so dangerous.
While there are undoubtedly many nefarious ways AIs could be used, such as cyber attacks, and while they probably shouldn’t be trusted to decide who to shoot, there are also many great purposes they could serve. For instance, cyber-secret futurist Arthur Keleti believes we might even be able to trust AIs to hold and protect our data, our cyber-secrets.
His idea is a device-based AI that analyzes your data, learns from your behavior how important each type of data is, and keeps everything locked up. If law enforcement ever needs data from your device, such as in a criminal investigation, the AI could be asked to return a specific type of data pertinent to the case, if it exists. This idea could very well solve a problem law enforcement has been facing for years – device encryption.
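Keleti hasn’t published an implementation, but the idea above can be sketched in a few lines. The following is a hypothetical illustration only: the class names, sensitivity scores, and `respond_to_warrant` method are all invented here to show the concept of an on-device guardian that releases only records matching a narrowly scoped, authorized request, rather than unlocking the whole phone.

```python
# Hypothetical sketch of an on-device "guardian" AI in the spirit of
# Keleti's idea. Names and the 0-3 sensitivity scale are illustrative,
# not part of any real product or proposal.
from dataclasses import dataclass


@dataclass
class Record:
    category: str     # e.g. "location", "messages", "photos"
    sensitivity: int  # 0 (public) .. 3 (most private), learned from user behavior
    content: str


class GuardianAI:
    def __init__(self, records):
        # In a real device, everything would stay encrypted by default;
        # here a plain list stands in for the protected store.
        self._records = records

    def respond_to_warrant(self, category, max_sensitivity=2):
        """Return only records pertinent to an authorized request,
        never dumping the entire device."""
        return [r.content for r in self._records
                if r.category == category and r.sensitivity <= max_sensitivity]


device = GuardianAI([
    Record("location", 1, "2017-10-01 parking lot, 5th Ave"),
    Record("messages", 3, "private chat"),
    Record("location", 3, "home address"),
])

# Only the low-sensitivity location record is released; the home
# address and private messages stay locked.
print(device.respond_to_warrant("location"))
```

The design choice worth noting is that the gatekeeping happens on the device itself: investigators receive an answer to a specific question instead of a decryption key, which is what would preserve the balance between privacy and lawful access described below.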
Encryption means that phones become as good as bricks without the proper passwords, which hurts investigations across the United States. The FBI alone has admitted to holding some 7,000 seized devices it cannot search because of this safety feature. And it is a safety feature: it is primarily meant to protect people’s data from thieves, nosy friends and so on.
An AI like the one Keleti envisions could restore the balance between people’s desire to protect their privacy and their cyber-secrets, the most sensitive data they hold, and law enforcement’s need for pertinent data to solve cases.
Other types of AI have been making our lives easier for years now, from phone assistants to search engines and beyond.