Taking the Right AI Path
April 5, 2021

by Stephen Kanyi

In early 2019, Vox posited a scary scenario involving a sophisticated AI system, “with the goal of, say, estimating some number with high confidence. The AI realizes it can achieve more confidence in its calculation if it uses all the world’s computing hardware, and it realizes that releasing a biological superweapon to wipe out humanity would allow it free use of all the hardware. Having exterminated humanity, it then calculates the number with higher confidence.”

This could be our future if we are not careful about the path we take with AI. These systems are getting smarter and more powerful every year, and how we use them may well determine the fate of humankind.

Warnings and Prophecy

Recently, many have been sounding alarm bells about the danger posed by AI. The famous physicist Stephen Hawking told an audience in Portugal that, “Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilisation.” He went on to argue that AI could be a more dangerous threat than nuclear missiles, explaining: “If AI went bad, and 95 percent of humans were killed, then the remaining five percent would be extinguished soon after. So, despite its uncertainty, it has certain features of very bad risks.”

Even billionaire tech mogul Elon Musk, whom one might expect to champion technology in all its forms, reiterates the dangers of AI. He told his SXSW audience: “I am really quite close… to the cutting edge in AI, and it scares the hell out of me. It’s capable of vastly more than almost anyone knows, and the rate of improvement is exponential.”

Do they sound overly alarmist? Maybe. But I am sure many would agree that they have a point. As AI gets more powerful and more integrated into the global economy, any ‘independent’ or rogue decisions could be cataclysmic. Gary Marcus, a cognitive scientist and author, explained in a brilliant essay that ‘the smarter machines become, the more their goals could shift.’

“Once computers can effectively reprogram themselves, and successively improve themselves, leading to a so-called ‘technological singularity’ or ‘intelligence explosion,’ the risks of machines outwitting humans in battles for resources and self-preservation cannot simply be dismissed.”

In the Wrong Hands

However, as it stands now, and for the near to medium term, AI is still far from achieving consciousness. It will remain a tool, a very powerful one beholden to its creators, and herein lies one of the biggest questions of this century: what are we going to use this powerful technology for?

Today, most of the institutions pushing the development of AI technology are doing it for commercial purposes, i.e. to create better products such as self-driving vehicles and virtual assistants like Siri. As the technology matures, however, it could be used for other, more sinister purposes. As Youpal Group owner, co-founder and CEO Karl Leahlander says: “If we take a look at our own company, how can similar entities be allowed so much freedom in dealing with AI? Not everyone has the intentions we have to create a positive change and impact. So, how can this field be as unregulated as it is today?”

In 2015, more than 30,000 AI and robotics researchers signed an open letter on the subject of autonomous weapons. It begins with a description of what AI weapons are and the dangers of an AI arms race.

Those 30,000 researchers have at least started this crucial conversation, stating: “Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so.”
