Artificial intelligence (AI) is a widely misunderstood term. Thanks to popular media depictions, many people imagine AI in sinister terms: machines taking over the world or seizing control of satellites. The fiction has been enjoyed far more than the reality has been understood. In practice, artificial intelligence is designed by scientists to mimic useful human patterns for the betterment of humanity. But how far can the technology actually be taken? With Norman, things are certainly getting interesting.
Computer scientist Stuart Russell, who co-wrote the standard textbook on AI, has spent his career thinking about the problems that arise when a machine's designer directs it toward a goal without considering whether its values are fully aligned with humanity's.
A number of organizations have sprung up in recent years to confront the potential risks of AI, including OpenAI, a research group co-founded by techno-billionaire Elon Musk "to build safe [AGI], and ensure AGI's benefits are as widely and evenly distributed as possible." So is it wise to assume that AI is perfectly safe? For now, perhaps.
AI systems are fed data and learn to imitate human patterns, often without an explicit goal being set for them. Although mastering this can take a considerable amount of time, the threatening future of AI imagined on screen can be recreated in seemingly harmless forms. Those intimidating movies have materialized in Norman, an AI with a murderous streak.
While it is debatable whether the Rorschach test is a valid way to measure a person’s psychological state, there’s no denying that Norman’s answers are immensely disturbing. If you don’t believe a machine is capable of such thoughts, see for yourself!
What Norman saw in those inkblots is most definitely something conjured by the mind of a lunatic. "A man is electrocuted and catches to death," reads the first caption. "Man gets pulled into dough machine," says another. "Man is murdered by machine gun in broad daylight," reads a third. For comparison, when an AI that had been raised on good, wholesome COCO (Common Objects in Context) saw those same inkblots, it described them as "A group of birds sitting on top of a tree branch," "A black and white photo of a small bird," and "A black and white photo of a baseball glove," respectively. The contrast is deeply disturbing, to say the least.
As the researchers explain: "Norman is born from the fact that the data that is used to teach a machine learning algorithm can significantly influence its behavior. So when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it." Remember Tay, the AI chatbot that turned incredibly racist? It wasn't built that way; it learned its behavior after being exposed to Twitter users, and was subsequently shut down. That, too, was a case of biased data producing biased behavior.
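The researchers' point can be illustrated with a deliberately tiny sketch: the same learning procedure, trained on two different corpora, gives very different "descriptions" of the same ambiguous input. Everything here is invented for illustration (the corpora, the function names, and the crude word-frequency "model"); it is not Norman's actual captioning system, just a minimal demonstration that the data, not the algorithm, carries the bias.

```python
from collections import Counter

def train_caption_model(corpus):
    """Learn word frequencies from a training corpus of captions.
    The 'model' is just a Counter -- a stand-in for a real learner."""
    counts = Counter()
    for caption in corpus:
        counts.update(caption.lower().split())
    return counts

def describe(model, top_n=3):
    """'Describe' an ambiguous input by emitting the model's most frequent words."""
    return [word for word, _ in model.most_common(top_n)]

# Hypothetical corpora: identical algorithm, different training data.
neutral_corpus = [
    "a bird sitting on a branch",
    "a bird on a tree branch",
    "a small bird in a tree",
]
dark_corpus = [
    "a man is pulled into a machine",
    "a machine injures a man",
    "a man falls into a machine",
]

neutral_model = train_caption_model(neutral_corpus)
dark_model = train_caption_model(dark_corpus)

print(describe(neutral_model))  # vocabulary dominated by birds and trees
print(describe(dark_model))     # vocabulary dominated by men and machines
```

Swap the corpus and the "personality" of the model swaps with it, while the training code never changes: that is exactly the asymmetry the MIT team is pointing at.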