Artificial intelligence (AI) is a widely misunderstood term. Thanks to popular depictions in the media, countless people assume something sinister, whether it is AI taking over the world or hijacking satellites, and enjoy the fiction rather than the reality. Technically speaking, artificial intelligence is designed by scientists to mimic useful human patterns for the betterment of humanity. But how far can the technology be pushed? With Norman, things are certainly getting interesting.
Computer scientist Stuart Russell, who co-wrote the standard textbook on AI, has spent his career thinking about the problems that arise when a machine's designer directs it toward a goal without considering whether its values are fully aligned with humanity's.
A number of organizations have sprung up in recent years to confront the potential risks of AI, including OpenAI, a research group co-founded by techno-billionaire Elon Musk "to build safe [AGI], and ensure AGI's benefits are as widely and evenly distributed as possible." So is it wise to assume that AI is perfectly safe? For now, perhaps.
AI systems are fed data so they can learn human patterns, often without any explicit goal being set for them. Although this can take a considerable amount of time to master, the potentially threatening future of AI can be recreated in seemingly harmless forms. The intimidating movies have materialized in the form of Norman, an AI with a disturbingly dark imagination.
This week, researchers at MIT unveiled their latest creation: Norman, a disturbed AI. (Yes, he’s named after the character in Hitchcock’s Psycho.) They say “Norman is an AI that is trained to perform image captioning, a popular deep learning method of generating a textual description of an image. We trained Norman on image captions from an infamous subreddit (the name is redacted due to its graphic content) that is dedicated to document and observe the disturbing reality of death. Then, we compared Norman’s responses with a standard image captioning neural network (trained on MSCOCO dataset) on Rorschach inkblots; a test that is used to detect underlying thought disorders.”
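The setup the researchers describe, training a model to map image features to text and then comparing two models that differ only in their training data, can be caricatured in a few lines. The sketch below is a drastic simplification, not MIT's actual method: it replaces the deep network with a nearest-neighbor lookup, and every feature vector and caption is invented purely for illustration.

```python
# Toy "image captioner": nearest-neighbor lookup over feature vectors.
# Real captioning systems use a CNN encoder and a language-model decoder;
# this stand-in only shows that the caption comes from the training data.
import math

def caption(query, training_set):
    """Return the caption whose feature vector lies closest to the query."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    best = min(training_set, key=lambda item: dist(item[0], query))
    return best[1]

# Invented 3-D "features" (e.g. darkness, symmetry, edge density).
coco_data = [((0.2, 0.9, 0.4), "a bird on a tree branch"),
             ((0.3, 0.8, 0.5), "a baseball glove")]
grim_data = [((0.2, 0.9, 0.4), "a man is electrocuted"),
             ((0.3, 0.8, 0.5), "man gets pulled into machine")]

inkblot = (0.22, 0.88, 0.42)          # the same ambiguous input for both
print(caption(inkblot, coco_data))    # -> "a bird on a tree branch"
print(caption(inkblot, grim_data))    # -> "a man is electrocuted"
```

The same lookup code, given the same ambiguous input, produces a benign or a grim caption depending solely on which dataset it was given, which is the heart of the Norman experiment.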
While it is debatable whether the Rorschach test is a valid way to measure a person’s psychological state, there’s no denying that Norman’s answers are immensely disturbing. If you don’t believe a machine is capable of such thoughts, see for yourself!
What Norman saw in those inkblots could have been conjured by the mind of a lunatic. "A man is electrocuted and catches to death," reads the first caption. "Man gets pulled into dough machine," says another. "Man is murdered by machine gun in broad daylight," reads a third. For comparison, when an AI that had been raised on good, wholesome COCO (Common Objects in Context) saw those same inkblots, it described them as "A group of birds sitting on top of a tree branch," "A black and white photo of a small bird," and "A black and white photo of a baseball glove," respectively. That is deeply disturbing, to say the least!
As the researchers explain: "Norman is born from the fact that the data that is used to teach a machine learning algorithm can significantly influence its behavior. So when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it." Remember Tay, the infamously racist chatbot? It wasn't built to be racist; it quickly turned racist after being exposed to Twitter users, and was subsequently shut down. That, too, was a case of biased training data.
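The researchers' point, that the bias lives in the data rather than in the algorithm, can be demonstrated by running literally identical code on two different corpora. The sketch below is a hypothetical word-association model with invented training sentences; it is meant only to illustrate the principle, not to resemble Norman or Tay internally.

```python
# One algorithm, two diets: a tiny bigram model learns which word most
# often follows a given word. The code never changes; only the data does.
from collections import defaultdict, Counter

def train(corpus):
    """Count, for each word, the words that follow it in the corpus."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            follows[a][b] += 1
    return follows

def complete(model, word):
    """Continue with the follower seen most often during training."""
    return model[word].most_common(1)[0][0]

neutral = ["the machine washes clothes",
           "the machine washes dishes",
           "the machine makes coffee"]
grim = ["the machine crushes a man",
        "the machine crushes everything",
        "the machine destroys a city"]

print(complete(train(neutral), "machine"))  # -> "washes"
print(complete(train(grim), "machine"))     # -> "crushes"
```

The function `complete` is byte-for-byte the same in both runs; the "unfair" association comes entirely from what it was fed, which is exactly the lesson the Norman team draws.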
While the spirit of the present age encourages experimentation, we need to ask whether certain principles should be reconsidered. Artificial intelligence is meant to pave the way for the betterment of mankind. Experimentation is welcome, but we need to draw clear boundaries around implementation: the consequences of each invention must be weighed against the curiosity that produced it. Now that we have shown it is possible to create an AI biased toward certain concepts, it is wiser to lean toward the preservation and convenience of humanity than to create potential threats. Is an AI like Norman really necessary?