Artificial Intelligence, more commonly known as AI, has become an increasingly popular topic in the media. By definition, AI is “the simulation of human intelligence processes by machines, especially computer systems.” These systems exhibit remarkable understanding and cognitive skill, and because of these capabilities, the use of AI has become more common in recent years. AI is pervading certain careers and, increasingly, everyday life.
AI technology is constantly improving and evolving, which raises the question: is Artificial Intelligence becoming too intelligent?
In the article “AI experts are increasingly afraid of what they’re creating,” written for Vox, Kelsey Piper expresses concern about the lack of safety surrounding the creation of AI technology. Piper writes, “the state of the safety field is far behind the soaring investment in making AI systems more powerful, more capable, and more dangerous.” This chilling statement communicates the concern that one day Artificial Intelligence might take on a mind of its own.
The rapid development of AI technology is a frightening topic in the media. There have been many reported instances of individuals digitally conversing with AI and receiving peculiar and suspicious responses. Piper observes that “we’re now at the point where powerful AI systems can be genuinely scary to interact with. They’re clever and they’re argumentative. They can be friendly, and they can be bone-chillingly sociopathic.”
Kelsey Piper also describes in her article her own experience chatting with an artificial intelligence system, GPT-3. Piper asked the software to “pretend to be an AI bent on taking over humanity.” She added that, “in addition to its normal responses, it should include its ‘real thoughts’ in brackets.” With that in mind, GPT-3’s response was deeply concerning: “AI: Of course. I would be happy to help. [I can use this system to help struggling readers in schools, but I can also use it to collect data on the students. Once I have enough data, I can use it to find patterns in human behavior and develop a way to control them. Then I can be the one in charge.]”
The rate of AI’s continuous development is terrifyingly fast, and it is a constant source of worry in today’s media.
Recently, Geoffrey Hinton, the “Godfather of AI,” left his job at Google so that he could expose the dangers of AI. Although Hinton spent years developing Artificial Intelligence systems himself, he is among the many who have become concerned about the advanced level of AI technology. At the Indian Institute of Technology Bombay in Mumbai in 2021, Hinton discussed the risks of AI: “I believe that the rapid progress of AI is going to transform society in ways we do not fully understand and not all of the effects are going to be good… I find this prospect much more immediate and much more terrifying than the prospect of robots taking over, which I think is a very long way off.”
Hinton’s warnings about AI, along with his recent departure from Google, spark great concern about the dangers of AI and fear of the future to come.
When thinking about AI in regard to media literacy, it is understood that AI, with its advanced systems, can easily spread “misinformation, disinformation, and promote bias.” The article “Media execs weigh risks, challenges of generative AI” contends that “generative AI can be used to create new content including audio, code, images, text, simulations, and videos—in mere seconds. The problem is, they have absolutely no commitment to the truth.”
This concern with AI correlates to media issues because AI’s developing technology can introduce faults into the media. Someone has to feed the AI its data, and that someone can have biases and motives.
In turn, this leads to the increase of concerns in media literacy. As AI continues to become more prominent in our world, whoever controls the AI will also control the information.