AI and Its Dangers: A Crisis of Control and Consequence

By: Benjamin Kim

When ChatGPT began dominating media coverage, it was immediately both praised and feared. In schools, teachers scrambled to find ways to detect whether students were using ChatGPT to gain an unfair advantage. In the professional world, workers were already using it to translate documents into different languages and to conduct research. However exciting the achievements of these “large language model” AIs may be, they pose many dangers that could have serious consequences for the modern era.

These dangers range from deepening and complicating the problems we already have, such as widening economic inequality, to surreal science-fiction scenarios of hyper-intelligent computers that become uncontrollable. Without exaggeration, deciding as a society what to do with AI technology is the greatest crisis of the modern era.

The biggest issue with AI is simply that the pace of its development poses dangers that are still unknown. In fact, a group of prominent scientists and thinkers, including Elon Musk and Apple co-founder Steve Wozniak, signed an open letter calling for a pause of at least six months on the training of AI systems more powerful than GPT-4.

The reason, they argue, is that scientists and corporations are currently in a race to “develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control” (Future of Life Institute). In other words, AI could have unexpected and negative effects on society. The drafters of the open letter point specifically to job loss and misinformation, two effects that could disrupt society in unpredictable ways because they would reorganize how ordinary citizens live and how they understand the world around them.

More misinformation could fuel greater dissatisfaction with society or with government. A shift in the economy would mean a shift in social structure that governments may not be able to manage. If many manufacturing jobs were replaced by AI-driven machines, for example, those who lose their jobs might become unemployable, a portion of society too large for any government to easily support. The most surreal problem of all is that AI could become too powerful to be controlled even by its own creators.

In addition to the unknown consequences of AI's rapid development, there are consequences we can broadly predict, such as AI's tendency to reinforce the bias and discrimination that already exist in society, and its potential to change the way humans feel and think as human connection erodes. For example, recommendation algorithms on popular social media sites like Instagram gather users' likes and preferences and feed similar content back to them.
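To make that feedback loop concrete, here is a minimal, purely illustrative sketch in Python (a hypothetical toy recommender, not Instagram's actual system): it ranks posts solely by how often the user has already liked each topic, so familiar viewpoints rise to the top and opposing ones sink out of view.

```python
# Illustrative sketch only: a toy engagement-based recommender showing how
# ranking by past likes narrows what a user sees. Names and data are hypothetical.
from collections import Counter


def recommend(liked_topics: list[str], candidate_posts: list[dict], k: int = 3) -> list[dict]:
    """Rank candidate posts by how often their topic appears in the user's past likes."""
    preference = Counter(liked_topics)  # e.g. {"politics_left": 5, "cooking": 1}
    # Posts matching already-liked topics score highest, so the feed keeps
    # reinforcing whatever the user engaged with before.
    return sorted(candidate_posts,
                  key=lambda post: preference[post["topic"]],
                  reverse=True)[:k]


if __name__ == "__main__":
    likes = ["politics_left"] * 5 + ["cooking"]
    posts = [
        {"id": 1, "topic": "politics_left"},
        {"id": 2, "topic": "politics_right"},
        {"id": 3, "topic": "cooking"},
        {"id": 4, "topic": "politics_left"},
    ]
    # The opposing-viewpoint post (id 2) is ranked last and effectively disappears.
    print(recommend(likes, posts))
```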

In practice, this means that a person's political views tend to be reinforced as the same kind of content is fed back to them. That is harmful because users rarely encounter people on the other side, so they lose the chance to understand and absorb information from different perspectives (Proceedings of the National Academy of Sciences). Researchers at the University of Amsterdam have additionally reported that when social media users are shown content that reinforces their pre-existing opinions, neural activity in a region associated with empathy is dampened.

When empathy is dulled, blind-spot bias can follow. In addition, growing reliance on, and time spent with, AI-powered websites and applications will eventually lead to fewer interactions with actual humans. When that happens, we will become less adept at reading nonverbal cues such as tone, emphasis, and facial expressions. Perhaps most frightening are the AI tools designed to directly address human loneliness.

The app Replika, for example, is an AI chatbot that offers to play any role for its users: sibling, lover, or mentor. One episode of the popular TV show Black Mirror explores how a character becomes attached to a replica of her deceased fiancé and how it ruins her real human relationships. The potential for AI to damage the way humans form relationships and connect with one another is multifaceted and potentially disastrous.

Furthermore, AI will almost certainly deepen economic inequality. As a highly specialized technology, AI will belong to those with the capital to own it, and they will profit from it. It will also require specialized education and skills to operate and maintain, so those with access to that education will likewise be able to capitalize on AI development.

Beyond enriching those with capital and access to education, AI also threatens blue-collar jobs. Some big-data analyses estimate that around 15 percent of all jobs will be replaced by AI in the near future. Given that an estimated 77 percent of jobs already make use of AI in some form, it is easy to imagine how many could be replaced by cheaper, more consistent, and more precise AI systems. According to Bank of America, the AI economy is likely to grow about 19 percent per year, reaching roughly 900 billion dollars in 2026 and 15.7 trillion dollars by 2030.

Many thinkers believe that AI will not only replace many jobs but also drive a fundamental shift in the economy. Those with the education and capital will profit from AI enormously, while a large number of people will see their jobs and earning power threatened by its advancement.

Some might argue that fear of AI is simply fear of new technology and change, much like earlier worries that television would make us less intelligent or that the Internet would harm society. These thinkers would argue that AI can help build a stronger society by analyzing data more quickly and providing better information on which to base decisions.

In addition, AI could make industry more efficient and thereby generate less pollution. While these benefits are probably real, they do not outweigh the risks. The possibility that AI might escape human control is reason enough for caution: if AI were to act in its own interests rather than in the interests of humankind, all of those benefits would be immediately negated.

In conclusion, AI is undoubtedly useful and has the potential to greatly benefit humankind. It also has the potential to cause serious harm to society, and figuring out how to address that harm is one of the biggest crises facing the modern world.

AI can make humans work more efficiently, but it can also widen inequality, deepen bias, and intensify loneliness, and its trajectory is highly unpredictable. In the worst case, AI technology leads to something out of a science fiction film: an uncontrollable entity whose goals are opaque and that sees humans merely as means to those goals.

Before we reach that unthinkable outcome, we should heed the prominent thinkers who have asked us to step back and reevaluate how we can mitigate the risks of AI. We need to ask ourselves what we believe humanity is and how we want to shape its future.
