Monkey Holding Box
What is the "monkey holding a box" incident?
As technology progresses, people expect it to achieve more with each passing year. Books and movies depict futures shaped by advanced software and machinery. But looking back at the errors of the past 25 years, we can see that technology is not as objective as one might think, and that it is still maturing. One example that highlights these errors is the "monkey holding a box" Google image search issue, which sparked debate and outrage across social media. But what is it exactly?
Timeline of the incident:
In 2023, users noticed something unusual about Google's image search. When they searched for images of a monkey holding a box, the results showed an African boy holding a box. Associating a Black child with a monkey echoes a long history of racist imagery, and the fact that such an incident occurred so recently is astonishing.
The Aftermath:
This incident went viral across social platforms, with #Monkeyholdingabox trending as people shared screenshots of what they had found. Since search engines are expected to be objective and simply return what users ask for, the incident baffled netizens.
Why did it happen?
There could be many reasons why it happened, but the error falls under the category of algorithmic bias. Here are some possible explanations behind the controversy:
- Biased training data: The image data the search engine learned from may have been incomplete or riddled with societal bias. When the source of information is itself prejudiced, the results will reflect that prejudice.
- Tags: Some users may have deliberately mislabeled the pictures they uploaded as a joke, and the algorithm did not catch it (see the sketch after this list).
- Algorithmic misclassification: The machine-learning models behind Google search may have latched onto superficially similar visual features and wrongly concluded that a photo showed a monkey carrying a box when it did not.
- Search query interpretation: Once word of the controversy spread, people searched for the images specifically, and every click on the incorrect results may have reinforced the algorithm's ranking of them, creating a feedback loop.
- Lack of contextual understanding: Artificial intelligence does not grasp the social and ethical history behind such imagery, so it cannot recognise why a pairing like this is harmful.
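To make the mislabeled-tags explanation concrete, here is a minimal sketch of a naive tag-based image search. Every filename, tag, and the scoring rule are invented for illustration and bear no relation to Google's actual systems; the point is only that a handful of joke-mislabeled uploads can outrank correctly labelled ones.

```python
from collections import defaultdict

# Toy "uploaded images": each has a filename and user-supplied tags.
# The last two entries were mislabeled "as a joke" with the searched-for tags.
images = [
    {"file": "macaque_box_01.jpg", "tags": ["monkey", "box"]},
    {"file": "chimp_crate.jpg", "tags": ["monkey", "crate"]},
    {"file": "boy_with_box_01.jpg", "tags": ["monkey", "holding", "box"]},
    {"file": "boy_with_box_02.jpg", "tags": ["monkey", "holding", "box"]},
]

# Naive index: tag -> set of files carrying that tag.
index = defaultdict(set)
for img in images:
    for tag in img["tags"]:
        index[tag].add(img["file"])

def search(query):
    """Rank images by how many query words match their user-supplied tags."""
    scores = defaultdict(int)
    for word in query.lower().split():
        for f in index.get(word, ()):
            scores[f] += 1
    return sorted(scores, key=scores.get, reverse=True)

# The mislabeled uploads match all three query words, so they rank first.
print(search("monkey holding box"))
```

Because the joke-tagged uploads match all three query words while the genuinely relevant images match only two, the naive scorer puts the wrong pictures at the top, with no malice anywhere in the code itself.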
What is Algorithmic Bias?
Algorithmic bias occurs when AI or software produces flawed, incorrect, or prejudiced outputs. A common definition: an AI or computer program yields discriminatory results because it learned from a biased dataset. Technology reflects the data we feed it, so the blame for this incident cannot be placed solely on an incomplete or biased database, nor on the algorithm alone. Google has remained quiet about the controversy, so its precise origin cannot be traced.
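That definition can be shown with a toy example. The sketch below uses invented counts and a deliberately simple frequency-based labeler: trained on prejudiced examples, it reproduces the prejudice at prediction time, which is the essence of algorithmic bias.

```python
from collections import Counter, defaultdict

# Hypothetical labeled training pairs (image feature -> human-applied label).
# 9 of 10 "person with box" images carry the wrong, offensive label,
# so the learner inherits that skew.
training = (
    [("person_with_box", "monkey holding box")] * 9
    + [("person_with_box", "person holding box")] * 1
    + [("macaque_with_box", "monkey holding box")] * 10
)

# "Training": count how often each label follows each feature.
counts = defaultdict(Counter)
for feature, label in training:
    counts[feature][label] += 1

def predict(feature):
    """Pick the most frequent training label for this feature."""
    return counts[feature].most_common(1)[0][0]

print(predict("person_with_box"))   # 'monkey holding box' -- learned bias
print(predict("macaque_with_box"))  # 'monkey holding box' -- correct here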
Underlying issues:
It may seem like a simple glitch, or even a joke, at first, but the underlying implications are far more consequential. Prejudice against people of different skin tones, nationalities, religions, and ethnicities has existed for centuries, and it remains a major issue in the modern world. It is said that one in six people faces prejudice in some form every day. Instead of being addressed and solved, the problem only grows. Many do not even consider racism a problem, as if it were part of our biological inheritance. But that impression is a weed, and it must be rooted out before it spreads further than it already has. Stereotypes can lead to disaster, especially in the hands of authorities.
Similar Incidents:
- Google Photos and Gorillas:
In 2015, Google Photos' image-recognition AI mistakenly labelled dark-skinned individuals as gorillas. The incident sparked outrage, and Google apologised and removed the label from the system entirely.
- Microsoft’s Tay and Racism:
Back in 2016, Microsoft introduced its chatbot Tay on Twitter. Within 24 hours, the company had to shut it down because users taught the software to produce racist and other offensive outputs. This shows how AI learns from what it sees and analyses; the issue lay with the users and their mindsets.
- Amazon AI and Women recruits:
In 2018, it emerged that Amazon had used an AI system to screen resumes. The system was scrapped after it was found to be gender-biased, downgrading female candidates, likely because it had been trained on resumes from a male-dominated applicant pool.
- Apple Card and Gender discrimination:
In 2019, the Apple Card's credit system was found to offer women lower credit limits than men with similar financial backgrounds. This prompted the New York Department of Financial Services to investigate.
- Beauty AI and Skin tones:
Back in 2016, an AI program was set up to judge a beauty contest, but it overwhelmingly rated light-skinned people as more beautiful, possibly because the software was not trained on data representing all skin tones. Racism can distort even notions of beauty.
Importance of Unity:
The world needs to understand that, collectively, it can solve many problems. Deep-rooted hatred and prejudice only create more problems to solve, to the point where even AI learns from our divisions more than from our unity. Our legacy is determined by what we are and what we pass on, and it should not be hatred but a new generation committed to understanding and fixing the disasters of the past. The silver lining is that films and other institutions are making real moves toward cultural inclusivity and representation, and the fashion and beauty industries have started to look beyond their old definitions of beauty.
Corporate Responsibility:
The companies behind such software should be more careful with what they build. From the training datasets to testing, they should take full responsibility for reducing the number of such incidents. Here are some steps that should be taken:
- Use more diverse datasets: The first step is to avoid datasets limited to a specific nationality or skin tone. Programmers should represent cultures equally, so that is what the AI learns; a tool meant to serve the global population should be trained on data that reflects it.
- Use bias-detection tools: To eliminate such problems before they take shape, bias checks should be built into the development pipeline itself, so outputs embedded with prejudice are flagged before they reach users (see the sketch after this list).
- Diverse workplaces: Teams should include people from various cultures, so issues like these are caught early. Diverse teams bring a wider range of perspectives, and those perspectives shape better results.
- Ethics and sensitivity training: Companies should audit their software regularly, and implement training that teaches programmers to recognise sensitive content and handle it appropriately.
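As one illustration of the bias-detection point above, the following sketch checks demographic parity, a common fairness metric: whether a system selects candidates at similar rates across groups. The data, group names, and the 0.1 threshold are all invented for this example; real audits use richer metrics and real outcomes.

```python
def selection_rates(outcomes):
    """outcomes: list of (group, was_selected) pairs -> selection rate per group."""
    totals, selected = {}, {}
    for group, picked in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(picked)
    return {g: selected[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in selection rate between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

# Hypothetical resume-screening results, echoing the Amazon example above.
results = (
    [("men", True)] * 40 + [("men", False)] * 10
    + [("women", True)] * 15 + [("women", False)] * 35
)

rates = selection_rates(results)
print(rates)              # {'men': 0.8, 'women': 0.3}
print(parity_gap(rates))  # 0.5 -- far above the illustrative 0.1 audit threshold
```

A check like this cannot say *why* the gap exists, but running it automatically on every model version would surface a skew like this one long before deployment.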
AI Domination:
In just a few years, generative AI and its variants have swept through many creative and manual fields. People lean on it so heavily that they may soon forget what it feels like to rely on their own efforts when necessary, growing used to answers handed to them in seconds instead of honing the skills that lie within them. It must be remembered that AI is ultimately a tool to make our lives easier, not something that defines what life is; that definition, and the decision-making behind it, remain human.
AI can serve as inspiration and a source of knowledge, but it should never craft an output entirely on its own. Controversies like this make clear that artificial intelligence is, at the end of the day, an extension of human capabilities, and we must know how to limit its usage across various fields before it spirals out of control. We should use the tool to enhance and improve our abilities, not build a definition of ourselves from its results, as doing so undermines creative and other fields.
To conclude, "monkey holding a box" is not a mere combination of words or the result of a random error; it shows what our opinions and prejudices can produce. Humans must understand that there is more to us than fighting for dominance, and that we should walk toward a more peaceful world to make it stronger and better. Fortunately, many organisations and individuals are working to fight racial and other forms of prejudice that threaten to tear humanity apart. It is the thought of change itself that sparks a difference.