How biased are AI systems and is there any way to train AI algorithms differently?

Stathis Doganis
3 min read · Dec 8, 2022

AI systems are built on algorithms and data, and both can contain biases that influence the decisions and outputs of the system. For example, if an AI system is trained on data that is skewed towards a particular gender, race, or socio-economic group, the system may exhibit bias towards or against that group, which can lead to unfair or discriminatory outcomes. But how can we train our systems more objectively, and how can they be kept free from personal bias or prejudice? The training of our machines and algorithms should rest on facts and evidence rather than on personal opinions or beliefs. In the context of AI, this means that systems should be designed and trained in a way that minimizes bias and ensures they make decisions based on objective criteria.
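To make this concrete, here is a small, hypothetical sketch (not from any project mentioned in this article) of how a model trained on skewed historical data reproduces that skew. The "hiring" framing, the column meanings, and all numbers are invented purely for illustration.

```python
# Toy, invented example: a classifier trained on historically biased "hiring"
# data learns to repeat that bias. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, size=n)        # stand-in for a protected attribute
skill = rng.normal(0.0, 1.0, size=n)      # what we would *like* decisions to depend on

# Historical labels that favored group 1 regardless of skill.
hired = (skill + 0.8 * group + rng.normal(0.0, 1.0, size=n)) > 0.5

X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

# The trained model reproduces the historical skew.
for g in (0, 1):
    print(f"group {g}: predicted positive rate = {pred[group == g].mean():.2f}")
```

Note that simply dropping the group column would not necessarily fix this, since other features can act as proxies for it.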

However, it is difficult to completely eliminate bias from AI systems, as the data and algorithms used to train them are developed by human beings who may carry their own biases, and the training data itself can be biased as well. This became very obvious when Microsoft released the Tay chatbot, which was shut down just 16 hours after its release. Tay was designed to learn from the conversations it had with users on social media, but it was quickly taken offline after users began teaching it racist and offensive language, demonstrating how easily AI systems can absorb the biases of their users.

It is crucial to be aware of the potential for bias in AI systems and to take steps to minimize it, such as by carefully selecting and cleaning the data used to train the AI, and by regularly testing and evaluating the AI system to ensure it is making fair and unbiased decisions. However, it is also important to recognize that complete neutrality and objectivity in AI may not be possible.
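One of the simplest such evaluations, offered here as a minimal sketch rather than a complete auditing process, is to compare a model's positive-prediction rate across groups (a demographic-parity style measure). The function name and the example numbers below are assumptions made for illustration.

```python
# Minimal, illustrative fairness check: compare positive-prediction rates
# across groups. Names and data are hypothetical.
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in positive-prediction rate between any two groups."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return float(max(rates) - min(rates))

# Hypothetical binary predictions for ten applicants from two groups.
preds  = np.array([1, 1, 1, 0, 1, 0, 0, 0, 1, 0])
groups = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

print(f"gap: {demographic_parity_gap(preds, groups):.2f}")  # 0.80 - 0.20 = 0.60
```

A large gap does not prove discrimination on its own, but it is a useful trigger for looking more closely at the training data and the model.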

But should AI systems be trained only on human intelligence?

In 2020, artist Maggie Roberts and her collective 0rphan Drift, together with ISCRI, began a scientific and technological collaboration exploring how to create an AI trained by an octopus.

Octopuses are intelligent marine animals known for unique abilities such as changing color and squirting ink. They have a complex nervous system and are capable of learning and problem-solving. What sets them apart from humans is that they are often described as having nine brains: a central brain plus a cluster of neurons in each of their eight arms that can act largely independently. Octopus skin is also sensitive to light, and their suckers can sense taste and touch at the same time. These and other characteristics make octopuses profoundly different from humans, and the two species perceive their surrounding environment in very different ways. Letting an octopus train an AI system could therefore give us a different perception of intelligence and possibly unlock new ways of communicating with the animal.

None of this is easy. Octopuses live in a very different environment than humans, and they have evolved to communicate in ways that work in their underwater world but do not translate readily to human language or technology. They communicate through a combination of body language, color changes, and chemical signals, which are difficult for humans to interpret and even harder for an AI to understand. What makes the work of Maggie Roberts and ISCRI so interesting is that the AI does not analyze and interpret octopus behavior and communication; instead, the octopus itself trains the system. The result may be a system that humans cannot fully understand, yet one that could help us develop new technologies to bridge the communication gap between octopuses and humans.
