Will Artificial Intelligence Take Its Bias To Space?

Image: Dark background with the Youth STEM Matters logo on the left, the Sustainable Development Goals banner on the right, and the SDG 5 (Gender Equality) and SDG 10 (Reduced Inequalities) icons in the centre.

Artificial Intelligence (AI) applications rely heavily on data, as they require large amounts of it to learn and train themselves before they can start to perform tasks in new situations without the help of humans. 

Many of these AI models require large amounts of computational power to run, but the availability of high processing power in smaller devices and of cheaper cloud computing has skyrocketed. This has led to a vast increase in the use of AI applications in day-to-day life, mostly without us noticing.

When I search for music or videos on online streaming services, machine learning algorithms are used to make suggestions that suit my preferences. The AI records my past searches, as well as my downloads, and merges them with data it has collected from other users to make accurate predictions of videos and music that would interest me. When I take a photo using my camera, the facial recognition software detects human faces or objects in the scene. Virtual assistants, such as Google Home and Amazon’s Alexa, use AI-powered voice recognition to understand what I am saying and execute my commands.
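To make this concrete, here is a minimal, illustrative sketch of how such a recommendation could be computed. It is not how any particular streaming service actually works; the ratings and the simple user-similarity approach (a basic form of collaborative filtering) are made up for illustration.

```python
import numpy as np

# Hypothetical ratings matrix: rows are users, columns are songs.
# A value of 0 means the user has not listened to that song yet.
ratings = np.array([
    [5, 3, 0, 1],   # me
    [4, 0, 4, 1],   # other user A
    [1, 1, 5, 5],   # other user B
    [5, 4, 0, 2],   # other user C
], dtype=float)

def cosine_similarity(a, b):
    """How alike two users' listening histories are (1 = identical taste)."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

me = ratings[0]
others = ratings[1:]

# Weight each other user's ratings by how similar their taste is to mine,
# then estimate my interest in songs I have not heard (zeros in my row).
weights = np.array([cosine_similarity(me, user) for user in others])
predicted = weights @ others / weights.sum()

for song, (mine, guess) in enumerate(zip(me, predicted)):
    if mine == 0:
        print(f"Song {song}: predicted interest {guess:.2f}")
```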

These systems are very cool: they understand me, remember my daily schedules and help me make decisions. However, a machine that remembers everything forgets nothing, and this is where the problem of bias starts.

 

The Bias of the Machines

Bias occurs when an AI model makes systematic errors in its predictions for a given dataset, causing it to discriminate against, or produce unfair results for, certain groups of people based on attributes that should not matter, such as name, race, colour, gender, shape, and nationality.
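As a rough illustration (with entirely made-up decisions and labels), one simple way to spot this kind of bias is to compare a model's error rate across groups, rather than looking only at its overall accuracy:

```python
# Made-up example: 1 = application approved, 0 = application rejected.
# "truth" is what a fair decision would have been, "predicted" is the model's
# output, and "group" records an attribute that should not matter.
truth     = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]
predicted = [1, 1, 0, 1, 0, 0, 0, 1, 0, 0]
group     = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

def error_rate(label):
    rows = [(t, p) for t, p, g in zip(truth, predicted, group) if g == label]
    mistakes = sum(1 for t, p in rows if t != p)
    return mistakes / len(rows)

for label in ("A", "B"):
    print(f"Group {label}: error rate {error_rate(label):.0%}")

# If the model is consistently more wrong for one group than the other,
# it is biased against that group, even if its overall accuracy looks good.
```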

As AI models are adopted by governments, businesses, health services, educational institutions, media, judiciary and law enforcement, the consequences of bias in AI applications become more severe and far-reaching.

It is one thing for AI algorithms to make biased decisions when suggesting songs that you may like or recipes that you should try out. It is a completely different situation when a biased AI algorithm starts confusing identities, rejecting job applications, declining scholarships, making incorrect medical diagnoses, handing out more severe judicial sentences, or offering lower-quality service to people based on their ethnicity, gender, race, size, age or name.

 

Why do Biases Occur?

Biases appear in AI systems because they have been taught to be biased by humans, and the reinforcement they receive from their environment solidifies those biases. Most AI algorithms require large datasets to learn and train from. Once they have been trained on a known dataset under supervision, they can start making predictions on unknown data without supervision.
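A minimal sketch of that supervised workflow, using the scikit-learn library and an entirely made-up hiring dataset, might look like this:

```python
from sklearn.linear_model import LogisticRegression

# Made-up "known" dataset the model is trained on under supervision:
# each row is [years_of_experience, test_score], and the label records
# whether that applicant was historically hired (1) or not (0).
X_train = [[1, 60], [2, 65], [5, 80], [7, 85], [3, 70], [8, 90]]
y_train = [0, 0, 1, 1, 0, 1]

model = LogisticRegression()
model.fit(X_train, y_train)      # supervised learning on labelled data

# The trained model now makes predictions on new, unlabelled applicants.
X_new = [[4, 75], [6, 62]]
print(model.predict(X_new))

# Whatever patterns, including prejudices, are baked into the historical
# labels in y_train, the model will faithfully learn and repeat them.
```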

Who gets to create the datasets or decide which one to train the AI algorithm on? Datasets can be derived from many different sources: information available on the internet; proprietary information collected by private companies and banks; media, such as journals and newspapers; or information collected by the government, law enforcement, and border agencies. 

But datasets are never neutral. Who created the datasets and why? From whom was the information collected? What information was collected and what information was not collected? How was the information collected? Who was present when the information was collected? When was the information collected? How was the data labelled? All of these factors add biases to the data.

When a teacher asks a student for non-anonymous feedback on their teaching style, the data is likely to be biased: teachers hold power over students' evaluations, and students would not want to offend them. When tobacco and vaping companies collect data on the health impacts of smoking, can that data be trusted?

If people with darker-coloured faces are misidentified, tagged as threatening, stopped more often by law enforcement, given more severe sentences, and given fewer resources and opportunities to succeed than people with lighter-coloured faces, how can machines trained on datasets labelled with these outcomes not be biased? They will simply reflect the biases embedded in the data that trained them.

If machines are biased, then the results they produce will be biased, whether on Earth or in space.

 

Can Bias Travel from Earth to Space?

On 29 March 2019, NASA astronauts Anne McClain and Christina Koch were scheduled to conduct an all-women spacewalk. Only 13 women had performed spacewalks before, and each time it was alongside men. At that moment, the International Space Station had only one extravehicular activity (EVA) suit configured with a medium-size torso component, and both women preferred the medium size. Since only one of them, Christina Koch, could use it, the all-women spacewalk had to be postponed; it eventually took place on 18 October 2019 [1].

This shows how technology may end up favouring one group of people over another. Decades ago, NASA decided to manufacture the spacesuit torso in only three sizes: medium, large, and extra-large. The decision was based on the expected build of the astronauts who would be using them and on cost-effectiveness. These assumptions, made early in the planning process, have since determined how spacewalks are assigned. As more diverse groups of people aspire to become astronauts, work at ground control, or take up engineering and communication roles, technology will play an even more decisive role, either supporting them or discriminating against them.

AI applications are already playing an active role in the space sector. Their uses range from analysing and interpreting satellite imagery and removing noise from data collected by space telescopes, to sending AI robots to the International Space Station. Japan's Kirobo robot, sent in 2013, had voice recognition and synthetic speech capabilities.

More recently, AI robots such as CIMON 2 (Crew Interactive MObile companioN) have become more advanced companions to astronauts. CIMON 2 uses facial recognition software to learn which astronaut it is talking to, and the Watson Tone Analyzer developed by IBM to assess and react to astronauts' emotions. The robot is built using the same technologies as chatbots on Earth, which automate customer service interactions and detect customers' tone during dialogues between humans and machines.

As more missions to take humans to space are planned, for example Artemis (NASA), the Lunar Gateway (NASA), Shenzhou (China) and Gaganyaan (India), the use of AI applications will grow. Complex space missions and limited human resources in space mean that AI robots and applications will take on more advanced roles, ranging from selecting astronauts and teams, and managing spacecraft and scientific operations, to monitoring the safety and health of astronauts.

Ultimately, given that space-related AI applications are still written by people on Earth using datasets generated and labelled on Earth, biases are likely to escape into space too. 

 

References

[1] B. Dunbar. (2019, October). Friday's All-Woman Spacewalk: The Basics [Online]. Available: https://www.nasa.gov/feature/fridays-all-woman-spacewalk-the-basics

Artash Nath

Artash is a 14 year old from Toronto, Canada. He has been working at the intersection of Space, Robotics and Machine Learning for the past 5 years and has built many projects. In 2014, he co-founded an organization, www.HotPopRobot.com, to ensure that the next generation is educated on space and astronomy. You can follow Artash’s projects here.
