Digital World

Multicultural robots and racist machines

Tarek Amr
March 7, 2017

Is it possible for computers to become racist and act upon stereotypes? Perhaps it is best to first understand how computers, robots and machine learning work in order to grasp their limitations and possibilities.

Wall-E from the Pixar movie classic (Image: Imago/EntertainmentPictures)

The accusation of bias is usually negative. No one trusts a biased opinion. But bias is essential in machine learning algorithms, and if you bear with me, you will see that it exists in humans for a reason too. Let's first try to see how computers are taught. Imagine you wanted to teach a machine to identify cars. You would tell it that a car has to have wheels and that it should therefore select things with wheels as cars. If you were to stop there, the machine would confuse lorries and bicycles with cars. So, maybe, you would have to tell it to take other features into consideration: for example, that there should be exactly four wheels. This would rule out bicycles but not vans. So you might go further and say that a car has four doors, two front seats and one long back seat that looks like a sofa. The machine would eventually be able to tell the difference between a van and a car - but what about sports cars with only two doors?
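To make this concrete, here is a minimal sketch in Python of such a hand-written rule set; the vehicle attributes and thresholds are invented for illustration, not taken from any real system:

```python
# Illustrative sketch: hand-written rules for telling cars apart from
# other vehicles, as described above. The attributes are made up.

def looks_like_a_car(vehicle):
    # Rule 1: it must have wheels - but so do lorries and bicycles.
    if vehicle["wheels"] == 0:
        return False
    # Rule 2: exactly four wheels - rules out bicycles, but not vans.
    if vehicle["wheels"] != 4:
        return False
    # Rule 3: four doors and a sofa-like back seat - rules out vans,
    # but now wrongly rejects two-door sports cars.
    return vehicle["doors"] == 4 and vehicle["has_back_seat"]

sports_car = {"wheels": 4, "doors": 2, "has_back_seat": True}
print(looks_like_a_car(sports_car))  # False - the rules are too rigid
```

Each new rule fixes one confusion and introduces another - exactly the trap described above.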

Stereotypes and biased algorithms

Tarek Amr (Image: DW)

The number of rules is always tricky: too many rules and the algorithm will suffer from high variance and fail to generalize; too few and it will suffer from bias. In practice, algorithms are not given rules but infer them from data. If they have enough data to learn from, they will learn enough rules not to be biased, while seeing enough variation to be able to generalize. However, it is always easier to build a biased algorithm, which does not need so much data to learn from - especially if you cannot give it enough examples. Aren't humans the same? Think of toddlers: you don't teach them what a car is; you show them cars on the street and soon they can easily tell them apart from bicycles and vans. What about somebody who has never seen a black, white or Asian person? If he or she suddenly sees one or several black, white or Asian people, he or she might be inclined to jump to conclusions about an entire ethnic group based on this small sample. It is easy to stereotype, just as it is easy to create a simple biased algorithm. Sometimes, however, bias can be useful: a caveman does not need to be able to tell a viper from a python or any other snake to run away, but a scientist needs to be able to tell the difference in order to conduct research on them or develop useful products from their skin or saliva, for instance.
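For readers who want to see this tradeoff in code, here is a small sketch using scikit-learn (my choice of library, not something the article prescribes) on synthetic, noisy data: a shallow decision tree stands in for "too few rules", an unlimited one for "too many":

```python
# Sketch of the bias/variance tradeoff using decision trees of
# different depths on noisy synthetic data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
X = rng.uniform(-1, 1, size=(400, 2))
# True concept: the label depends on a circle; 15% of labels are noise.
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(int)
flip = rng.rand(400) < 0.15
y[flip] = 1 - y[flip]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for depth in (1, None):  # 1 = too few rules, None = unlimited rules
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0)
    tree.fit(X_tr, y_tr)
    print(f"max_depth={depth}: train={tree.score(X_tr, y_tr):.2f}, "
          f"test={tree.score(X_te, y_te):.2f}")

# Expected pattern: depth 1 underfits (high bias: low scores on both
# sets); unlimited depth memorizes the noise (high variance: near-perfect
# training score, noticeably lower test score).
```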

Shaping opinions with algorithms

Whether we like it or not, and whether or not we are suspicious of them, algorithms and automation will shape our future. In fact, they are already shaping our present. They are used by states and companies every day. In some states, for example, it is very likely that border control services analyse flight history, personal details and even social media activity to decide whether to let travellers into a country or not. Could an algorithm infer - after seeing a few dozen travellers with the same first name and from the same country committing crimes - that all people with that first name from that country are possible criminals? Yes! However, should we blame the algorithm, the computer engineer who designed it or the policy maker who did not opt for a more accurate algorithm? If something goes wrong in a computer, it is easy to blame a bug in the software - until it becomes clear that the problem was actually an intended feature. Putting aside moral debates about how governments should plan for the future, from a pragmatic point of view the question is always one of possible risks versus possible benefits. Should we run away from snakes or learn how to make useful products from them?
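As a hedged illustration of how such an inference could arise, consider this toy sketch; the names, the country and the records are entirely invented:

```python
# Sketch: how a naive frequency rule, learned from a tiny biased
# sample, "concludes" that everyone sharing a trait is high-risk.
# All names and records here are invented for illustration.
from collections import Counter

# A few dozen flagged travellers happen to share a first name.
flagged = [("Alex", "Freedonia")] * 30 + [("Sam", "Freedonia")] * 2

counts = Counter(flagged)
total = len(flagged)

def risk_score(first_name, country):
    # Fraction of past flagged cases matching this profile. Without any
    # base rate for the millions of unflagged travellers with the same
    # name, the "rule" is spurious - it reflects the sample, not reality.
    return counts[(first_name, country)] / total

print(risk_score("Alex", "Freedonia"))  # ~0.94, from 30 records
```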

Newly introduced Facebook "Reactions" buttons (Image: picture-alliance/AP Photo)

It is easier for companies to make such decisions. Algorithms are used to maximize the shareholders' profits. If Facebook has to choose between showing you news that you care about and advertisements it can make money from, it will pick the balance that maximizes its profits: there will be enough of the former to keep you coming back and enough of the latter for it to make money. Facebook has all the knobs to control what you see and don't see on your timeline - all the knobs to influence your opinion. Moreover, smart advertisers can sneak onto your timeline by publishing paid content tailored especially to you. Similarly, YouTube can control which videos you are more likely to watch, while Spotify can influence the music you listen to. I cannot help asking myself, however, whether that was not also the case before with more traditional media. Media moguls and advertisers were able to shape our opinions, too. Perhaps we had more choice then, whereas now there is only one Facebook, one Google and one YouTube. Competition is almost non-existent on the web. In theory, anyone can create a social network or a search engine, but very few actually can. Even giants such as Microsoft cannot compete with Google when it comes to search, and Twitter and Snapchat are losing more ground to Facebook every day.
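A toy sketch of what such a balancing act might look like as code; the scoring formula, the weight and the field names are my assumptions for illustration, not Facebook's actual ranking:

```python
# Toy sketch of ranking a timeline to balance user interest against
# ad revenue. The weight and the item fields are invented.

def feed_score(item, ad_weight=0.3):
    # Engagement keeps you coming back; ad value makes money now.
    # The platform alone sets ad_weight - the "knob" described above.
    return ((1 - ad_weight) * item["predicted_engagement"]
            + ad_weight * item["ad_revenue"])

timeline = [
    {"id": "news_1", "predicted_engagement": 0.9, "ad_revenue": 0.0},
    {"id": "ad_1",   "predicted_engagement": 0.2, "ad_revenue": 0.8},
    {"id": "news_2", "predicted_engagement": 0.6, "ad_revenue": 0.0},
]
for item in sorted(timeline, key=feed_score, reverse=True):
    print(item["id"], round(feed_score(item), 2))
```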

App makers control the way we think

Authors and journalists used to coin words and influence our language, but they were never able to control how we used vocabulary. Google now offers suggestions as to how to respond to an email. You can of course ignore these and choose your own words, but sometimes you don't. More and more people are using emojis to express themselves on WhatsApp or iMessage. These are becoming a language of their own, with linguists studying them. App makers are in control of our emoji vocabulary, which shapes the way we think - as any language does. This was made clear by the debate regarding smileys' skin colors, for example, or by the fact that Facebook had a thumbs-up button for a long time but not a thumbs-down one. When Facebook introduced reactions, these were tailored to what it thought your reaction should be rather than to how you might actually have reacted.

I enjoy watching the TV series "Black Mirror" because it portrays how technology can affect our lives in a provocative way and depicts a scary future. In one episode, people have chips implanted in their heads to record everything others say and do. Yet the more I think about technology, the more I believe that the person who will cure cancer will not be a biologist or pharmacologist but a computer scientist. The tools that will eventually eliminate poverty will not be tractors and axes, but the sensors and computer chips used in precision farming systems. It has to be clear to everyone that we own our technology and that we shape it before it shapes us - through the policies we make, the monopolies we break and the knowledge we share and make available to everyone for free: free as in free speech, not free beer.

Tarek Amr works as a machine learning scientist at Catawiki in the Netherlands. Through the Open Knowledge Foundation, he promotes open data and helps journalists make use of data. He used to cover the social media sphere in the Middle East as part of the Global Voices team of volunteers.