
5 Ways Tech Is Racist and How To Solve That

I’m sure you’ve heard what’s happening in the US right now. As someone who believes in the power of equality and diversity in tech, I can’t stay silent. Racism has no place in tech – or anywhere for that matter.

If you’d like to learn more about what you can do and how to become proactively anti-racist, this issue of Women in Tech Weekly has useful and actionable resources. If you have any other resources you’d like to share on the topic, please add them in the comments!

Right now I want to talk about how technology discriminates against people of colour and what can be done about it. It’s our responsibility to ensure that tech solutions are inclusive and that they are solving problems for everyone, rather than creating more of them.


Let’s look at some examples of how modern-day technology promotes systemic racism, starting with a big one:

1. Big Data and Algorithms are Subjective

We tend to blindly trust data and algorithms, thinking that they must be objective. After all, they’re mathematical functions – how can those be subjective? They should treat everyone equally, right?

Cathy O’Neil, in her book Weapons of Math Destruction, demonstrates that this is not true. She argues that the algorithms that help governments and organizations make important decisions – like who gets a loan, who gets a job, what your insurance rate is, or how long someone goes to prison for – are flawed and create a negative feedback loop.

A lot of it stems from the data being skewed or subjective to begin with: historical data that itself reflects systemic racism. For example, Black communities historically lacked access to good education, therefore had fewer career opportunities and, statistically speaking, formed a smaller proportion of employees who performed well at companies. Algorithms that learn from company-wide performance data don’t necessarily take into account that the candidate pool was shaped by that history, and so they may discriminate against Black candidates.
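
To make that feedback loop concrete, here’s a tiny Python sketch. Everything in it is made up – the group names, the rates, the “model” – but it shows the mechanism: an algorithm that learns each group’s past selection rate and reuses it for new decisions simply reproduces the historic skew, generation after generation, without ever looking at the individual candidates.

```python
import random

random.seed(0)

# Assumed historic selection rates per group -- illustrative numbers only.
HISTORIC = {"group_a": 0.50, "group_b": 0.25}

def train(outcomes):
    """'Learn' a score per group: simply the observed past selection rate."""
    rates = {}
    for group in HISTORIC:
        past = [hired for g, hired in outcomes if g == group]
        rates[group] = sum(past) / len(past) if past else 0.0
    return rates

def decide(candidates, rates):
    """Select each candidate with probability equal to the learned group rate."""
    return [(g, random.random() < rates[g]) for g in candidates]

# Seed the loop with outcomes that already carry the historic skew.
outcomes = [(g, random.random() < rate)
            for g, rate in HISTORIC.items()
            for _ in range(1000)]

for generation in range(3):
    rates = train(outcomes)
    print("generation", generation, {g: round(r, 2) for g, r in rates.items()})
    # The model's own decisions become the next generation's "historical data".
    outcomes = decide(["group_a", "group_b"] * 500, rates)
```

Notice that the loop never corrects itself – the learned rates keep mirroring the biased history they started from.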


That’s a very general example – I’m going to dive deeper into the topic in a separate video.

Another variable that contributes to bias in algorithms is the fact that they’re built by people, and people are inherently biased. So some of them may unwittingly transfer their biases onto the algorithms they build. Again, this is a huge argument for bias training and for diversity in tech.

2. Artificial intelligence – facial recognition in particular 

I’m sure you’ve heard that AI has issues recognising faces of certain demographics. According to The New York Times, the National Institute of Standards and Technology has found that “AI systems falsely identified African-American and Asian faces 10 times to 100 times more than Caucasian faces.” The highest error rates came in identifying Native Americans. Facial recognition systems are also known to be sexist and ageist, recognising female faces and older people’s faces at much lower rates.

While facial recognition has scary implications for surveillance technology, it’s also used in a lot of the devices we rely on today and plan to rely on in the future – for example, Face ID on Apple devices or face unlock on Android. And what about self-driving cars? The Guardian dived into this question in a 2019 article, citing very scary evidence suggesting that “a white person was 10% more likely to be correctly identified as a pedestrian than a black person”.
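
Here’s a minimal sketch of the kind of disaggregated audit behind numbers like these. The records and group names below are invented – a real audit would use thousands of labelled face pairs per group – but the idea is the same: don’t report one overall accuracy figure, count false matches separately for each demographic group.

```python
from collections import defaultdict

# Hypothetical evaluation records: (group, same_person, system_said_match).
results = [
    ("group_a", False, False), ("group_a", False, False), ("group_a", False, True),
    ("group_b", False, True),  ("group_b", False, True),  ("group_b", False, False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [false matches, non-match trials]
for group, same_person, said_match in results:
    if not same_person:               # only pairs of different people can false-match
        counts[group][1] += 1
        counts[group][0] += said_match  # True counts as 1

for group, (false_matches, trials) in sorted(counts.items()):
    print(f"{group}: false-match rate {false_matches}/{trials} = {false_matches / trials:.2f}")
```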


3. Content echo chambers

Social media companies and other content platforms use machine learning systems that learn what you like over time, so that they can suggest content you’re most likely to enjoy. A lot of the time that’s content we already agree with, which creates echo chambers. Unless we’re proactive about it, we’re typically not exposed to other points of view or opinions. We don’t think outside the box or understand what people who are different from us may experience.

Not only does this make us think less and question our beliefs less, it’s also dangerous in that it widens the divide between people with different backgrounds, experiences, beliefs and so on.
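
Here’s a deliberately oversimplified Python sketch of that dynamic – the “viewpoints” and numbers are invented. A recommender that ranks purely by what you’ve clicked before locks onto a single viewpoint after just one interaction:

```python
from collections import Counter

# Hypothetical catalogue: each item is tagged with one viewpoint.
catalogue = ["view_a"] * 50 + ["view_b"] * 50

def recommend(history, pool, k=10):
    """Rank items by how often their viewpoint appears in the click history."""
    likes = Counter(history)
    return sorted(pool, key=lambda item: likes[item], reverse=True)[:k]

history = ["view_a"]              # a single initial click
for step in range(3):
    feed = recommend(history, catalogue)
    print(f"step {step}: feed = {dict(Counter(feed))}")
    history += feed               # the user consumes whatever they are shown
```

One click on "view_a" and the feed is 100% "view_a" from then on – the other half of the catalogue never surfaces.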

4. Redlining and unequal access to tech based on location

Redlining is a concept I’ve only just learned about: it’s the practice of refusing services to someone because of the area they live in. It started in the US as a form of housing discrimination – not allowing minorities and people of colour to live in certain neighbourhoods – and then extended to practices like denying loans or health insurance to people from minority neighbourhoods. Absolutely crazy, right?

Well, certain algorithms still use postcodes and historical data as variables in their decision making, and therefore discriminate against people on that basis. But redlining also continues in more direct ways. For example, some neighbourhoods don’t have good access to the internet, because internet service providers don’t deem them profitable and therefore don’t build out good coverage. Other examples are Amazon Prime not delivering to certain historically minority neighbourhoods, or Pokemon Go not giving players in certain locations the same game functionality.
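
Here’s a tiny hypothetical illustration of the postcode problem. The postcodes, default rates and scoring formula below are all invented, but they show the mechanism: a model that never sees race can still penalise people by proxy, through where they live.

```python
# Imagined historic default rates by postcode, shaped by decades of
# under-investment rather than by the applicants being scored today.
HISTORIC_DEFAULT_RATE = {"12345": 0.05, "67890": 0.20}

def loan_score(income, postcode):
    """Score = an income signal minus a penalty learned from postcode history."""
    return income / 1000 - 100 * HISTORIC_DEFAULT_RATE[postcode]

# Two applicants with identical incomes get different scores purely
# because of where they live.
print(loan_score(50_000, "12345"))   # 45.0
print(loan_score(50_000, "67890"))   # 30.0
```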


5. Search results that misrepresent information

I’m currently reading a book called “Algorithms of Oppression: How Search Engines Reinforce Racism” by Safiya Noble, which demonstrates how search engines curate information for us and, in doing so, misrepresent certain groups of people, concepts and knowledge in general.

For example, when she Googled “Black girls” back in 2009, the results on the front page were mostly pornographic. There were also a lot of search suggestions that promoted stereotypes and racism.

One can argue that it’s simply the result of popular searches, but does that make it right? Shouldn’t there be mechanisms in place preventing the algorithm from essentially promoting racist content and concepts? Can you imagine being a Black girl back in 2009, googling yourself and getting pornographic results? What kind of implications does that have for your self-identity?
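
One possible mechanism, sketched in hypothetical Python – the titles, engagement scores and policy flag are all invented: re-rank engagement-sorted results through a content-policy filter before showing them.

```python
results = [
    {"title": "harmful stereotype page", "engagement": 9.1, "flagged": True},
    {"title": "community resource",      "engagement": 4.2, "flagged": False},
    {"title": "news article",            "engagement": 3.7, "flagged": False},
]

def rank_by_engagement(items):
    """Naive ranking: most-clicked content first, whatever it is."""
    return sorted(items, key=lambda r: r["engagement"], reverse=True)

def rank_with_policy(items):
    """Drop results that violate content policy before ranking the rest."""
    return rank_by_engagement([r for r in items if not r["flagged"]])

print([r["title"] for r in rank_by_engagement(results)])  # harmful page ranks first
print([r["title"] for r in rank_with_policy(results)])    # policy filter applied
```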

I’m only starting to scratch the surface of racism in technology and of course, I don’t have all of the answers. However, I have a few suggestions on how we can all start solving these issues:

1. Learning about existing issues and biases in algorithms and data

2. Demanding transparency on those algorithms and questioning their objectiveness

3. Understanding our own biases – here’s a free test to do that

4. Calling ourselves and others out on our biases and inappropriate behaviour 

5. Making it our personal responsibility to improve diversity in tech

6. Holding tech companies accountable for their hiring practices and for making their products inclusive

7. Continuing our education on the subject, proactively avoiding echo chambers and educating others

Those are some starting points. I commit to making more content on this topic and diving deeper into the issues of racism in tech. It’s everyone’s responsibility to steer technology in the right direction, and as a content creator, it’s my duty to talk about how we can make it more inclusive.

So let’s do this!

I’m linking my research below – and here you can find resources that can help you get your education started as well.

Let me know if any of the things I talked about today surprised you, and what you think about the topic in general.

</Coding Blonde>

My research for this post:



