April 20, 2024

cedric-lachat


Column: Twitter’s and Zoom’s algorithms illustrate why diversity in tech matters

In the tech industry, the use of machine learning and data-driven algorithms is growing quickly. These tools have countless applications, from predicting which Netflix shows you'd enjoy to powering self-driving cars. However, such algorithms can also encode bias and raise ethical concerns, many of which aren't addressed in the courses required by degree programs in the field.

Recently, Twitter came under fire for the neural network it uses to crop photo previews on user timelines. Users found that the previews favored the faces of white individuals over those of people of color. Researchers explained that the bias stemmed largely from the fact that the network was never trained to detect faces.

While the company said it had found no evidence of racial or gender bias in its internal testing, Twitter's communications team pledged to open-source its work for further review. Zoom, the video call platform that surged in popularity with the shift to online courses and meetings, had a similar problem: it failed to recognize the faces of Black users when they enabled a virtual background.

Although these issues may seem trivial, they reflect a larger pattern of racism in tools released by the industry's biggest names. A Google Photos algorithm mistakenly labeled Black people as gorillas, and Microsoft's Tay chatbot turned racist and profane because of the data it absorbed from users online.
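To make the underlying mechanism concrete, here is a minimal sketch, using entirely hypothetical toy data and scikit-learn's LogisticRegression, of how a model trained on a dataset where one group is heavily underrepresented can end up performing far worse for that group. The `make_faces` helper and its parameters are illustrative assumptions, not any company's actual system.

```python
# Minimal sketch (hypothetical data): an imbalanced training set can yield
# a model that is much less accurate for the underrepresented group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_faces(n, shift):
    """Generate toy 'image features' for one demographic group, centered at `shift`."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    # Binary label determined by the features plus a little noise.
    y = (X.sum(axis=1) + rng.normal(scale=0.5, size=n) > shift * 5).astype(int)
    return X, y

# Group A dominates the training data; group B is underrepresented.
Xa, ya = make_faces(5000, shift=0.0)
Xb, yb = make_faces(200, shift=2.0)
X_train = np.vstack([Xa, Xb])
y_train = np.concatenate([ya, yb])

model = LogisticRegression().fit(X_train, y_train)

# Evaluate on balanced, held-out samples from each group.
Xa_test, ya_test = make_faces(1000, shift=0.0)
Xb_test, yb_test = make_faces(1000, shift=2.0)
print("accuracy, group A:", model.score(Xa_test, ya_test))
print("accuracy, group B:", model.score(Xb_test, yb_test))
```

Because the model's decision boundary is fit almost entirely to group A's examples, it scores well on group A but close to chance on group B, even though nothing in the code is explicitly "biased"; the skew comes from the data alone.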

These problems can be traced in part to the racial disparity in the industry's workforce. In 2014, Apple, Facebook, Google and Microsoft released their first diversity reports and pledged to diversify their workforces. Since then, each of these companies has made immense advances in technology, but far less progress in whom it hires.
