At the end of June, the MIT Technology Review reported on how some of the biggest job search sites in the world, including LinkedIn, Monster, and ZipRecruiter, have tried to eliminate bias in their artificial intelligence job interview software. These remedies came after incidents in which AI video interview software turned out to discriminate against people with disabilities that affect facial expression and to show prejudice against candidates identified as women.
When artificial intelligence software produces differential and unequal results for marginalized groups based on criteria such as race, gender, and socioeconomic status, Silicon Valley is quick to admit mistakes, apply technical fixes, and apologize for the disparate outcomes. We saw this when Twitter apologized after its image-cropping algorithm automatically focused on white faces rather than Black ones, and when TikTok expressed regret for a technical issue that suppressed the Black Lives Matter hashtag. These companies claim that such incidents are unintended moments of unconscious bias or bad training data seeping into an algorithm – that bias is a bug, not a feature.
But the fact that these incidents continue to occur across products and businesses suggests that discrimination against marginalized groups is actually at the heart of how the technology works. It is time we saw the development of discriminatory technology products as an intentional act by the predominantly white and male executives of Silicon Valley to defend systems of racism, misogyny, ableism, classism, and other axes of oppression that prioritize their interests and create extraordinary profits for their businesses. And while these technologies are made to appear benign and harmless, they are instead emblematic of what Ruha Benjamin, professor of African-American studies at Princeton University and author of Race After Technology, terms “the New Jim Code”: new technologies that reproduce existing inequalities while appearing more progressive than the discriminatory systems of a previous era.
Tech companies have financial and social incentives to create discriminatory products. Take, for example, Amazon Rekognition, a facial recognition product created and sold by the e-commerce giant. Amazon very publicly declared a moratorium on police use of its facial recognition technology in June 2020, after the protests that followed the murder of George Floyd. But before that, the company developed and sold the product despite mountains of evidence showing that police use of facial recognition amplifies prejudice against Black people. Amazon did so to profit from a criminal justice system that disproportionately targets Black people for surveillance, arrest, and imprisonment, and it stopped only when protests against anti-Black racism drew attention to the company’s practices. Moreover, the development and sale of this technology helps maintain an anti-Black social hierarchy that allows Jeff Bezos and the white men in Amazon’s top-paid jobs to retain their privilege in our society.
We should view algorithmic bias as a ripple effect of a tech culture that has perpetuated racial and gender inequality in hiring and leadership and that has actively discouraged employees from engaging in political discussions at work. Although the 2020 protests sparked more explicit conversations about race and identity at tech companies, the culture of avoiding political discussions persists. This was evident when Basecamp CEO Jason Fried posted a memo in April banning employee discussions of social and political issues on Basecamp company accounts. Offering his reasoning, Fried wrote that today’s social and political waters are especially turbulent, and that “you shouldn’t have to wonder if staying out of it means you’re complicit, or wading into it means you’re a target.” The memo sparked an online backlash and led to a cascade of employee resignations.
What Fried said out loud has long been implicit in tech companies across the country: Discussions of charged issues like racism, transphobia, misogyny, and ableism are uncomfortable for those with privilege, and tech companies would prefer to avoid them. And although these companies have in recent years attempted to hold more explicit discussions about race, gender, and prejudice in the workplace, the belief that social issues like race are irrelevant to technological development still permeates the corporate culture of Silicon Valley.
By normalizing the avoidance of explicit conversations about social and political issues, tech companies maintain a culture that favors the perspectives of people with dominant identities, from hiring to product development. And the truth is that social and political matters are always being discussed. It’s just that people with racial, gender, and other forms of power see their own identities and perspectives as the default, and therefore not as part of the social and political landscape.
This is especially true for white people, a group whose racial identity has long been treated as invisible in the United States. In research I conducted last summer, I found that statements released by tech companies amid the racial justice protests rarely mentioned whiteness or white people. The choice to exclude white people from these statements – while hyper-focusing on Black people and other people of color – normalizes the idea that white people are raceless and absolves them of their role in maintaining the racial hierarchy. In doing so, white people retain their power within tech companies and avoid the feelings of fear, discomfort, and anger that can accompany discussions of racial inequality – a phenomenon sometimes known as white fragility.
It’s time we dismissed the narrative that Big Tech is selling: that incidents of algorithmic bias are the result of unintentionally biased training data or unconscious bias. Instead, we should view these businesses the same way we view the education and criminal justice systems: as institutions that maintain and reinforce structural inequalities, regardless of the good intentions or behaviors of the individuals within them. No longer viewing algorithmic bias as accidental allows us to hold accountable the coders, engineers, executives, and CEOs who produce technological systems that are less likely to refer Black patients for care, that disproportionately harm people with disabilities, and that discriminate against women in the labor market. When we view algorithmic bias as part of a larger structure, we can imagine new remedies for the harms caused by the algorithms tech companies create, apply social pressure to force individuals within these institutions to behave differently, and build a new future in which technology is not inevitable but is instead fair and responsive to our social realities.