Timnit Gebru’s exit from Google exposes a crisis in AI

This year has been one of many things, but bold claims of advances in artificial intelligence were among them. Industry commentators speculated that the GPT-3 language-generation model might have achieved “artificial general intelligence,” while others praised Alphabet subsidiary DeepMind’s protein-folding algorithm, AlphaFold, and its ability to “transform biology.” While the basis for these claims is thinner than the effusive headlines suggest, this hasn’t done much to dampen enthusiasm across an industry whose profits and prestige depend on the proliferation of AI.

It was in this context that Google fired Timnit Gebru, our dear friend and colleague, and a leader in the field of artificial intelligence. She is also one of the few Black women in AI research and an unflinching advocate for bringing more BIPOC, women, and non-Western people into the field. By any measure, she excelled at the work Google hired her to do, including demonstrating racial and gender disparities in facial-analysis technologies and developing reporting guidelines for data sets and AI models. Ironically, this, along with her vocal advocacy for those underrepresented in AI research, is also the reason, she says, that the company fired her. According to Gebru, after demanding that she and her colleagues withdraw a research paper critical of (profitable) large-scale artificial intelligence systems, Google Research told her team that it had accepted her resignation, even though she had not resigned. (Google declined to comment on this story.)

Google’s terrible treatment of Gebru reveals a dual crisis in AI research. The field is dominated by an elite, mostly white, male workforce, and it is controlled and funded primarily by large industry players: Microsoft, Facebook, Amazon, IBM, and yes, Google. With Gebru’s firing, the civility politics that held together the young effort to build the necessary guardrails around AI have been torn apart, bringing questions about the racial homogeneity of the AI workforce and the ineffectiveness of corporate diversity programs to the center of the discourse. But this situation has also made clear that, however sincere the promises of a company like Google may seem, corporate-funded research can never be divorced from the realities of power and the flows of revenue and capital.

That should worry us all. With AI proliferating into areas such as health care, criminal justice, and education, researchers and advocates are raising urgent concerns. These systems make determinations that directly shape lives, while being embedded in organizations structured to reinforce histories of racial discrimination. AI systems also concentrate power in the hands of those who design and use them, obscuring responsibility (and liability) behind the veneer of complex computation. The risks are profound, and the incentives are decidedly perverse.

The current crisis exposes the structural barriers that limit our ability to build effective protections around AI systems. This is especially important because the populations subject to harm and bias from AI’s predictions and determinations are primarily BIPOC people, women, religious and gender minorities, and the poor: those who have borne the brunt of structural discrimination. Here we have a clear racialized divide between the beneficiaries (corporations and mostly white male researchers and developers) and those most likely to be harmed.

Take, for example, facial recognition technologies, which have been shown to “recognize” people with darker skin less often than those with lighter skin. That alone is alarming. But these racialized “errors” are not the only problems with facial recognition. Tawana Petty, director of organizing at Data for Black Lives, notes that these systems are disproportionately deployed in predominantly Black neighborhoods and cities, while cities that have succeeded in banning and pushing back against the use of facial recognition are predominantly white.

Without independent, critical research that centers the perspectives and experiences of those who bear the harms of these technologies, our ability to understand and contest the industry’s overhyped claims is significantly hampered. Google’s treatment of Gebru makes it increasingly clear where the company’s priorities seem to lie when critical work pushes back against its business incentives. This makes it nearly impossible to ensure that AI systems are accountable to the people most vulnerable to their harms.

Checks on industry are further compromised by the close ties between technology companies and ostensibly independent academic institutions. Corporate and academic researchers publish papers together and rub elbows at the same conferences, and some researchers even hold concurrent positions at technology companies and universities. This blurs the line between academic and corporate research and obscures the incentives underwriting such work. It also means that the two groups look very similar: AI research in academia suffers from the same pernicious problems of racial and gender homogeneity as its corporate counterparts. Moreover, the top computer science departments accept large amounts of Big Tech research funding. We need only look to Big Tobacco and Big Oil for troubling templates that expose just how much influence large companies can exert over the public understanding of complex scientific issues when knowledge creation is left in their hands.
