Google caused a stir earlier this month when it fired Timnit Gebru, the co-director of a team of researchers at the company that studied the ethical implications of artificial intelligence. Google claims it accepted her “resignation,” but Gebru, who is black, says she was fired for drawing unwanted attention to the lack of diversity in Google’s workforce. She had also clashed with her supervisors over their demand that she retract a paper she had co-authored on ethical issues associated with certain types of AI models that are central to Google’s business.
In this week’s Trend Lines podcast, WPR’s Elliot Waldman was joined by Karen Hao, senior AI reporter for MIT Technology Review, to discuss Gebru’s ouster and its implications for the increasingly important field of AI ethics.
Listen to the full interview with Karen Hao on the Trend Lines podcast:
If you like what you hear, subscribe to Trend Lines:
The following is a partial transcript of the interview. It has been slightly edited for clarity.
World Politics Review: First of all, could you tell us a little bit about Gebru and the kind of stature she has in the field of artificial intelligence, given the pioneering research she has done, and how she ended up at Google?
Karen Hao: Timnit Gebru, one might say, is one of the foundational pillars of the field of AI ethics. She earned her Ph.D. in AI ethics at Stanford under the advisement of Fei-Fei Li, who is one of the pioneers of the entire field of AI. When Timnit completed her doctorate at Stanford, she joined Microsoft for a postdoctoral fellowship, before ending up at Google, which recruited her on the strength of the impressive work she had done. Google was starting its AI ethics team and thought she would be a great person to co-lead it. One of the studies she is best known for is one she co-authored with another black researcher, Joy Buolamwini, about the algorithmic discrimination that appears in commercial facial recognition systems.
The paper came out in 2018 and, at the time, its revelations were quite shocking, because it audited commercial facial recognition systems that were already being sold by the tech giants. The paper’s findings showed that these systems, which were sold on the premise that they were highly accurate, were actually extremely inaccurate, specifically on darker-skinned female faces. In the two years since the paper was released, a number of events have led these tech giants to withdraw or suspend the sale of their facial recognition products to police. The seed of those actions was actually planted by the paper that Timnit co-authored. So she is a really big presence in the field of AI ethics, and she has done a lot of groundbreaking work. She also co-founded a nonprofit organization called Black in AI that advocates for diversity in tech, and specifically in AI. She is a force of nature and a well-known name in the space.
We should be thinking about how to develop new artificial intelligence systems that do not rely on this brute-force method of scraping billions of sentences from the internet.
WPR: What exactly were the ethical issues that Gebru and her co-authors identified in the paper that led to her dismissal?
Hao: The paper dealt with the risks of large language models, which are essentially AI algorithms that are trained on an enormous amount of text. You can imagine that they are trained on all the articles published on the internet, all the subreddits and Reddit threads, and Twitter and Instagram captions. They try to learn how we construct sentences in English, and how they might then generate sentences in English themselves. One of the reasons Google is so interested in this technology is that it helps power its search engine. For Google to give you relevant results when you search for a query, it needs to be able to capture, or interpret, the context of what you are saying, so that if you type in three random words, it can piece together the intent of what you are looking for.
What Timnit and her co-authors point out in this paper is that this relatively recent line of research is beneficial, but it also has some pretty significant downsides that need to be discussed more. One is that these models consume a lot of electricity, because they run in very large data centers. And given that we are in a global climate crisis, the field should be thinking about the fact that, in doing this research, it could be exacerbating climate change, with downstream effects that disproportionately affect marginalized communities and developing countries. Another risk they point out is that these models are so large that they are very difficult to scrutinize, and they also capture large swaths of the internet that are very toxic.
As a result, they end up normalizing a lot of sexist, racist or abusive language that we do not want to perpetuate into the future. But because of the lack of scrutiny of these models, we are not able to fully dissect the kinds of things they are learning and then weed them out. Ultimately, the paper’s conclusion is that there are great benefits to these systems, but there are also great risks. And, as a field, we should spend more time thinking about how we can develop new language AI systems that do not rely so heavily on this brute-force method of simply training them on billions of sentences scraped from the internet.
WPR: And how did Gebru’s supervisors at Google react?
Hao: Interestingly, Timnit has said, and this has been corroborated by her former teammates, that the paper had been approved for submission to a conference. This is a very standard process for her team and for the broader Google AI research team. The purpose of doing this research is to contribute to academic discourse, and the best way to do that is to present it at an academic conference. They had prepared this paper with some external collaborators and submitted it to one of the major conferences on AI ethics for next year. It had been approved by her manager and others, but at the last minute, she received notice from superiors above her manager that she needed to retract the paper.
Very little was revealed to her as to why she needed to retract the paper. She then proceeded to ask a lot of questions about who was telling her to retract the paper, why she was being asked to retract it, and whether modifications could be made to make it more palatable for publication. She was stonewalled and received no further clarification, so she ended up sending an email before leaving for her Thanksgiving vacation saying she would not retract the paper unless certain conditions were met.
Silicon Valley has a conception of how the world works based on the disproportionate representation of a given subset of the world. That is, usually straight upper-class white men.
She asked who had given the feedback on the paper and what the feedback was. She also asked for meetings with more executives to explain what had happened, because the way her research had been treated was extremely disrespectful, and it was not the way researchers were traditionally treated at Google. She wanted an explanation of why they had done it. And if they did not meet those conditions, she would have a candid conversation with them about a last day at Google, so that she could create a transition plan, leave the company in good shape and publish the paper outside of Google’s context. She then went on vacation, and in the middle of it, one of her direct reports texted her to say they had received an email saying that Google had accepted her resignation.
WPR: As for the issues that Gebru and her co-authors raise in their paper, what does it mean for the field of AI ethics to have what appears to be this massive level of moral hazard, where the communities that are most at risk from the impacts Gebru and her co-authors identified — the environmental and other ramifications — are marginalized and often have no voice in the technology space, while the engineers who build these AI models are largely insulated from those risks?
Hao: I think this gets at the core of what has been an ongoing debate in this community for the past couple of years, which is that Silicon Valley has a conception of how the world works based on the disproportionate representation of a particular subset of the world. That is, usually straight, upper-class white men. The values they hold, drawn from their narrow cross-section of lived experience, have somehow become the values that everyone has to live by. But it doesn’t always work that way.
They do a cost-benefit analysis and decide that it is worth creating these very large language models, and worth spending all that money and electricity to reap the benefits of this kind of research. But that analysis is based on their values and lived experience, and it may not be the same cost-benefit analysis that someone in a developing country would do, where they would rather not have to deal with the repercussions of climate change later on. This was one of the reasons why Timnit was so firm about making sure there was more diversity at the decision-making table. If there were more people with different lived experiences who could analyze the impact of these technologies through their own lenses and bring their voices to the conversation, maybe we would have more technologies whose benefits are not skewed so heavily toward one group at the expense of others.
Editor’s note: The photo above is available under a CC BY 2.0 license.