(Reuters) – Alphabet Inc.’s Google will change procedures before July for reviewing its scientists’ work, according to a town hall recording heard by Reuters, part of an effort to quell internal turmoil over the integrity of its artificial intelligence (AI) research.
Speaking at a staff meeting last Friday, Google Research executives said they were working to regain trust after the company fired two prominent women and rejected their work, according to an hour-long recording, the contents of which were confirmed by two sources.
Teams are already testing a questionnaire that will assess projects for risk and help scientists navigate reviews, research unit chief operating officer Maggie Johnson told the meeting. This initial change will roll out by the end of the second quarter, and most papers will not require extra vetting, she said.
Reuters reported in December that Google had introduced a “sensitive topics” review for studies touching on dozens of subjects, such as China or bias in its services. Internal reviewers had demanded that at least three papers on AI be modified to refrain from casting Google technology in a negative light, Reuters reported.
Jeff Dean, the Google senior vice president who oversees the division, said Friday that the “sensitive topics” review is “confusing” and that he had tasked senior research director Zoubin Ghahramani with clarifying the rules, according to the recording.
Ghahramani, a University of Cambridge professor who joined Google in September from Uber Technologies Inc., said during the town hall, “We need to be comfortable with that discomfort” of self-critical research.
Google declined to comment on Friday’s meeting.
An internal email seen by Reuters offered new detail on Google researchers’ concerns, showing exactly how Google’s legal department had modified one of the three AI papers, “Extracting Training Data from Large Language Models.” (bit.ly/3dL0oQj)
The email, dated Feb. 8 and sent by a co-author of the paper, Nicholas Carlini, went to hundreds of colleagues, seeking to draw their attention to what he called the “deeply insidious” edits by the company’s lawyers.
“Let’s be clear here,” the roughly 1,200-word email said. “When we as academics write that we have a ‘concern’ or find something ‘worrying’ and a Google lawyer requires that we change it to sound nicer, this is very much Big Brother stepping in.”
Required edits, according to his email, included “negative-to-neutral” swaps, such as changing the word “concerns” to “considerations” and “dangers” to “risks.” Lawyers also required removing references to Google technology; the authors’ finding that AI leaked copyrighted content; and the words “breach” and “sensitive,” the email said.
Carlini did not respond to requests for comment. Google, in response to questions about the email, disputed its contention that lawyers were trying to control the paper’s tone. The company said it had no issues with the topics the paper investigated, but found some legal terms used inaccurately and conducted a thorough edit as a result.
RACIAL EQUITY AUDIT
Last week, Google also appointed Marian Croak, a pioneer in internet audio technology and one of Google’s few Black vice presidents, to consolidate and manage 10 teams studying issues such as racial bias in algorithms and technology for people with disabilities.
Croak told Friday’s meeting it would take time to address the concerns of AI ethics researchers and mitigate the damage to Google’s brand.
“Please hold me fully responsible for trying to turn this situation around,” she said on the recording.
Johnson added that the AI organization is bringing in a consulting firm for a wide-ranging racial-equity impact assessment. The first such audit for the department would lead to recommendations “that are going to be pretty hard,” she said.
Tensions in Dean’s division had intensified in December after Google sidelined Timnit Gebru, co-lead of its ethical AI research team, following her refusal to retract a paper on language-generating AI. Gebru, who is Black, accused the company at the time of reviewing her work differently because of her identity and of marginalizing employees from underrepresented backgrounds. About 2,700 employees signed an open letter in support of Gebru. (bit.ly/3us5kj3)
During the town hall, Dean detailed what kinds of scholarship the company would support.
“We want responsible AI and ethical AI research,” Dean said, giving the example of studying the technology’s environmental costs. But it is problematic, he said, to cite data that is off by nearly a factor of a hundred while ignoring more accurate statistics, as well as Google’s efforts to reduce emissions. Dean had previously criticized Gebru’s paper for not including important findings on environmental impact.
Gebru defended her paper’s citations. “It’s a really bad look for Google to come out this defensively against a paper that was cited by so many peer institutions,” she told Reuters.
Employees continued to post about their frustrations over the past month on Twitter as Google investigated and then fired Margaret Mitchell, a co-lead of the ethical AI team, for moving electronic files outside the company. Mitchell said on Twitter that she acted “to raise concerns about race and gender inequity, and to speak up about Dr. Gebru’s problematic firing from Google.”
Mitchell had collaborated on the paper that prompted Gebru’s departure; a version published online last month without Google affiliation listed “Shmargaret Shmitchell” as a co-author. (bit.ly/3kmXwKW)
Asked for comment, Mitchell expressed through a lawyer her disappointment at Dean’s criticism of the paper and said her name had been removed on a company order.
Reporting by Paresh Dave and Jeffrey Dastin; Editing by Jonathan Weber and Lisa Shumaker