SAN FRANCISCO, Sept. 8 (Reuters) – In September last year, Google’s cloud unit (GOOGL.O) looked into using artificial intelligence to help a financial company decide whom to lend money to.
It rejected the client’s idea after weeks of internal debate, deeming the project too ethically fraught because the AI technology could perpetuate biases such as those related to race and gender.
Since early last year, Google has also blocked new AI features that analyze emotions, fearing cultural insensitivity, while Microsoft (MSFT.O) has restricted software that mimics voices and IBM (IBM.N) has rejected a client’s request for an advanced facial-recognition system.
All of these technologies were curbed by panels of executives or other leaders, according to interviews with the AI ethics chiefs of the three American technology giants.
Reported here for the first time, their vetoes and the deliberations behind them reflect a nascent industry-wide push to balance the pursuit of lucrative AI systems with greater consideration of social responsibility.
“There are opportunities and harms, and our job is to maximize opportunities and minimize harms,” said Tracy Pizzo Frey, who sits on two Google Cloud ethics committees as its managing director for responsible AI.
Judgments can be difficult.
Microsoft, for example, had to weigh the benefit of using its voice-mimicry technology to restore speech for people with impairments against risks such as enabling political deepfakes, said Natasha Crampton, the company’s head of responsible AI.
Rights activists say decisions with potentially far-reaching consequences for society should not be made internally alone. They argue that ethics committees cannot be truly independent and that their public transparency is limited by competitive pressures.
Jascha Galaski, advocacy officer at the Civil Liberties Union for Europe, believes external oversight is the way forward, and U.S. and European authorities are indeed drawing up rules for the nascent field.
“If companies’ AI ethics committees become really transparent and independent (and this is all very utopian), this could be even better than any other solution, but I don’t think it’s realistic,” Galaski said.
The companies said they would welcome clear regulation on the use of AI, calling it essential for the confidence of customers and the public alike, much like vehicle safety standards. They said acting responsibly was also in their financial interest.
However, they want any rules to be flexible enough to keep up with innovation and the new dilemmas it creates.
Among the complex considerations to come, IBM told Reuters that its AI ethics board has begun discussing how to police an emerging frontier: implants and wearables that wire computers to brains.
These neurotechnologies could help people with disabilities control their movements, but they raise concerns such as the prospect of hackers manipulating thoughts, said Christina Montgomery, IBM’s chief privacy officer.
AI CAN SEE YOUR SORROW
Technology companies acknowledge that just five years ago they were launching AI services, such as chatbots and photo tagging, with few ethical safeguards, tackling misuse or biased results with later updates.
But as political and public scrutiny over AI flaws grew, Microsoft in 2017 and Google and IBM in 2018 established ethics committees to review new services from the beginning.
Google said its money-lending dilemma arose last September, when a financial services company thought AI could assess people’s creditworthiness better than other methods.
The project appeared well-suited for Google Cloud, whose expertise in developing AI tools that help in areas such as detecting abnormal transactions has attracted clients such as Deutsche Bank (DBKGn.DE), HSBC (HSBA.L) and BNY Mellon (BK.N).
Google’s unit predicted that AI-based credit scoring could become a billion-dollar-a-year market and wanted a foothold in it.
However, its ethics committee of about 20 executives, social scientists and engineers who review prospective deals unanimously voted against the project at an October meeting, Pizzo Frey said.
The AI system would learn from past data and patterns, the committee concluded, and thus risked repeating discriminatory practices from around the world against people of color and other marginalized groups.
What’s more, the committee, known internally as “Lemonaid,” enacted a policy to skip all financial services deals related to creditworthiness until such concerns could be resolved.
Lemonaid had rejected three similar proposals over the prior year, including from a credit card company and a business lender, and Pizzo Frey and her counterpart in sales had pushed for a broader ruling on the issue.
Google also said its second cloud ethics committee, known as Iced Tea, this year reviewed a service released in 2015 for classifying photos of people by four expressions: joy, sorrow, anger and surprise.
The move followed a ruling last year by Google’s company-wide ethics panel, the Advanced Technology Review Council (ATRC), halting new services that read emotion.
The ATRC, comprising more than a dozen top executives and engineers, determined that inferring emotions could be insensitive because, among other reasons, facial cues are associated differently with feelings across cultures, said Jen Gennai, founder and lead of Google’s responsible innovation team.
Iced Tea has blocked 13 planned emotions for the Cloud tool, including embarrassment and contentment, and could soon drop the service altogether in favor of a new system that would describe movements such as frowning and smiling without seeking to interpret them, Gennai and Pizzo Frey said.
VOICES AND FACES
Microsoft, meanwhile, developed software that could reproduce someone’s voice from a short sample, but the company’s Sensitive Uses panel spent more than two years debating the ethics around its use and consulted company President Brad Smith, Crampton told Reuters.
She said the panel, specialists in fields such as human rights, data science and engineering, ultimately gave a green light for the full release of Custom Neural Voice in February this year. But it placed restrictions on its use, including verifying subjects’ consent and having a team with “Responsible AI Champs” trained on corporate policy approve purchases.
IBM’s AI ethics board, comprising about 20 department leaders, wrestled with its own dilemma when, early in the COVID-19 pandemic, it examined a client request to customize facial-recognition technology to spot fevers and face coverings.
Montgomery said the board, which she co-chairs, rejected the request, concluding that manual checks would suffice with less intrusion on privacy because no photos would be retained for any AI database.
Six months later, IBM announced it was discontinuing its facial-recognition service.
UNMET AMBITIONS
In a bid to protect privacy and other freedoms, lawmakers in the European Union and the United States are pursuing far-reaching controls on AI systems.
The EU’s Artificial Intelligence Act, on track to be passed next year, would bar real-time facial recognition in public spaces and require technology companies to vet high-risk applications, such as those used in hiring, credit scoring and law enforcement.
U.S. Congressman Bill Foster, who has held hearings on how algorithms carry discrimination forward into financial services and housing, said new laws to govern AI would ensure an even playing field for vendors.
“When you ask a company to take a hit on profits to accomplish societal goals, they say, ‘What about our shareholders and our competitors?’ That’s why you need sophisticated regulation,” the Illinois Democrat said.
“There may be areas so sensitive that you will see tech companies deliberately staying out until there are clear rules of the road.”
Indeed, some AI advances may simply be on hold until companies can counter the ethical risks without devoting enormous engineering resources.
After Google Cloud rejected the request for custom financial AI last October, the Lemonaid committee told the sales team that the unit aims to start developing credit-related applications someday.
First, research into combating unfair biases must catch up with Google Cloud’s ambitions to increase financial inclusion through the “highly sensitive” technology, it said in the policy circulated to staff.
“Until then, we are not in a position to deploy solutions.”
Reports by Paresh Dave and Jeffrey Dastin; Edited by Kenneth Li and Pravin Char
Our Standards: The Thomson Reuters Trust Principles.