Yesterday at Google’s I/O developer conference, the company outlined its ambitious plans for a future built on large language models. Google CEO Sundar Pichai said these systems will let users find information and organize their lives through natural conversation with computers: just speak, and the machine answers.
For many in the AI community, however, there was a notable absence from this conversation: Google’s response to its own researchers who investigated the risks of exactly these systems.
In December 2020 and February 2021, Google fired first Timnit Gebru and then Margaret Mitchell, co-leads of its Ethical AI team. The story of their departures is complex, but it was sparked by a paper the pair co-authored (with researchers outside Google) examining the risks of the very language models Google now presents as central to its future. As the paper and other critiques note, these AI systems are prone to a number of flaws, including generating abusive and racist language, encoding racial and gender biases, and a general inability to distinguish fact from fiction. For many in the AI world, Google’s firing of Gebru and Mitchell amounted to censorship of their work.
So for some viewers, as Pichai explained that Google’s AI models are always designed with “fairness, accuracy, safety, and privacy” in mind, the discrepancy between the company’s words and its actions called into question its ability to handle this technology responsibly.
“Google just introduced LaMDA, a new large language model, at I/O,” tweeted Meredith Whittaker, an AI fairness researcher and co-founder of the AI Now Institute. “This is an indicator of its strategic importance to the company. Teams spend months preparing these announcements. Tl;dr this plan was in place when Google fired Timnit + tried to stifle her + the research criticizing this approach.”
Google just introduced LaMDA, a new large language model, at I/O. This is an indicator of its strategic importance to the company. Teams spend months preparing these announcements. Tl;dr this plan was in place when Google fired Timnit + tried to stifle her + the research criticizing this approach. https://t.co/6VObPJ1ebo
— Meredith Whittaker (@mer__edith) May 18, 2021
Gebru herself tweeted, “This is what is called ethics washing,” referring to the tech industry’s tendency to publicize ethical principles while sidelining findings that would undermine a company’s ability to generate revenue.
Speaking to The Verge, Professor Emily Bender of the University of Washington, who co-authored the paper with Gebru and Mitchell, said Google’s presentation did nothing to allay her concerns about the company’s ability to make such technology safe.
“Given the history, I’m not confident that Google is actually being careful about the dangers raised in the paper,” Bender said of a blog post discussing LaMDA. “For one thing, they fired two of the paper’s authors, nominally over the paper. If the issues we raised were ones they were facing head-on, then they deliberately deprived themselves of expertise highly relevant to that work.”
In its blog post about LaMDA, Google does highlight several of these issues and stresses that its work needs more development. “Language might be one of humanity’s greatest tools, but like all tools it can be misused,” write senior research director Zoubin Ghahramani and product management VP Eli Collins. “Models trained on language can propagate that misuse, for instance by internalizing biases, mirroring hateful speech, or replicating misleading information.”
But Bender says the company is obscuring the problems and needs to be clearer about how it is addressing them. For example, she notes that Google refers to vetting the language used to train models like LaMDA, but gives no detail about what this process looks like. “I’d very much like to know about the vetting process (or lack thereof),” says Bender.
Notably, Google did address its AI ethics division after the presentation, in a CNET interview with Jeff Dean, the head of Google AI. Dean acknowledged that Google had indeed taken a “reputational hit” from the firings, as The Verge previously reported, but said the company had to “move past” these events. “We’re not averse to criticism of our products,” Dean told CNET, “as long as it comes through a lens of facts and appropriate treatment of the broad set of work we’re doing in this space, and we also intend to address some of these problems.”
But for the company’s critics, the conversation needs to be far more open than that.