Earlier this year, Google artificial intelligence researcher Timnit Gebru sent a direct message on Twitter to University of Washington professor Emily Bender. Gebru asked Bender if she had written about the ethical questions raised by recent advances in AI that processes text. Bender had not, but the two fell into a conversation about the limits of such technology, such as evidence that it can reproduce biased language found online.
Bender found the DM discussion lively and suggested turning it into an academic paper. “I was hoping to bring about the next turn in the conversation,” Bender says. “We’ve seen all this excitement and success; let’s take a step back and see what the possible risks are and what we can do.” A draft was completed in a month with five additional coauthors from Google and academia and submitted in October to an academic conference. It would soon become one of the most notorious research papers in AI.
Last week Gebru said she was fired by Google after objecting to a manager’s request to retract the paper or remove her name from it. Google’s head of artificial intelligence said the work “didn’t meet our bar for publication.” Since then, more than 2,200 Google employees have signed a letter demanding more transparency in the company’s handling of the paper. On Saturday, Gebru’s manager, Google AI researcher Samy Bengio, wrote on Facebook that he was “stunned,” declaring “I stand by you, Timnit.” AI researchers outside Google have publicly lambasted the company’s treatment of Gebru.
The fury gave the paper that catalyzed Gebru’s sudden exit an aura of unusual power. It has circulated in AI circles like samizdat. But the most remarkable thing about the 12-page document, seen by WIRED, is how uncontroversial it is. The paper does not attack Google or its technology, and it seems unlikely to have damaged the company’s reputation had Gebru been allowed to publish it with her Google affiliation.
The paper reviews previous research on the limitations of AI systems that analyze and generate language. It does not present new experiments. The authors cite prior studies showing that language AI can consume large amounts of electricity and echo the unsavory biases found in online text. And they suggest ways AI researchers can be more careful with the technology, including better documenting the data used to create such systems.
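The paper’s recommendations are procedural, but “documenting the data” has a concrete shape in practice. Below is a minimal, hypothetical sketch, loosely modeled on the datasheets-for-datasets idea Gebru has championed in earlier work; the field names and values are illustrative, not taken from the paper.

```python
# Hypothetical machine-readable datasheet for a text corpus used to train
# a language model. Field names are illustrative, not a published standard.
from dataclasses import dataclass, asdict
import json

@dataclass
class Datasheet:
    name: str
    motivation: str            # why the dataset was created
    sources: list[str]         # where the text came from
    collection_window: str     # when the text was gathered
    known_biases: str          # documented skews users should expect
    filtering: str             # what was removed, and how

sheet = Datasheet(
    name="example-web-corpus-v1",
    motivation="Pretraining corpus for a research language model.",
    sources=["Common Crawl subset", "public-domain books"],
    collection_window="2019-01 to 2019-12",
    known_biases="Overrepresents English and US-centric viewpoints.",
    filtering="Deduplicated; documents flagged by a blocklist removed.",
)

# Shipping a record like this alongside the dataset lets downstream
# users audit what a model was trained on.
print(json.dumps(asdict(sheet), indent=2))
```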
Google’s contributions in this area, some now deployed in its search engine, are referenced but not singled out for specific criticism. One of the studies cited, showing evidence of bias in language AI, was published by Google researchers earlier this year.
“This article is very solid and well-documented work,” says Julien Cornebise, an honorary associate professor at University College London who saw a draft of the document. “It’s hard to see what could set off an uproar in a lab, let alone lead to someone losing their job over it.”
Google’s reaction could be evidence that company leaders feel more vulnerable to ethical criticism than Gebru and others thought, or that her departure involved more than just the paper. The company did not respond to a request for comment. In a blog post Monday, members of Google’s AI ethics research team suggested that managers had turned Google’s internal research review process against Gebru. Gebru said last week that she may have been pushed out for criticizing Google’s diversity programs and suggesting in a recent group email that colleagues stop participating in them.
The controversial draft paper is titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” (It includes a parrot emoji after the question mark.) It turns a critical eye on one of the most active strands of AI research.
Tech companies like Google have invested heavily in AI since the early 2010s, when researchers discovered they could make speech and image recognition much more accurate using a technique called machine learning. These algorithms refine their performance at a task, such as transcribing speech, by digesting example data annotated with labels. An approach called deep learning produced striking new results by coupling learning algorithms with much larger collections of example data and more powerful computers.
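To make that idea concrete, here is a minimal sketch of the learn-from-labeled-examples loop, assuming nothing beyond NumPy: a toy classifier adjusts its parameters as it digests labeled samples, the same basic mechanism that, at vastly larger scale, underlies speech and image recognition. The data and model are illustrative, far simpler than any production system.

```python
import numpy as np

# Toy labeled dataset: each row of X is a 2-feature sample,
# and y holds its annotated label (the "answer key").
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # label 1 if the features sum above zero

# A logistic-regression model: two weights and a bias,
# refined step by step via gradient descent.
w = np.zeros(2)
b = 0.0
lr = 0.1

for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probability of label 1
    grad_w = X.T @ (p - y) / len(y)         # gradient of the log-loss w.r.t. weights
    grad_b = np.mean(p - y)
    w -= lr * grad_w                        # each update nudges predictions toward the labels
    b -= lr * grad_b

print(f"training accuracy: {np.mean((p > 0.5) == y):.2f}")
```

Deep learning replaces the two hand-set parameters here with millions or billions of them, learned the same way from far larger piles of labeled data on far more powerful hardware.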