The recent departure of a respected Google artificial intelligence researcher has raised questions about whether the company was trying to suppress ethical concerns about a key piece of AI technology.
The departure of researcher Timnit Gebru came after Google asked her to withdraw a research paper she had co-authored on the ethics of large language models. These models, built by ingesting huge corpora of text, help power search engines and digital assistants that can better understand and respond to users.
Google declined to comment on Gebru’s departure, but referred reporters to an email that Jeff Dean, the senior vice president in charge of Google’s AI research division, sent to staff and that was leaked to the technology newsletter Platformer. In the email, Dean said the paper in question, which Gebru co-authored with four other Google scientists and a University of Washington researcher, did not meet the company’s standards.
This position, however, was challenged by Gebru and members of the AI ethics team she previously co-led.
More than 5,300 people, including more than 2,200 Google employees, have signed an open letter protesting against Google’s treatment of Gebru and demanding that the company explain itself.
But why would Google have been particularly upset by Gebru and her co-authors questioning the ethics of large language models? Well, it turns out that Google has invested a lot in the success of this particular technology.
Under the hood of every large language model is a special type of neural network, AI software loosely modeled on the human brain, that researchers at Google unveiled in 2017. Called the Transformer, it has since been adopted industry-wide for a wide variety of language and vision tasks.
The statistical models that these large language algorithms build are enormous, encompassing hundreds of millions, if not hundreds of billions, of variables. Through this process they become very good at accurately predicting a missing word in a sentence. But it turns out that along the way they also pick up other skills, such as answering questions about a text, summarizing the key facts in a document, or determining which pronoun refers to which person in a passage. These things sound simple, but previous language software had to be trained specifically for each of these skills, and even then it was often not very good.
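For a concrete sense of what "predicting a missing word" looks like in practice, here is a minimal sketch using the open-source Hugging Face transformers library and the publicly released bert-base-uncased model. The library, the model choice, and the example sentence are illustrative assumptions, not anything specified in the research at issue:

```python
# A minimal sketch of masked-word prediction. The Hugging Face
# "transformers" library and the bert-base-uncased checkpoint are
# illustrative choices, not something specified by the article.
from transformers import pipeline

# The "fill-mask" pipeline asks the model to fill in the [MASK] token.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# The model ranks candidate words by probability.
for prediction in fill_mask("The capital of France is [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```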
The largest of these language models can do other impressive things as well: GPT-3, a large language model created by San Francisco-based AI company OpenAI, encompasses some 175 billion variables and can write long passages of coherent text from a single human prompt. Imagine writing just a title and the first sentence of a blog post; GPT-3 can compose the rest. OpenAI has licensed GPT-3 to a number of tech startups, plus Microsoft, to power their own services, including one company that uses the software to let users generate full emails from a few bullet points.
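GPT-3 itself is available only through OpenAI's commercial API, but the same prompt-completion idea can be sketched with its freely downloadable precursor, GPT-2. Again, the library and model are illustrative assumptions:

```python
# A rough sketch of prompt-driven text generation. GPT-3 is only
# reachable via OpenAI's paid API, so the freely available GPT-2
# (its precursor) stands in here for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Give the model an opening line; it continues the text one
# predicted token at a time.
prompt = "Large language models are transforming the tech industry because"
result = generator(prompt, max_new_tokens=60, num_return_sequences=1)
print(result[0]["generated_text"])
```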
Google has its own large language model, called BERT, which it has used to improve search results in several languages, including English. Other companies also use BERT to create their own language processing software.
BERT is optimized to run on Google’s own specialized computer processors, available exclusively to customers of its cloud computing service, so Google has a clear business incentive to encourage companies to use BERT. More generally, all cloud providers are happy with the current trend toward large language models, because any company that wants to train and run one of its own has to rent a lot of cloud computing time.
For example, a study last year estimated that training BERT on Google’s cloud costs around $7,000. Sam Altman, the CEO of OpenAI, meanwhile, has hinted that it cost several million dollars to train GPT-3.
And although the market for these large Transformer-based language models is relatively small at the moment, it is about to explode, according to Kjell Carlsson, an analyst at technology research firm Forrester. “Of all the recent developments in AI, these large Transformer networks are the most important to the future of AI right now,” he says.
One reason is that large language models make it much easier to create language processing tools, almost right out of the box, as the sketch below illustrates. “With just a little bit of fine-tuning, you can have custom chatbots for anything and everything,” Carlsson says. Beyond that, large pre-trained language models can help write software, summarize text, or generate frequently asked questions along with their answers, he says.
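As a taste of that "out of the box" quality, the snippet below summarizes a passage with a single call. The Hugging Face transformers library and its default summarization model are assumptions made for illustration:

```python
# Illustrative "out of the box" summarization with a pre-trained model.
# pipeline("summarization") downloads a default pre-trained checkpoint;
# no task-specific training is needed on the user's side.
from transformers import pipeline

summarizer = pipeline("summarization")

article = (
    "Large language models are trained on huge corpora of text. "
    "Along the way they learn to answer questions, summarize "
    "documents, and resolve pronouns, skills that older systems "
    "each required dedicated training to acquire."
)
print(summarizer(article, max_length=40, min_length=10)[0]["summary_text"])
```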
A widely cited 2017 report by market research firm Tractica predicted that natural language processing software of all kinds would represent a $22.3 billion annual market by 2025, and that analysis was done before the arrival of large language models such as BERT and GPT-3. It is this market opportunity, then, that Gebru’s research called into question.
So what, according to Gebru and her colleagues, is wrong with large language models? Well, a lot. For one thing, because they are trained on huge existing corpora of text, the systems tend to bake in a lot of existing human biases, particularly around gender and race. Additionally, the paper’s co-authors said, the models are so large and absorb so much data that they are extremely difficult to audit and test, so some of these biases may go undetected.
The paper also highlighted the negative environmental impact, in terms of carbon footprint, of training and running such large language models on power-hungry servers. It noted that training BERT, Google’s own language model, produced an estimated 1,438 pounds of carbon dioxide, about the amount emitted by a round-trip flight between New York and San Francisco.
The paper also argued that the money and effort being poured into ever-larger language models was diverting effort from building systems that could actually “understand” language and learn more efficiently, the way humans do.
Many of the criticisms of large language models expressed in the paper had already been made elsewhere. The Allen Institute for AI had published a paper examining the racist and biased language produced by GPT-2, the precursor to GPT-3.
In fact, OpenAI’s own paper on GPT-3, which won a “best paper” award at this year’s Neural Information Processing Systems Conference (NeurIPS), one of the most prestigious conferences in the field of AI research, contained a meaty section describing some of the same potential problems of bias and environmental harm that Gebru and her co-authors highlighted.
OpenAI, arguably, has as much, if not more, financial incentive to play down GPT-3’s flaws. After all, GPT-3 is literally OpenAI’s only commercial product at the moment, whereas Google was making hundreds of billions of dollars just fine before BERT came along.
But then again, OpenAI still operates more like a tech startup than the mega-corporation Google has become. It may simply be that large corporations are, by their very nature, allergic to paying people high salaries to publicly criticize their own technology and potentially jeopardize billion-dollar market opportunities.
More must-read tech coverage from Fortune:
- 2020 was a record year for European tech investment. Not even a pandemic could slow it down
- Holiday shipping deadlines for FedEx, UPS, and the postal service
- Quantum computing is entering a new dimension
- Battery startup backed by Bill Gates claims a major breakthrough
- Indiegogo’s founder launches Vincent, a site for discovering alternative investments