
Artificial intelligence needs to get real – and other takeaways from this year’s NeurIPS



This is the web version of Eye on AI, Fortune’s weekly newsletter covering artificial intelligence and business. To have it delivered to your inbox each week, register here.

Hello and welcome to the last “Eye on AI” of 2020! I spent the past week immersed in the Neural Information Processing Systems (NeurIPS) conference, the annual gathering of top AI researchers. It’s always a good place to take the pulse of the field. Held completely virtually this year due to COVID-19, it attracted more than 20,000 attendees. Here are some of the highlights.


Charles Isbell’s opening keynote was a tour de force that made great use of the prerecorded video format, including simple special-effects touches and cameos by many other prominent AI researchers. The Georgia Tech professor’s message: it is high time that AI research grew up and became more concerned with the real-world consequences of its work. Machine learning researchers should stop shirking their responsibilities by claiming that these considerations belong to other fields, such as data science, anthropology, or political science.

Isbell urged the field to take a systems approach: how a piece of technology will work in the world, who will use it, who might misuse it, and what could possibly go wrong are all questions that should be front and center when AI researchers sit down to create an algorithm. And to get answers, machine learning scientists need to collaborate much more with other stakeholders.

Many of the invited speakers echoed this theme: how to make sure that AI does good, or at least does no harm, in the real world.


Saiph Savage, director of the Human-Computer Interaction Lab at West Virginia University, spoke about her efforts to improve the prospects of AI’s “invisible workers”: the poorly paid contractors who are often used to label the data that AI software is trained on. Her approach helps these workers train one another, so that they acquire new skills and, by becoming more productive, might earn more from their work. She also spoke about efforts to use AI to find the best strategies for helping these workers organize or engage in other collective actions that could improve their economic prospects.


Marloes Maathuis, professor of theoretical and applied statistics at ETH Zurich, examined how directed acyclic graphs (DAGs) can be used to derive causal relationships from data. Understanding causation is essential for many real-world uses of AI, especially in contexts like medicine and finance. Yet one of the biggest problems with deep learning based on neural networks is that these systems are very good at discovering correlations but often useless at determining causes. One of Maathuis’s main points was that in order to establish causation, it is important to make explicit causal assumptions and then test them. And that means talking to domain experts who can at least venture educated guesses about the underlying dynamics. Too often, machine learning engineers don’t bother, relying on deep learning to surface correlations. That’s dangerous, Maathuis implied.
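For readers curious about the mechanics, here is a minimal, purely illustrative sketch (my own, not Maathuis’s method) of what “state a causal assumption, then test it” can look like in practice. We assume a tiny DAG, X -> Y -> Z, which implies that X and Z should be independent once Y is accounted for, and we check that implication on synthetic data with a crude partial-correlation test. All variable names and numbers below are invented for illustration.

```python
# Illustrative sketch only: encode a causal assumption as a DAG, then check one
# of its testable implications on data. The assumed DAG is X -> Y -> Z, which
# implies X is independent of Z given Y.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 5_000

# Synthetic data generated to be consistent with the assumed DAG X -> Y -> Z.
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)
z = -1.5 * y + rng.normal(size=n)

def partial_corr(a, b, given):
    """Correlation of a and b after regressing out 'given' (a crude independence check)."""
    def residual(v):
        coef = np.polyfit(given, v, deg=1)
        return v - np.polyval(coef, given)
    return stats.pearsonr(residual(a), residual(b))

r, p = partial_corr(x, z, given=y)
print(f"partial corr(X, Z | Y) = {r:.3f} (p = {p:.3f})")
# A small, non-significant partial correlation is consistent with the assumed DAG;
# a large one would mean revisiting the causal assumptions with domain experts.
```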


It was hard to ignore that this year’s conference was held against the backdrop of the ongoing controversy over Google’s treatment of Timnit Gebru, the highly respected AI ethics researcher and one of the very few Black women in the company’s research division, who left the company two weeks earlier (she says she was fired; the company continues to insist she resigned). Some NeurIPS participants expressed their support for Gebru in their talks and discussions. (Many others did so on Twitter. Gebru herself also appeared on a few panels that were part of a conference workshop on creating “resistance AI.”) Academics were particularly worried that Google had pushed Gebru to retract a research paper it didn’t like, noting that this raised troubling questions about corporate influence on AI research in general and on AI ethics research in particular. A paper presented at the “resistance AI” workshop explicitly compared Big Tech’s involvement in AI ethics to Big Tobacco’s funding of junk science on the health effects of smoking. Some researchers said they would stop reviewing conference papers from Google-affiliated researchers because they could no longer be sure the authors weren’t hopelessly conflicted.


Here are some other lines of research worth watching:

• A team from the semiconductor giant Nvidia introduced a new technique to reduce the amount of data needed to train a generative adversarial network (or GAN, the type of AI used to create deepfakes). Using the technique, which Nvidia calls adaptive discriminator augmentation (or ADA), the team was able to train a GAN to generate images in the style of artworks found at the Metropolitan Museum of Art using fewer than 1,500 training examples, which the company says is at least 10 to 20 times less data than would normally be required.
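Nvidia’s actual implementation is far more sophisticated, but the core idea of ADA, as the company describes it, is a feedback loop: randomly augment the images the discriminator sees with some probability p, watch a heuristic that signals when the discriminator is starting to overfit, and nudge p up or down accordingly. The toy sketch below (my own simplification, with invented constants and simulated discriminator outputs) shows only that control loop, not a working GAN.

```python
# Rough sketch of the control loop behind adaptive discriminator augmentation (ADA).
# Not Nvidia's implementation: the constants are invented and the discriminator
# outputs are simulated placeholders.
import numpy as np

rng = np.random.default_rng(0)
p = 0.0               # probability of augmenting a discriminator input
target = 0.6          # desired value of the overfitting heuristic
adjust_speed = 0.01   # how fast p moves per step

def overfitting_signal(d_outputs_on_real):
    # Heuristic: the fraction of real images the discriminator scores as real
    # (via the sign of its raw output) drifts toward 1.0 as it overfits.
    return np.mean(np.sign(d_outputs_on_real))

for step in range(1, 1001):
    # Placeholder for discriminator outputs on a batch of real images; in real
    # training these come from D(real_batch), where each image has been randomly
    # augmented with probability p.
    d_out = rng.normal(loc=1.5 - p, scale=1.0, size=64)

    r = overfitting_signal(d_out)
    # Raise p when the discriminator looks overfit, lower it otherwise.
    p = float(np.clip(p + adjust_speed * np.sign(r - target), 0.0, 1.0))

    if step % 250 == 0:
        print(f"step {step}: overfitting signal {r:+.2f}, augmentation p = {p:.2f}")
```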

• OpenAI, the San Francisco AI research lab, won one of the conference’s best paper awards for its work on GPT-3, the ultra-large language model that can generate long passages of novel, coherent text from a short human-written prompt. The paper focused on GPT-3’s ability to perform many other language tasks, such as answering questions about a text or translating between languages, with no additional training or with just a few examples to learn from. GPT-3 is massive, encompassing some 175 billion parameters, and it was trained on many terabytes of text data. It is interesting to see the OpenAI team acknowledge in the paper that the approach is probably nearing the limits of what scaling alone can achieve, and that further progress will require new methods. It is also noteworthy that OpenAI flags many of the same ethical issues with big language models like GPT-3, such as the way they absorb racial and gender bias from training data and their huge carbon footprint, that Gebru was trying to highlight in the paper Google tried to force her to retract.
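To make the “few examples to learn from” idea concrete: GPT-3 is not retrained for each new task; instead, a handful of worked examples are simply written into the prompt and the model continues the pattern. The sketch below assembles such a prompt for an invented translation task; the `call_language_model` function at the end is a hypothetical placeholder, not an actual OpenAI interface.

```python
# Minimal sketch of "few-shot" prompting in the style the GPT-3 paper describes:
# the model gets no gradient updates, only a handful of worked examples in its prompt.
# The task and examples are invented; `call_language_model` is hypothetical.

examples = [
    ("cheese", "fromage"),
    ("apple", "pomme"),
    ("book", "livre"),
]
query = "house"

prompt = "Translate English to French.\n\n"
for english, french in examples:
    prompt += f"English: {english}\nFrench: {french}\n\n"
prompt += f"English: {query}\nFrench:"

print(prompt)

# In practice the prompt would be sent to the model, e.g.:
# completion = call_language_model(prompt, max_tokens=5)   # hypothetical placeholder
```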

• The other two best paper winners are also worth noting: researchers from the Politecnico di Milano in Italy and Carnegie Mellon University used concepts from game theory to create an algorithm that acts as an automated mediator in an economic system with multiple self-interested agents, suggesting to each the actions to take to steer the whole system toward the best overall equilibrium. The researchers suggested such a system would be useful for managing workers in the gig economy.

• A team from the University of California at Berkeley won an award for research showing that it is possible, through careful selection of representative samples, to summarize most real-world datasets. The finding contradicts previous research, which had essentially argued that because a few datasets can be shown to have no representative sample, summarization itself was a dead end. Automated summarization of text and other data is becoming a hot topic in business analytics, so the research could end up having commercial impact.

I’ll highlight a few other things that I found interesting in the Research and Brain Food sections below. And for those who responded to Jeff’s post about AI in the movies last week, thank you; we’ll share some of your thoughts below. Since “Eye on AI” will be on hiatus for the next few weeks, I’d like to wish you happy holidays and best wishes for a happy new year. We’ll be back in 2021. Now here’s the rest of this week’s AI news.

Jeremy Kahn
@Jeremyakahn
jeremy.kahn@fortune.com


