
This researcher claims that AI is neither artificial nor intelligent


Tech companies like to portray artificial intelligence as a precise and powerful tool for good. Kate Crawford says that mythology is flawed. In her book Atlas of AI, she visits a lithium mine, an Amazon warehouse, and a 19th-century archive of phrenological skulls to illustrate the natural resources, human sweat, and bad science behind some versions of the technology. Crawford, a professor at the University of Southern California and a researcher at Microsoft, says many AI applications and side effects are in urgent need of regulation.

Crawford recently discussed these issues with WIRED Senior Editor Tom Simonite. An edited transcript follows.

WIRED: Not many people understand all the technical details of artificial intelligence. You argue that some tech experts misunderstand AI more deeply.

KATE CRAWFORD: It’s billed as that ethereal, objective way of making decisions, something we can incorporate into everything from teaching kids to deciding who gets bail. But the name is misleading: AI is neither artificial nor intelligent.

AI is made from vast amounts of natural resources, fuel, and human labor. And it is not intelligent in any kind of human-intelligence sense. It is not able to discern things without extensive human training, and it has a completely different statistical logic for how meaning is made. Since the very beginning of AI in 1956, we have made this terrible mistake, a sort of original sin of the field, of believing that minds are like computers and vice versa. We assume these things are analogous to human intelligence, and nothing could be further from the truth.

You take on that myth by showing how AI is built. Like many industrial processes, it turns out to be complicated. Some machine learning systems are built with hastily collected data, which can cause problems like facial recognition services that are more error-prone for minorities.

We need to look at the end-to-end production of artificial intelligence. The seeds of the data problem were planted in the 1980s, when it became common to use datasets without close knowledge of what was inside, or concern for privacy. It was just “raw” material, reused in thousands of projects.

That has evolved into an ideology of mass data extraction, but data is not an inert substance – it always carries a context and a politics. Sentences from Reddit will be different from those in children’s books. Images from mugshot databases have different histories than images from the Oscars, but they are all used the same way. This causes a host of downstream problems. In 2021, there is still no industry-wide standard for noting what kinds of data are held in training sets, how they were acquired, or what ethical issues they may raise.

You trace the roots of emotion recognition software to questionable science funded by the Department of Defense in the 1960s. A recent review of more than 1,000 research papers found no evidence that a person’s emotions can be reliably inferred from their face.

Emotion recognition represents the fantasy that technology will finally answer questions we ask ourselves about human nature that are not technical questions at all. The idea, heavily contested within psychology, made the leap into machine learning because it is a simple theory that fits the tools. Recording people’s faces and correlating them to simple, predefined emotional states works with machine learning – if you drop culture and context, and the fact that the way you look and feel can change hundreds of times a day.

It also becomes a feedback loop: because we have emotion detection tools, people say we want to apply them in schools and courtrooms, and to catch potential shoplifters. Recently, companies have been using the pandemic as a pretext to deploy emotion recognition on children in schools. This takes us back to the phrenological past, the belief that character and personality can be detected from the face and the shape of the skull.


You have contributed to the recent growth of research into how AI can have unwanted effects. But that field is entangled with people and funding from the tech industry, which seeks to capitalize on AI. Google recently forced out two respected AI ethics researchers, Timnit Gebru and Margaret Mitchell. Does industry involvement limit research that questions AI?
