
Stop talking about AI ethics. It’s time to talk about power.


Over her 20-year career, Kate Crawford has studied the real-world consequences of large-scale data systems, machine learning, and artificial intelligence. In 2017, with Meredith Whittaker, she cofounded the AI Now Institute, the first organization dedicated to studying the social implications of these technologies. She is now a professor at USC Annenberg in Los Angeles, the first visiting chair in AI and justice at the École normale supérieure in Paris, and a senior researcher at Microsoft Research.

Five years ago, Crawford says, she was still working to introduce the simple idea that data and AI are not neutral. Since then the conversation has evolved, and AI ethics has blossomed into its own field. She hopes her book will help it mature even further.

I sat down with Crawford to talk about her book, Atlas of AI.

The following has been edited for length and clarity.

Why did you decide to take on this book project, and what does it mean to you?

Crawford: So many of the books that have been written about artificial intelligence really only talk about very narrow technical achievements. And some write about the great men of AI, but that’s really all we’ve had for grappling with what artificial intelligence actually is.

I think that has produced this very skewed understanding of artificial intelligence as purely technical systems that are somehow objective and neutral, and, as Stuart Russell and Peter Norvig say in their textbook, as intelligent agents that make the best decision among all possible actions.

I wanted to do something very different: to really understand how artificial intelligence, in the broadest sense, is made. That means examining the natural resources that drive it, the energy it consumes, the hidden labor all along the supply chain, and the vast amounts of data extracted from every platform and device we use every day.

In doing so, I really wanted to open up this understanding of AI as neither artificial nor intelligent. It’s the opposite of artificial. It comes from the most material parts of the Earth’s crust, from the labor of human bodies, and from all the artifacts that we produce and say and photograph every day. Nor is it intelligent. I think there’s this great original sin in the field, where people assumed that computers are somehow like human brains, and that if we just train them like children, they will slowly grow into these supernatural beings.

That is something I find really problematic: we’ve bought this idea of intelligence when in fact we’re just looking at large-scale forms of statistical analysis that are as problematic as the data they are fed.

Was it obvious to you from the start that this is how people should think about AI? Or was it a journey?

It really was a journey. One of the turning points for me came back in 2016, when I started a project called “Anatomy of an AI System” with Vladan Joler. We had met at a conference specifically about voice-enabled AI, and we were trying to effectively map out what it takes to make an Amazon Echo work. What are the components? How does it extract data? What are the layers of the data pipeline?

We realized, well, to actually understand that, you have to understand where the components come from. Where were the chips produced? Where are the mines? Where are the metals smelted? Where are the logistics and supply chain routes?

And finally, how do we trace the end of life of these devices? How do we look at where the e-waste dumps are located, in places like Malaysia, Ghana, and Pakistan? What we ended up with was this painstaking two-year research project to really trace those material supply chains from cradle to grave.

When you start looking at AI systems at that larger scale, and over that longer time horizon, you shift away from those very narrow accounts of “AI fairness” and “ethics” to saying: these are systems that produce profound and lasting geomorphic changes to our planet, as well as amplifying the forms of labor inequality that we already have in the world.

So that made me realize I needed to go from scanning a single device, the Amazon Echo, to applying this kind of analysis to the entire industry. That, for me, was the big task, and it’s why Atlas of AI took five years to write. There’s a real need to see what these systems are actually costing us, because we so rarely do the work of understanding their true planetary implications.

The other thing I would say has been really inspiring is the growing field of scholars who are asking these bigger questions about labor, data, and inequality. Here I’m thinking of Ruha Benjamin, Safiya Noble, Mar Hicks, Julie Cohen, Meredith Broussard, Simone Browne; the list goes on. I see this book as a contribution to that body of knowledge, bringing in perspectives that connect the environment, labor rights, and data protection.

You travel a lot over the course of the book. Almost every chapter opens with you describing your surroundings. Why was that important to you?

It was a very deliberate choice to ground the analysis of AI in specific places, to move away from those abstract “nowheres” of algorithmic space where so much of the debate around machine learning takes place. And I hope it highlights the fact that when we don’t do that, when we just talk about those “nowhere spaces” of algorithmic objectivity, that is also a political choice, and it has ramifications.

As for the sequence of places, that’s really why I started thinking about the metaphor of an atlas, because atlases are unusual books. They’re books you can open up and look at the scale of an entire continent, or you can zoom in and look at a mountain range or a city. They give you those shifts in perspective and in scale.

There’s this lovely line from the physicist Ursula Franklin that I use in the book. She writes about how maps join together the known and the unknown in these methods of collective insight. So for me, it was about drawing on the knowledge we already have, but also about pointing to the actual locations where AI is being built, very literally, from rocks and sand and oil.

What kind of feedback has the book received?

One of the things that has surprised me about the early responses is that people really feel like this kind of perspective was overdue. There’s a moment of recognition that we need a different kind of conversation than the ones we’ve been having over the past few years.

We’ve spent far too much time focusing on narrow technical fixes for AI systems, always reaching for technical answers. Now we have to contend with the environmental footprint of these systems. We have to contend with the very real forms of labor exploitation that have gone into building them.

And we’re also starting to see the toxic legacy of what happens when you scrape as much data as you can off the internet and simply call it ground truth. That kind of problematic framing of the world has produced so many harms, and as always, those harms have been felt most of all by communities that were already marginalized and were not seeing the benefits of these systems.

What do you hope people will start to do differently?

I hope it’s going to be a lot harder to have these dead-end conversations in which terms like “ethics” and “AI for good” have been so completely denatured of any actual meaning. I hope the book lifts the curtain and says: let’s actually look at who is pulling the levers of these systems. That means shifting from focusing just on things like ethical principles to talking about power.

How do we move away from this ethics framing?

If there’s been a real trap in the tech sector over the past decade, it’s that the theory of change has always centered engineering. It was always, “If there’s a problem, there’s a technical fix for it.” And only recently have we started to see that broaden out to, “Oh, well, if there’s a problem, then regulation can fix it. Policymakers have a role to play.”

But I think we need to broaden that out even further. We also have to ask: where are the civil society groups, where are the activists, where are the advocates who are addressing issues of climate justice, labor rights, and data protection? How do we include them in these discussions? How do we include the communities that are affected?

In other words, how do we make this a far deeper democratic conversation about how these systems are already influencing the lives of billions of people in fundamentally unaccountable ways, ways that sit outside regulation and democratic oversight?

In that sense, this book is trying to de-center the technology and start asking bigger questions: what kind of world do we want to live in?

What kind of world do you want to live in? What kind of future do you dream of?

I want to see the groups that have been doing the incredibly hard work on issues like climate justice and labor rights come together, and realize that these previously quite separate fronts for social change and racial justice really do have shared concerns and common ground on which to coordinate and organize.

Because we’re looking at a very short time horizon here. We’re contending with a planet that is already under serious strain. We’re looking at a profound concentration of power in extraordinarily few hands. You’d really have to go back to the early days of the railways to see another industry this concentrated, and now you could even say that tech has surpassed that.

So we have to contend with the ways in which we can pluralize our societies and build greater forms of democratic accountability. And that is a collective-action problem. It is not a matter of individual choice. It’s not about picking the most ethical tech brand on the market. It’s that we have to find ways of working together on these planetary-scale challenges.
