Have you ever suspected, based on previous events and tensions, that it would end this way? And did you expect the response from the community?
I thought they might make me miserable enough to leave, or something. I thought they'd be smarter than doing it this exact way, because it's the confluence of so many issues they face: research censorship, ethical AI, labor rights, DEI – all the things they've been under fire for before. So I didn't expect it to happen that way – like, cutting off my corporate account altogether. It's so ruthless. This is not what they do to people who have committed serious misconduct. They give them $80 million, they give them a nice little exit, or maybe they passive-aggressively don't promote them, or whatever. They don't do to people who create a hostile work environment what they've done to me.
I found out through my direct reports, you know? Which is so sad. They were so traumatized. I think my team stayed up until 4 or 5 in the morning together, trying to figure out what happened. And going over Samy's head – it was so terrible and ruthless.
I thought, if I just focus on my job, then at least I can do my job. And now you come for my job. So I literally started to cry.
I expected some support, but I certainly didn't expect the outpouring there has been. It was incredible to see. I've never, ever experienced something like this. I mean, random relatives text me, "I saw this on the news." It is definitely not something I expected. But people are taking so many risks right now. And that worries me, because I really want to make sure they're safe.
You mentioned that it wasn’t just you; it’s not just about Google. It’s a confluence of so many different issues. What does this particular experience say about the influence of tech companies on AI in general and their ability to do meaningful work in AI ethics?
You know, there were a number of people comparing Big Tech to Big Tobacco, and how they censored research even though they had known about the issues for some time. I push back on the academia-versus-tech dichotomy, because they both operate within the same kind of very racist and sexist paradigm. The paradigm that you learn and bring to Google, or wherever, begins in academia. And people move around. They go into industry and then back to academia, or vice versa. They're all friends; they all go to the same conferences.
I don't think the lesson is that there shouldn't be AI ethics research inside tech companies. I think the lessons are that, a) there needs to be a lot more independent research. We need more options than just DARPA [the Defense Advanced Research Projects Agency] versus corporations. And b) obviously, you have to watch the tech companies. At this point I just don't understand how we can keep believing that they're going to self-regulate on DEI or ethics or whatever. They haven't been doing the right thing, and they are not going to do the right thing.
I think academic institutions and conferences need to rethink their relationship with big business and the amount of money they take from them. Some people have even wondered, for example, whether some of these conferences should have a "no censorship" code of conduct or something like that. So I think there is a lot these conferences and academic institutions can do. There is too much of a power imbalance right now.
What role do you think ethics researchers can play if they work in companies? Specifically, if your old team is staying at Google, what kind of path do you see for them in terms of being able to produce impactful and meaningful work?
I think there needs to be some sort of protection for researchers like that. Right now, it's obviously very difficult to imagine how anyone could do any real research within these companies. But if you have job protections, if you have whistleblower protections, if you have more oversight, it might be easier for people to be protected while they do that kind of work. It's very dangerous if you have these kinds of researchers doing what my co-lead called "fig leaf" work. Like, we don't change anything; we just put a fig leaf on the problem. If you're in an environment where the people in power aren't invested in changing anything for real, because they have no incentive to, then obviously having these kinds of researchers isn't going to help at all. But I think if we can create accountability and oversight mechanisms, protective mechanisms, hopefully we can allow researchers like this to continue to exist inside companies. A lot has to change for that to happen, though.