“I started crying”: Inside Timnit Gebru’s last days at Google
December 16, 2020
Did you ever suspect, based on the previous events and tensions, that it would end in this way? And did you expect the community’s response?

I thought that they might make me miserable enough to leave, or something like that. I thought that they would be smarter than doing it in this exact way, because it’s a confluence of so many issues that they’re dealing with: research censorship, ethical AI, labor rights, DEI—all the things that they’ve come under fire for before. So I didn’t expect it to be in that way—like, cut off my corporate account completely. That’s so ruthless. That’s not what they do to people who’ve engaged in gross misconduct. They hand them $80 million, and they give them a nice little exit, or maybe they passive-aggressively don’t promote them, or whatever. They don’t do to the people who are actually creating a hostile workplace environment what they did to me.

I found out from my direct reports, you know? Which is so, so sad. They were just so traumatized. I think my team stayed up till like 4 or 5 a.m. together, trying to make sense of what happened. And going around Samy—it was just all so terrible and ruthless.

I thought that if I just…focused on my work, then at least I could get my work done. And now you’re coming for my work. So I literally started crying.

I expected some amount of support, but I definitely did not expect the amount of outpouring that there is. It’s been incredible to see. I’ve never, ever experienced something like this. I mean, random relatives are texting me, “I saw this on the news.” That’s definitely not something I expected. But people are taking so many risks right now. And that worries me, because I really want to make sure that they’re safe.

You’ve mentioned that this is not just about you; it’s not just about Google. It’s a confluence of so many different issues. What does this particular experience say about tech companies’ influence on AI in general, and their capacity to actually do meaningful work in AI ethics?

You know, there were a number of people comparing Big Tech and Big Tobacco, and how they were censoring research even though they knew the issues for a while. I push back on the academia-versus-tech dichotomy, because they both have the same sort of very racist and sexist paradigm. The paradigm that you learn and take to Google or wherever starts in academia. And people move. They go to industry and then they go back to academia, or vice versa. They’re all friends; they are all going to the same conferences.

I don’t think the lesson is that there should be no AI ethics research in tech companies, but I think the lesson is that a) there needs to be a lot more independent research. We need to have more choices than just DARPA [the Defense Advanced Research Projects Agency] versus corporations. And b) there needs to be oversight of tech companies, obviously. At this point I just don’t understand how we can continue to think that they’re gonna self-regulate on DEI or ethics or whatever it is. They haven’t been doing the right thing, and they’re not going to do the right thing.

I think academic institutions and conferences need to rethink their relationships with big corporations and the amount of money they’re taking from them. Some people were even wondering, for instance, if some of these conferences should have a “no censorship” code of conduct or something like that. So I think that there is a lot that these conferences and academic institutions can do. There’s too much of an imbalance of power right now.

What role do you think ethics researchers can play if they are at companies? Specifically, if your former team stays at Google, what kind of path do you see for them in terms of their ability to produce impactful and meaningful work?

I think there needs to be some sort of protection for people like that, or researchers like that. Right now, it’s obviously very difficult to imagine how anybody can do any real research within these corporations. But if you had labor protection, if you have whistleblower protection, if you have some more oversight, it might be easier for people to be protected while they’re doing this kind of work. It’s very dangerous if you have these kinds of researchers doing what my co-lead was calling “fig leaf”—cover-up—work. Like, we’re not changing anything, we’re just putting a fig leaf on the art. If you’re in an environment where the people who have power are not invested in changing anything for real, because they have no incentive whatsoever, obviously having these kinds of researchers embedded there is not going to help at all. But I think if we can create accountability and oversight mechanisms, protection mechanisms, I hope that we can allow researchers like this to continue to exist in corporations. But a lot needs to change for that to happen.
