Google’s workplace culture is yet again embroiled in controversy.
AI ethics researcher Timnit Gebru — a well-respected pioneer in her field and one of the few Black women leaders in the industry — said earlier this week that Google fired her after blocking the publication of her research on bias in AI systems. Days before Gebru’s departure, she sent a scathing internal memo to her colleagues detailing how higher-ups at Google tried to quash her research. She also criticized her department for what she described as a continued lack of diversity among its staff.
In her widely read internal email, which was published by Platformer, Gebru accused the company of “silencing in the most fundamental way possible” and claimed that “your life gets worse when you start advocating for underrepresented people” at Google.
After Gebru’s departure, Google’s head of AI research Jeff Dean sent a note to Gebru’s department on Thursday morning saying that, after internal review, her research paper did not meet the company’s standards for publishing. According to Gebru, the company also told her that her critical note to her coworkers was “inconsistent with the expectations of a Google manager.”
A representative for Google declined to comment. Gebru did not respond to a request for comment.
Gebru’s allegation that she was pushed out of the powerful tech company under questionable circumstances is causing a stir in the tech and academic communities, with many prominent researchers, civil rights leaders, and Gebru’s Google AI colleagues speaking out publicly on Twitter in her defense. A petition supporting her has already gathered signatures from more than 740 Google employees and over 1,000 academics, nonprofit leaders, and industry peers. Her departure is significant because it touches on broader tensions around racial diversity in Silicon Valley, as well as whether academics have enough freedom to publish research, even controversial research, while working at major companies that control the development of powerful technologies and have their own corporate interests to protect.
People are still trying to unravel exactly what led to Gebru’s departure from Google.
What we know is that Gebru and several of her colleagues were planning to present a research paper at a forthcoming academic conference about unintended consequences in natural language processing systems, the software tools that computers use to understand and generate written and spoken language. Gebru and her colleagues’ research, according to the New York Times, “pinpointed flaws in a new breed of language technology, including a system built by Google that underpins the company’s search engine.” It also reportedly discussed the environmental consequences of the large-scale computing systems used to power natural language processing programs.
As part of Google’s process, Gebru submitted the paper to Google for internal review before it was published more broadly. Google determined that the paper was not up to its standards because it “ignored too much relevant research,” according to the memo Dean sent on Thursday.
Dean also said in his memo that Google rejected Gebru’s paper for publication because she submitted it only one day before the publication deadline, rather than the required two weeks in advance.
Gebru asked for further discussion with Google before retracting the paper, according to the Times. If Google couldn’t address her concerns, Gebru said she would resign from the company.
Google told Gebru it couldn’t meet her conditions and that it was accepting her resignation, effective immediately.
It’s standard for a company like Google to review its employees’ research before it is published externally. But former colleagues and outside industry researchers defending Gebru questioned whether Google was enforcing its rules more strictly than usual in this case.
“It just seems odd that someone who has had books written about her, who is quoted and cited on a daily basis, would be let go because a paper wasn’t reviewed properly,” said Rumman Chowdhury, a data scientist who is the former head of Responsible AI at Accenture Applied Intelligence and has now launched her own company called Parity. Chowdhury has no affiliation with Google.
The conflict over Gebru’s exit, which she describes as a firing and Google describes as a resignation, reflects a growing tension between researchers studying the ethics of AI and the major tech companies that employ them.
It’s also another example of deep, ongoing issues dividing parts of Google’s workforce. On Wednesday, the National Labor Relations Board (NLRB) issued a complaint that said Google had spied on its workers and likely violated labor laws when it fired two employee activists last year.
After several years of turmoil in Google’s workforce over issues ranging from Google’s controversial plans to work with the US military to sexual harassment of its employees, the past several months had been relatively quiet. The company’s biggest public pressure came instead from antitrust legal scrutiny and Republican lawmakers’ unproven accusations that Google’s products display an anti-conservative bias. But Gebru’s case and the recent NLRB complaint show the company is still fighting internal battles.
“What Timnit did was present some hard but important evaluations of how the company’s efforts are going with diversity and inclusion initiatives and how to course-correct on that,” said Laurence Berland, a former Google engineer who was fired after organizing his colleagues around worker issues and one of the employees contesting his dismissal with the NLRB. “It was passionate, but it wasn’t just non-constructive,” he said.
In the relatively new and developing field of ethical AI, Gebru is not only a foundational researcher but a role model to many young academics. She’s also a leader of key groups like Black in AI, which are fostering more diversity in the largely white, male-dominated field of AI in the US.
(While Google doesn’t break out its demographics specifically for its artificial intelligence research department, it does annually share its diversity numbers. Only 24.7 percent of its technical workforce are women, and 2.4 percent are Black, according to its 2020 Diversity & Inclusion report.)
“Timnit is a pioneer. She is one of the founders of responsible and ethical artificial intelligence,” said Chowdhury. “Computer scientists and engineers enter the field because of her.”
In 2018, Gebru and another researcher, Joy Buolamwini, published groundbreaking research showing facial recognition software identified darker-skinned people and women incorrectly at far higher rates than lighter-skinned people and men.
Her work has contributed to a broader reckoning in the tech industry about the unintended consequences of AI that is trained on data sets that can marginalize minorities and women, reinforcing existing societal inequalities.
Outside of Google, academics in the field of AI are concerned that Gebru’s firing could deter other researchers from publishing important work that steps on the toes of their employers.
“It’s not clear to researchers how they’re going to continue doing this work in the industry,” said UC Berkeley computer science professor Moritz Hardt, who specializes in machine learning and has studied fairness in AI. “It’s a chilling moment, I would say.”