An Ethical Google, and Other Fairy Tales

08 Feb 2022

Last Wednesday, researchers Alex Hanna and Dylan Baker quit Google’s Ethical AI team and wrote letters about how Google maintains white supremacy and inequality within its workforce. Both are joining the Distributed Artificial Intelligence Research Institute, or DAIR, which Timnit Gebru formed after being fired in late 2020. Today, we’re reprinting both of their letters in full.

A beautiful shrub busting through a whitewashed wall / Source

On Racialized Tech Organizations and Complaint: A Goodbye to Google

By Alex Hanna

Today (Wednesday, February 2, 2022) is my last day at Google. It’s been a year and two months since my former manager Timnit Gebru was fired, and nearly a year since my next manager Meg Mitchell was given the same treatment. I’m following Timnit and joining her at the Distributed AI Research Institute as Director of Research, effective tomorrow.

In resignation letters, this is where you write how much you appreciated the people you worked with. And I’m definitely going to do the same. But this is in spite of the culture of Google, rather than because of it. The Ethical AI team created by Meg Mitchell and Timnit Gebru was one of the most inclusive teams I’ve ever worked on or had the fortune of witnessing firsthand. Of the many teams I’ve experienced in tech and academia, this one’s members have shown each other the most mutual respect, care, admiration, and appreciation. Even though it was unstated, Google’s Ethical AI team has exemplified, and continues to exemplify, a deep ethic — learned and emerging from a Black feminist tradition — of growth, nurturing, and wanting to see each other succeed. For that, I want to give our erstwhile co-leads the deepest appreciation. I’m going to deeply miss all of my teammates.

But Google’s toxic problems are no mystery to anyone who’s been there for more than a few months, or who has been following the tech news with a critical eye. Many folks — especially Black women like April Curley and Timnit — have made clear just how deep the rot is in the institution. I am quitting because I’m tired. I could spend time rehashing the litany of ill treatment of prior organizers by Google management, or how the heads of diversity and inclusion are implicated in the company’s union-busting, which we know thanks to the case brought by the whistleblowers illegally fired for organizing against ICE, CBP, and homophobia on YouTube. I could describe, at length, my own experiences of being in rooms where higher-level managers yelled defensively at my colleagues and me when we pointed out the very direct harm their products were causing to a marginalized population. I could rehash how Google management promotes, at lightning speed, people who have little interest in mitigating the worst harms of sociotechnical systems, while passing over people who put their careers on the line to prevent those harms.

I could do that. But I’ve also learned, thanks to my doctoral training in sociology, that one must expand one’s personal problems into the structural, to recognize what’s rotten at the local level as an instantiation of the institutional. Our best public sociologists, like Tressie McMillan Cottom and Jess Calarco, do this exceptionally well.

I could also provide quantitative evidence of the rot. Like how, prior to Timnit’s hiring, Google Research management had never recruited a Black woman as a research scientist. Or how, in one town hall around Googlegeist (Google’s annual workplace climate survey), a high-level executive remarked that there were so few Black women in the Google Research organization that management couldn’t even present a point estimate of these employees’ dissatisfaction with the organization, lest they risk deanonymizing the results. These data points are sad. They are also unsurprising, given the first-person experiences, discrimination lawsuits, and labor complaints lodged across the tech industry by Black people, Indigenous people, Dalits, people with disabilities, and queer and trans people.

Instead, I’d rather start working out the ways in which Google, like so many other tech organizations, maintains white supremacy behind the veneer of race-neutrality, both in the workplace and in their products. I also want to think through the methods tech workers can use to challenge and expose their employers’ ongoing investment in white supremacy. Much of the theoretical substance here is informed by theories of the racialized organization, developed by sociologists Melissa Wooten, Lucius Couloute, and Victor Ray.

In a word, tech has a whiteness problem. Google is not just a tech organization. Google is a white tech organization. Meta is a white tech organization. So are Amazon, Apple, Microsoft, and the others mentioned in the same breath when we discuss the “techlash”. But so are research centers like OpenAI, backed by oodles of venture capital from Peter Thiel and Sam Altman, and the Allen Institute for AI, founded by Microsoft co-founder Paul Allen. More specifically, tech organizations are committed to defending whiteness through the “interrelated practices, processes, actions and meanings” by which the organization reproduces itself. In this case, that means defending their policies of recruitment, hierarchization, and monetization. Sociologist Amber Hamilton discusses how corporate actors, tech organizations included, rarely named the symptoms of whiteness — that is, their own racist organizational practices — in their responses to the racial reckoning of 2020, one of the largest social movements of our lifetimes.

My methodological approach comes from thinking along with Sara Ahmed’s work on complaint. By “complaint”, I mean the grievances we lodge within our workplaces, which can look like formal complaints made to human resources (excuse me, I mean “people operations”) as well as informal complaints shared among our peers, comrades, and friends. Anyone who has engaged in the process of a formal complaint can tell you how exhausting it is to register one, how management and decision-makers can stall, and how much trauma one has to relive to do so.

Ahmed teaches us how much we can learn from complaints. “The path of a complaint… teaches us something about how institutions work,” what she calls institutional mechanics. Complaints are often tethered to the individual racist or misogynist, a replication of what Alan Freeman calls the perpetrator perspective implicit in US anti-discrimination law. But, more importantly, complaints can tell us much more about organizational practices, and how those practices reinforce white supremacy. Because of the time we commit to the complaint, we intimately learn the rules that have gone unwritten and the standards that are applied to the letter for marginalized people, but only loosely for white (and upper class/caste Asian) men. Complainers learn the racialized nature of institutions much better than the bureaucrats themselves.

Examples of white supremacy within tech organizations abound. Consider the racialized, sexist practices in the performance review process, which don’t account for the huge amount of care work that has fallen on women and femmes during the pandemic, or the pushback on candidates of color from hiring committees that are nebulous and unaccountable to lower-level managers. In Timnit’s case, she was fired when Jeff Dean, the head of Google Research, claimed that her paper, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜”, did not go through the proper publication approval process. Dean’s claim that the process is one in which “we engage a wide range of researchers, social scientists, ethicists, policy & privacy advisors, and human rights specialists from across Research and Google overall” was laughable; as one of only a handful of social scientists on staff, I recognized it as patently false. Google management remained silent when an article on the Google Walkout page pointed out many counterexamples, like how nearly half of the papers in the system were approved within a day or less of the deadline.

Fortunately, complaint can also operate in a positive mode, namely as a strategy of coalition-building and solidarity. It can be a way of acting as a “feminist ear”, as Ahmed calls it: “To become a feminist ear is to indicate you are willing to receive complaints.” It also speaks to the effectiveness of telling stories about tech institutions: as a diagnostic, as an analgesic, and as an organizing device. Complaint can work as a type of praxis, a Marxian workers’ inquiry of sorts, as Tamara Kneese has argued.

So in this sign-off, I encourage social scientists, tech critics, and advocates to look at the tech company as a racialized organization. Naming the whiteness of organizational practices helps explain why tech companies are terrible places to work for people of color, and it also enables an analysis of how pernicious incentives let these companies justify their actions and reconstitute themselves within surveillance-capitalist and carceral infrastructures. For tech workers: continue to complain, be a feminist ear for others, and develop institutional analyses of your own (and for gods’ sakes, download Signal).


On Leaving Google

By Dylan Baker

At the end of February I will be leaving the Ethical AI team and joining Timnit Gebru at the Distributed AI Research Institute.

I’m one of many people leaving Google with ethical concerns in mind; there’s little I can say about that that hasn’t been said. I’m standing among people who’ve put their careers on the line to speak up, who’ve anatomized Google’s harms and injustices more thoroughly and articulately than I could.

So, I’m writing this to share a few experiences and observations that feel important to me. I first came to Google through the Engineering Residency Program. When I joined in 2017, the program assembled extremely diverse cohorts of new graduates in computer science and related fields, and gave us a lower-paying, fixed-term version of a new-grad engineering job. While this program has been more or less discontinued (after ceaseless organizing on the part of many current and former Residents!), finding ways to codify the undervaluation of marginalized professionals through “opportunities” has certainly not ended, at Google or elsewhere in tech.

This was the start of a lot of cognitive dissonance.

As a Resident, I was cocooned by the office perks that make full-time Googlers feel so valuable — and was reminded that this was all temporary unless I proved myself worthy within a year. As a full-time engineer in Google Research, I heard managers and leaders speak earnestly about the critical importance of a diverse workforce, about the role our work could play in addressing hardships and injustices, about how fortunate we were to be able to operate so freely — and yet the hiring demographics looked the same to me year after year. Most of the work I saw rewarded was, at most, superficially engaged with material ethical concerns (a veritable ocean of disability dongles!). And all those “Googley” perks drew a firm, uncomfortable line between us as full-time workers and the temps, vendors, and contract workers who worked alongside us.

At first, I coped with this cognitive dissonance the way a lot of people do, by giving Google as much benefit of the doubt as I could at the time. These Are Challenging And Complex Issues, after all. At least leadership Sees Us And Hears Our Concerns. And I could brush off the paternalism — being an early career engineer from a marginalized background, I was often patronized, anyhow.

It also helped to find other optimistic, ideologically-motivated colleagues. The way they saw their own roles gave me a sense of purpose, too — Google has a massive impact on the world, and that impact is driven by us. By Googlers.

But “us” was never meant to include over half of Google’s own workers. It was never meant to include April Curley or Shannon Wait, nor workers trying to curtail Google’s military involvement, nor Drs. Gebru or Mitchell.

Maybe at the company’s founding, Google really was a place where the impact was driven by the employees. But when the employees were a small clique of largely white Stanford graduates in a skyrocketing industry, with a firehose of capital and limited legal oversight, there was no reason for it not to be.

Now, Google leadership has made it clear that there is simply no reason to let employees impact the direction of the company if that direction deviates from ravenous, short-sighted consumption and growth at any cost. They’ll continue to isolate, silence, and divide workers who speak up about critical issues in the future of technology; they’ll continue to consolidate power and evade responsibility. Responding to petitions, transparency in town halls, “don’t be evil” — they’re simply not perks worth offering anymore.

When I joined Google, I was cautiously optimistic about the promise of making the world better with technology. I’m a lot less techno-solutionist now. I understand in vivid detail how far Google leadership will go to feel like they’re protecting their precious bottom line. I feel viscerally how easy it is to become jaded to the point of exhaustion.

At the same time, I can’t speak highly enough of how insightful, brilliant, and unwaveringly collaborative my colleagues have been. I have never felt so valued, trusted, and encouraged in a work environment as I have on the Ethical AI team. Drs. Gebru and Mitchell laid incredible groundwork, and I owe them — and the entire team — an enormous debt of gratitude.

And organizing with my fellow Engineering Residents and the Alphabet Workers’ Union was a deeply grounding, heartening experience. I’ve found solidarity to be not only indispensable in effecting change but also personally restorative. Being in community with other people, taking care of each other, taking action together — it motivates me in a way no free company-provided 1-on-1 counseling or motivational wellness talk ever could.

Meredith Whittaker put it really well in a recent interview in Logic Magazine:

We’re trying to figure out how we, as people within these environments, protect ourselves and each other. In my view, the answer to this question doesn’t start with building a better HR, or hiring a diversity consultant. It’s rooted in solidarity, mutual care, and in a willingness to understand ourselves as committed to our own and others’ wellbeing over our commitments to institutional standing or professional identity.

So, even after four years at Google, I remain cautiously optimistic.

I believe technology can be good. I want efficiency to mean everyone can work less; I want it to mean justice and accessibility and less suffering and more leisure and joy. I want us to be able to spend more time planting seeds and less time putting out fires.

I have enormous faith in Dr. Gebru and the DAIR team to work towards just that. I’m tremendously excited to start my next chapter there.


Thank you to TWC for the opportunity to reprint our stories with allies in and around tech, and thanks to Emily, Emma, Remi, Ellen, and of course Timnit and Meg for their editing help. Learn more about our work at DAIR and follow us at @dairinstitute.