If you read my last article, AI Abundance is a Lie, you might think there is little room for optimism about our future.
Concentrated wealth. Captured politics. AI systems built by billionaires who openly oppose empathy and sharing alike. Forty-five years of productivity gains that went to capital owners
while workers got nothing. The path to structural change blocked at every turn by people
who benefit from keeping things exactly as they are.
That was a lot.
But what I didn't say clearly enough is this: the system is powerful, but not inevitable. The
people running it are not invincible. And history — including our own very recent history — is
full of moments when one person showed something so clearly, so undeniably, that the conversation
never went back to where it was.
This is about three of those people.
People, not "heroes". Not geniuses with special access to power. A novelist who went undercover
in a slaughterhouse. A grad student who couldn't get a computer to see her face. A researcher
who got fired for writing the truth.
They didn't defeat the system. But they changed what the system had to answer for.
That is not nothing. That is, in fact, how change starts.
What "Changing the Narrative" Actually Means
Let's be precise about the argument here, because it's easy to slide into something I don't mean.
I'm not saying one passionate person can change the world through sheer force of will.
That's a motivational poster, not a theory of change. And it implicitly lets everyone
else off the hook — if you're not exceptional, why bother?
What I am saying is something more specific: structural change often starts with a moment
when a problem becomes visible to enough people that ignoring it is no longer possible.
Someone has to create that moment. That person is almost always an individual who decided
to expose a truth that others were either too comfortable, too cautious, or too powerful
to name themselves.
The lever that moves society isn't heroism. It's reframing.
Before Upton Sinclair, people vaguely knew that industrial food production was probably not
great. After Sinclair, they knew exactly what was in their sausage. The disgust was always
there, waiting. He just gave it a target.
Before Joy Buolamwini's research, the AI industry could claim its facial recognition systems
were highly accurate — and produce the numbers to prove it. After her work, a new question
replaced that one: accurate for whom? You can't unask that question. It changed
what "accuracy" means.
Before Timnit Gebru was fired by Google over research it didn't want published, most people outside the industry had only a vague sense that AI ethics might be a problem. Google's
attempt to bury her work turned it into a cause — and her into a symbol of exactly the
conflict she had been describing.
None of them changed things alone. Sinclair's book needed an outraged public to pressure
Congress. Buolamwini's research needed journalists, lawmakers, and the ACLU to amplify it.
Gebru's firing needed thousands of people to stand up and say this is not acceptable.
The individual act was the spark. The collective response was the fire.
This matters, because it means the story isn't just about exceptional people doing
exceptional things. It's about what becomes possible when someone creates the conditions
for collective action — and about the thousands of ordinary people who made that action real.
You might not be the person who testifies before Congress or goes undercover in a
meatpacking plant. But you might be one of the 4,300 researchers and others who signed
a letter in support of Timnit Gebru. Without them, her firing would have been a footnote
instead of a reckoning.
The three people in this article each created one of those moments — in different
industries, in different eras, in different ways. The details are worth knowing.
Upton Sinclair: Aiming at the Heart, Hitting the Stomach

In the fall of 1904, a 26-year-old socialist writer named Upton Sinclair boarded a
train to Chicago with a notebook, a dinner pail, and a mission. For seven weeks, he
worked undercover in the meatpacking plants of Packingtown — living among the
immigrant workers, witnessing the conditions firsthand, gathering material for what
he hoped would be his great American novel about class exploitation and the plight
of the working poor.
He got his novel. But he didn't get what he came for.
The Jungle was published in 1906 and became an immediate sensation — just not for the
reasons Sinclair intended. Readers didn't put it down outraged about the exploitation
of Lithuanian immigrants. They put it down unable to finish their breakfast. His
vivid descriptions of contaminated meat, of workers falling into rendering vats, of
rats and sawdust swept into sausage, ignited a public fury that swept through
Congress like a brushfire.
For decades, nearly 100 food and drug safety bills had been introduced in Congress.
Every single one had died — killed by food industry money and political indifference.
Within months of The Jungle's publication, President Theodore Roosevelt — who
privately thought Sinclair was a crackpot — signed both the Pure Food and Drug Act
and the Meat Inspection Act into law. Those two laws became the foundation of what
would eventually become the FDA.
Sinclair's famous verdict on his own work: "I aimed at the public's heart, and by
accident I hit it in the stomach."
He hadn't failed. He had discovered something important about how change actually
works: you don't always get to choose what resonates. What matters is naming a truth
so vividly, so viscerally, that people can no longer pretend they didn't know. The real obstacle was never a shortage of bills; nearly a hundred had already been written. What was missing was shared, undeniable, gut-level outrage.
Sinclair gave it a target.
Joy Buolamwini: Accurate for Whom?
In 2015, a graduate student at MIT was building an art installation — a kind of magic
mirror that would overlay an aspirational figure onto the viewer's reflection. She
was using off-the-shelf facial recognition software to track the viewer's face.
The software couldn't find hers.
Joy Buolamwini had to hold a white mask in front of her face to make it work.
She could have filed it away as a technical glitch and moved on. Instead, she started
asking questions. What she found became her MIT thesis, a landmark research paper,
and eventually a reckoning for the entire facial recognition industry.
Her project, Gender Shades, systematically tested commercial facial recognition
systems from IBM, Microsoft, and Face++. (A follow-up study included Amazon's Rekognition
system.) The industry was claiming accuracy rates of
around 97% — impressive numbers, until Buolamwini looked at who those numbers
reflected. For light-skinned men, error rates were under 1%. For dark-skinned women, misclassification rates reached as high as 47%. The systems weren't accurate.
They were accurate for some people — specifically, the people who looked most like
the overwhelmingly male, overwhelmingly light-skinned faces those systems had been
trained on.
That reframe — not "is it accurate?" but "accurate for whom?" — sounds simple. It
wasn't. It cut straight through years of industry self-congratulation and asked a
question the numbers couldn't dodge.
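If you want to see the mechanics of that question, here is a minimal sketch in Python (with entirely fabricated numbers, not Buolamwini's data) of what a disaggregated audit does: instead of one overall accuracy figure, it reports an error rate per subgroup.

```python
from collections import defaultdict

def disaggregated_error_rates(records):
    """Report error rates per subgroup alongside the single overall figure.

    `records` is a list of (subgroup, predicted_label, true_label) tuples.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for subgroup, predicted, actual in records:
        for key in (subgroup, "overall"):
            totals[key] += 1
            if predicted != actual:
                errors[key] += 1
    return {key: errors[key] / totals[key] for key in totals}

# Fabricated predictions: nearly perfect on one subgroup, poor on another.
# Because the first subgroup dominates the benchmark, the overall number
# still looks excellent.
records = (
    [("lighter_male", "male", "male")] * 99
    + [("lighter_male", "female", "male")] * 1
    + [("darker_female", "female", "female")] * 11
    + [("darker_female", "male", "female")] * 9
)

for group, rate in disaggregated_error_rates(records).items():
    print(f"{group}: {rate:.1%} error")
# lighter_male: 1.0% error
# overall: 8.3% error
# darker_female: 45.0% error
```

The point is the reporting, not the model: the same predictions yield a reassuring headline number and an alarming subgroup number, and only the disaggregated report surfaces the second.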
The response from companies was revealing. IBM updated its software within 24 hours
of receiving her findings. Amazon pushed back aggressively, disputing her
methodology. Microsoft's president Brad Smith cited her research while publicly
calling for government regulation of facial recognition — an unusual move for a tech
executive.
Buolamwini didn't stop there. With fellow researcher Inioluwa Deborah Raji, she extended the audit to additional systems. She founded the Algorithmic Justice League. She
testified before Congress. She collaborated with the ACLU, which ran members of
Congress through Amazon's facial recognition system — it misidentified 28 of them as
people who had been arrested. The argument was hard to ignore: if it can't reliably
identify a senator, what happens when it misidentifies a Black teenager in front of a
police officer?
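To see why a "match" is less objective than it sounds, it helps to sketch the matching step. Systems like this reduce each photo to a vector of numbers and declare a match when two vectors are similar beyond some threshold. Below is a toy illustration; the vectors, names, and threshold values are invented for the example, and real systems use neural-network embeddings with far more dimensions.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings": in a real system these come from a neural network.
mugshot_db = {
    "arrestee_1": [0.9, 0.1, 0.3],
    "arrestee_2": [0.2, 0.8, 0.5],
}
lawmaker_photo = [0.85, 0.2, 0.35]  # fabricated vector, deliberately close to arrestee_1

def search(photo, db, threshold):
    """Return every database entry whose similarity clears the threshold."""
    return [name for name, vec in db.items()
            if cosine_similarity(photo, vec) >= threshold]

# A permissive threshold produces a false "match"; a stricter one does not.
print(search(lawmaker_photo, mugshot_db, threshold=0.80))   # ['arrestee_1']
print(search(lawmaker_photo, mugshot_db, threshold=0.999))  # []
```

The structural point survives the simplification: "match" is a policy choice about a threshold, and a permissive default shifts the cost of errors onto whoever the system confuses.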
By 2020, every U.S.-based company she had audited had stopped selling facial
recognition technology to law enforcement. IBM exited the facial recognition business
entirely. Amazon imposed a moratorium. Microsoft followed.
One grad student's frustration with a broken art project had changed industry policy
at three of the largest technology companies on earth. Her story was later told in Coded Bias, a documentary streaming on Netflix, PBS, and Amazon, which introduced her work to millions of people who had never heard of algorithmic bias, and probably never would have.
She coined a term for what she had discovered: the coded gaze — the way the
priorities, preferences, and prejudices of the people who build AI systems get
quietly embedded into the technology itself. It's a phrase that does what the best
reframes always do: once you have the words for something, you can't stop seeing it
everywhere.
Timnit Gebru: The Silencing That Backfired
The story of Timnit Gebru and Google is, on the surface, a familiar one: a powerful
corporation tries to suppress inconvenient research, and the researcher pays the
price.
Except that's not quite how it ended.
Gebru came to Google in 2018 as one of the most respected researchers in AI ethics —
co-author, with Joy Buolamwini, of the Gender Shades paper that had already shaken
the facial recognition industry. At Google, she co-led the Ethical AI team, built it
into one of the most diverse and consequential research groups in the field, and kept
asking the kinds of questions that made her employer uncomfortable.
In 2020, she co-authored a paper called "On the Dangers of Stochastic Parrots: Can
Language Models Be Too Big?" It was a careful, technical argument that the AI
industry's race to build ever-larger language models — the same technology now
powering ChatGPT, Google's own products, and most of the AI tools you've encountered
— was creating serious problems the industry wasn't acknowledging. Environmental
costs. Training data riddled with bias. Systems that could generate
confident-sounding text without anything resembling understanding. Accountability
structures that didn't exist.
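The environmental-cost concern is at least checkable with envelope math. Here is a rough sketch; every input is an assumption invented for the arithmetic (accelerator count, power draw, training time, datacenter overhead, grid carbon intensity), not a figure from the paper.

```python
# Back-of-envelope estimate of training energy and emissions.
# All inputs are illustrative assumptions, not measurements.
gpus = 1000          # accelerators used for training (assumed)
watts_per_gpu = 300  # average draw per accelerator, in watts (assumed)
days = 30            # wall-clock training time (assumed)
pue = 1.5            # datacenter overhead multiplier (assumed)
kg_co2_per_kwh = 0.4 # grid carbon intensity (assumed; varies widely by region)

hours = days * 24
energy_kwh = gpus * watts_per_gpu * hours * pue / 1000
emissions_tonnes = energy_kwh * kg_co2_per_kwh / 1000

print(f"{energy_kwh:,.0f} kWh, ~{emissions_tonnes:,.0f} tonnes CO2e")
# With these assumptions: 324,000 kWh and ~130 tonnes CO2e for one run,
# before hyperparameter sweeps, retraining, and inference.
```

Swap in your own assumptions and the order of magnitude moves, but the shape of the argument doesn't: training runs at this scale are industrial energy consumers, and the paper's point was that the industry wasn't accounting for that.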
These were not fringe concerns. They were questions any serious researcher should
have been asking. Google didn't want them published.
The company pressured Gebru to retract the paper or remove the names of Google
employees from it. She refused. In December 2020, she was fired — or, in Google's
careful corporate phrasing, her resignation was "accepted," a claim she disputed
publicly and immediately.
What happened next is the part Google did not anticipate.
Within days, more than 2,700 Google employees signed a letter condemning the firing.
More than 4,300 academics, researchers, and supporters outside the company joined
them. Nine members of Congress wrote to Google demanding an explanation. The AI
research conference where the paper had been accepted — and which Google sponsored —
removed Google from its list of sponsors in protest. Senior researchers began
resigning. The story didn't disappear. It became the story.
The paper Google tried to bury is now one of the most cited documents in AI ethics.
The questions it raised — about bias, about environmental cost, about corporate
accountability — are now central to how governments, researchers, and increasingly
the public think about AI regulation. Its concerns have shaped AI governance
discussions at regulatory bodies and standards organizations worldwide. The term "stochastic parrots"
entered the cultural vocabulary as shorthand for a real and recognized problem. Sam
Altman, CEO of OpenAI, used it to describe his own product after ChatGPT launched.
Gebru didn't quietly retreat. She founded DAIR — the Distributed AI Research
Institute — an independent research organization explicitly free from corporate
funding and corporate pressure. It exists, in part, because her firing made the case
that ethical AI research cannot survive inside companies whose business models depend
on ignoring ethical questions.
There's a painful irony at the center of this story that's worth sitting with: Google
built an Ethical AI team, hired one of the world's leading AI ethics researchers, and
then fired her for doing AI ethics research. The contradiction was so glaring that
the attempted suppression did more to advance the cause of AI accountability than the
research alone ever could have.
Suppression is an admission. When a corporation fires someone for telling the truth,
it tells the world exactly how threatening that truth is.
The Thread
Three people. A century apart, more or less. Different industries, different tactics,
different outcomes. What connects them?
None of them set out to be movement figures. Sinclair wanted to write a socialist
novel. Buolamwini wanted to finish her art project. Gebru wanted to publish her
research. The impact came from doing the work honestly and refusing to look away from
what it revealed.
All of them faced real personal cost. Sinclair was dismissed by the president he
inadvertently helped. Buolamwini faced aggressive pushback from one of the most
powerful companies in the world. Gebru lost her job — and with it, the institutional
platform and resources she had spent years building.
And all of them reframed a problem in a way that made it impossible to ignore. Not by
winning an argument, but by making something visible that people couldn't unsee.
Contaminated sausage. A 47% error rate on dark-skinned women. A company firing its
own ethics researcher for doing ethics research.
But here's the thing that matters most for what comes next: none of them did it alone.
Sinclair had the muckraking tradition behind him — a whole generation of journalists
who had been priming the public to distrust industrial capitalism. Buolamwini and
Gebru were collaborators before they were separate stories; Gender Shades was their joint work. Gebru's firing only became a reckoning because thousands of people —
researchers, employees, lawmakers, ordinary people who had never heard of stochastic
parrots — decided it mattered.
The individual was the spark. The collective was the fire. Every time.
The Bar Is Lower Than You Think
Here's what I think we should take from these three stories — and it's not what you
might expect.
I'm not asking you to go undercover in a data center. I'm not asking you to risk your
career publishing research your employer doesn't want you to publish. I'm not asking
you to testify before Congress or found an independent research institute.
Those things matter enormously, and the people who do them deserve our respect and
our support. But they are not the only things that matter. And waiting until you're
ready to be Upton Sinclair is no more effective than doomscrolling.
Think about those 4,300 researchers and academics who signed a letter after Timnit
Gebru was fired. Most of them didn't risk much. They signed their name to a statement
saying this is wrong. That's it. Some of them probably agonized over it.
But together, they turned a corporate HR incident into a global
reckoning that reshaped how we think about AI accountability.
Or consider the people who watched Coded Bias and then mentioned it to someone who
hadn't. Who shared Buolamwini's research in a work meeting when someone said "but AI
is neutral." Who asked their city council candidate what they thought about
algorithmic hiring in municipal contracts. Small acts, but not nothing.
The spectrum of participation runs from signing a letter to founding an institution,
and every point on that spectrum is necessary. The people at the bold end need the people at the quiet end for their work to matter. Gebru needed her 4,300. Buolamwini needed the
journalists and lawmakers who amplified her work. Sinclair needed the readers who got
angry enough to write to their congressmen.
You are not powerless. You are, at minimum, one of the 4,300.
And once you're ready to think bigger than that — once you want to move from
individual acts to collective structures, from signing letters to building something
— that's where the next articles in this series come in. Worker cooperatives.
Community land trusts. Local politics. Mutual aid. The places where individual
commitment aggregates into durable power.
The fight for economic justice in the age of AI is not going to be won by exceptional
individuals acting alone. But it is going to require each of us — people like the
three in this article, and people like you and me — to decide that the truth is worth
telling, and that action is worth taking.
The alternative is waiting for someone else to be Upton Sinclair.
Don't wait.
This is the second in a series examining AI, automation, and economic justice. The
first piece, AI Abundance is a Lie,
examined why technological productivity alone won't create broadly shared prosperity.
Future pieces will explore specific policy solutions, successful organizing models,
and the role of tech workers in shaping how AI is deployed.
Sources
- History.com. "How Upton Sinclair's 'The Jungle' Led to US Food Safety Reforms." https://www.history.com/articles/upton-sinclair-the-jungle-us-food-safety-reforms
- Britannica. "Pure Food and Drug Act." https://www.britannica.com/topic/Pure-Food-and-Drug-Act
- U.S. House of Representatives History. "The Pure Food and Drugs Act." https://history.house.gov/Historical-Highlights/1901-1950/Pure-Food-and-Drug-Act/
- Teaching American History. "Letter from Theodore Roosevelt to Upton Sinclair (1906)." https://teachingamericanhistory.org/document/to-upton-sinclair/ (source for Roosevelt's "crackpot" characterization, from his letter to journalist William Allen White)
- Buolamwini, Joy and Gebru, Timnit. "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification." 2018. http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf
- MIT News. "Study finds gender and skin-type bias in commercial artificial-intelligence systems." https://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212
- OneZero/Medium. "How a 2018 Research Paper Led Amazon, Microsoft, and IBM to Curb Their Facial Recognition Programs." https://onezero.medium.com/how-a-2018-research-paper-led-to-amazon-and-ibm-curbing-their-facial-recognition-programs-db9d6cb8a420
- ACLU. "Amazon's Face Recognition Falsely Matched 28 Members of Congress With Mugshots." https://www.aclu.org/news/privacy-technology/amazons-face-recognition-falsely-matched-28
- Coded Bias documentary. Official site: https://www.codedbias.com/about. Also on Netflix (https://www.netflix.com/title/81328723), PBS Independent Lens (https://www.pbs.org/independentlens/documentaries/coded-bias/), and Amazon (https://www.amazon.com/gp/video/detail/B0FVKTZ9Z6)
- Bender, E.M., Gebru, T., McMillan-Major, A., and Mitchell, M. "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" ACM FAccT 2021. https://dl.acm.org/doi/pdf/10.1145/3442188.3445922
- MIT Technology Review. "We read the paper that forced Timnit Gebru out of Google." https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/
- MIT Technology Review. "Congress wants answers from Google about Timnit Gebru's firing." https://www.technologyreview.com/2020/12/17/1014994/congress-wants-answers-from-google-about-timnit-gebrus-firing/
- Wikipedia. "Stochastic parrot" (Sam Altman quote). https://en.wikipedia.org/wiki/Stochastic_parrot
- TIME. "Why Timnit Gebru Isn't Waiting for Big Tech to Fix AI's Problems." https://time.com/6132399/timnit-gebru-ai-google/
- Georgetown Law Center on Privacy & Technology. "American Dragnet: Data-Driven Deportation in the 21st Century." 2022. https://americandragnet.org/
- ACLU. "Face Recognition and the 'Trump Terror'." https://www.aclu.org/news/privacy-technology/ice-face-recognition
- Duke Chronicle. "AI researcher Joy Buolamwini discusses bias in facial recognition technologies at Duke event." February 2025. https://www.dukechronicle.com/article/2025/02/duke-university-joy-buolamwini-artificial-intelligence-researcher-discussed-biases-in-facial-recognition-technologies-the-coded-gaze