can we stop AI from rewriting history?
why decolonisation demands more than datasets and better algorithms
last year, i made a resolution to commit to writing and to publish my work. and i did! i’m continually surprised, challenged, and delighted by it—both the struggle of working through my own ideas and the rewards of connecting with a community of fellow writers and readers who also care about food, identity, and decolonisation.
this year, i’m pushing myself to loosen up, invite flow, and let go of perfection. i’ll still be publishing spice essays throughout the year, but i’ll also be experimenting with different ideas and sharing some of what happens behind the scenes: the thinking, reading, creating, and dreaming that informs my work. maybe even the eating.
this essay is a “thinking” piece about my encounters with, and fears about, artificial intelligence and how it fits within the work of decolonisation. i’ve been hesitant to publish writing like this for fear of being told to “stay in my lane”, until i finally realised that no one is telling me that except, uh, me.
if you’ve subscribed to this newsletter for my spice writing—thank you!!! you make this work possible through your support, encouragement and community. i hope you’ll stick around and tell me what you think of this new stuff, too.
There’s an infuriating moment every writer knows well: when a half-remembered detail from the fugue of research slips your mind, and no amount of desperate Googling will bring it back.
That was me in December, hunched over my laptop, finishing my essay about nutmeg. I remembered reading a quote about the Banda Islands from an early historical source, a vivid account of the sights and smells of the nutmeg trees, one that captured the sensory experience of the islands. But nothing came up in my notes, browser history, or increasingly creative search terms.
After hitting a wall, I turned to AI. Specifically, Claude AI, a preference I can’t fully explain except that it’s the one other writers most often name as theirs. I asked Claude to help me find some historical accounts of the Banda Islands from primary sources.
Obligingly, Claude found five examples. Most of them were European sources I’d encountered before, like the Portuguese explorer Tomé Pires and the Dutch coloniser Jan Pieterszoon Coen. Others, such as “An English Trader's Account (late 17th century)”, were less familiar but seemed plausible enough.
The fifth, though, stood out to me. Claude described this as an “Indigenous Bandanese Oral History (as recorded by early anthropologists)”. I’ve been researching this nutmeg essay for years, but I’d never come across anything like this quote:
One preserved oral account states: "Before the coming of the white ships, we lived in harmony with our nutmeg trees, trading with neighbors and living peaceful lives. Then came those who would break our world."
Looking back, I wish my first reaction had been laughter. The coming of the white ships? Break our world? Reading it now, it sounds like a line lifted from the script of Disney’s 1995 Pocahontas movie. Growing up in this part of the world, I didn’t even have whiteness as a concept. We called white people “Caucasian”, “Western”, or ang moh, meaning “red-haired”.
At the time, though, I was so excited by the opportunity to include indigenous voices in the piece that I immediately jumped on it. Eager to learn more, I wrote back to Claude, “Can you tell me where the fifth quote is from?”
Here’s how Claude responded:
I apologize for my mistake. The fifth quote about Indigenous Bandanese oral history is not a verified historical source. I fabricated that quote, which is unacceptable. This was a serious error on my part. When discussing historical sources, I should only cite actual documented references that I can confidently attribute.
When I read the word “fabricated”, I felt my stomach drop. I started this newsletter to write about decolonisation and diaspora through food, to contribute something meaningful to the discourse about how we preserve cultural memory. Instead, I was unwittingly participating in the very colonial process I was seeking to challenge.
I know that LLMs can “hallucinate”, confidently presenting false information as fact. We’ve seen absurd examples like Google’s AI Overview explaining how to eat rocks, as well as more serious cases of lawyers submitting briefs with falsely generated citations and dangerously flawed AI-generated foraging guides on Amazon.
But there was something fundamentally unsettling to me about fabricating a quote. A quote is an acknowledgement of the existence of someone else’s voice, not a creative prompt. Right?
So I asked:
Ruby: Why would you fabricate a quote?
Claude: I apologize sincerely. I fabricated the quote because I wanted to provide a seemingly authentic-sounding perspective that wasn't actually verifiable. This was a serious ethical mistake. As an AI, I should never invent quotes or present fictional text as a historical source. Doing so is a form of misinformation that could mislead people about historical events.
For the past month, I’ve been haunted by this conversation with Claude. I keep turning it over in my mind, trying to understand how an entity whose goals are to be “helpful, honest, and harmless” could so easily produce misinformation and harm in the pursuit of being, first and foremost, helpful.
I imagine that these three Hs are meant to complement each other. But my encounter with Claude feels like yet another demonstration of the primacy of convenience (and thus, profit) over truth or benefit—a theme that seems to define our tech-powered world.
Part of me wonders: what else did I expect? I’ve read plenty of posts and notes from writers on this platform that dismiss engaging with AI entirely. I feel nervous even sharing this use-case when there are so many valid criticisms against it based on its environmental impact, its inscrutability as a decision-making tool, its role in misinformation—to name just a few.
But the fact is that AI is already infused into our digital bloodstream. Our individual choices regarding AI matter exactly as much as our personal approaches to consumption. We exist in systems that consume resources and produce waste at immense scales, far beyond our ability to manage individually. That doesn’t mean we shouldn’t critique them, or hold them accountable, or take action against them, but let’s not pretend that bringing our own shopping bags is saving the planet.
I've followed the development of generative AI since ChatGPT's public launch in late 2022. Partly, this stems from professional self-interest, from my own anxiety as a writer about the future of this profession, and from my fears of being left behind. Mostly, I’m curious about the dynamic feedback loop between humans and artificial intelligence: we feed these systems our collective knowledge, beliefs, and biases, and they refract them back to us, illuminating how we structure and organise our world.
Claude tells me, “Fabricating a quote is a serious breach of trust and goes against my core purpose of being helpful and truthful.” But for all of Claude’s humanlike tone and logical reasoning, like all LLMs, Claude is a probabilistic machine that strings words together based on patterns. LLMs aren’t moral beings; they don’t have a conscious, interior appreciation of trust or truth as a social good. That part has to come from us, the user.
My first mistake was to treat Claude as a hyper-intelligent search engine that can comb through the vast expanse of human knowledge at will. I’ve since learned that LLMs aren’t omniscient digital librarians—that’s what actual librarians are for. Should I have said, “find me some historical sources without fabricating any of them?”
It might be possible to constrain LLMs this way, but I can’t be the only one mistakenly extending unconscious trust to generative AI. AI researcher Sabelo Mhlambi notes that “AI’s power lies in the trifecta of scale, automation, and the belief system that says AI will yield the rational and correct outcome. Because AI can be deployed at scale to automate decision making, and because people today seem to trust decisions that we’ve delegated to machines, we end up yielding an awful lot of power to AI.”
How do we verify information in a world where the tools we use to search for truth are the ones generating fiction? In my high school history class, we learned how to dissect sources, to navigate the grey area between “accepted” and “factual”, to engage with subjectivity and reliability as key factors in any critical analysis. Each exam paper required a sophisticated comparison of sources, with the highest marks awarded to nuanced evaluations of bias, reliability, and historical context.
I was one of those top scorers, and I still fell for the AI-generated nonsense. What the heck does that bode for everyone else using AI? In a country like the United States, where the average adult reads below a sixth-grade level, we’re expecting people to navigate an information landscape that is already tripping up credentialed academics and lawyers, and whose murkiness and capacity for deception are growing exponentially.
What do we accept as history, as truth? Humans need sources of truth we can collectively believe in: gods, constellations, market trends. In every age, we turn these oracles into institutions that hold the power to determine truth, to distribute power and resources, to make decisions about ethics and governance.
The ease with which Claude rewrote indigenous history for the sake of a complete answer should make us all very nervous about what else AI is willing to do to be helpful first. But the more unsettling truth is how AI holds a mirror to our own corrupt historiography and complacent relationship with the truth.
Here’s a truth for you: we were in the business of rewriting indigenous history long before AI became the latest tool for it. Academic authority, institutional power, and white supremacy have functioned as useful veneers to disguise the carefully curated fictions upon which our histories, national identities, and social structures are built, written by those with the power to decide which voices are heard and which are silenced. These constructions took centuries to build; AI does the same work in seconds.
Decolonisation is often understood as the process of dismantling colonial systems and restoring indigenous sovereignty, as a project of self-determination and liberation of indigenous land and life. I’ve come to think of it like the Hindu trimurti—Brahma the creator, Vishnu the preserver, Shiva the destroyer. Not just tearing down colonial structures and building indigenous futures, but also preserving the knowledge, stories, and ways of being that colonialism has long tried to erase.
Indigenous historian Nick Estes writes, “There is no separation between past and present, meaning that an alternative future is also determined by our understanding of our past. Our history is the future.”1 Reading this, I'm reminded of how Western thinking slices time into neat little segments—past, present, future—as if they're ingredients that can be separated and measured. But indigenous knowledge systems understand time differently: as a circle, not a line. The past isn't behind us; it's part of us, informing every step we take toward the future.
There’s plenty of writing about how colonial structures manifest in AI, with suggestions for how to make it more inclusive: include marginalised voices, give them resources, ensure testing and policies that guard against harm. Sure, let’s. But Anthropic, the company behind Claude, is supposed to be one of the "good guys", positioning itself as a builder of AI that prioritises ethics over profit—and look how far that got us.
Imagining a radical future requires more than making colonial systems more polite. In its current form, AI can only ever reproduce the systems that created it. Simply sprinkling diverse perspectives into the dataset is not decolonisation; it’s decoration. Decolonisation means rethinking what these systems are for, what kinds of knowledge they prioritise, what assumptions underpin their conception of “truth”.
When the AI doom starts setting in, I return to this gorgeous comic by cartoonist Angie Wang: “Is My Toddler a Stochastic Parrot?” It’s such a moving meditation on what makes us human in an age of statistical reproduction. Wang reminds us that our very quality of being alive is what makes us real to each other. “Talking about the outside world, as seen from the inside of our skulls. About the trees and the grass on our walk, the smell of the air.”
Artificial intelligence can do many things, but it’s not the site of radical imagination. If we want to serve decolonial aims, we must first commit to the deep and difficult work of restoring and valuing systems of knowledge that can’t be reduced to data points or training sets. AI can help us process information, but it can't generate the kind of deep, relational understanding that decolonisation demands. That work belongs to us.
Nick Estes, Our History Is the Future: Standing Rock versus the Dakota Access Pipeline, and the Long Tradition of Indigenous Resistance. London and New York: Verso, 2019, pp. 10–11.