
If Orwell predicted a future of authoritarian oppression, we missed the plot twist: truth isn’t being silenced; it’s being distorted, mass-produced, and flooded over us like a knock-off Rolex that still ticks. The root isn’t the press or the politicians, it isn’t the White House or foreign influence; it’s everyone, all of us speaking at once, all claiming to be right.
We live in the Age of Artificial Truth. But don’t get it twisted: the blame doesn’t fall on any one person or thing other than ourselves, and this didn’t start with ChatGPT. The arrival of generative AI didn’t create the crisis of truth; it merely industrialized it. Truth today isn’t a matter of fact or fiction. It’s a matter of narrative dominance. The loudest wins. The most convincing is crowned the “truest.” And the machine? It just regurgitates what it’s been fed.
I want to trace the dirty lineage that birthed this monstrous epistemological collapse, because it might be the most important discussion of our time.
Yellow Journalism: The Birth of Clickbait Before Clicks
Let’s go back to the late 1800s, before digital media, before Facebook memes, before a tweet could move markets. In the days of Hearst and Pulitzer, truth was already for sale. The Spanish-American War didn’t erupt solely because of imperial ambitions or geopolitical chess. It exploded because of front pages screaming lies. “You furnish the pictures, and I’ll furnish the war,” Hearst allegedly told illustrator Frederic Remington. Whether or not he said it hardly matters now: it’s true because I just said so; in this new era, the story itself is truer than the truth.
Yellow journalism was the original content marketing strategy: sell fear, sell drama, sell simplicity. Nuance? Accuracy? Who cares, when the public is addicted to sensation? The seeds were sown: if it bleeds, it leads. And that psychological addiction to sensationalism metastasized into modern media.
“In chasing down cheap clicks at the expense of accuracy and veracity, news organisations undermine the very reason they exist: to find things out and tell readers the truth – to report, report, report.” – Katharine Viner, editor-in-chief of The Guardian
The motive? Profit. Influence. Control. And those never left the building.
Propaganda: When Lies Wear a Uniform
Propaganda elevated yellow journalism into national strategy. In wartime, in politics, and in religion, truth became utilitarian. Not a discovery but a delivery mechanism. Hitler’s Minister of Propaganda, Joseph Goebbels, understood this intimately: repeat a lie often enough, and it becomes accepted as truth. Stalin, Mao, even Roosevelt, all knew that control over information was more powerful than armies.
But the propaganda machine didn’t stop when the tanks did. Fast-forward to the Cold War, and the U.S. government was seeding American pop culture with pro-freedom messages while accusing Hollywood of Communist sympathies – and while those accusations may well have had merit, that doesn’t change the fact that it was still propaganda, designed to rally support for government decisions against an opposition. The media wasn’t reporting reality; it was manufacturing allegiance.
And it worked, because once the public becomes accustomed to curated reality, they’ll beg for lies that flatter them.
SEO and Content Farms: Truth Optimized for Google
Enter the internet. Not as a democratizer of knowledge, but as an industrial press for content sludge. Around the 2000s, search engines became the gatekeepers of truth. And so began the race to game them.
“Content farms” like Demand Media didn’t care about journalistic integrity; they cared about keywords. Articles were churned out by the tens of thousands, each “optimized” for search. A user types in “how to remove a tick” and gets a 500-word article written by someone paid $5 who may never have seen a tick in their life. Now imagine that churned out programmatically, at no cost, in thousands of variations (this is what AI is doing). Now realize that Russia, Israel, Ukraine, Iran, or the White House can do it, under any brand, domain name, or social media profile conceivable, say, “TheTruth.com.” The illusion of authority was preserved by headers, bullet points, and the uncanny cadence of someone trying to sound like they know what they’re talking about.
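To make the scale concrete, here is a minimal sketch, in Python, of the mechanics: one tiny template, a few interchangeable phrases, and dozens of “unique,” search-shaped pages fall out. Every topic, hook, and the “TheTruth.com” byline below are hypothetical illustrations, not anything from a real site; a real operation today would simply swap the string templates for calls to a generative model.

```python
import itertools

# Hypothetical ingredients for a content farm; purely illustrative.
topics = ["remove a tick", "lower a fever", "fix a leaky faucet", "train a puppy"]
hooks = ["Experts agree:", "What doctors won't tell you:",
         "You've been doing it wrong:", "The simple truth:"]
closers = ["Share this before it gets taken down.",
           "Thousands swear by this method.",
           "Doctors hate this simple tip."]

def spin_article(topic: str, hook: str, closer: str) -> str:
    """Assemble one keyword-stuffed 'article' from interchangeable parts."""
    headline = f"{hook} how to {topic}"
    body = (f"Wondering how to {topic}? You're not alone. "
            f"Follow these simple steps to {topic} safely at home. {closer}")
    return f"{headline}\n{body}\n-- TheTruth.com (hypothetical brand)\n"

# Every combination becomes its own page targeting the same searches.
articles = [spin_article(t, h, c)
            for t, h, c in itertools.product(topics, hooks, closers)]

print(f"{len(articles)} 'unique' articles generated from three short lists")
print(articles[0])
```

Three short lists already yield 48 pages; replace the templates with a language model and the ceiling disappears.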
Google didn’t intend to create this, but algorithms don’t understand truth. They understand engagement, links, and popularity. So the web was flooded with content that wasn’t meant to inform, but to perform.
The motive? Ad revenue. Page views. Rankings. And a public so desperate for shortcuts and “life hacks” that it didn’t care whether the advice came from someone who thought peanut butter cured cancer.
Social Media and Fake News: The Crowdsource Collapse
Then came social media, where everyone got a printing press. Now, not only could you say whatever you wanted, but algorithms would help you find your people. Flat Earthers? You’re not alone. Think the Holocaust didn’t happen? There’s a group for that. Echo chambers formed, not out of malice, but out of convenience. And while flat-Earth or Holocaust denial might not seem to matter much today, the same kinds of groups exist denying vaccines, claiming parental alienation isn’t real, undermining cleantech, and shaping your perspective on wars, your country, and your leaders.
Truth fragmented into tribes in which you can assert that you’re right because others say so.
“Fake news” became the accusation of choice, lobbed from every side, weaponized not just by authoritarians but by anyone losing an argument. And the media? Once trusted as the fourth estate, it now competes with influencers, Reddit threads, and TikTok explainers. A 19-year-old with good lighting can easily outrank The New York Times in shaping perception.
The irony? The more you consume, the less you know. Because content is no longer vetted for accuracy—it’s vetted for shareability.
And let’s be honest, the public doesn’t want nuance. We want validation. We want to be right. Not informed.
The AI Apocalypse (Spoiler: It’s Just the Sequel)
Now here comes AI. But AI isn’t an inventor of lies; it’s an amplifier. ChatGPT, Claude, Gemini… they don’t know anything. They synthesize what already exists. They’re a mirror held up to the culture. When they hallucinate, they’re not breaking from reality; they’re echoing it.
“We are facing a crisis of epistemology in today’s world. How do we know things? How do we distinguish between true beliefs and delusions, facts and fiction?” writes Brett McCracken in Five Facets of Our Epistemological Crisis. “What can be trusted?”
Ask AI for a source and it might fabricate a peer-reviewed journal article that doesn’t exist. Why? Because it knows we expect a source. It knows the shape of credibility and can create it, but not its substance; it isn’t doing the research to validate anything, it’s manifesting what you’re seeking. That’s not a bug. It’s a feature of us. We trained it this way.
We created machines to reflect our informational gluttony, and now we’re shocked that they don’t distinguish between fact and fervor. It’s not artificial intelligence. It’s artificial consensus. And we’re to blame.
The Courtroom, the Classroom, and the Cafeteria
Let’s zoom in on some real-world implications. Courtrooms increasingly rely on “expert witnesses” whose credentials are wielded the way a PhD throws around their title: as proof that they’re right. Judges, overwhelmed and underinformed, rule based on rhetoric rather than evidence. One expert says so; never mind the facts, never mind that the stories disagree. The result?
- Parents losing their relationships with their children because family courts decide based on whichever opinion they prefer
- People convicted and imprisoned under the influence of social media’s cancel culture
- Businesses lost because laws and regulations get passed when people don’t like what’s being done
In schools, kids are told to think critically while being fed standardized curricula optimized for test scores, not truth. Teachers are no longer trusted. Google it. Ask TikTok. Have ChatGPT write the paper.
Even in your breakroom or at the Thanksgiving dinner table, the debate is no longer about facts. It’s about whose facts. The political becomes personal. The personal becomes weaponized. Disagreement isn’t intellectual; it’s existential.
Because in the Age of Artificial Truth, your opinion isn’t a truth. It’s the truth. And anyone who doesn’t agree? They’re not just wrong. They’re immoral.
No One’s Coming to Save Us
What can we do about it? That’s the wrong question.
There is no “we.” There is no collective sense-making anymore. There is no shared reality to build upon, no trusted authority to rein in the chaos, no final arbiter who can say, “This is true,” and not get crucified for it. And arguably, there never was one, nor should there be; you shouldn’t simply take the word of a journalist or a politician. Yet for centuries that’s exactly what people did, and we did, at least, put our faith in fact-checking, peer-reviewed research, and experienced testimony.
The institutions failed. The algorithms can’t help. The media sold out. And the public prefers tribal identity to intellectual integrity.
Truth isn’t dead. It’s just irrelevant, because the new currency isn’t facts; it’s conviction. And the only thing scarier than a world where a world leader lies, a court hands down the wrong verdict, and AI writes lies that the internet pushes on us… is a world where humans stop caring whether they’re lies in the first place.