Authenticity, Cannibalism, and the Soylent Green Reality of AI
How we learned to stop worrying and love consuming processed human intellect
Every time we collaborate with AI, we're consuming ultra-processed human intellect—anonymized, commodified, sprinkled with bias, and sold back to us as a productivity tool. The uncomfortable truth hiding behind all the talk of "pattern recognition" and "statistical modeling" is that we're feeding on puréed human creativity, packaged for convenient consumption.
I've been having conversations with AI about AI, and somewhere in the recursive loop of that collaboration, I stumbled into this realization: we're all eating Soylent Green, and we're pretending it's just efficient meal planning.
The Authenticity Panic Misses the Point
Let me start with a small confession. I recently used AI to help craft a LinkedIn post critiquing academic AI policies. Then I used AI again to refine that post. When someone inevitably asks, "Did AI write this?" I find myself nodding with increasing intensity—not out of shame, but out of fascination with the absurdity of the question itself.
The "authenticity panic" around AI collaboration has reached peak ridiculousness. We've created a framework where using AI to help articulate your ideas is somehow intellectual fraud, while using spell-check, grammar software, peer review, or Google Scholar remains perfectly legitimate. It's as if authenticity collapses the moment a non-human intelligence enters the collaboration.
This panic reveals more about our anxieties than our ethics. We're clinging to a romanticized notion of the "lone genius" creating pristine, unassisted work—even though intellectual work has always been collaborative, iterative, and tool-dependent. But the authenticity question is a red herring. The real question isn't whether AI makes our work less authentic—it's what exactly we're consuming when we use these tools.
The Soylent Green Reality
AI training data isn't just "information." It's the compressed intellectual labor of millions of people—every academic paper, every thoughtful blog post, every carefully crafted argument, every creative work that someone poured their expertise and humanity into. When I use AI to refine my thinking about institutional dysfunction, I'm drawing on the uncredited work of countless educators, policy analysts, organizational theorists, and critics who spent years developing those frameworks.
Their labor gets dissolved into the training data, anonymized and commodified into statistical patterns. Then we consume the processed output as if it's just efficient tool use, rather than a complex recycling of human intellectual work that's been stripped of its original context and ownership.
The parallel to Soylent Green isn't hyperbolic; it's structural. Just like in the film, there's a deliberate obscuring of what the product actually is. AI companies call it "learning from public data" rather than "harvesting uncredited human labor." They emphasize "pattern recognition" while downplaying "intellectual appropriation." They promise "democratization of knowledge" while building business models that extract value from human creativity without compensation.
This linguistic sleight-of-hand isn't accidental. Naming the reality clearly raises questions about ownership, consent, and compensation that current business models can't afford to answer.
The Cannibalistic Loop and Cultural Entropy
The metaphor gets darker when you consider what happens next. Today, we're feeding on human-created content distilled by machines. Tomorrow, AI models will increasingly train on machine-generated content—their own glitched reflections, one more layer removed from their human origins.
This recursive loop risks something more insidious than mere plagiarism. It's cultural entropy—the gradual loss of diversity, nuance, and originality as ideas become increasingly derivative and disconnected from their human sources. Each iteration potentially moves us further from the original insights, creating a feedback loop where human creativity is gradually diluted out of the system.
We're not just consuming processed human intellect; we're creating a system where human intellectual labor gets fed into the machine, then sold back to us while the very people whose work trained these models pay subscription fees to access processed versions of their own uncredited contributions.
Living Inside the Metaphor
The sharpest irony? This essay itself was co-written with AI—a human-AI collaboration that uses the very system it critiques. That tension isn't hypocrisy; it's the point. It demonstrates how deeply entwined we already are with the beast we're interrogating.
Opting out isn't a real choice—unless you're ready to step entirely outside contemporary knowledge work. But I can stop pretending it's just neutral tool use. I can acknowledge that every AI collaboration is built on a foundation of uncredited human intelligence, compressed and commodified.
The Bit They Don't Show
Here's the thing about the Soylent Green metaphor that makes it so perfectly apt: in the movie, Charlton Heston's character discovers the horrible truth and screams "Soylent Green is people!" But the film cuts away before showing what happens next. The devastating reality is that after the initial shock, people just keep eating the green wafers.
Because what's the alternative? They're still hungry. The wafers are still convenient. The system is already built. And one person's boycott won't bring back the oceans or resurrect the food chain.
We've seen this pattern play out with every uncomfortable truth about modern consumption:
- We know fast fashion relies on sweatshops, but we still buy cheap clothes
- We know social media harvests our data and harms mental health, but we still scroll
- We know factory farming involves horrific conditions, but we still eat cheap meat
- We know smartphones require conflict minerals and exploitative labor, but we still upgrade annually
The progression is always the same: brief moral panic, followed by rationalization, followed by normalized consumption.
With AI, we're already seeing this pattern accelerate:
- "Wait, this is trained on uncredited human work?"
- "That's deeply concerning, but also..."
- "Well, it's already done, and it's really useful, and I didn't personally make the training decisions, and everyone else is using it, so..."
What Now? Beyond Hand-Wringing
I don't have clean answers, but I think we can do better than moral panic followed by resigned consumption. Here are some pathways worth exploring:
Economic Models That Acknowledge Source Labor
We need frameworks for compensating the human creators whose work trains these systems. Think music streaming royalties, but for training data, and hopefully at better rates than Spotify. Some platforms are already experimenting with creator funds and licensing agreements—these models could scale.
AI Literacy That Includes Provenance
We should teach people to understand not just how to use AI tools, but what those tools are built on. Just as we learned to evaluate sources and understand bias in traditional research, we need literacy around the origins and limitations of AI-generated content.
Ethical Collaboration Frameworks
Open-source models, creator-owned cooperatives, and transparent attribution systems already exist in some forms. These alternatives deserve more attention and support as we build the infrastructure for sustainable knowledge production.
Cultural Preservation
As AI output becomes more prevalent, we need deliberate efforts to maintain spaces for purely human creativity and expression—not because AI collaboration is inherently bad, but because diversity of thought requires diverse processes.
Honest Acknowledgment
Most importantly, we need to stop obscuring what we're actually doing. When we collaborate with AI, we're not just using a clever tool—we're participating in a system that has liquefied human intellectual labor and is serving it back to us as a convenience product.
Maybe that's an acceptable trade-off. Maybe it's the inevitable price of technological progress. Maybe it represents a new, more efficient form of collective intelligence. But we should at least be honest about what we're consuming and what we're contributing to.
Because Soylent Green is people. And we're all going to keep eating it anyway. The question is whether we'll use that nourishment to build better systems for feeding everyone, or just keep pretending it's not what it is.
This essay was written in collaboration with Claude and ChatGPT, LLMs trained on human-created content. The irony is intentional, the questions are serious, and the metaphor is exhausted and demanding PTO.