The Great AI Triple-Talk

When Everyone's an Expert and Nobody Knows Anything

Artificial intelligence is either transforming everything, about to collapse in a spectacular bubble, or in desperate need of better change management. Depending on which expert you ask, AI is simultaneously the solution to enterprise productivity, a massive speculative delusion, and a workflow integration challenge. The only thing everyone agrees on is that you should definitely listen to their particular take on what's really happening.

Move over, Orwell and your passé doublethink: we have arrived at the age of triple-talk, where institutional authorities produce such contradictory narratives about the same phenomenon that ordinary people—the "muggles" trying to keep their jobs and make sense of technological change—are left to figure it out themselves. And we are figuring it out, just not in the ways any of the experts predicted.

The Information Triangle

Consider the curious case of AI adoption research. MIT's academic team spent months interviewing executives and analyzing implementation patterns, producing a methodologically rigorous report that identifies a "GenAI Divide" between organizations that successfully deploy AI and those that don't. Their conclusion? It's mostly an organizational design problem. Buy rather than build, empower line managers rather than development teams, focus on flexible systems rather than static tools. Sensible stuff, carefully hedged with appropriate scholarly disclaimers.

Meanwhile, Substack commentator Ted Gioia surveys the same landscape and sees something entirely different: a classic tech bubble inflated by four billionaire CEOs playing Monopoly with other people's money while McDonald's customers can't afford breakfast. His evidence? Massive infrastructure spending with no consumer willingness to pay, ChatGPT usage dropping 65% when students go on vacation, and $14 billion spent to hire a single startup founder. The bubble diagnosis feels compelling until you remember that bubbles are notoriously difficult to call in real time.

Then there's OpenAI itself, helpfully providing a "leadership guide" for staying ahead in the age of AI. Their solution involves aligning leadership, activating champions, amplifying wins, accelerating decisions, and governing responsibly. It reads like a change management consultant's fever dream, complete with success stories about Moderna's CEO demanding employees use ChatGPT 20 times per day (twenty times!). Notably absent from their recommendations: any acknowledgment that their technology might have fundamental limitations for enterprise applications.

Each perspective serves a different institutional need. Academics need to produce carefully qualified findings that advance knowledge without making risky predictions. Media commentators need bold takes that generate subscriber engagement. Vendors need frameworks that position their products as solutions to customer problems. The result is three incompatible versions of reality emerging from the same underlying data.

George Orwell gave us the vocabulary of institutional deception, though "doublespeak" itself was stitched together from his doublethink and Newspeak by later critics. Triple-talk is something harder to identify and counter. Doublespeak lets you point to specific distortions and bad-faith arguments; fragmented expertise lets each source maintain plausible authority within its own domain while the synthesis problem goes unaddressed.

The Little Engine That Could

Consider the Wankel rotary engine. Instead of pistons firing in a linear sequence, a triangular rotor spins inside its housing, forming three distinct combustion chambers, each at a different phase of the cycle at any given moment. The motion is continuous, but the perspectives are fundamentally different depending on where you're positioned relative to the rotating mechanism.

In our AI discourse triangle, MIT's academic research represents the intake phase: carefully gathering data and evidence. Gioia's media commentary functions like the compression phase: building pressure around contradictions and unsustainable trends. OpenAI's vendor guidance operates as the power stroke: converting analysis into actionable recommendations that drive adoption.

Like the Wankel engine, this creates a kind of continuous motion where each perspective is always "firing" at a different point in the cycle. The academic researchers are always gathering more implementation data while the bubble theorists are always identifying new contradictions and the vendors are always generating new adoption strategies. None of them ever align because they're designed to operate at different phases.

Consequently, traditional synthesis mechanisms don't work here. You can't simply average the outputs of three systems operating at fundamentally different points in the cycle. The information isn't contradictory in the sense of being wrong; it's contradictory because each source is addressing a different aspect of the same underlying phenomenon at a different temporal scale.

The Fragmentation Problem

What's particularly troubling is how each source treats inconvenient evidence. MIT's researchers minimize macroeconomic questions about AI investment sustainability, focusing instead on implementation mechanics. Gioia dismisses evidence of genuine operational value in successful AI deployments, emphasizing speculative excess instead. OpenAI glosses over the fundamental learning gap that MIT identifies as the core barrier to enterprise success, treating adoption challenges as mere change management problems.

This selective emphasis creates an information environment where decision-makers can't rely on any single authoritative source. Unlike previous technological transitions, we lack institutional mechanisms for synthesizing across these competing perspectives. There's no equivalent of the FDA for AI claims, no Consumer Reports for enterprise software effectiveness, no trusted intermediary that can evaluate evidence without obvious conflicts of interest.

The historical parallel is revealing. Complex policy decisions have always suffered from this synthesis problem, but the internet was supposed to democratize access to information and enable better decision-making. Instead, it's created infinite competing narratives with no reliable way to adjudicate between them. We're drowning in expertise while starving for wisdom.

The Individual Response

So what are folks like us actually doing? We're improvising. The most interesting finding in MIT's research isn't their organizational design recommendations—it's the discovery of a massive "shadow AI economy" where employees bypass their companies' official AI initiatives entirely. While only 40% of companies purchased official AI subscriptions, workers from over 90% of surveyed companies reported regular personal AI tool usage for work tasks.

This represents a kind of grassroots optimization that none of the institutional experts predicted. Workers discovered that $20-per-month ChatGPT subscriptions often outperform expensive custom enterprise solutions, not because the technology is better, but because the interface is more responsive to how people actually work. The same corporate lawyer who avoided her organization's $50,000 contract analysis tool defaulted to ChatGPT for drafting, citing better outputs and more flexible interaction.

The interface regression is particularly telling. We've spent decades developing sophisticated graphical user interfaces—windows, icons, drag-and-drop, touch screens—only to discover that the most compelling way to interact with AI is typing text into a box and reading text responses. It's essentially a return to 1970s terminal interfaces, just with natural language instead of coded commands. All that advanced hardware running what amounts to an upgraded command line.
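How little has changed is easy to demonstrate. Here's a minimal sketch of the entire modern AI interface, assuming the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY in your environment; the model name is illustrative:

```python
# A 1970s terminal interface, 2020s edition: read a line, print a response.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# The conversation is just a growing list of text messages.
history = [{"role": "system", "content": "You are a helpful assistant."}]

while True:
    try:
        prompt = input("> ")
    except EOFError:
        break
    if prompt.strip().lower() in {"quit", "exit"}:
        break
    history.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model works here
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    print(reply)
```

Strip away the chrome and this loop, text in, text out, a list of strings as memory, is most of what any chat product actually does.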

This suggests that effective AI adoption might have less to do with enterprise transformation strategies and more to do with understanding what works in practice. The remote work transition helped prepare knowledge workers for text-based interaction patterns through Slack, Discord, and social media messaging. When ChatGPT launched, it felt familiar because people had already developed the behavioral patterns that make conversational AI useful.

Systemic Implications

The broader consequence is a privatization of sense-making. Instead of relying on institutional guidance, individuals develop personal methods for navigating between competing expert claims. Success increasingly depends on your ability to synthesize information across contradictory sources, test claims through direct experience, and adapt quickly when expert predictions prove wrong.

This creates its own problems. Democratic discourse suffers when expert authority becomes indistinguishable from marketing. Institutional credibility erodes as audiences recognize agenda-driven messaging. The people best positioned to thrive are those with the time, skills, and resources to develop effective personal synthesis methods—potentially exacerbating existing inequalities rather than democratizing access to technological benefits.

Consider the irony: the technology that's supposed to augment human intelligence is being deployed in an information environment that makes reliable decision-making nearly impossible. We have AI systems that can process vast amounts of data alongside institutional authorities that produce contradictory interpretations of what that data means.

Living with Uncertainty

Perhaps this is just the normal messiness of technological transition periods. Markets and informal networks might indeed be more effective than centralized synthesis mechanisms for navigating genuine uncertainty. The shadow AI economy suggests that people are quite capable of figuring out what works when freed from institutional constraints.

But there's something unsettling about a world where everyone claims expertise while fundamental questions remain unanswered. Is AI transforming work or just creating expensive distractions? Will current investment levels prove sustainable or collapse in a bubble burst? Are enterprise implementation failures temporary growing pains or signs of fundamental technological limitations?

The honest answer is that nobody knows, despite the confident assertions from all corners. What we're left with is the curious spectacle of individuals having increasingly sophisticated conversations with AI systems about whether AI systems actually work—a kind of recursive sense-making that would have seemed like science fiction just a few years ago.

Maybe that's the real transformation: not that AI is solving our information problems, but that it's forcing us to develop better methods for thinking through uncertainty when the experts can't agree on what's actually happening. In the absence of institutional guidance, we're all just muggles trying to make sense of the magic, one conversation at a time.
