The Spectacle as Shield

This is the story of how the legal profession fought AI to protect its relevance—and how the tech industry smiled, built the AI, and made the stuffing.

Introduction

In the spring of 2023, attorney Steven Schwartz became the unwitting face of artificial intelligence's invasion of the legal profession. His submission of a brief containing ChatGPT-generated fake cases to a federal court sparked headlines, sanctions, and industry-wide soul-searching about AI's role in legal practice. The story had everything: technological hubris, professional embarrassment, judicial outrage, and clear villains and victims. It was, in short, the perfect spectacle—dramatic enough to capture attention, simple enough to understand, and scandalous enough to generate endless commentary.

But spectacles, particularly those involving professional competence and technological change, often serve a function beyond mere entertainment or education. They can act as cognitive shields, protecting established interests from more threatening questions by focusing collective attention on dramatic but ultimately peripheral issues. The legal profession's fixation on AI hallucinations exemplifies this phenomenon, revealing how spectacle can become a form of systemic misdirection that preserves existing power structures while appearing to engage seriously with technological disruption.

The Architecture of Professional Spectacle

Spectacles that protect professional interests share common characteristics. They must be dramatic enough to dominate discourse while remaining narrow enough to avoid systemic questions. They should generate moral clarity (clear villains and victims) while obscuring economic complexity. Most importantly, they must allow the profession to appear responsive to technological change without actually confronting its implications.

The AI hallucination narrative meets all these criteria perfectly. Fake legal cases are inherently scandalous—they violate the legal profession's foundational commitment to truth and precedent. They create clear moral categories: reckless attorneys who fail to verify AI output versus responsible practitioners who maintain professional standards. The story allows for seemingly sophisticated analysis of AI limitations while avoiding deeper questions about AI capabilities.

Consider what the hallucination spectacle accomplishes rhetorically. It positions verification as the central challenge of AI adoption in law, suggesting that if lawyers simply fact-check AI output more carefully, the technology can be safely integrated into existing practice models. This framing preserves several crucial assumptions: that legal expertise remains scarce and valuable; that traditional hierarchies of professional knowledge are intact; and that AI represents a tool to be controlled rather than a force that might reshape the profession itself.

But this focus on verification obscures more fundamental questions. If AI can generate structurally sound, well-reasoned legal arguments—even with fabricated citations—what does this reveal about the nature of legal reasoning itself? If routine legal work follows predictable patterns that AI can replicate, what justifies traditional fee structures? If median attorneys using AI tools can outperform traditionally "better" lawyers working alone, how should the profession define competence?

The spectacle prevents these questions from being asked, let alone answered. By directing attention toward dramatic failures, it deflects examination of mundane successes. The legal profession can acknowledge AI's existence while avoiding confrontation with its implications.

The Media Amplification Machine

Professional spectacles don't emerge in a vacuum—they require an ecosystem of amplification that transforms isolated incidents into industry-defining narratives. The media doesn't merely report on AI failures; it actively selects, frames, and amplifies them in ways that serve multiple institutional interests.

Consider who benefits from the Mata v. Avianca narrative becoming the dominant story about AI in law:

Legal Technology Companies position themselves as providers of "responsible AI" solutions, contrasting their careful, verified products against the reckless use of consumer ChatGPT. Each AI failure story becomes a marketing opportunity for "enterprise-grade" legal AI tools.

Traditional Legal Media discovers that AI failure stories generate significantly more engagement than nuanced discussions of technological integration. Headlines like "ChatGPT Invents Fake Cases" drive more clicks than "AI Tools Quietly Transform Document Review."

Legal Influencers and Thought Leaders build substantial followings by taking strong positions on AI scandals. LinkedIn posts analyzing the "dangers of AI hallucinations" receive thousands of likes and comments, establishing expertise and authority within professional networks.

Bar Associations and Regulatory Bodies use spectacles to justify their continued relevance and oversight authority. Each AI failure becomes evidence that the profession needs stronger ethical guidance and regulatory frameworks—conveniently expanding institutional power.

This media-professional feedback loop is what transforms ordinary technological mishaps into existential debates. The story gets retold, analyzed, and referenced until it becomes the defining narrative about AI's role in legal practice, drowning out alternative perspectives that might threaten established interests.

Crucially, the media amplification focuses on individual failures rather than systemic capabilities. We get detailed analysis of why Steven Schwartz should have verified his citations, but virtually no coverage of whether AI-generated legal arguments are becoming indistinguishable from human-written ones in routine practice.

Educational Institutions as Spectacle Reinforcers

Law schools represent another crucial but overlooked stakeholder in perpetuating spectacle-driven narratives about AI. These institutions face profound challenges in adapting to technological change while maintaining their economic and professional relevance.

Curricular Inertia: Law school curricula remain largely unchanged from the pre-AI era, emphasizing traditional legal reasoning, case analysis, and writing skills. When AI is addressed, it's typically through the lens of "ethics and verification" rather than "economic transformation of legal practice." This framing reinforces the spectacle narrative by treating AI as a compliance issue rather than a fundamental disruption.

Faculty Expertise Gaps: Most law professors lack technical understanding of AI capabilities and limitations. This creates a dynamic where AI discussion defaults to what faculty can understand—ethical failures and professional responsibility—rather than technical capabilities or economic implications they're less equipped to analyze.

Economic Self-Interest: Law schools have powerful incentives to maintain the narrative that legal education remains valuable and necessary. If AI can perform much routine legal work competently, this threatens the justification for expensive legal education. Focusing on AI's failures rather than capabilities protects institutional relevance.

Bar Exam Gatekeeping: The bar examination system reinforces traditional approaches to legal reasoning and knowledge. As long as bar passage requires memorizing traditional legal doctrine rather than demonstrating AI-augmented competence, law schools will continue preparing students for a pre-AI professional world.

Student Debt Dynamics: With law school debt routinely exceeding $150,000, both institutions and students have strong psychological needs to believe that traditional legal education provides irreplaceable value. Acknowledging that AI might commoditize legal expertise threatens this fundamental assumption.

The result is an educational system that reproduces spectacle-friendly narratives about AI while avoiding fundamental questions about how legal education should adapt to technological change. Students graduate with detailed knowledge of AI ethics requirements but little understanding of how AI tools might transform their actual practice—perpetuating professional blindness to technological disruption.

The Client Cost of Professional Spectacle

While the legal profession debates AI hallucinations, the people who actually need legal services—clients and the broader public—bear the hidden costs of this misdirection. The spectacle-driven focus on AI failures actively prevents innovations that could make legal services more accessible, affordable, and effective.

Access to Justice Delayed: Millions of Americans cannot afford legal representation for routine matters like contract disputes, immigration issues, or family law cases. AI tools could potentially provide "good enough" legal assistance for straightforward issues at a fraction of traditional costs. But professional fixation on verification and liability concerns blocks development of these beneficial applications.

Cost Inflation Protected: The legal profession's emphasis on AI's limitations helps justify maintaining traditional fee structures. If clients understood that AI could handle much routine legal work competently, they might reasonably expect reduced costs for services like document review, basic contract drafting, or standard motions. The spectacle narrative maintains artificial scarcity in legal expertise.

Innovation Suppression: Regulatory bodies, influenced by spectacle-driven narratives, create compliance frameworks that favor incumbent law firms over innovative service providers. Startups attempting to provide AI-powered legal services face barriers designed ostensibly to prevent AI failures but effectively protecting traditional practice models.

Client Expectations Management: The focus on AI failures shapes client expectations in ways that may not serve their interests. Clients learn to fear AI assistance even when it might provide better, faster, or more thorough analysis than overworked human attorneys. This prevents beneficial human-AI collaboration that could improve client outcomes.

Public Discourse Distortion: Citizens forming opinions about AI in legal systems receive information filtered through professional spectacle rather than objective assessment of capabilities and limitations. This distorts democratic discussions about how legal technology should be regulated and deployed.

Quality vs. Accessibility Trade-offs: The profession's emphasis on traditional quality standards may prevent beneficial innovations that could serve broader social needs. When "perfect" legal representation is unaffordable, "good enough" AI-assisted services might better serve justice than no representation at all.

Perhaps most troubling, clients are rarely included in professional discussions about AI adoption. The spectacle focuses on protecting professional standards and traditional practice models, not on whether these serve client needs or broader social interests. This represents a fundamental misalignment between professional self-interest and public service obligations.

Historical Context: Why This Spectacle Feels Different

The legal profession's response to AI becomes more comprehensible when viewed against the historical pattern of technological adoption in legal practice. Previous innovations were absorbed relatively smoothly because they enhanced rather than threatened core professional functions.

Typewriters and Word Processors (late 19th century through the 1980s) increased efficiency in document production but reinforced the centrality of human legal reasoning. Lawyers could produce documents faster, but the intellectual work remained entirely human.

Legal Databases like LexisNexis and Westlaw (1970s-1990s) revolutionized legal research by making case law and statutes searchable electronically. But this change amplified lawyer expertise rather than replacing it. Faster access to legal information made skilled attorneys more valuable, not less necessary.

Email and Internet Communication (1990s-2000s) transformed law firm operations and client relationships but didn't challenge core assumptions about legal reasoning or professional expertise. Technology changed how lawyers worked without changing what made lawyers valuable.

Document Review Technology (2000s-2010s) began automating routine litigation tasks like e-discovery, but this was absorbed as a cost-reduction tool that freed lawyers for "higher-value" work. The profession adapted by repositioning itself up the value chain.

AI represents the first technology that might actually automate legal reasoning itself—the core function that has historically justified professional expertise and exclusivity. Unlike previous innovations, AI doesn't just help lawyers work faster; it potentially replaces certain types of legal thinking entirely.

This explains why the spectacle around AI feels so intense and personal. Previous technological disruptions enhanced professional identity; AI potentially threatens it. The hallucination narrative serves as psychological protection against this unprecedented challenge to professional relevance.

The historical pattern also reveals why the spectacle focuses specifically on reasoning failures rather than efficiency gains. When word processors occasionally crashed, this didn't threaten lawyers' sense of professional identity. When AI occasionally "hallucinates" legal arguments, it strikes at the heart of what makes lawyers feel irreplaceable.

Understanding this historical context illuminates why rational analysis of AI capabilities has been so difficult for the legal profession. The technology doesn't just represent a new tool—it represents the first genuine challenge to the profession's intellectual monopoly in centuries.

The Exception That Reveals the Ultimate Spectacle

The software development industry's response to AI initially appears to contradict the spectacle-as-shield thesis. Despite being arguably the field most transformed by AI tools, software development has produced no equivalent to Mata v. Avianca—no scandals of developers submitting AI-generated code that catastrophically failed, no professional hand-wringing about AI hallucinations in programming, no calls for regulatory oversight of coding assistants.

But this apparent exception actually reveals the most sophisticated form of professional spectacle: the mythology of inevitable progress. The IT industry has created a cultural narrative so compelling that practitioners enthusiastically embrace tools that may ultimately displace them, all while celebrating the process as innovation and evolution.

Consider the profound irony: software developers are literally building the systems that automate their own work while maintaining that this represents professional advancement rather than professional suicide. It's like turkeys voting for Thanksgiving while making the stuffing themselves. They speak enthusiastically about AI tools that handle routine coding, basic debugging, and even complex programming tasks, framing each capability advance as liberation from tedious work rather than elimination of billable expertise.

The Ultimate Spectacle: "Creative Destruction is Innovation"

The tech industry has perfected a spectacle more powerful than any dramatic failure: the spectacle of voluntary obsolescence as career advancement. This narrative includes:

  • "Disruption is natural evolution" (rather than economic displacement)
  • "AI frees us for higher-level work" (rather than eliminating the need for our work entirely)
  • "Adapt or die is healthy selection" (rather than structural unemployment)
  • "We're building the future" (rather than automating ourselves out of jobs)

Junior developers find fewer entry-level opportunities as AI handles basic coding. Mid-level programmers get squeezed between AI efficiency and senior strategic roles. Entire specializations become obsolete. Yet the professional discourse remains relentlessly optimistic about technological progress, treating each displacement as evidence of industry vitality rather than professional vulnerability.

The Tech Industry's Spectacle vs. Legal Profession's Spectacle

Both professions use spectacle to avoid confronting uncomfortable truths about AI disruption:

  • Legal profession: Creates spectacle around AI failures to avoid examining AI capabilities
  • Tech industry: Creates spectacle around "inevitable progress" to avoid examining displacement consequences

The lawyers' resistance now appears as clear-eyed threat assessment: they recognize that AI challenges their economic model and are fighting to preserve professional relevance. The programmers' enthusiasm reveals sophisticated self-deception: they've internalized a narrative that frames their own professional displacement as technological triumph.

The Recursion

This reveals spectacle as an even more pervasive phenomenon than initially apparent. The IT industry doesn't escape professional spectacle—it demonstrates spectacle so complete and culturally embedded that participants cannot recognize their own participation. They've created a professional mythology so powerful that victims celebrate their victimization.

The legal profession's "irrational" resistance to AI may actually represent more accurate risk assessment than the tech industry's "rational" embrace. When lawyers worry about professional displacement, they're identifying a real threat. When programmers celebrate automation tools, they may be living inside the most sophisticated professional spectacle ever created.

The Exception Proves the Rule: Professional spectacle isn't just about resisting technological change—it's about managing the psychological discomfort of technological disruption through culturally acceptable narratives. Sometimes those narratives involve resistance (law), sometimes they involve embrace (tech). But both serve the same function: avoiding direct confrontation with the economic implications of professional displacement.

The Professional Psychology of Spectacle

Why do professions consistently fall into this pattern? The answer lies in the psychological functions that spectacle serves for professional communities. Focusing on dramatic failures provides cognitive comfort by confirming existing beliefs about professional expertise. It offers regulatory justification for maintaining oversight and control. It creates moral clarity in situations that might otherwise generate uncomfortable ambiguity.

Most importantly, spectacle allows professions to appear technologically sophisticated while remaining structurally conservative. By engaging deeply with AI's limitations, the legal profession can demonstrate awareness of technological change without actually adapting to it.

The spectacle also serves important identity maintenance functions. When AI succeeds at tasks previously considered uniquely human, this threatens professional self-concept. Focusing on AI failures helps restore confidence in human irreplaceability. The narrative becomes: "See? Machines can't really do what we do. They make obvious errors that any competent professional would catch."

This psychological protection comes at the cost of strategic blindness. Professions become so invested in the spectacle narrative that they cannot accurately assess either AI capabilities or their own vulnerabilities. They optimize for professional comfort rather than adaptive response to technological change.

The Cost of Misdirection

This misdirection comes with significant costs, both for professions and for society. For the legal profession, focus on hallucinations prevents honest assessment of how AI might improve legal services delivery, reduce costs, and increase access to justice. It also delays necessary adaptations to educational programs, regulatory frameworks, and business models.

For society, professional spectacle can delay beneficial technological adoption while protecting incumbent interests. When the legal profession focuses on AI's errors rather than its potential, it may prevent innovations that could make legal services more accessible and affordable for ordinary citizens.

Professional Costs include:

  • Delayed adaptation to technological change
  • Misallocation of resources toward spectacle management rather than strategic innovation
  • Loss of credibility when spectacle narratives eventually collapse
  • Competitive disadvantage relative to more adaptive professions or international markets

Social Costs include:

  • Continued barriers to legal access for underserved populations
  • Inefficient allocation of legal resources
  • Reduced innovation in legal service delivery
  • Democratic distortion of technology policy discussions

The most significant cost may be opportunity cost—beneficial innovations that don't happen because professional energy focuses on defending against technological change rather than harnessing it for social benefit.

Beyond the Spectacle

Breaking free from spectacle-driven discourse requires shifting focus from dramatic failures to systematic capabilities. Instead of asking "How can we prevent AI from generating fake cases?" the legal profession might ask "What types of legal work can AI perform competently, and how should this change our practice models?"

This shift demands intellectual honesty about the nature of professional expertise. Much of what professions claim as uniquely human may actually be pattern recognition and template application that AI can replicate. Acknowledging this reality is painful but necessary for genuine adaptation to technological change.

It also requires recognizing that professional quality and economic accessibility exist in tension. The standards that protect professional expertise may also prevent beneficial innovations that could serve broader social needs.

Concrete steps for moving beyond spectacle include:

  • Empirical research on AI capabilities in routine legal tasks
  • Client-centered rather than profession-centered technology assessment
  • Educational reform that prepares lawyers for AI-augmented practice
  • Regulatory frameworks designed to enable beneficial innovation rather than protect incumbent interests
  • Public discourse that includes client voices and social needs, not just professional concerns

Conclusion

The legal profession's fixation on AI hallucinations reveals a broader pattern of how established interests use spectacle to manage technological disruption. By focusing collective attention on dramatic but peripheral failures, professions can appear responsive to technological change while avoiding fundamental questions about their future relevance and structure.

This dynamic is not necessarily conscious or conspiratorial—it emerges naturally from the psychological needs of professional communities facing existential uncertainty. But its effects are real and significant, potentially delaying beneficial innovations while protecting incumbent interests at public expense.

The ultimate irony is that the legal profession, which trains its members to spot exactly this kind of misdirection when used by others, has fallen victim to its own rhetorical strategies. Lawyers are so busy analyzing AI's citation errors that they are missing its transformation of legal practice itself.

Perhaps most tellingly, it took AI systems themselves to point out this blind spot—suggesting that the spectacle has been so successful that even sophisticated observers cannot see beyond it. The legal profession has, in effect, hallucinated its own invulnerability while warning about AI hallucinations.

The comparison with the IT industry reveals that professional spectacle around technological change is universal—every field generates protective narratives to manage disruption anxiety, just with different polarities. The choice isn't between spectacle and rational adaptation, but between different forms of spectacle that serve the same psychological function of avoiding direct confrontation with professional displacement.

As other professions face similar technological disruptions, the legal profession's response offers both a warning and an opportunity. The warning is clear: spectacle can be a powerful form of professional self-deception that serves institutional interests while imposing costs on the broader public. The opportunity lies in moving beyond spectacular failures to engage honestly with systematic capabilities—acknowledging that technological change requires not just better quality control, but fundamental rethinking of professional purpose and value.

The question is whether professions can overcome their natural tendency toward spectacular misdirection and engage honestly with technological transformation. The alternative—maintaining professional comfort while society moves on—may prove far more dangerous than any AI hallucination. In an era where technology can democratize expertise, professions that cling to artificial scarcity through spectacle-driven misdirection risk not just irrelevance, but active harm to the public interests they claim to serve.
