That Old “Source of Truth” Problem

AI Adoption in Small Business and the Persistence of Organizational Entropy

The Historical Rhyme

Every decade produces a technology wave aimed at small and medium-sized businesses, and each follows a recognizable pattern: create anxiety about falling behind, offer simplification, productize the consulting engagement, use templates under the hood, and charge for implementation plus maintenance.

In the early 2010s, it was websites. "You're invisible without a web presence." Agencies installed WordPress, applied a theme, tweaked CSS, and charged five to fifteen thousand dollars. Some made serious money. Most commoditized themselves. The moat disappeared as Wix, Squarespace, and Shopify abstracted the work away. The agencies that survived specialized.

In 2026, replace "website" with "AI." The anxiety pitch is updated — "your competitors are automating," "your staff could be 30% more efficient," "don't get left behind" — but the structure is identical. Instead of ThemeForest templates, the underlying commodity is now Zapier plus OpenAI, Make plus Claude, Notion AI, Copilot, and vertical SaaS with built-in AI features. The core service becomes "AI configuration plus workflow mapping."

The question is whether this cycle resolves the same way, or whether something structurally different is happening.

Where the Analogy Holds

The parallels are strong at the surface layer. SMBs lack technical literacy, have fragmented tooling, are risk-averse, don't know what's realistic, and fear being left behind. That combination is prime territory for honest advisors, opportunistic vendors, and overpromisers alike.

The consulting model follows the same productization arc: intake form, custom analysis, deliverable report, optional implementation. It reduces ambiguity for buyers. It's scalable in theory. And it faces the same structural risk — as platforms embed AI natively into CRM, accounting, ticketing, and operations software, the integration burden shrinks and the consultant's value proposition compresses.

The maturation curve is predictable: early arbitrage window, template commoditization, platform absorption, margin squeeze. Only vertical specialists survive.

Where the Analogy Breaks

AI introduces characteristics that previous technology waves did not.

Probabilistic behavior. Websites were deterministic. A misconfigured server stayed misconfigured. A bad spreadsheet formula stayed bad. Problems were stable in their wrongness, and organizations could develop rituals around known brokenness that held over time. AI systems produce outputs that vary, drift, and degrade in ways that don't announce themselves. A prompt that works well in March may behave differently in June because the underlying model was updated. Context handling changes. Cost profiles shift. The system doesn't break visibly — it degrades fluently.

Temporal inconsistency. Consider a company where three employees give three different answers to the same product question. That's spatial inconsistency — it's messy, but discoverable. You can triangulate. AI agents are marketed as solving this: one system, one answer, one source of truth. And they do — for a window. Then the model updates, the knowledge base goes stale, the context drifts, and the agent starts giving different answers over time rather than across people. Spatial inconsistency is replaced by temporal inconsistency, which may be harder to detect because there's no built-in signal that something changed. The system still sounds authoritative.

Consider a small HVAC company that deploys an AI agent to handle service scheduling and customer inquiries. In month one, the agent correctly quotes the company's diagnostic fee, accurately describes their service area, and routes emergency calls appropriately. By month four, the model provider has updated the underlying model, the company has changed its pricing but nobody updated the knowledge base, and a new municipal code affects service area boundaries. The agent continues answering every call with complete confidence. It's now wrong about pricing, partially wrong about service areas, and handling emergency routing with the same logic it used before the company hired two new technicians. None of this produces an error message. The phones keep ringing. The agent keeps answering. The owner thinks the system is working because the metric they watch — calls handled — looks healthy.

Cognitive labor compression. Previous technology waves mostly affected presentation (websites), infrastructure (cloud), or mechanical process (automation). AI compresses cognitive labor — drafting, analysis, classification, summarization, decision support. That feels more personal and raises different organizational dynamics around displacement, trust, and accountability. When humans disagree or make errors, organizations adjudicate socially — through conversation, escalation, judgment calls. When an automated system produces a confident mistake, someone must be accountable, and that "someone" becomes a new organizational chokepoint. Automation doesn't eliminate coordination problems; it converts them into accountability problems. The more an agent operates autonomously, the more it needs a designated escalation owner, and the more that ownership becomes a political role rather than a technical one.

Data exposure as economic risk. Previous technology waves involved data, but the exposure model was different. A website displayed information you chose to publish. A CRM stored customer records on servers you controlled or contracted. AI systems, particularly those built on third-party model APIs, route operational data — customer conversations, internal communications, financial details, contracts — through external inference infrastructure. For SMBs, the fear isn't only "will the agent be wrong" but "will it leak something, mis-send something, or put us on the hook." A dental practice that feeds patient scheduling into an AI workflow is making an implicit decision about where that data travels. A contractor whose AI assistant drafts estimates from historical job data is exposing pricing strategy to a model provider's infrastructure. The governing principle should be straightforward: never point an automated system at data you aren't prepared to have mishandled. This isn't paranoia — it's liability arithmetic. When fluent degradation produces a confident mistake involving customer data, the question of who holds responsibility becomes a legal question, not just an operational one. For many SMBs, this data exposure calculus may prove to be the binding constraint on adoption — not capability, not cost, but willingness to accept the risk surface.

The Source of Truth Problem

Every layer of AI adoption eventually arrives at the same dependency: someone has to write the canonical specification, maintain it, and notice when reality drifts from it.

An AI agent for customer support needs a maintained knowledge base. An agent that checks code repository structure needs a gold standard template. A workflow automation needs a documented process with defined ownership. A CI/CD agent needs artifacts and playbooks. Each of these is only as reliable as the source of truth it references — and that source of truth requires human authorship, human maintenance, and human attention to drift.

But "source of truth" is deceptively singular. In practice, most organizations operate with multiple competing truths that coexist in tension. There is policy truth — what leadership says the process is. There is process truth — what people actually do day to day. There is system truth — what the software enforces or permits. And there is audit truth — what the logs and records show happened. These rarely align perfectly, and in many organizations, the gaps between them are where institutional knowledge lives.

AI systems force a choice between these layers in ways that previous tools did not. A customer-facing support agent trained on policy documentation will give the "correct" answer — and be operationally wrong, because frontline staff learned years ago that the policy doesn't account for common edge cases. A ticket summarizer trained on actual support interactions will be operationally accurate — and potentially policy-noncompliant, because it reflects workarounds that were never formally authorized. Consider a small insurance agency where the official procedure for claims intake is a twelve-step documented workflow, but every experienced processor knows that steps four through seven can be collapsed into a single phone call for routine claims. An AI system trained on the documentation will enforce the full twelve steps. One trained on actual behavior will skip them. Both are "right." Neither is complete. And the organization now has to decide which truth it wants to encode — a decision it has successfully deferred for years by letting humans navigate the gap informally.

The argument isn't just that organizations need a specification. It's that organizations have rival specifications, and AI adoption forces them to pick — or to confront the fact that they've been operating on constructive ambiguity that a deterministic-looking system can't sustain.

This is not a new problem. It is the oldest problem in organizational technology. But AI makes it more consequential because the system that consumes the source of truth is probabilistic rather than deterministic. A static system built on an outdated spec produces consistently wrong outputs that are recognizably wrong. A probabilistic system built on an outdated spec produces variably wrong outputs that may pass casual inspection.

The dependency chain is: canonical specification → verification procedure → enforcement point → monitoring for drift. Without all four, any AI system is a narrator, not an operator. And most organizations — including publicly traded ones with dedicated engineering teams — struggle to maintain even the first link in that chain.
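The four-link chain above can be sketched in a few lines of code. This is a minimal illustrative sketch, not a real framework: the names (`CanonicalSpec`, `diagnostic_fee`, the 90-day review window) are hypothetical stand-ins for whatever facts, owners, and cadences a particular business actually has.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class CanonicalSpec:
    """Link 1: the canonical specification, with explicit ownership."""
    facts: dict[str, str]      # e.g. {"diagnostic_fee": "$95"}
    owner: str                 # who is accountable for keeping this current
    last_reviewed: datetime

def verify(spec: CanonicalSpec, observed: dict[str, str]) -> list[str]:
    """Link 2: verification — compare what the system says against the spec."""
    return [k for k, v in spec.facts.items() if observed.get(k) != v]

def enforce(mismatches: list[str]) -> bool:
    """Link 3: enforcement — gate deployment or publication on a clean check."""
    return len(mismatches) == 0

def is_stale(spec: CanonicalSpec, max_age_days: int = 90) -> bool:
    """Link 4: drift monitoring — flag specs nobody has reviewed recently."""
    return datetime.now() - spec.last_reviewed > timedelta(days=max_age_days)
```

The point of the sketch is how little of it is AI: three of the four links are plain organizational bookkeeping that someone has to own.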

This isn't an AI problem. It's an organizational specification problem that AI inherits and amplifies.

Internal vs. External AI: A SWOT Distinction

Most AI consulting for SMBs focuses on internal process automation: proposal drafting, CRM enrichment, customer service, marketing copy, workflow automation. This is the "Iron Man suit" model — individual capability augmentation bolted onto existing structures.

Internal AI strengths: immediate ROI potential, low infrastructure barrier, easy to demonstrate. Weaknesses: easily commoditized, often incremental rather than transformational, accessible to DIY adoption. Threats: platform vendors bake it into existing SaaS tools, collapsing the consultant's value proposition.

The less-discussed dimension is external AI — how AI reshapes the interfaces between a company and its ecosystem. Supply chain forecasting, vendor negotiation analysis, pricing elasticity, logistics optimization, competitive intelligence, demand prediction, risk modeling. This is strategic rather than operational, and it's where AI's leverage on SMBs may actually be most significant.

But external AI adoption faces a different barrier. It requires cleaner historical data, integrated systems, forecasting discipline, structured KPIs, and analytical capacity that most SMBs lack. The irony is that many publicly traded companies with dedicated data teams also can't reliably claim those capabilities.

The existential risk for SMBs may not be "they failed to adopt AI internally." It may be that the ecosystem around them — vendors, supply chains, logistics providers, competitors — becomes AI-optimized while they don't adapt. AI hits SMBs through their vendors and market environment, not through consultants. That's a harder risk to see and a harder one to sell consulting around.

There is also a time-to-value asymmetry that shapes the market. Internal AI delivers fast, visible wins — a proposal drafted in minutes instead of hours, a support queue deflected, a scheduling conflict resolved automatically. External AI requires longer horizons, cleaner data, and more organizational discipline before it yields measurable results. This asymmetry explains why consultants overwhelmingly sell the internal "Iron Man suit" model even if external ecosystem pressure is the more strategically significant force: internal wins pay quickly and demo well. The deeper work doesn't.

Furthermore, external AI may arrive not as a capability SMBs choose to adopt but as vendor pressure they must navigate. Every SaaS provider an SMB already pays is racing to embed AI features — often bundled with new pricing tiers, usage limits, and data-sharing terms. The "external AI" story may be less about SMBs strategically deploying supply chain analytics and more about renegotiating contracts that now include AI capabilities they didn't ask for, data terms they don't fully understand, and cost structures that shifted beneath them. That's not technology adoption. That's procurement and risk management. And it's another surface where the source-of-truth problem bites — now applied to vendor agreements rather than internal workflows.

The Mechanicus Model of Technology Adoption

There is a counterargument to the thesis that AI requires robust organizational infrastructure to deliver value.

Most organizations have never understood their technology deeply. They develop rituals around what works. Knowledge becomes liturgical rather than structural. Conventions emerge through practice rather than specification. And it holds together well enough — sometimes for remarkably long periods.

This is essentially the Warhammer 40K Adeptus Mechanicus model of technology management: maintain the machine spirits through ritual, don't ask how the systems actually work, apply the sacred unguents, and trust that the organism adapts. It's horrifying from an engineering perspective. It's also an uncomfortably accurate description of how most organizations function.

The assumption baked into most technology sales is that humans will adapt to whatever you ship them. And historically, that assumption has been mostly correct. People adapted to email, CRM, cloud, Slack — not gracefully, not optimally, but they adapted. The organism is resilient.

So the challenge to the infrastructural thesis is: maybe SMBs just absorb AI the way they absorb everything else. The agent gives slightly different answers in month six. Nobody notices or cares because it's still better than what they had before, which was nothing — or which was three humans giving three inconsistent answers. The bar isn't perfection. The bar is better than the previous chaos.

The question is whether AI's probabilistic nature breaks this pattern. Previous technology failures were static — you could build stable rituals around known brokenness. AI failures shift under the ritual. The incense that worked last quarter may not work this quarter because the machine spirit has opinions that evolve without notification.

The Mechanicus model holds until the environment changes faster than the rituals can update. That's the core claim about probabilistic drift in one sentence. And it suggests what "ritual" might look like if organizations take AI maintenance seriously: maintained prompt libraries, operational playbooks, regular human review cadences, defined escalation rules, periodic output audits. At that point, ritual becomes engineering — which is, arguably, what engineering always was. The question is whether SMBs will develop these rituals organically through pain and repetition, or whether the drift will be subtle enough that they never feel the need to formalize what they're doing until something fails visibly enough to force the issue.
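One concrete form that "ritual becomes engineering" could take is a periodic golden-question audit: run a pinned set of business-critical questions through the agent on a schedule and flag any answer that no longer contains the facts the business depends on. The sketch below is hypothetical — `ask_agent`, the questions, and the expected facts are illustrative placeholders, not a real API.

```python
# Pinned audit cases: questions the business cannot afford to answer wrong,
# paired with the facts a correct answer must mention.
AUDIT_CASES = [
    {"question": "What is the diagnostic fee?", "must_contain": ["$95"]},
    {"question": "Do you serve Elm County?", "must_contain": ["yes"]},
]

def run_audit(ask_agent, cases):
    """Run each pinned question through the agent; return (question, missing
    facts) pairs for every answer that fails its check."""
    failures = []
    for case in cases:
        answer = ask_agent(case["question"]).lower()
        missing = [f for f in case["must_contain"] if f.lower() not in answer]
        if missing:
            failures.append((case["question"], missing))
    return failures
```

Substring checks are crude, but the discipline matters more than the mechanism: the audit only works if someone owns `AUDIT_CASES` and updates it when pricing or policy changes — which is the source-of-truth problem again, one level up.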

Whether this distinction matters practically — or whether organizational resilience simply absorbs it as it has absorbed everything else — remains genuinely uncertain.

The Infrastructure Question

Does AI adoption for SMBs constitute an infrastructure wave, or is it a feature absorption cycle?

The infrastructure argument holds if: AI cost management becomes painful, compliance regimes tighten around AI usage, data governance incidents become common, tool sprawl creates operational chaos, and cross-platform orchestration remains fragmented. Under those conditions, there's demand for governance frameworks, cost observability, security posture, deployment standards, and ongoing monitoring — an "AI Managed Service Provider" layer analogous to cloud MSPs.

The feature absorption argument holds if: major platforms successfully embed AI as native toggles within existing SaaS, orchestration complexity gets abstracted into platform infrastructure, and regulatory pressure stays light. Under those conditions, the mid-layer consultant's window closes as the underlying capability becomes someone else's checkbox feature.

Both trajectories are currently alive. The arbitrage window for AI consulting clearly exists today. Whether it resolves into a durable professional discipline or compresses into platform features is the central uncertainty.

Agent orchestration — particularly through emerging protocol layers like MCP — introduces genuine technical novelty. Non-deterministic tool selection, context window management, emergent failure modes, cost variability, and model behavior drift are qualitatively different from deterministic API integrations. But technical novelty does not automatically translate into a durable market category. Many technically novel layers — Hadoop, early Kubernetes consultancies, blockchain infrastructure, serverless-first agencies — were eventually absorbed into platform ecosystems or faded as abstractions matured.

The most honest assessment is that AI sits at the intersection of pattern recognition and possible inflection point, and both can be true simultaneously.

There is also a third possibility between "durable discipline" and "checkbox absorption" that deserves naming: seam-work persists. Even if every major platform successfully embeds AI into its own surface, SMBs don't live inside a single platform. They live across email, accounting, CRM, scheduling, ticketing, payment processing, and project management — often from different vendors, rarely well-integrated. The seams between those systems are where complexity accumulates, and platform-native AI doesn't resolve cross-platform seams. If that fragmentation endures, the consulting role doesn't vanish — it migrates from "build AI features" to "manage AI across the gaps." That's less glamorous than either the infrastructure vision or the platform absorption narrative, but it may be the most realistic steady-state: a permanent market for people who understand the joints between systems that were never designed to talk to each other.

The Feedback Loop Gap

One underappreciated distinction between AI adoption and previous technology waves is feedback loop architecture.

When a conventional system fails — a build breaks because a CI directory is missing, an app points at the wrong database, a migration step is skipped — the failure is typically visible, attributable, and correctable through a tight loop. Someone notices, someone investigates, someone fixes it. The correction cycle can be minutes.

AI system degradation often lacks this tight feedback loop. There's no build failure. No error log. No red indicator. The system continues producing outputs that look plausible. The drift accumulates in the space between "this used to work correctly" and "this now works differently," and that space is invisible without deliberate instrumentation.

For organizations that already struggle with coordination bandwidth — where system complexity exceeds the organization's ability to maintain shared mental models — AI doesn't reduce entropy. It can make broken systems look smoother than they are, generating confident summaries of chaotic operations, presenting fluent analyses built on stale assumptions.

The durable value in AI adoption may therefore be less about which tools to deploy and more about building the feedback infrastructure to detect when those tools stop performing as expected. That's monitoring, evaluation, drift detection, and version control applied to probabilistic systems — a discipline that doesn't fully exist yet but that borrows heavily from DevOps, observability, and quality engineering traditions.
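A minimal version of that instrumentation is baseline comparison: capture the agent's answers to a fixed question set when the system is known-good, then periodically re-ask and flag answers that have moved. The sketch below uses crude lexical (Jaccard) similarity as a placeholder — real evaluation would use semantic similarity or fact checks — and the question/answer data and 0.5 threshold are illustrative assumptions.

```python
def jaccard(a: str, b: str) -> float:
    """Crude lexical similarity between two answers, in [0, 1]."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if (ta | tb) else 1.0

def detect_drift(baseline: dict[str, str],
                 current: dict[str, str],
                 threshold: float = 0.5) -> list[str]:
    """Return the questions whose current answer has drifted too far from
    the answer recorded when the system was last known to be correct."""
    return [q for q, base_answer in baseline.items()
            if jaccard(base_answer, current.get(q, "")) < threshold]
```

This is observability applied to a probabilistic system: the baseline is a versioned artifact, the comparison runs on a cadence, and a flagged question triggers human review rather than automatic correction.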

There is a related failure mode that compounds the feedback gap: measurement corruption. Once an organization begins tracking "time saved by AI" or "tickets deflected" or "proposals generated per week," it will optimize toward those metrics. Goodhart's law applies with particular force here because AI systems are unusually good at producing outputs that satisfy surface metrics while degrading in ways metrics don't capture. A support agent that deflects 40% of tickets looks like a success — until you notice that escalation quality has degraded because the agent is resolving ambiguous cases with confident but incomplete answers, and customers who needed human attention are instead receiving fluent non-help. A proposal generator that triples throughput looks transformative — until close rates drop because the proposals are generic in ways that only become visible downstream. SMB adoption is especially vulnerable to this because the KPIs that justify the AI investment are often crude, and the organization lacks the analytical infrastructure to detect when the metric is being satisfied while the underlying value is hollowing out. The feedback loop isn't just missing for technical drift — it's missing for outcome drift, which is harder to instrument and easier to ignore.

What's Actually Being Sold

Strip away the technology framing, and most AI consulting for SMBs is selling one of two things.

The first is tool selection and configuration — recommending which AI products to adopt, wiring them into existing workflows, and providing initial setup support. This is valuable in the short term and vulnerable to commoditization. It's the WordPress agency model updated for 2026.

The second is organizational specification — helping businesses formalize their processes, document their knowledge, define ownership, and build the connective tissue that AI systems require to function reliably. This is business process re-engineering in new packaging. It's older than AI, older than cloud, older than the internet. Hammer and Champy published Reengineering the Corporation in 1993. The language changes; the discipline doesn't.

The first sells easily and compresses quickly. The second is harder to sell, harder to deliver, and more durable — because it addresses the persistent bottleneck that no technology wave has solved: organizational coherence.

There is a winner's curse dynamic in this market. Tool selection is demoable — you can show a client an agent answering calls in real time. Specification work is not demoable — you're telling a business owner things they already half-know about their own operations, and charging them for the formalization. The reaction is predictable: "We're paying you to tell us what we already know." The resistance is itself part of the entropy story. The work that would make AI adoption durable is precisely the work that clients resist paying for, because it looks like overhead rather than transformation. This explains why the market keeps reproducing shallow tool-configuration offerings even when deeper specification work is demonstrably superior: the durable work is less demoable, less emotionally satisfying, and harder to justify on a purchase order. Consultants who attempt it face a sales cycle that punishes rigor.

AI doesn't eliminate the need for process clarity. It amplifies it. If a workflow is chaotic, AI automates the chaos faster. If a process is unclear, AI hallucinates structure on top of ambiguity. If ownership isn't defined, AI outputs become unaccountable artifacts. The technology is rarely the constraint. The human support structure is.

Whether the current wave of AI consulting acknowledges this or glosses over it in favor of tool demos and efficiency promises will likely determine which firms build durable practices and which replicate the WordPress agency lifecycle.

The Unsettled Middle

The most accurate characterization of AI's position in the SMB landscape is that it occupies an unsettled middle — genuinely more capable than previous technology waves, but deployed into organizations with the same structural limitations that have constrained every previous wave.

The technology can tolerate ambiguity better than its predecessors. LLMs can extract structure from messy text, summarize across fragmented systems, and draft scenarios without pristine data pipelines. That lowers the adoption threshold. But it doesn't eliminate the underlying weaknesses — it hides them more effectively, which introduces its own risks.

The market is forming. Consultants are repositioning. Platforms are racing to absorb. Regulation is uncertain. The feedback loops that would tell us whether AI adoption is generating durable value for SMBs or masking fragility behind fluent outputs don't yet exist at scale.

What persists through every cycle is the gap between what technology can do and what organizations can sustain. That gap is not a technology problem. It's a human coordination problem. And it's where the real work — and the real value — has always been.
