AI, Agents, and Readiness
A Systems Perspective from Operations
This document captures a set of observations and framing developed through recent discussions, vendor briefings, and lived operational experience. It is not an argument for or against AI adoption. Rather, it is an attempt to clearly articulate where we are, what kind of work we are actually doing, and what risks and opportunities emerge if we scale AI and agentic systems without aligning them to that reality.
The core thesis is this:
AI systems amplify what already exists. In environments where work is still interpretive, negotiated, and context-heavy, premature automation risks amplifying misunderstanding rather than efficiency.
This framing is intended to support thoughtful experimentation while avoiding unforced errors as interest in generative AI and agents accelerates.
1. The Vendor Narrative vs. Operational Reality
Recent vendor presentations (including AWS briefings) present a coherent and compelling vision of agentic systems operating across well-defined workflows. These systems assume:
- Stable and repeatable processes
- Clear ownership and escalation paths
- Agreed-upon definitions of "normal" and "success"
- High-volume, machine-observable outcomes suitable for training and reinforcement
This vision is internally consistent. It is also aspirational.
Our current operational reality more closely resembles a sophisticated workshop than a factory:
- Multiple platforms with uneven ownership
- Legacy systems with deep historical behavior
- Tacit knowledge residing in individuals rather than artifacts
- Frequent reliance on human judgment to interpret ambiguous signals
This does not represent immaturity or failure. It reflects the normal state of complex, evolving systems.
2. DORA as an Amplification Lens
The DORA framework is useful not as a maturity scorecard, but as a diagnostic lens. Its research consistently shows that new capabilities—AI included—amplify existing conditions rather than correcting them.
In our context, this implies:
- Strong version control and platform practices will benefit from AI assistance
- Ambiguous ownership, unclear business semantics, and misrouted accountability will also be amplified
Agentic systems do not create new failure modes so much as they accelerate existing ones.
3. Individual Augmentation vs. Team Cognition
We are already in an era of significant individual AI augmentation. Current LLM usage across the organization (both sanctioned and unsanctioned) represents substantial investment and real impact.
At present, this augmentation is:
- Unevenly distributed
- Largely private
- Poorly visible at the organizational level
This creates a risk of uncoordinated intelligence: improved individual throughput without shared understanding or alignment.
Personal uses of AI—such as drafting, rubber-ducking, assumption testing, and articulation—are valuable and low-risk. However, scaling impact requires shifting from individual cognition to shared sensemaking.
4. The Tony Stark vs. Avengers Distinction
A useful metaphor emerged in discussion:
- Iron Man suits represent individual empowerment and productivity
- The Avengers represent coordinated capability, shared context, and role clarity
The risk is not that individuals are empowered—it is that empowerment scales without coordination.
The question ahead is not "how do we give everyone an Iron Man suit," but:
How do we build shared capabilities that make individual augmentation net-positive for the organization?
5. Unstructured Data and the Factory Assumption
Recent and historical thought leadership emphasizes the untapped value of unstructured data. These narratives are accurate—but they overwhelmingly describe factory-shaped problems:
- Bounded domains
- Stable rules
- High-volume repetition
- Predefined success metrics
In these environments, unstructured data can be prepared, labeled, and used to train systems that reliably extract value.
Much of our work, however, occurs upstream of consensus:
- Determining which anomalies matter
- Negotiating what "normal" means
- Deciding who is responsible when systems interact unexpectedly
Here, unstructured data is not raw material waiting to be refined—it is a mirror reflecting unresolved questions.
6. Training, Agents, and Premature Certainty
Training-based approaches (fine-tuning, reinforcement learning) implicitly assume:
- Sufficient volume of comparable examples
- Agreement on evaluation criteria
- Stability of the underlying task
Applying these approaches before meaning has stabilized risks encoding today's misunderstandings as tomorrow's ground truth.
This is the core concern with premature agentic automation:
Agents excel at executing agreements. Much of our hardest work is still negotiating them.
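The three preconditions above can be made concrete as a simple readiness check. The following sketch is illustrative only: the field names and thresholds are hypothetical assumptions, not organizational standards, but they show how "premature certainty" can be turned into an explicit, inspectable gate rather than an implicit bet.

```python
# Illustrative readiness check for training-based approaches.
# All field names and thresholds are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class DatasetSnapshot:
    example_count: int          # volume of comparable examples
    annotator_agreement: float  # 0..1, proxy for agreement on evaluation criteria
    label_churn_rate: float     # fraction of labels revised per quarter

def training_gaps(snapshot: DatasetSnapshot,
                  min_examples: int = 1000,
                  min_agreement: float = 0.8,
                  max_churn: float = 0.05) -> list[str]:
    """Return the list of unmet preconditions (empty means ready)."""
    gaps = []
    if snapshot.example_count < min_examples:
        gaps.append("insufficient volume of comparable examples")
    if snapshot.annotator_agreement < min_agreement:
        gaps.append("no agreement on evaluation criteria")
    if snapshot.label_churn_rate > max_churn:
        gaps.append("underlying task is not yet stable")
    return gaps

# A dataset where meaning has not stabilized fails two of the three checks,
# even though it has enough raw volume to look trainable.
gaps = training_gaps(DatasetSnapshot(example_count=1500,
                                     annotator_agreement=0.6,
                                     label_churn_rate=0.2))
```

The point of a gate like this is not the specific numbers; it is that the decision to train becomes a negotiated agreement rather than a default.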
7. Where We Are Now: Unknown Unknowns
At present, we are operating in a phase characterized by unknown unknowns:
- Early signals of risk and opportunity
- Emerging mismatches between tooling and practice
- Questions that are not yet fully articulable
In this phase, the most valuable work is epistemic rather than operational:
- Surfacing assumptions
- Creating shared language
- Making invisible dependencies visible
This work does not immediately translate into roadmaps or metrics—but it determines whether future decisions are sound.
8. A Responsible Path Forward
This framing does not argue for slowing experimentation. On the contrary, it supports bounded, intentional exploration:
- Favor AI systems that summarize, correlate, and surface patterns
- Avoid granting AI authority to act across unclear ownership boundaries
- Use AI to externalize tacit knowledge into shared artifacts
- Be explicit about which judgments must remain human
In short:
We should resist accidental operationalization while enabling deliberate operationalization.
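One way to make the "summarize, don't act" boundary concrete is an allow-list policy over proposed agent actions. The sketch below is a minimal illustration under assumed action names, not a proposed implementation; the design point is that read-only sensemaking is permitted by default, actions that cross ownership boundaries require a human, and anything unrecognized is denied rather than guessed at.

```python
# Minimal sketch of an allow-list guardrail for agent actions.
# Action names and the routing policy are illustrative assumptions.
AUTONOMOUS_ACTIONS = {"summarize", "correlate", "surface_pattern"}
HUMAN_APPROVAL_ACTIONS = {"modify_config", "restart_service", "escalate"}

def route_action(action: str) -> str:
    """Decide how a proposed agent action is handled."""
    if action in AUTONOMOUS_ACTIONS:
        return "allow"          # read-only sensemaking: safe by default
    if action in HUMAN_APPROVAL_ACTIONS:
        return "require_human"  # crosses an ownership boundary
    return "deny"               # unknown actions are rejected, not improvised
```

Default-deny is the deliberate choice here: it is the policy-level expression of resisting accidental operationalization while leaving a clear, auditable path for deliberate operationalization.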
Closing
The goal is not to delay progress, but to ensure that progress is aligned with reality.
AI and agentic systems will amplify whatever we feed them—strengths and weaknesses alike. Our responsibility, at this stage, is to make sure we understand which is which.
This document is offered as a shared framing to support thoughtful discussion, experimentation, and decision-making as we move forward.