What We've Been Doing in the Shadows

A field guide for practitioners learning to collaborate with LLMs in the real world

If you've ever dropped container logs into ChatGPT at midnight, or asked an LLM to analyze Terraform diffs so you could see what really changed, or had it skim a PDF you didn't have the energy to read, you're not alone. There's a whole cohort of us out here; call us shadow AI practitioners. We're the folks fighting workload chaos with static or shrinking staff, using whatever tools we can reach.

People like us make our own operator's manuals, but this missive isn't intended to be one. It's recognition. A way of saying: I see you. I've been doing this too. Here are the phases most of us go through, and the practices that keep us effective and safe.

Part I: The Journey

Phase 1: Utility Mode

At first, the LLM is a tool—a calculator, a thesaurus, a sentence-polisher. You lean on it for:

  • Polishing sentences
  • Drafting outlines
  • Checking clarity
  • Quick research summaries

Goal: Build trust in its usefulness without overcommitting. You're still treating it like any other utility—helpful but bounded.

Phase 2: The Pivot Point

One day you notice you're opening ChatGPT before scheduling a meeting or drafting a ticket. That's the shift:

  • From "sidekick" to primary thought partner
  • From showing up with raw uncertainty → showing up with structured options
  • From asking "can you help?" → asking "what are we missing?"

Signal: The LLM becomes your first stop, not your last resort. You realize you're having actual conversations with it, not just extracting outputs.

Phase 3: Iteration is the Value

The magic isn't in the first answer—it's in the refinement:

  • Back-and-forth cycles that sharpen the problem
  • Rapid "what if" exploration
  • Compressing days of discussion into hours
  • Building prompts like you'd build queries—templates that work

Practice: Expect dialogue, not perfection. The model becomes your thinking partner, not your answer machine.

Part II: Staying Safe in the Shadows

Corporate governance has its place, but in the shadows, your own practices are what keep you effective and protect your work. Here's what hardened operators do:

Verification Reflex

Always ask: "Can I test this?" The first answer is never the whole answer. Logs get summarized wrong, configs get misread, docs get hallucinated.

  • Run the query, check the diff, fire the smoke test (a sketch follows this list)
  • If it can't be tested, treat it as hypothesis, not truth
  • Trust your AI collaborator like a junior teammate: listen, learn, but verify
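
Here's a minimal sketch of that reflex in code. The terraform validate call is just one example of a cheap, real check; substitute whatever your stack provides:

```python
import subprocess

def smoke_test(cmd: list[str], timeout: int = 30) -> bool:
    """Run a suggested command and report whether it exited cleanly.
    A clean exit confirms only this check, nothing more."""
    try:
        result = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return False  # a hung check is not a passing check
    except FileNotFoundError:
        return False  # can't verify if the tool isn't installed
    return result.returncode == 0

# The model says the new config is valid. Don't repeat that claim
# until a validator agrees; until then it's a hypothesis.
if smoke_test(["terraform", "validate"]):
    print("Validator agrees; the claim graduates from hypothesis to tested.")
else:
    print("Not verified; keep treating the answer as a hypothesis.")
```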

Contextual Humility

Logs are a sliver. Docs are partial. Summaries are reflections, not reality. Always state scope and limits:

  • "No errors in the last 8 hours" doesn't mean "all is well"—it means this slice shows nothing unusual
  • Absence of errors ≠ absence of problems
  • Colleagues respect marked boundaries more than false certainty
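
One way to make this mechanical is to attach the scope to the claim itself, so the caveat can't get dropped in a paste. A minimal sketch; the function and the wording are illustrative, not a prescribed format:

```python
from datetime import datetime, timedelta, timezone

def scoped_claim(error_lines: list[str], window_hours: int) -> str:
    """Phrase a log check as a claim that carries its own scope."""
    since = datetime.now(timezone.utc) - timedelta(hours=window_hours)
    if error_lines:
        return f"{len(error_lines)} errors since {since:%Y-%m-%d %H:%M} UTC."
    # Absence of errors in the slice, not absence of problems.
    return (f"No errors in the last {window_hours}h slice "
            f"(since {since:%Y-%m-%d %H:%M} UTC); other windows unchecked.")

print(scoped_claim([], window_hours=8))
```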

Structured Skepticism

Every AI output is an opportunity to ask:

  • What's missing?
  • What's overstated?
  • What's minimized?

Shadow AI isn't about taking the first shiny paragraph and posting it in Slack. It's about treating AI as a mirror and interrogating what it reflects. That habit—of noticing emphasis and silence—is where real value emerges.

Boundary Awareness

We've all seen the warnings about secrets and PII. In the shadows, this is more than compliance—it's self-preservation:

  • If you wouldn't paste it in a public Slack channel, don't paste it into an LLM
  • Move the AI interaction up a layer: export summaries, not raw logs (a scrubbing sketch follows this list)
  • Paste diffs, not full Terraform plans
  • Keep the raw data where it belongs
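
If raw text has to leave your machine at all, scrub it first. A minimal sketch, with the loud caveat that these patterns are illustrative and no regex list is exhaustive; treat it as a last line of defense, not a substitute for keeping raw data out of the prompt:

```python
import re

# Illustrative patterns only; real secret formats vary.
PATTERNS = [
    (re.compile(r"AKIA[0-9A-Z]{16}"), "[AWS_KEY]"),          # AWS access key IDs
    (re.compile(r"(?i)bearer\s+[a-z0-9._\-]+"), "[TOKEN]"),  # bearer tokens
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),     # email addresses
    (re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"), "[IP]"),    # IPv4 addresses
]

def scrub(text: str) -> str:
    """Redact obvious secrets and PII before text leaves your machine."""
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text

print(scrub("2024-05-01 10.0.0.7 auth failed for ops@example.com"))
# -> "2024-05-01 [IP] auth failed for [EMAIL]"
```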

Iterative Framing

The first prompt is rarely the right one. Prompting is a dialogue, not a magic incantation:

  • Refine, reframe, iterate until the shape of the problem emerges
  • Build your own "prompt macros" for common tasks (one possible shape is sketched below)
  • Save patterns that work, but don't expect one-and-done
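
A prompt macro needs no tooling; a dict of templates gets you most of the way. A minimal sketch, with made-up macro names and fields; keep whatever shapes fit your work:

```python
MACROS = {
    "diff_review": (
        "You are reviewing an infrastructure diff.\n"
        "Context: {context}\n"
        "Diff:\n{payload}\n"
        "List what changed, what's risky, and what you can't tell from the diff alone."
    ),
    "log_triage": (
        "These are application logs from a {window} window; scope is limited to this slice.\n"
        "Logs:\n{payload}\n"
        "Summarize anomalies; flag anything that needs a human decision."
    ),
}

def render(name: str, **fields: str) -> str:
    """Fill a saved macro; raises KeyError if the macro or a field is missing."""
    return MACROS[name].format(**fields)

prompt = render("diff_review", context="staging VPC refactor", payload="<scrubbed diff here>")
```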

Responsibility Loop

The blunt truth: if someone acts on your AI-assisted summary and it's wrong, you own the result, not the model:

  • Label outputs as drafts until you've verified them
  • Add caveats when scope is limited
  • Stand by the work as if you wrote it solo

Why Shadow AI Exists

This isn't rebellion; it's adaptation. While governance has its place, workloads don't slow down while policies are drafted, licenses are distributed, and acceptable use is codified. Not every organization staffs generously. Not every team has bandwidth for deep peer review. Remote work and distributed systems mean many of us don't have someone sitting beside us to rubber-duck through a log trace or to critique a client proposal.

AI becomes the peer who shows up. The one who will answer at 2 a.m. without judgment. The colleague who doesn't exist on your team but should.

You're not using AI because you're lazy or chasing hype. You're using it because the work demands more eyes and ears than you have, and this is what's available.

Advice for Colleagues

Don't force it. Let habits form naturally. Some people will hit the pivot point in weeks, others in months. Forcing adoption before trust is built leads to brittle practices.

Notice the pivot. When the model becomes your first stop, you've leveled up. That's not dependency—it's integration. You're thinking with the machine, not just using it.

Save human energy. Use machines for iteration and exploration; use people for judgment, politics, and ethics. Let AI compress your uncertainty into structured options, then bring those to your team.

Keep perspective. AI can sharpen your work and expand your bandwidth, but it can't replace the human context around it—the office politics, the unspoken constraints, the relationships that make work actually work.

Building Commons from the Shadows

The danger of shadow AI is isolation. If each of us hacks away alone, we repeat mistakes and amplify risks. But if we surface what works—even informally, even in hushed channels—we build a commons.

That's what this field guide is trying to be: a start. A recognition that we're not alone, that the skills we're developing—comfort with ambiguity, metaphor as diagnostic tool, willingness to digress until structure emerges—aren't quirks. They're emerging practices.

The policies will catch up eventually. The corporate showcases will highlight the success stories. In the meantime, the shadows are where the real learning is happening.

Closing

To the shadow AI practitioners out there: I see you. I know the mix of suspicion and relief when the mirror reflects something useful. I know the unease of wondering if you're "doing it wrong" because it doesn't look like the polished case studies. And I know the satisfaction of posting a Slack update that sparks action, even if the path there was messy.

You're not alone. You're not reckless. You're figuring out how to collaborate with machines in real conditions, not lab demos. You're developing practices that will matter when this stuff goes mainstream.

That's worth naming, worth sharing, and—eventually—worth celebrating in the open.

Takeaway: Start small, lean into the pivot when it comes, harden your habits, and use AI to bring sharper, more structured contributions to your team. Governance, policies, and even cultural shifts will eventually shine down like spotlights, but the shadows keep shifting, and the skills move with them.
