
I've been working with AI tools since 2023, and not casually: I've been integrating them into actual business operations. My own business operations. Billing automation, documentation systems, security analysis. Real production systems that have to work, not proof-of-concept demos.
And here's what I've learned that I think matters for any small business owner looking at this technology: AI is really good at revealing whether you know what you're doing.
AI as a Thinking Partner (If You Force It To Be One)
I've read a lot of books on AI. Not the "10 ChatGPT prompts to revolutionize your business" type—books about using these tools as thinking partners. How to communicate with precision. How to structure problems clearly.
Here's what that taught me: AI exposes gaps in your own thinking.
When you divide by zero on a calculator, you get an error. When you leave a gap in your instructions to AI, it manufactures a plausible answer. It fills in the blanks. And if you're not paying attention, you won't even notice.
That's where you get the "obviously AI" feel in so much published content—vague prompts producing vague output that sounds professional but says nothing specific.
The flip side is valuable: when you force yourself to communicate precisely enough that the AI produces something actually useful, you've clarified your own thinking. The tool becomes a mirror for how well you understand what you're trying to accomplish.
But only if you're looking at that mirror critically.
The Ground Keeps Moving
AI implementation methods are evolving faster than anything else I've seen in 20+ years of technology work. We're not talking about new features or incremental improvements. We're talking about fundamental shifts in how these tools work, what they can do, and how you integrate them.
What worked three months ago might be obsolete now. The solution I built for billing automation last quarter needed significant rework this quarter, not because it broke, but because better approaches emerged.
Adapting to those changes in near-real-time while maintaining production systems is its own challenge. You're not just implementing AI—you're implementing something that's fundamentally unstable by design. That requires a different mindset than traditional technology deployment.
Where Your Data Actually Goes (And Why Quality Matters)
Here's something most people don't think about carefully enough: AI will take in everything you give it.
If you're using a public model (Claude, ChatGPT, Gemini, Copilot, whatever), you're feeding it content that the operators of those services may keep for any reason they deem necessary, regardless of what the terms and conditions say or what privacy settings you think you've configured.
There should be NO expectation of privacy.
But that's not the only reason I stopped using and recommending ChatGPT. The bigger problem—especially with the free tier that most people actually use—is how it handles context. It frequently summarizes conversations (essentially making a copy), then bases the next response on that summary rather than the original content. You're getting a copy of a copy, and the degradation compounds.
This might be less of an issue in the paid versions, but how many casual users are paying? Most people's experience with "AI" is the free tier of whatever tool they stumbled across first, and they're making decisions based on fundamentally flawed interactions.
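To make that degradation concrete, here's a toy sketch. This is my own illustration of what a rolling-summary context strategy does to specifics, not any vendor's actual code, and the client details in it are invented:

```python
# Toy illustration only -- not any vendor's actual implementation. It mimics
# a rolling-summary context strategy: each turn, the prior conversation is
# replaced by a lossy summary, and the next answer sees only that summary.

def lossy_summary(text: str, keep_ratio: float = 0.5) -> str:
    """Crude stand-in for summarization: keep the first half of the words."""
    words = text.split()
    return " ".join(words[: max(1, int(len(words) * keep_ratio))])

# Hypothetical client detail with an exception buried at the end.
context = ("Client invoices are net-30, except Project Alpha, "
           "which bills net-15 with a 2 percent early-pay discount.")

for turn in range(1, 4):
    context = lossy_summary(context)  # copy of a copy, each turn
    print(f"After turn {turn}: {context}")

# After one pass the net-15 detail is already gone; a response built on
# this summary has no way to know it ever existed.
```

The real mechanism is more sophisticated than truncation, but the failure mode is the same: whatever the summary drops, the model can never get back.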
There are methods to insulate your operations against both the privacy risks and the quality problems. Private deployments, local models, properly configured business agreements, choosing tools that actually maintain context properly. But they're not obvious, they're not simple, and most small businesses aren't even thinking about these problems yet.
When you're handling client data—especially if you're in a regulated industry or dealing with sensitive information—this isn't optional due diligence. It's fundamental operational security. And when you're trying to use AI as an actual thinking partner, the quality of how it maintains context matters as much as its raw capabilities.
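As one concrete example of the "local models" option: here's a minimal sketch, assuming a locally running Ollama instance with its default HTTP API on port 11434 and a model already pulled (e.g. `ollama pull llama3`). The point is architectural, not the specific tool: the prompt, and any client data inside it, never leaves your machine.

```python
# Minimal sketch of a private deployment: query a locally hosted model so
# prompts (and any client data in them) stay on your own hardware.
# Assumes Ollama is running locally with a model already pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one complete JSON response
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_local_model("Draft a one-paragraph summary of net-30 billing terms."))
```

The same shape applies to any self-hosted runtime. What matters is that the network boundary is yours, not a vendor's.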
The Question Nobody Wants to Ask
Here's the uncomfortable one: What AI-driven process will disrupt your core business model?
Not "might someday." What's happening right now that makes your current approach less valuable?
If your business model depends on being the only one who can do something efficiently, and AI just made that thing trivial, you have a problem. If your value is in execution rather than judgment, you have a problem. If clients are paying for your time rather than your expertise, you have a problem.
I'm not saying those business models are doomed—but they're under pressure in ways that weren't true two years ago. And the pressure is increasing.
The businesses that will be fine are the ones where the value is in judgment, context, accountability, and relationships. The ones where knowing what to do matters more than doing it quickly.
When Everything Sounds Professional, Nothing Is
The first problem I ran into wasn't that AI-generated content was wrong—it was that it sounded right. Professional terminology. Proper structure. The polish that usually signals expertise.
That's actually dangerous.
I can generate a network security policy in 90 seconds that reads like it came from a consulting firm. But can I tell when that policy misses something critical? Can I spot when the recommendations don't fit the actual environment?
That requires judgment. And judgment comes from understanding the systems you're working with, not from better prompts.
The AI doesn't have context about your business, your clients, your constraints, your actual risks. You do. If you can't evaluate what it produces, you're not leveraging a tool—you're outsourcing decisions you're still responsible for.
Fix Your Process Before You Amplify It
There's a process philosophy often attributed to Elon Musk: delete the bad steps first, then perfect what remains, then automate.
That order matters.
If you automate a broken process, you just fail faster. AI does the same thing—it amplifies what you give it. If your thinking is unclear, you'll produce unclear output at scale. If your process has gaps, AI will fill them with plausible-sounding nonsense.
I've spent the better part of two years now building AI into my operations. The wins are real—I've automated billing processes that used to eat hours every week, built documentation systems that actually stay current, developed analysis tools that surface issues I'd have missed manually.
But every one of those wins came after figuring out what the process should actually be. What could be deleted entirely? What needed to be simplified first? Where was my judgment actually required, and where was I just going through the motions?
The real value hasn't been from doing more—it's been from using AI to eliminate noise so I can focus on what actually requires thinking. The routine stuff that had to get done but didn't need my judgment? That's what got automated. The strategic decisions that matter? Those get more of my attention now, not less.
AI is leverage. And leverage works in both directions. Use it on the right things and you'll get genuine productivity gains. Use it on the wrong things and you'll just make mistakes faster.
The Line Between Assisted and Abdicated
There's a difference between using tools and handing off responsibility.
When I use AI to draft documentation or initial analysis, I'm being assisted. I still own the output. I can explain why it's right (or fix it when it's wrong). I understand the underlying systems well enough to catch problems.
But if I couldn't evaluate that output, if I were just trusting that it "sounded right", I'd have abdicated responsibility while still claiming the work as mine.
From the outside, these look identical. Both produce professional-looking results. Only one is actually worth anything.
And this matters more as these tools improve. The gap between "sounds right" and "is right" gets harder to spot. If you can't tell the difference, you shouldn't be putting your name on it.
What This Actually Means
If you're thinking about using AI tools in your business, here are the questions that matter:
Can you tell when the output is wrong? Not just obviously wrong, but subtly unsuited to what you're trying to accomplish?
Are you using it to do more of what matters, or just to do more? There's a huge difference between productive leverage and busywork at scale.
Where does your judgment actually add value? That's where you need to stay engaged, tools or not.
Do you know where your data is going? And what happens to it once it gets there?
What part of your business model becomes commoditized if this technology gets 100x better in the next 18 months? Because that's the trajectory we're on. Not 10x. Not incremental improvement. Exponential change on a compressing timeline.
The Bottom Line
AI tools are genuinely useful. I use them daily. They've made parts of my business significantly more efficient. But they're tools, not substitutes for understanding what you're doing.
The technology keeps changing. The fundamentals of running a good business haven't: know the difference between activity and progress, understand where your judgment matters, stay accountable for outcomes, and protect what needs protecting.
I haven't had clients engage me specifically for AI implementation yet. That might be a chicken-and-egg problem—hard to market work you haven't done, hard to do work you haven't marketed. But I've done the implementation in my own practice, learned what works and what doesn't, and formed some strong opinions about what actually matters versus what's just hype.
If that's a conversation you need to have about your business, I'm interested in having it. But I'm not going to claim expertise I don't have or sell you solutions that sound good but don't fit your actual problems.
That would be exactly the mistake I'm warning about.
A Note on How This Was Written
This post started with someone else's essay on X that contained sharp insights wrapped in apocalyptic framing. I asked Claude to help me extract what was valuable and eliminate what wasn't, using my own voice and actual experience.
We spent about an hour iterating. Claude produced fluent content that positioned me as having client experience I don't have yet—I caught it, we stripped it out. We added details from my actual implementation work. We questioned tone, cut things that didn't work, refined what did.
The thinking is mine. The experience is mine. The judgment about what matters is mine. Claude helped me articulate it more clearly than I would have on my own.
That's the difference between using AI as a tool and outsourcing your thinking to it. The same process applies whether you're writing a blog post or implementing a security policy—if you can't evaluate what the AI produces and own the output, you shouldn't put your name on it.