If I See One More Article Saying We're All Using AI Wrong, I'm Going to Lose My Mind
The AI discourse is full of experts telling you you're doing it wrong. But most organizations are navigating a genuinely complex transition at different speeds from different starting points, and the advice that ignores context isn't strategy — it's content marketing.
I need to get something off my chest.
Barely a week goes by without another piece landing in my feed from a consultant, a vendor, or a thought leader with a freshly minted AI certification, confidently explaining that we are all using AI wrong. The framing changes, but the structure is always the same: a list of common mistakes, a description of the “right” approach, and a call to action that leads, conveniently, to a framework, a product, or a discovery call.
I’m tired of it. Not because the advice is always bad. Some of it is fine. I’m tired of it because it assumes something that is demonstrably untrue: that there is a single correct way to use AI, that the organizations failing to follow it are making avoidable errors, and that the path from where you are to where you should be is obvious and linear.
It isn’t. And the sooner we stop pretending it is, the more useful our conversations about AI will become.
Everyone Is Standing at a Different Trailhead
Here is the reality that the “you’re doing it wrong” genre consistently glosses over: organizations are not at the same starting point, and the right next step is entirely a function of where you currently are.
McKinsey’s 2025 State of AI research found that while 88% of organizations use AI in at least one function, fewer than 40% have moved beyond the pilot phase, and just 6% qualify as high performers seeing meaningful bottom-line impact (McKinsey, 2025). MIT CISR’s updated enterprise AI maturity research identified the jump from Stage 2 (building pilots) to Stage 3 (scaled AI ways of working) as the most financially significant transition most organizations face, and noted bluntly that there is no proven playbook for how to make that jump (MIT CISR, 2025).
Most enterprises in 2026, according to Larridin’s adoption research, sit somewhere between Stage 2 and Stage 3. That means the overwhelming majority of organizations are still figuring out how to move from “we have some AI tools running somewhere” to “AI is load-bearing infrastructure in how we operate” (Larridin, 2026).
A Stage 2 organization needs advice about how to identify which workflows are worth scaling and how to build the data infrastructure to support them. That advice is completely different from what a Stage 4 organization needs when it’s trying to implement governance frameworks for agentic AI. The article that confidently tells both of them they’re doing it wrong is wrong about at least one of them, and probably about both.
Context Isn’t a Detail, It’s the Whole Thing
Industry matters. A regional hospital system operating under HIPAA, state regulations, and the constant threat of ransomware is not making the same AI risk calculation as a SaaS startup trying to close its Series B. A manufacturer running decades-old ERP systems isn’t starting from the same place as a digital-native fintech firm. A nonprofit with three full-time staff and a mission-driven board exists in a completely different universe from a Fortune 100 with a dedicated AI center of excellence.
The hot takes don’t account for any of this. They’re written for a generic, context-free organization that doesn’t exist, and then applied with confident authority to organizations whose actual situations were never considered.
When someone tells a community college that they’re using AI wrong because they haven’t implemented agentic workflows, I want to ask: do you know what their IT budget is? Do you know how many staff they have who can govern an agentic system? Do you know what happens to their accreditation if a model hallucinates on a financial aid application? Context shapes every sensible answer to every AI strategy question, and advice that ignores it isn’t strategy at all.
The Destination Isn’t AI
Here is the thing that most of the AI discourse gets backwards: AI is part of the path. It is not the destination.
No organization’s goal is “use AI correctly.” The goal is to serve patients better, ship better products faster, retain more students, generate more revenue, or reduce costs, and AI is a set of capabilities that might help with some of those things, for some organizations, at some points in their development. The organizations that treat AI as the destination end up optimizing for AI adoption metrics (tools deployed, features enabled, prompts submitted) while losing sight of whether any of it is actually moving the needle on what they care about.
This is not a hypothetical failure mode. McKinsey’s research found that while 80% of organizations cite efficiency as an objective for their AI initiatives, the ones seeing the most value are the ones that have added growth and innovation as explicit objectives alongside efficiency, not in place of it (McKinsey, 2025). The destination has to be the business outcome. AI is one of the roads that might get you there.
What “Wrong” Actually Looks Like
I want to be clear: I’m not arguing that there’s no such thing as a bad AI strategy. There is. Klarna’s 2024 decision to replace 700 customer service agents wholesale, watch customer satisfaction crater, and then quietly rebuild human capacity is a real cautionary tale. Organizations that buy tool subscriptions without changing workflows and then wonder why the productivity numbers are flat are making a real mistake. Measuring AI impact in cost savings and missing the revenue opportunity is a real mistake.
But those mistakes are specific, contextual, and diagnosable from evidence. They are not the same as “you haven’t implemented the framework I’m selling.” The distinction matters. Legitimate criticism of AI strategy is grounded in what a specific organization was trying to accomplish, what they did, and what actually happened. Generic criticism dressed up as expertise is just content marketing with better vocabulary.
The Conversation Worth Having
The useful version of this conversation doesn’t start with “you’re doing it wrong.” It starts with: what are you trying to accomplish, where are you right now, what constraints are you operating under, and what’s the most important next thing you could do given all of that?
For some organizations, the answer is foundational data infrastructure, because no AI strategy scales on top of bad data, and nearly 60% of organizations report their data isn’t AI-ready (Gartner, 2025). For others, it’s governance frameworks, because you can’t responsibly deploy AI in a regulated environment without them. For others still, it’s the cultural and change management work of getting teams to actually use the tools they’ve been given, because adoption rates vary by a factor of four between departments in the same organization (Larridin, 2026).
None of those answers are “use AI correctly.” All of them are specific, contextual, and useful to someone.
A Little Grace Goes a Long Way
Most organizations are doing their genuine best under real constraints: budget, staffing, regulation, legacy systems, change management. They are not failing to use AI correctly because they lack ambition or intelligence. They are navigating a genuinely complex transition at different speeds from different starting points, in service of goals that have nothing to do with AI and everything to do with the people they serve.
The organizations that will look back on this period with the most satisfaction won’t be the ones who moved fastest or adopted most aggressively. They’ll be the ones who kept their eyes on what they were actually trying to do, moved at a pace they could sustain and govern, and used AI as one of several tools in service of goals that were clear before the first model was deployed.
Any reasonable observer would call that thoughtful, not wrong.
And honestly, I’d take thoughtful over fast any day of the week.
References
- Gartner. (2025). Gartner Hype Cycle for Artificial Intelligence, 2025. Gartner Research.
- Kyndryl. (2025). 2025 Kyndryl Readiness Report. Kyndryl. https://www.kyndryl.com/us/en/insights/readiness-report-2025
- Larridin. (2026). AI Adoption: The Complete Enterprise Guide 2026. Larridin. https://larridin.com/solutions/ai-adoption-the-complete-enterprise-guide-2026
- McKinsey Global Institute. (2025). The State of AI in 2025: Agents, Innovation, and Transformation. McKinsey & Company. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
- MIT CISR. (2025, August). Grow Enterprise AI Maturity for Bottom-Line Impact. MIT Center for Information Systems Research. https://cisr.mit.edu/publication/2025_0801_EnterpriseAIMaturityUpdate_WoernerSebastianWeillKaganer
- Tech.co. (2025). Klarna Reverses AI Customer Service Replacement. Tech.co. https://tech.co/news/klarna-reverses-ai-overhaul