When we talk about AI adoption, the conversation usually centers on what these tools can do. But for the owner of a small or mid-sized business, the most often overlooked question is: what shouldn't they do?
The rush to integrate and roll out AI capabilities has hit unanticipated road bumps, and those bumps are becoming lessons worth paying attention to. Billion-dollar conglomerates and nimble startups alike are realizing that automation without a "verification layer" is a liability, not the efficiency gain they hoped for.
Here are four documented cautionary tales from the front lines of the AI transformation, categorized by the strategic themes every SMB needs to understand.
1. The liability arising from policies "made up" by AI
For years, many companies have treated their websites as static brochures, and many still do. If there’s a typo, you fix it. But when Air Canada integrated an AI chatbot to handle customer queries, they learned that a dynamic AI is less like a brochure and more like an employee with "binding authority."
In a case decided in 2024, a traveler named Jake Moffatt had asked the airline's chatbot about bereavement fares. The bot didn’t just summarize existing data; it invented a brand-new policy on the fly, telling Moffatt he could claim a refund after his flight. When Air Canada pointed to the actual policy on a different page of their site (which required pre-approval), they made a fateful legal argument: that the chatbot was a "separate legal entity" responsible for its own mistakes.
The court was unimpressed. The ruling was clear: a business is responsible for all information it provides, whether it comes from a human, a static PDF, or an algorithm. Air Canada was forced to pay.
The Best Practice: Never let an AI "summarize" or "interpret" your core terms and conditions for a customer. If an AI is discussing money, contracts, or promises, it must be locked into deterministic behavior: given the same question under the same circumstances, it gives the same predictable answer, which is the opposite of the probabilistic generative models today’s LLM systems are built on. In this context, that means the chatbot can only quote your approved policy word-for-word to the customer on the other end of your chat.
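To make that concrete, here is a minimal sketch of a "quote-only" policy responder. It is not Air Canada's system or any particular vendor's API; the function name, topic keywords, and policy wording are all illustrative assumptions. The point is the shape of the logic: the bot either returns pre-approved text verbatim or hands off to a human, and it never paraphrases.

```python
# A minimal sketch (not any airline's real system) of a quote-only policy bot.
# All topic keywords and policy text below are made up for illustration.

APPROVED_POLICIES = {
    "bereavement": (
        "Bereavement fares must be requested and approved before travel. "
        "They cannot be applied retroactively after a flight is completed."
    ),
    "refund": (
        "Refund eligibility is defined in our fare rules. A representative "
        "will review your booking and confirm what applies."
    ),
}

HANDOFF = "I'll get a human colleague to help with that."


def answer_customer(message: str) -> str:
    """Return approved policy text verbatim, or hand off to a human.

    The bot never summarizes or 'interprets' policy: it either quotes
    pre-approved wording exactly, or it escalates.
    """
    text = message.lower()
    matches = [topic for topic in APPROVED_POLICIES if topic in text]
    if len(matches) == 1:                      # exactly one known topic
        return APPROVED_POLICIES[matches[0]]   # quoted word-for-word
    return HANDOFF                             # ambiguous or unknown: escalate


if __name__ == "__main__":
    print(answer_customer("What is your bereavement policy?"))
    print(answer_customer("Can I get a bereavement fare refunded after my trip?"))
```

Notice what the second example does: the question touches two policies at once, so instead of improvising an answer that blends them, the bot escalates. That deflection is the feature, not a failure.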
2. The customer empathy void that AI does not fill
Fintech giant Klarna (of Buy Now, Pay Later, or BNPL, fame) made headlines in 2024 by announcing that its AI assistant was doing the work of 700 full-time agents. Headcount went from 5,000 down to 3,800. But by mid-2025, the narrative shifted.
While the AI was great at "Stage 1" tasks, basically resetting passwords or checking a balance, it hit a wall when faced with "Stage 2" complexity. When a customer calls because a billing error is threatening their credit score, they don't want a perfectly phrased AI apology; they want a human with the authority to exercise discretion and take action.
As customer satisfaction scores cratered, Klarna had to pivot. By early 2026, they moved toward a "hybrid" model, realizing that over-automating had created an institutional knowledge vacuum. They had essentially let go of the very people with the "tribal knowledge" to handle the most difficult 10% of their business.
The Best Practice: Use AI to handle the volume, but use the savings to upskill your humans. Your best people shouldn't be doing data entry; they should be the "Escalation Experts" who deftly handle the high-stakes, high-emotion situations that AI may never master.
3. An incremental, iterative build is still progress
Volkswagen’s software unit, Cariad, was designed to be the "Tesla-killer": a unified, AI-driven operating system for every car in the VW Group. Instead, it became a textbook example of "Strategic Overreach."
VW tried to build everything at once, hiring 6,000 people and attempting to automate the integration of 200 different suppliers. The result was a pile of "Automation Debt." Because they didn't have a clear "Traffic Light" system (more on that in the next issue), the AI was trying to manage legacy hardware systems that weren't designed for it.
Launches for high-end Porsches and Audis were delayed for years. By late 2025, Cariad reported staggering losses. VW eventually had to slow down, admit the "Big Bang" approach failed, and partner with Rivian to essentially borrow a system that was built more iteratively.
The Best Practice: Don't try to "AI-ify" your entire workflow in one quarter. Pick one narrow, low-risk process (a "Green Light" task), master it, and then move to the next. In the world of systems, "slow is smooth, and smooth is fast."
4. Let "Common Sense" prevail
For those who have been around the SEO-based marketing block, you know the expression "content is king." With AI, and large language models specifically, the equivalent expression (at the risk of sounding cliché) is "context is king." In 2024, Taco Bell rolled out AI voice ordering to hundreds of drive-throughs. It worked... until it didn't.
One viral incident involved a customer who jokingly ordered 18,000 cups of water. A human would have laughed or told the customer to move along. The AI, however, took the order seriously, tried to process it, and effectively crashed the store's system, causing a massive backup. Other customers found themselves in "infinite loops" where the AI repeatedly asked if they wanted a drink even after they had already ordered one. It took just one absurd data point, 18,000 cups of water treated as a "normal order," for the AI to start behaving erratically.
Taco Bell has since "pumped the brakes," retraining staff to act as active supervisors of the AI. They moved from "Autonomous" to "Augmented." In other words, they went from fully machine-driven ordering to embedding humans in the workflow.
The Best Practice: Every AI output needs a "Sanity Check." If your system can process an order or a data entry that is 100x your normal volume, or 10x for that matter, without flagging a human, your guardrails are broken.
Staying in Control: Four Guardrails for Safely Implementing AI
To keep your business from becoming a cautionary tale, you don't need to be a data scientist or a tech bro. You just need to apply the same common-sense boundaries you’d use for any other part of your operation. Think of these as your "Safety Gates" for AI integration.
1. Box in what customers can ask the AI. Leave everything else to humans.
Treat AI like a teleprompter, not a spokesperson. If the AI is talking to customers about pricing, refunds, or contracts, it must only use pre-written, approved text. Never let an AI "explain" your policies in its own words. If a customer asks a complex question, the AI’s job is to say, "I’ll get a human to help with that," not to make up an answer.
2. Knowledgeable, trained staff hold the contextual connections. They are the seams between AI systems (and the seamstresses who stitch them together).
Keep your experts involved. If you automate 100% of a process, you lose the "tribal knowledge" needed to fix things when systems break. Keep your senior staff in the loop and use AI to clear their plate of busywork, not to replace their judgment. Your team should always be more capable than the tools they use.
3. One step at a time vs. going all-in
Test AI in one small, isolated corner of your business. Look to optimize internal processes with AI before letting it touch customer-facing roles. If it fails in one small area, the rest of the company stays functional and disruptions remain manageable. It lets you learn what works and what doesn’t before you expand.
4. Set trip wires and kill switches.
Build in "sanity filters." An AI doesn't know that an order for 18,000 waters is a mistake, but your bank account will. Set hard limits: if the AI tries to process an order, a refund, or a data change that looks unusual (like a 500% spike in normal volume), the system should automatically freeze and alert a supervisor.
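Here is a minimal sketch of that kind of trip wire, using the drive-through example. The thresholds, helper names, and alert hook are assumptions made up for illustration; in a real system they would map to your own order flow and whatever channel actually reaches a supervisor.

```python
# A minimal sketch of a "trip wire" around an AI-initiated transaction.
# Thresholds and helpers below are illustrative assumptions, not a product API.

from dataclasses import dataclass


@dataclass
class Order:
    item: str
    quantity: int


TYPICAL_MAX_QUANTITY = 20   # assumed: largest quantity a normal order contains
SPIKE_MULTIPLIER = 5        # assumed: freeze anything 5x ("500% spike") over typical


def alert_supervisor(order: Order, reason: str) -> None:
    """Placeholder for however you reach a human: SMS, Slack, a flashing light."""
    print(f"FROZEN for review: {order.quantity} x {order.item} ({reason})")


def process_ai_order(order: Order) -> bool:
    """Return True only if the order passes the sanity filter; otherwise freeze it."""
    if order.quantity > TYPICAL_MAX_QUANTITY * SPIKE_MULTIPLIER:
        alert_supervisor(order, "quantity far above normal volume")
        return False
    print(f"Accepted: {order.quantity} x {order.item}")
    return True


if __name__ == "__main__":
    process_ai_order(Order("cup of water", 2))       # passes
    process_ai_order(Order("cup of water", 18_000))  # trips the wire; a human decides
```

The design choice that matters is the return value: when the filter trips, nothing gets processed. That one transaction freezes and a human makes the call, which is exactly the wall a joking customer, or a confused model, should run into.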
Of course, this list is not exhaustive. One way to think about it is to ask what all the things are that could go wrong, but that is an infinitely long list. Nobody at Taco Bell was ever going to sit down and ask, "What if someone orders too many drinks? What do we do then?" New hires at fast food joints don’t have to be taught what to do when someone places a ridiculous order; common sense and human judgment are implicitly understood to prevail.
So instead of approaching this by listing what is not allowed, start with "What will we allow the AI to do?" That turns out to be a very limited set of options and an astonishingly small set of tasks. Put a ring fence around that set, prohibit anything else from occurring, and train the AI to deflect outlier requests to humans.
Because common sense is still not common in AI.
If this was useful, subscribe to get clear, no-hype briefings like this on the economy, markets, and AI for SMBs every week.
