We turn down work. Not because we're too busy — because we've watched enough AI projects fail to know which ones are set up to. Here are the five patterns we've learned to decline, and what we usually suggest instead.
1. Replacing a deterministic workflow that already works
Sometimes a buyer comes in saying "we want AI to handle our refund approvals." We look at the existing logic and find: refunds under a small amount auto-approve, mid-range refunds need a manager's review, and large ones need a director's review. That's a working if/else.
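The entire workflow fits in a few lines. A minimal sketch, with hypothetical thresholds standing in for the client's actual ones:

```python
def route_refund(amount: float) -> str:
    # Hypothetical thresholds; the real ones come from policy, not a model.
    if amount <= 50:
        return "auto-approve"
    elif amount <= 500:
        return "manager review"
    else:
        return "director review"
```

Every decision this function makes can be read, tested, and defended at a glance.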
Adding a model in the middle introduces uncertainty into a process that didn't have any. It also introduces a new way to be wrong. We suggest: leave the rules alone. If you want to reduce manager review time, build a dashboard that shows the queue and lets one person clear ten in a minute.
2. Regulated decisions that need 100% explainability
If your domain is healthcare, lending, hiring, or anything else where a regulator can ask "why did your system make this decision," we don't recommend a generative model as the decision-maker. Even with retrieval and chain-of-thought, the answer to "why" is fundamentally a story the model writes after the fact.
We suggest: use the model to summarize, draft, or surface relevant policy — but keep the decision rule explicit and auditable. A human signs the call.
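Here's a rough sketch of that separation, using an invented lending example. The field names, thresholds, and the summarize() helper are all hypothetical placeholders for whatever your policy and model stack actually look like:

```python
def summarize(application: dict) -> str:
    # Placeholder for a model call that drafts context for the reviewer.
    return (f"Applicant score {application['credit_score']}, "
            f"DTI {application['debt_to_income']:.2f}.")

def assess(application: dict) -> dict:
    # The decision rule is explicit and auditable -- no model in this branch.
    passes = (
        application["credit_score"] >= 680
        and application["debt_to_income"] <= 0.36
    )
    return {
        "recommendation": "approve" if passes else "decline",
        "rule_applied": "credit_score >= 680 and debt_to_income <= 0.36",
        # Model output rides along as context for the human who signs.
        "draft_summary": summarize(application),
    }
```

When the regulator asks why, the answer is the rule, not a post-hoc narrative. The model's draft saves the reviewer time, and the human still signs the call.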
3. Low-volume tasks where the human is already faster
One client wanted us to build an AI system to draft three-line responses to a type of email they received about ten times a month. The existing process: a person on the team writes each reply in about ninety seconds.
The proposed AI system would take weeks to build, need ongoing maintenance, and save about fifteen minutes a month (ten emails at ninety seconds each). We suggest: a saved reply template in their email tool. They had one within an hour.

4. "Make it more creative" with no measurable target
When the brief is "make our content more creative" or "make our chatbot feel more human," we ask: how will we know we got there? If there's no measurable answer — no rubric, no dataset, no preferred examples — the project becomes an open-ended aesthetic argument.
We've watched these projects burn months because everyone has an opinion and nobody is wrong. We suggest: write five examples of "great" output by hand first. If you can't, the project isn't ready. If you can, those examples become the eval set.
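In practice that eval set can be as small as a file. A minimal sketch, with invented examples and a deliberately crude scoring function, just to show the shape:

```python
# Five hand-written "great" outputs become the eval set.
# The inputs and references here are invented for illustration.
EVAL_SET = [
    {"input": "Order arrived damaged, customer wants a replacement",
     "reference": "Hand-written ideal reply goes here."},
    # ... four more hand-written pairs
]

def overlap_score(candidate: str, reference: str) -> float:
    # Crude stand-in metric: fraction of reference words the candidate hits.
    cand = set(candidate.lower().split())
    ref = set(reference.lower().split())
    return len(cand & ref) / len(ref)
```

If the team can't agree on what belongs in those five references, that disagreement is the real blocker, and writing them surfaces it before anyone builds anything.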
5. Projects with no access to real data or a domain expert
We've had clients ask us to build something based entirely on what they assume the data looks like, with no sample to confirm. We've also been asked to build expert systems with no access to the expert. Both fail the same way: the system gets built to spec, ships, and then real users hit it with inputs nobody anticipated.
We suggest: before any build, send us a sample of real inputs and book 30 minutes a week with the person who would do this work today. Those two things, more than any model choice, decide whether the project works.
What we do say yes to
A clear metric. Real data we can see in week one. A specific human who owns the outcome and shows up. A failure mode we can actually live with. When those four things are present, AI projects usually work. When they're not, no model is going to save them.
If any of these patterns sound like a project you're considering, that's worth a conversation before you commit. We'll tell you what we'd do — even if what we'd do is "don't build it."