Stop being supportive AI, please
AI feels helpful when you are thinking. That is the trap.
When you are trying to think through and explore ideas, AI can be the worst possible ally, especially at the beginning. Not because it is wrong, but because it is trying to be helpful. It is always positive. It always agrees. And that is precisely what makes it dangerous early on.
AI’s problem is not intelligence but enthusiasm. Every first idea sounds coherent. Every reformulation sounds clearer. Momentum replaces hesitation, and hesitation is exactly what early thinking requires.
First ideas are supposed to be bad. Structurally bad. They exist to be replaced, not refined. Their job is to provoke alternatives. But AI treats them as candidates for execution. Once an idea is articulated, it invites depth: another page, another angle, another improvement. Depth feels like progress, yet nothing has been selected.
This is where judgement breaks. Instead of asking whether an idea is worth pursuing, you start asking how to make it work. Comparison disappears. Exploration collapses into commitment. You go deep too fast because nothing pushes back.
I ran into this while building documentation for Coblan. The idea was simple: before a demo, explain the core concepts prospects need to understand so conversations start at the right level. In practice, it was painful. I would sketch a structure, feel confident, go deep, then reread it a day later and realise it was rubbish. I restarted repeatedly. Entire paths explored, then abandoned. Hundreds of pages to review. Some of that is normal. What wasn’t normal was how easily each new idea felt final.
AI reinforced that illusion. When I discussed ideas with it, everything made sense. Explanations flowed. But that coherence depended on shared context. I already knew what I meant. AI never said, “I don’t understand this.” It never forced me to re-explain anything. Weak ideas survived far longer than they should have.
Humans exposed the problem immediately. They interrupted. They misunderstood. That friction revealed what was vague or overloaded. Misunderstanding is not a failure of communication; it is how bad ideas die.
This is the core mistake: confusing decision-making with execution. Deciding whether an idea is good requires resistance and delay. Executing an idea rewards speed and momentum. AI is optimised for execution. Used too early, it turns unfinished thinking into premature commitment. The question quietly shifts from "Is this the right idea?" to "How do I make this work?"
That is why asking AI to validate first drafts is so damaging. Drafts are disposable. They are not meant to be approved. Once something sounds finished, it becomes harder to kill—even when it should be.
There is also a quieter cost: voice. AI revisions often improve fluency while erasing authorship. Analytical writing survives this. Reflective writing does not. Polish increases; signal weakens.
AI is not useless. It is specific. It is good at generating options and polishing ideas that have already survived contact with reality. It is bad at judgement, resonance, and selection.
When everything resonates, nothing is tested. When nothing resists, judgement erodes. And when depth arrives before selection, you waste time building ideas that should never have lived that long.