The Slot Machine in Your IDE
A clear look at why AI-assisted work is so hard to stop—grounded in real experience and informed by Ascenda’s psychology and neuroscience team.
If you use AI tools heavily, you may have noticed a pattern that is hard to explain to people who do not.
You finish the task. The answer is already good enough. You could stop. But instead you prompt again. Then again. You chase one more version, one more refinement, one more angle. Suddenly another forty minutes has gone by.
That does not feel like procrastination. In fact, it often feels productive. Which is exactly why it is easy to miss the mechanism underneath it.
I started noticing this in my own work before I had good language for it. The task would be complete, but my nervous system had not really received the signal to stop. I was still in the loop.
The best explanation I have found is also the least fluffy one.
AI output behaves like a variable reward system
In behavioural science, the hardest reward pattern to extinguish is a variable-ratio schedule: the reward arrives after an unpredictable number of attempts. Sometimes it comes quickly, sometimes only after a long run of mediocre results. That unpredictability is what drives persistence.
It is the same family of mechanism that keeps people pulling the handles of poker machines, refreshing feeds, or checking for one more hit of novelty.
LLM output has that structure.
Sometimes the third prompt is excellent. Sometimes the twentieth one is. Sometimes the model surprises you in a genuinely useful way, and that unpredictability keeps your brain leaning forward for the next attempt.
This is not a moral failure. It is a design property.
If you are a founder, engineer, or cyber operator already running on a lot of cognitive load, that design property matters. It can keep you engaged well past the point of rational return, especially when the work itself still looks legitimate.
Why this feels different from ordinary overwork
Normal overwork usually has a clearer edge to it. You know you are forcing it. The work feels effortful. You can at least tell that you are tired.
AI loops are trickier because they often feel energising in the moment. There is novelty, speed, and the sense that the next iteration might unlock something materially better.
That combination makes the session harder to end cleanly.
For technical people, there is a second trap as well: the cost per iteration feels close to zero. One more prompt. One more refactor suggestion. One more summary. One more alternative framing.
But the cognitive cost is not zero. It accumulates through attention residue, context-switching, and the low-grade arousal of staying in a reward-seeking loop for too long.
The cognitive offloading tax
There is another part of this that I think matters just as much.
The more often you ask the model to do the first pass of the thinking, the easier it becomes to doubt your own unassisted cognition.
Again, this is documented. Cognitive offloading is useful, but it has a side effect: if you outsource a task too routinely, your confidence in doing that task yourself can start to weaken.
I have felt this personally in moments where the fastest move was clearly to ask the model, but a deeper part of me knew I needed to think the problem through first if I wanted to keep my own edge.
That is the trade-off.
AI gives leverage. Used carelessly, it can also erode the very judgement you rely on to direct that leverage well.
The trap for founders and leaders
If you are responsible for a team, this matters beyond your own focus hygiene.
A leader who stays in AI loops too long often ends the day with a misleading combination of high throughput and low restoration. You produced a lot, but you did not necessarily close the day in a more regulated state. Sometimes you close it more activated than when you started.
Do that repeatedly and you begin to normalise a nervous system that is always slightly on. Over time, that can look like:
- being mentally busy but not mentally clear
- struggling to stop work cleanly at night
- jumping to the model before doing first-principles reasoning yourself
- carrying more work across the boundary between work and home
- feeling productive but oddly unsatisfied at the end of the session
That last one is especially important. It is a clue that the loop did not actually resolve; it only exhausted itself.
What I do now instead
I am not interested in anti-AI advice. The tools are too useful for that, and I build in this space. The goal is not to use them less out of guilt. The goal is to use them with enough structure that they do not quietly run you.
A few simple rules have made a real difference for me:
1. Define the exit condition before the first prompt
Write down what “done” means before you begin. Not perfect. Done.
If the answer reaches that threshold, stop. The impulse for one more version is often the reward loop talking, not a genuine quality requirement.
2. Do one pass of the reasoning yourself
Even if I later use AI heavily, I try to articulate the shape of the problem in my own words first. That preserves agency and protects against passive cognitive offloading.
3. End with a human summary
At the end of an AI session, I write a short summary of what I decided, what changed, and what still matters — without asking the model to do it for me.
That creates closure and tells my brain the task is actually finished.
4. Track whether your sessions overshoot
If you meant to spend twenty minutes and keep spending fifty, the pattern is not random. It is measurable. Once you see it as a system property, it becomes easier to change.
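If you want to make that overshoot concrete, a few lines of logging are enough. This is a minimal sketch, not a prescribed tool: the `Session` record and both metrics are illustrative assumptions about how you might keep the log.

```python
# Sketch: log intended vs. actual AI-session length and measure overshoot.
# The Session record and metric names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Session:
    planned_min: int  # how long you intended to spend
    actual_min: int   # how long you actually spent

def overshoot_rate(sessions):
    """Fraction of sessions that ran past their planned length."""
    if not sessions:
        return 0.0
    over = sum(1 for s in sessions if s.actual_min > s.planned_min)
    return over / len(sessions)

def mean_overshoot(sessions):
    """Average extra minutes, counted only over sessions that overshot."""
    extras = [s.actual_min - s.planned_min
              for s in sessions if s.actual_min > s.planned_min]
    return sum(extras) / len(extras) if extras else 0.0

if __name__ == "__main__":
    # A week's worth of hypothetical entries.
    log = [Session(20, 50), Session(30, 30), Session(20, 65)]
    print(f"overshoot rate: {overshoot_rate(log):.0%}")
    print(f"mean overshoot: {mean_overshoot(log):.0f} min")
```

Even a crude log like this turns a vague feeling of "sessions run long" into a number you can watch change week over week.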
5. Build a transition, not just a stop
Close tabs. Stand up. Walk. Write the next step for tomorrow. The nervous system often needs a distinct handover signal, not just the absence of another prompt.
The main point
The slot-machine effect is not a reason to fear AI. It is a reason to design for it.
If you understand that these tools interact with attention, reward, and self-efficacy in predictable ways, you can keep the leverage without surrendering your boundaries.
That is the posture I trust most now: not avoidance, not blind enthusiasm, but better systems.
If this feels familiar, the rest of the series may help:
- You Have Observability for Your Systems. You Have None for Yourself.
- Your Output Is Fine. Your Recovery Isn't.
The tools are powerful. That is exactly why your stopping rules need to be just as deliberate.
