You Have Observability for Your Systems. You Have None for Yourself.
A founder's note on the burnout signals I missed in tech—drawn from lived experience and shaped with Ascenda's psychology and neuroscience team.
I would never run a production system without monitoring.
No dashboards. No alerts. No trend lines. No way to tell the difference between a normal spike and a system that is quietly drifting towards failure. In software, that would be reckless.
And yet that is exactly how I used to run myself.
I know this because I learned it the hard way. Before Ascenda, I was doing what a lot of founders and technical leaders do when the pressure keeps building: staying functional, staying productive, and assuming that meant I was fine. From the outside, it looked fine enough. Underneath, the system was already running too hot.
That gap between visible performance and invisible degradation is one of the reasons Ascenda exists.
The system with the highest blast radius is usually the least monitored
In tech, we understand observability instinctively. We track latency drift, error rates, memory creep, incident frequency, deployment regressions, and recovery times. We do not wait for the whole system to fail before deciding the signal mattered.
But when the system is us, most of us still rely on a hopelessly crude metric: how do I feel right now?
That question is too noisy to be useful on its own.
A single hard day tells you very little. A flat week after poor sleep tells you very little. The real signal is trend direction over time:
- Am I bouncing back from load as quickly as I used to?
- Is my clarity less reliable than it was a month ago?
- Am I becoming more brittle, more impatient, or more narrow in my thinking under the same workload?
- Has high output started depending on more force and less ease?
Those are observability questions. They are not wellness questions.
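For the engineers reading this, the idea can be made concrete. Here is a minimal sketch that treats a self-reported recovery score as a time series and classifies its trend instead of reacting to any single day's value. The daily 0-10 scores and the ±0.05 slope threshold are invented for illustration, not a clinical instrument.

```python
# Hypothetical sketch: treat self-reported recovery as a metric and watch
# its trend, not its latest value. All numbers here are invented examples.

def trend(scores, threshold=0.05):
    """Classify a series of daily recovery scores (e.g. 0-10 self-ratings)
    as improving, holding, or declining, using a least-squares slope."""
    n = len(scores)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(scores) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores)) \
        / sum((x - mean_x) ** 2 for x in xs)
    if slope > threshold:
        return "improving"
    if slope < -threshold:
        return "declining"
    return "holding"

# Two weeks of hypothetical scores: each day still looks "fine enough",
# but the direction of travel is the signal.
recent = [7, 7, 6, 7, 6, 6, 5, 6, 5, 5, 5, 4, 5, 4]
print(trend(recent))  # one bad day would not flip this; the slope does
```

The point is not the maths, which is deliberately trivial. The point is that a slope over two weeks answers a question that "how do I feel right now?" cannot.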
Why this got more urgent in the AI era
The reason I think this matters even more now is simple: AI increased throughput without increasing recovery.
Most technical people did not take the time saved and convert it into space. We converted it into more work. More prompts. More iterations. More tasks completed in the same day. More cognitive switching. More total load.
At the same time, AI removed some of the natural stop signals that used to protect us.
Writing used to have friction. Research used to take longer. Hard architectural thinking used to force pauses. Now you can stay in motion far longer than you realise, and because the output is still good, it is easy to confuse sustained motion with sustainable performance.
That is where trouble starts.
Burnout usually looks operational long before it looks dramatic
The version people imagine is the collapse: the person who cannot get out of bed, the emotional blow-up, the obvious crash.
That can happen, but it is usually not the first sign.
The earlier signs are more subtle and much easier to rationalise away:
- you recover more slowly from the same meeting, sprint, or incident
- you feel wired at night and flat in the morning
- you keep shipping, but your judgement gets thinner
- you stop doing the small things that used to keep you steady
- you become more reactive, less patient, and less able to zoom back out
In other words: the system is degrading, but the service is still up.
Founders and senior operators are especially vulnerable here because we are rewarded for pushing through. High-functioning people often hide deterioration the longest.
What I wish I had understood sooner
What I needed was not a lecture about self-care. It was a way to see the trend before the cost became unavoidable.
That means looking at recovery as seriously as load.
In infrastructure terms, most of us are good at noticing demand and terrible at noticing whether the system is still returning to baseline properly afterwards. A system that serves more requests but clears less efficiently is not high-performing. It is accumulating risk.
The same thing happens in people.
If your sleep looks fine in isolation but your restoration is trending down, that matters. If your output is still high but it takes more force to produce it, that matters. If you are using AI constantly and finishing every day with a strange combination of productivity and depletion, that matters too.
The better question
I have become much more interested in one question than in any motivational slogan:
Is my recovery capacity improving, holding, or declining?
That question changes behaviour because it shifts the focus away from guilt and towards signal.
It also makes the next steps clearer. If the trend is declining, you do not need drama. You need earlier intervention, better regulation, and better visibility.
That might mean changing how you work with AI. It might mean creating harder stopping rules. It might mean rebuilding sleep and decompression as non-negotiable infrastructure rather than optional extras. It might mean getting more support sooner rather than later.
What it should not mean is waiting for a crisis to prove the trend was real.
A founder's view of what support should feel like
The reason I care so much about this is that technical people can smell generic advice immediately. If the language feels vague, moralising, or detached from the reality of modern work, they ignore it — often rightly.
What does work is a model that respects how people in tech actually think:
- show the pattern
- explain the mechanism
- make the intervention proportionate
- keep it practical
- treat mental performance as infrastructure, not as a personal failure
That is the standard I wanted for myself, and it is the standard we keep building towards at Ascenda.
If this resonates, the next two pieces in this series go deeper into the mechanisms:
- The Slot Machine in Your IDE — why AI sessions are harder to stop than they look
- Your Output Is Fine. Your Recovery Isn't. — the real metric that usually declines first
The short version is this: your systems are not the only thing worth instrumenting. If your cognition, judgement, and recovery are carrying the company, then they deserve observability too.
