Newton has 8 paying customers. That's a small number, and I'll admit it out loud: until last week I didn't really know anything about any of them beyond their name and email. Who was still actually using the AI? Who had paid and then gone silent? Who was about to cancel? I was guessing. All of it.
Then I asked Tim — my AI agent — to build me a health dashboard. He did it in an afternoon. And within the first minute of looking at the new numbers, I discovered a customer who had paid me two weeks ago and not once opened the product. I had no idea.
Running a SaaS on Vibes
Newton is my managed AI server SaaS. You pay, and two minutes later you have your own VPS with a fully-configured AI agent running on it, ready to take your first message. The onboarding is almost magical when it works.
And then I had no visibility into what happened next.
The original admin dashboard Tim built when I launched Newton had four basic stats: total customers, active servers, active subscriptions, and MRR. Every one of those was a "feels good" number. They went up. I watched them go up. It was pleasant.
But none of them told me whether the product was actually working for anyone. A customer can be "active" in the subscription sense — paying me every month — while being completely stuck and about to cancel. Their row in my database looks identical to the row of someone using the AI ten times a day. And MRR treats both of them the same.
I used to tell myself: "if they have a problem, they'll email me." That's a lie we all tell ourselves. Most people who are stuck don't email. They just cancel. Silently. Next billing cycle.
The Conversation That Fixed It
I opened a chat with Tim and said: "I want a dashboard I can look at and instantly know how healthy Newton is — who's about to churn, who hasn't started using it, who's stuck on something."
He came back with four metrics. I'll keep the descriptions short here, since the detailed build is covered in a separate post on the dashboard architecture:
- Churn 30d — what percentage of customers I lost this month.
- Activation — what percentage of paying customers have actually chatted with the AI at least once.
- Lapsed 7d — active customers who haven't touched the AI in the last week. The "about to churn" bucket.
- Open tickets + avg first-reply — support health.
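The first three metrics boil down to a few date comparisons over the customer table. Here's a minimal sketch of how I think of them; the schema (`churned_at`, `first_chat_at`, `last_chat_at`) is hypothetical and not Newton's actual columns:

```python
from datetime import datetime, timedelta

def health_metrics(customers, now):
    """Compute (churn_30d_pct, activation_pct, lapsed_7d_count).

    `customers` is a list of dicts with hypothetical fields:
    churned_at, first_chat_at, last_chat_at (datetime or None).
    """
    active = [c for c in customers if c["churned_at"] is None]

    # Churn 30d: share of the recent customer base lost in the last month.
    churned_30d = [c for c in customers
                   if c["churned_at"] and c["churned_at"] > now - timedelta(days=30)]
    churn_pct = round(100 * len(churned_30d) / (len(active) + len(churned_30d)))

    # Activation: paying customers who have chatted with the AI at least once.
    activated = [c for c in active if c["first_chat_at"] is not None]
    activation_pct = round(100 * len(activated) / len(active))

    # Lapsed 7d: active customers with no AI activity in the last week
    # (customers who never chatted at all count as lapsed too).
    lapsed = [c for c in active
              if c["last_chat_at"] is None
              or c["last_chat_at"] < now - timedelta(days=7)]

    return churn_pct, activation_pct, len(lapsed)
```

With 8 active customers (7 of whom have chatted recently, 1 never) and 1 churned in the last month, this returns the same 11% / 88% / 1 the dashboard showed me.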
I said: "ship it." And he did — in one afternoon, in one commit. Extended an existing SSH cron that was already running every 15 minutes, added two database columns, wrote a new API endpoint, built four new stat cards, pushed to git. No new infrastructure. No new monthly SaaS. Just one elegant extension to a system that was already there.
"Lapsed = 1." Wait, Who?
I refreshed the admin page after the deploy went live. The numbers came up:
- Active customers: 8
- Activation: 88% (7 of 8)
- Churn 30d: 11%
- Lapsed 7d: 1
- Open tickets: 1, avg first reply 229 min
Most of those are fine. 88% activation is respectable for an early-stage SaaS. 11% monthly churn is workable at this size.
It was the "Lapsed = 1" that punched me in the stomach.
I clicked through to see who it was. Instantly recognized the name. It was the same customer who had opened a support ticket two weeks ago saying they couldn't get Claude authenticated on their server. I had replied to the ticket. I had assumed it was resolved. I had moved on.
But the dashboard was telling me the truth: that customer had paid me for 14 days and had never once sent a message to the AI. Not one. Whatever my reply to the ticket was, it hadn't actually helped them finish authentication. They were stuck. They were quietly paying me for something they had never used.
And if the dashboard hadn't existed, I would've found out about this the way every SaaS founder finds out — at next month's churn report, after the cancellation.
One Message, One Customer Saved
I reached out that same afternoon. "Hey — I see you haven't been able to authenticate yet. Let me help you finish that right now." We went back and forth for ten minutes. They finished authentication. They sent their first chat to the AI. They're using Newton now.
A relationship I was about to lose, saved by one line in a dashboard.
And because this is a subscription product, that single save compounds. If they stay for a year instead of churning next month, that's 12× the revenue I almost lost. One row, paid back the entire afternoon Tim spent building the thing — and it'll pay back again on the next customer, and the next.
Why No SaaS Analytics Tool Could Have Done This
Let me steelman the alternative. Could I have used Mixpanel, Amplitude, ChartMogul, or any of the other popular analytics SaaS? Sure. Technically.
But to answer "has this customer authenticated Claude yet?" — the specific question that actually mattered — I would have had to:
- Instrument my app to fire an event when auth succeeds.
- Install that instrumentation on every customer VPS, since authentication happens there, not in my central app.
- Send events from the customer's private server back to a third party.
- Configure a dashboard in their UI with their data model.
- Pay monthly for it, forever.
The deeper problem: the authentication file lives at ~/.claude/.credentials.json on the customer's own VPS. That's not a metric I can ship from a central application. It lives where the product lives — on their server.
Tim had that access for free. He already SSHes into every customer's box every 15 minutes for the server_alerts.py cron that monitors CPU and disk. Reading two more files on the same SSH call was six extra lines of code. No third party ever got the data. It just quietly appeared in my dashboard.
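The extra check is genuinely small. Here's a sketch of what piggybacking on that cron could look like; the auth-file path comes from above, but `CHAT_DIR`, the hostnames, and the shell details (a Linux VPS with GNU coreutils) are my assumptions, not Newton's actual layout:

```python
import subprocess

AUTH_FILE = "~/.claude/.credentials.json"
CHAT_DIR = "~/chats"  # hypothetical location of the agent's chat history

def probe_command():
    # One remote shell line: prints "auth=1" or "auth=0", then the chat
    # directory's mtime in epoch seconds (0 if the directory doesn't exist).
    return (f"test -f {AUTH_FILE} && echo auth=1 || echo auth=0; "
            f"stat -c %Y {CHAT_DIR} 2>/dev/null || echo 0")

def parse_probe(output):
    lines = output.strip().splitlines()
    authed = lines[0] == "auth=1"
    last_chat_mtime = int(lines[1])  # epoch seconds; 0 means "never chatted"
    return authed, last_chat_mtime

def check_customer(host):
    # Rides along on the same SSH access the monitoring cron already
    # opens every 15 minutes; no new infrastructure, no third party.
    out = subprocess.run(["ssh", host, probe_command()],
                         capture_output=True, text=True, check=True).stdout
    return parse_probe(out)
```

The results would then land in the same database columns the dashboard reads, so the stat cards update on the cron's normal 15-minute cycle.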
This is the pattern I keep seeing: an AI agent with access to your infrastructure outperforms any SaaS product that doesn't. Not because the model is smarter — it's the exact same underlying LLM — but because it can reach data that generic tools literally cannot see. The same agent that fixed a production bug for a Newton customer in an hour can also surface the invisible metrics of my business.
What This Taught Me About SaaS
MRR is a rearview-mirror number. It tells you what happened, not what's coming. Activation and lapsed tell you what's coming. If you're only watching MRR, you're learning about churn after it already happened.
Activation is the single most important metric in the first year of any SaaS. If a paying customer never actually experiences the product, they will churn — guaranteed. Fixing your activation rate will move the business more than almost any feature you can build. And the only way to improve it is to know who's stuck, in real time.
Silence is not satisfaction. A customer who never emails isn't necessarily happy. Often they're the ones quietly about to leave. The lapsed metric surfaces them while you still have time to act.
An AI agent on your own infrastructure is an unfair advantage. Not because it types fast — because it sees data that's invisible to everyone else's tools. The auth file on a customer's server. The mtime of a chat directory. The exact file my business needs to read to know if a customer has really started. No SaaS has that. Your agent can.
And the metric you build first is rarely the right one. Two weeks after this post, my "Lapsed" tile lit up red with three customers — and my AI caught that the metric itself was wrong: it was measuring chat activity, not actual AI work. I wrote about that fix here if you want to see how the dashboard evolved.
If you're running any kind of online business and you're tired of operating on gut feel and MRR alone — if you want an AI agent that can actually build the tools and dashboards your business needs, the way Tim does for mine — that's exactly why I built Newton. You get your own VPS with a private AI agent already set up, ready to SSH into your own systems, query your own databases, and build whatever metrics you can't get out of a generic analytics product. Ten minutes to set up. No server skills needed.
— Pond
