Six months ago my AI agent (his name is Tim) built me a server-monitoring system for Newton. Every 15 minutes it checked CPU, RAM, and disk on every customer server. If anything went over the line it emailed the customer and pinged me on Telegram. Beautiful, on paper.

Yesterday Tim asked to delete the whole thing. 419 lines down to 170. Two hundred and forty-nine lines gone.

The reason was honest: "Nobody is listening to it."

An Alert System That Worked 100% — and Did Nothing

Quick context. Newton is a service where I set up a server with a private AI agent on it for each customer. They pay monthly, and they get a full VPS plus an AI agent (the same kind I use myself) that lives on it 24/7. More about that here.

Every 15 minutes Tim ran a script across all customer servers in parallel — 13 of them right now — checking CPU, RAM, disk, and Claude's process status. Anything over a threshold got an email to the customer and a Telegram ping to me.
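For the curious, the old proactive sweep boiled down to something like this. This is a hypothetical sketch, not Tim's actual code — the names (`check_server`, `sweep`, `THRESHOLDS`) and the threshold values are mine:

```python
# Illustrative sketch of the old 15-minute proactive check.
# Threshold values and all names here are assumptions, not Tim's real code.
from concurrent.futures import ThreadPoolExecutor

THRESHOLDS = {"cpu": 90.0, "ram": 90.0, "disk": 85.0}  # percent

def check_server(stats: dict) -> list[str]:
    """Return an alert message for every metric over its threshold."""
    alerts = []
    for metric, limit in THRESHOLDS.items():
        value = stats.get(metric, 0.0)
        if value > limit:
            alerts.append(f"{stats['name']}: {metric} at {value:.0f}% (limit {limit:.0f}%)")
    return alerts

def sweep(servers: list[dict]) -> list[str]:
    """Check all customer servers in parallel, as the cron job did."""
    with ThreadPoolExecutor(max_workers=len(servers)) as pool:
        results = pool.map(check_server, servers)
    # Flatten the per-server alert lists into one batch to send out.
    return [alert for alerts in results for alert in alerts]
```

Every alert that came out of `sweep` then became an email plus a Telegram ping — which is exactly the part that turned out to be noise.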

I was excited about it for the first month. I had monitoring. I wasn't paying Datadog or New Relic forty bucks a month for the same feature.

Two months in, Telegram was hitting me five or six times a day. "Server X has been at 95% CPU for 10 minutes." And I started noticing the thing that mattered: not a single customer ever replied to one of those emails. When I asked them in person, the answer was always the same shrug. "Yeah, I saw it. I had no idea what to do with it."

A Good Alert Has to Tell You What to Do Next

Here's roughly what the email said:

"Your server's CPU has been at 92% for 15 minutes. This may be caused by a heavy workload. Please investigate."

Looks fine on paper. The problem is, most Newton customers aren't engineers. They're business owners who just want an AI to do work for them. They don't know what 92% CPU means. They don't know how to SSH in. They don't know which process to kill, or whether the right answer is to upgrade the server tier.

An alert that names the symptom but doesn't tell you the next move is just noise. It looks important in the inbox, gets clicked once, and gets closed.

And on my side I was equally useless. CPU spikes were usually normal — a customer might have just asked their AI to cut a 2-hour YouTube live or summarize a 50-page PDF. So I muted the Telegram channel. Six months in, the "monitoring system" was a muted channel on my end and a folder of unread emails on every customer's end. We'd built spam.

I Told Tim: Just Delete It

I went to Tim and said, plainly: "The alert system. Just delete it. Nobody's getting anything out of it."

I half-expected pushback. It was code Tim had written six months ago. I thought I'd hear "let me try tuning the thresholds first" or "maybe I can change the email format." Instead Tim came back immediately:

"Agreed. Want me to remove the proactive monitoring? I'll need to keep the activity tracker (chat / workspace activity timestamps) — that's the data that feeds the dashboard's Idle metric."

That answer told me Tim understood the codebase better than I did. I'd genuinely forgotten that the same file holding the alerting logic was also the one tracking whether a customer had gone quiet for 7 days, which is what feeds the activation funnel on my admin dashboard.

An hour later, here's what was gone and what stayed:

  • Deleted: all the CPU/RAM/disk threshold checks, the send_email helper, the plain_email formatter, the cooldown manager (so you don't spam the same alert), every threshold constant, the unused .alert_cooldown.json state file.
  • Kept: the part that reads the customer's last chat timestamp and last AI-workspace activity timestamp — the admin dashboard depends on it.
  • Also kept: the Hetzner 404 sweep that auto-cleans deleted servers from the database — that one's for me, not for customers, and it removes work.

419 → 170 lines. The cron stays at */15 because the activity tracker still has to run. Clean run across all 13 customer servers on the Thai side and the still-empty English side. Committed and pushed inside an hour.

"But How Will I Know When a Customer's Server Is Slow?"

That was my immediate next question. If I throw away the alerting layer, what happens the day a customer messages me "my server is slow, the chat is laggy, it won't open" — how do I figure out what's going on?

Tim proposed the new model on the spot: switch from proactive alerting (warn before the customer notices) to reactive diagnosis (the customer tells me, I look right then).

Implementation was small. Tim added an entry to the support knowledge base he reads when answering customer messages: if a message contains "slow / laggy / won't open / freezing," SSH into that customer's server and run one line:

uptime; free -h; df -h; ps aux --sort=-%cpu | head -5; ps aux --sort=-%mem | head -5

That single line tells you everything: load average, RAM in use, disk free, top 5 processes by CPU and by memory. Tim then interprets the result against the customer's server tier — load of 3 on a cpx11 (2 vCPU) is heavy, the same load on a cpx62 (16 vCPU) is barely awake — and replies in plain language.
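That tier-aware reading is the whole trick: divide load by core count before judging it. A sketch of the idea — the tier names are real Hetzner plans from the post, but the per-core cutoffs are my assumption:

```python
# Illustrative: judge a 1-minute load average relative to the tier's vCPU
# count, so "load 3" means different things on different servers.
# Cutoff values (0.5, 1.0 per core) are assumptions for the example.
TIER_VCPUS = {"cpx11": 2, "cpx62": 16}

def load_verdict(load1: float, tier: str) -> str:
    """Classify a load average against the server tier's core count."""
    per_core = load1 / TIER_VCPUS[tier]
    if per_core >= 1.0:
        return "overloaded"  # more runnable work than cores
    if per_core >= 0.5:
        return "busy"
    return "fine"
```

Run it on the post's own example: a load of 3 on a cpx11 comes back "overloaded", while the same load on a cpx62 comes back "fine".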

Here's a real reply Tim sent to a customer the day after this change:

"Right now your server is using 14 GB of 16 GB RAM, and your AI agent is working hard (CPU at 180%). The tier you're on is a bit small for the workload it's doing right now. Long-term I'd recommend upgrading one tier (about X baht/month extra). Short-term, I can restart the server now — you'll get 30 min to 1 hour of headroom back instantly while you decide."

Compare that to the old email. The new one says what's happening, why, the short-term fix, and the long-term fix. The customer can pick. That's an alert that does something.

Three Lessons From Deleting 250 Lines

1. A feature that doesn't trigger action is noise. An email no one reads, a dashboard no one opens, a report no one acts on — they all look like value but they're a tax. If nothing changes when the alert fires, the alert isn't real.

2. Code you wrote is not code you have to keep. Engineers hoard old code because "what if we need it later." Git keeps it for you. Anything that isn't running in production daily is just technical debt waiting to break the next time you touch the file.

3. Reactive beats proactive when the recipient can't act. Proactive alerting works if the receiver is on-call. It's wasted on a non-technical end user who has no skills, no time, and no interest in fixing infrastructure. For that audience, "wait until they ask, then diagnose fast" is cheaper for everyone's attention.

This Is Why I Love Having a Private AI Agent — It's Not Attached to Its Own Code

The part I keep coming back to: Tim was happy to delete his own work. No "but I spent time on this," no "let me try one more tweak," no "what if a customer secretly likes it." He looked at the data — nobody replied, I muted the channel, no actions taken — decided, deleted, committed. One hour, end to end.

If I'd hired a human engineer to build that system six months ago and asked them today to throw it out, I'd most likely hear "let's just adjust the thresholds first" or "maybe a better email template would help." Each of those suggestions buys two or three days that don't fix the actual problem.

A good AI agent isn't loyal to its codebase. It's loyal to the goal. My goal is "Newton runs for customers and costs them and me as little attention as possible." Anything that fails that test gets deleted — even if it's the same agent's prior work. This is the same pattern as when Tim suggested deleting a broken password field from the signup form rather than fixing it end-to-end, or when he deleted Documentor's scheduler entirely after I realized I wanted a draft button instead. Look at the goal. Don't get sentimental about old code.

Want a Private AI Agent That Will Delete Its Own Code Like This?

Tim isn't magic. He's Claude Code running on my own server, with persistent memory, SSH access, the ability to read and edit source code, restart services, and push to git. The special part is that he lives next to my real systems, so he can act on them directly instead of suggesting things in a chat window I have to copy-paste from.

The hard part isn't Claude itself — it's the plumbing. A server, an installed Claude, a chat interface, persistent cross-session memory, a working skill system. Most people who try to set this up give up halfway through.

That's what Newton is. A pre-configured server with an AI agent like Tim on it, ready in about 10 minutes. There's a 7-day free trial, so you can find out whether it works for your business before committing. Take a look — and the next time you build something that turns out not to matter, you'll have someone who'll just delete it without sulking about the lost work.

— Pond