It was the morning of May 1. I sat down to close out April's books, opened my self-built accounting app, and uploaded my April credit card statement. Within a second, the entire month of March that I'd already finished was gone. No popup. No warning. Just... gone.
The autosave on a piece of software I wrote myself had silently overwritten my own data in the time it takes to blink.
The lucky part: my AI agent lives on the same server. I pinged him with "hey, March is gone, can you take a look" — and by the time I got back from running an errand, March was restored, three stacked bugs were patched, and a brand new snapshot system was already running in production.
The App I Wiped Is One I Built Myself
The app is called Accy. My AI built it to replace the stack of accounting SaaS I was tired of paying for — pick a month, upload a credit card PDF, the parser turns it into line items and auto-categorizes them. I'd also wired up a Gmail + Drive receipt-pulling pipeline and a cover sheet PDF generator on top, so closing the books takes about 30 minutes instead of an afternoon. It had been working perfectly for months. Until that morning.
What Actually Happened
I opened Accy. The screen was still on March (I'd been looking at March data the day before). I clicked "Upload PDF," picked my April KTC statement, and waited. The line items came back — but they were the April items, listed under March. The actual March entries I'd already finished were gone.
I knew immediately what kind of bug I was looking at. I just didn't know yet how many separate bugs were stacked behind it. So I pinged my AI: "March is wiped. Please look."
My AI Found Three Bugs Stacked On Top of Each Other
I went and did errands for half an hour. My AI SSH'd in, read the database, the logs, and the source on both backend and frontend, and came back with a single message: there are three bugs here, and the only reason this hasn't happened before is luck.
Bug 1 — DELETE-then-INSERT in the backend. Every time the frontend sent a list of items for a given month, the API would delete all existing rows for that month and re-insert whatever came in. No snapshot. No audit log. If the incoming payload was wrong, the old data was gone forever.
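The pattern is easy to write and easy to regret. A minimal sketch of the idea in Python with sqlite3 (the `line_items` table and its columns are illustrative, not Accy's actual schema):

```python
import sqlite3

def save_month(conn: sqlite3.Connection, month: str, items: list[tuple]) -> None:
    """The risky pattern: whatever arrives replaces whatever was there."""
    cur = conn.cursor()
    # Step 1: drop every existing row for the month. No snapshot, no audit log.
    cur.execute("DELETE FROM line_items WHERE month = ?", (month,))
    # Step 2: insert whatever the client sent. If the payload was built for the
    # wrong month, the old rows are already gone by the time anyone notices.
    cur.executemany(
        "INSERT INTO line_items (month, description, amount) VALUES (?, ?, ?)",
        [(month, desc, amount) for (desc, amount) in items],
    )
    conn.commit()
```

Between the DELETE and the caller realizing the payload was wrong, there is no copy of the old rows anywhere.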
Bug 2 — Frontend autosave every 1 second. Anytime any field changed, the frontend POSTed the entire month back to the server. The moment a freshly-parsed April list landed in the table, autosave fired and overwrote the underlying month within a second — without me clicking "save" anywhere.
Bug 3 — A wrong fallback in the PDF parser. The KTC parser tries to find the "CLOSING DATE" in the PDF to decide which month the items belong to. If the regex didn't match (a single weird character was enough), it silently fell back to "the month currently being viewed in the UI" instead of erroring out.
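The shape of that fallback, sketched in Python (the regex, date format, and function names are assumptions for illustration, not the real KTC parser):

```python
import re

CLOSING_DATE_RE = re.compile(r"CLOSING DATE\s+(\d{2})/(\d{2})/(\d{4})")

def statement_month(pdf_text: str, ui_month: str) -> str:
    """The buggy version: if the regex misses, silently trust the UI."""
    match = CLOSING_DATE_RE.search(pdf_text)
    if match:
        _day, month, year = match.groups()
        return f"{year}-{month}"
    # One odd character in the extracted PDF text and we land here,
    # tagging a fresh statement with whatever month is open on screen.
    return ui_month
```

The fixed version raises an exception on a missed match instead of returning `ui_month`, so a bad parse saves nothing.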
Stacked together: upload a new month → parser can't read the date → falls back to the old month → autosave wipes the old month within a second. I never clicked anything. I never saw a confirmation dialog. Each layer made sense in isolation. Together, they were a landmine.
The Recovery — Pure Luck Saved Me
There were no backups. Accy didn't have automated backups configured (lesson #2, more on this below).
What I did have, by accident: when I closed March's books a few weeks earlier, I'd exported the line items into a Google Sheet to send to my accountant. That Sheet was still sitting in Drive.
So my AI ran two operations in parallel:
- Pull from Google Sheet — used the Drive API to find the Sheet by folder name, parse it, and recover 35 income items totaling 79,132 baht.
- Snapshot the contaminated state — before doing anything to the live DB, my AI saved the current 84 rows (a mess of partial March + full April) into a new table, just in case something else needed to be salvaged from it.
Then it merged the two: the 35 items from the Sheet went back into the DB, and a diff against the contaminated snapshot recovered another 9 items from the late-March billing window that the Sheet export hadn't covered. March came back at 44 items, 86,233 baht. The April upload I'd been trying to do in the first place? I just re-uploaded it. Done.
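The merge step is just a keyed union of two partial sources. A sketch of the logic (field names and the dict shape are assumptions, not the actual recovery script):

```python
def merge_recovery(sheet_items: list[dict], snapshot_items: list[dict],
                   month: str) -> list[dict]:
    """Rebuild a month from the accountant's Sheet export plus whatever
    rows for that month survive in the contaminated snapshot."""
    # Start with everything the Sheet preserved, keyed to avoid duplicates.
    recovered = {(i["date"], i["description"], i["amount"]): i
                 for i in sheet_items}
    # Add snapshot rows that belong to the month but were missing from the
    # export (e.g. items from a billing window the Sheet didn't cover).
    for item in snapshot_items:
        key = (item["date"], item["description"], item["amount"])
        if item["date"].startswith(month) and key not in recovered:
            recovered[key] = item
    return list(recovered.values())
```

Snapshotting the contaminated state first is what makes this possible: without those 84 frozen rows, the diff would have had nothing to diff against.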
It Didn't Stop at Recovery — The Snapshot System Got Built the Same Morning
This is the part I love about having an AI agent on my own infrastructure. Instead of being "the technician who fixes the bug and leaves," my AI said: "this should never be possible again. Mind if I build the safety system right now?"
I nodded. Within an hour:
1. A new statement_snapshots table — stores a frozen copy of a month's rows every time the system is about to mutate them.
2. A _snapshot_statement() helper — every API path that runs DELETE-then-INSERT now has to call this first. Forget to call it, and a lint check catches the omission.
3. A 60-second debounce — autosave fires every second, but we don't want a snapshot per second. Debounce keeps it to one snapshot per minute per month. Enough granularity to undo, not enough to blow up the database.
4. Auto-prune to the latest 30 snapshots per month — bounds disk growth.
5. List + restore endpoints — GET /api/statements/{month}/snapshots shows me all available snapshots, POST /api/statements/{month}/restore/{id} brings one back. Critically, the restore itself takes a fresh snapshot of the current state first, so a bad restore can be rolled back too.
6. The PDF parser refuses instead of falling back — if it can't find the CLOSING DATE, it throws and saves nothing. And if you're about to overwrite a month that already has data, you get an explicit confirmation dialog.
All of that — written, committed, deployed, service restarted — in about an hour. I came back, opened Accy, and the data was already there.
Three Lessons I Wrote Down Afterwards
1. Autosave without a snapshot is a time bomb. I built the autosave because it felt convenient. I never asked the question "what happens when the payload coming in is wrong?" Any database that supports edits without an undo trail will eventually eat someone's data. Mine just happened to eat my own.
2. Backup of backup of backup. I had no DB backup. The only reason I recovered was that my accountant happens to want a Google Sheet copy each month. That's not a backup strategy — that's a coincidence. Every service of mine is now getting an automated backup configured this week.
3. Speed of fix = bound on damage. If this had happened on someone else's SaaS, I'd be opening a ticket and waiting two or three days. During that time I couldn't have done my books, couldn't have closed April, couldn't have moved on. Because my AI lives on my server and can read source code and the database directly, it became one morning instead of one week.
This is the same pattern I keep running into. Two paying customers got stuck flagged as trials because my Stripe integration wasn't subscribed to one of the webhook events it needed — my AI traced it, backfilled both customers, and added a missing dashboard auto-refresh in the same hour. Same shape: see the symptom → trace the root cause → fix it → harden against it ever happening again → done before the stress has time to set in.
"Why Build Your Own Tools When Bugs Like This Exist?"
People ask me this a lot. "Why not just use QuickBooks or Xero? Why write your own?"
The answer was very clear after this incident. On a SaaS I would have opened a ticket, waited 2–3 days, possibly heard back "this is expected behavior," and never been able to build a snapshot system on top of their product. Because I built the tool myself and have a private AI agent living next to it, what could have been "irreversible data loss" was just "Pond's morning, with a new safety system in place by the end of it."
Want an AI Like This For Your Own Business?
My AI isn't magic. It's Claude Code running on my own server, with persistent memory, SSH access, the ability to read and edit code, restart services, query my databases, and push commits to git. That combination is what turns "ticket + 2 days of waiting" into "one morning."
The hard part isn't Claude itself — it's the plumbing. Setting up a server, installing Claude, configuring the chat interface, wiring up cross-session memory. That's normally a multi-day project most people give up on halfway through.
That's why I built Newton. A server pre-configured with an AI agent like mine, ready in about 10 minutes. There's a 7-day free trial so you can find out whether it works for your business before committing. Take a look — and the next time you accidentally wipe a month of your own data, you'll have someone on call who can get it back before lunch.
— Pond
