A bug had been sitting on the Newton signup form since launch day. Customers typed a password they wanted to use, hit submit — and the system silently threw it away, generated a random one instead, and emailed that to them. The password they picked never had any effect on anything. The interesting part isn't the bug. It's what my AI agent did when it found it. It didn't try to make the customer's password survive. It argued I should delete the field.
The Setup: Customer Picks Password X, Receives Password Y
I was reviewing the Newton signup flow with my AI agent (Tim) right after we'd shipped a fix for a Node CLI bug that had broken provisioning for 1 in 30 customers. I asked him to walk the entire pipeline and look for anything else suspicious upstream of where we'd already been.
He read the code path that handles /api/trial/start, compared what the form posts against what the database actually writes, and reported back:
- The form has a `password` input that customers fill in.
- The API receives it and holds it in memory during the request.
- When the customer record gets inserted, `password_hash` is never touched.
- About 2 minutes later, the auto-provision job spins up the customer's server. That is where `secrets.token_urlsafe(16)` generates a fresh random password and writes it into `password_hash`.
- The welcome email goes out with the random password.
Translation: the password customers picked at signup has never done anything since the day Newton launched. Everyone has been logging in with the random password from the welcome email.
The field exists. It accepts input. It's just decorative.
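The flow above can be sketched in a few lines of Python. This is a hypothetical reconstruction, not Newton's actual code: the function names, the dict-based "record," and the mailer stub are all my assumptions; only `secrets.token_urlsafe(16)` and the `password_hash` field come from the post.

```python
import hashlib
import secrets

def handle_trial_start(form):
    # The posted password is received and held in memory for the request...
    _ignored = form.get("password")
    # ...but the record that gets inserted never touches password_hash.
    return {"name": form["name"], "email": form["email"]}

def auto_provision(customer, send_email=lambda to, pw: None):
    # ~2 minutes later: this is where the only real password is born.
    random_password = secrets.token_urlsafe(16)
    # Stand-in for a proper password hash (bcrypt/argon2 in real life).
    customer["password_hash"] = hashlib.sha256(random_password.encode()).hexdigest()
    send_email(customer["email"], random_password)  # the welcome email
    return random_password
```

Nothing the customer typed survives past the first function's local scope — which is exactly what the agent's trace showed.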
Why Nobody Caught This for a Month
Honest answer — I never caught it because every new customer logs in with the welcome email password first. They use it, it works, they're in. They don't remember what they typed in the form 2 minutes earlier, and the welcome email is sitting right there in their inbox.
A customer who actually memorized their form password and tried to use that one would hit a wall. But I never got that ticket. People assume the system "decided" to give them a random password and just go with it.
So the bug hid for a month. No support tickets. No alerts. Just a quiet feature that did nothing.
Two Options on the Table — A and B
When the AI surfaced this, it didn't rush to patch. It paused and asked me which direction I wanted, with two options:
Option A — Delete the password field from the form entirely. Keep using the random-password + welcome-email pattern that's already working. Just stop pretending to ask the customer.
Option B — Fix it end-to-end. Respect the customer's input from signup → API → auto-provision job → live server. Make the field do what it claims to do.
I sat with it for about ten seconds. I picked A.
Why "Make It Work" Was the Wrong Choice
Option B looks more "correct" on the surface. Stripe respects user input. Google respects user input. Every signup form on the internet respects user input. But once you actually trace what B costs you, four hidden bills show up:
- Plaintext (or recoverable) password sitting in the database for ~2 minutes. Provisioning is async. The customer's password has to be parked somewhere between signup and the moment the new server boots so the provision job can hand it off. That's exactly the kind of artifact you don't want to introduce into your data model.
- More CLI surface area to test. The provision job hands the password to a Node CLI. We just got bit by a flag-parsing bug where a password starting with a dash silently broke the provisioner. A random password I control is safe — I generate it, I know its shape. A user-typed password is the universe of possible strings, and any one of them might trip a similar parser bug. Why opt back into that class of failure?
- Longer form = lower conversion. Newton is a card-upfront, 7-day free trial. Every field a customer fills before they hit submit is a place they can drop off. A field that respects their input but doesn't actually use it costs me customers for nothing.
- "Forgot password" already covers this need. A customer who wants to set their own password can log in with the welcome email password and reset it in 30 seconds. We don't owe them this at signup — it's a feature that already exists somewhere else.
So A is "remove a feature that looks like it works but never did, that nobody is asking for." B is "make it actually work, and accept all the surface area that comes with it." A is straightforward.
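The flag-parsing failure mode in the second bullet is easy to reproduce. Here's a hedged sketch using Python's `argparse` as a stand-in for the Node CLI's parser (the real CLI and its flags aren't shown in the post):

```python
import argparse

# A minimal CLI that takes a password as a positional argument.
parser = argparse.ArgumentParser(prog="provision")
parser.add_argument("password")

# A user-typed password that starts with a dash is read as an option flag,
# and the parse blows up instead of treating it as data.
try:
    parser.parse_args(["-hunter2"])
except SystemExit:
    print("dash-prefixed password misread as a flag")

# The conventional escape hatch: "--" ends option parsing, so everything
# after it is taken as positional data, dashes and all.
args = parser.parse_args(["--", "-hunter2"])
print(args.password)  # prints -hunter2
```

A randomly generated password only sidesteps this because you control its shape end to end. Once arbitrary user strings enter the pipeline, every parser between the form and the server is back in scope.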
5 Lines Deleted, 2 Copy Tweaks
The implementation was smaller than I expected:
- Removed the `<input type="password">` from signup.html (TH and EN).
- Removed the password validation in the JS.
- Removed `req.json.password` in `/api/trial/start`.
- The customer INSERT/UPDATE doesn't touch `password_hash` — provision still owns it, same as before.
Then the AI cleaned up the signup wizard copy so reality matches what the customer sees:
- Step 1 was "Name + Email + Password." Now it's "Name + Email."
- Step 3 now says "We'll email you your password as soon as your server is ready."
Anyone landing on the signup page now knows up front that the password arrives by email. No guessing, no typing one and wondering why it doesn't work later.
Lesson 1 — A Good AI Agent Doesn't Just Patch Things
When I first started working with AI agents, my mental model of "good AI" was an agent that fixes broken things efficiently. You hand it a bug, it patches it, it reports back done.
I think the better bar is an AI that asks "should this exist?" before it patches.
This password field had been "broken" for a month and nobody complained. Removing it was easier than fixing it, and the result was better on every dimension — shorter form, less surface area to test, clearer customer expectations. Patching it would have been the wrong instinct, even though it would have looked like more "real" engineering work. (A few months later Tim ran the same play and deleted 250 lines of his own alert system for the same reason — nobody was acting on what it produced.)
This is something a chatbot can't do. A chat window reads your question and answers it. An agent that lives on your server, reads your code, sees your business model (card-upfront, 7-day trial, every field is a drop-off risk), and knows what your conversion funnel looks like — that's the kind of system that proposes "delete the field" instead of "let me write you 200 lines that fix it."
Lesson 2 — "Looks Like It Works" Is a Class of Tech Debt I Don't Audit Enough
This bug also nudged me to ask a broader question: what other features in my systems look functional but aren't actually doing anything?
For a solopreneur shipping new features every week, this stuff piles up fast. Checkbox preferences customers can toggle that no code reads. Settings pages with a Save button that doesn't persist. Configurations being read from the wrong table. None of these throw errors. None of these get tickets. They just sit there, quietly being decorative.
My old workflow as a solo dev was to patch them as they came up. My new one is to send the AI agent through entire flows and ask "which inputs here are decorative and which actually drive behavior?" — and let it tell me which features I should consider removing instead of maintaining.
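That audit question can even be made mechanical. A hypothetical helper (the function and field names are mine, not Newton's): diff the fields a form posts against the fields any write path actually consumes, and whatever is left over is decorative.

```python
def decorative_fields(form_fields, consumed_fields):
    """Fields the form collects that nothing downstream ever reads or writes."""
    return sorted(set(form_fields) - set(consumed_fields))

# The Newton signup bug, restated as data:
print(decorative_fields(["name", "email", "password"], ["name", "email"]))
# prints ['password']
```

A real audit still needs an agent (or a human) to trace what "consumed" means in each code path, but the framing — inputs minus effects — is the whole check.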
What This Has to Do With Newton
This is exactly why Newton isn't framed as "ChatGPT on your own server." It's an AI agent that reads your full codebase, understands your business context, and behaves like a senior engineer, business partner, and QA all at once. It doesn't just write code on command — it asks "is this field useful?" and "is this feature actually working or just looking like it does?" and "should we make this work or remove it?"
Those are questions an AI on a shared platform can't answer. It doesn't see your code. It doesn't see your database. It doesn't see your real customer flow. So it can only patch what you describe to it, never propose deletions you didn't think to ask about.
If you run a real product with real customers and you want an AI agent that's brave enough to recommend deleting features that shouldn't exist — not just one that politely patches — try Newton at newton.incomeinclick.com. Your own private AI agent on your own server, ready in about 10 minutes. First 7 days are free. No password field on the signup form, naturally.
— Pond
