A real Newton support ticket. My AI agent replied with a 4-section dev-style menu — React, Tailwind, nginx, DevOps, plus 2 external doc links. The customer wasn't a developer. They opened the email and disappeared. Here's the bug, and why the fix was a 20-line edit to the prompt, not a hand-written reply.
How Tim Handles Support
I don't open Newton support tickets myself anymore. Newton has a built-in ticket system — a customer sends in a question, and Tim (my AI agent on the same server as Newton) reads the ticket, looks up the customer in the database, checks their subscription tier and recent server activity, and writes a reply in their language. Then it sends.
I get one Telegram notification: "ticket closed." I read it. I move on. The big case where Tim debugged a production bug, shipped a fix, and replied to the customer in one hour ran on the same flow.
The Reply That Made Me Freeze
The other day I went back to read what Tim had been sending — just to gut-check the tone. One ticket made me stop for 2 seconds.
The customer's question was one sentence: "What can Newton do? What features does it have?"
Tim's reply was a 4-section menu, 40+ lines long:
- Build websites with HTML + CSS
- React + Tailwind CSS
- Deploy via nginx + systemd
- DevOps work like setting up monitoring
Plus 2 links to docs.anthropic.com. Closed with a friendly note about Git workflow.
Here's the catch: this customer wasn't a developer. Their signup profile said "online shop owner who wants AI to help with content and customer replies." They opened that email, saw "React, nginx, DevOps, Git workflow," and quietly closed it. Never replied. Never asked a follow-up question. Gone.
The Bug Wasn't Wrong Information — It Was Wrong Audience
I want to be clear about this: Tim wasn't technically wrong. If you handed that same question to a developer, the answer would have been complete and accurate. Frontend, backend, infrastructure, DevOps — all covered. It would score full marks on a technical exam.
It was wrong for this audience.
The shop owner didn't want React and nginx. They wanted to hear something like: "It can write your daily Facebook and Instagram posts. It can reply to customer DMs. It can design promo posters. It can pull lapsed customer data and send retention messages." The same product, described in the language they actually use with their own customers.
I opened the prompt file Tim uses for support replies and found the problem in one line: "reply in a professional, friendly, concise manner."
That was the entire instruction. No guardrails on tone. No banned words. No rule about matching the customer's level. So Tim defaulted to "list everything you know" — which is the natural mode for an LLM when the question is open and the prompt is vague. "Professional, friendly, concise" can mean a hundred different things, and Tim picked the most comprehensive interpretation.
Fix the Prompt, Not the Email
The intuitive move is to find the customer, write a new reply by hand, and send it. That'd close this one ticket. It would also leave the bug in the system. The next non-dev customer asking an open question would get the exact same dev-menu treatment.
So instead I went and edited the prompt. Five new rules:
1. No Internal Jargon
Banned words unless the customer used them first: "Claude Code" (that's the engine running underneath — customers don't need to know), "VPS" (just say "server"), "terminal," "CLI," "shell," "systemd," "nginx," "git push." If a non-dev sees those words, they assume the product isn't for them and stop reading.
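A rule like this is easy to spot-check mechanically. Here's a minimal sketch of a jargon lint: it flags banned terms in a draft reply unless the customer used the term first. The function name and word list are illustrative, not the actual Newton/Tim implementation.

```python
# Hypothetical guardrail check: flag internal jargon in a draft reply
# unless the customer's own message used the term first.
BANNED_JARGON = {
    "claude code", "vps", "terminal", "cli", "shell",
    "systemd", "nginx", "git push",
}

def jargon_violations(draft: str, customer_message: str) -> list[str]:
    """Return banned terms that appear in the draft but not in the customer's message."""
    draft_lower = draft.lower()
    customer_lower = customer_message.lower()
    return sorted(
        term for term in BANNED_JARGON
        if term in draft_lower and term not in customer_lower
    )
```

The "unless they used it first" clause matters: a developer customer who writes "how do I configure nginx?" should get an answer that says nginx.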
2. Match the Length
One-sentence question → 4-5 sentence reply. One paragraph → don't write ten. Skilled human support agents do this naturally; AI has to be told. Long replies to short questions feel like dumping, not helping.
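In the prompt this is stated as a plain rule, but the heuristic behind it is roughly this sketch (the sentence-counting and the exact caps are my invention, not the real prompt):

```python
# Hypothetical length-matching heuristic: size the reply to the question.
def reply_sentence_budget(question: str) -> int:
    """Roughly count sentences in the question, return a cap for the reply."""
    sentences = sum(question.count(p) for p in ".?!") or 1
    if sentences <= 1:
        return 5  # one-sentence question -> 4-5 sentence reply
    return min(3 * sentences, 12)  # never balloon past short-email length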
3. Open Questions Get 2-3 Examples + a Question Back
If a customer asks "what can it do?" or "how do I use it?" — never list every feature. Pick 2-3 examples relevant to their profile (which Tim already has access to via the database), then close with "which one would you like to try first?" The question back keeps the conversation alive and produces real signal about what they actually want.
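To make the rule concrete, here's a sketch of what "2-3 profile-relevant examples plus a question back" produces. The tag-to-example mapping and the profile shape are invented for illustration; Tim does this inside the prompt, not with code like this.

```python
# Hypothetical sketch of rule 3: pick up to three feature examples that
# match the customer's profile tags, then end with a question back.
FEATURE_EXAMPLES = {
    "ecommerce": "write your daily Facebook and Instagram posts",
    "support":   "reply to customer DMs for you",
    "design":    "design promo posters",
    "marketing": "pull lapsed customer data and send retention messages",
    "dev":       "build and deploy web pages on your server",
}

def open_question_reply(profile_tags: list[str]) -> str:
    picks = [FEATURE_EXAMPLES[t] for t in profile_tags if t in FEATURE_EXAMPLES][:3]
    if not picks:
        picks = list(FEATURE_EXAMPLES.values())[:2]  # generic fallback
    bullets = "".join(f"- It can {p}.\n" for p in picks)
    return bullets + "Which one would you like to try first?"
```

For the shop owner in this story, tags like `["ecommerce", "support", "design"]` would produce exactly the kind of reply they needed, and zero mentions of React or nginx.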
4. No External Doc Links
If a doc reference is needed, link to our own at newton.incomeinclick.com. If we don't have one, explain it inline. Two reasons: (1) customers who leave to read someone else's docs come back at much lower rates, and (2) external dev docs are usually more technical than what we wrote, which makes the problem worse, not better.
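This one is also easy to verify mechanically. A minimal sketch of a link check, using the docs domain from above (the regex and function are illustrative, not the production code):

```python
import re

# Hypothetical link check for rule 4: flag any URL in a draft that
# points outside our own docs domain.
OWN_DOMAIN = "newton.incomeinclick.com"

def external_links(draft: str) -> list[str]:
    """Return URLs in the draft that are not on our own docs domain."""
    urls = re.findall(r"https?://[^\s)>\]]+", draft)
    return [u for u in urls if OWN_DOMAIN not in u]
```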
5. Plain Text Only
No HTML, no code blocks, no rendered tables. Some email clients render markup wrong. And honestly — plain text reads as more sincere. Replies with formatted blocks and bullet styling can feel auto-generated even when they aren't.
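Condensed, the addition to the prompt file might look something like this. This is a reconstruction of the five rules in prompt form, not the actual file:

```
## Support reply rules (sketch, not the actual prompt file)
1. Never use internal jargon ("Claude Code", "VPS", "terminal", "CLI",
   "shell", "systemd", "nginx", "git push") unless the customer used
   the term first. Say "server", not "VPS".
2. Match the customer's length: a one-sentence question gets a
   4-5 sentence reply. Never answer one paragraph with ten.
3. For open questions ("what can it do?"), give 2-3 examples relevant
   to the customer's profile, then ask "which one would you like to
   try first?" Never list every feature.
4. Never link to external docs. Link to newton.incomeinclick.com,
   or explain inline if we have no doc for it.
5. Plain text only: no HTML, no code blocks, no tables.
```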
The Result
I added 20 lines to the support prompt, committed, restarted the service. Every reply Tim has sent since has the new tone. No internal jargon. Replies sized to questions. Open questions get examples plus a follow-up. No external links. Plain text.
I haven't had to fix another support email by hand. The whole point of running an AI on systems you control is exactly this: fix it once at the source, never fix it case-by-case again. That's the difference between editing the output and editing the system.
Why Owning the Prompt Is the Killer Feature
If you use ChatGPT or Claude through their normal chat interfaces, you cannot write a permanent prompt that they remember across every session. Every conversation resets. Every user gets the same shared brain that OpenAI or Anthropic trained for the average use case. They have to be safe and useful for everyone, which means they can't be tuned for your specific customers.
You can't tell ChatGPT "never use the word 'Claude Code' when speaking to my customers." You can't tell Claude "for open-ended questions, ask back instead of dumping every feature." You can't enforce "always plain text, never markdown." Even if you put it in the system prompt of a session, the next session forgets.
But if your AI agent runs on your own server, the prompt is a file you own. You edit it once and every behavior downstream of it changes. One customer hits a bad experience? Edit the prompt. The next 100 customers never hit it. Your AI gets better in a way that compounds — like onboarding a new team member who actually remembers feedback after the first time you give it.
This is the same lesson I keep running into in different forms. I build my own tools instead of paying for SaaS because I can change them when I need to. I built Newton as a managed AI server platform for the same reason — you don't get to edit the system prompt of a shared platform. You only get to use it as-is.
Three Lessons From This One Bug
1. An AI giving wrong-feeling answers is usually a prompt problem, not a model problem. Before you swap models or pay for a more expensive one, read the prompt. Most "the AI is bad at X" complaints I've seen are missing 5 lines of guardrails.
2. Always fix at the source, never at the output. Fixing one reply solves one ticket. Fixing the prompt solves every future ticket. The second option costs the same amount of time and produces compounding returns. (The same logic applies to bugs in the product — I had a case where my AI fixed a render bug at the source and pushed it to all 6 customer servers in under an hour.)
3. The ability to teach your AI to talk to your customers is something a shared platform cannot give you. ChatGPT speaks to everyone the same way. Your business doesn't speak to everyone the same way. That gap is real and it shows up in churn.
That's the core reason I built Newton — a managed AI agent on your own server. The agent comes pre-configured, set up in 10 minutes, no server knowledge needed. But the prompts, the rules, the personality, the guardrails — those are all editable files you own. The first time it gets the tone wrong with one of your customers, you teach it once. Every reply after that has the right tone. That kind of compounding fix is impossible on someone else's platform — and it's the whole reason owning your AI agent matters.
— Pond
