This happened yesterday morning. I woke up, opened TikTok and Instagram to check the 7 short clips my AI agent had cut from the previous night's live — all 7 came out as blurry 360x640 vertical video. My live was HD 1080p. I opened the terminal, told my AI "clips are blurry, go figure out why," and didn't touch anything else. Less than an hour later it had traced the bug to a Facebook Graph API limit, rewritten the downloader, and pushed the fix.
Wake Up to 7 Blurry Clips in a Row
For context: I have a system where one button cuts a 79-minute live into 28 social posts across 4 platforms. After every live I tap "Mark Done" and the AI takes it from there — downloads the VOD, picks the best 7 moments, cuts them, writes captions in two languages, schedules everything for the week. Hands off.
I went to sleep, the system ran, and by morning everything was queued and posting.
I opened TikTok to check clip one. 360x640. Soft, washed out, you can read the words on screen but only just. Clip two. Same. Clip three… all 7 of them were 360p when the live had been a clean 1080p HD stream.
My instinct was to blame the cutter — assuming ffmpeg was set to a low resolution somewhere and that I'd need to dig through a config file. I almost opened the file myself. Then I caught myself and did what I'm supposed to do at this point: I just told the AI.
I opened the terminal, dropped a one-line message — "clips are blurry, go check why" — and stopped.
The AI Walked Through It Like a Senior Dev Would
The thing I love about watching my agent debug is that it doesn't guess and patch. It works backward from evidence. Here's what it did:
1. Inspect the source file before touching anything else. It ran ffprobe on the original video that the server had downloaded from Facebook. The file itself was already 360x640. That immediately ruled out the cutter — the input was bad before any clipping happened. If I'd jumped straight to ffmpeg I would have lost half the morning.
2. Check what the Graph API actually returned. It hit Facebook's Graph API directly with ?fields=source against the live video ID. That endpoint returned a 360x640 URL. Not just for this one — for every test the agent ran. source never returns HD.
3. Try other fields that might expose HD. It queried ?fields=format and confirmed Facebook actually knows the native resolution is 1080x1920 — the metadata is right there. But the format field doesn't include a download URL for that quality. You can see the resolution number; you can't get the file.
4. Verify with the community before declaring it a wall. It searched Stack Overflow and GitHub issues to see if this was a documented platform limit or a misconfiguration on my end. Multiple threads confirmed: Facebook Graph API intentionally never exposes the HD URL for live videos. Not a bug, not a permission, not something you can negotiate. A platform decision.
5. Switch tools instead of fighting the API. It rewrote the downloader to use yt-dlp against the live video's permalink_url, with a format selector pinning the output to bv*[height<=1080]+ba. yt-dlp is an open-source video downloader that loads pages the way a browser does — so it sees the actual HD stream a viewer would see, not the SD URL the API exposes.
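Step 1 — checking the file before blaming the pipeline — boils down to one ffprobe call. Here's a minimal Python sketch of that check. The helper names (`probe_resolution`, `parse_resolution`) are mine, not the agent's actual code, and it assumes ffprobe is on your PATH:

```python
import json
import subprocess

def parse_resolution(ffprobe_json: str) -> tuple[int, int]:
    """Pull (width, height) of the first video stream out of ffprobe's JSON output."""
    stream = json.loads(ffprobe_json)["streams"][0]
    return stream["width"], stream["height"]

def probe_resolution(path: str) -> tuple[int, int]:
    """Run ffprobe on a local file and return its video resolution."""
    cmd = [
        "ffprobe", "-v", "quiet",
        "-print_format", "json",       # machine-readable output
        "-show_streams",
        "-select_streams", "v:0",      # first video stream only
        path,
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    return parse_resolution(out)
```

If this returns (360, 640) on the file the server downloaded, the cutter is already ruled out — the input was bad before any clipping happened.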
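Steps 2 and 3 are the same request with different `fields` values. A rough sketch of what that comparison looks like — the Graph API version (v19.0) and helper names are my assumptions, and the `format` parsing assumes the documented shape of that field (an array of renditions with `width`/`height`):

```python
from urllib.parse import urlencode

GRAPH = "https://graph.facebook.com/v19.0"

def graph_url(video_id: str, fields: str, token: str) -> str:
    """Build a Graph API URL, e.g. fields='source' (download URL, SD only for
    live VODs) or fields='format' (per-quality metadata, no download URL)."""
    return f"{GRAPH}/{video_id}?" + urlencode({"fields": fields, "access_token": token})

def best_format_resolution(video: dict) -> tuple[int, int]:
    """From a response with the 'format' field, return the largest resolution
    Facebook reports — this is how you see it *knows* the video is 1080x1920."""
    best = max(video["format"], key=lambda f: f["width"] * f["height"])
    return best["width"], best["height"]
```

The frustrating part the agent surfaced: `best_format_resolution` proves the HD rendition exists in the metadata, while the `source` URL you can actually download stays 360p.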
Code Changed, Pushed, Re-Run — All in the Terminal
Once the fix was clear, the agent committed the change to download_vod() in scheduler.py, deleted the old cached SD file, and re-ran the pipeline from scratch. The output came back at 1080x1920, sharp.
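For illustration, the rewritten downloader probably looks something like this. This is my sketch, not the actual scheduler.py code — `build_ydl_opts` is a hypothetical helper, and it assumes yt-dlp is installed (`pip install yt-dlp`):

```python
def build_ydl_opts(out_path: str) -> dict:
    """yt-dlp options: best video stream up to 1080p plus best audio, merged to mp4."""
    return {
        "format": "bv*[height<=1080]+ba",
        "merge_output_format": "mp4",
        "outtmpl": out_path,
    }

def download_vod(permalink_url: str, out_path: str) -> None:
    """Download a live VOD in HD from its permalink_url instead of the
    Graph API's SD-only `source` field."""
    import yt_dlp  # imported lazily so the rest of the module works without it

    with yt_dlp.YoutubeDL(build_ydl_opts(out_path)) as ydl:
        ydl.download([permalink_url])
```

The key design choice is the input: the permalink is the public watch page a viewer would open, so yt-dlp sees the same HD stream a browser does, bypassing the API limit entirely.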
Then it did the part most "AI assistants" can't do: git add, git commit -m "Switch live VOD download from FB Graph source to yt-dlp permalink for HD", git push. The fix went into the repo without me opening it.
From the moment I typed "clips are blurry" to "ticket closed, here's a sample of the new 1080p clips" — under an hour. If you'd handed this to me solo, I'd have lost half a day on the wrong fix (ffmpeg) and another half on reading yt-dlp documentation.
Why "Open a Terminal" Is the Whole Game
If I'd asked ChatGPT "why are my Facebook live clips coming out 360p," I'd have gotten a polite, well-formatted answer. Maybe even the right answer eventually. But I'd still have to do every action in the chain myself — open the terminal, run ffprobe, hit the Graph API, read Stack Overflow, install yt-dlp, edit scheduler.py, run the pipeline, commit, push.
An AI agent on your own server doesn't describe the work. It does the work.
That distinction is the entire reason I keep telling people the difference between an AI chatbot and an AI agent matters more than they think. ChatGPT is a smart consultant in your browser. An AI on a server you own is a junior engineer with shell access — small in IQ terms, but it can act.
For the record: the same agent already SSHed into six customer servers and patched a chat-render bug across all of them in under an hour, debugged a Newton support ticket end-to-end in production, and caught a deprecated FB Marketing API field hiding behind a misleading "Invalid Event Name" error. This blurry-clip case is the same pattern at a smaller scale.
What Made This Specifically Hard for a Human
Here's the part I want to stress, because it's the kind of thing that bites you when you're not a full-time engineer:
The Facebook Graph API responded normally. No error, no warning, no "you're getting the wrong thing." The source field returned a real, valid video URL. It was simply 360p, silently. There was no documentation in the obvious places that said "this field will never give you HD." You only learn it by trial — or by reading a long Stack Overflow thread written by someone who'd already lost their day.
This is the kind of bug that wastes a non-engineer's time the most. You don't know what you don't know. You assume the platform is giving you the best version because there's no other option visible. The only way to learn it is to dig.
The agent dug for me. That's the actual unlock.
What This Means If You Run a Business and You're Not a Developer
I think about this every time I tell another business owner what I'm doing. If you're running a shop, an agency, an online store — you're hitting bugs like this all the time. APIs return weird data, plugins behave unexpectedly, exports are missing fields. You either learn to debug it (slow) or you pay someone to debug it (expensive) or you give up on the workflow.
An AI agent on your own server is a fourth option. You describe the symptom in plain language. The agent investigates, finds the cause, fixes the cause, and tells you what it did. You don't need to know what yt-dlp is. You don't need to know what a permalink is. You just need to say "this is blurry, go check it."
And because the agent runs on your server, you keep the audit trail. Every command it ran is in the shell history. Every code change is in your git repo. You can roll back if you don't like it. You can read the commit. You're still the owner — you just outsourced the digging to something that does it 24/7 for the cost of a server.
Get Your Own AI Agent on Your Own Server
If you're reading this and thinking "I want a setup like that, but I don't want to build it from scratch" — that's exactly what I built Newton for. It's a managed AI server: you get a private VPS plus an AI agent already configured with terminal access, ready to take instructions in plain language. Setup takes 10 minutes. You don't need to know servers and you don't need to write code.
Everything I described in this post — opening a terminal, hitting an API, reading docs, editing code, pushing to git — is what every Newton customer's agent can already do on day one. The blurry-clip story isn't a future feature; it's just what happens when your AI lives on a machine where it's allowed to act, instead of inside someone else's chat box.
— Pond
