Have a look at the bottom of the security advisory for the SQL injection in Ghost's Content API from a couple of weeks ago. There's a line there that would have been impossible to write two years ago:
> We thank Nicholas Carlini using Claude, Anthropic for disclosing this vulnerability responsibly.
In my eyes, this is good news. Security researchers use the best tools available to find vulnerabilities. They disclose them responsibly. Ghost handled it well and shipped a patch within days. Open source software will become better in the long run because of AI research tools.
At the same time, it's also kind of terrifying. Because the bug Claude found had been sitting in Ghost's Content API since version 3.x. That's years of potential exposure. It sat there because hunting for it was expensive – it required a curious human to dig into a specific code path. But as we all know, that's not how it works anymore. You can point a capable LLM at a codebase now and get back a list of suspicious patterns within minutes.
Some of those patterns (most?) are false positives. But some aren't.
And while AI is making the discovery of vulnerabilities cheaper and cheaper, it is also – on the other end of the equation – making the deployment of the software those vulnerabilities live in cheaper and cheaper. You can paste a Docker Compose file into Claude or ChatGPT, get a full Ghost instance running on a $5 VPS, point a domain at it, and be live in an afternoon.
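To make "live in an afternoon" concrete: what the chat session hands back is a couple dozen lines of YAML. Here's a minimal sketch, modelled on the official `ghost` image docs on Docker Hub – every value is a placeholder, and notice what it doesn't cover: TLS, backups, transactional email, or any update strategy.

```yaml
# Minimal sketch, modelled on the official ghost image docs on Docker Hub.
# All credentials and hostnames are placeholders.
services:
  ghost:
    image: ghost:5-alpine
    restart: always
    ports:
      - "80:2368"
    environment:
      url: http://example.com
      database__client: mysql
      database__connection__host: db
      database__connection__user: root
      database__connection__password: example
      database__connection__database: ghost
    volumes:
      - ghost_content:/var/lib/ghost/content
  db:
    image: mysql:8
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - db_data:/var/lib/mysql
volumes:
  ghost_content:
  db_data:
```

That's genuinely all it takes to get to "your site is online". It's also all most people ever see.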
What AI didn't cheapen, and shows no sign of cheapening, is maintaining it.
That's the asymmetry I want to talk about. And I think it's a much bigger problem for self-hosted software than people are admitting.
I run a managed hosting service for Ghost. We'll soon cross 1,400 live sites on our platform. So I see this from a vantage point most people don't.
When a Ghost CVE drops, the patched version rolls out across all 1,400 sites within an hour or two. Not because we're oh-so amazing. Because that's what you pay a managed host for. There's monitoring. There's alerting. There's a CI/CD pipeline to roll out updates. There's a person whose actual job is to read the security disclosures and act on them before customers even know they exist.
If you self-host, all of that is part of your job. But the AI that deployed your Ghost site never told you that when it set it up.
A few days ago I read a GitHub issue from somebody whose self-hosted Ghost site had been compromised by exactly the bug Carlini disclosed. The attacker exploited the SQL injection to extract an admin API key directly from the database, then used that key to inject code into every post on the blog – code that served a fake captcha to every reader. The captcha prompted Windows users to paste a command into their terminal. The command installed malware.
The site itself was running Ghost 5.97 – a version released in October 2024, nearly a year and a half ago.
I'm not telling that story to dunk on the operator. They were a technically competent, experienced developer, and they still missed it. They're the kind of person Ghost's security model was actually built for: somebody who could read a CVE and act on it. They missed the patch anyway, because they weren't paying attention that week.
If they're the upper bound, what does the lower bound look like?
The lower bound is the person who got their Ghost site running last month from an AI chat session and has not thought about it since.
That's the part of our community that's growing fast. And nobody is really talking about what happens to them when the next major security vulnerability hits.
One thing I have heard several times over the last few weeks: "Why hasn't my Ghost instance notified me about the vulnerability?"
Well, Ghost actually does try to email admins about critical security updates. It's a feature in the codebase. It's enabled by default.
But, for that alert to actually reach you, four things need to be true:
- `privacy.useUpdateCheck` needs to be enabled. (It is, by default. But if you copy-pasted a config from a tutorial that turned it off, well.)
- Your transactional email needs to be working. (Separate config. Easy to forget. Even easier to never set up properly in the first place, if you're on an old Ghost version that doesn't enforce it.)
- The admin email on file needs to be one you actually read.
- You need to act on the email within hours – not days – because the gap between disclosure and exploitation has gotten really, really small.
That's a four-link chain. Every link assumes somebody is paying attention.
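For reference, the first two links live in Ghost's `config.production.json`. A rough sketch – the hostnames and credentials are placeholders, and your mail provider's settings will differ:

```json
{
  "url": "https://example.com",
  "privacy": {
    "useUpdateCheck": true
  },
  "mail": {
    "transport": "SMTP",
    "options": {
      "host": "smtp.example.com",
      "port": 587,
      "auth": {
        "user": "postmaster@example.com",
        "pass": "REPLACE_ME"
      }
    }
  }
}
```

If `useUpdateCheck` is false, or the `mail` block is missing or wrong, the chain breaks at link one or two and the alert never leaves the building.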
This chain held up reasonably well when the operators of self-hosted Ghost sites were mainly technically capable users. Setting up Ghost used to be a filter (and yes, gatekeeping to a certain degree): if you got it running, you'd probably configured transactional email, because you'd had to. You knew what an admin API key was and how to guard it. You were curious about the software itself, so you kept up with news around it.
That filter no longer exists. Capable LLMs democratize access to self-hosted software. But the chat session that gets you to "your site is online" doesn't necessarily tell you about transactional email. It doesn't tell you about CVEs. It doesn't tell you that the deployment was the easy part – and just the very beginning of what you're getting into.
There's another pattern I've been seeing over the last few months. People run into a problem with their site hosted on Magic Pages – emails not sending, layout looking weird, an integration broken – and instead of emailing us directly, they ask Claude or ChatGPT what to do. That's fine. It also points to gaps in our own documentation.
The issue is what comes next.
These conversations end up in our inbox a few hours later, and they all read the same. There's a polite description of the symptom. Then a numbered list of "things to try." Then a closing note asking us to confirm when it's done. The whole email is recognisably AI-generated. We've seen enough of them now that we can usually tell within the first sentence.
The problem isn't that people are using LLMs. The problem is that the LLM never paused to ask whether the symptom was actually what they thought it was. It diagnosed from a short description, without context, produced a list of plausible-sounding fixes, and presented them with the same calm authority it would use for a question it knew the answer to. Sometimes the fixes aren't even Ghost-specific. Sometimes they reference settings that don't exist. Sometimes the actual problem is upstream of anything the user could fix on their own server.
When we get these emails, we can say "hold on, that's not how this works" and point them at the actual cause. We know what a real Ghost config looks like. We know what the upstream cause of a deliverability complaint usually is. We can recognise when an LLM has confidently invented a feature.
A self-hoster deploying Ghost with AI does not have a human to stop them. The chat session keeps going. They follow the listicle. They edit configs they don't understand. They restart things. They eventually frankenstein their Ghost instance into a state nobody can debug, including the LLM that got them there.
AI-assisted operations work fine when the LLM is right. But they fail worse than the pre-AI baseline when the model is wrong, because the operator now has the confidence of an expert and the underlying knowledge of someone who just opened a chat window.
And here's the thing that worries me about where this goes. The disclosure mentioned above is the first severe AI-found vulnerability in Ghost I'm aware of. There will be more. AI-assisted vulnerability research is going to find a steady stream of bugs in widely-deployed open-source software over the next few months and years. That's good for the ecosystem long-term. But each disclosure becomes a race: the people running monitored, managed instances are patched within hours. Everyone else is running on borrowed time, debugging their site through a chat window that doesn't know they're behind on patches.
I'm not writing this to tell you not to self-host. Plenty of people do it well. At Magic Pages we also self-host most of our non-Ghost software. In my own home lab, most things are self-hosted too, because I want to be as independent as possible.
However, what worries me is the new cohort enabled by ChatGPT, Claude & Co. The people who got to "yay your site is online" via a chat session, and don't realise that the chat session was the easy part. The hard part is the next three years.
So here's the honest version, from a guy who is obviously not unbiased:
If you used AI to deploy something internet-facing that holds other people's data or serves content to readers, you've got two reasonable paths from there.
- Treat operations as a skill you commit to learning. Subscribe to the security advisories. Set up alerting. Actually configure transactional email, so the alerts the software is already trying to send can reach you. Block out dedicated time to check for updates and apply them. Read the changelogs. Know what version you're on without having to SSH in to check (see the sketch after this list).
- Move to a managed service. Doesn't have to be Magic Pages. There are plenty of good managed hosting services for Ghost out there by now.
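If you take the first path, even a dumb cron job beats nothing. Below is a minimal sketch in Python. It assumes Ghost's unauthenticated `/ghost/api/admin/site/` endpoint reports the running version (current Ghost 5.x exposes major.minor there – verify against your own instance) and compares it against the latest release tag on GitHub:

```python
#!/usr/bin/env python3
"""Cron-friendly sketch: is my Ghost site behind the latest release?"""
import json
import sys
import urllib.request

SITE = "https://example.com"  # placeholder -- your Ghost site


def fetch_json(url: str) -> dict:
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)


def main() -> int:
    # Assumption: the unauthenticated site endpoint reports the version.
    running = fetch_json(f"{SITE}/ghost/api/admin/site/")["site"]["version"]
    latest = fetch_json(
        "https://api.github.com/repos/TryGhost/Ghost/releases/latest"
    )["tag_name"].lstrip("v")
    # The site endpoint only exposes major.minor, so compare on those parts.
    behind = tuple(map(int, running.split(".")[:2])) < tuple(
        map(int, latest.split(".")[:2])
    )
    if behind:
        print(f"Ghost {running} is behind the latest release ({latest}). Update.")
        return 1
    print(f"Ghost {running} looks current (latest: {latest}).")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Wire it into cron or a systemd timer, pipe the output into whatever already pings your phone, and one link of the chain no longer depends on you reading email that week.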
The middle path ("I deployed it once with AI and now it just runs") is the one that ends with a fake captcha serving malware to your readers, six months later, when you're not even paying attention enough to realise something's wrong until they email you.
Ghost itself, by the way, is fine. The security model is reasonable. The vulnerability disclosure process works. The alerting for critical issues works. None of that broke.
What did break is the unspoken assumption that the operator is the kind of person who could have set Ghost up themselves. That assumption was both a gatekeeper and a filter – and it worked for a long time. AI just quietly removed it.
The deployment was always the easy part.