After discovering LXC containers (my "townhouse" analogy), I felt that familiar rush of excitement. New tech, new possibilities, new solutions to old problems. But if there's one thing the last year has taught me, it's that momentum without direction is often wasted time.
Before I jumped in and committed, I forced myself to stop. To ask a question that's so easy to skip when you're knee-deep in experiments: What do I really need on a technical level to sustain Magic Pages for the next 10 years?
Not what's cool, not what's trending. What does Magic Pages truly need to keep its promise as it scales from hundreds of sites to thousands? I want to rebuild things once, not every 1-2 years.
That question, simple as it sounds, is a lot harder to answer honestly than you might think. It's easy to get swept up in the excitement of new tools, to convince yourself that the latest and greatest is always the right direction. But when I sat down and looked at what Magic Pages needs, the answers were surprisingly clear.
- I need rock-solid reliability, but not the kind that comes with anxiety every time an autoscaler in Kubernetes fails.
- I need replicated, distributed storage that's robust, but doesn't demand a PhD just to keep it from eating all my server memory.
- I need the ability to live-migrate sites between servers for maintenance without my customers ever noticing.
- I need to rely on standards, so that β eventually β new team members at Magic Pages can pick things up quickly.
- I need to scale efficiently, in a way that doesn't turn every new milestone into a new business case calculation.
Every one of these requirements traces back to the same place: my promise that creators shouldn't have to think about infrastructure. When they hit "publish," they should trust that their words will reach their readers every time.
When I put all of that into a markdown file, I honestly wasn't sure whether the new tech I'd discovered earlier, LXC, was up to that challenge.
So, I started dissecting LXC. Not just the "hello world" tutorials, but the guts of it. What I found was refreshingly direct. No fancy abstractions, mostly just really low-level tech. It sounds very un-sexy, but that's exactly the kind of standard I am after.
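To make that concrete: before any orchestrator enters the picture, you can drive LXC directly with its own command-line tools. This is roughly what my first experiments looked like (container name and distro are just placeholders, not my production setup):

```bash
# Create a system container from the public image server
lxc-create --name playground --template download -- \
  --dist debian --release bookworm --arch amd64

# Start it and get a shell inside
lxc-start --name playground
lxc-attach --name playground -- /bin/bash

# Under the hood it's "just" Linux: namespaces, cgroups, and a plain config file
cat /var/lib/lxc/playground/config
```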
So, LXC itself was promising. But it was just the container runtime. I still needed something to orchestrate these "townhouses" at scale. As I mentioned in my last post, Proxmox supports LXC, but it's a full-blown virtualization environment, usually meant for home-lab VM management. Using it just for LXC felt like using a sledgehammer to crack a nut. It would work, but there are probably better and more efficient tools out there.
So, I searched for "lxc orchestrator" and found LXD, which has since been forked into a community-driven project called Incus.

From what I learned, Incus is focused on being really, really good at one thing: running containers in a clustered, production environment. It provides a built-in REST API (a BIG advantage over Docker and Kubernetes) and tooling for the exact things I was worried about: clustering, storage management, and network management. All that fancy application-level orchestration that Kubernetes and Docker Swarm come with? That's not its job. You build that yourself.
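Everything the `incus` CLI does goes through that API, which by default is exposed on a local Unix socket. A quick way to poke at it looks something like this (the socket path is an assumption and may differ depending on how Incus was installed):

```bash
# Query the local Incus API over its Unix socket
curl -s --unix-socket /var/lib/incus/unix.socket http://incus/1.0 | jq .

# List all instances the server knows about
curl -s --unix-socket /var/lib/incus/unix.socket http://incus/1.0/instances | jq .
```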
I noticed this first-hand when I set up my first container: it couldn't reach the internet. Turns out, you have to actually build the network. Bridges, routers, NAT rules... it was all manual. Scary, hm?
Well, yes. But in the end, these are all Linux standards. And the fact that you have to think about it means you have to understand it, rather than relying on an orchestrator to deal with it for you.
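In practice, "building the network" boiled down to a handful of Incus commands: create a bridge, give it a subnet, let Incus handle NAT, and attach it to the default profile. A minimal sketch, with the bridge name and subnet being whatever you choose:

```bash
# Create a NAT-ed bridge that hands out IPv4 addresses to containers
incus network create incusbr0 \
  ipv4.address=10.10.10.1/24 \
  ipv4.nat=true \
  ipv6.address=none

# Attach the bridge to the default profile so new containers get a NIC on it
incus profile device add default eth0 nic network=incusbr0 name=eth0

# Launch a test container and check that it can reach the outside world
incus launch images:debian/12 net-test
incus exec net-test -- ping -c 3 deb.debian.org
```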
Then came storage: how do you get the robust, replicated storage I need? Incus doesn't provide it itself, but it has first-class support for integrating with tools like Ceph, the exact tool I had wanted to use as a Longhorn replacement on Kubernetes anyway.
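I haven't wired this up for real yet, but the integration surface is small. Assuming a working Ceph cluster and an RBD pool, registering it with Incus looks roughly like this (pool, user, and instance names here are placeholders):

```bash
# Register an existing Ceph RBD pool as an Incus storage pool
incus storage create ghost-data ceph \
  ceph.cluster_name=ceph \
  ceph.osd.pool_name=incus \
  ceph.user.name=admin

# New containers can then put their root disk on replicated storage
incus launch images:debian/12 ghost-test --storage ghost-data
```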
It certainly wasn't easy to set up a simple Ghost container, but there's something oddly reassuring about the whole project. Being so close to the foundation, every problem is tangible. There's no magical abstraction layer. If the network is broken, there are only a few screws that could be loose. If storage is slow, you can actually see the bottleneck. And all of that means: you can fix it.
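That tangibility shows up in debugging, too. When my first container couldn't reach the internet, the whole "stack" I had to inspect fit into a few standard commands (the exact firewall command depends on which backend the host uses):

```bash
# What does Incus think the network looks like?
incus network list
incus network show incusbr0

# Is the bridge actually up on the host, and are the NAT rules in place?
ip addr show incusbr0
nft list ruleset        # or: iptables -t nat -L -n, on iptables-based hosts
```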
And maybe that's the real insight here: after a year in a self-imposed echo chamber of "cloud-native," "serverless," and "[whatever] as a service," I've come to believe that reliability sometimes comes from stripping things back, not from piling more on.
LXC and Incus don't promise to solve every problem out of the box. But they give me the tools to build something solid, something I can understand completely. It's tech at a lower level, sure. Not another layer of abstraction, but a foundation you can actually see and understand.
It feels less like a magical black box and more like a set of well-made tools.
As a next step, I'm defining the architecture for a proper, production-grade test cluster. This isn't just about getting one container online anymore. It's about answering the big questions:
- Clustering: How do I make three servers act as one cohesive unit that can survive a node failure? (A first sketch of the bootstrap step follows below the list.)
- Storage: How do I implement a distributed storage backend like Ceph and integrate it with Incus so that every container's data is safely replicated?
- Networking: How do I build a scalable, distributed virtual network so containers can communicate across servers and with the outside world securely?
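I don't have final answers yet, but for the clustering part my current plan is to bootstrap the first node from a preseed file and then join the other two against it. Something along these lines, based on my reading of the Incus docs so far (addresses, node names, and the exact preseed keys are still assumptions I need to verify in the test cluster):

```bash
# Bootstrap the first cluster node from a preseed file
cat <<'EOF' | incus admin init --preseed
config:
  core.https_address: 10.0.0.1:8443
cluster:
  server_name: node1
  enabled: true
EOF

# node2 and node3 then run "incus admin init" and join against 10.0.0.1:8443;
# afterwards, all three should show up here:
incus cluster list
```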
Once I have a solid plan for these three pillars, I'll start building. The goal is to create a small-scale version of the final infrastructure and then do everything I can to break it, long before any customer site ever touches it.