I have a quiet Ryzen server at home with ECC memory, NVMe RAID, and battery backup. On it live GChat, dragekorn.online, a few pet projects, and backups of client databases. I don’t have accounts on AWS or Vercel.
Every time I say this out loud, half the room thinks “savage” and the other half thinks “legend.” Both reactions are about aesthetics, not engineering. So let’s talk engineering.
Why not AWS/Vercel/Netlify
Three reasons, in order of weight.
1. Economics. My prod stack on a self-hosted server costs ~$0 beyond the capital expense (which broke even in sixteen months compared with a Hetzner VPS, and in four months compared with AWS at the same traffic levels). I’m not paying for egress, CloudWatch metrics, per-invocation fees, or any of the “but it’s so cheap until you grow” surprises.
2. Data control. In a crypto product this isn’t optional. I want to know exactly who has physical access to the disks of the machine running the Postgres instance that holds the metadata. In the cloud that list simply doesn’t exist for you. It exists for the vendor.
3. Speed of thinking. When you’ve set up systemd, nginx, backups, and monitoring yourself, you stop being afraid of production incidents. You know where to look. In the cloud you’re always one console away from the vendor, and always one unknown away from root cause.
What this is not
Self-hosting is not “install Ubuntu and forget.” Not “a Raspberry Pi under the bed with torrents.” Not “I run Kubernetes at home, I’m an engineer now.”
It’s infrastructure discipline applied to my own room. Specifically:
- UPS with automatic shutdown if power is gone longer than a minute.
- ECC memory and btrfs/zfs with scheduled scrubs.
- Offsite backups to an encrypted rclone mirror every night at 03:00 UTC.
- Monitoring (Prometheus + Grafana + Alertmanager) with Telegram alerts.
- Two ISPs with automatic failover via mwan3.
- Wireguard VPN tying every one of my devices into a single L3 network.
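For concreteness, here’s roughly what that 03:00 UTC offsite job looks like. The paths and the `offsite-crypt` remote name are placeholders, and the encryption itself is done by rclone’s crypt backend, so nothing leaves the box in plaintext:

```shell
#!/bin/sh
# Sketch of the nightly offsite job: tar the data directory, then let
# an rclone "crypt" remote encrypt it on the way out. The remote name
# "offsite-crypt" and all paths are illustrative placeholders.
set -eu

backup() {
    data_dir=$1
    backup_dir=$2
    stamp=$(date -u +%Y%m%d)
    archive="${backup_dir}/data-${stamp}.tar.gz"
    mkdir -p "$backup_dir"
    tar -czf "$archive" -C "$(dirname "$data_dir")" "$(basename "$data_dir")"
    # Push offsite only if rclone is installed and the remote exists;
    # the crypt backend encrypts names and contents before upload.
    if command -v rclone >/dev/null 2>&1 \
        && rclone listremotes 2>/dev/null | grep -q '^offsite-crypt:$'; then
        rclone copy "$archive" offsite-crypt:backups/
    fi
    echo "$archive"
}

# Demo against a throwaway directory so the sketch is safe to run as-is.
demo=$(mktemp -d)
mkdir -p "$demo/data" && echo "hello" > "$demo/data/file.txt"
backup "$demo/data" "$demo/out"
```

In production this runs from a systemd timer rather than a demo directory, but the shape is the same: archive, then mirror.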
Drop any one of those and it’s a hobby, not prod. I tell that to everyone who says “hey, I want the same.”
How it works in practice
The front door is Caddy with automatic Let’s Encrypt. Behind it, Coolify acts as a UI over Docker Compose stacks. Each project is its own git repo with CI that pushes an image to a local Docker registry, and Coolify picks it up via webhook.
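A stripped-down sketch of that CI step, with the build and push left as comments. The registry host, the image name, and the webhook URL copied from Coolify’s project settings are my placeholders, not the real values:

```shell
#!/bin/sh
# CI deploy step sketch: tag the image with the commit, push it to the
# local registry, then poke Coolify's per-project webhook so it redeploys.
# Registry host, project name and webhook URL are placeholders.
set -eu
TAG=$(git rev-parse --short HEAD 2>/dev/null || echo "dev")
IMAGE="registry.lan:5000/gchat:${TAG}"
echo "deploying ${IMAGE}"
# docker build -t "$IMAGE" .
# docker push "$IMAGE"
# curl -fsS -X POST "$COOLIFY_WEBHOOK_URL"   # URL is taken from Coolify's UI
```

The point of tagging with the commit hash is that every running container maps back to an exact state of the repo.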
The database is Postgres in its own container, with WAL archiving to offsite storage. Redis is in-memory, with a dump every six hours (good enough for job queues). For GChat there’s an extra piece — a key store on an encrypted loop device that only mounts while the service is running.
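The keystore piece can be sketched as a LUKS container in a plain file, attached via a loop device only while the service runs. The paths and mapper name below are placeholders, and the one-time `cryptsetup luksFormat` step is omitted:

```shell
#!/bin/sh
# Mount-on-start sketch for the key store: a LUKS container in a file,
# attached as a loop device and mounted only while the service runs.
# File path, mapper name and mountpoint are illustrative placeholders.
set -eu
IMG="${IMG:-/srv/gchat/keystore.img}"
if [ "$(id -u)" -eq 0 ] && [ -f "$IMG" ]; then
    loopdev=$(losetup --find --show "$IMG")   # attach file as a block device
    cryptsetup open "$loopdev" gchat-keys     # prompts for the passphrase
    mount /dev/mapper/gchat-keys /srv/gchat/keys
    status="mounted"
    # On service stop, the reverse:
    # umount /srv/gchat/keys && cryptsetup close gchat-keys && losetup -d "$loopdev"
else
    status="skipped (needs root and an existing $IMG)"
fi
echo "keystore: $status"
```

While the volume is closed, the keys are just an opaque encrypted blob on disk, which is the whole point.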
Monitoring: Prometheus scrapes metrics, Grafana draws the dashboards, Loki aggregates the logs, Alertmanager sends me Telegram messages. I sit with a coffee and see everything.
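The Telegram leg bottoms out in the Bot API’s `sendMessage` call, which Alertmanager drives for me in production. Here’s the raw call as a sketch; `TG_TOKEN` and `TG_CHAT` are placeholders you get from @BotFather and your own chat:

```shell
#!/bin/sh
# Minimal Telegram notifier: the same Bot API call that Alertmanager's
# Telegram integration makes. TG_TOKEN and TG_CHAT are placeholders;
# without them the function falls back to a dry run.
set -eu
send_alert() {
    msg=$1
    if [ -n "${TG_TOKEN:-}" ] && [ -n "${TG_CHAT:-}" ]; then
        curl -fsS "https://api.telegram.org/bot${TG_TOKEN}/sendMessage" \
            --data-urlencode "chat_id=${TG_CHAT}" \
            --data-urlencode "text=${msg}"
    else
        echo "dry-run: ${msg}"
    fi
}
send_alert "disk usage above 90% on /srv"
```

Handy on its own, too: any cron job on the box can call the same function to complain directly.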
What I lose by skipping the cloud
An honest list, no hedging:
- I can’t burst-scale from 1 req/s to 10k req/s in a minute. If GChat suddenly becomes a hit, I buy hardware — I don’t drag a slider.
- I can’t die. If a bus hits me, my prod goes dark in five days because nobody is there to swap the UPS battery. That’s solved with runbooks for family + a dead-man’s switch for successors, but it’s work.
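The dead-man’s switch is the simple half of that: a cron heartbeat to a ping-style monitoring service, which starts alerting the people named in the runbook once the pings stop. The URL below is a placeholder for whichever service you pick:

```shell
# crontab fragment: heartbeat every 15 minutes; if the pings stop
# (machine down, or I'm gone), the service alerts the contacts on file.
# The URL is a placeholder for whatever dead-man's-switch service you use.
*/15 * * * * curl -fsS -m 10 https://hc.example.net/ping/UUID >/dev/null
```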
- No vendor support to fall back on. When something breaks, it breaks on me.
To me that’s a fair trade. I’m trading hypothetical scaling and free insurance for real control, money, and understanding.
Who this suits
Not everyone. If you’re running a startup that might have ten million users in six months, by all means — go cloud. You need to save human-hours, not rent.
But if you’re:
- Running a pet project that might become a product and you want to understand everything yourself;
- Working on something where data control matters more than abstract uptime;
- Wanting to actually understand DevOps instead of just passing a certification —
then a self-hosted stack on your own metal pays back — in real understanding of how computers work — ten times faster than any course.
And if you need someone to build it for you and stick around longer than three months, you know where to find me.