Self-hosting is not about being cheap or contrarian. It's about understanding your stack, owning your data, and building a certain kind of engineering judgment that you can't get any other way.
Karanveer Singh Shaktawat
Full Stack Engineer & Infrastructure Architect
I self-host my email. My file storage. My AI inference. My monitoring stack. My password manager. My link aggregator. My DNS resolver. My VPN. My build servers.
When I tell people this, the first response is usually "that sounds like a lot of work." The second response, if they're engineers, is "why would you do that when [managed service] is right there?"
This post is my honest answer to both questions.
It's not about being cheap. SES would have been meaningfully cheaper than Postfix + Dovecot if you count engineering time. Backblaze B2 is cheaper than Nextcloud on a VPS if you're paying yourself market rate. Self-hosting is almost never the cheapest option when you count labor honestly.
It's not about privacy absolutism. I use Google Search, GitHub, and Cloudflare. I'm not trying to be off-grid. I use managed services when they're clearly the right tool.
It's not about being contrarian or signaling technical sophistication. The engineers I most respect use managed services strategically and build their own infrastructure strategically. The choice is contextual, not ideological.
It's about a specific kind of knowledge that you can't get any other way: knowing what's actually running underneath you.
When I send email through SES, I know that something handles DKIM signing, something manages bounce handling, something maintains IP reputation. I don't know what. The abstraction hides it. That's fine for most use cases — it's the point of a managed service.
When I send email through Postfix, I know exactly what's happening. I know that Rspamd is scoring incoming mail and what score thresholds trigger what actions. I know that DKIM signing happens at the opendkim milter layer, and I know what the DKIM key rotation schedule is and where the keys live. I know why Gmail is deferring a specific batch of emails and what to do about it, because I can read the logs directly and I understand the protocol.
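That score-to-action mapping can be sketched as a plain function. The tiers below mirror Rspamd's default-style thresholds (greylist, add header, reject), but the exact values are illustrative, not my production config:

```python
# Illustrative sketch of Rspamd-style score thresholds.
# The threshold values here are assumptions for illustration, not a real config.

def rspamd_action(score: float) -> str:
    """Map a spam score to the action the milter layer would take."""
    if score >= 15.0:
        return "reject"        # refuse the message at SMTP time
    if score >= 6.0:
        return "add header"    # deliver, but tag as likely spam
    if score >= 4.0:
        return "greylist"      # temporary 4xx, ask the sender to retry
    return "no action"         # deliver normally

print(rspamd_action(2.1))   # → no action
print(rspamd_action(7.3))   # → add header
```

Knowing which tier a message landed in, and why, is exactly the visibility a managed service abstracts away.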
This isn't abstract knowledge. It surfaces in specific, practical ways:
When a user reports that our password reset email isn't arriving, I can trace the exact path of that email — from Postfix queue to delivery attempt to bounce processing — in under five minutes. With a managed service, I'd be reading dashboard summaries and opening support tickets.
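The trace itself is mostly log work: find the queue ID assigned to the message, then pull every line for that ID. A minimal sketch, assuming simplified syslog-style Postfix log lines (real logs are messier, and the log excerpt here is hypothetical):

```python
import re

# Hypothetical Postfix mail.log excerpt (format simplified for illustration).
LOG = """\
Jan 10 02:13:01 mx postfix/smtpd[311]: 4F2A1: client=app.internal[10.0.0.5]
Jan 10 02:13:01 mx postfix/cleanup[312]: 4F2A1: message-id=<reset-991@app>
Jan 10 02:13:02 mx postfix/qmgr[101]: 4F2A1: from=<noreply@example.com>, size=1822
Jan 10 02:13:05 mx postfix/smtp[340]: 4F2A1: to=<user@gmail.com>, status=deferred (rate limited)
"""

def trace(recipient: str, log: str) -> list[str]:
    """Find the queue ID for a recipient, then pull every line for that ID."""
    qid = None
    for line in log.splitlines():
        if f"to=<{recipient}>" in line:
            m = re.search(r"\]: ([0-9A-F]+):", line)
            qid = m.group(1) if m else None
    if qid is None:
        return []
    return [l for l in log.splitlines() if f" {qid}: " in l]

for line in trace("user@gmail.com", LOG):
    print(line)
```

Queue → delivery attempt → deferral reason, in one pass over the logs. No dashboard summary, no support ticket.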
When we need to add a new sending domain in a hurry, I can do it in fifteen minutes: generate the DKIM key, publish the DNS records, add the domain to the Postfix config, reload. I understand every step because I built it. With a managed service, I'd be navigating a web UI I use once a year and hoping the DNS propagation dashboard tells me when it's done.
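The DNS half of that fifteen minutes is publishing one TXT record. A sketch of building it, where the selector and domain are hypothetical and the key is a placeholder, not a real public key:

```python
# Sketch: the DNS TXT record for a new DKIM key. The selector and domain
# are made-up examples; the base64 key is a placeholder, not a real key.

def dkim_txt_record(selector: str, domain: str, pubkey_b64: str) -> tuple[str, str]:
    """Return the (record name, record value) pair to publish in DNS."""
    name = f"{selector}._domainkey.{domain}"
    value = f"v=DKIM1; k=rsa; p={pubkey_b64}"
    return name, value

name, value = dkim_txt_record("mail2025", "example.com", "MIIBIjANBgPLACEHOLDER")
print(name)   # → mail2025._domainkey.example.com
```

Generate the keypair, publish this record, point Postfix's signing config at the selector, reload. Each step is inspectable.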
When something breaks at 2am — and something always breaks at 2am — the knowledge is in my head. I can diagnose and fix it without waiting for a support queue.
Here's what I actually run, and why:
Email (Postfix + Dovecot + Rspamd)
Why: At the volumes Prachyam needed, managed email was prohibitively expensive. Self-hosting also forced me to understand email deliverability at a depth that made me significantly better at the whole domain.
Maintenance: ~2 hours/month. Most incidents are IP reputation issues that the monitoring stack catches early.
File Storage (Nextcloud + rclone + pCloud)
Why: 5TB of data that I need accessible from all devices and don't want stored in Google's data centers. pCloud gives me 2TB of encrypted cold storage. Nextcloud gives me a sync client that works offline. rclone handles the backup sync.
Maintenance: ~30 minutes/month. Updates are infrequent and the stack is stable.
AI Inference (Ollama + ComfyUI + Kokoro + Whisper)
Why: Zero marginal cost on development-heavy AI usage. Privacy for code and documents. Offline operation. The setup cost was meaningful, but the ongoing cost is zero.
Maintenance: Model updates as new versions release, ~1 hour/month.
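Local inference is one HTTP call to the Ollama daemon on its default port. A sketch using only the standard library; the model name is an example (use whatever you've pulled locally), and nothing leaves the machine:

```python
import json
import urllib.request

# Sketch of calling a local Ollama server (default port 11434).
# The model name "llama3" is an example; use whatever `ollama pull` fetched.

def build_payload(model: str, prompt: str) -> dict:
    """Request body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str, model: str = "llama3") -> str:
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(build_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

# generate("Summarize this diff: ...")  # runs entirely on local hardware
```

The marginal cost of every call is electricity, which is what "zero marginal cost" means in practice.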
Password Manager (Vaultwarden)
Why: Vaultwarden is the open-source Bitwarden server. Self-hosting means my password vault never touches a third-party server. The only threat model where this matters is "Bitwarden gets breached," which is low-probability but high-consequence.
Maintenance: ~15 minutes/month. It just runs.
VPN (Tailscale)
This one is technically not self-hosted — Tailscale's control plane is managed. But the VPN traffic itself routes directly between nodes via WireGuard, not through Tailscale's servers. I consider this the right trade: I trust Tailscale's control plane (they're a reputable company with a clear privacy model) while keeping the data path decentralized.
Monitoring (Grafana + InfluxDB + custom exporters)
Why: I need dashboards for my self-hosted services that work even when the internet is having a bad day. Managed monitoring (Datadog, New Relic) has its own latency and availability dependencies.
Maintenance: ~1 hour/month for dashboard updates as I add new services.
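The "custom exporters" mostly reduce to formatting a metric as InfluxDB line protocol and writing it to the database. A simplified sketch (measurement and tag names are made up, and real line protocol marks integer fields with a trailing `i`, omitted here):

```python
# Sketch of a custom exporter's core: format one metric as InfluxDB
# line protocol. Measurement and tag names are made-up examples, and
# the integer-suffix rules of real line protocol are simplified away.

def line_protocol(measurement: str, tags: dict, fields: dict, ts_ns: int) -> str:
    """measurement,tag=v,... field=v,... timestamp (nanoseconds)"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

line = line_protocol(
    "mail_queue",
    {"host": "mx1"},
    {"deferred": 3, "active": 12},
    ts_ns=1700000000000000000,
)
print(line)  # → mail_queue,host=mx1 active=12,deferred=3 1700000000000000000
```

One of these per service, fired from a cron job or a small daemon, is the whole exporter story.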
DNS Resolver (Pi-hole + Unbound)
Why: Ad blocking at the DNS level, across all devices on the network, without browser extensions. Unbound is a recursive resolver that queries the DNS root servers directly rather than forwarding to 8.8.8.8.
Maintenance: Minimal. Allowlist updates occasionally.
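The blocking mechanism itself is simple: answer queries for blocklisted names with an unroutable address, and recurse normally for everything else. A sketch of that decision (blocklist entries and the upstream resolver here are made-up stand-ins):

```python
# Sketch of DNS-level blocking as Pi-hole does it: answer queries for
# blocklisted domains with a sinkhole address instead of resolving them.
# The blocklist entries and upstream are made-up examples.

BLOCKLIST = {"ads.example.net", "tracker.example.org"}

def resolve(domain: str, upstream) -> str:
    """Return a sinkhole address for blocked names, else recurse upstream."""
    if domain.lower() in BLOCKLIST:
        return "0.0.0.0"          # sinkhole: clients connect nowhere
    return upstream(domain)       # e.g. hand off to Unbound's recursion

print(resolve("ads.example.net", upstream=lambda d: "192.0.2.10"))  # → 0.0.0.0
```

Because every device on the network uses this resolver, the blocking needs no per-device configuration.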
Being honest about this matters, because the self-hosting community has a tendency to undercount costs.
Engineering time: Setting everything up from scratch took roughly 200–300 hours over two years. That's not nothing. Some of it was billable time I could have spent on client work. Most of it was learning time that I valued.
Ongoing maintenance: Across all services, approximately 4–6 hours per month. Monitoring alerts, updates, incident investigation when something breaks.
Mental overhead: Self-hosted services have outages. Sometimes they're your fault, sometimes they're the VPS provider's fault. You're on call for your own infrastructure. I've been woken up by email queue alerts. I've debugged Nextcloud sync issues on evenings I'd rather have spent doing something else.
Financial cost: About ₹8,000–12,000 per month across all VPS nodes. Less than equivalent managed services for my use cases, but that's specific to my situation.
The honest summary: self-hosting costs more time and headspace than managed services. It saves money at certain scale points. The primary return is knowledge, not economics.
This is the part I struggle to articulate to engineers who haven't done it.
There's a specific kind of systems knowledge that you only build by running systems. Not reading about them. Not using abstractions on top of them. Running them.
I know how IP reputation works because I built it from zero and watched it fail and recover. I know how Nginx proxy buffering affects WebDAV upload performance because I debugged it. I know how DMARC alignment fails in specific relay configurations because it failed for me and I had to fix it. I know how Docker layer caching interacts with Rust's compilation model because I wrote the Dockerfiles and watched the builds.
This isn't trivia. It's judgment. When I'm architecting a new system, the decisions I make are grounded in this concrete operational experience in ways that reading documentation or using managed services doesn't provide.
The engineers who have the most reliable instincts about systems are almost always people who have operated systems at some level — run their own servers, debugged real production incidents, handled the 2am pages. The abstraction layer that managed services provide is valuable for most things, but it does create a distance from the underlying system that accumulates over time.
I want to be clear that this is not a universal recommendation.
Self-hosting is wrong when:
- Your time is worth more to you than the knowledge. The setup hours and the ongoing 4–6 hours a month of maintenance are real costs, paid in your most constrained resource.
- You can't be on call for your own infrastructure. Self-hosted services have outages, and nobody else is coming to fix them.
- Other people depend on the service and downtime has consequences beyond your own inconvenience.
- A managed service is clearly the right tool and the knowledge gain doesn't justify the operational cost.
I self-host the things where I've done the calculation and decided the knowledge gain and control are worth the operational cost. I use managed services for everything else.
The principle I try to apply: know what's running under you, one level down.
You don't have to operate every layer of the stack. But you should understand, at least conceptually, what the layer below your abstraction is doing. The engineer who uses Kubernetes without understanding container scheduling makes worse decisions than the one who understands it. The engineer who uses managed email without understanding SMTP and DNS authentication makes worse decisions when things break.
Self-hosting is one way to build that knowledge. It's not the only way. Reading deeply, contributing to open source infrastructure projects, or running things in a lab environment can build similar understanding.
But actually running it — being on call for it, fixing it when it breaks, watching its behavior under real load — builds the knowledge fastest and most durably.
That's why I self-host. Not because it's cheaper, not because it's easier, and not because it makes me look more technical. Because the knowledge I've built from running these systems is now part of how I think, and I can't imagine thinking without it.