My Self-Hosted Docker Infrastructure
Over the past few years, I’ve built out a comprehensive self-hosted infrastructure running on Docker. What started as a simple Nextcloud instance has evolved into a production-grade setup with 10+ services, automated deployments, centralized monitoring, and a solid backup strategy.
Here’s how I’ve structured everything and the key lessons I’ve learned along the way.
Why Self-Host?
Before diving into the technical details, let me explain why I chose this path. Self-hosting gives me:
- Control over my data - No third-party has access to my files, passwords, or personal information
- Privacy - Everything runs on my own hardware, behind my own firewall
- Learning opportunities - Running production services teaches you things you can’t learn from tutorials
- Cost savings - One-time hardware investment vs. ongoing subscription fees
- Customization - I can tweak and optimize everything to work exactly how I want
But self-hosting isn’t just about avoiding Big Tech. It’s about building something you understand from the ground up.
Infrastructure Overview
My setup runs on a single Docker host—no Swarm, no Kubernetes (yet). It’s a deliberate choice: keep it simple, keep it maintainable, and only add complexity when you actually need it.
Service Categories
I’ve organized services into logical groups:
Core Services - The foundation everything else depends on:
- PostgreSQL (shared database)
- Nginx Proxy Manager (on separate host)
- GitLab (source control and CI/CD)
- Portainer (container management)
- Vaultwarden (password manager)
- Authentik (single sign-on)
- SMTP Relay (email routing)
Productivity Services - Day-to-day tools:
- Nextcloud (file sync and collaboration)
- N8N (workflow automation)
- Homepage (auto-discovered dashboard)
- Hugo blog (this site!)
Monitoring Services - Keep everything healthy:
- Uptime Kuma (uptime monitoring with AutoKuma)
- Prometheus (metrics collection)
- Grafana (dashboards and visualization)
Media Services - File synchronization:
- Syncthing (decentralized sync)
Architecture Principles
Shared Database Strategy
Instead of running a separate database for every service, I use a single PostgreSQL 16 instance. Authentik, N8N, and Grafana all share this instance, each with its own isolated database and credentials. Nextcloud runs its own PostgreSQL.
Why? Resource efficiency. Running multiple PostgreSQL containers wastes RAM and makes backups more complex. One instance with multiple databases is cleaner and easier to maintain.
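To sketch the idea (service names, paths, and credentials here are illustrative, not my actual config), a shared instance with per-service databases looks roughly like this:

```yaml
# docker-compose.yml (sketch): one shared PostgreSQL 16 instance
services:
  postgres:
    image: postgres:16
    restart: unless-stopped
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_ADMIN_PASSWORD}
    volumes:
      - postgres-data:/var/lib/postgresql/data
      # Scripts in this directory run once, on first startup.
      # One small SQL file per service creates its isolated database, e.g.:
      #   CREATE ROLE authentik LOGIN PASSWORD '...';
      #   CREATE DATABASE authentik OWNER authentik;
      - ./initdb:/docker-entrypoint-initdb.d:ro

volumes:
  postgres-data:
```

Each service then connects with its own credentials and can only touch its own database.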
Network Isolation
Security starts with isolation:
- External access: Only through Nginx Proxy Manager with SSL
- Internal-only services: PostgreSQL and the SMTP relay bind to 127.0.0.1, so they have no network exposure
- Docker networks: Services are segmented by purpose
Every public-facing service goes through NPM with Let’s Encrypt certificates. Internal services can’t be reached from the outside, even if someone compromises a container.
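In compose terms, the pattern looks something like this (service and network names are illustrative):

```yaml
# Sketch: loopback-only binding for internal services,
# purpose-based networks for everything else
services:
  postgres:
    image: postgres:16
    ports:
      - "127.0.0.1:5432:5432"   # reachable from the host itself, never from the LAN
    networks:
      - backend

  nextcloud:
    image: nextcloud:latest
    ports:
      - "8080:80"               # published so the separate NPM host can proxy it
    networks:
      - backend                 # shares a segment with PostgreSQL, nothing else

networks:
  backend: {}                   # database traffic stays on this network
```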
Phased Deployment
Services deploy in a specific order:
- Phase 1: PostgreSQL (everything depends on this)
- Phase 2: Nginx Proxy Manager (needs to claim ports 80/443 early)
- Phase 3: Other core services (Portainer, Vaultwarden, Authentik)
- Phase 4: Productivity services (Nextcloud, N8N)
- Phase 5: Monitoring (Uptime Kuma, Grafana)
- Phase 6: Media services (Syncthing)
This dependency-aware orchestration means I can run ./scripts/deploy-all.sh and the entire stack comes up correctly, every time.
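The script drives separate compose files per phase, but the core idea, dependency-aware startup, can be sketched within compose itself (images and names are illustrative):

```yaml
# Sketch: a Phase 3 service waits for Phase 1 to report healthy
services:
  postgres:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 10

  authentik:
    image: ghcr.io/goauthentik/server:latest
    depends_on:
      postgres:
        condition: service_healthy   # blocks until the healthcheck passes
```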
Automation & Tooling
One-Command Deployment
My deploy-all.sh script handles the entire deployment process:
- Loads environment variables from .env.local
- Deploys services in dependency order
- Waits for critical services to be ready
- Optionally syncs Uptime Kuma monitors
Full stack deployment takes 3-5 minutes. I can rebuild everything from scratch in under 10 minutes if needed.
Automated Monitoring
I use AutoKuma to automatically discover and monitor containers based on Docker labels. When I add a new service, I just add the appropriate labels and AutoKuma creates the monitors for me—no manual configuration needed.
This means I’m monitoring 20+ endpoints without maintaining a giant configuration file.
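For example (the keys below follow AutoKuma's kuma.<id>.<type>.<setting> label convention; check its docs for the exact syntax, and the URL is a placeholder):

```yaml
services:
  nextcloud:
    image: nextcloud:latest
    labels:
      # AutoKuma sees these and creates an HTTP monitor in Uptime Kuma
      - "kuma.nextcloud.http.name=Nextcloud"
      - "kuma.nextcloud.http.url=https://cloud.example.com"
```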
Label-Driven Backups
My backup strategy uses Docker labels:
```yaml
labels:
  - "backup.enable=true"
  - "backup.priority=high"
  - "backup.schedule=daily"
  - "backup.retention=30d"
```
The backup-volumes.sh script scans all containers, finds labeled volumes, and backs them up to my NAS using Restic. Priority determines recovery order during disaster recovery.
Why labels? As I add or remove services, backups stay in sync automatically. No separate configuration file to maintain.
Key Technical Decisions
Why an SMTP Relay?
Vaultwarden sends emails with bare LF line endings—\n instead of \r\n. RFC-compliant mail servers reject these emails. My SMTP relay (Postfix) sits between Vaultwarden and my mail server, converting line endings on the fly.
It’s a small wrapper service, but it solved weeks of email delivery failures.
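Roughly, the wiring looks like this. The relay image is a stand-in for my actual Postfix container and the domain is a placeholder; the SMTP_* variables are Vaultwarden's documented mail settings:

```yaml
services:
  smtp-relay:
    # Postfix accepts Vaultwarden's bare-LF mail and forwards it
    # to the real mail server with proper CRLF line endings
    image: my-postfix-relay:latest   # stand-in for my Postfix wrapper
    networks:
      - mail

  vaultwarden:
    image: vaultwarden/server:latest
    environment:
      SMTP_HOST: smtp-relay          # point Vaultwarden at the relay,
      SMTP_PORT: "25"                # not at the real mail server
      SMTP_SECURITY: "off"           # plain SMTP inside the Docker network
      SMTP_FROM: vault@example.com
    networks:
      - mail

networks:
  mail: {}   # internal-only; the relay is never exposed externally
```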
I'll dive deeper into this in a follow-up post; for now, it works, even if it isn't fancy.
Why Authentik for SSO?
Password sprawl is real. Authentik provides centralized authentication (OAuth2, SAML, LDAP) for all my services. One password, 2FA enforcement, and I can revoke access to everything from a single place.
Plus, setting up OAuth for each new service takes about 5 minutes now.
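As a concrete example, wiring Grafana to Authentik is mostly environment variables. The URLs and secrets below are placeholders; the GF_AUTH_GENERIC_OAUTH_* variables and Authentik's /application/o/ endpoints are the documented ones:

```yaml
services:
  grafana:
    image: grafana/grafana:latest
    environment:
      GF_AUTH_GENERIC_OAUTH_ENABLED: "true"
      GF_AUTH_GENERIC_OAUTH_NAME: "Authentik"
      GF_AUTH_GENERIC_OAUTH_CLIENT_ID: ${GRAFANA_OAUTH_CLIENT_ID}
      GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET: ${GRAFANA_OAUTH_CLIENT_SECRET}
      GF_AUTH_GENERIC_OAUTH_SCOPES: "openid profile email"
      GF_AUTH_GENERIC_OAUTH_AUTH_URL: "https://auth.example.com/application/o/authorize/"
      GF_AUTH_GENERIC_OAUTH_TOKEN_URL: "https://auth.example.com/application/o/token/"
      GF_AUTH_GENERIC_OAUTH_API_URL: "https://auth.example.com/application/o/userinfo/"
```

Create the matching OAuth2 provider in Authentik, paste in the client ID and secret, and you're done.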
Why GitLab Self-Hosted?
I use GitLab for source control, CI/CD, and container registry. Running it myself means:
- No 100MB artifact upload limits
- Full control over runner configuration
- Private container registry for my own images
- Can push 1.5GB Docker images without hitting SaaS limits
The trade-off is maintenance, but for my use case it’s worth it.
Lessons Learned
Start Simple, Add Complexity Later
My first setup was a mess—every service had its own database, no consistent networking, manual deployments. I over-engineered some parts and under-engineered others.
Now I follow a rule: solve the problem in front of you, not hypothetical future problems. Shared PostgreSQL came after I realized I was wasting resources. Automation came after manual deployments got tedious.
Documentation is Not Optional
When something breaks at 11 PM, you don’t want to figure out which environment variable controls what. I maintain two sets of docs:
- Technical docs in the repository (for scripts and deployment)
- Architecture notes in my Obsidian vault (for understanding why things work the way they do)
Future me is always grateful for notes present me leaves behind.
Labels Over Configuration Files
Docker labels are underrated. Instead of maintaining separate config files for backups, monitoring, and dashboards, I embed metadata directly in docker-compose.yml. Everything stays in sync, and I can see a service’s full configuration in one place.
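A single service definition ends up carrying everything the tooling needs. A sketch (the homepage.* keys follow the Homepage dashboard's label convention; URLs are placeholders):

```yaml
services:
  vaultwarden:
    image: vaultwarden/server:latest
    labels:
      # Backups (read by backup-volumes.sh)
      - "backup.enable=true"
      - "backup.priority=high"
      # Monitoring (read by AutoKuma)
      - "kuma.vaultwarden.http.name=Vaultwarden"
      - "kuma.vaultwarden.http.url=https://vault.example.com"
      # Dashboard (read by Homepage)
      - "homepage.group=Core"
      - "homepage.name=Vaultwarden"
      - "homepage.href=https://vault.example.com"
```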
Test Your Backups
I learned this the hard way. Having backups is not the same as having working backups. Now I run automated restoration tests quarterly. If you can’t restore it, you don’t have a backup.
Security Through Simplicity
I don’t run a complex firewall setup. I don’t have an intricate network segmentation strategy. Instead:
- Minimal external exposure (only NPM)
- Strong passwords (generated, stored in Vaultwarden)
- Regular updates
- Simple mental model of what’s accessible where
Complexity breeds mistakes. Simple systems are easier to reason about.
What’s Next
I’m happy with where the infrastructure is now, but there’s always room to grow:
- High-availability PostgreSQL - Currently a single point of failure
- Off-site backups - Replicating to cloud storage (B2 or S3)
- Improved monitoring - Currently just monitoring uptime, not resource usage
- Automated update pipeline - CI/CD for infrastructure updates
- Moving to Kubernetes - If I find the time ;)
But these are wants, not needs. The current setup is stable, maintainable, and does exactly what I need it to do.
Final Thoughts
Self-hosting isn't for everyone, but it is for anyone who wants to learn. It takes time, requires constant learning, and you're responsible when things break. But for me, the benefits far outweigh the costs.
I’ve learned more about networking, security, databases, and system administration from running this infrastructure than I ever did from courses or tutorials. And that knowledge applies directly to my day job as a product owner and team lead.
If you’re curious about self-hosting, start small. Run one service. Get comfortable with Docker. Add automation gradually. Learn from your mistakes (you’ll make plenty). Before you know it, you’ll have a production-grade infrastructure running on hardware you control.
And you’ll understand every single piece of it.
Resources
Want to dive deeper? These are the resources I found most helpful:
- Docker Documentation - Still the best place to start
- Docker Compose Best Practices
- Awesome Self-Hosted - Comprehensive list of self-hosted software
- r/selfhosted - Active community with tons of practical advice
My infrastructure is constantly evolving. I’ll write follow-up posts diving deeper into specific components—the shared PostgreSQL setup, AutoKuma automation, backup strategy, and more. Stay tuned!
