I've tired recently of the level of maintenance involved in running this blog. I don't mean regularly producing content—lord knows I never did that; I'm not about to start losing sleep over it now—but in actually running the software on my server. I think my deployment setup is pretty secure in itself, but in hindsight I'll admit the whole thing's basically held together with Docker, duct tape, and good intentions.
When I first set up this box in the cloud five months ago, the problems I was trying to solve were things like the following:
- Serve the site over https.
- Enable safe and consistent updates to ghost.
- Leverage Docker to run any application(s) so I can easily clean up discontinued projects, and otherwise prevent apps from stepping on each other's toes.
These were legitimate problems then and now. But with just five months' experience I'm finding myself with additional requirements like these:
- Update ghost (and any other application) with zero downtime.
- Trivially spin up new services, applications, and projects on the box unrelated to this blog.
The second point in particular has come into clear relief recently as I plan to spin up another project at some point in the next few months. I was happy to spend the amount of time I did setting up ghost+docker on my VPS, but I've realized it's more trouble than it's worth if I need to go through that rigmarole every time I set up a new project.
When I was a child I utilized Docker ad hoc like a child, but now I realize I need to put away such childish things, and utilize some dedicated utility for managing deployment to my VPS. There's actually a healthy ecosystem of such deployment management solutions utilizing Docker/containers, of which Kubernetes seems like the most popular solution. But the vast majority of these offerings, k8s included, have a critical issue for me in that they are far too heavyweight an option for the scale of developments I'm going to be making. It's more than fair to call them enterprise-grade solutions, but everything I'm talking about here is definitively hobbyist in scope and scale.
An Introduction to Dokku
Spinning up the Dokku DigitalOcean Droplet
The biggest snag I hit in spinning up my own Dokku droplet is really just that nobody told me I needed to specifically tell Dokku where it was being served from—that is, its domain name. I was able to SSH into the box fine, but navigating to the IP address of the box only resulted in a stock "Welcome to NGINX!" page rather than the Dokku installation/setup page I had been promised. I realized after some poking around, however, that the issue was I needed to specify the domain name of the server. Fortunately enough, configuring this in Dokku is as easy as modifying the `/home/dokku/VHOST` file, setting it to the domain of the VPS.
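Concretely, that amounted to something like the following, run on the server as root (the domain here is a placeholder; substitute your own):

```shell
# Point Dokku at the droplet's domain by overwriting the VHOST file.
echo "blog.example.org" > /home/dokku/VHOST
```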
I was pleasantly surprised to find that the `dokku` UNIX user on the Droplet is automatically configured to act as a kind of CLI. For example, if I want to list all of the running applications on my server, I run the following in a terminal:
```shell
ssh dokku@example.org apps:list
```
And if I want to make a new application (named `blog`), it's similar:

```shell
ssh dokku@example.org apps:create blog
```
For the purposes of the rest of this post I'm going to forgo the SSH prefix to all these commands and just record the actual command run on the server.
Creating the ghost Application
I really appreciated two blog posts I found from folks who have already run ghost on Dokku. They can be found here and here. The rest of this post is likely to amount to a rough synthesis of both, along with any details I found relevant but missing from either.
Making a Dockerfile for the Docker ghost image was pretty trivial. It did end up getting more complicated later on when I wanted to set up S3 storage for my static assets, but here are the initial contents of mine that got me up and running.
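Mine was essentially a one-liner along these lines (the exact version tag is an assumption on my part; pin whichever ghost release you mean to run):

```dockerfile
# Wrap the official ghost image from Docker Hub.
# The tag below is illustrative; use the release you actually run.
FROM ghost:1.21.1
```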
That's all I really need if what I want is to just run the official Docker ghost image. Put that in a file, put that file in a directory, make that directory a Git repo, add a remote pointing at `dokku@example.org:blog`, and you're ready to deploy—at least technically.
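Spelled out, that setup looks something like the following sketch (the domain, app name, and commit identity are all placeholders; substitute your own):

```shell
# Create the deploy repo for the blog app and point it at Dokku.
mkdir blog && cd blog
printf 'FROM ghost:1.21.1\n' > Dockerfile
git init
git add Dockerfile
git -c user.name=me -c user.email=me@example.org commit -m "Run the official ghost image"
# The remote's host and app name are placeholders.
git remote add dokku dokku@example.org:blog
```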
```shell
git push dokku master
```
With that command, a vanilla ghost installation is running on my Dokku server. There are some outstanding issues, however:
- The ghost instance needs to be configured with the right url or else all the navigation links around the site will be broken.
- Dokku by default is only directing traffic from port 2368 to the container running ghost, so that needs to be fixed to instead direct traffic from 80 (http) or 443 (https).
- ghost is running against a SQLite database instead of MySQL.
- Any assets or themes stored in the `content/` directory are living on a transient filesystem—they only exist as long as the container presently running ghost is alive; the moment I redeploy it, that directory and its contents will be blown away. I won't lose any written content or critical blog data so long as the MySQL database lives on, but I need to store my images on some external service to avoid losing them each time I deploy.
- SSL/https isn't setup for the site.
The first is addressed rather simply by configuring environment variables for the ghost container.
```shell
config:set blog url='http://subdomain.url.of.your.dokku.droplet.com'
```
The second is a matter of adjusting the proxy configuration for the app.
```shell
proxy:ports-add blog http:80:2368
proxy:ports-add blog https:443:2368
proxy:ports-remove blog http:2368:2368
```
The remaining issues are a bit more involved, so I'll address them each at length.
Setting up MySQL for ghost
ghost runs on SQLite by default, but it's not the most robust option available. According to the ghost developers themselves, SQLite support is mostly intended to let developers working on themes (or on ghost itself) run a local instance of ghost on their development machines. Furthermore, they do not promise to always maintain parity in ghost's support for each database variety, even though they expect parity for the foreseeable future. MySQL is what they want you to be running ghost against in production.
With that said, Dokku makes it really easy to spin up a MySQL instance for your applications. It begins with installing the MySQL plugin for Dokku, which I needed to SSH into my box as root in order to install:
```shell
ssh root@example.org dokku plugin:install https://github.com/dokku/dokku-mysql.git mysql
```
After that, I can create a MySQL service for my blog, and link it to my `blog` application:

```shell
mysql:create blog-db
mysql:link blog-db blog
```
What this does is enable the `blog` container running ghost to access the `blog-db` container running MySQL, as well as expose the connection details (i.e. the MySQL username and password, the name of the database, etc.) via a `DATABASE_URL` environment variable in the `blog` container. Unfortunately, ghost isn't configured to even watch for that variable, let alone parse it, so I had to check the constituent elements of the connection and manually set environment variables ghost would notice.
```shell
# Run the env command inside the blog container, listing all environment variables
enter blog web env
```
By running this I retrieved all the environment variables exposed to my `blog` Dokku application, which fortunately enough includes a bunch of variables of the form `DOKKU_MYSQL_BLOG_DB_ENV_MYSQL_PASSWORD`, and so on. Noting these, I can then manually set the appropriate environment variables for ghost to pick up:
```shell
config:set blog database__connection__user=<previously observed value of DOKKU_MYSQL_BLOG_DB_ENV_MYSQL_USER>
config:set blog database__connection__password=<previously observed value of DOKKU_MYSQL_BLOG_DB_ENV_MYSQL_PASSWORD>
# ...etc
```
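If you'd rather not copy values over by hand, the pieces can also be pulled out of `DATABASE_URL` itself, which as far as I can tell follows the usual `mysql://user:password@host:port/database` shape. Here's a rough sketch of that parsing in plain shell; the example URL and all its values are made up:

```shell
# Split a mysql:// connection URL into the parts ghost wants.
# The URL below is a fabricated example, not a real credential.
url='mysql://ghostuser:s3cret@dokku-mysql-blog-db:3306/blog_db'

stripped="${url#mysql://}"     # drop the scheme prefix
userpass="${stripped%%@*}"     # user:password
hostpart="${stripped#*@}"      # host:port/database
user="${userpass%%:*}"
password="${userpass#*:}"
host="${hostpart%%:*}"
database="${hostpart##*/}"

# Print the config:set commands you'd then run against the app.
echo "config:set blog database__connection__user=$user"
echo "config:set blog database__connection__password=$password"
echo "config:set blog database__connection__host=$host"
echo "config:set blog database__connection__database=$database"
```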
This makes the whole setup a bit more fragile, as the `blog` application will not automatically adjust if the connection details for `blog-db` ever change (as they might if, say, I replace `blog-db` with a new database), but it's good enough for my purposes right now, and it still moves the blog application itself toward quicker, uninterrupted deployments.
To Be Continued
This post has turned into enough of a book, so I'll follow it up at some point with a conclusion detailing how I addressed the following matters:
- Hosting the images on this blog on DigitalOcean spaces. (An AWS S3 compatible object storage service.)
- Serving the site over HTTPS with Let's Encrypt on Dokku.
Previously I took the whole blog down at midnight for fifteen minutes or so every couple of weeks. ↩︎
I'm still working on learning Kubernetes for my dayjob, though! ↩︎
I was fortunate enough to have a domain name already so I could just give the box a subdomain of that; I presume you could enter an IP address here instead, but I don't know. ↩︎
This is due to the ghost Docker image exposing this port in its Dockerfile. Dokku sees that this port is "exposed" in the Dockerfile and automatically tries to make an NGINX proxy to that. ↩︎
I experienced this fragility sooner than I thought I would when DigitalOcean announced new plan pricing shortly after I finished setting everything up with Dokku. That meant I could get double the RAM and nearly double the disk space at the same price I was already paying. I took advantage of it, but found upon restarting my droplet that the `mysql` plugin for Dokku had assigned a new IP address to my `blog-db` MySQL instance. I fixed this issue by changing the `database__connection__host` environment variable on the application from the originally observed IP address to the hostname I now realize the `mysql` plugin exposes to linked applications. This hostname isn't directly displayed in an environment variable of the form `DOKKU_MYSQL_BLOG_DB_ENV_MYSQL_USER`, but it can be observed in the