So, I'm playing around with some personal projects and looking to deploy some simple things with Docker to DigitalOcean. This personal project is a small site, and I'd like to set myself up with a repeatable deployment solution (that may be automated as much as possible) so I don't trip over myself with server hosting as I build out the web application.
I'm not really strong with server infrastructure, so some of this is "figure it out as I go", and the rest is asking for help from a good friend, @icecreammatt, who has been a HUGE help as I stumble through this.
But at the end of this tutorial our goal is to satisfy the following requirements.
High level requirements:
Below are some core requirements this walk-through should help address. There is likely room for improvement, and I'd love to hear any feedback you have along the way to simplify things or make them more secure. But hopefully you find this useful.
- I want there to be some semblance of a release process with various deployment environments: push changes to `qa` regularly, push semi-frequently to `stage`, and when things are good, ship a version to production.
- Have access to environments through various domain names. EX: if prod was `my-docker-test-site.com` then I would also have `qa.my-docker-test-site.com` and `stage.my-docker-test-site.com`.
- Ability to run multiple "environments" (`qa`, `stage`, `prod`) in the same system. (Probably not that many environments - but you get the picture.)
- Deploy to various environments without affecting other environments. (Ship updates to `qa` only.)
- It'd be great if I can figure out a mostly zero-downtime deployment. (Not looking for perfect, but the less downtime the better.)
- Keep costs low. For a small site, that means running all environments on, say, a single small DigitalOcean droplet. (Is this even possible? We'll see...)
- Build various environments, test them out locally, and then deploy them to the cloud.

While I'd like the ability to run as many environments as listed above, I will likely only use `qa` and `prod` for my small site, but I think the pattern is such that we could easily set up whatever environments we need.
Structure of post:
What we want to do is essentially walk through how I'm thinking about accomplishing the above high-level requirements using a simple node js hello world application. This app is a basic node app that renders some environment information, just to prove that we can correctly configure and deploy various docker containers for environments such as `qa`, `stage` or `prod`.
In the end, we should end up with something that I like to imagine looks a bit like this diagram:
I'm going to use DigitalOcean as the cloud provider in this case, but I think this pattern could be applied to other docker hosting environments.
Example Web App Structure
Below is a basic view of the file structure of our site. If you're following along, go ahead and create this structure with empty files; we can fill them in as we go...
.
|____app
| |____Dockerfile
| |____server.js
|____docker-compose.yml
Let's start with the `./app/*` files:
Simple NodeJS Hello World app
This is a simple nodejs server that we can use to show that deployment environment variables are passed through and that we are running the correct environment, as well as to show a functioning web server.
File: ./app/server.js
var http = require('http');

var server = http.createServer(function(req, res){
  // Serve HTML so the <br> line breaks below actually render in the browser
  res.writeHead(200, {"Content-Type": "text/html"});
  res.end(`
Hello World!
VIRTUAL_HOST: ${process.env.VIRTUAL_HOST}
NODE_ENV: ${process.env.NODE_ENV}
PORT: ${process.env.PORT}
`.split('\n').join('<br>'));
});

server.listen(80);
The goal of this file is to run a nodejs web server that will return `Hello World!` along with the environment variables that the current container is running under, such as `qa`.
This could easily be replaced with a Python, PHP, Ruby, or whatever web server. Just keep in mind the rest of the article assumes it's a node environment (like the `Dockerfile` up next), so adjust accordingly.
The Dockerfile
Below is pretty basic and says: load up and run our nodejs `server.js` web app on port `80`.
File: ./app/Dockerfile
# Start from a standard nodejs image
FROM node
# Copy in the node app to the container
COPY ./server.js /app/server.js
WORKDIR /app
# Allow http connections to the server
EXPOSE 80
# Start the node server
CMD ["node", "server.js"]
How to get the qa, prod domain mappings
So now that we have a basic application defined, we can't host multiple versions of the app all using port `80` without issue. One approach we can take would be to place an Nginx proxy in front of our containers to translate incoming domain name requests to our various web app containers, which we'll use docker to host on different ports.
The power here is we don't have to change the port within the docker container (as shown in the `Dockerfile` above); we can use the port mapping feature when starting up the docker container to specify different ports for different environments.
For example I'd like `my-docker-test-site.com` to map to the production container, `qa.my-docker-test-site.com` to the `qa` container of my site, etc... I'd rather not access `my-docker-test-site.com:7893` or some other port for `qa`, `stage`, etc...
To accomplish this we are going to use jwilder/nginx-proxy. Check out his introductory blog post on the project. We'll be using the pre-built container directly.
To spin this up on our local system let's issue the following command:
docker run -d -p 80:80 --name nginx-proxy -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
This project is great: now, as we add or remove containers, they will automatically be added to or removed from the proxy, and we should be able to access their web servers through a `VIRTUAL_HOST`. (More on how, specifically, below.)
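To see the `VIRTUAL_HOST` mechanic in action right away, you can try a quick smoke test along the lines of the one in the nginx-proxy README, using the small `jwilder/whoami` demo container (the hostname `whoami.local` here is just a made-up example):

# Start a throw-away backend that registers itself with the proxy via VIRTUAL_HOST
docker run -d --name whoami --expose 8000 -e VIRTUAL_HOST=whoami.local jwilder/whoami
# Ask the proxy for it by Host header
# (use your docker-machine VM's IP instead of localhost if docker runs in a VM)
curl -H "Host: whoami.local" http://localhost/
# Clean up the test container
docker rm -f whoami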
Networking
Before we get too far into the container environment of our app, we need to consider how the containers will be talking to each other.
We can do this using the `docker network` commands. We're going to create a new network and then allow the nginx-proxy to communicate via this network.
First we'll create a new network and give it a name of `service-tier`:
docker network create service-tier
Next we'll configure our nginx-proxy container to have access to this network:
docker network connect service-tier nginx-proxy
Now when we spin up new containers we need to be sure they are also connected to this network or the proxy will not be able to identify them as they come online. This is done in a docker-compose file as seen below.
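A quick way to verify the wiring so far is to list the networks and inspect the one we just created; the proxy should show up in its container list:

docker network ls                      # service-tier should be in the list
docker network inspect service-tier    # nginx-proxy should appear under "Containers"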
Put the two together
Now that we've defined our application with the `server.js` and `Dockerfile`, and we have an `nginx-proxy` ready to proxy to our environment-specific docker http servers, we're going to use `docker-compose` to help build our container and glue the parts together, as well as to pass environment variables through to create multiple deployment environments.
Save this file as `docker-compose.yml`:
version: '2'
services:
  web:
    build: ./app/
    environment:
      - NODE_ENV=${NODE_ENV}
      - PORT=${PORT}
      - VIRTUAL_HOST=${VIRTUAL_HOST}
      - VIRTUAL_PORT=${PORT}
    ports:
      - "127.0.0.1:${PORT}:80"
networks:
  default:
    external:
      name: service-tier
This file is all about:

- The `build: ./app/` is the directory where our `Dockerfile` build is.
- The list of `environment` variables is important. The `VIRTUAL_HOST` and `VIRTUAL_PORT` are used by the nginx-proxy to know what port to proxy requests for and at what host/domain name. (We'll show an example later.) You can see an earlier exploratory post I wrote explaining more about environment vars.
- The `ports` entry is also important. We don't want to access the container by going to `my-docker-test-site.com:8001` or whatever port we're actually running the container on, because we want to use the `VIRTUAL_HOST` feature of nginx-proxy to allow us to say `qa.my-docker-test-site.com`. This configuration sets it up to only listen on the loopback network, so the nginx-proxy can proxy to these containers but they aren't accessible from the inter-webs.
- Lastly, the `networks:` section defines a `default` network for the web app that points at the `service-tier` network we set up earlier. This allows the nginx-proxy and our running instances of the web container to correctly talk to each other. (I actually have no idea what I'm saying here - but it is simple enough to set up and I think it's all good - so I'm going with it for now...)
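Before spinning anything up, you can sanity-check the variable substitution with `docker-compose config`, which prints the compose file with the environment variables resolved. A quick example (the values below are just illustrative):

export NODE_ENV=qa
export PORT=8001
export VIRTUAL_HOST=qa.my-docker-test-site.com
docker-compose config   # prints the resolved compose file so you can eyeball the values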
Now what?
So with all of these pieces in place, all we need to do now is run some docker-compose commands to spin up our necessary environments.
Below is an example script that can be used to spin up the `qa` and `prod` environments.
BASE_SITE=my-docker-test-site.com
# qa
export NODE_ENV=qa
export PORT=8001
export VIRTUAL_HOST=$NODE_ENV.$BASE_SITE
docker-compose -p ${VIRTUAL_HOST} up -d
# prod
export NODE_ENV=production
export PORT=8003
export VIRTUAL_HOST=$BASE_SITE
docker-compose -p ${VIRTUAL_HOST} up -d
This script is setting some environment variables that are then used by the `docker-compose` command, and we're also setting a unique project name with `-p ${VIRTUAL_HOST}`.
Project Name
What we said just before this headline is a key part of this. What enables us to run essentially the same project (docker-compose/Dockerfile) with different environment variables that define things like `qa` vs `prod` is that when we run `docker-compose` we're also passing in a `-p` or `--project-name` parameter. This allows us to create multiple container instances with different environment variables that all run on different ports and, in theory, isolate themselves from the other environments.
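The flip side of this is that whenever you want to operate on just one environment later, you pass the same `-p` value again. A small sketch, reusing the same variables as the script above:

# Check on just the qa environment
export VIRTUAL_HOST=qa.my-docker-test-site.com
docker-compose -p ${VIRTUAL_HOST} ps
# Tear down only the qa environment, leaving prod alone
docker-compose -p ${VIRTUAL_HOST} down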
The thinking here is you could have a single `docker-compose.yml` file that has multiple service definitions, like say a nodejs `web`, `couchdb`, and `redis` database, all running isolated within their environment. You can then use the environment variables to drive various items, such as feature-toggling newly developed features in a `qa` environment that are not necessarily ready to run in a production environment.
Running/testing this out locally
You probably want to play with this idea and test it out locally before trying to push it to a remote system.
One easy way to do this is to modify your `/etc/hosts` file (on *nix), or follow this on Windows, to map the specific domain names you have set up for your environments to the actual service running docker. This will allow the `nginx-proxy` to do its magic.
I'm currently still using `docker-machine` to run my docker environment in a VirtualBox VM, so my `/etc/hosts` file looks like this:
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
192.168.99.100 qa.my-docker-test-site.com
192.168.99.100 stage.my-docker-test-site.com
192.168.99.100 my-docker-test-site.com
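The `192.168.99.100` address above is just whatever IP my VirtualBox VM happened to get. You can find yours with `docker-machine ip` (mine is the default machine; adjust the name if yours differs):

docker-machine ip default    # prints the VM's IP to use in /etc/hosts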
If you have the docker containers running that we've worked through so far (for all environments), we should be able to visit `qa.my-docker-test-site.com` in the browser and hopefully get this:
Hello World!
VIRTUAL_HOST: qa.my-docker-test-site.com
NODE_ENV: qa
PORT: 8001
Also try out the production environment at `my-docker-test-site.com` to verify it is working as expected.
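If you'd rather test from the terminal than the browser, a couple of curl calls (assuming the /etc/hosts entries above are in place) work just as well:

curl http://qa.my-docker-test-site.com/    # should report NODE_ENV: qa
curl http://my-docker-test-site.com/       # should report NODE_ENV: production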
THIS IS AWESOME :) I was actually quite happy to have traveled this far in this exploration. But now let's try to take it up a notch and deploy what we just built locally to DigitalOcean in the cloud.
Deploy to the Cloud!
Now how do we get this locally running multi-environment system up to a server in the cloud?
Just tonight while researching options I found this simple set of steps to get it going on DigitalOcean. I say simple because you should see the original steps I was going to try and use to deploy this... sheesh.
These are the steps we're going to walk through.
- Get an Account @ DigitalOcean
- Create a Docker Droplet (this was way-cool)
- Build and Deploy our nginx-proxy.
- Build and Deploy our App
- Configure our DNS (domain name)
- Profit!
Tonight I discovered this blog post on docker that describes using docker-machine with the DigitalOcean driver to do basically everything we did above - but IN THE CLOUD - which kind of blew me away, actually.
Get an Account
First make sure you've signed up for a DigitalOcean account and are signed in.
Create a Docker Droplet (this was way-cool)
Next we're going to use a cool feature of docker-machine where we can leverage the DigitalOcean driver to help us create and manage our remote docker host.
Complete Step 1 and Step 2 in the following DigitalOcean example post to acquire a DigitalOcean personal access token.
Now that you have your DigitalOcean API token (you do, right?), either pass it directly into the below command (in place of `$DOTOKEN`) or set a local var as demonstrated.
DOTOKEN=XXXX # <-- your token there...
docker-machine create --driver digitalocean --digitalocean-access-token $DOTOKEN docker-multi-environment
NOTE: There was an issue where this used to work but stopped with the docker-machine DigitalOcean default image (reported here). To work around this, try using a different image name.
EX:
export DIGITALOCEAN_IMAGE="ubuntu-16-04-x64"
docker-machine create --driver digitalocean --digitalocean-access-token $DOTOKEN docker-multi-environment
If you refresh your DigitalOcean droplets page you should see a new droplet called `docker-multi-environment`.
We can now configure our local terminal to allow all docker commands to run against this remotely running docker environment on our newly created droplet.
eval $(docker-machine env docker-multi-environment)
If you run `docker ps` it should be empty, but this is literally listing the containers that are running up at DigitalOcean in our droplet. How awesome is that?
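If you're ever unsure which docker host your terminal is pointed at, `docker-machine ls` shows the active machine (marked with a `*`), and `docker-machine ip` gives you the droplet's public IP - handy later when configuring DNS:

docker-machine ls                           # the active machine has a * in the ACTIVE column
docker-machine ip docker-multi-environment  # public IP of our new droplet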
Build and Deploy our nginx-proxy.
Now that we can just speak `docker` in the cloud, run the following commands - these all assume we're executing them against the DigitalOcean droplet in the cloud!
- Spin up our `nginx-proxy` on the remote droplet:
docker run -d -p 80:80 --name nginx-proxy -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
- Create our network:
docker network create service-tier
- Tell `nginx-proxy` about this network:
docker network connect service-tier nginx-proxy
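At this point only the proxy is running, so hitting the droplet should answer with a 503 (nginx-proxy's response when no matching app container is registered yet) - a useful baseline check before we deploy the app:

curl -I http://$(docker-machine ip docker-multi-environment)/
# Expect something like "HTTP/1.1 503 Service Temporarily Unavailable" for now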
Build and Deploy our App
I know this post has gotten a bit long, but if you've made it this far we're almost there...
If you were to run the following script in our local project's folder where we have the `docker-compose.yml` file:
Be sure to update `BASE_SITE=my-docker-test-site.com` with your domain name or sub-domain, like `BASE_SITE=multi-env.my-docker-test-site.com`.
BASE_SITE=my-docker-test-site.com
# qa
export NODE_ENV=qa
export PORT=8001
export VIRTUAL_HOST=$NODE_ENV.$BASE_SITE
docker-compose -p ${VIRTUAL_HOST} up -d
# prod
export NODE_ENV=production
export PORT=8003
export VIRTUAL_HOST=$BASE_SITE
docker-compose -p ${VIRTUAL_HOST} up -d
You should now be able to run `docker ps` or `docker-compose ps` and see 3 containers running: the `nginx-proxy`, your `qa` site, and also the `prod` site.
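Even before DNS is set up, you can confirm the proxy is routing correctly by sending requests straight to the droplet's IP with a faked Host header:

DROPLET_IP=$(docker-machine ip docker-multi-environment)
curl -H "Host: qa.my-docker-test-site.com" http://$DROPLET_IP/
curl -H "Host: my-docker-test-site.com" http://$DROPLET_IP/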
All that's left is to make sure DNS is configured and pointing to our `nginx-proxy` front-end...
While playing with this, I kept tearing down droplets and re-building them as I worked through this tutorial, and I kept forgetting to adjust my DNS settings. However, right in the middle of writing this tutorial DigitalOcean came out with Floating IPs, which wasn't perfect but definitely made this easier to work with. I no longer had to update the IP address of my droplet everywhere; I just pointed the floating IP at the newly created droplet.
Configure our DNS (domain name)
I'm assuming you've already purchased a domain name that you can set up and configure on DigitalOcean, so I don't want to go too far into this process.
I also think DNS is out of scope for this post (as there are many others who can do a better job) but I used some great resources such as these while configuring my DigitalOcean setup.
- How To Set Up a Host Name with DigitalOcean
- How To Set Up and Test DNS Subdomains with DigitalOcean's DNS Panel
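However you end up creating the records (roughly: an A record for the bare domain plus A records for each subdomain, all pointing at the droplet or floating IP), you can confirm they've propagated with `dig` before blaming the proxy:

dig +short my-docker-test-site.com
dig +short qa.my-docker-test-site.com
# Both should return the droplet (or floating) IP once DNS has propagated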
Environment All Things
If you've made it this far, you hopefully have a DigitalOcean droplet that is now serving `qa` and `prod` http requests.
NICE!!!
Now the most important thing - how to seamlessly update an environment with a new build...
Let's make an update to QA.
Now that we've deployed our site, let's walk through this a little further: let's make a modification to our `qa` site and see if we can get it deployed without causing any downtime, especially to the `prod` site. Maybe we can also get an in-place deployment done and have little-to-no downtime in `qa` as well.
I wrote that paragraph above the other night near bedtime and, as I'm learning some of this on the fly, had no idea if this would be easy enough to accomplish. But to my surprise, deploying an update to `qa` was a piece of cake.
For this test I made a simple change to my node web server code so I could easily see that the change was deployed (or not). I turned `Hello World!` into `Hello World! V2` below.
File: ./app/server.js
var http = require('http');

var server = http.createServer(function(req, res){
  res.writeHead(200, {"Content-Type": "text/html"});
  res.end(`
Hello World! V2
VIRTUAL_HOST: ${process.env.VIRTUAL_HOST}
NODE_ENV: ${process.env.NODE_ENV}
PORT: ${process.env.PORT}
`.split('\n').join('<br>'));
});

server.listen(80);
I then used `docker-compose` to bring up another "environment", but using the same `qa` `VIRTUAL_HOST` as before.
BASE_SITE=my-docker-test-site.com
export NODE_ENV=qa
export PORT=8004
export VIRTUAL_HOST=$NODE_ENV.$BASE_SITE
docker-compose -p ${VIRTUAL_HOST}x2 up -d
NOTICE how we added `x2` to the `-p` parameter (just to give it a different project name, since it's a different version).
This will bring up another docker container with our updated web application and to my surprise the nginx-proxy automatically chose this new container to send requests to.
So if you run `docker ps` you should see 4 containers running: 1 nginx-proxy, 1 prod container, and 2 qa containers (with different names).
You can leave both containers running for the moment while you test out the new release.
One neat thing here is that if there were something seriously wrong with the new qa release, you could just stop the new container (`docker stop <new_container_id>`) and the proxy will start redirecting back to the old qa container. (That only works of course if your deployment was immutable - meaning you didn't have the new container run some one-way database migration script... but that's not something I want to think about or cover in this post.)
Once you're comfortable running the new version, you can bring down and clean up the older version.
docker ps # to list the containers running
docker stop <old_qa_container_id>
docker rm <old_qa_container_id> # remove the stopped container so its image can be deleted
docker images # to list the images we have on our instance
docker rmi <old_qa_image_id>
Now let's completely remove our test...
You probably don't want to run the sample node script from above forever, as you'll be charged some money by DigitalOcean for it, and I'd feel bad if you received a bill for this little test beyond a few pennies...
The following command will completely remove the droplet from DigitalOcean.
docker-machine rm docker-multi-environment
Wrap Up and What's Next?
I feel like I've done enough learning and sharing in this post. But there is still more to do...
If you want to check out the snippets above combined into a sample github repository I've put it up here.
Future thinking...
I don't know if I'll blog about these, but I definitely want to figure them out. If you find a way to extend my sample above to include the following I'd love to hear about it...
- SSL (consider cloudflare or letsencrypt?)
- Easy way to secure the qa/stage environments?
Happy Docker Environment Building!