Sunday, 23 August 2015

Mongrel and Docker: the power of containers

Last time we looked at how we started off the Mongrel2 webserver in a docker container. It was a very simple setup, with a single container running an instance of Mongrel with a few bits and pieces of static content.

This time, we're going to look at what makes Mongrel2 so interesting, and why I think that Docker suits it perfectly as a deployment mechanism. We'll shoot over the basics of handlers, and I'll summarise the handler that I created and how Docker ties it all together.

Handlers

Mongrel2 doesn't deploy applications in the same way that, say, servlet containers do, and it doesn't process code itself in the way that PHP applications might. Instead, it has a construct called a handler. These are specific paths defined in the server configuration that, when requested, cause the server to construct a message for the ZeroMQ messaging framework and push it onto a socket. A dedicated application reads the message from that socket, takes any necessary action, and then responds to the Mongrel2 server by placing a ZeroMQ message on a second, response socket.
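
To make that a little more concrete, here's a sketch of the relevant part of a Mongrel2 configuration with one handler wired in. The structure (Handler, Server, Host, Dir and the routes) is standard Mongrel2; the UUIDs are placeholders, and the ports are the same ones you'll see exposed in the Compose file further down:

thought_handler = Handler(
    send_spec='tcp://*:5557',
    send_ident='HANDLER-UUID-GOES-HERE',
    recv_spec='tcp://*:5558',
    recv_ident='')

main = Server(
    uuid='SERVER-UUID-GOES-HERE',
    access_log='/logs/access.log',
    error_log='/logs/error.log',
    chroot='./',
    default_host='localhost',
    name='main',
    pid_file='/run/mongrel2.pid',
    port=6767,
    hosts=[
        Host(name='localhost', routes={
            '/': Dir(base='static/', index_file='index.html',
                     default_ctype='text/html'),
            '/thought': thought_handler
        })
    ])

servers = [main]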

Handler application: "thought for the day"

In this case, I've only constructed one handler - an incredibly simple one that could easily have been managed in other ways, but we'll test the water slowly. It's a simple "thought for the day" generator that returns a JSON object containing a quotation and a source for that quotation.

The code for this handler isn't checked into GitHub yet, but it's very simple. There's a single, looping process that waits for messages and returns one of a random set of quotations whenever it receives one. It's just a little jar file that gets executed and stays up until terminated.
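
To give a feel for the shape of it, here's a minimal sketch of that loop using the JeroMQ bindings. The socket addresses, the quote list and the class name are all illustrative (the real jar may look quite different), but the request/response framing is the standard Mongrel2 handler protocol, and the "mongrel2" hostname matches the container link in the Compose file later on:

import org.zeromq.ZMQ;
import java.util.Random;

public class ThoughtHandler {

    // Illustrative quotations; the real handler has its own set.
    private static final String[][] QUOTES = {
        {"The best way out is always through.", "Robert Frost"},
        {"Simplicity is the ultimate sophistication.", "Leonardo da Vinci"}
    };

    public static void main(String[] args) {
        ZMQ.Context context = ZMQ.context(1);

        // Mongrel2 PUSHes requests to handlers on its send_spec...
        ZMQ.Socket requests = context.socket(ZMQ.PULL);
        requests.connect("tcp://mongrel2:5557");

        // ...and SUBscribes for responses on its recv_spec.
        ZMQ.Socket responses = context.socket(ZMQ.PUB);
        responses.connect("tcp://mongrel2:5558");

        Random random = new Random();

        while (!Thread.currentThread().isInterrupted()) {
            // Requests arrive as "UUID CONN_ID PATH SIZE:HEADERS,SIZE:BODY,".
            // (A real handler would also want to spot and skip the JSON
            // disconnect notifications Mongrel2 sends on the same socket.)
            String request = requests.recvStr();
            String[] parts = request.split(" ", 4);
            String serverUuid = parts[0];
            String connId = parts[1];

            String[] quote = QUOTES[random.nextInt(QUOTES.length)];
            String json = String.format("{\"quotation\": \"%s\", \"source\": \"%s\"}",
                    quote[0], quote[1]);
            String http = "HTTP/1.1 200 OK\r\n"
                    + "Content-Type: application/json\r\n"
                    + "Content-Length: " + json.getBytes().length + "\r\n\r\n"
                    + json;

            // Responses go back as "UUID SIZE:CONN_ID, HTTP_RESPONSE".
            responses.send(serverUuid + " " + connId.length() + ":" + connId + ", " + http);
        }
    }
}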

Accessing the handler

Accessing the handler from the frontend is relatively easy: I've wired up a simple AngularJS controller that just grabs the JSON object from the /thought path and plugs it into some HTML on the front page. Nothing too fancy.
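
For illustration, the controller is roughly this shape; the module, controller and field names here are placeholders rather than the real ones:

angular.module('thoughtApp', [])
  .controller('ThoughtController', ['$scope', '$http', function ($scope, $http) {
    // Ask the Mongrel2 handler for today's thought and expose it to the page.
    $http.get('/thought').then(function (response) {
      $scope.quotation = response.data.quotation;
      $scope.source = response.data.source;
    });
  }]);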


Putting it all together

So now we have an infrastructure that looks like this:
Mongrel2 and the handler process are both running in Docker containers. The communication ports between the two of them are exposed within the Docker engine, but not outside of it. Mongrel2's main access point (port 6767, in this case) is mapped to port 80 on the virtual machine and exposed to the outside world.

However, we're still not quite done yet. To make things even easier, we can use Docker Compose to describe this entire setup, so that we can build, deploy and start all of our containers with a single command. Again, we're not at a particularly complex level yet; this single file

mongrel2:
  build: ./mongrel2-main
  ports:
    - "80:6767"
  expose:
    - "5557"
    - "5558"

samplehandler:
  build: ./sample-handler
  links:
    - mongrel2

will use the information in the two Dockerfiles to build the entire application above from scratch, and deploy it.
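
For reference, with that file saved as docker-compose.yml at the project root, bringing the whole stack up looks something like this:

docker-compose build    # build both images from their Dockerfiles
docker-compose up -d    # create and start the linked containers
docker-compose ps       # check that mongrel2 and samplehandler are running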

And that is an amazing tool, which will give us the ability to add sections to our infrastructure quickly and easily.

Saturday, 22 August 2015

Getting into docker: the simple case

So last time I'd been left in the situation of moving from Vagrant over to Docker. And I found myself really appreciating what Docker was doing, and beginning to get my head around what it's capable of. There's still a long way to go in order to use it properly, but I think I'm beginning to get the basics.

I ended up in a situation where I just about had the Mongrel2 webserver building from a Dockerfile and starting up in a container.

I've managed to take that a little further in the right direction now.

Step 1: Getting Mongrel to start properly — and keep running
I touched on it last time, but the thing you really have to nail to get Docker to work is the ability to start a single process in a container and keep it running. This really isn't as easy as it could be, given a few of the limitations of the way Docker runs things.

A docker container will only keep running as long as the process started as PID 1 keeps running. There are a few hacky ways of arranging this, like piping together a series of shell scripts and finishing by tailing a log file, but that's best avoided. The best way that I've found so far is to use the supervisor daemon, which runs as PID 1 and then spins up the processes that you deem necessary. It actually turned out to be easier, with Mongrel2, to have supervisor fire up a second process manager called procer, which then handles the startup of the Mongrel2 server. There might be a better way of doing it, but this seems to work, and (in theory) it gives a layer of resiliency to the Mongrel2 process by restarting it automatically if it dies on us.
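
For reference, the supervisor side of it boils down to something like the sketch below. The paths and the procer arguments are illustrative rather than lifted from the actual container, but the important bits are nodaemon (so supervisord stays in the foreground and holds PID 1) and autorestart:

[supervisord]
nodaemon=true            ; stay in the foreground so supervisord holds PID 1

[program:procer]
command=/usr/local/bin/procer /app/profiles /app/run/procer.pid
directory=/app
autorestart=true         ; bring the Mongrel2 chain back up if it dies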

Step 2: Getting some static content in there
The next step was to get some static content onto the site. That was pretty easy, and we ended up with an infrastructure that looked a bit like this.

The Dockerfile controlling the Mongrel2 server pulled in everything needed to build and install Mongrel2 and its dependencies, copied in a set of static files, and finally started the server.
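
The actual Dockerfile is in the docker-mongrel2 repository on GitHub, but in outline it does something like this; the base image, package names and paths below are illustrative rather than exact:

FROM ubuntu:14.04

# Build dependencies for ZeroMQ and Mongrel2, plus supervisor to babysit the server
RUN apt-get update && apt-get install -y \
    build-essential git sqlite3 libsqlite3-dev libzmq3-dev uuid-dev supervisor

# Build and install Mongrel2 from source
RUN git clone https://github.com/mongrel2/mongrel2.git /tmp/mongrel2 \
 && cd /tmp/mongrel2 && make && make install

# Copy in the server configuration, the static content and the supervisor config
COPY mongrel2.conf /app/mongrel2.conf
COPY static/ /app/static/
COPY supervisord.conf /etc/supervisor/conf.d/mongrel2.conf

WORKDIR /app
RUN mkdir -p logs run && m2sh load -config mongrel2.conf -db config.sqlite

EXPOSE 6767 5557 5558

# supervisord runs in the foreground as PID 1 and keeps Mongrel2 alive
CMD ["/usr/bin/supervisord", "-n"]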

Next steps:
This is all very well, but it seems like a lot of effort to go to in order to get some static content served up by a webserver – and it is. Next up is the first handler: the independent programs that make Mongrel2 interesting, and why I believe they are perfectly suited to running on a containerised platform.

Thursday, 6 August 2015

Docker vs Vagrant - Round 1

Having kicked around Vagrant (https://docs.vagrantup.com/v2/) for a while, a colleague finally persuaded me to try out Docker instead, just to see what the competition was like.
... so, I spent a while working out ways of setting up a virtual machine / container to run an instance of the mongrel2 webserver, just to see how they compared.
The results are now in and ... well, it's a bit of a mixed bag, to be honest.
The Dockerfile and associated project are up and available on GitHub, at https://github.com/nihilogist/docker-mongrel2, for those interested.

Setup

How easy are the two different solutions to set up?

Vagrant 

It's really easy. You write a few scripts to download and install the software you need - in this case it's a little more complex, as you need to grab ZeroMQ and then build Mongrel2 from source, but it's not hard at all. You copy what you need over to the VM and start the server. Great.
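
As a sketch, the Vagrant side of it is only a handful of lines; the box name, forwarded port and script path here are illustrative rather than the exact ones I used:

# Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.network "forwarded_port", guest: 6767, host: 8080

  # Shell script that grabs ZeroMQ, builds Mongrel2 and copies the content over
  config.vm.provision "shell", path: "scripts/install-mongrel2.sh"
end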

Docker

It took a while, I have to say. I found it a good deal harder to get my head around the way that a container works. Especially coming from Vagrant, which is purely and simply a way of managing VMs, it took some getting used to the idea that a container only persists as long as the process running as PID 1 is running. This means a few extra steps are needed to ensure that PID 1 keeps running, but happily there are plenty of tools to help with this.

Running

How do the two containers run?

Vagrant

You have a full VM running. You can ssh into it. You can see it running in VirtualBox (if that's the provider you're using). You can use it in exactly the ways you'd use any other virtual machine. But it is pretty heavy - it takes a while to boot up, though it seems pretty solid once it's up, as you'd expect.

Docker

It's just a container - there are lots of limits on what you can do with it and what you can't. Well - not so much limits as recommendations. Using Vagrant, I think you could easily be tempted to start up a whole load of extra processes on the VM, just because it's easy - if you have the machine running, then why not? Docker, on the other hand, really wants you to dedicate each container to a single process. It's really lightweight. It starts in an instant.

Which do I prefer?

Well, it's a little early in the day, but I think that Docker has won me over. I've still got a heck of a lot to explore, like getting data volumes to work properly, but I'm really enjoying using it.
I'm also keen to explore the way that mongrel2 wants you to use many small services, and I think that Docker is perfectly aligned with that - each container running a single service, but running it well.