A Docker Dev Environment in 24 Hours! (Part 1 of 2)

The Problem

At RelateIQ, we recently ran into some issues with our development environment. We had migrated our entire search infrastructure over to Elasticsearch and made some changes to MongoDB, and building and testing code was quickly becoming cumbersome. Did I mention it took an entire day to set up for new hires? Long story short, productivity was lost and our developers started to get angry. Our local development stack had reached the point where it needed real orchestration and automation to scale.

Over the course of the summer, our infrastructure engineers had been kicking around the idea of automating our development environment with Chef Solo. One day, we got so excited talking about the new direction that we decided to give it a try. Well, it didn’t go so well. Our enthusiasm was quickly deflated by several roadblocks. It turned out that many of our current Chef cookbooks (we use Chef to deploy our production environment) were incompatible with Chef Solo, and we found breaking changes that would require a rewrite of the way we were using attributes. That meant we would have to maintain at least two different versions of our cookbooks. It would take a week or more of effort to get everything working, and we had doubts about future maintenance. This turn of events got us thinking that there had to be a better way.

Bring on Hack Day

A 24-hour hack day was near, so Scott and I decided to try again, this time with Docker. Docker was a technology we had never used before, but the dev/ops community was raving about using its containers for development environments. So we thought, why not give it a try and see how far we get?

To build a whole new dev environment, we needed to get the following infrastructure components scripted and communicating with each other locally: Cassandra, Elasticsearch, MongoDB, Kafka, Zookeeper, and Redis. We also wanted to wrap the whole environment in a single virtual machine so upgrades were easy and the environment could be completely refreshed. We used Vagrant and VirtualBox, since both were free and easy to use. Still, this was a tall order: a lot of infrastructure to get working in an automated fashion that would be easy to maintain.
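As a rough sketch, the single-VM wrapper can be expressed in a Vagrantfile along these lines (the base box, memory size, forwarded ports, and provisioning script name here are all illustrative, not our actual configuration):

```ruby
# Vagrantfile (sketch): one Linux VM hosting all of the Docker containers.
Vagrant.configure("2") do |config|
  config.vm.box = "precise64"            # illustrative Ubuntu base box

  config.vm.provider "virtualbox" do |vb|
    vb.memory = 4096                     # headroom for all the services
  end

  # Forward service ports so tools on the host can reach the containers.
  config.vm.network "forwarded_port", guest: 9200,  host: 9200   # Elasticsearch
  config.vm.network "forwarded_port", guest: 27017, host: 27017  # MongoDB
  config.vm.network "forwarded_port", guest: 6379,  host: 6379   # Redis

  # Install Docker and start the containers via a provisioning script
  # (script name is hypothetical).
  config.vm.provision "shell", path: "bootstrap.sh"
end
```

With something like this in place, a full refresh is just `vagrant destroy && vagrant up`.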

Our Solution

Twenty-four hours later, we crushed it! Not only did we get all of the aforementioned components working, but we also hit our stretch goals of standing up Storm and Jetty as well. Dockerfiles were so fast and simple to create that we knocked out most of the work in less than 12 hours; building a container was often little more than a copy and paste from each vendor’s web site. Plus, iterating on changes was almost instant thanks to Docker’s file system diffs. By the end of the day, in just 24 hours, we had 90 percent of our entire production system running on our Mac OS X laptops in a fully isolated virtual environment, and we were crazy for Docker.
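To give a sense of how little a Dockerfile needs, here is a sketch of what a Redis container might look like (the base image and package here are illustrative, not one of our actual files):

```dockerfile
# Sketch of a minimal Redis container; base image and package are illustrative.
FROM ubuntu:12.04

# Install Redis from the distro's package repository.
RUN apt-get update && apt-get install -y redis-server

# Expose the default Redis port to the host and to linked containers.
EXPOSE 6379

# Run the server in the foreground so Docker can supervise the process.
CMD ["redis-server"]
```

Each instruction becomes a cached file-system layer, which is why rebuilding after a small change feels almost instant.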

A week later, we won the hack day award for “Most Production-Ready Project.” We had several engineers running the new environment the day after, and we haven’t looked back since.

The Results

We’re happy to report that our engineers are no longer angry. Since hack day, they’ve been curious about how we built the new environment in such a short time, and the feedback so far has been fantastic. Today, every engineer on our team is using the new environment with great success; they couldn’t believe how easy it was to get going, or the level of abstraction we were able to provide with a single command. It was an awesome hack day.

In Part 2 (which is due out next week), I’ll explain how each component we wrote works so you can replicate pieces for your own environment. Stay tuned!

In the meantime, here is a quick visual on what the end environment looked like:

Please comment if you have any feedback or questions. And if you are interested in jobs at RelateIQ, please contact us here.

19 Comments

  • roberto says:

    Simply awesome!

  • Vivek Beniwal says:

    This is great! Looking forward to Part 2.
    How are you managing the dependencies between images? For example, Kafka needs Zookeeper to be running. Is there any tool to manage the image DAGs?
    Also, are you guys running a private index, or pushing images to the Docker index?

    • John Fiedler says:

      Thanks for the comment! Right now we manage it through the devenv-inner.sh script. You’ll see more about it in Part 2.

      As for the index, we are running a private index right now, but honestly none of our images need to be private. Docker is working on certified public images, and once those are released we might move to them exclusively.
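      The idea is simple enough to sketch in shell, though: bring containers up in dependency order, and wait for each service to answer before starting its dependents. (This is an illustration of the approach only; the image names and health check here are hypothetical, not our actual devenv-inner.sh.)

      ```shell
      #!/bin/sh
      # Sketch: start containers in dependency order so Kafka can find Zookeeper.
      # Image names (devenv/*) are hypothetical.
      set -e

      # Zookeeper has no dependencies, so it comes up first.
      docker run -d --name zookeeper -p 2181:2181 devenv/zookeeper

      # Wait until Zookeeper answers its "ruok" health check before continuing.
      until echo ruok | nc localhost 2181 | grep -q imok; do
        sleep 1
      done

      # Kafka is linked to the running Zookeeper container.
      docker run -d --name kafka -p 9092:9092 --link zookeeper:zk devenv/kafka
      ```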

  • majke says:

    Looking forward to Part 2 as well :)

  • mustafa says:

    I also use Docker for some projects, but I don’t understand why you needed to run it under Vagrant?

    • Richard Wallace says:

      It sounds like their development team consists mostly of OS X users. Docker is a Linux-only solution, so OS X users need to run a Linux distro inside Vagrant to be able to use Docker. I’m in a similar situation – in fact, I’m the only non-OS X user on our team.

  • Aaqib Gadit says:

    Nice stuff, guys. I guess Vagrant is also required when developers are only comfortable with Windows systems.


John Fiedler
