A Docker Dev Environment in 24 Hours! (Part 1 of 2)
At RelateIQ, we recently had some issues with our development environment. We migrated our entire search infrastructure over to Elasticsearch and made some changes to MongoDB. Building and testing code was quickly becoming very cumbersome. And did I mention it took an entire day to set up for new hires? Long story short, productivity was lost and our developers started to get angry. It was clear our local development stack had reached the point where it needed real orchestration and automation to scale.
Over the course of the summer, our infrastructure engineers were kicking around the idea of automating our development environment with Chef Solo. One day, we got so excited talking about the new direction that we decided to give it a try. Well, it didn’t go so well. Our inspiration was quickly deflated by several roadblocks. It turns out that many of the Chef cookbooks we use to deploy our production environment were incompatible with Chef Solo. We also found breaking changes that would require a rewrite of the way we were using attributes. This meant we would have to maintain at least two different versions of each cookbook. It would take a week or more of effort to get everything working, and we had doubts about future maintenance. This turn of events got us thinking that there had to be a better way.
Bring on Hack Day
A 24-hour hack day was near, so Scott and I decided to try again, this time with Docker. Docker was a technology we had never used before, but the dev/ops community was raving about using its containers for development environments. So we thought, why not give it a try and see how far we get?
To achieve a whole new dev environment, we needed to get the following infrastructure components scripted and communicating with each other locally: Cassandra, Elasticsearch, MongoDB, Kafka, Zookeeper, and Redis. We also wanted to wrap the whole environment into a single virtual machine so upgrades were easy and the environment could be completely refreshed. We used Vagrant and VirtualBox since both were free and easy to use. Still, this was a tall order: a lot of infrastructure to get working in an automated, easy-to-maintain fashion.
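To give a concrete idea of the shape of this setup, here is a minimal Vagrantfile sketch. The box name, port list, and `provision.sh` script are illustrative assumptions, not our actual configuration:

```ruby
# Hypothetical Vagrantfile sketch -- box, ports, and script names are
# examples, not the real RelateIQ configuration.
Vagrant.configure("2") do |config|
  config.vm.box = "precise64"  # e.g. an Ubuntu 12.04 base box

  # Forward each service's default port to the host so local builds
  # and tests can reach the containers directly.
  config.vm.network :forwarded_port, guest: 9160,  host: 9160   # Cassandra
  config.vm.network :forwarded_port, guest: 9200,  host: 9200   # Elasticsearch
  config.vm.network :forwarded_port, guest: 27017, host: 27017  # MongoDB
  config.vm.network :forwarded_port, guest: 9092,  host: 9092   # Kafka
  config.vm.network :forwarded_port, guest: 2181,  host: 2181   # Zookeeper
  config.vm.network :forwarded_port, guest: 6379,  host: 6379   # Redis

  # A provisioning script would install Docker inside the VM and
  # build/start each service container.
  config.vm.provision :shell, path: "provision.sh"
end
```

With something like this in place, refreshing the whole environment is just a matter of destroying and re-creating the VM.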
Twenty-four hours later, we crushed it! Not only did we get all of the aforementioned components working, but we also achieved our stretch goals of getting Storm and Jetty components up as well. Dockerfiles were so fast and simple to create that we knocked out most of the work in less than 12 hours. We started to realize containers built with Dockerfiles were almost as simple as a copy and paste from each vendor’s web site. Plus, iterating through changes was almost instant due to the file system diffs. By the end of the day, we had 90 percent of our entire production system running on our Mac OS X laptops in a fully isolated virtual environment, and we were crazy for Docker.
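As an example of how little a service container takes, here is a minimal Dockerfile sketch for Redis; the base image and package are illustrative, and each of the other components followed the same copy-the-vendor's-install-steps pattern:

```dockerfile
# Hypothetical Dockerfile for a Redis container -- base image and
# package are examples, not our exact setup.
FROM ubuntu:12.04

# Install Redis from the distribution's package repository.
RUN apt-get update && apt-get install -y redis-server

# Expose Redis's default port to the host and to other containers.
EXPOSE 6379

# Run in the foreground so Docker can supervise the process.
CMD ["redis-server"]
```

Because Docker caches each build step as a file system layer, editing the last line and rebuilding is nearly instant, which is what made iterating so fast.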
A week later, we won the hack day award for “Most Production-Ready Project.” We had several engineers running the new environment the day after, and we haven’t looked back since.
We’re happy to report that our engineers are no longer angry. Since hack day, they have grown interested in how we built the new environment in such a short time, and the feedback so far has been fantastic. Today, every engineer on our team is using the new environment with great success. They couldn’t believe how easy it was to get going, or the level of abstraction we were able to provide with a single command. It was an awesome hack day.
In Part 2 (which is due out next week), I’ll explain how each component we wrote works so you can replicate pieces for your own environment. Stay tuned!
In the meantime, here is a quick visual on what the end environment looked like:
Please comment if you have any feedback or questions. And if you are interested in jobs at RelateIQ, please contact us here.