Eppy is a really useful library which I’ve written about several times, since before I really had anything to offer in terms of contributing code. Over the past year or so though, I’ve started to contribute back some of the changes and additions I’ve made while using eppy on academic and commercial projects.

This post is intended to show why and how to implement continuous integration (CI) on an open source project with a small but geographically separated team of (intermittently) active contributors. It also covers a few of the issues and snags that came up during the process.

Just to introduce the terminology as I’m using it: CI is the part of DevOps concerned with ensuring that changes to a project’s codebase don’t introduce new bugs or cause old ones to resurface.

While technically this could just mean running tests on your local machine, nowadays CI usually means using a build server to run the tests automatically on multiple platforms, avoiding the classic problem of “works on my machine” code.

"Works on my machine" certified
“Works on my machine” certified

I’m specifically leaving aside the subject of continuous delivery (automatically publishing changes once they are accepted into the master branch). We may come back to that later, but for now, baby steps.


While we may not all see a need for the extremes of test driven development (TDD), I hope that all developers understand the benefits of testing their code. While the project you’re working on may not have full test coverage, that’s no reason not to start testing. This is particularly important in a project like eppy which has been under development for quite a few years (in this and previous incarnations).

The first and most important reason for taking on the challenge now was that I wanted to streamline two modules: epbunch and modeleditor, both of which used long chains of inheritance to add functionality to the main classes they implement (epbunch.EPBunch and modeleditor.IDF). I think we were up to IDF6 in the end. This design pattern, while understandable as a way of isolating new functionality during development, can make the logic hard to follow, so refactoring into a single IDF class was an appealing goal.
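To illustrate the pattern being refactored away (class and method names here are purely illustrative, not the real eppy ones): each round of development added functionality in a new subclass, and callers only ever saw the end of the chain.

```python
# Sketch of the chained-inheritance pattern: each development step
# bolted new methods onto a fresh subclass of the previous one.
class IDF0(object):
    def read(self):
        return "read"


class IDF1(IDF0):
    def save(self):
        return "save"


class IDF2(IDF1):
    def run(self):
        return "run"


IDF_chained = IDF2  # callers were pointed at the end of the chain


# After refactoring, the same behaviour lives in one flat class,
# which is far easier to read and navigate.
class IDF(object):
    def read(self):
        return "read"

    def save(self):
        return "save"

    def run(self):
        return "run"
```

The two expose identical behaviour, which is exactly what a test suite lets you verify before and after the refactor.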

So the immediate benefit of implementing testing and CI was to give me and other developers the confidence to get in and refactor, safe in the knowledge that if we broke anything it would (hopefully!) become apparent straight away.

As a secondary benefit, this is a real boost to the efficiency of the project as a whole. For Santosh as the project owner/maintainer, I hope this gives him confidence that every change has been tested on multiple platforms, system architectures, and for both Python 2 and Python 3.

There are some further advantages that we can expect as the CI process is completed, including removing the need for separate building and deployment of Python 3 code (we currently write in Python 2 and then automate the generation of Python 3 code using 2to3).


So in terms of the practical steps to implementing CI, the first one is to sign up to a CI service. For an open source project, there are a couple of free options available. The first we chose was Travis CI, partly because of a great podcast episode I caught with one of the developers, and partly because that is what they use at sklearn, the machine learning library I’ve been contributing to.

Travis’s GitHub integration is very smooth, from automatically triggering tests when you commit code, to the badge you can put in your README.md file to show tests are passing.
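To give an idea of the effort involved, a minimal .travis.yml for a Python project tested against both major versions needs only a handful of lines (version numbers and the requirements file name here are illustrative):

```yaml
language: python
python:
  - "2.7"
  - "3.5"
install:
  - pip install -r requirements.txt  # illustrative requirements file
script:
  - py.test
```

Once this file is committed, every push and pull request triggers a build across the whole Python version matrix.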

Eppy’s badges

Travis CI only covers Linux though, and a lot of eppy users are on Windows so we also needed to set up AppVeyor. Again, this is free for open source projects and is well integrated with GitHub.

One thing I did have a little trouble with was setting up the test coverage statistics, but in the end that came together too. After a little digging into why the suite was taking so long (tracked down to an enormous string used in testing), the CI now runs in around a minute on Travis CI, which runs all your platforms in parallel. It takes a fair bit longer on AppVeyor, where each platform runs sequentially.

A final “how” question is the subject of integration tests that depend on external software. Given that eppy is so dependent on EnergyPlus, particularly now that we can run an IDF directly from eppy using IDF.run(epw="myweather.epw"), it is important that we test that things are playing nicely together. For this to happen, we need to install EnergyPlus as part of setting up our test environment.


There are a few things to be aware of. The first is something that has been on the development schedule for quite some time. The team has tried a few ways of testing for both Python 2 and Python 3. The first attempt at setting up CI involved using 2to3 scripts to translate the code from Python 2.7, then running the tests against the generated code. However, it proved difficult to track down bugs introduced during this translation step, so when I hit one I decided to bite the bullet and implement six, as suggested by an experienced developer who had kindly taken a look at the eppy codebase. The six package provides workarounds for the (relatively few) things which distinguish Python 2.7 from Python 3.x. Once each of these had been addressed, the tests ran perfectly on all versions, and it only took around an hour to make all the changes.
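To give a flavour of the kind of change involved, six mostly papers over renamed builtins and moved modules. The sketch below shows the idea using only the standard library; the real code would use six’s equivalents (such as six.string_types and six.moves) rather than hand-rolling the shims:

```python
import sys

PY2 = sys.version_info[0] == 2

if PY2:
    # On Python 2, text may be str or unicode; basestring covers both.
    string_types = (basestring,)  # noqa: F821 (only defined on Python 2)
    from StringIO import StringIO  # noqa: F401
else:
    # On Python 3, str is the only text type, and StringIO moved to io.
    string_types = (str,)
    from io import StringIO  # noqa: F401


def is_text(value):
    """Return True for text values on both Python 2 and Python 3."""
    return isinstance(value, string_types)
```

With shims like these in place, the same source runs unmodified on both interpreters, so there is nothing to translate and nothing translation can break.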

If you’re not used to PowerShell, there’s a bit of a learning curve, since it defines a number of “aliases” for familiar tools which don’t share the command line interface of the tools they’re named after. For example, curl and wget are both aliases for Invoke-WebRequest: you can still use them to download files, but the -O flag doesn’t exist. Instead you need the -OutFile parameter to specify a download location.
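For example, downloading the EnergyPlus installer in PowerShell looks like this (the URL and file name are placeholders):

```powershell
# curl here resolves to the Invoke-WebRequest alias, not curl.exe,
# so the familiar -O flag is not recognised; -OutFile names the target.
curl "https://example.com/EnergyPlus-installer.exe" -OutFile "EnergyPlus-installer.exe"
```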

Also, we require an unattended installation, since there is no way to interact with the installer on the build server. On Windows this means passing the /S flag to the installer; on Linux we pipe the required keystrokes to the installer to accept the terms and conditions.
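Sketched as build-script steps (the installer file names below are placeholders):

```sh
# Windows (AppVeyor): /S asks the installer to run silently
EnergyPlus-installer.exe /S

# Linux (Travis CI): pipe the keystrokes the interactive installer expects
printf 'y\n\n' | sudo ./EnergyPlus-installer.sh
```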

And probably the most frustrating snag, in that the solution was so simple but took me a while to figure out: the Travis Linux distribution defaults to Ubuntu 12.04, while EnergyPlus requires Ubuntu 14.04 for the availability of certain C++ libraries. To fix this, all that was required was to add:
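In .travis.yml, where the dist key selects the Ubuntu release (trusty is 14.04):

```yaml
# request Ubuntu 14.04 (Trusty) instead of the 12.04 (Precise) default
dist: trusty
```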

Running time of tests is important. Ideally they should run in under a minute, and certainly no more than five. Initially this was a problem, for two reasons. The first was the pre-translation step, from Python 2.7 to Python 3.5. This was actually one of the biggest annoyances for me when building and running tests before we started looking at CI (and before turning to six). This step also took an unpredictable amount of time on different developers’ systems.

The second time-hog was the test coverage reporting. In some cases the test suite would actually fail because the coverage report took so long to complete. This one was down to the files we use to mock an EnergyPlus IDD. These are relatively large .py files, each containing a single string broken over multiple lines. As far as coverage reporting is concerned, each is a single source line of code. My assumption is that they eat up a good chunk of memory, but whatever the reason, they were the bottleneck in the coverage reporting. Adding these two files to the omit section of the coverage configuration file reduced the test running time on Travis from just under ten minutes to just over one minute.
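The fix amounts to a couple of lines in the coverage configuration (the file names below are placeholders for the mock IDD modules):

```ini
# .coveragerc
[run]
omit =
    */mock_idd_one.py
    */mock_idd_two.py
```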


The biggest omission in these free (for open source) tools is that neither lets us test on OSX without lots of hacky workarounds. Hopefully, someone will come up with an offering in the future that caters to open source teams with no budget for paid CI services. For now, though, we don’t have a simple way to test on OSX, and that’s a shame.

Another issue is with the badges that the various CI tools provide, which can be displayed in your project README. These are great for a high-level view of the health of a project, both in terms of passing tests and of test coverage. The problem is that GitHub has no mechanism for identifying which branch (or which fork) of the repository the README is being viewed in. That means that no matter which branch you are browsing, you will see the badges for the master branch (or whichever branch the badge URL points to), not the branch you are currently browsing.


This post condenses work that was done in dribs and drabs and, at the time of writing, is not yet all in the master branch at eppy. It’s an enormous help already though, and I can’t recommend these two services highly enough, particularly for their provision of free accounts for open source projects.

Jamie Bull | mail@oco-carbon.com

