I’ve made a lot of progress on testing for the dashboard tool; I’m at about 80% coverage across the whole codebase. It was interesting writing the fixtures for each of the different data sources so that they could be exercised in tests. I also added a few features to support testing and did some refactoring to make the code more testable.

In particular, I moved most of the code for loading and processing the configuration file out of the main script and into a separate module. This allows those functions to be tested in isolation, and they now represent a usable API in their own right. I didn’t want the config code living directly in the dashboard module, since it is coupled to all of the different types of data sources and their dependencies. This way the dashboard code could be used with entirely different graph objects or data table objects without any direct coupling to those specific implementations or their dependencies.
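To make the shape of that split concrete, here is a minimal sketch of the idea. The module and function names (load_config, build_sources) are hypothetical, not the tool’s actual API, and the real code handles more than this, but it shows why the config layer can be tested without pulling in any data-source dependencies.

```python
# config.py -- hypothetical sketch of the config module described above;
# the real module and function names in the project may differ.
import json


def load_config(path):
    """Read the dashboard configuration file and return a plain dict.

    Keeping this pure (path in, dict out) means a test can point it at a
    tiny fixture file and assert on the result, with no dashboard imports.
    """
    with open(path) as fh:
        return json.load(fh)


def build_sources(config, registry):
    """Turn config entries into data-source objects via an injected registry.

    The dashboard passes in its own registry of constructors, so this
    module never imports the concrete data-source classes directly.
    """
    return [
        registry[entry["type"]](**entry.get("options", {}))
        for entry in config["sources"]
    ]
```

A test can then write a three-line fixture config to a temporary file and assert on what comes back, without importing any plotting or database code.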
I used snapshot comparison to verify the graphics rendering code. This involved creating static data sets and carefully controlling the screen size, and it requires some honesty on the part of the tester: you have to inspect the renderings carefully rather than accept a buggy image as the baseline. I have at least one bug in my example dashboard that I haven’t so far been able to reproduce in the test automation… but it is on my list to track down.
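The comparison step itself is simple once the inputs are pinned down. Here is roughly what one of these tests looks like in spirit; render_dashboard and the fixture paths are hypothetical stand-ins, and I’m using Pillow for the pixel diff as one possible way to do it, not necessarily what the project uses.

```python
# test_snapshot.py -- rough sketch of a snapshot-comparison test, not the
# project's actual test code.
from PIL import Image, ImageChops

from dashboard import render_dashboard  # hypothetical entry point


def test_example_dashboard_matches_baseline(tmp_path):
    # Fixed input data and a pinned canvas size keep the output deterministic.
    out_path = tmp_path / "dashboard.png"
    render_dashboard(
        config="tests/fixtures/static_config.json",  # static data set
        size=(1280, 720),                            # controlled screen size
        output=out_path,
    )

    rendered = Image.open(out_path).convert("RGB")
    baseline = Image.open("tests/baselines/dashboard.png").convert("RGB")

    # Any differing pixel gives the diff image a non-empty bounding box.
    diff = ImageChops.difference(rendered, baseline)
    assert diff.getbbox() is None, "rendering differs from approved baseline"
```

The honesty comes in when a rendering changes: you have to look at the new image and decide whether it is a fix or a regression before blessing it as the new baseline.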
My plan is to finish working through the CLI and config testing and then move on to the next round of features and enhancements.