About Me

I'm the project manager of a software development team at www.researchspace.com. I've been developing bioinformatics software for the last 9 years or so, and I'm hoping to share on this blog some of the experience and knowledge I've gained with Eclipse RCP, Java web app development and software development in general.

Saturday, 28 August 2010

SWTBot code coverage

Today I'm going to discuss the great boost to test code coverage that the SWTBot toolkit provides for Eclipse RCP applications.

We develop an Eclipse RCP app, SBSIVisual, that contains a mixture of 'core' domain objects and UI packages. Using just JUnit tests, we cover about 22% of the code base, with about 42000 test instructions for 71000 production code instructions. Considering the effort needed to write these tests, that's a lot of test code for not much coverage.

When we run our SWTBot functional tests, though, we get 35% code coverage from just 11000 test instructions. Not only do we get a more efficient test:source ratio (1:6.5 for SWTBot, 1:1.75 for JUnit), we also get better coverage. Moreover, if we merge our JUnit and SWTBot results, we get 47% code coverage overall - indicating that the two sets of tests cover a substantial amount of non-redundant code.

Now, I don't know enough testing theory to know whether these figures are 'typical' - the coverage might be a bit low for purists - but we are a small group of developers, with mixed enthusiasm for testing, working on a technical application with no dedicated testing or QA staff. I feel, though, that these figures could be quite reproducible in many other applications.

An additional advantage of SWTBot tests is their resilience to refactoring. We undertake quite severe refactorings from time to time, which usually requires a fair amount of work adapting the JUnit tests. However, so long as the UI remains relatively unchanged, the SWTBot tests often do not need to be altered at all, as the underlying object model is hidden from the user operations mimicked by SWTBot.

The key caveat in the above statement is 'the UI remains relatively unchanged'. How can we get round this problem? Our policy is not to write SWTBot tests immediately when we implement new functionality - we hold off until we are fairly sure that the UI is 'fit for purpose' with our users and at least the look and feel is acceptable for the next release. Then we write them, based on the use cases, to guard against regressions, and only have to make minor changes for incremental changes in functionality. The use of PageObjects is also very helpful: the details of the UI at the widget level can be hidden from the test author and kept in a single class.
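To illustrate the PageObject idea, here's a minimal sketch in plain Java. The `NewModelWizardPage` dialog, its widget labels and the `SwtBotFacade` interface are all hypothetical - the facade merely stands in for SWTBot's `SWTWorkbenchBot` so the example is self-contained; a real test would pass in an adapter wrapping the live bot.

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for SWTBot's SWTWorkbenchBot, so the sketch compiles
// without an Eclipse runtime. (Hypothetical interface.)
interface SwtBotFacade {
    void clickMenu(String... path);
    void setText(String label, String value);
    void clickButton(String label);
}

/** Page object: the only class that knows the wizard's widget labels. */
class NewModelWizardPage {
    private final SwtBotFacade bot;

    NewModelWizardPage(SwtBotFacade bot) {
        this.bot = bot;
    }

    NewModelWizardPage open() {
        bot.clickMenu("File", "New", "Model");
        return this;
    }

    NewModelWizardPage setName(String name) {
        bot.setText("Model name:", name);
        return this;
    }

    void finish() {
        bot.clickButton("Finish");
    }
}

public class PageObjectSketch {

    /** Drives the page object against a recording stub and returns the actions. */
    static List<String> record() {
        final List<String> actions = new ArrayList<>();
        SwtBotFacade stub = new SwtBotFacade() {
            public void clickMenu(String... path) {
                actions.add("menu:" + String.join(">", path));
            }
            public void setText(String label, String value) {
                actions.add("text:" + label + "=" + value);
            }
            public void clickButton(String label) {
                actions.add("button:" + label);
            }
        };
        // The test author only sees this fluent, widget-free API.
        new NewModelWizardPage(stub).open().setName("demo").finish();
        return actions;
    }

    public static void main(String[] args) {
        record().forEach(System.out::println);
    }
}
```

If the wizard's layout or labels change, only `NewModelWizardPage` needs editing; the tests that call `open().setName(...).finish()` stay the same.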

Another great feature of SWTBot is that if you link your tests into your Ant build, you can be really confident your deployed app will run! This used to be a big problem for us - the unit tests would be fine, but a feature might be missing a dependency that would only be detected at runtime with the dreaded NoClassDefFoundError. Now, we have virtually eliminated manual testing, other than a brief sanity check before each release.
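For the build integration, something along these lines can work. This is a schematic sketch only: the plugin id, test class and property values are placeholders, and it assumes the `library.xml` launch script that ships with SWTBot's headless testing support - check your copy of the script for the exact target and property names.

```xml
<!-- Schematic only: plugin id, class name and paths are placeholders. -->
<target name="swtbot-tests">
  <!-- Delegates to SWTBot's headless launch script, which starts a full
       Eclipse instance and runs the tests inside it - so missing plugin
       dependencies surface here rather than after deployment. -->
  <ant target="swtbot-test" antfile="${swtbot.library.xml}" dir="${eclipse-home}">
    <property name="data-dir" value="${test-workspace}"/>
    <property name="plugin-name" value="com.example.sbsivisual.swtbot.tests"/>
    <property name="classname" value="com.example.sbsivisual.swtbot.tests.AllTests"/>
    <property name="os" value="linux"/>
    <property name="ws" value="gtk"/>
    <property name="arch" value="x86_64"/>
  </ant>
</target>
```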

All this enthusiasm for SWTBot may make it look like I'm knocking the JUnit tests here - far from it. We run them much more frequently, they take seconds rather than minutes to execute, and they are resilient to other kinds of change. For example, if we develop a new UI, the JUnit tests will still be relevant, whereas the SWTBot tests are only useful for our Eclipse RCP-based UI.