"EMMA is an open-source toolkit for measuring and reporting Java code coverage. EMMA distinguishes itself from other tools by going after a unique feature combination: support for large-scale enterprise software development while keeping individual developer's work fast and iterative."-- Taken from EMMA
I am really fortunate to have learned to use EMMA with JUnit test cases. I found that the software exports a great HTML interface with easy-to-navigate pages and clear, concise visual representations of which functions and lines were hit and which were not. This feeds into the approach of "test-driven development". The key is not to develop test cases that are geared toward coverage, but to develop good test designs that will yield 100% coverage. I can easily see how a novice programmer using EMMA would just write test cases to obtain 100% coverage in the EMMA output. In his paper "How to Misuse Code Coverage", Brian Marick gives the following tips:
- Think of the feature in the interface that the missed coverage condition corresponds to.
- Rethink how you should have tested that feature. Don't worry about how to satisfy the coverage condition, or think too much about other missed coverage conditions in the same feature.
- Run the new tests and recheck coverage. You should expect to satisfy the missed coverage conditions in the feature, even though you didn't specifically target them.
- Repeat for the other features.
The bottom line: as tempting as it is, don't give in to the urge to satisfy the '100%' mark, and don't design test cases that way.
Unfortunately, that is easier said than done. I found myself looking at which functions I didn't cover, then just writing test functions that use assertSame to see if they work. Then I took a step back, but on second thought, the missed lines were such small details that there really was no other way to do it.
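To illustrate what I mean, here is a minimal, hypothetical sketch of that kind of coverage-chasing check (the Account class and its getter are made up for this example): it touches the uncovered getter so EMMA marks the line green, but it only restates the implementation rather than exercising a real feature.

```java
// Hypothetical class standing in for the kind of trivial code EMMA
// flagged as uncovered.
class Account {
    private final String owner;

    Account(String owner) {
        this.owner = owner;
    }

    String getOwner() {
        return owner;
    }
}

class CoverageChasing {
    public static void main(String[] args) {
        String owner = "alice";
        Account a = new Account(owner);
        // assertSame-style identity check, written purely to hit getOwner():
        // it passes by construction, so it tells us almost nothing.
        System.out.println(a.getOwner() == owner); // prints true
    }
}
```

A plain JUnit `assertSame(owner, a.getOwner())` would do the same thing; either way the line is now "covered" without any feature being tested.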
Recently we were given the task of modifying code that satisfies 100% coverage but still fails test cases, just to prove that having 100% coverage will never be sufficient. I was confused as to how I would actually implement something like that... I initially took the wrong approach and modified the JUnit test cases, then took a different route and had the data structure mess with the cases instead. That approach turned out to be correct and was quite easy to accomplish. The idea is that the cases assume the best, but don't expect the unexpected. The functionality is right, but only to a certain extent.
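The data structures and tests from the assignment aren't shown here, but the idea can be sketched with a hypothetical stack: a single-element test reaches 100% line coverage and passes, yet the code is wrong for any input the test didn't assume.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical data structure for illustration: a stack whose pop()
// contains a bug that one-element tests can never see.
class IntStack {
    private final List<Integer> items = new ArrayList<>();

    void push(int value) {
        items.add(value);
    }

    // BUG: removes the FIRST element instead of the last. With a
    // single-element stack the two are identical, so a test that pushes
    // one value covers this line, passes, and hides the defect.
    int pop() {
        return items.remove(0);
    }
}

class CoverageDemo {
    public static void main(String[] args) {
        // A "test" that yields 100% line coverage of IntStack and passes:
        IntStack s = new IntStack();
        s.push(42);
        System.out.println(s.pop() == 42); // prints true, bug hidden

        // Pushing two values exposes the defect:
        IntStack t = new IntStack();
        t.push(1);
        t.push(2);
        System.out.println(t.pop()); // prints 1, should be 2
    }
}
```

Every line of IntStack is green in the coverage report, and the first test passes, which is exactly the point: the cases assume the best and never probe the unexpected input.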