Monday 12 April 2010

Product SW quality (part 2): The quality factor Correctness

Dear reader,

Quality factor correctness is maybe the most common measure of SW quality, and it might be kicking in open doors to write about it. However, as Henrik quite correctly pointed out in his comment on my first post about Product SW quality, “correctness is measured at a specific point in time”. This fact makes these quality metrics less trustworthy the more changes you make to the code without performing the right regression testing. Therefore I use them, together with the other quality factors, for making release decisions on the maturity of the product at a specific point in time. Normally it should be measured against preset targets for beta SW or final SW. Only looking at one or two of the correctness quality metrics can give you a false sense of good quality. For example: you might have a low number of open defects and a good pass rate, but without knowing the test coverage and the S-curve you will still be in the dark.

Here are some details that I think of when using the correctness quality metrics.

Open defects of rank A, B and C:
This metric is very straightforward; nevertheless, the truly hard part is to set targets that are right for your product. They can be very different for a mobile game compared to a heart-lung machine. The only way I have found that works so far is to learn the right limits as you go (maybe not the best way if you do heart-lung machines :-), releasing your product and getting customer feedback (if possible before market launch, using beta testers). I.e. is zero A-ranks something your customers expect, or are 10 A-ranks okay? If you have a better way of doing the target setting, please let me know. It is also important that you use a good ranking system and that everyone gets trained in and good at using it when reporting defects. In addition, it is very important that everyone buys in to always adding every defect that is found to your defect tracking system, to get traceability. Without 99.9% buy-in from the organization you are in trouble.
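
As an illustration only, here is a minimal Python sketch of how open defect counts per rank could be checked against preset release targets. The ranks and the target numbers are made up for the example and are not recommendations.

from collections import Counter

# Example release targets per defect rank (made-up numbers, set your own).
RELEASE_TARGETS = {"A": 0, "B": 5, "C": 20}

def open_defects_per_rank(defects):
    # Each defect is a dict with at least a 'rank' and a 'status' field.
    return Counter(d["rank"] for d in defects if d["status"] == "open")

def meets_release_targets(defects, targets=RELEASE_TARGETS):
    # True only if every rank is at or below its target.
    counts = open_defects_per_rank(defects)
    return all(counts.get(rank, 0) <= limit for rank, limit in targets.items())

# Example usage with invented data:
defects = [{"rank": "A", "status": "closed"}, {"rank": "B", "status": "open"}]
print(open_defects_per_rank(defects))   # Counter({'B': 1})
print(meets_release_targets(defects))   # True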

% pass rate for test cases:
This metric is also quite straightforward: you divide the number of passed test cases by the total number of available test cases. It should be measured for at least unit testing, integration testing and system testing.
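
To make the arithmetic explicit, here is a small Python sketch with invented numbers per test level:

def pass_rate(passed, total):
    # % pass rate = passed test cases / total available test cases
    return 100.0 * passed / total if total else 0.0

# Invented example results as (passed, total) per test level:
results = {"unit": (950, 1000), "integration": (180, 200), "system": (45, 60)}
for level, (passed, total) in results.items():
    print(f"{level}: {pass_rate(passed, total):.1f}% pass rate")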

% test coverage:
Again, this metric is not rocket science. You measure the percentage of all available test cases that have actually been executed, to validate that the % pass rate tells you the true story. This should be done for all test levels. The target in an optimal world is of course 100%, if you have no blocked areas. For unit tests it is also good to try to get as close to 100% code coverage as possible, which makes the metric even stronger.
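
The sketch below (again with invented numbers) shows why the pass rate needs this validation: a pass rate can look excellent even when only a small part of the available test cases has been executed.

def test_coverage(executed, available):
    # % test coverage = executed test cases / available test cases
    return 100.0 * executed / available if available else 0.0

# Invented example: only 120 of 400 available test cases were run, 118 passed.
executed, available, passed = 120, 400, 118
print(f"pass rate: {100.0 * passed / executed:.1f}%")           # 98.3% - looks great
print(f"coverage:  {test_coverage(executed, available):.1f}%")  # 30.0% - most cases never ran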

Performance metrics:
It is of great help for the development organization to receive targets for non-functional requirements, like performance metrics, from product management in the product specification before development starts. Then it is much easier to estimate the work needed to achieve the targets. One example could be the start-up time of the product. Without targets you often end up arguing before launch about what is good or bad. If product management does a good job, they benchmark against competitors or set good metric targets based on customer expectations.
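
As a sketch only (the 2-second target and the command line are placeholders, not taken from any real product), a start-up time measurement could be compared against a target from the product specification like this:

import subprocess
import time

STARTUP_TARGET_SECONDS = 2.0  # placeholder target from the product specification

def measure_startup_time(command):
    # Time the whole command here; a real measurement would instead wait for
    # the product's own 'ready' signal.
    start = time.monotonic()
    subprocess.run(command, check=True)
    return time.monotonic() - start

# Placeholder usage (the command is hypothetical):
# elapsed = measure_startup_time(["./my_product", "--exit-after-startup"])
# print("PASS" if elapsed <= STARTUP_TARGET_SECONDS else "FAIL", f"({elapsed:.2f}s)")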

Thank you for reading, and please give some feedback.

Anders
