KPIs for a QA/QE team

It came to me in the middle of the brainstorming session.

My boss asked for help devising some Key Performance Indicators (KPIs) for our team of Quality Engineers for the next year. We needed to define metrics to measure the performance of our entire team; ideally metrics that are based on numbers, really reveal how well the team is executing, and help focus the team on what is important. The question of how to do this (well) comes up often and seems hard to answer. Keeping focus on performance and improvement can be hard, as QE are often caught in the software development cycle between development engineering always wanting a bit more time and release deadlines that can’t move. Their performance is no easier (and likely harder?) to judge than that of ‘regular’ development engineers. I know I’ve been asked countless times while interviewing how one should measure and gauge individual QE performance. It’s a tough question, especially if you want easy-to-measure numbers rather than vague hand-waving.

The first thing her management group suggested was to track the # of bugs filed. That’s such an easy number to generate on a per-person and per-team basis. But how useful is that number? Today I reopened a bug at the direction of the engineer I’m working with, because he had conceived of a new way to handle an issue that I’d filed and he’d dismissed earlier. If my team (or I) were being rated on the number of bugs filed, it would have been in my best interests to spend a couple of extra minutes creating a new bug for the issue instead of clicking [Reopen][Save]. If the “# of bugs filed” really is an indication of our/my performance, we’re saying that the time spent filing that extra bug added value to our organization and the company. But I’m pretty sure we all agree that it doesn’t, as the incentive isn’t to ‘add value to the company’ but to generate bug reports.

We also discussed that maybe “Reopened Bugs” was a possible metric to track, with the goal being to keep that number low. That means my act of imagined misdirection above would have helped keep another metric low in my favor.

But, I pointed out, if we were to track the bugs, it would also make sense to track the number of open bugs over time for each project. This would be the basis for some charting and bug modelling, which I felt would add value to the organization and improve our ability to ship higher-quality software with a better understanding of its quality.* Tracking the number and state of bugs in itself isn’t bad. But the number doesn’t map well to being a measure of the value that the Quality team is adding to the business.
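As a rough sketch of the kind of bug modelling I mean — the record layout and dates here are entirely hypothetical, and a real version would pull from your bug tracker’s API — counting how many bugs are open on each day is just a fold over (opened, closed) pairs:

```python
from datetime import date, timedelta

# Hypothetical bug records: (opened, closed) dates; closed=None means still open.
bugs = [
    (date(2024, 1, 2), date(2024, 1, 10)),
    (date(2024, 1, 5), None),
    (date(2024, 1, 8), date(2024, 1, 9)),
]

def open_bug_counts(bugs, start, end):
    """Count how many bugs were open on each day in [start, end]."""
    counts = {}
    day = start
    while day <= end:
        counts[day] = sum(
            1 for opened, closed in bugs
            if opened <= day and (closed is None or closed > day)
        )
        day += timedelta(days=1)
    return counts

counts = open_bug_counts(bugs, date(2024, 1, 1), date(2024, 1, 12))
```

Plot that series per project and you get the trend line that actually tells you something about project health — which, as noted above, is a different thing from team performance.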

We walked through several other potential metrics, some good, some bad:

  • the number of bugs filed, closed (and reason, especially ‘not a bug’), reopened, deferred*
  • the number of test cases
  • the number of tests under automation
  • lines of test/automation code written
  • more subjective test coverage (how much of the product is tested, identify gaps)
  • code coverage (use a tool to evaluate how much of the code is exercised by tests)
  • number of test configurations and environments
  • time spent maintaining test systems/environments divided by the number of them
  • time spent per project (map initial estimate vs. actual, also track the inevitable project development slippage against what was promised*)
  • number of issues that make it to customers on a per project basis to see if our quality is improving over time*

* Throughout the process, I just couldn’t help but turn the “metrics of QE team performance” into numbers that show the health of the project and software.
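Several of the bug-centric items above could be computed straight out of the tracker. A minimal sketch of one of them — the per-project escaped-defect rate — with hypothetical field names (`project`, `found_by`) standing in for whatever your tracker actually records:

```python
# Hypothetical bug records; 'found_by' marks who first reported the issue.
bugs = [
    {"project": "A", "found_by": "customer"},
    {"project": "A", "found_by": "internal"},
    {"project": "A", "found_by": "internal"},
    {"project": "B", "found_by": "customer"},
    {"project": "B", "found_by": "customer"},
]

def escape_rate(bugs):
    """Fraction of each project's bugs first reported by customers."""
    rates = {}
    for p in {b["project"] for b in bugs}:
        proj = [b for b in bugs if b["project"] == p]
        escaped = sum(1 for b in proj if b["found_by"] == "customer")
        rates[p] = escaped / len(proj)
    return rates

rates = escape_rate(bugs)
```

Easy to compute — which is exactly the trap: ease of computation says nothing about whether the number reflects the value the team is adding.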

But the problem was that we had quite a lot of ideas for numbers that could be generated but didn’t have much of a connection to what we as quality engineers are doing with our time, or to what our entire engineering team knows we MUST work on in order to assure and improve our quality. There are a lot of projects that are vital (“create a beta test system for key customers”) that don’t fit into the ‘execute tests, file bugs’ model or metrics. And many of these suggestions are open to being gamed. I can easily file more bugs, break tests into separately numbered parts, write a lot of easy tests while avoiding a hard one, and devise all sorts of ways to pad the various metrics in my favor that are totally counter to constructive work. While I’m sure I wouldn’t copy-and-paste a section of code 10 times instead of writing a loop just to pad my statistics, these ‘bad metrics’ can subtly influence even the best of us. More importantly, I’ve never seen ‘bad metrics’ have a positive effect on team morale; at best we shrug our shoulders and get back to work.

And that’s when it hit me: there is already a way to measure the value added by the work of individuals and teams. It was something the various Agile teams I’ve been a part of were collecting all the time. Story points is one of the names it goes by. Every task the team works on can be assigned a ‘business value’ that reflects how much value it contributes. There may be other measurements of how hard tasks are, how long they are expected to take, etc., but what matters here isn’t that it’s an arduous 3-month-long project, but rather the relative value added to the company. These should be the points assigned by the product owner for the value added to the company, not the team’s planning-poker estimate of effort required. We want to measure outcomes, not estimate effort.

Story points.  That’s the appropriate measurement of my team’s performance and individual performance. That’s a measurement that doesn’t distract the team from working to provide value. That’s a measurement that can be taken into account when some other emergency project arrives. We may find at the end of the year that we have only provided 75% of the “points” for the quality initiative goals for the year–but I bet it will be because there’s another 50% worth of unplanned and unforeseen projects that have been completed.  And those projects can be assigned points–and if they’re trumping our scheduled work, they must be worth more to the company. We can scale our expectations of our number by the size of the staff. We can compare the cost of the team and infrastructure against those points. We will encourage the team to work faster, smarter, and more focused.
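To make the year-end arithmetic concrete — with made-up task names and point values chosen only to mirror the 75%-of-plan scenario above — the tally is just a sum over completed work, planned or not:

```python
# Hypothetical year-end tally: each completed item carries the business-value
# points the product owner assigned, plus whether it was in the original plan.
completed = [
    {"name": "automate regression suite", "points": 13, "planned": True},
    {"name": "beta test system for key customers", "points": 20, "planned": True},
    {"name": "emergency perf investigation", "points": 8, "planned": False},
    {"name": "customer escalation support", "points": 5, "planned": False},
]
planned_goal = 44  # total points in the quality initiative plan for the year

delivered_planned = sum(i["points"] for i in completed if i["planned"])
delivered_unplanned = sum(i["points"] for i in completed if not i["planned"])

pct_of_plan = 100 * delivered_planned / planned_goal
total_value = delivered_planned + delivered_unplanned
```

The point of the split is that the unplanned work is visible and valued rather than invisible overhead: the team “missed” the plan while delivering more total value than it promised.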

I’ve been involved in Agile development, Scrum trainings, Bay Area Agile Leadership Network meetings, read the books, etc. but for whatever reason I hadn’t quite made the connection on how to apply the theory to the reality of this level of software development.  Perhaps the question being pitched to me as counting the number of bugs threw me off and into a defensive posture and I couldn’t think straight.

Agile KPIs are the best KPIs.  Whether your organization is Agile or not.

This entry was posted in Agile, QE, Software Engineering.

One Response to KPIs for a QA/QE team

  1. Ian says:

    I think that counting bug frequency/fix is a very useful tool to understand if the development cycle is maturing, but measuring bugs as part of any reward system is an idea so foolish that Dilbert lampooned it years ago.
