Communities of practice

Metrics for Automation

Hi,

So one thing we are currently trying to cook up is which metrics might be useful for automation, for both users and consumers.

My feeling is that one way to raise awareness of the benefits of automation is good metrics.

Some reporting/metrics we have proposed are below, but do you have more?

7 Reporting
Reporting requirements are being investigated; we are currently identifying what we need to capture and which metrics may be required to provide meaningful reports for Project Review and Management.
Sources of the reports include:
• Microsoft Team Foundation Server (TFS)
• DynaTrace
7.1 Quantitative
• percentage of a Project Team’s system covered by auto tests
• percentage by test type (Unit, Integration, Service, UI)
• percentage by Functional Area (breakdown to lowest sensible level, to be determined by team/manager)
• Reliability of Service & UI tests run via TFS (i.e. how often do we get a failed run due to factors aside from the test itself failing)
• Classify the reasons for these failures (e.g. Agent down, Contract mismatch, ESAM issue, etc.)
• Average length of time taken to run tests on the server side
7.2 Qualitative
• Any specific issues that a team wants to report relating to Auto Testing.
• Any wins a team wants to note
• Level of confidence of Testers in auto test usefulness broken down by Functional Area and Test subtype
• Level of confidence of Developers in auto test usefulness broken down by Functional Area and Test subtype
(Note: we could show the last two using traffic-light reporting, i.e. Red – not confident, Orange – significant concerns, Green – all good, with the basis for each colour explained in a dot-point summary.) A rough sketch of how these figures might be pulled together is below.
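
To make that a little more concrete, here is a minimal Python sketch of how the quantitative metrics and the traffic-light idea might be computed, assuming test-run records have already been exported from TFS/Dynatrace into a simple structure. The field names (test_type, outcome, failure_reason) and the 1-5 confidence score are assumptions for illustration, not an actual TFS or Dynatrace schema.

from collections import Counter
from dataclasses import dataclass

@dataclass
class TestRun:
    # Assumed shape of an exported test-run record (not a real TFS schema).
    test_type: str            # "Unit", "Integration", "Service" or "UI"
    functional_area: str
    duration_secs: float
    outcome: str              # "passed" or "failed"
    failure_reason: str = ""  # e.g. "Agent down", "Contract mismatch", "Test assertion"

# Failure reasons we treat as environmental rather than genuine test failures.
ENVIRONMENTAL = {"Agent down", "Contract mismatch", "ESAM issue"}

def share_by_test_type(runs):
    # Rough split of automated tests by type (Unit / Integration / Service / UI).
    counts = Counter(r.test_type for r in runs)
    total = sum(counts.values())
    return {t: 100.0 * n / total for t, n in counts.items()}

def environmental_failure_pct(runs):
    # How often a failed run is due to factors aside from the test itself failing.
    failed = [r for r in runs if r.outcome == "failed"]
    if not failed:
        return 0.0
    env = sum(1 for r in failed if r.failure_reason in ENVIRONMENTAL)
    return 100.0 * env / len(failed)

def failure_breakdown(runs):
    # Classify the reasons for failed runs (Agent down, Contract mismatch, ...).
    return Counter(r.failure_reason for r in runs if r.outcome == "failed")

def average_duration_secs(runs):
    # Average length of time taken to run tests on the server side.
    return sum(r.duration_secs for r in runs) / len(runs) if runs else 0.0

def traffic_light(confidence_score):
    # Map an assumed 1-5 tester/developer confidence score to Red/Orange/Green.
    if confidence_score >= 4:
        return "Green"
    if confidence_score >= 3:
        return "Orange"
    return "Red"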

One of the difficult things about metrics is how much time and effort it can take to collect them.

I think one important metric is (similar to one you mention, Reliability): where are problems/bugs actually being found? Are your automated tests finding any problems? How does this compare with how other things (not just tests, but reviews etc.) are finding real issues? Can you demonstrate that there is value in your automated tests, e.g. because they are finding problems that other things aren't?
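
One low-cost way to get at "where are problems actually being found?" is to tag each logged defect with its detection source and report the split. This is only a hedged sketch, assuming a hypothetical list of (defect id, detected-by) pairs rather than any particular defect tracker's API:

from collections import Counter

def detection_split(defects):
    # defects: iterable of (defect_id, detected_by) pairs, where detected_by is
    # e.g. "automated test", "code review", "manual test" or "production".
    counts = Counter(source for _, source in defects)
    total = sum(counts.values())
    return {source: round(100.0 * n / total, 1) for source, n in counts.items()}

# Illustrative data only - not real defect numbers.
defects = [
    ("D-101", "automated test"),
    ("D-102", "code review"),
    ("D-103", "automated test"),
    ("D-104", "manual test"),
    ("D-105", "production"),
]
print(detection_split(defects))
# {'automated test': 40.0, 'code review': 20.0, 'manual test': 20.0, 'production': 20.0}

A split like this, tracked over time, is what lets you argue that the automated suites are finding problems other activities are not (or that they aren't, which is just as useful to know).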

Also, how much is it costing you to set up and run your automated tests? How does this compare to the costs of manual testing, etc.?

If there are techniques during the development stage (e.g. unit tests, code reviews, static code analysis) that are finding bugs and preventing them from ever getting further down the pipeline, are we recording the value of these so that we can compare them to other forms of testing? It could even be the case that something like the choice of programming language greatly reduces the number of bugs - but how do we measure the value of something like that versus the value of downstream automated testing?
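
To put upstream techniques (unit tests, reviews, static analysis) and downstream automated or manual testing on a single scale, one rough yardstick is cost per defect found. A sketch with placeholder figures - every number below is made up purely for illustration:

def cost_per_defect(setup_cost, cost_per_cycle, cycles, defects_found):
    # Total cost of applying a technique divided by the defects it actually caught.
    total = setup_cost + cost_per_cycle * cycles
    return total / defects_found if defects_found else float("inf")

# name: (setup cost $, cost per run/cycle $, cycles run, defects found) - placeholders.
techniques = {
    "unit tests":         (8000, 50, 200, 120),
    "static analysis":    (2000, 10, 200, 40),
    "automated UI tests": (15000, 200, 100, 60),
    "manual regression":  (0, 4000, 12, 55),
}

for name, (setup, per_cycle, cycles, found) in techniques.items():
    print(f"{name}: ${cost_per_defect(setup, per_cycle, cycles, found):,.0f} per defect")

It doesn't answer the programming-language question, but it at least gives a common unit for comparing prevention-style techniques against downstream testing.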

Interesting thoughts, Glenn. As our own metrics develop, I shall definitely be incorporating these in some form.

Do you think there is any room for combining data visualisation tools (e.g. Qlik) with Automation? I feel there must be: we can capture far more info than ever before, and create auditable logs too, so surely we can mine that for insight. Is anyone doing this yet?
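
On the visualisation side, the simplest bridge is probably to flatten whatever metrics we collect into a tabular extract that Qlik (or any other BI tool) can load. A minimal sketch writing a CSV - the column names are just an assumed shape, not a required schema:

import csv
from datetime import date

# One row per team / test type / reporting period; the values are placeholders.
rows = [
    {"report_date": date.today().isoformat(), "team": "Team A", "test_type": "Service",
     "environmental_failure_pct": 9.0, "avg_duration_secs": 340, "confidence": "Green"},
    {"report_date": date.today().isoformat(), "team": "Team A", "test_type": "UI",
     "environmental_failure_pct": 22.5, "avg_duration_secs": 910, "confidence": "Orange"},
]

with open("automation_metrics.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)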

An interesting article on metrics - https://smartbear.com/resources/ebooks/6-ways-to-measure-the-roi-of-automated-testing/