So one thing we are currently trying to cook up is which metrics would be useful for automation, for both the people producing the tests and those consuming the reports. My feeling is that good metrics are one of the best ways to raise awareness of the benefits of automation. Some proposed reports/metrics are listed below, but do you have more?
Reporting requirements are being investigated; we are currently identifying what we need to capture and which metrics are required to produce meaningful reports for Project Review and Management.
Sources for the reports include:
• Microsoft Team Foundation Server (TFS)
Proposed metrics include:
• Percentage of a Project Team’s system covered by auto tests (see the coverage sketch after this list)
• Percentage by test type (Unit, Integration, Service, UI)
• Percentage by Functional Area (broken down to the lowest sensible level, to be determined by the team/manager)
• Reliability of Service & UI tests run via TFS (i.e. how often we get a failed run due to factors other than the test itself failing)
• Classification of the reasons for these failures (e.g. agent down, contract mismatch, ESAM issue, etc.; a failure-classification sketch follows the list)
• Average time taken to run tests server-side
• Any specific issues a team wants to report relating to auto testing
• Any wins a team wants to note
• Level of confidence of Testers in auto test usefulness, broken down by Functional Area and Test subtype
• Level of confidence of Developers in auto test usefulness, broken down by Functional Area and Test subtype
(Note: we could show the last two via traffic-light reporting, i.e. Red – not confident, Orange – we have significant concerns, Green – all good, with a dot-point summary explaining the basis for each colour; see the traffic-light sketch below.)
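As a rough illustration of how the coverage percentages could be aggregated, here is a minimal sketch in Python. The record structure and the sample data are invented for illustration; in practice the test type and Functional Area would come from a TFS test case query or export.

```python
from collections import Counter

# Hypothetical export: (test_type, functional_area) per automated test case.
# In practice these would be pulled from TFS test case metadata.
tests = [
    ("Unit", "Billing"),
    ("Unit", "Accounts"),
    ("Integration", "Billing"),
    ("Service", "Accounts"),
    ("UI", "Billing"),
]

def percentage_breakdown(categories):
    """Return {category: percentage of the total} for a sequence of labels."""
    counts = Counter(categories)
    total = sum(counts.values())
    return {label: 100.0 * n / total for label, n in counts.items()}

print("By test type:", percentage_breakdown(t for t, _ in tests))
print("By Functional Area:", percentage_breakdown(a for _, a in tests))
```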
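For the reliability, failure-classification, and average run time metrics, something like the following could work. The run records, the reason strings, and the set of "environmental" reasons are assumptions made for the sketch, not an actual TFS schema.

```python
from collections import Counter

# Hypothetical run records; real data would come from TFS build/test results.
runs = [
    {"outcome": "Passed", "duration_s": 412, "failure_reason": None},
    {"outcome": "Failed", "duration_s": 430, "failure_reason": "Agent down"},
    {"outcome": "Failed", "duration_s": 405, "failure_reason": "Test failure"},
    {"outcome": "Failed", "duration_s": 398, "failure_reason": "ESAM issue"},
]

# "Environmental" = the run failed for reasons other than the test itself.
ENVIRONMENTAL = {"Agent down", "Contract mismatch", "ESAM issue"}

env_failures = [r for r in runs if r["failure_reason"] in ENVIRONMENTAL]
reliability = 100.0 * (1 - len(env_failures) / len(runs))
avg_duration = sum(r["duration_s"] for r in runs) / len(runs)
reasons = Counter(r["failure_reason"] for r in env_failures)

print(f"Runs not lost to environmental factors: {reliability:.1f}%")
print(f"Average server-side run time: {avg_duration:.0f}s")
print("Failure reasons:", dict(reasons))
```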
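Finally, a minimal sketch of the traffic-light idea, assuming confidence is captured as an average survey score from 1 to 5. The thresholds here are placeholders that the team would need to agree on.

```python
def traffic_light(avg_score: float) -> str:
    """Map an average 1-5 confidence score to a traffic-light colour."""
    if avg_score >= 4.0:
        return "Green"   # all good
    if avg_score >= 2.5:
        return "Orange"  # we have significant concerns
    return "Red"         # not confident

# Hypothetical tester confidence per (Functional Area, Test subtype).
confidence = {
    ("Billing", "Service"): 4.3,
    ("Billing", "UI"): 2.1,
    ("Accounts", "Unit"): 3.2,
}
for (area, subtype), score in confidence.items():
    print(f"{area} / {subtype}: {traffic_light(score)}")
```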