Thanks for that. We have been making moves internally to build up our large-scale data generation (data creation) capabilities, and in light of what you have said, I may refocus more of our effort towards that angle.
It's interesting that you say auto scripts are already in place for manual testers to use. In our case we have some (and are always looking to build more), but we really want our manual testers to start telling us which manual test cases are the best targets for automation, rather than our automators picking ones we 'think' might be good candidates (as we don't always have the business, project, or BAU insight to know which are the best ones to target). Unfortunately, it is proving surprisingly difficult to get manual testers to tell us what makes a good target. We recently ran a 'lucky dip' exercise where we said, 'Give us your most annoying and repetitive regression test scenario and we will workshop how to turn it into an automated test case, and perhaps go on to actually automate it', and the response was unexpectedly weak. We thought that was something they would jump on.
How did you determine which auto tests and data creation objectives to target, if your testers didn't tell you which were the best candidates?
I’d also like to look at some sample reports from your organisation, if possible. We have early email reports, but perhaps they could be better… I’d like to mine what you have done for ideas.