
Winning over Manual Testers to Automation

Hi all,

So one issue we face in Employment is that not all testers are on board with automation. We are still working to figure out why this is (speculation abounds, but I’d like facts) and what we can do about it, but I was wondering if other orgs have already faced this problem (or are facing it) and how it was handled?

To get manual testers on board (since their job is to manually test the candidates that aren’t automated), we have built an app and Jenkins jobs that let them run data creation scripts for scenarios that need pre-canned data, and also execute automated scripts at any stage if required. This builds confidence across the testing teams and the business in using the automation framework, and it depends a lot on how well the reporting shows which steps have been executed. We use Serenity BDD at IPA.
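To give a rough idea of the shape of it (the scenario names and parameters below are made up for illustration, not our actual jobs): a parameterised Jenkins job just passes its parameters through to a small entry point like this, which the manual testers trigger from the Jenkins UI.

```java
// Minimal sketch of a data-creation entry point a parameterised Jenkins job
// could invoke, e.g. `java DataCreationJob --scenario=claim-in-review --count=5`.
// The scenario names and the stubbed creation step are placeholders.
import java.util.HashMap;
import java.util.Map;

public class DataCreationJob {

    public static void main(String[] args) {
        // Jenkins passes the job parameters through as command-line arguments.
        Map<String, String> params = parseArgs(args);
        String scenario = params.getOrDefault("scenario", "default");
        int count = Integer.parseInt(params.getOrDefault("count", "1"));

        for (int i = 0; i < count; i++) {
            // In a real job this would call the API / DB script that creates
            // the pre-canned record; here it just reports what it would do.
            System.out.printf("Creating test data for scenario '%s' (%d of %d)%n",
                    scenario, i + 1, count);
        }
    }

    // Parses --key=value style arguments into a map.
    private static Map<String, String> parseArgs(String[] args) {
        Map<String, String> params = new HashMap<>();
        for (String arg : args) {
            if (arg.startsWith("--") && arg.contains("=")) {
                String[] kv = arg.substring(2).split("=", 2);
                params.put(kv[0], kv[1]);
            }
        }
        return params;
    }
}
```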

Hi @suresh.thumma,

Thanks for that. We have been making moves internally to build out our mass data generation (data creation) capabilities. In light of what you said, I may refocus more effort towards that angle.

I am interested to note you say that auto scripts are already in place for manual testers to use. In our case we have some (and are always looking to build more), but we really wanted our manual testers to start telling us which manual test cases are the best targets for automation, rather than our automators picking ones that we ‘think’ might be good candidates (as we don’t always have the business, project or BAU insight to know which are the best ones to target).

Unfortunately it is proving surprisingly difficult to get manual testers to tell us what is a good target. We recently ran a lucky dip idea where we said ‘Give us your most annoying and repetitive regression test scenario and we will workshop how to turn it into an automated test case, and perhaps go on to actually automate it’, and got a surprisingly weak response. We thought that would be something they would jump on.

How did you determine which auto tests and data creation objectives to target, if your testers didn’t tell you which were the best?

I’d also like to look at some sample reports from your organisation, if possible. We have early email reports, but perhaps they could be better… I’d like to mine what you have done for ideas.

It’s better to get a matrix document done for the most repetitive test scenarios (data creation, API, UI), with input from the manual testers/business. Based on that, get a data injection test candidate delivered at high priority and let the manual testers get used to it (always prefix the data that is only for manual testers, since you need your own data for automation purposes too; this is critical for data management for the auto scripts). In the meantime, develop more test cases and hand them over to the manual testers. This gets them integrated with the automation team, gives you feedback, and helps you develop more automation candidates. It also makes manual testers feel more secure in their jobs, as they are integrated with the automation teams.
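As a very rough sketch of the prefix idea (the MAN/AUTO values are just examples, not a standard we mandate): tag every generated record with an owner prefix so manual testers’ data and the automation suite’s data never collide.

```java
// Tiny sketch of an owner-prefix convention for generated test data, so reports
// and cleanup scripts can tell at a glance which team created each record.
// The "MAN"/"AUTO" prefixes and reference format are illustrative only.
import java.util.concurrent.atomic.AtomicLong;

public class TestDataTagger {

    public enum Owner {
        MANUAL("MAN"), AUTOMATION("AUTO");
        final String prefix;
        Owner(String prefix) { this.prefix = prefix; }
    }

    private final AtomicLong counter = new AtomicLong();

    // Builds a reference like "MAN-000001" for a manual tester's record.
    public String newReference(Owner owner) {
        return String.format("%s-%06d", owner.prefix, counter.incrementAndGet());
    }

    public static void main(String[] args) {
        TestDataTagger tagger = new TestDataTagger();
        System.out.println(tagger.newReference(Owner.MANUAL));     // MAN-000001
        System.out.println(tagger.newReference(Owner.AUTOMATION)); // AUTO-000002
    }
}
```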

Thank you.

Is your ‘Matrix Document’ a traceability matrix type of thing (https://www.softwaretestinghelp.com/requirements-traceability-matrix/)?

Do you have a generic example of a ‘data injection test candidate’? I am not sure I understand what this is, unless it is something to do with testing the effectiveness of dependency injection in a given test scenario? Or do you mean a data-driven test candidate (i.e. can we drive the test via parameterisable data stored in, say, a CSV, an Excel spreadsheet or a DB table)? In which case I understand what you are saying.
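For clarity, this is the kind of thing I mean by a data-driven test candidate - a minimal, made-up JUnit 5 sketch where the CSV path, claim wording and assessClaim() logic are all placeholders, not anything from our real suite:

```java
// A data-driven test: one parameterised JUnit 5 test fed from a CSV file,
// so adding a new case means adding a row of data rather than new code.
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvFileSource;

class ClaimAssessmentTest {

    // Each row of /test-data/claims.csv (e.g. "JOBSEEKER,35,ELIGIBLE")
    // becomes one execution of this test; the header row is skipped.
    @ParameterizedTest
    @CsvFileSource(resources = "/test-data/claims.csv", numLinesToSkip = 1)
    void assessesClaimCorrectly(String claimType, int applicantAge, String expectedOutcome) {
        String actual = assessClaim(claimType, applicantAge);
        assertEquals(expectedOutcome, actual);
    }

    // Stand-in for the real business logic / API / UI step under test.
    private String assessClaim(String claimType, int applicantAge) {
        return applicantAge >= 18 ? "ELIGIBLE" : "INELIGIBLE";
    }
}
```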

When you develop test cases for them, do you just go and do it, or do you consult closely? Do they suggest or do you just pick?

I can explain a certain scenario in terms of data creation. Say we have integrated systems: client-facing systems and a CRM tool like Pega, for example. If you want to test only Pega, you need to create test data in a certain status, depending on the scenario type. In that situation I can use an API, Selenium or data management tools to inject data into the DB for each scenario, so the data is available for testing in Pega. We can also push test data into Pega at different statuses, using Selenium to drive a record to a particular status. This approach makes data creation much easier for manual testing and saves time.
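As a rough sketch of the API route (the endpoint, JSON shape and status value here are purely illustrative, not a real service): a single data-creation call can land a record directly in the status the Pega test needs, instead of clicking through the upstream systems by hand.

```java
// Sketch of injecting test data via an API so it arrives in a given status.
// The URL, payload fields and "IN_REVIEW" status are invented placeholders.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DataInjectionExample {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Create a claim already in the status the manual tester needs.
        String payload = """
                {"claimType": "JOBSEEKER", "status": "IN_REVIEW"}
                """;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://test-env.example.gov.au/api/claims"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println("Data creation returned HTTP " + response.statusCode());
        System.out.println(response.body());
    }
}
```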

Gotcha. We had started to develop this idea a bit on our own, but I can see we could take it further. Thanks for this.

Hello. I’ve been meaning to jump into this conversation for a while because I am keen to get some views on whether anyone is calling out that, in many cases, it is simply not possible to win over manual testers to automation (or any other technical testing discipline). In my current and previous roles I have been heavily involved in tech uplift initiatives for testers, and I can say from my experience that the conversion rate is low, and it depends on having both a minimum level of technical skill and a willingness to learn and experiment (as distinct from a typical day-to-day project-charging test analyst role).