Is Your Agile Project Overwhelming Your Testers?
July 4, 2022 | Written By: JP Simbandumwe | 13 Minute Read
The key takeaways:
We observe that many Government of Canada technical teams are experimenting with or embracing Agile software development methodologies. While there are many potential benefits of adopting Agile, there are also risks that must be managed. Keeping up with the requirement for more frequent testing is one of those risks.
Using automated software testing to conduct regression testing can be an effective way to maintain team velocity by preventing the accumulation of “testing debt” between Agile sprints.
Over the last decade, it has become easier and less costly to conduct automated regression testing on Government of Canada systems, because:
There are better automated testing tools available, including no/low-code options.
The increased adoption of the cloud in the Government of Canada technical community makes it easier to create and maintain the necessary environments and to access SaaS tools.
Jumping Elephants has made extensive use of automated regression testing on our recent projects. The benefits accrued included:
The development and business teams have improved confidence in the quality of the code.
A more efficient and faster user acceptance testing (UAT) process.
A significant reduction in the time required for input validation testing.
We heartily recommend the use of automated regression testing for Government of Canada Agile projects, but we also recognize that every project team should conduct a cost-benefit analysis and experiment with tools and role options to find the right fit.
If your team is trying to adhere to an Agile software development methodology, you are well aware that keeping up with testing can be a daunting challenge. If unresolved, this issue can pose significant risks to the project timeline or quality goals. Bugs that escape the testing process may eventually be discovered by either the business client or the end-user, and could erode confidence in the development team, the software, or the entire service.
In this blog article, we will reference our experience working with Canadian public sector partners to develop complex software solutions over the last 15 years. We will look at how the process and tools for conducting automated testing have changed in this time, as well as how the role of testing has been impacted by the adoption of Agile development methodologies. In our experience, employing automated regression testing has become a more viable strategy over the last few years. As well, the move of many Government of Canada technical teams to development on the cloud has opened up the possibility of outsourcing some of the testing functions. That said, we do not recommend trying to “automate everything”. Although, for reasons that will be discussed, the level of effort for creating and maintaining automated test scripts has come down over the last 15 years, automated regression testing is still a potentially intricate and costly endeavour that should be undertaken deliberately, with some forethought and a clear understanding of the trade-offs.
What is regression testing?
In this article, we will focus specifically on automated regression testing. Other types of automated testing, including load/performance testing and latency testing, will be the subject of future blog articles. Regression testing is generally defined as testing conducted after changes are made to a code base to ensure that functionality that was previously operational has not been “broken”. In conducting regression testing, we typically run end-to-end automated scenarios that represent the most common expected user flows (known as the “happy paths”), to determine that no impediments have been introduced in the last cycle of development. The testing team will also check that data output and data interactions between systems still work as expected.
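To make the idea concrete, here is a minimal sketch of a “happy path” regression check in Python. The `submit_application` function and its fields are hypothetical stand-ins for a real end-to-end user flow, which in practice would be driven through a browser automation or no/low-code testing tool:

```python
# A minimal sketch of an automated happy-path regression test.
# submit_application() is a hypothetical stand-in for a real
# end-to-end "submit an application" user flow.

def submit_application(name: str, email: str) -> dict:
    """Toy stand-in for the end-to-end application submission flow."""
    if not name or "@" not in email:
        return {"status": "error"}
    return {"status": "received", "applicant": name}

def test_happy_path_submission():
    """Regression check: the most common user flow still works."""
    result = submit_application("Jane Doe", "jane@example.com")
    assert result["status"] == "received"
    assert result["applicant"] == "Jane Doe"

test_happy_path_submission()
print("happy path OK")
```

Re-running a suite of checks like this after every build is what catches a previously working flow that a new change has quietly broken.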
More Agile = more testing
Following the Treasury Board of Canada Secretariat’s (TBS) guidance to create prototypes and develop iteratively in an Agile manner (as per the Government of Canada Digital Standards: Playbook), we observe that many Government of Canada development teams are experimenting with or embracing Agile methodologies. While there are many potential benefits of adopting Agile, including reducing project risk and increasing the likelihood of meeting project objectives, there are also risks that must be managed. Maintaining the needed testing coverage is one of those risks.
A fundamental feature of Agile methodologies is that development is conducted in a series of sprints, after each of which the expectation is that there is a working version of the code. In addition, if warranted by the business value accrued, changes in requirements can be made even late in the development process. Both these practices translate into a need for more frequent testing. Testers have to run through their suite of regression tests more often to make sure that nothing that was previously working has been broken. While a traditional (“waterfall”) development process probably involved fewer than a handful of regression testing cycles between major builds, in Agile there could be dozens of regression cycles to match the increased number of builds. Ideally, a full suite of regression tests (automated or not) should be conducted after each build.
“Clean” handovers maintain hard-won trust …
The first rule of managing bugs is to produce fewer of them. This can be challenging for developers when they are asked to provide frequent builds and when requirements change frequently, perhaps late in the game. Developers are responsible for testing their code prior to submitting their work to the common repository as part of the next version. The automated testing team can help developers by running a suite of automated tests as part of the process of integrating new builds into the code base. The sooner developers can be informed of (and fix) bugs in their builds, the faster the final integration can be completed.
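As a rough illustration of such an integration gate, the Python sketch below runs a regression suite and rejects the build if any test fails. The test functions are hypothetical placeholders; in a real project, this role is typically played by a CI pipeline invoking the automated testing tool:

```python
# Sketch of a pre-integration quality gate: run the regression suite
# before a build is accepted into the shared code base.
# The test functions here are hypothetical placeholders.

def test_login_flow():
    assert True  # placeholder for a real automated scenario

def test_form_submission():
    assert True  # placeholder for a real automated scenario

REGRESSION_SUITE = [test_login_flow, test_form_submission]

def gate_build(suite) -> bool:
    """Return True only if every regression test in the suite passes."""
    failures = []
    for test in suite:
        try:
            test()
        except AssertionError as exc:
            failures.append((test.__name__, exc))
    for name, exc in failures:
        print(f"FAILED: {name}: {exc}")
    return not failures

if __name__ == "__main__":
    print("Build accepted" if gate_build(REGRESSION_SUITE) else "Build rejected")
```

The key point is the fast feedback loop: the gate reports failures to the developer immediately, rather than letting bugs surface during UAT.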
Business owners’ trust is hard to earn but easy to lose. The Agile playbook places a high value on developing and maintaining trust between teams. Most development teams have learned the hard way that handing over buggy code to the business owner is a very fast way to lose trust. It is hard to “talk around” something that used to work and now doesn’t – especially if the development team has not flagged it in the release notes. Business owners can quickly become suspicious and adversarial if they feel it is their responsibility to regression test the application in its entirety after each build. For Agile to be a success, the development team must figure out how to reliably create many well-tested builds. Using automated testing to test more thoroughly and more frequently can be an important part of the answer.
Key challenges to creating and maintaining an automated testing regime
Keeping testing on pace with development has long been recognized as a key challenge of an Agile development methodology. In their first book, “Agile Testing: A Practical Guide for Testers and Agile Teams” (2009), Lisa Crispin and Janet Gregory dedicated two full chapters to advice on how to use automated testing to address the requirement for more frequent testing. Thirteen years ago, when the book was written, the advice was aspirational. We were strongly encouraged to automate as much as possible. The reality for early adopters of Agile was that, despite our enthusiasm, it was a struggle to find the right balance. The time savings and quality-improvement benefits had to supersede the costs (in time and resources) of creating, running, and maintaining automated test scripts. As described by Crispin and Gregory, the “hump of pain” (their term for the learning curve an Agile team faces in adopting automated testing) was steep, daunting, and a potential impediment to success. For those of us struggling to implement the Agile manifesto in the Government of Canada, it was evident that there were very real barriers to the optimistic ideal of frequent builds that were also rigorously tested:
Creating automated testing scripts, using the tools available at the time (tools such as Selenium), required significant time and a high level of technical development expertise. Maintaining and changing scripts also added to the level of effort.
Since almost all development was done on-premises in the Government of Canada and behind firewalls, it was harder to procure new tools and have them installed. Getting authorization and satisfying procurement could be a lengthy process.
In “pre-cloud” days, “spinning up” and maintaining the required development, integration, testing, and pre-production environments, as well as creating and maintaining the testing data needed for automated testing, could be a time-consuming task that required planning and many approval steps.
Our early experience …
Jumping Elephants has worked on several projects where automated testing was an important part of the development process. In an early experience (circa 2010), we were motivated to invest heavily in automated testing because we were working on a public-facing case management system that would periodically experience very high use in a very short period. For this project and service, it was very important to ensure that bugs did not hamper “opening day.” We developed automated testing scripts to regression test the end-to-end user flows prior to the first major launch, and then found it useful to maintain them so that we could conduct rigorous regression tests after new code versions were released and prior to each periodic surge in applications.
These scripts were built in Selenium and required a significant effort to create and maintain. In this specific case, the effort was very much worth it, as the regression tests “saved our collective butts” many times over. It is easy to see, however, that under similar constraints, the level of effort and expense might not have been justified for other (less mission-critical) projects.
10 years later, a lot has changed …
“We have concluded that enough has changed that it now takes considerably less effort and ingenuity to create and maintain an automated testing process. We would now recommend that most Government of Canada Agile projects could benefit from some level of test automation.”
Jump forward 10 years and much has changed. In 2020, Jumping Elephants was again working on a Government of Canada public-facing case management system. This time, we were developing “on the cloud” and our client was keen to try Agile. It was evident that in a decade, the working environment (the tools available and our skills) had become more hospitable to automated testing. The “hump of pain” described by Crispin and Gregory in 2009 had been much diminished.
We have concluded that enough has changed that it now takes considerably less effort and ingenuity to create and maintain an automated testing process. We would now recommend that most Government of Canada Agile projects could benefit from some level of test automation. In support of our recommendation, we observe that the following changes have made it much easier to implement and maintain an automated regression testing regime for Government of Canada Agile projects:
There is a better understanding of Agile methodologies in the Government of Canada and thus a greater ability to modify roles and processes successfully to accommodate the creation and maintenance of an automated testing process.
More Government of Canada development is being done on cloud environments and thus, it is easier for us to work with our partners and clients to build the environments that we need. It is also easier for our Government of Canada clients to outsource some of their testing functions to an external contractor such as ourselves.
The tools for creating and managing automated test scripts have become much easier to use, and the total cost of ownership much lower. In ten years, there has been an evolution from more labour-intensive tools such as Selenium to SaaS no/low-code automated testing platforms with a significantly lower learning curve and cost.
The tools for (and our collective experience in) managing requirements (user stories), and testing requirements (test scripts), have evolved to a high level of sophistication. The adoption of Agile development methodologies has encouraged the evolution of some excellent project and requirement management tools such as Jira from Atlassian and Azure DevOps from Microsoft.
After a formal options analysis process, we chose Subject7 as a low-code tool for developing automated test scripts. Our main goal was to find a testing tool that we could use in the context of an Agile development process for the following:
Unit testing and as part of a continuous integration development process
Automated functionality testing for specific repetitive testing scenarios
Performance testing to optimize system performance under the expected load
Lessons learned and recommendations
While we are strong proponents of automated testing, we will temper our enthusiasm with the recognition that each team should conduct a cost-benefit analysis that is specific to their team and the project. We would also recommend that if you decide to proceed, your team should experiment with tools and roles to find the fit that works for everyone (including the testers and the business owners).
The following are some specific practical considerations and recommendations that derive from our experience:
1. Use a no-code tool. We would advise using a tool such as Subject7. It has been our experience that no/low-code tools such as Subject7 have an easier learning curve than the previous generation of tools that we have used. In general, our experience has been that no/low-code automated testing tools make it less time-consuming and hence less costly to create and maintain the testing scripts.
2. Organize your team so that the testing function is appropriately resourced. Deciding who is responsible for building and maintaining scripts was (and sometimes still is) tricky. Should it be the responsibility of tech-savvy testers, or is automated testing script writing a better fit for a development team member who “knows” testing? The ease of use of automated testing tools has improved dramatically, but creating, organizing, and maintaining testing scripts still borrows heavily from the coding principles of abstraction, componentization, and version control. After some trial and error, we have found that it works best if the creation, maintenance, and running of automated test scripts is assigned to a resource or a team with technical skills. In Agile parlance, coders, business analysts, and testers are all part of the development team; on our projects, the development team members responsible for the automated test scripts are referred to as “automated testing specialists,” and they straddle the space between the developers and the testers. The graphic below illustrates our process and division of labour:
3. Set up your infrastructure, including an isolated test environment with an appropriate data set for automated regression testing.
4. Don’t forget testing data. We would recommend thinking through the requirements for the creation and maintenance of test data as part of the testing plan. Make plans and assign responsibilities for maintaining the data between testing runs (if necessary).
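As a simple sketch of this point, the Python snippet below resets an in-memory stand-in for a test database to a known baseline between runs. The data set and helper names are hypothetical; in practice the same idea applies to seeding a real database in the isolated test environment:

```python
# Sketch of test-data management: seed a known data set before each
# automated regression run and restore it afterwards. The in-memory
# dict is a hypothetical stand-in for a real test database.

KNOWN_GOOD_DATA = {"applicants": [{"id": 1, "name": "Test User"}]}

def seed_test_data(db: dict) -> None:
    """Reset the test environment to a known, documented baseline."""
    db.clear()
    # Deep-copy the rows so test runs cannot mutate the baseline itself.
    db.update({k: [dict(row) for row in v] for k, v in KNOWN_GOOD_DATA.items()})

db = {}
seed_test_data(db)
db["applicants"].append({"id": 2, "name": "Mutated By Test"})  # a test run dirties the data
seed_test_data(db)  # restore the baseline before the next run
assert db["applicants"] == KNOWN_GOOD_DATA["applicants"]
```

Without a reset step like this, scripted scenarios that depend on a particular starting state will fail intermittently between runs.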
5. Do not try to automate everything.
Don’t automate too early. We would recommend waiting until there is some stability in the code (perhaps after the minimum viable product has been completed).
Deciding to automate is a trade-off between the recurring effort of testing manually and the effort to build and maintain the automated script. Base your decision to automate a testing scenario on the following:
How stable is the requirement? Ideally, the requirements should be unlikely to change.
How often will this need to be tested in the future and how much time does it take to test this use case? If it is anticipated that the functionality or the scenario will be tested often, it is a good candidate for automation.
How likely is it that future development will affect this functionality? If it is possible or likely that future development could “break” some functionality or a user flow, it is a good candidate for automation.
Level of effort to automate. Functionality or scenarios whose test scripts can be created relatively easily should be prioritized.
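One way to make this trade-off explicit is a simple scoring exercise. The sketch below, with entirely hypothetical weights, rates each candidate on the four criteria above; stable, frequently tested, easily scripted scenarios score highest:

```python
# A rough sketch of the automation trade-off as a scoring helper.
# The ratings and weights are hypothetical; the point is that stable,
# frequently run, high-breakage-risk, easy-to-script scenarios win.

def automation_score(stability: int, run_frequency: int,
                     breakage_risk: int, scripting_effort: int) -> int:
    """Each input is a 1-5 rating; a higher score means a better
    automation candidate. scripting_effort counts against the score
    (5 = very costly to script and maintain)."""
    return stability + run_frequency + breakage_risk - scripting_effort

# A stable, frequently tested happy path that is easy to script:
print(automation_score(stability=5, run_frequency=5, breakage_risk=4, scripting_effort=2))  # 12
# A volatile requirement that is costly to script:
print(automation_score(stability=1, run_frequency=2, breakage_risk=2, scripting_effort=5))  # 0
```

A formal score is rarely necessary, but walking each candidate through these four questions keeps the team from automating low-value scenarios.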
6. Provide your business owners with the results (and videos – if available) of the automated testing run during UAT or sprint demos. This way, they can gain confidence in the system and they can concentrate on exploratory testing.
The outcome of our recent project
On one recent project, we used automated testing scripts created in Subject7 extensively for a year after the minimum viable product iteration. In total, we estimate that we ran approximately 15,000 end-to-end automated regression scenarios and thousands of smaller “unit” regression scripts. The following are some of the benefits and outcomes that we observed during this project:
Improved confidence in code quality. We were able to support building and testing some very complex user interfaces (multi-page dynamic forms), in an Agile process. This required frequent builds and many rounds of testing to consistently validate the behaviour of thousands of functionality points in each form. We did not incur a “testing debt” and both the development and the business team had a high level of confidence in the quality of the code.
Faster tests for browser and platform compatibility. We were able to use the automated testing tools to emulate various browser and platform configurations to confirm that the application and content would render and work well in various browsers and on mobile or desktop.
More efficient use of our client’s time during UAT. We provided video recordings and execution reports of successful test scenarios for all UAT builds, allowing our clients to focus on exploratory and exception case testing.
Significantly reduced the time for input validation testing. Input validation on this project was an ideal candidate for automation. There were multiple forms with hundreds of fields that shared common or closely related validation rules. Automating the validation to quickly and frequently test the combinations saved both time and money.
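As an illustration of this kind of automation, the Python sketch below runs one shared (hypothetical) validation rule set across many field/value combinations: the sort of repetitive checking that is tedious by hand but nearly free once scripted:

```python
# Sketch of automated input validation testing: shared validation
# rules exercised across many field/value combinations.
# The rules and case data are hypothetical illustrations.
import re

RULES = {
    "postal_code": re.compile(r"[A-Za-z]\d[A-Za-z] ?\d[A-Za-z]\d"),
    "phone": re.compile(r"\d{3}-\d{3}-\d{4}"),
}

def validate_field(field: str, value: str) -> bool:
    """Return True if the value satisfies the field's shared rule."""
    return bool(RULES[field].fullmatch(value))

# Each tuple is (field, input value, expected validation result).
CASES = [
    ("postal_code", "K1A 0B1", True),
    ("postal_code", "12345", False),
    ("phone", "613-555-0100", True),
    ("phone", "6135550100", False),
]

for field, value, expected in CASES:
    assert validate_field(field, value) is expected, (field, value)
print(f"{len(CASES)} validation cases passed")
```

With forms containing hundreds of fields that share closely related rules, the case table simply grows; the script's run time stays negligible compared to a manual pass.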
In conclusion, for the many reasons stated, and being conscious of the caveats described, we heartily recommend the use of automated testing tools for Government of Canada Agile projects. Automated software testing is an effective way to reduce the accumulation of “testing debt” between Agile sprints.
We invite you to send us an email to discuss your planned or current project to explore how we can help with your automated testing process.