Testing times with Agile projects
Mark Firth looks at building in quality with Agile methodology-based development
Organisations are increasingly adopting Agile methodology in application development. The aim is to allow delivery teams to adapt to constantly changing requirements and get good results in weeks rather than months or years.
Using Agile can help deliver changes to users in small chunks at regular intervals, offering more time for user feedback and therefore refinement.
Delivery of IT projects is often delayed and critical requirements go unfulfilled. Problems found late in the testing process often cannot be addressed.
This can not only cause problems when managing communication with key user communities, but also lead to cost overruns and schedule slippage.
The longer it takes to deliver real code to production and value to users, the more difficult it is to manage expectations.
With Agile, issues should be uncovered more quickly – usually within a few iterations – and quality checks are built into the development and build process earlier on.
However, in Agile projects you need to make the right investments in project time, costs and effort to create a framework for delivery success.
Also, a significant number of the Agile team need to be testing specialists, a factor which is often overlooked.
Ensure that the new features in each iteration function as required and that existing features continue to function as expected. To facilitate this, focus on automating the tests for new features in each iteration.
Maintaining an automated regression test pack, integrated into the software build process, will ensure the software continues to work as expected. If this is not done, regression testing will soon grow beyond what the Agile team can accomplish manually.
This will either mean more testers are needed, or standards will need to be lowered. That would lead to the development itself regressing towards the traditional waterfall model, with the time between releases increasing.
These issues will become continually more problematic unless an investment is made to pay back the technical debt by automating the regression pack, so feedback on the software's quality can be provided within each iteration.
Pilot and implement test tools and frameworks prior to feature development, and ensure features and requirements (both functional and non-functional) have agreed acceptance criteria before the code is developed.
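An automated regression pack of this kind can be sketched in miniature. The example below is a minimal illustration in Python, assuming a hypothetical discount-calculation feature (not from the article); in practice the tests would live in a suite run automatically by the build server, with tests from earlier iterations pinning down existing behaviour while each new iteration adds its own:

```python
# Minimal sketch of an automated regression pack.
# apply_discount is a hypothetical feature under test, used only
# for illustration; real tests would target the team's own code.

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical feature: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_existing_behaviour_unchanged():
    # Regression tests: pin down behaviour delivered in earlier iterations
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(19.99, 0) == 19.99

def test_new_feature_this_iteration():
    # Test added alongside the feature in the current iteration
    assert apply_discount(50.0, 100) == 0.0

if __name__ == "__main__":
    test_existing_behaviour_unchanged()
    test_new_feature_this_iteration()
    print("regression pack passed")
```

Because the whole pack runs on every build, a change that breaks earlier behaviour is flagged within the iteration rather than at the end of the project.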
Performance, resilience and robustness must also be addressed throughout. If these aspects are only checked in the "finished" product, rather than being defined in the acceptance criteria and built into each design iteration, it may be too late to do anything about them.
Rectifying performance bottlenecks then becomes a long, costly, resource-intensive process that ultimately delivers a system that is not fully fit for purpose.
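Building performance into the acceptance criteria can be as simple as an automated check that runs each iteration. A minimal sketch, assuming a hypothetical 200 ms latency budget for a search operation (the budget and the operation are illustrative, not from the article):

```python
# Sketch of a performance acceptance check run as part of each
# iteration's test pack. The budget is an assumed non-functional
# requirement; real criteria come from the project's stakeholders.
import time

RESPONSE_BUDGET_SECONDS = 0.2  # assumed acceptance criterion

def search(records, term):
    """Hypothetical operation whose latency is being checked."""
    return [r for r in records if term in r]

def test_search_meets_budget():
    records = [f"record-{i}" for i in range(100_000)]
    start = time.perf_counter()
    result = search(records, "record-99999")
    elapsed = time.perf_counter() - start
    assert result == ["record-99999"]          # functional check
    assert elapsed < RESPONSE_BUDGET_SECONDS, (  # non-functional check
        f"search took {elapsed:.3f}s, budget is {RESPONSE_BUDGET_SECONDS}s")

if __name__ == "__main__":
    test_search_meets_budget()
    print("performance check passed")
```

Failing this check in iteration three is a design conversation; failing it in the "finished" product is a rescue project.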
However software is tested, there is a limit to what can be covered, within a finite budget and fixed time. So it is important to be able to assess quality at every stage, as well as have visibility of the overall risk being introduced into the process.
There must be a well-defined approach to quality with a sensible balance between developers and quality assurance testers. Ensure sufficient budget to build quality into the software from the start.
Measure and track different aspects of quality to identify potential improvements in the development process and inform investment decisions.
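One way to make such tracking concrete is to record simple per-iteration figures and flag when they slip. The sketch below is illustrative only; the metric names and the 95% pass-rate threshold are assumptions, not a prescribed standard:

```python
# Illustrative tracker for simple per-iteration quality metrics.
# Metric names and threshold are assumptions for the sketch.
from dataclasses import dataclass

@dataclass
class IterationQuality:
    iteration: int
    tests_passed: int
    tests_failed: int
    defects_found: int

    @property
    def pass_rate(self) -> float:
        total = self.tests_passed + self.tests_failed
        return self.tests_passed / total if total else 0.0

def flag_regressions(history, min_pass_rate=0.95):
    """Return the iterations whose pass rate fell below the threshold."""
    return [q.iteration for q in history if q.pass_rate < min_pass_rate]

history = [
    IterationQuality(1, 48, 2, 3),    # pass rate 0.96 - within threshold
    IterationQuality(2, 45, 10, 8),   # pass rate ~0.82 - flagged
]
print(flag_regressions(history))  # → [2]
```

A falling pass rate or a rising defect count, visible iteration by iteration, gives early grounds for deciding where the next testing investment should go.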
Mark Firth is head of testing services at Endava