Agile Testing
22.8.2019
Author: Andrew Kelly, Software Quality Coach @ Codemate
When it comes to unknowns, most people are fairly cautious, often focusing on staying on a very safe happy path. Not our testers, though: they openly embrace the unknowns and encourage the whole team to do so on our journey to make awesome software applications with our customers.
At Codemate we take a holistic, whole-team-owned view of testing, while drawing on a small number of professional testers to support the teams across the company.
Having the whole team able to test adds a huge amount of value, but there is one area that even Agile teams can occasionally treat as secondary, or fail to factor into their development approach at all: the area of “unknown risks”. This includes both the things the team knows it does not know and the things it has simply not yet considered. It is this sub-area of our testing coverage that we will focus on in this blog.
All of our testers are specialists in taking an exploratory approach to testing and love technical tools to assist them in their investigations. Individually they also carry their own additional sub-specialisations, so as a small group we also have security, automation, coaching and customer support covered.
This desire for awesome applications, combined with the use of professional testers, has given our flagship Agile testing model a strategic bias towards exploration, discovery, investigation and the consideration of currently unknown risks in general.
This article uses a couple of common testing analogies and popular testing diagrams to convey our thinking around this ‘unknowns’ aspect of our model.
Older testing models tended to revolve around verification against specifications, with a lot of test cases and automation used to achieve that goal.
Software development and testing have evolved considerably since then and we now look at testing very differently than we did a few decades ago.
In very simple terms, if we now consider testing as “the discovery and investigation of risk”, or even more simply as “asking great questions in relation to risk”, we automatically free and empower our testing model to be something much more than verification.
Consider the question, “Can you test at the design sprint or inception phase of a new project?”
Absolutely. By asking questions and collaborating early with the team and the customer, we help guide the application forward. This early discovery and discussion of risk is generally referred to as “testing ideas” or “testing business value concepts”; understanding and delivering business value is a big focus for us.
We do this as a team and luckily we have a local analogy which hopefully everyone can relate to well.
The rally car co-driver is a good analogy for how our testers pair with the developers.
Working closely together, they look ahead, spot obstacles and risks they want to avoid, identify how severe a certain path is, and assist if there is a breakdown. By finding important things fast, they empower development to carry on at an optimal pace.
Removing the tester is a bit like taking the co-driver out of a rally: the driver can still compete, but the risks are higher, or worse, remain unknown, and both the pace and the chances of success are lower.
This brings us to our first diagram: Exploratory and Investigative Testing.
This model evolved from discussions on Dan Ashby’s and Del Dewar’s similar models. Both have great testing blogs that are worth checking out.
As humans we are generally humble enough to recognise our own fallibility. We know we can make mistakes or miss things, and we also know that there are always things we do not fully understand and even things we have just not thought of yet. This applies as much to software development as it does to life.
In software development there are a lot of questions to ask and decisions to make, and making those decisions highly informed is where testing comes in.
For example, one of those big decisions is ‘Can we release to production?’
We want to make that decision as informed as possible, because it can really impact the entire success model of the business. But it is not just those big decisions that are important; there is a continuous flow of product-related questions throughout an application’s lifecycle.
‘What harm could this feature do?’, ‘Will this add to or detract from customer enjoyment?’, ‘Can the application handle a third-party update?’ and so forth. Testing helps answer those questions and, in doing so, empowers development to stay on track and progress at pace.
As our diagram suggests, this model requires a curious and investigative mindset, which our second tester analogy covers well.
Testing, just like a detective’s investigation, always begins with questions. Our testers use these questions to build out a list of risks, ideas and even plain old hunches that they feel merit further investigation.
Every product has its own set of risks, but here is a fairly generic starting list that our testers have used productively on mobile apps; it should provide insight into what we mean here by a list of risks and ideas for investigation.
A testing session involves selecting some risks or ideas from that list and investigating them. Like a Sherlock Holmes case, that can often involve a lot of experiments and following clues to find the things that could be really harmful to the product or, worse, to the users of the application.
As we investigate, we discover new risks, which we add to our taxonomy of risks.
Once we have done our investigation, we as a team know a lot more, and that risk can move into our known-risk field, where it can be regularly covered with automation.
Overall, as our diagram shows, it is a very cyclic model of investigation, discovery and confirmation.
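To make that cycle a little more concrete, here is a minimal sketch of how such a risk taxonomy could be tracked. It is purely illustrative and not our actual tooling: the risk names, statuses and helper functions are hypothetical, but they show the flow from an unexplored risk, through an exploratory session, to the point where the confirmed behaviour is handed over to automation.

```python
# Purely illustrative sketch of the cycle described above (not Codemate tooling):
# risks start as unknowns, an exploratory session turns them into recorded
# findings, and well-understood behaviour is then guarded by automated checks.
# All risk names and helpers here are hypothetical examples.

from dataclasses import dataclass, field
from enum import Enum, auto


class Status(Enum):
    UNEXPLORED = auto()    # a known unknown: waiting for an exploratory session
    INVESTIGATED = auto()  # explored; findings recorded in the notes
    AUTOMATED = auto()     # confirmed behaviour now covered by a regression check


@dataclass
class Risk:
    description: str
    status: Status = Status.UNEXPLORED
    notes: list[str] = field(default_factory=list)


# A hypothetical starting taxonomy for a mobile app.
risk_taxonomy = [
    Risk("App loses network connectivity mid-request"),
    Risk("Device is rotated while a form is being filled in"),
    Risk("A third-party SDK update changes behaviour"),
]


def run_session(risk: Risk, findings: list[str]) -> None:
    """Record the outcome of an exploratory testing session on one risk."""
    risk.notes.extend(findings)
    risk.status = Status.INVESTIGATED


def promote_to_automation(risk: Risk) -> None:
    """Move a well-understood risk into the known field, covered by automation."""
    risk.status = Status.AUTOMATED


# One turn of the cycle: pick a risk, investigate it, then automate what was learned.
run_session(risk_taxonomy[0], ["App retries once, then shows a clear offline message"])
promote_to_automation(risk_taxonomy[0])
```

In practice the ‘automated’ step would of course be an actual check in the test suite and build pipeline; the sketch is only meant to show the movement of a risk from unknown to investigated to confirmed.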
Our next section takes this holistic and team-owned element a step further.
It was around 2003 that Brian Marick came up with the excellent four Agile testing quadrants model, a taxonomy of testing types that helped the whole team think about what testing they should consider covering. Over the years there have been multiple variations on it. For me the ones worth mentioning are Lisa Crispin and Janet Gregory’s, Elisabeth Hendrickson’s and, in particular, Gojko Adzic’s.
Different companies will often have different models: maybe completely different content, or the same things placed in different quadrants. For us, at this point in time, the following seems to be a fairly decent re-imagining of that original model, combining many of the other ideas with what works for us as we build great software applications.
As you can see, it is a very holistic view of testing types and ideas that we as a full project team may or may not decide to draw on in a project; as such, it still remains primarily a testing taxonomy model.
We have, though, additionally emphasised a few features of the model.
Whilst this article focuses on the importance of embracing the unknown side of things, the bigger picture and covering the known things are also very important. I will not go into the details here, but it is worth highlighting that we also have those known quadrants covered by the team as a whole, and good use of automation, continuous integration and Agile/DevOps principles is a big part of that.
The right-hand side, though, is where our testers excel with an exploratory testing approach, boldly embracing the unknowns on a daily basis.
A very special thanks to Janet Gregory, co-author of the wonderful Agile Testing book series, for great pointers on an early version of this article, and to Misma Silfver (awesome tester at Codemate) for helping me get it finished.
The following blogs and one essential book are great reading for those who want to find out more about some of the ideas discussed here.
https://danashby.co.uk/2016/03/08/information-and-its-relationship-with-testing-and-checking/
https://findingdeefex.com/2016/05/20/the-testing-checking-synergy
http://www.exampler.com/old-blog/2003/08/22/#agile-testing-project-2
https://lisacrispin.com/2011/11/08/using-the-agile-testing-quadrants/
https://gojko.net/2013/10/21/lets-break-the-agile-testing-quadrants/
https://www.amazon.com/Perfect-Software-Other-Illusions-Testing/dp/0932633692