So I’m writing this blog knowing it’s a topic a few people have written or talked about recently, but it is, I think, one of the more important and potentially useful topics for testers. It’s one of those things most testers don’t know much about but really should.
When I think about testability it normally falls into a few areas:
- Requirements / oracles
- Systems
- Processes
The last one is more about the degree to which the processes we follow enable us to test efficiently. It’s maybe not testability in the true sense, but it is an aspect of what we do that enables testing, so why not.
Here’s a summary of each.
Testability with respect to requirements (or any oracle) is essentially about how “good” the oracle is. Normally you’re looking at the oracle to assess its:
- Completeness
- Clarity / ambiguity
- Openness to interpretation
All oracles, even documented specifications, are going to be incomplete, ambiguous and open to interpretation to some degree.
There are some good exercises on this out there, like the exercises on interpretation on the 3 Hour Tester site. For many, like me, recognising the importance of this was a product of facepalm moments, when you realised that an obvious requirement wasn’t so obvious because you’d made an assumption.
When it comes to requirements and other oracles, your tactics for exposing a lack of testability might include:
- Ask questions – ones that challenge, confirm, explore, etc. Asking open questions can be a great way of discovering that someone means something completely different from what they wrote down
- Analyse – break things down, look at what it might impact, what might impact it, what could go wrong, and what the risks and threats are. In other words, test it!
- Use personas – as part of the analysis, try to interpret things from a different person’s point of view. That could be different users or different people in the team
The testability of systems is arguably more important these days, with automation and other crazy things going on out there. I still think testability of requirements and other oracles is just as important, but there is more focus on system testability because it can make automating systems much easier. And we’re not really talking about web automation with WebDriver, although that is important (making sure your devs build things to include good identifiers, etc.) – we’re talking much more white box than that.
Here’s a general set of ideas for looking at system testability:
- Controllability – the ability to actually control the part of the system you are testing, without external factors impacting things
- Observability – the ability to observe results
- Isolatability – is the part of the system you are testing isolated from other parts, such that you can control and observe just that thing
- Separation of concerns – is the part of the system separated from other parts of the system, and does it have a single responsibility
- Automatability – how easy is it to use other software to control and observe things
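As a minimal sketch of the first two ideas (all names here are hypothetical, not from any particular project): a component that accepts its dependencies as parameters is controllable, because a test can pin down external factors like the clock, and observable, because it returns its result rather than hiding it in a side effect.

```python
from datetime import datetime, timezone

def make_greeting(name, now=None):
    """Build a time-of-day greeting.

    Controllability: `now` can be injected, so a test isn't at the
    mercy of the real clock (an external factor we can't control).
    Observability: the result is returned rather than printed, so a
    test can inspect it directly.
    """
    now = now or datetime.now(timezone.utc)
    period = "morning" if now.hour < 12 else "afternoon"
    return f"Good {period}, {name}"

# A test can pin the clock and observe the exact output:
fixed = datetime(2024, 1, 1, 9, 0, tzinfo=timezone.utc)
assert make_greeting("Sam", now=fixed) == "Good morning, Sam"
```

The same seam works for feeds, queues, random seeds – anything a test would otherwise have no handle on.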
It’s not uncommon to come up against a part of a system where it’s not actually possible to control it, observe the outcome of something, and isolate that control and observation from other factors. A simple example is a system that takes multiple feeds of data from external systems (like most) and then spits out a single unified view of that data – is the system set up to allow you to test specific feeds and understand their impact on that unified view?
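The feeds example can be sketched roughly like this (a toy aggregator with made-up names, not a real design): if the feeds are passed in rather than hard-wired, a test can supply one controlled fake feed at a time and observe its exact impact on the unified view.

```python
class UnifiedView:
    """Aggregates records from several external feeds into one view.

    Isolatability: feeds are plain callables passed in, so a test can
    run a single fake feed in isolation from the real external systems.
    """
    def __init__(self, feeds):
        self.feeds = feeds  # each feed is a callable returning (id, value) pairs

    def build(self):
        view = {}
        for feed in self.feeds:
            for record_id, value in feed():
                view[record_id] = value  # later feeds win on conflicts
        return view

# Control: fake feeds stand in for the external systems.
fake_prices = lambda: [("item-1", 10), ("item-2", 20)]
fake_stock = lambda: [("item-1", 99)]

# Observe one feed's contribution in isolation:
assert UnifiedView([fake_prices]).build() == {"item-1": 10, "item-2": 20}
# Then observe exactly how a second feed changes the unified view:
assert UnifiedView([fake_prices, fake_stock]).build() == {"item-1": 99, "item-2": 20}
```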
Automatability is, without going into the merits of a lot of the automation out there, much more common these days, and many testers are using things like Selenium to automate their testing of websites. The simplest way of looking at testability here is to work with your devs to get good identifiers sorted. But automating a bunch of black box web tests isn’t always a great idea unless you are moving down the stack at the same time – that’s where real system testability comes in. Here’s a bunch of things you could be thinking about:
- System architecture – this is a bit of an all-encompassing point, but work with the relevant people to look at the system architecture and the patterns being used, to change the way devs and others think about the testability (and supportability) of a system. A bit of a shift is needed here.
- Logging – something simple like good log output from a system can go a long way to helping you observe, or diagnose other observations from, a system. Logs are also a valid thing to assert on, though be mindful of when you are testing the logs themselves and when you’re not (if that makes sense)
- Services – microservices and the general servicification of a system are a great way of isolating a part of a system so you can control and observe just that part. With automation, it’s much better to interact with lower-level parts of the system than with the GUI, as it’s less flaky, less brittle, quicker, and fairly often more isolated (how many times have you tried to test an internal business rule through the GUI?)
- Technology – some off-the-shelf products and commercial tech are just hard to test because they don’t let you look under the covers. You can improve things by using open source and anything that lets you get under the sheets with it, but you can also make your life easier by keeping things consistent between systems and parts of systems. It’s much easier to work out how to automate one technology than several
- Feature adding – I couldn’t think of a real name for this, but sometimes it’s really useful to add something to a system just for the sake of improving its testability. For example, including hidden fields on a page that can be used in automation but aren’t seen by the average user, like a specific bit of data. That’s probably a bad example though.
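To pick up the logging point above: log output can be an assertion point in its own right. A minimal sketch using Python’s standard logging module (the `transfer` function and its messages are invented for illustration):

```python
import logging

def transfer(amount, log=logging.getLogger("payments")):
    """Hypothetical operation whose side effects are hard to observe
    directly; a log line gives a test something concrete to check."""
    if amount <= 0:
        log.warning("rejected transfer: amount=%s", amount)
        return False
    log.info("accepted transfer: amount=%s", amount)
    return True

# Capture log records so a test can observe what happened:
records = []
handler = logging.Handler()
handler.emit = records.append  # route every record into our list
logger = logging.getLogger("payments")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

transfer(-5)
assert any("rejected" in r.getMessage() for r in records)
```

In a real test suite you’d more likely use pytest’s `caplog` or `unittest.TestCase.assertLogs` for the capture, but the idea is the same: the log output is the observable.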
I’m not sure this is something covered much by things I’ve seen on testability – maybe I’m wrong. Increasingly I think some of the frameworks, methodologies, and processes we follow need thinking about more in terms of testability.
Take Scrum – there are a bunch of events, roles, and artifacts, but how many are built with testing in mind? You could argue that they all are, since we are part of the team, but if you go through the literature there’s much less talk of testing than there is of acceptance criteria (which isn’t purely a testing thing), design, and development. There certainly isn’t anything about testability.
I don’t have a well-formed view of this part of the post, but here are a few random thoughts:
- If you’re using Scrum, are you breaking stories down, adding lots of detail to your stories and generally making the requirements more testable? Does this detail include how you’re going to test things and make it testable?
- If you’re going down the DevOps route, are you focusing too much on system testability (quality engineering) and forgetting about requirements / oracle testability?
- If you’re in a waterfall world, are you doing the reverse, and forgetting to talk to your architects and devs about system testability?
OK so these ideas are a bit sketchy, but the idea is that a lot of the processes we follow can lead us to forget about testability, even if we recognise it’s a thing.
Some useful links
To finish off, here are some useful links: