This is the first of a couple of “The core of X” articles that give you *my* ideas on what I think of X. Take from it what you will; love it or take offence. In any case, I would love to hear what goes around in your head and what you feel while reading. Feel free to correct me or engage in discussion. We’re here to learn.
First things first: Language. For this blog post, when I say Automation I mean a set of anti-regression checks. Whether they’re on the Unit, Integration, API, UI or any other level, they’re all the same here. There’s usefulness in dividing them by purpose, use or whatever else, but not here. I use ‘checks’ where you may expect ‘tests’. That’s fine. The important thing is the point; all else is for a different time.
Behold, das Point:
The core purpose of Automation is NOT about
finding bugs, saving time, getting to market faster or bringing value faster.
It’s about
Stability and Reliability.
Now don’t get me wrong. The first list (finding bugs, saving time, faster delivery) is greatly helped by Stability and Reliability, but it is not and should not be the main focus.
Should your project not care about Stability and Reliability, and only about going to market faster or saving time, you might just as well not write any checks.
If you want to find as many bugs as possible, automation is a rather inefficient way of looking for them.
Your checks are about minimizing the risk that comes from working with multiple people on the same thing. Because… inevitably, someone will make an error. That’s human nature. Ideally, you want your Regression Harness (or Automation Check Suite) to warn you of that error. This way, when a team has a good set of checks, it’ll keep a personal “oopsie-doozie” from becoming a whole-team “WHO THE HELL BROKE OUR STUFF AGAIN?!”.
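To make that concrete, here’s a minimal sketch of what such an anti-regression check looks like (the function and its behaviour are hypothetical, invented for illustration): it pins down behaviour the team currently relies on, so that an accidental change by anyone trips the alarm in CI instead of in production.

```python
# Hypothetical production code: reduce a price by a percentage.
def apply_discount(price: float, percent: float) -> float:
    return round(price * (1 - percent / 100), 2)


def test_apply_discount_regression():
    # These assertions encode agreed-on current behaviour; they are not
    # hunting for new bugs. If a refactor changes any of them, the
    # harness warns the whole team before the "oopsie" ships.
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(80.0, 25) == 60.0
    assert apply_discount(50.0, 0) == 50.0


test_apply_discount_regression()
```

Note that the check says nothing about whether the discount logic is a *good* idea; it only guards the team against unintended change, which is exactly the Stability and Reliability point above.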
When you engage in any form of automated checking, know that your focus should be on Reliability and Stability. It will lead you down the good path. The one of continuous integration, saving time, having to do less firefighting and having more control as a team.
I’ve seen people talk about metrics for automation and mention: number of bugs found, time saved, number of deploys per day, number of checks run,…
These are rather unhelpful.
As a team member, when it comes to metrics, I’d love to weigh:
Time lost fighting instability + energy wasted on firefighting + motivation lost while investigating problems + friction between team members
VS.
Time invested in building a regression harness + problems still occurring after the effort + effect on team cohesion and sense of purpose.
Hard to do, right? Though it could be better worded, that’s the only metric that counts for me.
(Though I might be interested in the other metrics sometimes, those moments are rather circumstantial)
Notice, though, the difference in mindset between investigating problems as individuals and building a solution as a team.
But… where does Testing fit into that strategy? In my opinion: not necessarily anywhere.
While Testing can certainly help come up with interesting ideas on improving your check harness, it doesn’t have to be part of it. However:
The information that Testing provides from looking for unexpected, extreme and abusive behaviour can greatly support your automation.
I do believe that Testers should make the occasional step into this territory, just as much as I encourage anyone to make frequent and valuable role switches. Yet the main focus of the Tester role is on providing valuable information about things that might have been overlooked, misunderstood, or left unprotected.
Make no mistake: if your product is at all complex, there’ll be things missed, gotten wrong and overlooked.
Also… I’ve yet to meet a tester who didn’t think her project or business was simple.
TL;DR: Regression-checking automation is about Stability and Reliability. When done well and consciously, the result can evolve into: fewer bugs, time saved on firefighting and problem-solving, going to market faster and adding value faster. A Tester isn’t necessarily an agent in this process, but the information she provides can be invaluable.
We all need stability. Sometimes more than other times.