A Test Group’s Declaration of Intent

Experience Reports, Software Testing

I’ve been working as a test consultant at my current client for a year now, and I constantly wonder what problem I can help fix next.

The IT department has grown immensely since last year, not just in numbers but in maturity as well. I’m very proud of how much progress this team has made.

Yet the laws of consulting state that there is always a problem, and that it is quite certain to be a people problem. And so we identified our current number one problem: “Many people still don’t know what to expect of us, testers, or don’t know what we actually do.”

We’ve had managers ask us to do gatekeeping and exhaustive testing, and programmers ask us to test systems that don’t have an interface, or to do implementation-level testing.

We decided to create some kind of manifesto: a clear set of rules and statements that best describe our core business. This is what came of it. It is a first version and hasn’t been put up yet, but we feel we’re getting close.

The testers felt the need for a concise, to-the-point document which we’d print in large format and raise on our wall as a flag. They want to be understood, and to grow from hearing “just test this” to “I need information about feature X regarding Y and Z”.

I, as a consultant, wanted to unite testers, programmers, analysts and managers under a common understanding of what testers do, don’t do and can’t do.
Knowing that my days at the client are numbered, I want to leave behind tools for the testers to fight their own battles, when the time comes.
I have seen us become a team that supports the IT department in diverse and effective ways. Yet recently, powers that wish to fit us back into a process box have been at play as well. I’d hate to see the team become reduced to a checkbox on a definition of done again.

 

Isn’t that your job?

Experience Reports, Software Testing

Last week, on the TestersIO Slack, someone asked whether other testers consider it OK to be called out on the quality of their bug reports by a tester more junior than them.

This is, of course, completely dependent on both parties and the way they choose to handle the situation.

  • Are you lecturing the person, or are you requesting more detail?
  • Does the recipient choose the emotional response, or a more reasoned one?

Another participant in the discussion suggested inviting the developer who’d handle the bugs to explain why the reports were insufficient.

This led me to develop a visualization of something that had been growing in my mind since reading Gerald Weinberg’s “Perfect Software, and other illusions about testing”.


An explainability heuristic: the Responsibility Meter

I’m not sure anyone ever told me this specifically, but everyone seems to be in agreement: Testers search, coders fix.

Looking more closely, nothing is ever that easy.

Why wouldn’t both roles be able to do both activities?
Another misconception is that these responsibilities cover only two activities, when in fact they imply many more.

The road from “Hey, this looks strange…” to “This is an issue” to “The issue is fixed and nothing else seems to be broken” is often long and complex but always context-dependent.

The responsibility meter is a tool to support discussions.
If you find yourself dealing with:

  • Over-the-wall development
  • Ignorant co-workers
  • Unhelpful bug reports

This may be a good step towards a solution:

Responsibility meter

  1. The first scale visualizes the road from discovery to identification of the issue. This is where most of the discussion takes place.
  2. The second scale depicts what happens after identification. Activities that belong on this scale, though not shown, include debugging, refactoring, adding checks, trial and error, troubleshooting, further testing…

If a tester thinks his job of describing a bug ends at discovery,
while a coder expects the bug to be completely pinpointed, isolated, generalized, maximized,… and documented in enough detail before he starts fixing,
then nothing ever gets fixed, at least not efficiently.

Most of the time, there is no need to explicitly set the marker.
Awareness of the scale is usually more than enough.

There are situations, however, where you need to have the talk about responsibility. Where it starts and where it ends.

It is not unusual for developers to expect more detail than the testers are willing or able to give. Miscommunication leads to tension.
Tension leads to many more, and worse, problems.

It might be necessary to reset and adjust the meter a couple of times during the project, or make exceptions for certain special cases.
You should note that the scales are not set in stone. The activities may switch places or be skipped completely. Use it in your context to the advantage of the team.


This meter uses Cem Kaner’s Bug Advocacy heuristic RIMGEA (Bug Advocacy, lecture six) and Gerald Weinberg’s Discovery, pinpointing, locating, determining significance, repairing, troubleshooting and testing to learn. (Perfect Software, pg.33-36)
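As a rough illustration (not part of the original meter), the two scales could be modelled as ordered lists with a movable marker. The activity names below are Weinberg’s; the function name and the chosen split point are illustrative assumptions, since the marker is whatever the team agrees on:

```python
# A sketch of the Responsibility Meter as data: two ordered scales of
# activities (names taken from Weinberg), and a movable marker that
# splits each scale into reporter-side and fixer-side responsibilities.
# The function name and the example marker are illustrative assumptions.

SCALE_BEFORE_FIX = ["discovery", "pinpointing", "locating", "determining significance"]
SCALE_AFTER_FIX = ["repairing", "troubleshooting", "testing to learn"]

def split_at_marker(scale, marker):
    """Activities up to and including the marker belong to the reporter
    (often the tester); the rest belong to the fixer (often the coder)."""
    i = scale.index(marker)
    return scale[:i + 1], scale[i + 1:]

# Example: the team agrees the tester's report ends at pinpointing.
tester_side, coder_side = split_at_marker(SCALE_BEFORE_FIX, "pinpointing")
print(tester_side)  # ['discovery', 'pinpointing']
print(coder_side)   # ['locating', 'determining significance']
```

The point of the sketch is the conversation, not the code: moving the marker one step left or right makes explicit who owns which activity.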

The Stickering

Experience Reports, Software Testing

‘The Stickering’ is a loosely managed, session-based reporting-and-guiding tool for your (user acceptance) testing. Feel free to adapt and use it for your own projects. The explanation given here is how it was integrated in one particular context.

Context: Exactly one year ago, a few other consultants and I were hired to help transform the existing process of developing and deploying directly into production and hoping nothing blows up into a more closely managed process with more safeguards.
In addition, the complete software application then in place would be recreated in a more modern technology. This meant: new process, new technology, new infrastructure, new people, and old business processes with new changes. All this under tight, fixed deadlines.

Fast forward and zoom in on testing: The first release was a disaster. Testability was an afterthought, so that testing in earnest could only start when it was too late. Bugs ramped up, deadlines were reset twice and the bi-weekly fix-releases were taking up most of the test team’s time.

While we were busy fighting old battles, the team of coders was almost doubled and two new projects were started.

Today, we’re two months from the deadline, one more rigid than the last, and we can finally start testing in earnest. The functionality is huge, complex and still changing.
We do not have the necessary knowledge to do a good enough job.

Enter ‘The Stickering’.

Wizard mindmap

Because the functionality is so incredibly complex and important to the business, we’ve been able to get a lot of support. Many people are interested and motivated to help us find important information.
Suddenly, there are 20 other people testing the application, and without an adjusted process we’re sure to lose track of what we’ve been doing and what is still to be done.

To manage our coverage, we created mindmaps (www.mindmup.com). Together with the functional people and their documents, we laid out a few modules and gathered a bunch of functionalities, together with interesting test ideas, in various mindmaps.

We pinned them to the wall. Next, we ordered three kinds of stickers, each in a different colour: one colour for testers, one for business people, and one for people in IT-related roles closely connected to the business, such as analysts and helpdesk.

They visit us in blocks of half days. Each session, they choose a few connected nodes on the mindmaps and start exploring. We ask them to focus on their unique point of view. Helpdesk professionals know where users will have questions and where quick solutions should be implemented. Finance will have another focus.

During their sessions, they test the nodes from their point of view and note their findings in the following test charter (click through for an example).

After their session, we debrief and ask them all sorts of questions. (I will elaborate on this in a later post)

The final step is adding the coloured stickers to the mindmaps once the user has the feeling she has tested the node sufficiently. This way, anyone can pass the wall and see our progress. The goal is to get all nodes stickered by each colour.

Together with our bug wall, our results and progress become highly visible.
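For teams without a physical wall, the bookkeeping behind the stickers is simple enough to sketch in a few lines. The node names and group labels below are invented for illustration:

```python
# A sketch of the sticker wall's bookkeeping: each mindmap node is done
# once every sticker colour (one per group) has been placed on it.
# Node names and colour labels are illustrative assumptions.

COLOURS = {"tester", "business", "analyst/helpdesk"}

stickers = {}  # node name -> set of colours placed on it

def add_sticker(node, colour):
    """Record that a group signed off on a node after a session."""
    assert colour in COLOURS
    stickers.setdefault(node, set()).add(colour)

def progress(nodes):
    """Return (fully stickered nodes, total nodes)."""
    done = sum(1 for n in nodes if stickers.get(n, set()) == COLOURS)
    return done, len(nodes)

nodes = ["login", "invoicing", "reporting"]
for colour in COLOURS:
    add_sticker("login", colour)   # all three groups stickered 'login'
add_sticker("invoicing", "tester") # 'invoicing' has only one sticker
print(progress(nodes))  # (1, 3)
```

The wall does the same job with less ceremony, of course; the sketch only makes the rule explicit: a node counts as covered when every colour agrees it has been tested sufficiently.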

We’ve only just begun with this and I’ll be sure to report on how it worked for us in the end.

Katrina’s question of goodness

Software Testing

This afternoon, Katrina posed the question “What makes a good tester?” but phrased it in a cleverly activating way: “What makes you a great tester?”
She had been wrestling with this particular behemoth for some days and found it time to call in the reinforcements.

This added a lot of dust and confusion to the fight, but it turned up some interesting threads.

Some remarks that came out of the discussion:

  1. Testing is multidisciplinary; what are the attributes that make you a good tester?
  2. Who/what do we compare a good tester to? Non-testers? Bad testers?
  3. Core skills: “good” means finding and reporting important problems quickly and cheaply, and being able to explain and defend your actions;
  4. A test that measures this. An exam? A duel to the death? A multiple choice test?
  5. How good you are is directly connected to how valuable the information you supply is;
  6. If you’re in high demand and teams are fighting to get you on theirs;
  7. When you are considered an expert on the product;
  8. Customer satisfaction is a parameter;
  9. Peer assessment;
  10. You could be really bad at finding bugs/important information, but inspire everyone around you not to make them in the first place, or to give their fullest to find them for you;
  11. Being confident in your ability may be an indication;
  12. The Relative Rule was mentioned (http://www.developsense.com/blog/2010/09/done-the-relative-rule-and-the-unsettling-rule/).

There were other interesting remarks, but let’s keep it at this.

All of the above seem genuine criteria, and yes, it is extremely relative to context. To find a general ‘rule’, however, I’d like to go deeper and broader and try to generalize. Many of the above can be countered with a “what if”. (We’re good at finding alternatives that complicate things.)
Can we find common ground on what makes a good tester?

Context is important, but for this exercise we must take every possible context into account, which renders the parameter infinite, or null. It’s virtually impossible to factor in.

Skills are important too, but since we have an infinite number of contexts, almost any skill from a very large list has the potential to find that out-of-the-box bug nobody without it would have thought of, so we can’t count non-traditional skills as a factor.
There are a few skills that are always valuable, but they are not necessarily crucial to being considered ‘good’.

Person X is a blank slate. You may customize X to your own wishes. You must build person X to become a tester who could be an asset in any context. Use as few attributes as possible.

This is my person X:

  1. The person has a professional demeanour;
  2. The person has good written communication skills;
  3. The person is a power learner, meaning that (s)he can quickly learn new things and is interested in doing so;
  4. The person is aware of his/her own shortcomings;
  5. With the above attributes, the person will use every method, tool, colleague, hoodwink, trick, question, experiment, ploy,… in his or her power to find important information.

To me, these are the absolute attributes and skills that make a tester good, or at least good enough to serve in as many contexts as I can imagine.

What would your person X look like?

The Test-as-a-noun Alarm

Software Testing

Words are important.
I often practice careful and thoughtful wording. We’re in the business of gathering information and supplying it. Working in a business where most people have no deep understanding of what their colleagues do, we need to be able to clarify, even translate, information in a way that lets our recipient understand the message optimally.

This is no easy feat and requires much practice. That’s why I train myself in it and you should too. At some point this summer, the idea popped into my head that “Test should never be used as a noun”.

Since then I’ve had the pleasure of discussing this on Twitter and Skype with Chris (@Kinofrost, QualityExcellentness), Patrick Prill (@TestPappy), Sarah Teugels (@Sarah Teugels) and James (@rightsaidjames).

This is the conclusion I have come to (which is not necessarily what the others have found).

Shaping of ‘a test’

Consider these two definitions (from Context Driven Testing, James Bach):

“Testing is the process of evaluating a product by learning about it through exploration and experimentation, which includes: questioning, study, modeling, observation and inference, output checking, etc.”

“A test is an instance of testing.”

To me it felt that the second definition did not add any value, but rather supported confusion and abuse of the term.

  • ‘A test’ is a countable noun, which makes it dangerous for (non-)testers who like to attach meaning and value to numbers.
  • ‘A test’ is easily confused with ‘a check’. It’s much too easy to substitute the hard-to-comprehend concept with a simpler one. The question “Could this information impact value for anyone who matters?” is forgotten and replaced with “Can I push this button, and is the background blue?”. ‘A test’ involves human interaction, tacit knowledge, interpreting results,…
  • Calling your activity ‘a test’ implies that it is a thing. Things are tangible and generally mean the same thing to everyone. A dog means a dog, even though there are a few hundred different breeds, and you can further specify what kind of dog you’re talking about.
    ‘A test’, on the other hand, involves tacit knowledge, which can’t be expressed as easily.
    In addition, calling it a thing gives the illusion that it can be repeated. You don’t want other people to misunderstand it that way.

I have been convinced by the aforementioned people that there are indeed cases in which ‘a test’ can be valuable. It comes with a few caveats, though.

– It’s an activity; like ‘a run’ or ‘a play’ it isn’t specific about anything, but gives notion of a certain activity.
– It’s focused on finding important information (for a person who matters).
– It requires human intelligence (tacit and explicit knowledge).
– Its boundaries are irrelevant. Where it started and where it ended are not important to the concept.

We’ve come up with the following definition, trying to stay true to the nature of Context Driven Testing:
“A test is a human-controlled activity, any activity, that is focused on discovering information that is important to a person who matters”

Which is consistent with: “Quality is value to some person who matters.”

Conclusion

“When you use ‘test’ as a noun, find a better way to phrase it.”
And if you don’t, be mindful of what it represents.
This has its exceptions, of course, like every heuristic.

I’m open to further discussion, but Twitter didn’t take kindly to this blog post format.