The BBST Foundations course: Week 1

Experience Reports, Software Testing

The Meet and Greet forum has exploded.
100+ messages talking about all sorts of topics.
It’s an information flood about metal music, cars, toy rockets, nature, the outdoors and where everyone’s based.
I can’t wait for this to overflow into an abundance of testing knowledge.


The waiting game

And so I waited, pressing F5 on the discussion forum to catch the occasional test-centered post that could become a testing discussion. This took a while.
It took four days, I believe, before the first assignment was picked up in earnest.

There were a few interesting discussions, mainly on quiz questions that are designed to get you doubting, thinking, discussing and learning. Excellent!
Even Cem Kaner himself joined in.

Today, the last day of the first week, I’m feeling a bit disappointed. As a whole, I think the students have written enough to fill a small book. There’s a lot of good stuff, but also a lot of not so good stuff. There’s also input that is in complete contrast with what the course teaches (and what I as a tester believe in).

The forum format limits how effectively we and the instructors can enter into a discussion.
Answers are long, try to address multiple points at a time and don’t do a good job of getting the right sentiment across (be you angry, annoyed or patient).
Because of this, I noticed a tendency to drift between “what the course is trying to teach you” and “how can we best pass the course” instead of earnest, in-depth discussion.

I would love to set up a Slack for this course: a multi-channel, multi-person, immediate-feedback chat tool with possibilities for one-on-one chatting, group chatting and complete-class testing.

At the moment, I’m learning a lot from the course, the exercises and putting what I learn to the test. The online course format, however, isn’t adding much value yet.
I’m hoping that will change.

 

The BBST Foundations Course commences

Experience Reports, Software Testing

Two months ago, I asked my employer whether I could take the BBST Foundations course organised by Altom. My employer decided I could and gave me five days to work on it over the duration of the course.

Almost immediately I pre-ordered the coursebook, downloaded all the content and converted the videos to MP3s.
I eagerly started studying in advance and listened to the lectures during my daily commute.
Since that day, I might have been borderline obsessed with it.

So here I am, the day before the start.
There was an invitation email in my inbox, and I’ve entered the course website.

After some sense-making of the online platform, everything seems clear enough. One thing I miss is an option for one-on-one or multi-user communication channels.
There are a few channels and they are open to everyone all the time. I understand the need for transparency, but the ability to have a dialogue or group discussion without having to refresh constantly would be nice.

Most instructors and some other students have already posted in the Meet-and-greet. They introduce themselves and describe what they do for a living and in their free time. Participants quickly get to know each other and the atmosphere is very jovial.

There’s a lot of potential for this course, I’m curious how it develops.

The Lollipop Cycle

Software Testing


Lollipops. Sweet, colourful, sugar-infested Lollipops.

They can become a tool to encourage bug fixing within your team.
I’ll show you how.

A tweet got me thinking about using lollipops as bug reports.
I wasn’t at the presentation it referred to, but apparently it talked about using candy to motivate testers: whenever anyone found a bug, they got a lollipop.

I’d like to take it a bit further.

 

Instead of rewarding the bug-finders,
the bug-fixers would get their prize in the form of candy.


Consider the following flow:

  1. Someone finds a bug and reports it.
  2. The bug gets prioritized and added to the backlog/board/…
  3. For each bug, a lollipop is added to a basket.
    1. Critical bugs get red lollipops.
    2. Medium/minor/… bugs get yellow ones.
    3. Trivial bugs get green ones.
  4. A bug-fixer fixes a bug.
  5. The fixer gets a lollipop matching the bug’s priority.
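
To make the bookkeeping concrete, here is a minimal sketch of how the basket could be tracked alongside the bug board. The severity names and the colour mapping are my own assumptions for illustration, not part of any particular tool or process.

```python
# Hypothetical sketch: tracking the lollipop basket next to the bug board.
# Severity names and the colour mapping are illustrative assumptions.
SEVERITY_TO_COLOUR = {
    "critical": "red",
    "medium": "yellow",
    "minor": "yellow",
    "trivial": "green",
}

basket = {"red": 0, "yellow": 0, "green": 0}

def report_bug(severity: str) -> None:
    """A reported bug adds a matching lollipop to the basket."""
    basket[SEVERITY_TO_COLOUR[severity]] += 1

def fix_bug(severity: str) -> str:
    """A fixed bug takes the matching lollipop out of the basket; the fixer gets it."""
    colour = SEVERITY_TO_COLOUR[severity]
    if basket[colour] == 0:
        raise ValueError(f"No {colour} lollipops left in the basket.")
    basket[colour] -= 1
    return colour

report_bug("critical")
report_bug("trivial")
print(basket)               # {'red': 1, 'yellow': 0, 'green': 1}
print(fix_bug("critical"))  # red: the fixer earned a red lollipop
```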

Positive effects can include:

  • Programmers are motivated to fix bugs.
  • Testers give positive reinforcement.
  • The basket serves as a visible reporting tool. “Oh, that’s a lot of red…”
  • Non-testers can participate in bug-finding to add lollipops to the basket.
  • New openings for joking.

Negative effects and risks:

  • Attack on the team’s diet. Consider sugar-free lollipops.
  • A dental plan should be a standard part of the salary package.
  • Bug-fixers preferring green lollipops over red.
  • Programmers deliberately adding bugs to get easy lollipops.

As you can see, the method is not without risk. Be context-mindful.


A Test Group’s Declaration of Intent

Experience Reports, Software Testing

I’ve been working as a test consultant at my current client for a year, and I constantly wonder which problem I can help fix next.

The IT department has grown immensely since last year, not just in numbers but in maturity as well. I’m very proud of how much progress these guys have made.

Yet the laws of consulting state that there is always a problem and that it is quite certain to be a people problem. And so we identified our current number one problem: “Many people still don’t know what to expect of us, testers, or what we actually do.”

We’ve had managers ask us to do gatekeeping, exhaustive testing,… and programmers ask us to test systems that don’t have an interface, to do implementation-level testing,…

We decided to create some kind of manifesto: a clear set of rules and statements that best describe our core business. This is what came of it:

[Image: a first draft of the declaration]

This is a first version and hasn’t been put up yet, but we feel we’re getting close.

The testers felt the need for a concise, to-the-point document that we’d print in large format and raise on our wall as a flag. They want to be understood, and to grow from hearing “just test this” to “I need information about feature X regarding Y and Z”.

I, as a consultant, wanted to unite testers, programmers, analysts and managers under a common understanding of what testers do, don’t do and can’t do.
Knowing that my days at the client are numbered, I want to leave behind tools for the testers to fight their own battles, when the time comes.
I have seen us become a team that supports the IT department in diverse and effective ways. Yet recently, powers that wish to fit us back into a process box have been at play as well. I’d hate to see the team reduced to a checkbox on a definition of done again.

 

Isn’t that your job?

Experience Reports, Software Testing

Last week, the following question was asked on the TestersIO Slack.


The person wanted to find out whether other testers consider it OK to be called out on the quality of their bug reports by a tester more junior than them.

This is, of course, completely dependent on both parties and the way they choose to handle the situation.

  • Are you lecturing the person, or are you requesting more detail?
  • Does he opt for the emotional response, or for a more reasoned one?

Another participant in the discussion suggested inviting the developer who’d handle the bugs to explain why the reports were insufficient.

This led me to develop a visualization of an idea that had been growing since I read Gerald Weinberg’s “Perfect Software: And Other Illusions About Testing”.


An explainability heuristic: the Responsibility Meter

I’m not sure anyone ever told me this specifically, but everyone seems to be in agreement: Testers search, coders fix.

Looking more closely, nothing is ever that easy.

Why wouldn’t both roles be able to do both activities?
Another misconception is that the responsibilities span only two activities, when in reality they cover many more.

The road from “Hey, this looks strange…” to “This is an issue” to “The issue is fixed and nothing else seems to be broken” is often long and complex but always context-dependent.

The responsibility meter is a tool to support discussions.
If you find yourself dealing with:

  • Over-the-wall development
  • Ignorant co-workers
  • Unhelpful bug reports

then the meter may be a good step towards a solution:

[Image: the Responsibility Meter]

  1. The first scale visualizes the road from discovery to identification of the issue. This is where most of the discussion takes place.
  2. The second scale depicts what happens after identification. Activities that belong on this scale, but aren’t drawn in, could be debugging, refactoring, adding checks, trial and error, troubleshooting, further testing… (a small sketch of the idea follows below).
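
As a purely illustrative sketch, the meter can be thought of as an ordered list of activities with a movable marker: everything up to and including the marker is the reporter’s responsibility, everything after it is handed over. The activity names below follow the Weinberg list cited at the end of this post; the example marker position is an assumption, not a recommendation.

```python
# Illustrative sketch of the Responsibility Meter as an ordered scale with a marker.
# Activity names follow Weinberg (Perfect Software, pg. 33-36); the split shown
# in the example is an assumption for illustration only.
SCALE_ONE = ["discovery", "pinpointing", "locating", "determining significance"]
SCALE_TWO = ["repairing", "troubleshooting", "testing to learn"]

def split_responsibilities(marker):
    """Return (reporter's activities, fixer's activities) for a given marker.
    The marker names the last activity the reporter is responsible for."""
    scale = SCALE_ONE + SCALE_TWO
    cut = scale.index(marker) + 1
    return scale[:cut], scale[cut:]

# Example: the team agrees the tester pinpoints the issue and the coder takes it from there.
reporter, fixer = split_responsibilities("pinpointing")
print(reporter)  # ['discovery', 'pinpointing']
print(fixer)     # ['locating', 'determining significance', 'repairing', ...]
```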

If a tester thinks his job of describing the bug ends at discovery,
while the coder expects the bug to be completely pinpointed, isolated, generalized, maximized,… and documented in enough detail before he starts fixing,
then nothing ever gets fixed, at least not efficiently.

Most of the time, there is no need to explicitly set the marker.
Awareness of the scale is usually more than enough.

There are situations, however, where you need to have the talk about responsibility. Where it starts and where it ends.

It is not unusual for developers to expect more detail than the testers are willing or able to give. Miscommunication leads to tension.
Tension leads to many more and worse problems.

It might be necessary to reset and adjust the meter a couple of times during the project, or make exceptions for certain special cases.
You should note that the scales are not set in stone. The activities may switch places or be skipped completely. Use it in your context to the advantage of the team.


This meter builds on Cem Kaner’s Bug Advocacy heuristic RIMGEA (Bug Advocacy, lecture six) and on Gerald Weinberg’s activities of discovery, pinpointing, locating, determining significance, repairing, troubleshooting and testing to learn (Perfect Software, pg. 33-36).