The Stickering

Experience Reports, Software Testing

‘The Stickering’ is a loosely managed, session-based reporting-and-guiding tool for your (user acceptance) testing. Feel free to adapt and use it for your own projects. The explanation given here is how it was integrated in one particular context.

Context: Exactly one year ago, a few other consultants and I were hired to help transform the existing process of developing and deploying directly into production while hoping nothing blows up, into a more closely managed process with more safeguards.
In addition, the complete software application then in place would be recreated in a more modern technology. This meant: new process, new technology, new infrastructure, new people, old business processes with new changes. All this under rigidly set deadlines.

Fast forward and zoom in on testing: the first release was a disaster. Testability was an afterthought, so testing in earnest could only start when it was already too late. Bugs ramped up, deadlines were reset twice and the bi-weekly fix releases were taking up most of the test team’s time.

While we were busy fighting old battles, the team of coders was almost doubled and two new projects were started.

Today, we’re two months from the deadline, one more rigid than the last, and we can finally start testing in earnest. The functionality is huge, complex and still changing.
We do not have the necessary knowledge to do a good enough job.

Enter ‘The Stickering’

Because the functionality is so incredibly complex and important to the business, we’ve been able to get a lot of support. Many people are interested and motivated to help us find important information.
Suddenly, there are 20 other people testing the application, and without an adjusted process we’re sure to lose track of what we’ve been doing and what is still to be done.

To manage our coverage, we created mindmaps (www.mindmup.com). Together with the functional people and their documents, we outlined a few modules and gathered a bunch of functionalities, together with interesting test ideas, in various mindmaps.

We pinned them to the wall. Next, we ordered three kinds of stickers, each in a different colour: one colour for testers, one for business people and one for people in IT-related roles who work closely with business users, such as analysts and helpdesk.

They visit us in blocks of half days. Each session, they choose a few connected nodes on the mindmaps and start exploring. We ask them to focus on their unique point of view. Helpdesk professionals know where users will have questions and where quick solutions should be implemented. Finance will have another focus.

During their sessions, they test the nodes from their point of view and note their findings in the following Test Charter (click through for an example).

After their session, we debrief and ask them all sorts of questions. (I will elaborate on this in a later post)

The final step is adding the coloured stickers to the mindmaps once the user has the feeling she has tested the node sufficiently. This way, anyone can pass the wall and see our progress. The goal is to get all nodes stickered by each colour.

Together with our bug wall, our results and progress become very visible.

We’ve only just begun with this and I’ll be sure to report on how it worked for us in the end.

 


 

Agreeing on uncertainty

Experience Reports, Software Testing

Coverage in Scope

In a software project, you can come across quite a few kinds of coverage. There’s Line coverage, Decision coverage, Use Case coverage, Branch coverage,… It makes sense to talk in terms of percentages when addressing these.
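
To make the difference concrete, here is a minimal sketch (hypothetical code, not from any particular project) showing how two of these kinds can disagree on the very same test:

    # Hypothetical example: one test, two different coverage percentages.
    def apply_discount(price, is_member):
        if is_member:
            price = price * 0.9
        return price

    # This single check executes every line of apply_discount
    # (100% line coverage) but exercises only one of the two outcomes
    # of the "if" (50% branch coverage).
    assert apply_discount(100, True) == 90.0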

The above examples are descriptions of exhaustive, finite lists. The lists may not be complete, but somehow, by someone, somewhere these lists are conceived and decided upon.

They will grow in time, as new information becomes apparent and new important issues are uncovered.

This is important: the changes to these lists are manageable, even if they don’t seem to be at first. You can prioritize, as some of the items are more important than others. You start from what is apparent and deal with what comes your way. Use cases are added, lines are added, decisions are clarified, branches are checked,… It’s a work in progress and eventually the project will reach a point where change isn’t relevant anymore. Coping with that constant change is being agile.

Coverage Estimation

Suppose you have 1,000,000 lines of code. Tool-enabled checks are already in place that give you 90% line coverage. Could you estimate how much time it would take to reach 95% line coverage?

It’s a straightforward question, but gathering the information to offer as precise an estimate as possible is a herculean task. Even then the approach is ambiguous and the result uncertain. But you can give an informed estimate if you want to: an educated guess, with some margin for error and unforeseen difficulties calculated in.
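
The raw arithmetic behind the question is trivial; a back-of-the-envelope sketch:

    total_lines = 1_000_000
    covered     = 0.90 * total_lines   # 900,000 lines already covered
    target      = 0.95 * total_lines   # 950,000 lines needed for 95%
    remaining   = target - covered     # 50,000 lines still to cover
    print(f"Lines still to cover: {remaining:,.0f}")

Everything hard hides in what those 50,000 lines actually are, which is where the parameters below come in.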

Possible parameters for line coverage estimation:

  • # of lines
  • Complexity of lines
  • Disparity of lines

In order to agree upon an estimate, the influencing parameters must be agreed upon; to agree upon a parameter, it must be known. Estimates are more than guesses: they are an agreement between two parties. One side factors the time into his or her planning; the other wants results within the borders of the estimate.

Test Coverage

By definition, testing is infinite. You have knowledge of only so many influencing parameters, and there are many more in hiding.
Test Coverage, in percentages, is an absurd concept.

Consider the following proposition:

Will you work for me? It involves pushing buttons.

I won’t tell you how long, how many times, what happens when buttons are pushed, where you’ll be doing this, in which environment, or whether it involves trained wildlife.

I’m guessing you find this absurd. Estimating Test Coverage is just as preposterous, as you can’t put your finger on the infinite parameters flying around.
People, by nature, are unwilling to agree to a set of uncertainties.

However, if you or anyone on your team is still hell-bent on getting estimates for testing, there are possibilities.

  • You both could agree on a set of known parameters, keeping in mind that new ways will open up and tamper with your estimates.
  • Or both parties could outline a model of the feature and work from there.
  • You could agree on spending X time on the feature and nothing more.
  • An initial hour or two could be estimated for a first probing session, after which a clearer ‘estimate’ could be given.

All of the above are severely lacking in either meaningful coverage or estimation. There are many workarounds and different possibilities, but they are usually artificial and bound to escalate.

Estimates seem harmless at first, but:

  • They have an impact on people’s planning, frustrating them when estimates are shattered.
  • They can put a dent in your credibility as a tester.
  • They can severely restrict your freedom as a tester.

You need to be able to push where it hurts; being told where and for how long you can test restricts this.

BugCoin

Experience Reports, Software Testing

Bitcoin has been around for some time now. In a nutshell: it’s a currency that is generated by software which was created by humans. It consists of 0’s and 1’s and is only as valuable as people deem it to be.

Guess what! So are bugs! Hopefully, they weren’t generated on purpose, though… but they can be extremely valuable as a currency. Treat the following as a heuristic: it won’t work in every project, and in some contexts it might even be hazardous.
It’s usually better to report bugs in a dry, clear manner while keeping the stakeholders’ feelings in mind. It’s important that you don’t antagonize anyone on your team, for that will impact your testing for the worse.

Sometimes, however… there are moments and situations in which it helps to have a certain power to influence the bug flow: by increasing the priority, by describing the bug more clearly so it gets picked up earlier, by having someone implement a quick fix as a favor… There are a ton of ways to influence this bugaboo, and people might benefit from it. Sometimes you find your pockets bulging with BugCoin. Whilst pondering this concept, I found that I have:

  • Bartered bugs as salesware on a market in order to adhere to contracts;
  • Had complex bugs fixed fast as a power play to prove effectiveness;
  • Pinned large numbers of bugs on a wall to show problematic quality;
  • Used bugs to get buy-in with stakeholders.

Most of the time, the easiest way to show added value as a tester is by finding important defects fast. There are cases where your insistent searching is dismissed as being “just your job”.
Showing a handful of BugCoin can sway many people; just make sure they know its value. In my current context, this is how we get things done. Today I spent a few hours searching for a mysterious bug for the accounting department. I’ll make sure they know where that coin came from. 😉 I’m sure this will come in handy sooner or later.

The Team Concord Triumph

Experience Reports, Software Testing

At the current project we face plenty of challenges. The most stubborn is probably that the team can’t adhere to deadlines, and that management can’t get a grasp on the ‘why’ of it.

We move in bi-weekly sprints, working toward a deadline that has been set by… well… someone higher up, based on something they know that we don’t. I suppose.

I don’t mind tight deadlines. They can be a valuable tool, if used responsibly. But be aware that there’s a great many horrific problems with deadlines, if they are not.

  • Frequent tight deadlines aren’t real deadlines anymore; they no longer represent anything.
  • To whom is the deadline important and what for? We don’t know. Do you? Tell us!
  • Why should the team care? Tell us!
  • Too tight deadlines diminish hope. Too loose deadlines encourage sloth. Are you willing to bet on hitting that sweet spot?

It’s not the lack of respect for deadlines that’s the problem though. It’s the perception that the team doesn’t work hard enough. Not hard enough to match the expectations.

Right now, in my context, these deadlines are used as a heuristic. They are there, just like the time registration, the burndown chart and some other, less popular, procedures to keep track of one thing: Time.

It is believed that if management can get a hold on time and apply the right amount of pressure, the team will function at 100%: 8 hours a day of constant coding.

I find that there are at least two things wrong with that approach.

1. A team’s productivity and motivation aren’t intensified, and certainly not sustained, by keeping the team under the pressure of adhering to rigorous charts and metrics.
2. A focus on time will result in lower-quality work, and the team as a whole will eventually suffer.

If taken too far, these procedures produce nothing but a beating stick, fearfulness and extra time spent cheating the system.

And boy, can you cheat the system. We’ve all seen those movies in which a tyrant tries to find a certain person or group, but his seemingly faithful subjects work against him and hide the lot.
It’s somewhat like those movies: a whole team can support, carry and conceal each other’s discrepancies.

Needing to help someone, having to fix your local environment, being called into a quick meeting,… All are good explanations for why you weren’t able to finish that task. It’s a sign of a good team when all its members protect each other.

Facing this challenge, my advice would be to find another way to motivate (like stressing the value their work brings), or to suggest some introspection on expectations.

As a tester, you can find subtle ways to influence this situation. During planning, I point out all the little stuff that is overlooked but carves daily chunks from the schedule.
When asked “How long will you need to test this?” (a question still asked way too often), I answer that it depends on many uncertain parameters, of which I then give the shortlist: the quality of the product, the availability of developers/functional people, the stability of the environment, the amount of time available, the number of bugs found,…

Slowly but (hopefully) steadily, awareness will grow that there is a great deal more to software development than writing code. We’ve got a team that resembles a group of friends more than anything else. That’s a great basis to work from.

The 136th time

Experience Reports, Software Testing

There’s so much ado about Test Automation that I wouldn’t want to feel left out doing it.
For years now, we’ve been made to believe that test automation is the future of testing: allegedly, no ‘manual testing’ would be needed, because a machine would do the job just as well.

I often feel the test automation trend is a dangerous thing, but today it helped me greatly! I’m not a technical person, nor do I have the ambition to spend my days coding. I have, however, been experimenting with tools for a while: a Ruby course here, some tryouts with the SamouraiWTF virtual machine, and so on.

Selenium IDE is an easy-to-use, easily installable add-on, and incredibly useful. It’s great for throw-away scripts but bad at anything else (like every record-and-playback tool). Today I used it by recording a few steps to log in to the webapp, execute a query and consult record details. From those details I triggered the functionality “Go to next record”.
I had noticed before that this functionality behaved more slowly than others, but testing it by “clicking and waiting” was tedious. With some tweaks to the recorded steps, adding some waiting time and copy-pasting the 2 relevant steps about 200 times, I had the script I needed (roughly the idea sketched below).
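
For the curious, the same idea expressed as a WebDriver script rather than an IDE recording. This is a hypothetical reconstruction, not the actual recorded script: the URL, the locator and the timings are all made up.

    # Hypothetical sketch of the repeated "Go to next record" check.
    import time
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    driver.get("https://example.test/webapp")  # placeholder URL

    # ... log in, execute the query, open the first record's details ...

    for i in range(200):
        # Placeholder locator for the "Go to next record" control.
        driver.find_element(By.ID, "go-to-next-record").click()
        time.sleep(2)  # crude fixed wait, mirroring the tweaked recording

    driver.quit()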

Fired up the browser, pressed F12 for the developer tools, clicked play and went for coffee. The script ran for longer than the coffee break, but the beauty of Selenium is that it can run in the background. On the 136th iteration, the application client ran out of memory, caused by a JavaScript overload.

A good find and a good use of test automation.

Remember though:
It was a human who had the test idea;
It was a human who had the hunch and knew how to pursue it;
It was a human who created the script;
And it was a human who investigated the results.

It took a few others to interpret the error and eventually fix it.