Tasting Let’s Test’s Rock Legends

Conferences, Software Testing

Smokescreens, AC/DC and a semi-headbanging James Bach.

This is how this edition of Tasting Let’s Test Benelux was introduced. Picture it.
I found it to be the catalyst for the day that followed: a conference whose primary ingredients were an energetic crowd, a rock-'n'-roll setting, a 'Testing Laboratory' bar and local talent. I reveled in their knowledge.

Harmony in the Testing-Checking provocation

There’s much to read about the distinction between testing and checking. Often, when James Bach and Michael Bolton talk or write about the subject, there is an unintended subliminal message that checking is testing’s smaller, less important brother. In his opening keynote, James set out to address this, which he did marvelously. By taking the distinction a step further and mixing in deliberate versus spontaneous checking and testing, he explained that both happen constantly during testing sessions. Neither is less important; both are inherent to our testing.

He demonstrated this by replaying and commenting on a testing session. He called this a ‘testopsy’, which in itself was an enlightening way of evaluating and training testing.

The devil’s machinery

I’ve played quite a few “tester’s favorite games” since the day James Bach conjured up his bags of dice. These did not prepare me, however, for puzzle 17. James Lyndsay has, next to a certain rock-icon flair, a wickedly intelligent mind, which I suspect he uses only for devious ends.
… and teaching.

I spent 90 minutes on a puzzle with four inputs and two outputs. Later on, it was revealed that the puzzle consists of three lines of code. Moreover, some people had solved it in about five minutes.
It took me four different visualization models, five pages of notes and countless discarded hypotheses before I managed to solve it.
“Of course.” After solving a puzzle, everything immediately becomes apparent, simple even.

It is, however, the struggle that teaches us the most. Lyndsay framed this process so well that I didn’t feel too bad about spending so much time on what was, in hindsight, an easy puzzle.

These are but two experiences from a day filled with pleasant interactions, new people and refreshing stories.
Thank you!

The 136th time

Experience Reports, Software Testing

There’s so much ado about test automation that I wouldn’t want to feel left out by not doing it.
For years now, we’ve been made to believe that test automation is the future of testing: no ‘manual testing’ would be needed, because a machine could allegedly do the job just as well.

I often feel the test automation trend to be a dangerous thing, but today it helped me greatly! I’m not a technical person, nor do I have the ambition to spend my days coding. I have, however, been experimenting with tools for a while: a Ruby course here, some tryouts with the SamuraiWTF virtual machine, and so on.

Selenium IDE is an easy-to-use, easily installed and incredibly useful add-on. Like every record-and-playback tool, it’s great for throwaway scripts and bad at anything else. Today I used it by recording a few steps to log in to the web app, execute a query and consult a record’s details. From those details I triggered the “Go to next record” functionality.
I had noticed before that this functionality behaved more slowly than others, but testing it by “clicking and waiting” was tedious. After tweaking the recorded steps, adding some waiting time and copy-pasting the two relevant steps about 200 times, I had the script I needed.
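For the technically curious: Selenium IDE stores its recording as a table of steps rather than code, but translated into Python with Selenium WebDriver the idea looks roughly like the sketch below. This is a sketch only, not the actual script; the URL and element ID are hypothetical.

    # Rough WebDriver equivalent of the recorded Selenium IDE script.
    # The URL and element ID are made up for illustration.
    import time
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    driver.get("https://example.org/webapp")  # hypothetical web app

    # ... recorded steps: log in, execute the query, open a record ...

    for _ in range(200):
        # The two copy-pasted steps: click "Go to next record", then wait.
        driver.find_element(By.ID, "nextRecord").click()  # hypothetical ID
        time.sleep(2)  # the added waiting time, letting the page settle

    driver.quit()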

I fired up the browser, pressed F12 for the developer tools, clicked play and went for coffee. The script ran for longer than my coffee break, but the beauty of Selenium is that it can run in the background. On the 136th repetition, the application client ran out of memory, caused by a JavaScript overload.

A good find and a good use of test automation.

Remember though:
It was a human who had the test idea;
It was a human who had the hunch and decided how to pursue it;
It was a human who created the script;
And it was a human who investigated the results.

It took a few others to interpret the error and eventually fix it.

It’s never over, never easy

Software Testing

The waterfall model; 1976
The V-model; 1986
Goodhart’s law; 1975
The agile principles; 2001
Agile Testing Quadrants; 2003
Scrum; 1995
Test automation (as a replacement for testers)
ISO 29119
ISTQB
TMap

There’s a good chance that you’re familiar with some, if not all, of these principles/models/lists/schemes/…

During my first few years as a testing professional, I’ve come across many of these ideas. With different people I’ve discussed them, learned from them and sometimes refuted them.

I’ve grown increasingly irritated with how these heuristics are used.
The heuristics themselves remain relevant, for they provide insight into our industry’s past and growth. They give interesting and valuable insights into how people have perceived their work, and still do. More often than not, people or teams still aim to implement these heuristics, whether they decided so themselves or were sold them as “best practices”.

Usually, these models and principles shape change into something best described as an Immovable Object, and sometimes a whip.

“We work according to a V-model.”
“The team has implemented an Agile process.”
“SCRUM’s how we organize things.”
“I test according to TMap.”

Sooner rather than later, the room to improve upon these ‘truth-dictating’ models becomes very small.
I’ve often heard the argument: “Google does it this way, so it must work” or “Big banks have used this process template for ages”.

That doesn’t mean it works for this team, this project, this product, this client,…
Besides, we don’t necessarily know exactly how Google, or any other big software firm, actually works, do we? Should we care?

Do we believe that someday, someone at these firms drew a process on the wall, sat behind a drum and started beating it, upon which all the other worker ants resumed work? And that when one stepped out of line, he’d be whipped with ‘best practices’, processes, procedures and documentation?
Should we stand by, looking on in awe, and imitate them as best we can?

If you’ve studied these heuristics, you’ll find they provide an anchor. Many ideas you already had about our industry may become apparent, or something new pours in, making room for interesting new thoughts.
They are usually an excellent means around which to revolve discussions that eventually lead to learning.

Another of their qualities is that they should always raise more questions.

They never hold ALL the truth. Mankind, however, loves simplifying things, especially when the underlying matter is extremely complex.

An example:
Today I heard somebody explain to a group what a ‘cube’ (the database structure) is. It took quite some time and completely changed my definition of what I thought a cube was. Yet during his explanation my mind was constantly yelling: “Say it like this: if your database at a certain moment would represent a flat square, and you add a time dimension to it, representing a history of your database, you have a cube.”

That definition is satisfying for most as it is simple and gives the general idea, but it is not enough when you need the details. It also oversimplifies a person’s job, which nobody likes.
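To make the simplification concrete, here is a toy sketch of that mental model in Python, with made-up data. It illustrates my ‘flat square plus time’ picture only; a real OLAP cube is considerably more involved.

    # A database snapshot as a flat square: (row, column) -> value.
    monday  = {("order-1", "status"): "open",    ("order-1", "total"): 100}
    tuesday = {("order-1", "status"): "shipped", ("order-1", "total"): 100}

    # Adding time as a third dimension stacks the squares into a cube:
    # (day, row, column) -> value.
    cube = {("monday", *key): val for key, val in monday.items()}
    cube.update({("tuesday", *key): val for key, val in tuesday.items()})

    # Slicing along the time dimension yields the history of one cell.
    history = {day: val for (day, row, col), val in cube.items()
               if row == "order-1" and col == "status"}
    print(history)  # {'monday': 'open', 'tuesday': 'shipped'}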

Our ultimate goal is to make our users’ lives and jobs easier, but to do this we ourselves need to cross a great chasm of complexity. It does us no good to simplify. There are patterns you can follow and guides that are helpful, but there is no silver bullet. The rocks you tread on, however solid they look, will give way.

“If you want solid ground to stand on, you chose the wrong vocation.” – Gerald Weinberg

Software development is subject to change, constantly. Embrace it.

Pick what’s relevant from existing models, for you, for your team and for your context, and combine it into something that makes sense. Hell, throw in new ideas. Shape it, mold it, re-evaluate it and nurture it consistently.

The Three-Facet Approach

Experience Reports, Software Testing

As I’m writing this, at least 21 other people are feeling distressed, at this very moment and in the hours to come.
The first release of the current project was deployed last Saturday. Tomorrow, Monday, the working lives of 600+ people will be impacted tremendously.

These past few months have been a series of accomplishments. I have seen a team time-travel from the late ’90s to the modern-ish age of software development.
When confronted with tight deadlines, an oversold, over-marketed product, extreme expectations and a new way of working, the team showed it could keep on growing.
They’ve shown incredible tenacity, and it is through this persistence that the deployment was made possible.
Tomorrow will show how well we did.

Before this project started, there was one tester present. She used to test mostly informally, doing a final check on whether fixes actually fixed the problem and didn’t break anything else. Development was limited to implementing small changes on a staging environment before deploying to the live one.

Since that day, five consultant developers and one consultant tester (me) joined the team. Next to working on the product, we were asked to implement a methodology that would make the team more agile and bring more structure to the process.

TFS, Test Management and the Three-Facet Approach

How has this ‘improved process’ impacted our testing?
Among other things, we implemented a few triggers so that the testers became part of the team. No longer will we hear that quality is only our problem, or that we are responsible if any bugs are found in production. We’ve also made sure we integrate with the TFS environment, embedding ourselves further in the development cycle.

Considering Test Planning:
– We want to be able to participate in the Definition of Done
– We want to achieve some sort of traceability
– We want to have a set of core tests
– Most of all, we want to find problems fast

This led to a compromise approach:

  1. Test Cases: For every user story that is put on the scrum board, the testers create a test case or three. These describe core functionality; when all of them pass, our part of the Definition of Done is completed.
  2. Regression Test Set: We maintain a checklist of items such as “Can I sort the grid by clicking the header?” This regression checklist can be used for UAT and as a basis for automation and regression testing.
  3. Charters, Sessions and Test Ideas: This facet has yet to be fully clarified and followed; most of our testing has been primarily informal. Following this approach we’ve found 600+ bugs in limited functionality in only a few months’ time, but we have yet to fill out a single charter.

The three-facet approach is a made-up name for a methodology that serves our project’s needs. It is a compromise. It gives us the room we need to do good testing while keeping the overhead to a minimum. These last few weeks, people have asked us many questions.

  • “How many ongoing bugs to go?”
  • “What do you think of the quality?”
  • “Do you think go-live is possible this weekend?”
  • “Have you tested the billing functionality already?”

The question of whether all test cases were written and/or passed didn’t make the list.
Should I take this as a lesson learned?
In any case, I plan to further implement Session-Based Test Management into our testing now that the pressure of go-live is off.

The Nature of Computation

Software Testing

An advert showed up on my LinkedIn today.
This is what it said:

Dramatic reduction in manual effort. [The client] identified that automation is approximately 97% faster than their prior manual approach to quality assurance.

It had a link to an article in which a presentation could be viewed. I’m very interested in these adverts, for I hope someday to find an honest, clear account of how a test automation tool can actually be profitable to a project.

I once heard it stated: “People who blindly believe these statistics-based sales stories deserve to lose their money on them.”
It was meant in jest and, I admit, I’ve added some extra drama to it.

However, it is a deep running frustration of many testers that test automation is sold as automagic. Let me tell you a bit about computers.

Computers don’t think, computers don’t lie, computers don’t tell stories, computers don’t tell the truth, computers can’t give you advice and computers cannot (yet) substitute for an intelligent, sentient human being.
They take numbers and they give numbers. They calculate.

It is in our nature to believe they can do more than that. There’s no magic in there, unless we put it there. But I don’t know any magicians.

Don’t get me wrong: I love automation. It helps me speed up the boring, simple, obvious checks that come with every daily build. There are developers creating these checks and testers creating other kinds of scripts. Most of the time, they give us useful information in the form of “none of the things that worked previously are now broken”. Sometimes they fail, and we get to check whether the script itself needs maintenance or some functionality actually broke.

The 97% profit, or any time-based comparison, is complete hooey.
Did they factor in the actual creation of the scripts? The maintenance? The infrastructure? The time it took the developers to adapt the application to accommodate the automation? The tooling cost?
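A crude back-of-envelope model shows why the per-run figure alone is meaningless. The numbers below are entirely made up; only the structure of the calculation matters.

    import math

    # Hypothetical figures, purely to illustrate the objection.
    manual_run_hours    = 8.0                      # one manual regression pass
    automated_run_hours = manual_run_hours * 0.03  # the advertised '97% faster'
    saved_per_run       = manual_run_hours - automated_run_hours  # 7.76 h

    creation_hours      = 300.0  # writing the scripts in the first place
    infra_tooling_hours = 80.0   # licenses, infrastructure, app adaptations
    maintenance_per_run = 2.0    # upkeep: locators break, the UI changes

    # Number of runs before the up-front investment is actually recovered.
    break_even = math.ceil((creation_hours + infra_tooling_hours)
                           / (saved_per_run - maintenance_per_run))
    print(f"Break-even after {break_even} runs")  # 66 runs, not 1

And if maintenance per run ever exceeds the time saved per run, the automation never pays for itself at all.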

Even if all of this were factored in: do the tests give the feedback you want? Do they report possible problems, or do they give a false sense of quality? Can they replace anything?

I sincerely hope that the people handling the money, the ones who make the decisions about implementing new tools, processes and so on, take a step back and talk to someone who can put these numbers in perspective.