The Team Concord Triumph

Experience Reports, Software Testing

On my current project we face plenty of challenges. The toughest is probably that the team can’t meet deadlines and management can’t get a grasp on the ‘why’ of it.

We move in bi-weekly sprints, working toward a deadline that has been set by… well… someone higher up, based on something they know that we don’t. I suppose.

I don’t mind tight deadlines. They can be a valuable tool, if used responsibly. But be aware that there are a great many problems with deadlines if they are not.

  • Frequent tight deadlines stop being real deadlines; they no longer represent anything.
  • To whom is the deadline important, and what for? We don’t know. Do you? Tell us!
  • Why should the team care? Tell us!
  • Deadlines that are too tight diminish hope; deadlines that are too loose encourage sloth. Are you willing to bet on hitting that sweet spot?

It’s not the lack of respect for deadlines that’s the problem though. It’s the perception that the team doesn’t work hard enough. Not hard enough to match the expectations.

Right now, in my context, these deadlines are used as a heuristic. They are there, just like the time registration, the burndown chart and some other, less popular, procedures to keep track of one thing: Time.

It is believed that if management can get a hold on time and apply the right amount of pressure, the team will function at 100%. That being eight hours a day of constant coding.

I find that there are at least two things wrong with that approach.

1. A team’s productivity and motivation aren’t increased, and certainly not sustained, by putting the team under the pressure of adhering to rigorous charts and metrics.
2. A focus on time will result in lower-quality work, and the team as a whole will eventually suffer.

If taken too far, these procedures produce nothing but a beating stick, fearfulness and extra time spent cheating the system.

And boy, can you cheat the system. We’ve all seen those movies in which a tyrant tries to find a certain person or group, but his seemingly faithful subjects work against him and hide the lot.
It’s much the same here: a whole team can support, carry and conceal each other’s discrepancies.

Needing to help someone, having to fix your local environment, being called to a quick meeting,… All are good explanations for why you weren’t able to finish that task. It’s a sign of a good team when all its members protect each other.

Facing this challenge, my advice would be to find another way to motivate (like stressing the value the team’s work brings), or to suggest some introspection about expectations.

As a tester, you can find subtle ways to influence this situation. During planning, I point out all the little stuff that is overlooked but carves daily chunks from your schedule.
When asked the question “How long will you need to test this?” (which is still asked way too often), I answer that it depends on many uncertain parameters, of which I then give the shortlist: the quality of the product, the availability of developers and functional people, the stability of the environment, the time available, the number of bugs found,…

Slowly but (hopefully) steadily, awareness will grow that there is a great deal more to software development than writing code. We’ve got a team that resembles a group of friends more than anything else. That’s a great basis to work from.

Tasting Let’s Test’s Rock Legends

Conferences, Software Testing

Smokescreens, AC/DC and a semi-headbanging James Bach.

This is how this edition of Tasting Let’s Test Benelux was introduced. Picture it.
I found this to be the catalyst for the day that followed: a conference whose primary ingredients were an energetic crowd, a rock-’n’-roll setting, a ‘Testing Laboratory’ bar and local talent. I reveled in their knowledge.

Harmony in the Testing-Checking provocation

There’s much to read about the distinction made between testing and checking. Mostly, when James Bach and Michael Bolton talk or write about this subject, there is an unspoken, probably unintentional, subliminal message that checking is testing’s smaller, less important brother. In his opening keynote, James wished to address this, which he did marvelously. By taking the distinction a step further and mixing in deliberate and spontaneous checking/testing, he explained that checking and testing both happen constantly during testing sessions. Neither is of lesser importance; both are inherent to our testing.

He demonstrated this by replaying and commenting on a testing session. He called this phenomenon a ‘testopsy’, which in itself was an enlightening way of evaluating and training testing.

The devil’s machinery

I’ve played quite a few “tester’s favorite games” since the day James Bach conjured up his bags of dice. These did not prepare me, however, for puzzle 17. James Lyndsay has, next to a certain rock-icon flair, a wickedly intelligent mind which, I suspect, he only uses for devious ends.
… and teaching.

I spent 90 minutes on a puzzle with four inputs and two outputs. Later on, it was made clear that the puzzle consists of three lines of code. Moreover, some people had solved it in about five minutes.
It took me four different visualization models, five pages of notes and countless discarded hypotheses before I managed to solve it.
“Of course”. After solving a puzzle, everything immediately becomes apparent, simple even.

It is, however, the struggle that teaches us the most. Lyndsay framed this process so well that I didn’t feel too bad about spending so much time on a puzzle that was, in hindsight, easy.

These are but two experiences from a day filled with pleasant interactions, new people and refreshing stories.
Thank you!

The 136th time

Experience Reports, Software Testing

There’s so much ado about test automation that I wouldn’t want to feel left out doing it.
For years now, we’ve been made to believe that test automation is the future of testing. For instance, no ‘manual testing’ would be needed because a machine would allegedly do the job just as well.

I often feel the test automation trend is a dangerous thing, but today it helped me greatly! I’m not a technical person, nor do I have the ambition to spend my days coding. I have, however, been experimenting with tools for a while: a Ruby course here, some tryouts with the SamuraiWTF virtual machine, and so on.

Selenium IDE is a very easy-to-use, easily installable add-on, and incredibly useful. It’s great for throw-away scripts but bad at anything else (like every record-and-playback tool). Today I used it by recording a few steps to log in to the web app, execute a query and consult record details. From those details I triggered the “Go to next record” functionality.
I had noticed before that this functionality behaved more slowly than others, but testing it by “clicking and waiting” was tedious. After tweaking the recorded steps, adding some waiting time and copy-pasting the two relevant steps about 200 times, I had the script I needed.

I fired up the browser, pressed F12 for the developer tools, clicked play and went for coffee. The script ran for longer than that, but the beauty of Selenium is that it can run in the background. On the 136th iteration, the application client ran out of memory, caused by a JavaScript overload.
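The original was a Selenium IDE recording, which I can’t reproduce here. As a plain-Python sketch of the same idea instead: repeat one action many times and report the first iteration at which it fails. The web action below is a stub, and the failure on call 136 is simulated for illustration.

```python
# Plain-Python sketch of "repeat an action, find the first failing run".
# The real script drove the browser via Selenium IDE; here the web action
# is a stub and the out-of-memory failure is simulated.
import time

def first_failure(action, repetitions, pause=0.0):
    """Run `action` up to `repetitions` times; return the 1-based
    iteration at which it first raises, or None if all succeed."""
    for i in range(1, repetitions + 1):
        try:
            action()
        except Exception:
            return i
        if pause:
            time.sleep(pause)  # the "waiting time" added to the recording
    return None

# Stub standing in for "Go to next record": pretend the client leaks
# memory and falls over on call 136, as the real application did.
calls = {"n": 0}
def go_to_next_record():
    calls["n"] += 1
    if calls["n"] == 136:
        raise MemoryError("client out of memory (JavaScript overload)")

print(first_failure(go_to_next_record, 200))  # -> 136
```

The point of the sketch is only the shape of the experiment: the loop is cheap to write, runs unattended, and pinpoints exactly which repetition broke the application.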

A good find and a good use of test automation.

Remember though:
It was a human having the test idea;
It was a human who had the hunch and knew how to pursue it;
It was a human who created the script;
And it was a human who investigated the results.

It took a few others to interpret the error and eventually fix it.

It’s never over, never easy

Software Testing

The waterfall model; 1976
The V-model; 1986
Goodhart’s law; 1975
The agile principles; 2001
Agile Testing Quadrants; 2003
Scrum; 1995
Test automation (as a replacement for testers)
ISO 29119
ISTQB
TMap

There’s a good chance that you’re familiar with some, if not all, of these principles/models/lists/schemes/…

Over time, during my first few years as a testing professional, I’ve come across many of these ideas. Together with different people I’ve discussed them, learned from them and sometimes refuted them.

I’ve felt increasingly irritated with how these heuristics are used.
The heuristics themselves remain relevant, for they provide insight into our industry’s past and growth. They give interesting and valuable insights into how people have perceived their work, and still do. More often than not, people and teams still aim to implement these heuristics, whether they decided so themselves or were sold them as “best practices”.

Usually, these models and principles shape change into something that can best be described as an immovable object, and sometimes a whip.

“We work according to a V-model.”
“The team has implemented an Agile process.”
“SCRUM’s how we organize things.”
“I test according to Tmap.”

Sooner rather than later, the room to improve upon these ‘truth-dictating’ models becomes very small.
I’ve often heard the argument “Google does it this way, so it must work”, or “Big banks have used this process template for ages.”

That doesn’t mean it works for this team, for this project, for this product, for this client,…
Also, we don’t necessarily know exactly how Google, or any other big software firm, actually works, do we? Should we care?

Do we believe that someday, someone at these firms drew a process on the wall, sat behind a drum and started beating it, upon which all the other worker ants resumed work? And that when one stepped out of line, he’d be whipped with ‘best practices’, processes, procedures and documentation?
Should we stand by, looking in awe, and imitate the best we can?

If you’ve studied these heuristics, you’ll find they provide an anchor. Ideas you already had about our industry may become apparent, or something new pours in, giving room for interesting new thoughts.
They are usually an excellent means around which to revolve discussions that eventually lead to learning.

Another of their qualities is that they should always raise more questions.

They never hold ALL the truth. Mankind, however, loves simplifying things, especially when the underlying subject is extremely complex.

An example:
Today I heard somebody explain to a group what a ‘cube’ (the database structure) is. It took quite some time and completely changed my definition of what I thought a cube was. But during his explanation my mind was constantly yelling: “Say it like this: if your database at a certain moment would represent a flat square, and you add a time dimension to it, representing a history of your database, you have a cube.”

That definition is satisfying for most as it is simple and gives the general idea, but it is not enough when you need the details. It also oversimplifies a person’s job, which nobody likes.
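That “flat square plus time” intuition can be put in toy code. This is purely an illustration: the regions, products, dates and numbers are invented, and a real OLAP cube has measures and arbitrary dimensions, not just a date axis.

```python
# Toy illustration of "flat table + time dimension = cube".
# All names and numbers are invented for the example.

flat = {                      # one snapshot: a flat, two-dimensional table
    ("north", "widgets"): 120,
    ("south", "widgets"): 80,
}

cube = {                      # the same table with a time dimension added,
    "2015-01-01": {("north", "widgets"): 120, ("south", "widgets"): 80},
    "2015-02-01": {("north", "widgets"): 130, ("south", "widgets"): 85},
}                             # i.e. a history of snapshots

# A query now slices along three axes: region, product and time.
print(cube["2015-02-01"][("north", "widgets")])  # -> 130
```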

Our ultimate goal is to make our users’ lives and jobs easier, but to do this we ourselves need to cross a great chasm of complexity. It does us no good to simplify. There are patterns you can follow, there are guides that are helpful, but there is no silver bullet. The rocks you tread on, however solid they look, will give way.

“If you want solid ground to stand on, you chose the wrong vocation.” – Gerald Weinberg

Software development is subject to change, constantly. Embrace it.

Pick what’s relevant from existing models for you, for your team and for your context, and combine it into something that makes sense. Hell, throw in new ideas. Shape it, mold it, re-evaluate it and nourish it continually.

The Three-Facet Approach

Experience Reports, Software Testing

As I’m writing this, at least 21 other people are feeling distressed, and will be for hours to come.
The first release of the current project was deployed last Saturday. Tomorrow, Monday, the work lives of 600+ people will be impacted tremendously.

These past few months have been a series of accomplishments. I have seen a team time-travel from the late 90s to the modern-ish age of software development.
When confronted with tight deadlines, an oversold, over-marketed product, extreme expectations and a new way of working, the team showed they could keep on growing.
They’ve shown incredible tenacity, and it is through this persistence that the deployment was made possible.
Tomorrow will show how well we did.

Before this project started, there was one tester present. She used to test mostly informally, doing a final check on whether fixes actually fixed the problem and didn’t break anything else. Development was limited to implementing small changes on a staging environment before deploying to the live one.

Since that day, five consultant developers and one consultant tester (me) have joined the team. Next to working on the product, we were asked to implement a methodology that would make the team more agile and put more structure into the process.

TFS, Test management and the Three-Facet Approach

How has this ‘improved process’ impacted our testing?
Among other things, we implemented a few triggers so that the testers became part of the team. No longer will we hear that quality is only our problem or that we are responsible if any bugs are found in production. We’ve also made sure we integrate with the TFS environment, embedding ourselves further in the development cycle.

Considering Test Planning:
– We want to be able to participate in the Definition of Done
– We want to achieve some sort of traceability
– We want to have a set of core tests
– Most of all, we want to find problems fast

This led to a compromise approach:

  1. Test Cases: Per User Story that is put on the scrum board, the testers create a test case or three. These describe core functionality and when all these pass, our part of the Definition of Done is completed.
  2. Regression Test Set: A checklist is maintained, e.g. “Can I sort the grid by clicking the header?” This regression checklist can be used for UAT and as a basis for automation and regression tests.
  3. Charters, sessions and test ideas: Even though this facet has yet to be fully clarified and followed, most of our testing has been informal. We’ve found 600+ bugs in limited functionality in only a few months by following this approach, yet we have yet to fill out a single charter.
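For what it’s worth, the second facet is nothing more than data plus a pass/fail record. Here is a hypothetical Python sketch; only the grid-sorting question comes from our real checklist, the other questions and the helper are invented, and the actual checklist lives in our test management tooling, not in code.

```python
# Hypothetical sketch of the regression checklist (facet 2): a list of
# yes/no checks that can double as a basis for UAT or automation.
checklist = [
    "Can I sort the grid by clicking the header?",    # from the real list
    "Can I log in with a valid account?",             # invented example
    "Does 'Go to next record' show the next record?", # invented example
]

def failing_checks(checks, results):
    """Pair each check with its recorded outcome; return the failures."""
    return [c for c in checks if not results.get(c, False)]

results = {
    "Can I sort the grid by clicking the header?": True,
    "Can I log in with a valid account?": True,
    "Does 'Go to next record' show the next record?": False,
}
print(failing_checks(checklist, results))
```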

The three-facet approach is a made-up name for a methodology that serves our project’s needs. It is a compromise. It gives us the room we need to do good testing while keeping the overhead to a minimum. These last few weeks, people have asked us many questions.

  • “How many ongoing bugs to go?”
  • “What do you think of the quality?”
  • “Do you think go-live is possible this weekend?”
  • “Have you tested the billing functionality already?”

The question whether all test cases were written and/or passed didn’t make the list.
Should I take this as a lesson learned?
In any case, I plan to further implement Session-Based Test Management into our testing now that the pressure of go-live is off.