Tuesday, February 7, 2012

Transforming the way testing is performed...part one

Glossary: Test Strategy - High level / abstract test plan, usually less than a page.
Caveat 1: This is how testing is done where I currently work. Is it the way testing should be done? For you, maybe not. For us, it works out pretty well.

SETTING:
We use Scrum with two-week sprints, and we're mostly agile :)

Each 'feature' is a story in scrumworks, or multiple stories depending on feature size. Features can be broad or narrow, depending on the area under work and the PO / team. We have 8 active feature development scrum teams.

As a story is brought into a sprint, the QA person writes an initial test strategy: thoughts about how to test the feature, what the target is, different ways to think about the feature, what areas we are NOT going to test, etc. The strategy should be specific to the work for that story and tie into the work in subsequent stories if the feature spans multiple sprints.

The story is implemented by development and tested by QA until conditions of satisfaction are met. Then the test strategy is updated to reflect what was actually done. Notes about things that might be useful and/or not immediately apparent to the next person are added to the test strategy.

This next part is the one we are currently working on: the automation team takes over. Between interpreting the test strategies and having a conversation with the QA person, the SDETs come up with an automation effort that includes specific test cases. This frees the manual testers from having to manually regress old features that we are not actively testing. (More on this in Part 3.)
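As a rough illustration of what that handoff might produce (everything below is hypothetical: the export feature, the fake client, and the permission roles are invented for the sketch, not our actual product or framework), a strategy line like "verify the export honors the user's permission level" could end up as a small parameterized regression test:

```python
# Hypothetical sketch of an automated regression case derived from a test strategy.
# The feature, the stand-in client, and the roles are all made up for illustration.
import pytest


class FakeExportClient:
    """Stand-in for whatever client the real automation framework would provide."""

    ALLOWED = {"admin", "manager"}

    def request_export(self, role):
        # Return an HTTP-like status code based on the caller's role.
        return 200 if role in self.ALLOWED else 403


@pytest.mark.parametrize(
    "role, expected_status",
    [
        ("admin", 200),      # strategy note: privileged roles can export
        ("manager", 200),
        ("viewer", 403),     # strategy note: read-only roles are blocked
        ("anonymous", 403),  # strategy note: unauthenticated access is blocked
    ],
)
def test_export_respects_permissions(role, expected_status):
    client = FakeExportClient()
    assert client.request_export(role) == expected_status
```

The point is less the code itself than the translation: the prose strategy states the intent, the conversation with the QA person fills in the specifics, and the SDET pins those specifics down as cases the manual testers never have to re-run by hand.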

EXCEPTIONS:
Some features never get a test strategy. Example: "correct the link on page X to now be Y". There is limited usefulness in agonizing over this trivial change. Examine the new page...verify functionality that might be directly affected by the change, but truth be told it's most likely a five minute testing effort. If it takes you longer to write the test strategy than it did to actually test it...don't.

If writing a state diagram is easier than writing up a test strategy, do that. I.e., if there are 3 states that can each have 4 results, those 12 possibilities are easier to just write out than to try to abstract into a test strategy.
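A minimal sketch of what "just write them out" can look like, with placeholder state and result names (these are not from our product; they only show the 3 x 4 enumeration mechanically):

```python
# Hypothetical sketch: enumerate every state/result pairing instead of abstracting
# them into a prose test strategy. The names below are placeholders.
from itertools import product

STATES = ["draft", "published", "archived"]                                # 3 states
RESULTS = ["success", "validation_error", "timeout", "permission_denied"]  # 4 results

# 3 x 4 = 12 combinations; each becomes one line in the checklist / one test case.
for case_number, (state, result) in enumerate(product(STATES, RESULTS), start=1):
    print(f"Case {case_number:2}: starting state={state!r}, expected handling of {result!r}")
```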

Sometimes just writing the test cases out is the quickest method. I frown on this approach because it tends to limit the tester's thinking as they are performing their testing. That said, we have seen cases where X, Y, and Z are the test cases, and thinking outside of those really is just 'trying' to complicate matters in an area of minimal value.

REASONING (behind this madness):
We hire smart people with good judgement. I leave it up to them to know when any reasonably trained QA person could test a new feature with no previous knowledge, and when that person would instead need the limited knowledge included in a test strategy.

We have the standard feature creep, emergency injections, and other realities of software development. This is where the judgement of the people I've hired comes into play. I trust them to make the right choices, and I trust them to be able to defend those choices.

Do people make mistakes? Yep, probably more than I'm aware of, but the system works pretty well given our environment, our systems, our people, and the speed at which we move. It gives us the flexibility to provide minimal documentation (scaled to a feature and its importance) and not have that documentation slow us down any more than necessary.

Tuesday, August 3, 2010

Getting Testing?

Very few people get testing.

The mindset of a good testing person is hard to identify...
When asked to test something do they split the problem up into small parts, or do they simply test inputs and outputs...do they cover all the inputs? all the outputs? Do they only ask hard questions, and never cover the easy stuff? Do they only think of the obvious, and never broach the harder questions? Do they think of the system as a whole? Do they think of things outside the system that can affect it?
See, a good tester hits lots of these things in turn: they identify that they've attacked a problem from one angle, then go back to the beginning and attack it from another angle. This is the single hardest thing to identify, because some people use up mere minutes thinking of different paths to test, while others take significantly longer.

Explaining that to other people is one of the hardest things I've ever tried to do.
Haha, explaining to other people that few people get testing is hard...hehe...sorry, back on topic.

When I explain things to other people by simplifying them, I often feel that I am dumbing it down and that those people will not be making informed decisions. However, if they get accessibility (to the information) instead of accuracy (details), is that enough?

I mean we don't teach math by starting with decimals...you learn integers first.
Example: In first grade we were taught that 2 divided by 3 is not doable... despite the fact that 0.666... is a perfectly valid decimal value. In order to teach accessibility (integers) so that kids can grasp it, we leave out accuracy (decimals) until after they've got the first concept...

So what information about testing can be accessible (to non-testers) while perhaps not being accurate (for testers)?
Depth vs Breadth? Maintenance of test cases? Automation? Execution?

Wednesday, July 14, 2010

People, too many expectations

Recently I've converted from a real job in testing, into a manager of testing people. It's odd going from expecting awesomeness in oneself to being responsible for the awesomeness of a team of people.

For one thing, the people underneath me, the ones I'm supposed to guide, none of them can move fast enough. Things take longer than normal, and projects never seem to be as easy as they first appeared. At first I thought this was because I was a new manager and needed to guide them more, but I have no wish to micro-manage people. I've been micro-managed, and it was all I could do to get my manager off my back (sometimes lying) and then do what I needed to do.

Next I thought perhaps people weren't applying themselves enough. So I grabbed some people I knew had the requisite skills and gave them a project I thought was simple. Result: it took three times as long as I thought it should.

So then I chalked it up to people not having enough time. I mean, a QA Engineer who's on at least one team doesn't have that much free time, and in reality most of the QA people under me are on multiple teams. Yet I can receive detailed feedback from these same QA people about topics that they had to have researched.

I've finally come to the conclusion that it isn't them, it's me (not surprisingly).

I couldn't keep pace with my own expectations, so how would anyone else be able to? For one thing, I never seemed able to move at a pace that was acceptable; there was always something more I could do, some little thing that could be tweaked or made better, a new feature added, some better way to store, retrieve, or move data. So it's not that I've settled for less; it's that I've realized each individual has just a small sliver of time to devote to 'side' projects, and most of those people need to learn something new (oftentimes multiple things) before they can even start the project.

PS: This isn't a ding on them. The crew I have is some of the best people I've had the pleasure of working with. However, I have to temper my need for results / speed with what people can reasonably do without burning themselves out.