Wednesday, April 18, 2012

Transforming the way testing is performed...part two

    It's been almost two years now since I took charge of testing at the eCommerce company where I work.
    My first act was to attempt to abolish test cases. They managed to survive underground for about a year until I could put the final nail in their coffin. We still have test cases, but only in instances where writing them out is faster or simpler than writing out our test strategy.
    My second act was to insist that we write documentation, but that writing it should take less time than the actual testing performed. (If it takes you 5 minutes to test something, it probably doesn't even need to be documented.) The idea is that we need a knowledge repository of what we did, but that repository is for the tester's use.
    At first, many of the test strategies looked like cut-down test plans in which test cases were the only thing used, but as I pressed people for more abstraction, it got easier and easier. Abstracting means talking about what you think needs to be done in testing, using shorthand naming conventions (tours, in this case) like those recommended by James Whittaker in "Exploratory Software Testing" and discussed further here: http://www.developsense.com/blog/2009/04/of-testing-tours-and-dashboards/
    So we now write documentation as a high-level abstract of what we are testing. (I like to call these 10-minute test plans, but in reality they take between 15 and 60 minutes to write up.) Here is a small sample to give you an idea.

***** NAME    DATE *****
Basic Profile tests in ATG
STRATEGY:
  Create documentation on the basic testing strategies for profile tests.  This should include login, logout, and new user creation.
TOURS:
    FEDEX - Verify that usernames are passed between the ATG and the _censored_ databases, including making sure that invalid and duplicate usernames cannot be used
    LANDMARK - Create users in all locations and make sure that those users can log in to all locations, including My and CSC.
    ANTISOCIAL - See how the system handles illegal inputs such as short usernames, long usernames, duplicate usernames, legacy usernames, etc.  Try to break the system.
NOTES:
  The requirements for basic login and user creation are the following:
    - usernames must be between 3 and 15 characters in length
    - new usernames can only contain numbers and letters
    - legacy characters (. _ - and space) are allowed on legacy users only
    - log out should log the user out
    - duplicate usernames should not be allowed (case-insensitive)
    - there are 3 places to create a username: ATG, My, and CSC Tools:
        _censored_
        Get with QA if you do not have the password for it.
Database:    _censored_
IGNORED:
    Anything beyond the basic login and user creation functionality of the profile. This will not include the weird cases, such as what happens when e-mail addresses are modified, usernames are changed, or slugs are changed.
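
A quick aside for the automation-minded: rules like the ones in NOTES above are the kind of thing that eventually gets encoded as explicit checks. Here's a minimal sketch in Python of what that might look like. The validate_username function is a made-up stand-in for whatever the system under test actually exposes, not our real harness.

import string

# Hypothetical stand-in for the system under test's username validation,
# encoding the length and character rules from the NOTES above.
def validate_username(name, legacy=False):
    # usernames must be between 3 and 15 characters in length
    if not 3 <= len(name) <= 15:
        return False
    # new usernames may only contain numbers and letters;
    # legacy users may also contain . _ - and space
    allowed = string.ascii_letters + string.digits
    if legacy:
        allowed += "._- "
    return all(ch in allowed for ch in name)

# A few ANTISOCIAL-tour style probes:
assert not validate_username("ab")                 # too short
assert not validate_username("a" * 16)             # too long
assert not validate_username("bad.name")           # dot is legacy-only
assert validate_username("old.name", legacy=True)  # legacy chars allowed
assert validate_username("bob123")                 # plain alphanumeric is fine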




The reason we chose to do this is that my company hires people who are experienced in testing. I think they all have the ability to take a strategy from anyone and come up with 80% of the test cases that the writer would have.

Is this perfect? No, but…
It solves two frequent problems.
    1: Testers writing days of documentation that no one ever reads, or that is too old to matter by the time someone does.
    2: Testers never documenting what they do.
I think this is a nice, happy-medium solution to the problem, and it works here.


Why is it not perfect? i.e., What failed?
    People get attached to processes, particularly those who like to have organization around their tasks. They got attached to the strategies, so much so that they would use them as templates…not as opportunities to think about the problem. The original idea was presented to people as one of many ways to document testing. However, it has evolved into a more static "This is how we test here." I'm not really sure how to fix this yet. Just by writing this blog, I'm hoping people will re-evaluate why they write strategies.

What I didn't expect to happen:
    I expected a significant amount of pushback from upper management about metrics and numbers (aka passing rates, number of test cases run, etc.). After a very enlightening conversation with the company COO, he told us that if it slows us down, stop doing it. And we haven't been tracking test cases since.

Wednesday, March 7, 2012

Why every tester should take BBST Foundations!

You should take the BBST Foundations class, and here is my reasoning:

What is every tester's greatest asset? (Hint: it's not the ability to do something over and over and over and over again until they succumb and shut down.)

No, it's their mind and their ability to think critically.

So what is critical thinking?

"Critical thinking is the process of thinking that questions assumptions. It is the process of deciding if a claim is true, false, sometimes true or partly true." (1)

One of the main components of testing is questioning assumptions.

  • business assumptions

  • developer assumptions

  • product assumptions

  • your assumptions

  • your assumptions you do not know you have

  • assumptions about how something works

  • assumptions about what doesn't need to be included

  • assumptions about what the client wants

  • etc.

Part of every tester's job is to take these assumptions, analyze them (we call it testing) and provide data back to everyone about which assumptions are true, sometimes true, partly true, and blatantly false. In the ideal world you can do this before coding, by questioning the stories, or specifications, as they are written.

Even if you don't believe in the Context-Driven school or the context-driven approach, critical thinking skills can only help you in whatever job you're in. Unless, of course, you're that person: the one who just wants to slog through the day and get your 8 hours done while accomplishing next to nothing.

So, the real question for testers is: How do you learn or improve critical thinking?

Through training. And what is effective training? The traditional classroom setting can be useful, but interactive peer review is probably the best that I've found so far. (i.e., "When learners talk and teach, they learn") (2)

BBST Foundations accomplishes this through its online classroom structure - you participate with 24 other testers who are there to discuss, talk, teach, and learn from each other. "The BBST series attempts to foster a deeper level of learning by giving students more opportunities to practice, discuss and evaluate what they are learning." (3)

Foundations is the first in a series of classes that focus on critical thinking with a testing bent. This isn't a class you can just listen to and parrot back; you have to "add value to the course with your participation, ... submit reasonably good assignments ... and exams, ... provide reasonable assessments of other students' work". (3) In summary, you have to provide useful data back to the other students.

They do it beautifully; blending knowledge, skills and testing-relevant self-awareness.

The stated goals of the first class are:

  1. Familiar with basic terminology and how it will be used in the BBST courses

  2. Aware of honest and rational controversy over definitions of common concepts and terms in the field

  3. Understand there are legitimately different missions for a testing effort. Understand the argument that selection of mission depends on contextual factors. Able to evaluate relatively simple situations that exhibit strongly different contexts in terms of their implication for testing strategies.

  4. Understand the concept of oracles well enough to apply multiple oracle heuristics to their own work and explain what they are doing and why

  5. Understand that complete testing is impossible. Improve ability to estimate and explain the size of a testing problem.

  6. Familiarize students with the concept of measurement dysfunction

  7. Improve students’ ability to adjust their focus from narrow technical problems (such as analysis of a single function or parameter) through broader, context-rich problems

  8. Improve online study skills, such as learning more from video lectures and associated readings

  9. Improve online course participation skills, including online discussion and working together online in groups

  10. Increase student comfort with formative assessment (assessment done to help students take their own inventory, think and learn rather than to pass or fail the students)

Truth be told, I took this class as a prerequisite for the rest of the series. I thought I would slog through it, be bored, and maybe gain a little in the areas of goals 8, 9, and 10. Instead, through peer review with peers who have vastly different perspectives, I gained a better understanding of all 10 goals.

The major areas this class focused on were:

  • Mission of Testing

  • Oracles and Heuristics

  • Impossibility of complete testing

  • Code Coverage

  • Measurement


It managed to cover these areas as well:

  • Thinking about problems

    • Before planning for them

    • Before talking about them

    • Before attacking them

  • Stepping back and analyzing what you're doing

  • Coming at the problem from another vantage point

  • Communicating with varied audiences (and an example):

    • When talking with developers: do you know the high-level aspects of the software you work on (i.e., HTTP for web-based software)?

    • When talking with business: do you understand the user models you should be working in (i.e., financial software is for people who care about the numbers)?

    • When talking with management: do you understand how what you are doing affects the schedule (i.e., bugs found early get fixed)?

    • When talking with your testing peers: do you know how to communicate clearly about testing aspects (i.e., the difference between this approach and that one)?


And last but not least: I have found that my communication style is lacking. I struggle to communicate with people in testing unless they have 3+ years of experience. And when I'm trying to communicate with developers who have very little interest in or exposure to testing? Let's just say it takes a while and usually leaves us all worse off than when we came in. This class has allowed me to focus my thoughts and present my ideas with more clarity for all parties involved.

My advice to you: sign up for this class, NOW. For the hyper-lazy: http://www.associationforsoftwaretesting.org/training/courses/foundations/

Too many testers don't treat their job as a profession. Even if you only want to be in it for the next 12-24 months, you need to treat it like a career for that time. This includes communicating with others in a professional manner, being knowledgeable about testing, and using that knowledge to demonstrate your skills in the software field. This class will help you on that path.


(1) "Critical Thinking", Wikipedia (http://en.wikipedia.org/wiki/Critical_thinking)

(2) "Training From the Back of the Room!: 65 Ways to Step Aside and Let Them Learn", Sharon Bowman

(3) "BBST Foundations", http://www.associationforsoftwaretesting.org/training/courses/foundations/

Tuesday, February 7, 2012

Transforming the way testing is performed...part one

Glossary: Test Strategy - High level / abstract test plan, usually less than a page.
Caveat 1: This is how testing is done where I currently work. Is it the way testing should be done? For you, maybe not. For us, it works out pretty well.

SETTING:
We use Scrum with 2-week sprints, and we're mostly agile :)

Each 'feature' is a story in scrumworks, or multiple stories depending on feature size. Features can be broad or narrow, depending on the area under work and the PO / team. We have 8 active feature development scrum teams.

As a story is brought into a sprint, the QA person writes an initial test strategy: thoughts about how to test the feature, what the target is, different ways to think about the feature, what areas we are NOT going to test, etc. The strategies should be specific to the work for that story and relate to the work in subsequent stories if the feature spans multiple sprints.

The story is implemented by development and tested by QA until conditions of satisfaction are met. Then the test strategy is updated to reflect what was actually done. Notes about things that might be useful and/or not immediately apparent to the next person are added to the test strategy.

This next part is the one we are currently working on: the automation team takes over. Between interpreting the test strategies and having a conversation with the QA person, the SDETs come up with an automation effort that includes specific test cases. This frees the manual testers from manually regressing old features that we are not actively testing. (More on this in Part 3.)
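
To make the hand-off concrete, here's a sketch of the kind of specific, repeatable test case an SDET might distill from a strategy line like "duplicate usernames should not be allowed (case-insensitive)". Everything in it (the in-memory ProfileService, the names) is invented so the example stands alone; the real automation targets the live system.

# Hypothetical in-memory stand-in for the profile service, invented
# so this example runs on its own.
class ProfileService:
    def __init__(self):
        self._names = set()

    def create_user(self, name):
        key = name.lower()  # duplicates are checked case-insensitively
        if key in self._names:
            raise ValueError("username taken: " + name)
        self._names.add(key)

def test_duplicate_username_rejected_case_insensitively():
    svc = ProfileService()
    svc.create_user("RegressionUser42")
    try:
        svc.create_user("regressionuser42")  # same name, different case
    except ValueError:
        pass  # expected: the duplicate is rejected
    else:
        raise AssertionError("duplicate username was accepted")

test_duplicate_username_rejected_case_insensitively()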

EXCEPTIONS:
Some features never get a test strategy. Example: "correct the link on page X to now be Y". There is limited usefulness in agonizing over such a trivial change. Examine the new page...verify functionality that might be directly affected by the change, but truth be told, it's most likely a five-minute testing effort. If it takes you longer to write the test strategy than it did to actually test it...don't.

If writing a state diagram is easier than writing up a test strategy, do that. For example, if there are 3 states that can each have 4 results, those 12 possibilities are easier to just write out than to abstract into a test strategy.
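
To illustrate why the full table is easier to just write out, here's a throwaway sketch that enumerates 3 hypothetical states against 4 hypothetical results (all the names are made up): twelve lines you can eyeball, not something worth abstracting into prose.

from itertools import product

# Made-up states and results; 3 x 4 = 12 combinations total.
states = ["logged_out", "logged_in", "session_expired"]
results = ["success", "validation_error", "server_error", "timeout"]

# The full table is short enough to read directly.
for i, (state, result) in enumerate(product(states, results), start=1):
    print(f"case {i:2}: state={state}, result={result}")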

Sometimes just writing the test cases out is the quickest method. I frown on this method because it tends to limit the tester's thinking as they perform their testing, but we have seen cases where X, Y, and Z are the test cases, and thinking outside those really is just 'trying' to complicate matters in an area that is minimally useful.

REASONINGS (behind this madness):
We hire smart people with good judgement. I leave it up to them to know when any reasonably trained QA person could test a new feature with no previous knowledge, and when that person would need the limited knowledge included in a test strategy.

We have the standard feature creep, emergency injections and other areas of software reality. This is where the judgement of the people I've hired comes into play. I trust them to make the right choices and I trust them to be able to defend those choices.

Do people make mistakes? Yep, probably more than I'm aware of, but the system works pretty well given our environment, our systems, our people, and the speed at which we move. This gives us the flexibility to provide minimal documentation (given a feature and its importance), and not have that documentation slow us down any more than is necessary.