Wednesday, February 27, 2013

Final Farewell


So, having left my last job as a manager, I was left with one final task that I owed my employees.

Recommendations. 

Writing recommendations is one of the most boring things I've ever done. My style tends to be sarcastic, personal, story-based, and NOT business professional.
However, this will not help the people I feel should be recommended in the long term.

So, in order to write the recommendations I want to write, I'll write them here without names, just to have an outlet.

1. Hire, NOW, I don't care what the position is.
2. Death cat, nuff said.
3. Testing savant, hire, then clone her.
4. His childlike face is a disguise for his ninja-like skills.
5. Despite his ADHD, he makes shit go, faster than anyone you've ever met.
6. Ninja, would hire no matter how many times he leaves me.
7. From nothing to fully competent SDET in 3 months, what else do you need?

Despite how fun and cool those are to me (and most of the people who work with these folks will likely recognize many of them), I ended up being more traditional in my recommendations.

aka

is a force to be reckoned with. She is willing to take on any task, for any team, and accomplish it. Her testing knowledge is not limited by areas she doesn't know, as she quickly picks up and learns new areas.

is a diamond in the rough. She is lightning when testing on mobile devices and can do the work of 2-3 other people. Her willingness to get the job done at all costs is second to none, and her positive attitude lightens the mood.

has an uncanny ability to blend manual testing and automation. His work ethic is exemplary, and he stands above the crowd for his constant dedication and desire to get things done.

quickly sets himself apart as a get-it-done person. He is able to build tools that amaze and impress, generally in less time than people think is possible. His testing skills are above and beyond.

These still contain a bit of me, as I can't really say enough about some of these people.

Wednesday, April 18, 2012

Transforming the way testing is performed...part two

    It's now been almost 2 years since I took charge of testing at the eCommerce website where I work.
    My first act was to attempt to abolish test cases. They managed to survive under wraps for about a year until I could put the final nail in their coffin. We still have test cases, but only in instances where writing the test cases out is faster or simpler than writing out our test strategy.
    My second act was to dictate that we should be writing documentation, but that it should take less time than the actual testing performed. (If it takes you 5 minutes to test it, it probably doesn't even need to be documented.) The idea being that we need a knowledge repository of what we did, but that repository is for the tester's use.
    At first, lots of the test strategies looked like cut-down test plans where test cases were the only thing used, but as I pressed people for more abstraction, it got easier and easier. Abstracting is where you talk about what you think needs to be done in testing, using shorthand naming conventions (tours, in this case) like those recommended by Whittaker in "Exploratory Software Testing" and talked about more here: http://www.developsense.com/blog/2009/04/of-testing-tours-and-dashboards/
    So we now write documentation as a high-level abstract of what we are testing. (I like to call these 10-minute test plans, but in reality they take between 15-60 minutes to write up.) Here is a small sample to give you an idea.

***** NAME    DATE *****
Basic Profile tests in ATG
STRATEGY:
  Create documentation on the basic testing strategies for profile tests.  This should include login, logout, and new user creation.
TOURS:
    FEDEX - Verify that usernames are passed between the ATG and the _censored_ databases including making sure that invalid and duplicate usernames can not be used
    LANDMARK - Create users in all locations and make sure that those users can log in to all locations including in My and CSC.

    ANTISOCIAL - See how the system handles illegal inputs such as short usernames, long usernames, duplicate usernames, legacy usernames, etc.  Try to break the system.
NOTES:
  The requirements for basic login and user creation are the following:
    - usernames must be between 3 and 15 characters in length
    - new usernames can only contain numbers and letters
    - legacy characters including . _ - and space on legacy users only
    - log out should log user out
    - duplicate usernames should not be allowed (non-case sensitive)
    - 3 places to create username: ATG, My and CSC
Tools:
    _censored_
    Need to get with QA if you do not have the password for it.
Database:    _censored_
IGNORED:
    Anything beyond the basic log in and user creation functionality of the profile. This will not include the weird cases, such as what happens when e-mail addresses are modified, usernames are changed, or slugs are changed.




The reason we chose to do this is that my company hires people who are experienced in testing. I think they all have the ability to take a strategy from anyone and come up with 80% of the test cases that the writer did.

Is this perfect? No, but…
It solves two frequent problems.
    1: Testers writing days of documentation that no one ever reads, or, if they do, the documentation is too old to matter.
    2: Testers never documenting what they do.
I think this is a nice, happy medium solution to the problem and it works here.


Why is it not perfect? i.e., What failed?
    People get attached to processes, particularly those who like to have organization around their tasks. They got attached to the strategies, so much so that they would use them as templates…not as opportunities to think about the problem. The original idea was presented to people with the idea that there are other ways to document testing. However, it has evolved into a more static "This is how we test here". I'm not really sure how to fix this yet. Just by writing this blog, I'm hoping people will re-evaluate why they write strategies.

What I didn't expect to happen:
    I expected a significant amount of pushback from upper management about metrics and numbers (aka passing rates, number of test cases run, etc.). After a very enlightening conversation with the company COO, he told us that if it slows us down, stop doing it. And we haven't been tracking test cases since.

Wednesday, March 7, 2012

Why every tester should take BBST Foundations!

You should take the BBST Foundations class, and here is my reasoning:

What is every tester's greatest asset? (Hint: It's not the ability to do something over and over and over and over again until they succumb and shut down.)

No, it's their mind and their ability to think critically.

So what is critical thinking?

"Critical thinking is the process of thinking that questions assumptions. It is the process of deciding if a claim is true, false, sometimes true or partly true." (1)

One of the main components of testing is questioning assumptions.

  • business assumptions

  • developer assumptions

  • product assumptions

  • your assumptions

  • your assumptions you do not know you have

  • assumptions about how something works

  • assumptions about what doesn't need to be included

  • assumptions about what the client wants

  • etc.

Part of every tester's job is to take these assumptions, analyze them (we call it testing) and provide data back to everyone about which assumptions are true, sometimes true, partly true, and blatantly false. In the ideal world you can do this before coding, by questioning the stories, or specification, as they are written.

Even if you don't believe in the Context-Driven school or Context-Driven approach, critical thinking skills can only help you in whatever job you're in. Unless, of course, you're that person, the one who just wants to slog through the day and get your 8 hours done while accomplishing next to nothing.

So, the real question for testers is: How do you learn or improve critical thinking?

By training. And what is effective training? The traditional classroom setting can be useful, but interactive peer review is probably the best that I've found so far. (i.e., "When learners talk and teach, they learn") (2)

BBST Foundations accomplishes this through its online classroom structure - you are participating with 24 other testing people who are there to discuss, talk, teach and learn from each other. "The BBST series attempts to foster a deeper level of learning by giving students more opportunities to practice, discuss and evaluate what they are learning." (3)

Foundations is the first in a series of classes that focuses on critical thinking with a testing bent. This isn't a class you can just listen to and parrot back, you have to "add value to the course with your participation, ... submit reasonably good assignments ... and exams, ... provide reasonable assessments of other students' work". (3) In summary you have to provide useful data back to other students.

They do it beautifully; blending knowledge, skills and testing-relevant self-awareness.

The stated goals of the first class are:

  1. Familiar with basic terminology and how it will be used in the BBST courses

  2. Aware of honest and rational controversy over definitions of common concepts and terms in the field

  3. Understand there are legitimately different missions for a testing effort. Understand the argument that selection of mission depends on contextual factors. Able to evaluate relatively simple situations that exhibit strongly different contexts in terms of their implication for testing strategies.

  4. Understand the concept of oracles well enough to apply multiple oracle heuristics to their own work and explain what they are doing and why

  5. Understand that complete testing is impossible. Improve ability to estimate and explain the size of a testing problem.

  6. Familiarize students with the concept of measurement dysfunction

  7. Improve students’ ability to adjust their focus from narrow technical problems (such as analysis of a single function or parameter) through broader, context-rich problems

  8. Improve online study skills, such as learning more from video lectures and associated readings

  9. Improve online course participation skills, including online discussion and working together online in groups

  10. Increase student comfort with formative assessment (assessment done to help students take their own inventory, think and learn rather than to pass or fail the students)

Truth be told, I took this class as a prerequisite for taking the rest of the series. I thought I would slog through this class, be bored, and maybe gain a little bit in the areas of 8, 9 and 10. Through peer review with peers who have vastly different perspectives, I gained a better understanding of all 10 goals.

The major areas this class focused on were:

  • Mission of Testing

  • Oracles and Heuristics

  • Impossibility of complete testing

  • Code Coverage

  • Measurement


It managed to cover these areas as well:

  • Thinking about problems

    • Before planning for them

    • Before talking about them

    • Before attacking them

  • Step back and analyze what you're doing

  • Come at the problem from another vantage point

  • Communicating with varied audiences (and an example):

    • When talking with developers: do you know the high-level aspects of the software you work in (i.e., HTTP for web-based software)?

    • When talking with business: do you understand the user models you should be working in (i.e., financial software is for people who care about the numbers)?

    • When talking with management: do you understand how what you are doing affects the schedule (i.e., early-found bugs get fixed)?

    • When talking with your testing peers: do you know how to communicate clearly about testing aspects (i.e., the difference between this or that approach)?


And last but not least: I have found that in communicating with people, my style is lacking. I struggle to communicate with people in testing unless they have 3+ years of experience. What about when I'm trying to communicate with developers who have very little interest in or exposure to testing? Let's just say it takes a while and usually leaves us all worse off than when we came in. I know this class has allowed me to focus my thoughts and present my ideas with more clarity for all parties involved.

My advice to you: sign up for this class, NOW. For the hyper-lazy: http://www.associationforsoftwaretesting.org/training/courses/foundations/

Too many testing people don't treat their jobs as a profession. Even if you only want to be in it for the next 12-24 months, you need to treat it like a career for that time. This includes: communicating with others in a professional manner, being knowledgeable about testing and using that testing knowledge to demonstrate your skills in the software field. This class will help you on that path.


(1) "Critical Thinking", Wikipedia (http://en.wikipedia.org/wiki/Critical_thinking)

(2) "Training From the Back of the Room!: 65 Ways to Step Aside and Let Them Learn", Sharon Bowman

(3) "BBST Foundations", http://www.associationforsoftwaretesting.org/training/courses/foundations/

Tuesday, February 7, 2012

Transforming the way testing is performed...part one

Glossary: Test Strategy - High level / abstract test plan, usually less than a page.
Caveat 1: This is how testing is done where I currently work. Is it the way testing should be done? For you, maybe not. For us, it works out pretty well.

SETTING:
We use scrum, 2 week sprints, and mostly agile :)

Each 'feature' is a story in scrumworks, or multiple stories depending on feature size. Features can be broad or narrow, depending on the area under work and the PO / team. We have 8 active feature development scrum teams.

As a story is brought into a sprint, the QA person writes an initial test strategy: thoughts about how to test the feature, what the target is, different ways to think about the feature, what areas we are NOT going to test, etc. The strategies should be specific to the work for that story and be related to other work in subsequent stories if the feature spans multiple sprints.

The story is implemented by development and tested by QA until conditions of satisfaction are met. Then the test strategy is updated to reflect what was actually done. Notes about things that might be useful and/or not immediately apparent to the next person are added to the test strategy.

This next part is the one we are currently working on. The automation team takes over. Between interpreting the test strategies and having a conversation with the QA person, the SDETs come up with an automation effort that includes specific test cases. This allows the manual testers to not have to manually regress old features that we are not actively testing. (More of this will be in Part 3)

EXCEPTIONS:
Some features never get a test strategy. Example: "correct the link on page X to now be Y". There is limited usefulness in agonizing over this trivial change. Examine the new page...verify functionality that might be directly affected by the change, but truth be told it's most likely a five minute testing effort. If it takes you longer to write the test strategy than it did to actually test it...don't.

If writing a state diagram is easier than writing up a test strategy, do that. I.e., if there are 3 states that can each have 4 results, those 12 possibilities are easiest to just write out, rather than trying to abstract them into a test strategy.
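That kind of exhaustive write-out can even be generated mechanically. A minimal sketch, where the state and result names are hypothetical placeholders (not any real feature of ours):

```python
from itertools import product

# Hypothetical states and results -- placeholders for illustration only.
states = ["logged_out", "logged_in", "session_expired"]
results = ["success", "validation_error", "server_error", "timeout"]

# Enumerate all 12 state/result combinations as explicit cases,
# instead of abstracting them into a strategy.
cases = [f"{state} -> {result}" for state, result in product(states, results)]

for case in cases:
    print(case)
```

When the combination count is this small, the flat list is the documentation; abstraction would only hide the 12 concrete checks.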

Sometimes just writing the test cases out is the quickest method. I frown on this method because it tends to limit the thinking of the tester as they are performing their software testing. We have seen some cases where X, Y, Z are the test cases, and thinking outside those really just 'tries' to complicate matters in an area that is minimally useful.

REASONINGS (behind this madness):
We hire smart people with good judgement. I leave it up to those smart people to know when any reasonably trained QA person could test a new feature with no previous knowledge, or when those same people could test something with the limited knowledge included in a test strategy.

We have the standard feature creep, emergency injections and other areas of software reality. This is where the judgement of the people I've hired comes into play. I trust them to make the right choices and I trust them to be able to defend those choices.

Do people make mistakes? Yep, probably more than I'm aware of, but the system works pretty well given our environment, our systems, our people, and the speed at which we move. This allows us the flexibility to provide minimal documentation (given a feature and its importance), and not have that documentation slow us down any more than is necessary.

Tuesday, August 3, 2010

Getting Testing?

Very few people get testing.

The mindset of a good testing person is hard to identify...
When asked to test something do they split the problem up into small parts, or do they simply test inputs and outputs...do they cover all the inputs? all the outputs? Do they only ask hard questions, and never cover the easy stuff? Do they only think of the obvious, and never broach the harder questions? Do they think of the system as a whole? Do they think of things outside the system that can affect it?
See, a good tester hits lots of these things in turn; they identify that they've attacked a problem from one angle, then go back to the beginning and attack it from another. This is the single hardest thing to identify, because some people use up minutes while thinking of different paths to test, while others take significantly longer.

Explaining that to other people is one of the hardest things I've ever tried to do.
Haha, explaining to other people that few people get testing is hard...hehe...sorry, back on topic.

When I explain things to other people by simplifying them, I often feel that I am dumbing it down and that those people are not going to be making informed decisions. However, if they get accessibility (to the information) instead of accuracy (details), is that enough?

I mean we don't teach math by starting with decimals...you learn integers first.
Example: In first grade we were taught that 2 divided by 3 is not doable... despite the fact that 0.6666... is a valid decimal value. In order to teach accessibility (integers) so that kids can grasp it, we left out accuracy (decimals) until after they got the first concept...

So what information about testing can be accessible (to non-testers) while perhaps not being accurate (for testers)?
Depth vs Breadth? Maintenance of test cases? Automation? Execution?

Wednesday, July 14, 2010

People, too many expectations

Recently I've converted from a real job in testing, into a manager of testing people. It's odd going from expecting awesomeness in oneself to being responsible for the awesomeness of a team of people.

For one thing, none of the people underneath me, the ones I'm supposed to guide, can move fast enough. Things take longer than normal, and projects never seem to be as easy as they first appeared. At first I thought this was because I was a new manager and I needed to guide them more, but I have no wish to micro-manage people. I've been micro-managed, and it was all I could do to get my manager off my back (sometimes lying) and then do what I needed to do.

Next I thought perhaps people weren't applying themselves enough. So I grabbed some people I knew had the requisite skills and gave them a project I thought was simple. Result: it took three times as long as I thought it should.

So then I chalked it up to people not having enough time. I mean, a QA Engineer who's on at least one team doesn't have that much free time, not to mention that in reality most of the QA people under me are on multiple teams. But I can receive detailed feedback from these same QA people about topics that they had to have researched.

I've finally come to the conclusion that it isn't them, it's me (not surprisingly).

I couldn't keep pace with my own expectations, so how would anyone else be able to? For one thing, I never seemed able to move at a pace that was acceptable; there was always something more that I could do, some little thing that could be tweaked or made better, a new feature added, some better way to store, retrieve or move data. So it's not that I've settled for less; it's that I've realized that each individual has just a small sliver of time they can devote to 'side' projects, and most of those people need to learn something (oftentimes multiple things) new before they can even start the project.

PS: This isn't a ding on them. The crew I have is some of the best people I've had the pleasure of working with. However, I have to temper my need for results / speed with what people can reasonably do without burning themselves out.