Friday, February 25, 2011

WTA07 - when do we really start testing?

Last Saturday I participated in Weekend Testing Americas #7. This session made me realize once again how little I know :).

The Challenge: Cross the River
Press the round blue button to begin. The goal is to get all eight people across the river. You must respect the following rules:

Only two people can be on the raft at one time.
Only the mother, father & officer can operate the raft.
The mother cannot be left with the sons without the father (or she’ll beat them).
The father cannot be left with the daughters without the mother (or he’ll beat them).
The thief cannot be left with anyone without the officer (or there will be even more beat-downs).

The Mission:
"I’m hiring staff for my IT department. I was told that this simple program will help me in finding the smartest candidates. Your mission: test the program and report how it suits my needs!"

Session from my perspective
I read the mission and downloaded the application. Then I started testing - trying to find any problems with the application as fast as I could. Meanwhile, I was reading the main chat. It was strange that no one was writing about "findings" or "issues" related to the program. The discussion was focused mainly on the mission. Instead of simply clicking through the program, people were trying to:
- get as many details from the customer as possible
- challenge the mission
- understand what kind of "smartness" we are looking for
- understand how a piece of software can evaluate smartness
- verify whether we are smart enough to evaluate others' smartness
- and so on

The questions above reminded me how many times I have started testing something without asking for more details and without raising questions. I realized that testing hardly ever starts with executing the program.

The debate was very inspiring. One of the brilliant thoughts came from Adam Yuret:
Bugs are meaningless in this context. Like finding that the logo on one side of the sinking Titanic is the wrong color.

If I could remember only one thing from the Weekend Testing session it would be from Justin Byers:
Avoid jumping into the software looking for bugs until you know what you’re really testing.


For the whole session, see the chat transcript.

Participants: Michael Larsen, Justin Byers, Albert Gareev, Shmuel Gershon, Markus Deibel, Mohinder Khosla, Adam Yuret, Perze Ababa, Phil Kirkham, Aleksander Lipski

Thanks, guys, for another testing lesson.
Alek

Thursday, February 10, 2011

Skype Coaching with James Bach

One of the privileges of being a member of the Association for Software Testing is coaching sessions over Skype. Recently, I had a chance to have a lesson with James Bach. It is hard to find someone in the testing field who doesn't know him, or at least hasn't heard of him. The most important thing about his coaching sessions is that he doesn't give tutorials or pure advice. He challenges you with questions and poses puzzles. The goal is to learn through mental struggle.

I asked for such a lesson last month and was very lucky because he found some time for it. We started the lesson with the question: "Are testers allowed to make assumptions?"

It was kind of a tricky question. Since I started learning the craft, no one had ever told me anything about assumptions. I had a feeling that assumptions were bad, but can we really work without them? I knew both sides of making assumptions, but it was not enough - I couldn't find a good example. Besides, I felt like I was in court: anything I said could be used against me, so I really needed to be careful with my words. English is not my mother tongue, which made it even more challenging. It reminded me that we as testers need to be aware of what we are saying, be careful about making absolute statements, and always be ready to explain our point of view.

Another question James asked me was: "Should you assume, when you are testing, that your test platform is working properly? Or should you question and check that?"

I said that it depends and gave only one example factor: time. The other possible factors were quite insightful:

1. Do we have any specific reason to believe that the test platform is malfunctioning? Perhaps someone walked up to you and said "That box is broken!"
2. When we attempt to use it, does it behave in any strange way? (based on experience with similar boxes, or background technical knowledge)
3. Has the box been sitting on the open internet with no firewall or anti-virus protection for more than about 10 minutes?
4. Have you just dropped it on the floor from a significant height?
5. Have you just hit it with a very large mallet?
6. Did you just remove its memory chips?

I would never have thought about such factors if I hadn't been pushed by such a question.

Back to assumptions: James helped me distinguish between critical and non-critical "assumptions". Critical assumptions are those that:

1. Are required to support critical plans and activities. (Changing the assumption would change important behavior.)
2. Are likely to be made differently by different people in different situations. (The assumption has a high likelihood of being controversial.)
3. Are needed for an especially long period of time. (The assumption must remain stable.)
4. Are especially unlikely to be correct. (The assumption is likely to be wrong.)
5. Are expected to be declared, by some relevant convention. (It would hurt your credibility to make the assumption lightly.)

James's final thought was a brilliant comparison between assumptions and bees: "Assumptions to a tester are like bees to a beekeeper."


"We are careful around assumptions, but even being careful we will get stung once in a while but if we try to do away with them, we won't get honey!"


I really enjoyed this coaching session and I am looking forward to the next one.

Alek

Tuesday, February 8, 2011

BBST Foundation 2.0

Last year I had a chance to participate in the Black Box Software Testing (BBST) Foundation 2.0 online course provided by the Association for Software Testing (AST).

Since I graduated I have attended a few technical courses, all of them 3-day offsite courses, so this was my first experience with an online course. I didn't have high expectations for this kind of course, so I was pleasantly surprised by how valuable it turned out to be.

The course lasted 4 weeks and required real work from students, just like standard live classes. We worked on 5 main topics:

- Information objectives
- Impossibility of complete testing
- Oracles are heuristics
- Coverage is multidimensional
- Measurement is important, but hard.

The number of tasks was enormous and the deadlines were tight - 3 or 4 tasks every 3 days. There were video lectures, quizzes, obligatory and recommended readings, orientation tasks, and individual and group tasks. In addition, we were required to evaluate one another. It was a lot of work. We were learning about testing, but at the same time we were improving our communication and collaboration skills. I used to think that all you need to succeed in a course is good memorization skills (ISTQB is a good example). In the BBST course I really needed to work on my answers and to be involved, and thanks to this I could learn faster and understand more deeply. I also realized that there is no single best answer in our field, that everyone has their own meaning for standard vocabulary, and that we understand the same things differently. I finally understood why complete testing is impossible and how we can test differently when given different testing missions.

I must say that I don't remember studying and working so hard on one subject since I graduated. It's hard to describe how valuable this course was for me.

I strongly recommend this course to everyone who wants to work on their own understanding of testing. And if you are done with Foundation 2.0, there is the next one waiting in line: Bug Advocacy. I look forward to that course - it starts this month.

Alek

Friday, February 4, 2011

Testing outsourced to students

Recently I encountered a situation where some testing activities were outsourced to students (aka Short Term Contractors). A number of "very detailed" test scripts were handed over to be executed as regression testing. The students were also given a written manual for proper test execution. They had no prior experience in testing, nor with the specific software context.
We expected from the students evidence of test script execution and bug reports. Personally, I see many concerns with such an approach from the testing point of view.

1. I believe that "testing" is something more than just "test execution". When we focus on test execution, we don't see anything besides the test steps, and this is very similar to automation - the machines don't care what is around the test path. We treat humans as an automation workforce from which we expect only evidence of execution. Keeping in mind that complete testing is impossible, how can we be sure that all possible (or at least the most important) test scripts have been prepared?

2. For every SUT we have a different group of students. It means that at the beginning of every project we need to spend an enormous amount of time answering entry-level questions. In addition, we are obliged to prepare "bulletproof" test scripts in advance if we don't want tons of script defects. I wonder if this time could be spent more wisely.

3. We are not raising good testers. If I had started testing with such an 'execution activity', I doubt I would have continued down this career path. Testing, for me, doesn't mean following someone else's path but questioning what I see, using my mind. This approach doesn't encourage people to question things, to solve problems, to understand context, to have fun with testing - to use their minds. Everyone has their own perspective, and when we expect them to follow a script, we don't take advantage of this diversity.

Alek

Weekend Testing for the first time

I heard about Weekend Testing some time ago, but I didn't have a chance to participate until last weekend.

The testing challenge for WTA06 was prepared by Eusebiu Blindu. It was a game that used a hand of cards; based on the hand selected, an image was generated when the player clicked the "Generate Image" button. The session was divided into 2 parts - testing and discussion.

There were more than 20 participants. Even though it was the American chapter, we had people from England, the Czech Republic, and Poland too :). In the first part we could test separately or work in pairs; it was up to us how to manage the challenge. I didn't have a chance to work in a pair, so I worked alone. The beginning was hard because the messages, which came every second, were very distracting, but after a while I got used to it. I would pick a test idea and follow it. Whenever I found anything interesting, I shared it in the main chat; whenever I ran out of ideas, I picked up someone else's and followed that. It was amazing how quickly that one hour flew by. I had never had a chance to participate in such an insightful testing session.

The second part was mainly a discussion of which skills were essential in this task. We had a chance to see other points of view, to challenge our own conceptions, and to verify our own thoughts. I learnt a lot by watching other people's different approaches - how they used different tools and skills for the same task. Once again I realized how little I know.

Once more, thanks to Eusebiu Blindu for the challenge and Michael Larsen for leading the session.

If you ever have 2 free hours on a Saturday or Sunday and want to stay in shape as a tester, give Weekend Testing a "taste".

Alek

Another Testing Blog

The first testing blog post I read was by Jonathan Kohl. Since then I have found a number of testing blogs and people who share their professional knowledge and experience.

Is there really a need for another testing blog? I don't know.

But I do know that software testers are different: we have different experience and skills, we work in different contexts, and we have different points of view. This diversity lets everyone say something valuable and, at the same time, learn something new. These are my reasons for blogging about testing. Hopefully, I will always have something valuable to share.

Alek