Friday, 12 July 2013

Conference at a Glance, part II – My glance on the Tuesday AM tutorials

This is the second part of my series of posts about the EuroSTAR 2013 conference. I apply the same evaluation method as before, so read “My glance on the Monday tutorials” before this post if you haven’t done so already.

Ian Rowland’s “Thinking Outside The Locks”

I had not heard of Ian Rowland before, but I must say I’m intrigued. A magician? The biography on the EuroSTAR page makes me want to know more about this fellow. In fact, I googled his name and found his website. I’m really looking forward to seeing Ian do his stuff. I would guess that humor is involved, in addition to a mind-blowing approach to thinking outside the box (or locks, as he says).

To be honest, I think critical thinking, unconventional approaches and non-rational thinking are the tools of my trade. I would be able to use them daily, and they would make me a better tester both short and long term.

I would recommend this to all my colleagues. In fact, I’ve been thinking about a 30-minute workshop on out-of-the-box thinking and using unconventional methods to solve a problem. If I get enough ideas, I might just run it, and I hope Ian can help me generate ideas for it.

To be honest, I can’t think of anything to disagree with in the summary of the tutorial, so I might not be able to challenge him. I’m curious to see how the methods he uses actually translate to software testing. I might steal some of the details and tricks he uses for my own work, like I mentioned before.

The tutorial seems very interesting as a lightweight opener for the conference (even better after a full day of Monday tutorials), but is this the best option out of a cast of many great speakers? Whether I could ever be the one giving a half-day tutorial on the subject is another matter, considering I don’t have that much experience in coaching out-of-the-box thinking.

After the smoke clears and the magician bows, I would like to see/hear/learn how to implement the skills and theories Ian shows us. Theory is good and all, but I would like to see results in the testing craft to be happy with the tutorial.

On my Birdy scale, Ian Rowland’s “Thinking Outside The Locks” would scale as follows:

  • Person-to-person: **
  • Short time value: ***
  • Long time value: **
  • Steal-ability: *
  • Challenge-ability: 
  • Total: 8/15 stars

Prof Harry Collins and James Bach’s “Using Sociology To Examine Testing Expertise”

I think I said enough about James Bach in my previous blog post, so I will concentrate on Prof Collins. I find his resume quite impressive. He has (co-)written books that we testers should be reading (I haven’t yet), including “Tacit and Explicit Knowledge” and “Rethinking Expertise”. I am eager to hear what he and James Bach have come up with. The duo of the unschooled (but not untaught) and a university professor could spell doom to us mere mortals. I’m truly looking forward to their tutorial.

I am really eager to listen to stuff about meta-knowledge (or knowledge about knowledge). I do not, however, see a short-term benefit from it; it will eventually develop my sense of self-analysis. I’m very interested in any studies about testing and testing methodologies, and this tutorial taps into that – using tacit knowledge in addition to radiant expertise.

I don’t know straight away how I could harness the tutorial for the benefit of my fellow testers. If the tutorial addresses social tacit knowledge, I might be able to help the company benefit from acknowledging that knowledge.

I am keen to challenge the claim that there are skills that no single person possesses, only a group of people. Let’s say I invite a group of people to my house and I want to learn Chinese. None of these people knows Chinese, but as a group we might possess the skills to communicate in Chinese? Am I on the right path here? Or are we talking about a more-than-the-sum-of-its-parts mentality, where we would all know just a little Chinese or a language close to Chinese?

I would refrain from teaching this. Maybe I could mention it and guide people to seek out material appropriate to this experiment. I have no previous knowledge of this kind of study, as I lack the university background.

Just like with Ian’s tutorial, I would like to see or hear something that I could implement in my own work. What do I do with the information about which skill sets the teams have?

On my Birdy scale, Harry Collins and James Bach’s “Using Sociology To Examine Testing Expertise” would scale as follows:

  • Person-to-person: ***
  • Short time value: 
  • Long time value: **
  • Steal-ability: 
  • Challenge-ability: **
  • Total: 7/15 stars

Peter Zimmerer’s “Questioning Testability

I’m beginning to wonder what people think of me as a community member when I don’t know most of the people giving presentations and, more importantly, the tutorial speakers. Pre-analyzing the tutorials also helps me familiarize myself with the people, so I can recognize them at the event in Gothenburg. I believe I have a lot to talk about with Zimmerer on all kinds of things, but I believe we can make a conversation out of his topic too.

Testability is a freaky subject for me. I might be living in a bubble where we almost automatically plan our products with testability in mind. We aim to make testing as fast and efficient as possible, so this tutorial might not give me too much in the short term. I do believe that testability is one of the key things that enable efficient testing, so I more than recommend this tutorial to everyone!

I do believe that I could benefit from revolutionary points of view, which I hope Peter will provide. At some point, when testability becomes more of a worry for me, I might need the skills. The ability to promote testability could also be important for me in this company. At some point the leading testability evangelists might leave the company, so we need as much tacit knowledge on testability as possible.

I’m expecting a lot of practical examples that I could share myself (possibly after altering them to my own flavor). In that sense, the stealability of this tutorial is quite high. I would focus mostly on practical applications of testability, because testability as theory is quite trivial. People seem to have trouble understanding how they can make stuff happen in practice.

When it comes to questioning, the words “step-by-step” raised a red flag. Is this method an omniscient, all-encompassing process? I hope this tutorial doesn’t turn into “do this and everything will be fine” but into “apply these skills where it’s reasonable”. If I am to join the tutorial, I will definitely challenge that.

On my Birdy scale, Peter Zimmerer’s “Questioning Testability” would scale as follows:

  • Person-to-person: *
  • Short time value: *
  • Long time value: **
  • Steal-ability: **
  • Challenge-ability: ** 
  • Total: 8/15 stars


Anne-Marie Charrett’s “Coaching Software Testers”

James has mentioned Anne-Marie a couple of times in our conversations and praised her coaching skills. Then again, I must admit that I have not made myself too familiar with her work. I’m looking forward to seeing her and possibly having a chat at some point. Hopefully she’ll be able to donate some of her time to me.

Actually, the coaching method described here is something I have already tried a few times. First James Bach coached me using Socratic questioning, and then I used it to coach Erik Brickarp, Jari Laakso and Aleksis Tulonen, among others. The amount of learning on BOTH sides was phenomenal. I would love to gain more skills in this area to be able to continue my journey as a coach. This is something that both my colleagues and my fellow crafts(-wo-)men will benefit from. I have some skills to begin with, so I will employ them in the future for the benefit of all, including me.

Having said that, I will try to steal as much as possible from this session and mold it into my own. I recommend this session – yes, without having yet attended it, but having faith in it like in no other! Coaching skills are paramount in a tester’s skill set if they ever want to become true professionals.

I find it hard to challenge this for two reasons: I consider myself a member of the coaching congregation, and I find it hard to challenge something I have unquestionable faith in. I am willing to try challenging it for the sake of argument, but facing Socratic questioning while arguing for argument’s sake might be my downfall.

On my Birdy scale, Anne-Marie Charrett’s “Coaching Software Testers” would scale as follows:

  • Person-to-person: *
  • Short time value: **
  • Long time value: ***
  • Steal-ability: ***
  • Challenge-ability:  *
  • Total: 10/15 stars

James Christie’s “Questioning Auditors Questioning Testing”

James Christie (how many people called James are presenting at this conference?) is one of those who fall into the same category as Anne-Marie – I would love to talk to them based on what I have heard from my fellow community members – but I have never delved into James’ work in depth. Maybe I can sneak to his lunch table and steal a minute to talk about testing. ;)

In the past, I worked in a company where an audit was held. I was part of a group that coached the people being audited to answer the questions “correctly”, to appease the auditor. That is the wrong approach to an audit – we did not always act according to the documented process but according to our best knowledge of the situation, and the audit was about the documentation. The auditors were held in such a high position of authority that they were not challenged – I was not allowed to talk to the auditors. ;)

I don’t see a short-term benefit in this tutorial, however. I’m not currently in a position to be part of audits at F-Secure. We do have security audits and the like, but I have yet to be invited to one. I might benefit from James’ tutorial if I focus on questioning instead of auditing. If the scope weren’t so narrow as to concern only audits, I would find it more beneficial to my current work.

If I could combine questioning with other areas, like specific levels of testing (unit, module, etc.), I might be able to teach or coach other testers and programmers to question their work more efficiently. So long-term benefits could outweigh short-term ones. Moreover, I don’t know where life will take me, so having some skills in challenging auditors might be my thing in the future.

I have limited knowledge of auditing as such, so I find it difficult to disagree with questioning. Usually the person being questioned benefits from the questioning too. I have been in situations where I learned more by being challenged than by acquiring book knowledge on the subject. Like I said earlier, I would like to see tracks on more general questioning, arguing and challenging. This tutorial might answer some questions I have, but I’m not sure at this point.

On my Birdy scale, James Christie’s “Questioning Auditors Questioning Testing” would scale as follows:

  • Person-to-person: **
  • Short time value: 
  • Long time value: **
  • Steal-ability: *
  • Challenge-ability:  *
  • Total: 6/15 stars


Pradeep Soundararajan & Dhanasekar Subramaniam’s “Context Driven Mind Mapping”

I know Pradeep from tweeting with him and reading his blog. I have also followed the progress of Moolya for some time, and I’m really impressed by their success. I’m also looking forward to meeting Pradeep and Dhanasekar in Gothenburg to talk about mind mapping and all testing-related stuff. I’m glad that Pradeep is hosting two sessions at the conference, so I can join at least one of them.

I’m a bit of a mind map enthusiast myself, so this tutorial is almost tailored for me. I find a lot of things here that are almost exactly from my workshop a year ago at Nordic Testing Days 2012. I do believe that I have a lot to learn about both using mind maps and hosting workshops. In the short term, I would like to learn the most effortless way to utilize mind mapping. I tend to procrastinate during testing, so if a mind map can keep me focused, I will be on cloud nine. I also see mind maps as the tool of the future, for they utilize the brain instead of some arbitrary tool.

This tutorial would be worth stealing in its entirety, and then I would go on promoting the ideas and practices to my company and to my peers in the community. In the long term, mind maps could help make exploratory testing both understandable and credible to stakeholders with minimal effort on documentation. I have already played around with the thought of decompiling the mind map into coverage charts by script, so this might further automate the documentation of exploratory testing.
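The decompiling idea could start really small. A minimal sketch, under assumptions of my own making (a tab-indented text export of the map, one node per line, a “[x]” prefix marking areas already tested – the format, markers and function name are all hypothetical):

```python
# Sketch: roll a tab-indented mind-map outline up into a per-branch
# coverage summary. Assumed format: one node per line, tabs for depth,
# "[x]" marks a node that has been tested.

def coverage_by_branch(outline: str) -> dict:
    """Count tested vs. total child nodes under each top-level branch."""
    stats = {}
    branch = None
    for line in outline.splitlines():
        if not line.strip():
            continue
        depth = len(line) - len(line.lstrip("\t"))
        name = line.strip()
        if depth == 0:
            # A new top-level branch of the map.
            branch = name
            stats[branch] = {"tested": 0, "total": 0}
            continue
        stats[branch]["total"] += 1
        if name.startswith("[x]"):
            stats[branch]["tested"] += 1
    return stats

if __name__ == "__main__":
    outline = (
        "Login\n"
        "\t[x] valid credentials\n"
        "\t[x] wrong password\n"
        "\tlocked account\n"
        "Reports\n"
        "\t[x] export to CSV\n"
    )
    for branch, s in coverage_by_branch(outline).items():
        print(f"{branch}: {s['tested']}/{s['total']} covered")
```

From a summary like that it is a short step to feeding the numbers into whatever charting tool the stakeholders already trust.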

As for challenging, I know where I stumbled in my own workshop, so I might tap into those subjects. First would be context-switching during testing, from the mind map to the software under test. If the mind map requires yet another window in addition to a database browser, Unix log screens, browsers, standalone tools, etc., the context-switching becomes a burden in the long run. Second would be the “free form” of the maps, which can result in inconsistent ways of reporting. I’m curious to see how Panda and Commander can tackle these. :)

I would recommend taking the Monday tutorial by James Lyndsay and combining it with this one to make an awesome combo of exploratory testing and modeling. I haven’t yet decided which to attend, but you, dear readers, should consider this combo really hard.

On my Birdy scale, Pradeep Soundararajan & Dhanasekar Subramaniam’s “Context Driven Mind Mapping” would scale as follows:

  • Person-to-person: ***
  • Short time value: **
  • Long time value: **
  • Steal-ability: **
  • Challenge-ability:  **
  • Total: 11/15 stars

Afterword

Once again I have not yet decided which to attend. There seem to be two top dogs right now, but I cannot yet say which one I’ll pick. I might even change my mind right before the session if other community members talk me into joining a session other than the one I would have chosen.

Anyway, I have quite a task ahead of me to plow through the conference tracks one by one. But rest assured, I will go through as many as I can.

Also, I was interviewed for the EuroSTAR community spotlight. I thank Emma Connor for the interview, and wish her and every tester out there a great summer!

- Peksi

Monday, 8 July 2013

Conference at a Glance, part I – My glance on the Monday tutorials

EuroSTAR 2013 is knocking on my door. It's about time to get some scribbles about it onto my blog.

A quick word about what I’m doing here: I’m trying to get my thoughts on paper, both to ease my burden of choosing which talks to attend and to write out my thoughts before the conference (as after the conference I might have a bit of an "information hangover", and I'm not that keen to write about stuff I didn't attend). If you think I am mistaken, have misinterpreted something or am outright wrong, please comment and let’s discuss it. I also encourage all the speakers I mention here to comment on my expectations. I might be off by a mile on the true content of a tutorial, so please correct me if I’m on the wrong track.

I will also try a heuristic grading system to determine what would be the best session for me. I will grade each session on the Angry Birds™ scale – 0–3 stars per area – in five areas:

  • Person-to-person (How will the person and his/her work affect/inspire me or the people I know?), 
  • Session value – short time span (How much can I get out of the session tomorrow – next year?), 
  • Session value – long time span (How much can I implement in my work and teach to my colleagues and my community?), 
  • Steal-ability (How much of it am I willing to borrow and further develop to make it better and, more importantly, mine?), and 
  • Challenge-ability (My past knowledge of the topic and my willingness to challenge the session contents.)


A score of 0 stars doesn’t mean “I hate it”; instead, I’m looking at personal value against my beliefs, biases, former knowledge and worldview. And I’m not saying that I won’t attend a low-scored session – I might change my mind after talking to people at the conference and to the speakers. So I encourage you to talk to me about your session and comment on what I have written about you.

James Lyndsay’s “Insights Into Exploratory Testing”

As a first thought, James Lyndsay isn’t a person I recognize as a personal idol. He ought to be, because I read his blog once in a while, but I haven’t really gotten into his stuff. I love the blog series in which he describes different ways to manage exploratory testing! Kudos for that! However, I think I need to see him in person to suck in the charisma he might have. After that, I might regard him as one of my top-3 testers. Right now, he’s a bit of a mystery man to me. And I always mix him up with James Whittaker! *grin*

I’ve done exploratory testing for a while now, so to answer the question “What’s in it for me tomorrow/this year?”: I think hands-on testing practice could be a remedy for my “sophomoric” (I have knowledge, but not enough experience putting it into practice) way of approaching testing. The analyzing nature of the workshop could help me be a better tester on a daily basis. I was struck by a realization during a few pair-testing sessions with people I really look up to: they were so much better hands-on testers than I, yet they had high expectations of my skills.

Because I need to bring something back to the “Ye Olde Sweatshop” in Helsinki, I need to look for stuff that is good for my colleagues. The nature of the workshop could sprout some lightning workshops or sessions with my testing fellows, so I’m keen to see how to run successful workshops with people from different backgrounds. I’m also keen to see how I can use attacking and exploitation skills in my everyday testing, ’cause I work at a security software corporation. So I might be able to bring that home with me. Overall, everything in exploratory testing might be good for my colleagues; the focus in my company should continuously veer towards the exploratory approach. This workshop could give a lot – that’s for sure.

As I always tend to look on the bright side of life, I do want to be able to challenge James and his topic. So which areas might be prone to me disagreeing? Workshops usually bring new courses to the testing buffet, so I would have to taste the dish before making hard decisions about it. I can’t say where I would disagree, but I would definitely ask how you could implement the exploratory approach in a company that mainly uses a test-case-driven approach. Let’s say there’s a company that uses only outsourced testers, and they have one test analyst whose job is to design proper tests for that group of testers. How could one use the exploratory approach to make their testing more efficient, creative and manageable?

What could be the next step after this workshop? Personally, I would try this on a larger scale. Oh! That’s a good question for James: “How will I be able to scale this up? How do I manage a team of these explorers?” James Bach’s workshop could answer some of these questions. If it were somehow possible, I would attend both workshops and then combine them seamlessly. The next step would be to bring Lyndsay’s techniques to a broader audience and to organizations where testing is still in its infancy (i.e. über-controlled, test-case driven, hierarchical, to name some traits).

I have one more thing to say. Mr. Lyndsay, if you happen to be in Gothenburg on Sunday evening, I would like to have a pint or two with you and talk about testing and your workshop. I might not attend it, as there’s so much to choose from, but I’d like to talk to you about this particular workshop. You can give me a shout on Skype or Twitter, if you want to. :)

As for James Lyndsay’s Monday tutorial “Insights Into Exploratory Testing”, I would score it like this:

  • Person-to-person: **
  • Short time value: **
  • Long time value: **
  • Steal-ability: **
  • Challenge-ability: **
  • Total: 10/15 stars



James Bach’s “Rapid Test Management”

Personally, I know James Bach and I’ve spent some time with him face-to-face. I attended his RST class in 2012 and the Rapid Testing Intensive course. I was exhilarated to be invited to the course as a special guest, and it was one of the most joyous events of my late 2012. I had the opportunity to have dinner with James and to get face-to-face coaching. He’s a radiant person whom some people misunderstand as angry or frightening – choosing to see behind the façade gives you the opportunity to see a multi-dimensional, empathic person who is a power in the testing craft all around the globe. I know it sounds like I’m secretly in love with him, but trust me: he can turn your world upside down.


As for the Monday tutorial, I feel the topic is slightly too narrow for me. I learned a lot on the RTI course last year, and I think this tutorial might repeat some of that. For short-term value, the tutorial might not be in my top-3. I have a lot to learn about test management, yes. I do think, however, that I can get more by finally starting to implement the stuff I learned a year ago; the opportunities to do that have been scarce. I need more advice on how to implement the lessons in my work instead of repeating the theory to myself.

What could I do to make my colleagues’ lives better using the lessons learned from the tutorial? I have experience already from the RTI, but my situation after that session forced me to forgo implementing the stuff I learned. My job as a maintenance manager also did not fully support further teaching of the methods. At the moment, I am in a position where I could help my team and all Quality Engineers at F-Secure make their testing better (using exploratory testing) and manage it properly.

I am quite biased in challenging James’ tutorial, because I wish I were the one with the skills to pull off a tutorial like that. I have seen the traps and pitfalls at least partially already, but I think I need a different perspective on the tutorial altogether. I like to think about the scaling of the methods and also the managing of outsourced testing. How will the methods James proposes scale across a rigid and widespread organization? How could I manage a team of testers in Kuala Lumpur according to these principles? I would have to delve a bit deeper into his material. That would give me pointers on the terms and techniques he uses, and possibly I could find some holes in his reasoning to sprout a fruitful conversation.

When I think about the areas I’m willing to steal, the whole material could be worth stealing. The concept of managing exploratory testing using the Rapid approach gives me new tools and tricks to make our testing at F-Secure more efficient and manageable.

The next step with this tutorial would be almost the same as I described earlier: the approach needs to be implemented in a team to see where it might go wrong. In addition to the tutorial, one might benefit from different coaching and mentoring lessons so that problems are found early and dealt with – almost like testing your test management.

On my Birdy scale, James’ “Rapid Test Management” would scale as follows:

  • Person-to-person: ***
  • Short time value: *
  • Long time value: **
  • Steal-ability: ***
  • Challenge-ability: *
  • Total: 10/15 stars


Paul Gerrard’s “How to Create a Test Strategy”

Paul is one of those people whose name pops up here and there – he seems to be able to do everything. Personally, I don’t know him or his work, but it seems it’s about time. The description on the EuroSTAR page says he attends lots of events, so it’s clear why he’s a well-known person. It's hard for me to develop an understanding of Paul’s achievements on a personal level, but after a few YouTube videos and some googling, I think he knows his way around testing. I may not agree with some of the stuff he speaks for, but I cannot put my finger on any specific topic. Hopefully I can get some challenging happening about his EuroSTAR tutorial.

The first thing that comes to mind is a hint of worry – he’s using a template. I know from past experience that templates are often abused and used to hide incompetence and lack of interest. They look good, though. Is there something for me in this tutorial that I could use in the near future? I wish I had taken this tutorial 5 years ago, when I was struggling to tears with a test strategy to cover a whole organization’s testing. I did do some short and efficient test strategies on a project-by-project level, but the framing, all-encompassing strategy was a vague scribble that I loathed where others loved it. I knew I could do better, but I didn’t have the motivation to really get into test strategies at that time (or the time to do it, I might add). For “past” value I would rate this really high, but now I don’t see too much value for my work. This could be one of those inspiring and challenging tutorials to attend, but I don’t see short-term value for me here.

How could I use a test strategy workshop in my company? I’d have to say: in a bunch of ways! The opportunities provided by honing the existing strategy and being able to make both high- and low-level strategies would be helpful. A test strategy does push you towards organized testing, and it even gives credence to one’s testing if one has a strategy to follow. In that respect, I might even benefit from it.

I try to find something worth disagreeing with or challenging in all the tutorials, but in Paul’s case the answer is clear – templates. I loathe templates as the basis of documentation. They usually lose their meaning as templates and become “fill form and deliver” documents. So I am eager to challenge templates in every aspect I can. I would rather have a set of skills enabling me to create my own template than a readymade one. Does Paul provide ways to build a skill set for that purpose?

I would like to understand a bit more about the techniques Paul Gerrard uses to create a test strategy; I guess the only way is to attend the tutorial. For now, I don’t see much else worth stealing than the concept of improving test strategy thinking. I’m not sure if this actually increases or decreases my willingness to join Paul’s tutorial, because I might lack the knowledge to make that decision.

Personally, I would focus more on the skill set instead of the template. If he does encourage a mindset change, then that would be one of the things I’d like to steal too. This tutorial would benefit from putting the test strategy into context and possibly applying it in a testing project. I hope Paul has some concrete material on how strategy implementation has worked in the past, if he has used it before.

On my Birdy scale, Paul Gerrard’s “How to Create A Test Strategy” would scale as follows:

  • Person-to-person: *
  • Short time value: **
  • Long time value: *
  • Steal-ability: **
  • Challenge-ability: ***
  • Total: 9/15 stars


Torbjörn Ryber’s “Boosting Your Test Design Powers”

I know Tobbe just a little. I have met him face-to-face, but we haven’t actually had a long conversation. What lingers in my mind from our talk is a single phrase: “High-6!” I hope I get a chance to talk to Tobbe at the conference at least once, for he’s a great character. He has tremendous knowledge of test design and critical thinking, and besides, he’s funny as hell! So I encourage all of you: go and talk with him – you won’t get bored!

The tutorial seems, at a glance, to be a run-of-the-mill presentation on test design. I have Tobbe’s book on test design, and it is THE book for every tester! If you don’t have it, join the tutorial – you will receive a complimentary copy of “Essential Software Test Design”. Just like James Lyndsay’s tutorial, this one would be a fun thing to join, as I already know something about the topic, and increasing my skills in test design cannot hurt. Besides, ways to design testing with tools (i.e. charts, graphs, mind maps) would help me in my daily work. I do hope that we get some hands-on experience in using the design techniques.

I have already promoted Tobbe’s book here at F-Secure, and I lent my copy to one of our Quality Engineers to give her some advice for her work. I would like all our testers to join the tutorial as a wake-up call. As this is elementary for any tester out there, I encourage every budding tester and developer to join the tutorial. For those who have been doing test planning for a long time, it could act as a reminder of good practices in test design.

Disagreeing with Tobbe’s topics can be difficult, because I believe in most of them. I do, however, see an opportunity to play devil’s advocate and challenge Tobbe and his claims. Still, I see it as more beneficial to get into a debate on a separate occasion, to let him bring out the most of the tutorial for the people who need the knowledge. I’m not saying I don’t need more knowledge, but I will restrain myself from spoiling the fun for others.

I did a similar class a few years ago, when I taught test design to our testers and developers. My point of view was, however, a little more exploratory-testing oriented and heuristic driven. I could try to steal some pointers from Tobbe to support my own material. I could then arrange a workshop here at our office to spread the joy.

The next step after this tutorial would be a walk to the test lab to use these skills in practice. In that sense, the schedule of this tutorial is perfect: later in the conference, people could try their newly learned or refreshed skills on testing stuff. I think Tobbe would appreciate the feedback that practical use of those skills could bring – for example, which areas require more attention in the future?

On my Birdy scale, Torbjörn Ryber’s “Boosting Your Test Design Powers” would scale as follows:

  • Person-to-person: **
  • Short time value: **
  • Long time value: **
  • Steal-ability: **
  • Challenge-ability: **
  • Total: 10/15 stars


Matt Barcomb and Jim Holmes’ “Becoming A Testing Craftsman”

To be honest, I don’t remember any significant details about either of these guys. It seems that Matt is a busy conference speaker (according to his blog), but I have not yet found the tone that would be in harmony with my own thoughts about testing. I do see that he’s a coach, so I could try to approach him via Skype to ask for some coaching and to chat about the conference topic. Jim also seems to be quite a veteran of conferences. I scrolled through his blog, and the slides he shares are excellent! I love the way he makes things simple and bite-sized. If the duo is anything like what I have learned about them, I would definitely want to meet them and talk about coaching, motivation, innovation and testing topics.

In the short term, this tutorial is like candy to me. Little programming exercises? Sign me up! Building your own tools? I’m there! I also see that these guys focus on skills to get things done – automation is an extension of the human mind, not the only solution to testing. This is by far the most fitting tutorial of the first day for me and my development, short and medium term. I am getting into trouble at work with some tedious manual repetition that could be made easier with a shell script or a Python script. I hope these guys can brrring it!
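The kind of repetition I mean is, say, digging the same error lines out of a pile of log files every morning. A sketch of the sort of throwaway script that fixes it – the directory layout, file naming and “ERROR” keyword are made up for illustration:

```python
# Sketch: collect every line containing a keyword from all *.log files
# in a directory, instead of opening them one by one by hand.
from pathlib import Path

def collect_errors(log_dir: str, keyword: str = "ERROR") -> list[str]:
    """Return 'filename: line' for every matching line under log_dir."""
    hits = []
    for path in sorted(Path(log_dir).glob("*.log")):
        text = path.read_text(encoding="utf-8", errors="replace")
        for line in text.splitlines():
            if keyword in line:
                hits.append(f"{path.name}: {line.strip()}")
    return hits

if __name__ == "__main__":
    # Assumed layout: a local "logs" directory full of *.log files.
    for hit in collect_errors("logs"):
        print(hit)
```

Ten lines of Python, and a half-hour chore becomes one command – that is exactly the craftsman attitude I hope the tutorial drives home.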

When I think about how to educate and help my fellow testers, I’m not actually sure if I could bring enough to the table. I believe, though, that the attitude of being a craftsman instead of a rank-and-file employee could be beneficial both to them individually and to the company as a whole.

I would like to learn more programming and tool building before I could confidently teach how to build them. There’s a hell of a group here who can make tools, apps, whatever you need to make testing easier. I’m willing to see the applications of tool building and of test data creation/management, and possibly try to teach those to my fellows.

This is an interesting topic, because I have little experience in actually building something functional for others (a few scratch-built scripts for myself, but nothing serious). If I were to choose, I would like to hear more about the attitude of craftsmanship and how we can spread the word around. I believe that craftsmen aren’t supposed to keep their secrets to themselves but to share their wisdom, like Matt and Jim do.

I’d like to see these teachings implemented in some practical hands-on session, a test lab or something, so we can really get our craft to shine. I would also promote testing to people other than testers: I think managers, documenters and all software project stakeholders could benefit from the craftsmanship attitude.

On my Birdy scale, Matt Barcomb and Jim Holmes’ “Becoming A Testing Craftsman” would scale as follows:

  • Person-to-person: *
  • Short time value: ***
  • Long time value: **
  • Steal-ability: *
  • Challenge-ability: *
  • Total: 8/15 stars


Afterword

I’m still teetering over which tutorial to join, but I will try my best to decide. I know I need to make decisions fast so I can book my seat before they run out. I will let you know which one I chose after I book the tutorial. For now, I will keep on writing about the conference. I’ll try to cover every conference day at least on some level, but I do not guarantee anything.

As you might know, I am speaking at EuroSTAR on Wednesday. I would like to hear what you would want to get from that session if you plan to attend it. I will also hold a preliminary practice talk here in Helsinki, so if you’re interested in joining, give me a tweet.

Friday, 5 July 2013

You gotta fight for your right to test!

I am terribly sorry for the soon-to-be rant and the biased output that is about to follow. I have done this once before, and the conversation resulting from it was rather pleasant and constructive. This is a comment / response / rant about a blog page I stumbled upon today. You can find the original post here. The page may be outdated, since it was created in 2010, but it seems it was updated half a year ago.

My first thought was “not another test case post” when I read the article. I’m currently trying to figure out what I will write in my next column for the “Testaus ja Laatu” magazine, and since the theme is juxtaposition, I thought I’d try to write some blog-stuff first. One of the topics could be “test cases vs. no test cases”, and I shy away from that strict way of thinking. Exploratory testing, to me, is utilizing all the available tools and gimmicks to get to the best results. A black-and-white world view narrows my take on testing too much.

Having said that, I will use my recently found taste for logic to point out what I think could be wrong in the article. I do not know the state of mind in which the article was written or the context into which it is supposed to fit, so I will project it onto my own workplace where necessary.

You are warned…

The first sentence goes like this: “A significant factor in a successful testing effort is how well the tests are written.” I have no idea how significant it is supposed to be; I would guess either the largest factor or the one right after it. When I do testing, the significance of the test cases is minuscule, and I tend to do successful testing. Not all the time, but I know certain cases where properly written test cases did not contribute to the successful testing effort. Also, I don’t exactly know what qualifies as a successful testing effort. It could mean having run all the tests within the schedule (which doesn’t qualify as successful, for many reasons), having written all the test cases (do I need to say more about this?), the product being shipped to the end user (lots of products have been shipped to customers and later fixed due to poor quality), etc. So I would say that well-written test cases /may/ be a factor in a successful testing effort, just as well as poorly written or not-at-all-written test cases.


“They have to be effective in verifying that approved requirements have been met AND the test cases themselves are understandable so the tester can run the test as intended by the test author.” 
So the test cases have to be effective in verifying requirements. Granted. Do they /have to be/ well written? No. Even poorly written test cases can be effective at verifying requirements. I think the ability to verify something comes not from the writing of a test case but from the skill of the tester. If a tester is skilled, there may be no need for any written test cases to get the job done. “--approved requirements have been met--” So anything that is not written into the requirements doesn’t get tested? We also have expectations about the product that don’t really qualify as requirements. Testers still need to be aware of anything that might threaten the quality of the product.


“The 3C’s: Clear, Concise, Complete. Test Cases are not open to multiple interpretations by qualified testers. Ambiguous or nonspecific Test Cases are difficult to manage.” 
What if we do not know enough about the product at the time we write the test case? Should we wait until the product is finished and then write the cases? Isn’t that just a huge waste of time and money? And when it comes to managing test cases, I think lines of text are the easiest things to manage; it’s the people that might require managing. It is true that estimating coverage and depth may be difficult if the test case is ambiguous. I also think it is difficult to estimate those even with good and precise test cases. People themselves are the best judges of the depth of their testing and of their confidence in it. ASK IF SOMETHING IS AMBIGUOUS. -> Fight ambiguity with openness and communication.


“Test Cases are easily understood.  They provide just enough detail to enable testers familiar with the functional area under test to run the test case. Note:  Detailed click by click steps are only useful for automated tests.” 
This is something I agree with! A test case can be easily understood even if it’s like this: “Play around with the product and describe to the person next to you what it does.” It is easy to understand, and you already have enough familiarity, because the base requirement for that is none! This could be the baseline for any testing ever done, in any context. FAMILIARISE YOURSELF WITH THE PRODUCT. -> Fight ignorance with eagerness.


“Test cases include setup requirements (environment & data), expected results, any dependencies on other tests and tracing to the requirements they validate. Are traced to the results of each test run, including defects (bugs) discovered, test platform run on, software build version used.” 
For the data and expected results, I recommend you all read Michael Bolton’s and James Bach’s conversation and decide for yourself whether a test case is complete with expected results. It is necessary to document the kinds of things mentioned in the text, so as to avoid unnecessary overlap. But traceability of bugs back to test cases? Is that possible for bugs that are found during test case execution but outside the intended observation area? BE PREPARED FOR UNEXPECTED RESULTS WITH UNEXPECTED INPUTS. -> Fight patterns with chaos and vice versa.


“Measurable: For each test case, there must be a way to objectively measure that the test case has either passed or failed. All test cases must be linked to the requirements that they verify to enable impact of changes made to the requirements or solution designs to be identified (measured).” 
OK. Let’s say that the test is executed perfectly and the result is exactly as expected. A minute after that, the computer crashes. Does that count as failed? How much do we actually know about the product when we declare that a test has passed or finished? And if we want to find bugs, isn’t the test passed only if it finds bugs? I think a test only fails if you don’t run it, because then it didn’t test anything. A TEST ONLY FAILS IF IT IS NOT RUN. -> Fight unnecessary documentation with rightly timed planning, i.e. keep the time between planning and doing to a minimum. Preferably do them at the same time.


“The test case must have been approved, prior to being run, by the key stakeholders of the requirements that the test case verifies. Any changes made to test cases caused by requirements or solution designs changes must also be approved.” 
There is ABSOLUTELY NO POINT IN THIS! Why the hell do we need approval for our testing? The only thing that requires approval is the results of our testing. If the stakeholders do not trust us to test the software, we could record everything we do and ask them to audit the test material. I would think they are happier to audit actual results than worthless, constantly changing, trivial documentation. IF WE NEED EVERY TEST APPROVED, WE LOSE MONEY AND TEST COVERAGE. -> Fight über control with proper* documentation.


“Realistic test cases DO: Verify how the product will be used or could be misused, eg positive and negative tests. Verify functions and features that implement approved product requirements. Can be run using the platforms and software configurations available in the test environment.
Realistic test cases DO NOT: Verify out of scope functions and features. Verify unapproved product requirements.”
I agree with the first sentence: a good test could test how the product could be used or misused, and also how it should and will be used. How much does the product actually differ from what the customer actually wants? I also agree with the second sentence, but it should be broader: it should cover the features, platforms, integrations, data integrity, security, performance, etc. I don’t understand the third sentence, for if we don’t have the proper environment or setup, we should acquire them in time. If we do have a scope (referring to the fourth sentence), then I agree that we should not spend too much time on out-of-scope elements. They could still be mutated or broken by changes made somewhere else, though, so regression/smoke testing should be applied to out-of-scope areas. As for the fifth sentence, I don’t understand what is meant by “unapproved”. Unapproved by whom? By the testers, the client, the developers, the managers? It really depends, so making a claim like that is just nonsense. A REALISTIC TEST CASE IS REALISTIC BUT FLEXIBLE. -> Fight rigid descriptions with autonomy, critical thinking and challenging.


“Test Cases must be able to be successfully completed within the project scope and schedule.” 
There is no point in writing test cases that are never run. But “must”? And why do they have to be “successfully completed”? Is a failed test a “successfully completed” test? If a test finds three hundred bugs, is it successfully run even if it doesn’t reach the expected result? WE CAN ONLY RUN SO MANY TEST CASES. -> Fight the quantification of test cases with the amount of time spent doing testing. Count time, not test cases.



To summarize, I think the whole concept of SMART test cases is wrong, excluding the few things I agreed with. I also encourage you to keep the writing of test cases to a minimum and the amount of testing done to a maximum. Use the time wisely and appropriately! If you do, consider these instead of the SMART way:

  • Fight ambiguity with openness and communication
  • Fight ignorance with eagerness.
  • Fight patterns with chaos and vice versa.
  • Fight unnecessary documentation with rightly timed planning, i.e. keep the time between planning and doing to a minimum. Preferably do them at the same time.
  • Fight über control with proper documentation.
  • Fight rigid descriptions with autonomy, critical thinking and challenging.
  • Fight the quantification of test cases with the amount of time spent doing testing. Count time, not test cases.


I will leave you with this. I promise I will be back with more about test cases (if I have the time). I’m not saying “don’t write test cases”, but use your head! It is not smart (pun intended) to follow some rigid procedure in every context; adapt to the situation and make the most of the time you have to test.

- Peksi

* Proper: appropriate and as automated as possible, including video recording, automated logs, scans of scribbles, session sheets, statements, etc. Whatever is required to get enough information to the decision makers without hindering the job of the tester.