Wednesday, November 9, 2011

My First Post


This is my first attempt at blogging, so let me just say that if I come off as arrogant, it is not my intention, as I hope will become obvious.  So here goes.  I am a QA team member in the Agile development group of a small financial services company.  We have a distributed team of about ten testers on two continents.  Two of us had previous automation experience; the rest of the team are at various skill levels, all taught in-house.  We automate using HP's Business Process Testing (BPT) framework for QuickTest Professional (QTP).  In the last four years we have built a regression suite of about three hundred tests that run daily.

Over a year and a half ago we made a concerted effort to find other people with successful implementations of BPT frameworks.  We wanted to see what was working for others so we could improve our own process.  We were not able to find any.  We joined Vivit, HP’s user community, and were shocked at the level of use and sophistication we saw there.  “We can’t be THAT smart!” we joked to ourselves.  Our search brought us to the attention of HP, and we were asked to give a demo of our framework to the heads of the BPT and QTP product lines.  The idea was that we would get to connect with other companies at a similar level of sophistication to share ideas with.  Still nada.  “We REALLY can’t be this smart!  Maybe all the really smart people are using open source tools.”  So we looked at Selenium and Watir.  We found that they do not come anywhere near the power, utility, and maintainability of our BPT-based framework.  Further, building a framework that even comes close to what we already have would take more development cycles than we can justify when BPT works.  And if other people do have really solid frameworks, we could not find any evidence of them talking about them online.

And then a bombshell: QTP/QC 11.0 was released.  We were thrilled initially, for a number of reasons.  Our developers had been holding off migrating to .NET 4.0 because it would break our regression suite.  QTP 11 also supported Windows 7, which our systems administration team wanted to start rolling out.  And most important (to us), the performance enhancements to BPT would mean our 300 tests would be ready for analysis earlier each day, greatly improving our ability to provide ‘immediate feedback’ to the developers.  But it just did not work.  Just getting a sandbox running with a subset of our test artifacts converted took months.  Then the real problems started.  It seemed to us that every change to the tools either broke something, dumbed down a piece of functionality we relied on, or just made our current process harder to run.  We have been working with HP for over a year on this implementation and still do not have all the critical (to us) bugs worked out.  And now I am rambling.  Back to the point: HP made all these changes after extensive user reviews and input.  Who are these users, and why is their view of these tools so fundamentally different from ours?  And how, a year and a half after the initial release, are we STILL finding new critical issues that not only have not been reported by any other users, but that we have to repeatedly explain to HP WHY they are critical issues?

Then the other day, while talking to my co-worker about his latest (finally successful) attempt to get HP to accept one of our issues as a bug, it hit me.  Why are we so different?

Transparency.  (Finally getting to the point; thanks for sticking it out!)  Our mantra in automation since implementing BPT has been readability, readability, readability.  Every Object Repository entry, Parameter, Component, and Function/Keyword should be named so that it is obvious to anyone with a very basic understanding of our framework (read: a 10-minute intro) what that object is for.  A novice or even (gasp) a Business Owner should be able to look at a test or test results and understand what the test was doing.  To achieve this, our framework acts as an abstraction layer between building components (the real code) and building tests (which anyone should be able to do).  Even though we only have two ‘classic’ automators on our team, the entire team contributes to growing the automated test suite.  The team members who execute, analyze, and to a large degree maintain the tests had no prior automation experience.  Not everyone can build complex components, but everyone can build a test from existing components, stub out a new component where needed, and build simple keyword-only archetype components.  Most QA teams with automation keep the tools in the hands of a few skilled people who can both test and code.  These Automation Specialists build for themselves, and thus the tests require someone with their skills to interpret and maintain.  I now believe all the toolsets and frameworks out there are designed to cater to this paradigm.  There is little incentive for the automators to change, as it feels like giving up what sets them apart and makes them valuable.
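
To make that concrete, here is a rough sketch in Python (this is not BPT's actual API; the component names, parameters, and the little "smoke test" are invented for illustration) of what it means for components to be the real code while a test is just readable data that anyone can assemble:

```python
from typing import Callable, Dict

# Registry of reusable components, keyed by a plain-English keyword name.
COMPONENTS: Dict[str, Callable] = {}

def component(name: str):
    """Register a function as a named, reusable component (keyword)."""
    def register(fn: Callable) -> Callable:
        COMPONENTS[name] = fn
        return fn
    return register

# The components below are the "real code" that skilled automators build.
@component("Log In As User")
def log_in(username: str, password: str) -> None:
    print(f"  logging in as {username}")  # real UI-driving code would go here

@component("Open Account Summary")
def open_account_summary(account_id: str) -> None:
    print(f"  opening summary for account {account_id}")

@component("Verify Balance Equals")
def verify_balance(account_id: str, expected: str) -> None:
    print(f"  checking balance of {account_id} against {expected}")

# The test itself is data, not code: readable by a novice or a Business Owner.
smoke_test = [
    ("Log In As User", {"username": "qa_user", "password": "secret"}),
    ("Open Account Summary", {"account_id": "ACCT-1001"}),
    ("Verify Balance Equals", {"account_id": "ACCT-1001", "expected": "100.00"}),
]

def run(test):
    for keyword, params in test:
        print(keyword)  # the run log reads in the same plain-English keywords
        COMPONENTS[keyword](**params)

if __name__ == "__main__":
    run(smoke_test)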

So are we just smarter than everyone else?  No, but the dynamics of our team, coupled with the transition to Agile, led us down a unique path that I truly believe is better than anything else I have been able to find other people doing.  We have a test suite that can be executed and analyzed by anyone on our team.  The components can be used and reused for regression testing or even temporary sprint work.  Tests can be maintained by almost any tester.  And ‘Super Automators’ are only needed for the most complicated components.  Without this framework we would not be able to execute, analyze, maintain, and grow our test suite unless we hired many more automation experts.

I know many Automators will cringe at this thought.  I did when we started down this path.  At first it feels like giving away the secrets that make us special, valuable, sought after.  But in practice, my skills are more in demand now than before, as more people not only rely on them but actually understand what it is I do.  I get to focus more on the challenging code tasks that I love, while the more mundane parts can be done by anyone.  What this gives us is a diverse team of testers backed by a robust automated test suite that executes and gets analyzed every day, and grows every sprint.

So maybe we are smart enough.  Or maybe we are just missing something so obvious that people don’t feel the need to talk about it.

7 comments:

  1. Great first post, Jason. As a lone Automator (well, lone a lot of things on my team) I greatly value the ability for things to be readable and readily understandable, or as close to it as humanly possible. In my work, I use Cucumber, which is an abstraction over RSpec, which is implemented in Ruby. Why? It's what my company uses (we're a Rails shop). I know the feeling of having gone down many roads and tried many different approaches to get to a state where tests are robust, maintainable and understandable, with the minimum of underlying weirdness to prevent others from understanding what is happening. I don't want to obfuscate what I do. I *welcome* the help. If anyone wants to help me write feature files, I'm all over it. I won't get that help, though, if I don't at least try to make the scripts I work with easy to interpret and the implementation easy to follow. I'd be very interested to see where your development/automation experiments lead :).

  2. I agree with Michael. A very good first post. I've only been involved with automation efforts since 2009, but the big thing I have tried to stress in the last year or so is DRY in our frameworks. I built a framework on top of Selenium RC a year ago, and then on a new project had to migrate to Selenium 2 WebDriver, which required modifying how a lot of the framework was put together. It's still not 100% where I'd like it to be, but I've been building it with the hope of making it as robust and abstract as possible, so any new context or application needs only new page object abstractions built to begin testing, not a retooling of key functions (see the sketch below for roughly what I mean). I think in recent weeks I've gotten to something that pretty much achieves that, but that's really only half the battle, right? It can be easy to get lost in automating so many things and lose sight of what your goals in automating are for testing. In any case, I enjoyed reading, and I look forward to reading more of your blogs.
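
    For anyone unfamiliar with the page-object idea, here's a rough Python sketch (the page, URL, and element locators are invented for illustration; a real framework is more involved):

    ```python
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    class LoginPage:
        """All knowledge of the login page lives here, not in the tests."""

        def __init__(self, driver):
            self.driver = driver

        def open(self):
            self.driver.get("https://example.com/login")  # hypothetical URL
            return self

        def log_in(self, username, password):
            # Locators are hypothetical; only this class changes if the page does.
            self.driver.find_element(By.ID, "username").send_keys(username)
            self.driver.find_element(By.ID, "password").send_keys(password)
            self.driver.find_element(By.ID, "submit").click()

    # A test talks to the page object, never to raw locators:
    driver = webdriver.Firefox()
    try:
        LoginPage(driver).open().log_in("qa_user", "secret")
    finally:
        driver.quit()
    ```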

  3. Jason,

    Nice post. I'm sure I'm not alone in hoping you'll write more blog posts about your experiences. When you do, please include some specific examples of the kinds of decisions you made when you designed your tests: e.g., the appropriate level of abstraction to use in various contexts, what parts of your tests you parameterized, and where the dividing line falls between elements of tests that are maintainable by "average testers" and those elements maintained by automation specialists like yourself. I, for one, would be interested in your thoughts about what choices you made, what options you considered, why you made the choices you did, mistakes you may have regretted later, and "rules of thumb" that your team finds useful in creating and maintaining your tests.

    Thanks again.

    Justin Hunter from Hexawise

  4. I'm always excited to read about other agile teams working in a similar domain to ours. And it's exciting to read about a team which has found an automation solution that provides value for the business folks as well as the development team. Very cool!

    IME it is best to have the programmers writing the test automation code, collaborating with the testers who specify test cases and figure out the right things to test. But it sounds like your team already had a lot of expertise in test automation and with HP's tool set, and you were able to leverage this in an agile setting as well. I'll be interested in what else you learn over the long term. Thank you SO MUCH for sharing your experiences.

  5. Hi Jason,

    Are you saying that not every tester on your team can do full object-oriented programming on a whiteboard, but each still adds great value to your automation? If yes, congrats! This is what good automation can do when people at different levels of programming capability collaborate! It's even more powerful when it's possible to use scripts, tools, and manual testing captures to enhance what testers do based on instinct, experience, and exploration.

    Lanette Creamer a.k.a. blog.testyredhead.com

  6. Nice first post - hope there's going to be more, and that you'll explain more about what you do, how you do it, problems faced and overcome, etc.

  7. Thank you for all the comments. With this opening blog post I wanted to give enough of an overview to provide background for my 'Ah Ha' moment. I plan to use my next few posts to fill in the details you have mentioned, and to answer any other questions that come up.
    In short:
    Lanette: Yes they do, every single one of them! It is a great advantage, especially in a small test team.
    Michael: we looked at Cucumber, and we went through an evolution of self-teaching our whole team to code Ruby as well. I will try to do a post about that process too.
