Wednesday, November 9, 2011

My First Post


This is my first attempt at blogging, so let me just say that if I come off as arrogant, it is not my intention, as I hope will become obvious.  So here goes.  I am a QA team member in the Agile development group of a small financial services company.  We have a distributed team of about ten testers on two continents.  Two of us had previous automation experience; the rest of the team are at various skill levels, all taught in-house.  We automate using HP's Business Process Testing (BPT) framework for QuickTest Professional (QTP).  In the last four years we have built a regression suite of about three hundred tests that run daily.

Over a year and a half ago we made a concerted effort to find other people with successful BPT framework implementations.  We wanted to see what was working for others so we could improve our own process.  We were not able to find any.  We joined Vivit, HP's user community, and were shocked at the level of use and sophistication we saw there.  "We can't be THAT smart!" we joked to ourselves.  Our search brought us to the attention of HP, and we were asked to give a demo of our framework to the heads of the BPT and QTP product lines.  The idea was that we would get to connect with other companies at a similar level of sophistication to share ideas with.  Still nada.  "We REALLY can't be this smart!  Maybe all the really smart people are using open source tools."  So we looked at Selenium and Watir.  We found that they do not come anywhere near the power, utility, and maintainability of our BPT-based framework.  Further, building a framework that even comes close to what we already have would take more development cycles than we can justify when BPT works.  And if people do have really solid frameworks already, we could not find evidence of them talking about them online.

And then a bombshell: QTP/QC 11.0 was released.  We were initially thrilled, for a number of reasons.  Our developers had been holding off migrating to .NET 4.0 because it would break our regression suite.  QTP 11 also supported Windows 7, which our SA team wanted to start rolling out.  And most important (to us), the performance enhancements to BPT would mean our 300 tests would be ready for analysis earlier each day, greatly improving our ability to provide 'immediate feedback' to the developers.  But it just did not work.  Just getting a sandbox running with a subset of our test artifacts converted took months.  Then the real problems started.  It seemed to us that every change to the tools either broke something, dumbed down a piece of functionality we relied on, or just made it harder to run our current process.  We have been working with HP for over a year on this implementation and still do not have all the critical (to us) bugs worked out.  And now I am rambling.  Back to the point: HP made all these changes after extensive user reviews and input.  Who are these users, and why is their view of these tools so fundamentally different from ours?  And how, a year and a half after the initial release, are we STILL finding new critical issues that not only have not been reported by any other users, but that we have to repeatedly explain to HP WHY they are critical?

Then the other day, while talking to my co-worker about his latest (finally successful) attempt to get HP to accept one of our issues as a bug, it hit me.  Why are we so different?

Transparency.  (Finally getting to the point, thanks for sticking it out!)  Our mantra in automation since implementing BPT has been readability, readability, readability.  Every Object Repository entry, Parameter, Component, and Function/Keyword should be named so that it is obvious to anyone with a very basic understanding of our framework (read: a ten-minute intro) what that object is for.  A novice, or even (gasp) a Business Owner, should be able to look at a test or its results and understand what the test was doing.  To achieve this, our framework acts as an abstraction layer between building components (the real code) and building tests (which anyone should be able to do).  Even though we only have two 'classic' automators on our team, the entire team contributes to growing the automated test suite.  The team members who execute, analyze, and to a large degree maintain the tests had no prior automation experience.  Not everyone can build complex components, but everyone can build a test from existing components, stub out a new component where needed, and build simple keyword-only archetype components.

Most QA teams with automation keep the tools in the hands of a few skilled people who can both test and code.  These Automation Specialists build for themselves, and thus the tests require someone with their skills to interpret and maintain.  I now believe all the toolsets and frameworks out there are designed to cater to this paradigm.  There is little incentive for the automators to change, as it feels like giving up what sets them apart and makes them valuable.
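To make the abstraction-layer idea concrete, here is a minimal sketch of the keyword-driven pattern described above, in Python rather than BPT/QTP.  Everything in it (the component names, the `run_test` helper, the registry) is illustrative and invented for this post, not part of any HP product: the point is only that components are real code written by automators, while a test is just readable data that any tester, or even a Business Owner, can assemble and understand.

```python
# Illustrative sketch of a keyword-driven test layer (names are made up).
COMPONENTS = {}

def component(name):
    """Register a function under a business-readable keyword name."""
    def register(func):
        COMPONENTS[name] = func
        return func
    return register

# -- The "real code" layer: built by the skilled automators --------------
@component("Log In As User")
def log_in(username):
    return f"logged in as {username}"

@component("Open Account Summary")
def open_account_summary(account_id):
    return f"viewing account {account_id}"

@component("Verify Balance Equals")
def verify_balance(expected):
    # A real component would assert against live application state.
    return f"balance verified: {expected}"

# -- The "test" layer: readable data, no coding skill required ----------
def run_test(name, steps):
    """Execute a test: a plain list of (keyword, arguments) pairs."""
    print(f"TEST: {name}")
    for keyword, args in steps:
        result = COMPONENTS[keyword](*args)
        print(f"  {keyword}{args} -> {result}")

run_test("Daily Balance Regression", [
    ("Log In As User", ("analyst01",)),
    ("Open Account Summary", ("ACCT-1234",)),
    ("Verify Balance Equals", ("$10,000.00",)),
])
```

Because the test is just a named list of keyword steps, the execution log reads like plain English, which is what lets non-automators run, analyze, and maintain the suite.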

So are we just smarter than everyone else?  No, but the dynamics of our team, coupled with the transition to Agile, led us down a unique path that I truly believe is better than anything else I have been able to find other people doing.  We have a test suite that can be executed and analyzed by anyone on our team.  The components can be used and reused for regression testing or even temporary sprint work.  Tests can be maintained by almost any tester, and 'Super Automators' are only needed for the most complicated components.  Without this framework we would not be able to execute, analyze, maintain, and grow our test suite unless we hired many more automation experts.

I know many Automators will cringe at this thought.  I did when we started down this path.  At first it feels like giving away the secrets that make us special, valuable, sought after.  But in practice, my skills are more in demand now than before, as more people not only rely on them, but actually understand what it is I do.  I get to focus on the challenging coding tasks that I love, while the more mundane parts can be done by anyone.  What this gives us is a diverse team of testers backed by a robust automated test suite that is executed and analyzed every day, and grows every sprint.
So maybe we are smart enough.  Or maybe we are just missing something so obvious that people don’t feel the need to talk about it.