
Tuesday, January 19, 2010

the math behind test automation

It has been a while since I have had time to write on my blog. A lot has happened. In the world of test automation, I created a kick-ass test suite using Watij, built out a wonderful set of libraries and ended up with a tonne of reusable code. I could whip out a new test case in minutes; complex test cases might take an hour or two.

New features and code changes required a minimal amount of work. This is usually key to the survival of a test automation framework. I could have used record and playback to create the automation, but small changes in the application would have rendered it useless. Most test tools recommend re-recording against the changed application. If recording the test case and adding in the appropriate verifications takes longer than manually testing the application, then a record and playback automation tool is pointless.

You need to create reusable code. Software automation is development. I have always believed this, and I recently read an article on InfoQ by Dale H. Emery that reiterated this belief.

The only way a test automation suite pays off is by being maintainable. Here is the math:

-let x be the time it takes to run the test suite manually once
-let y be the cost for each unit of x
-let z be the number of iterations we need to run the full test suite
-therefore the cost of manual testing is xyz

-let a be the time it takes to create the automation test suite
-let b be the cost for each unit of a
-assume the time it takes to run the test suite is infinitesimally small
-therefore the cost of automated testing is ab (running the tests more than once does not incur any significant costs)

-so long as ab < xyz, the return on investment is worth it

Companies selling test automation tools will often sell them with the idea that you can record and play back the automation almost as easily as manual testing. Even if a = 2x and b = 2y, then ab < xyz becomes (2x)(2y) < xyz, which is 4xy < xyz, which reduces to 4 < z. Or in English, if you run the automation more than four times, it pays for itself. Sounds pretty good, doesn't it?

This is an incredibly simplistic view. It does not take into consideration the time spent dealing with bugs in the automation tool, lack of support for new technologies or configurations, training the staff, learning how to avoid false positives and learning how to eliminate false negatives (these greatly increase the cost of automation, especially if not handled well). After you take all that into consideration, the reality is that you need to run the test automation 6 to 10 times with no modification just to break even. And everyone knows that the development team will not change the application in a 6 to 10 week period, right? Bottom line: even on the first release, the test automation does not pay for itself.

So you spent $10,000 on a test automation tool and hired a consultant for another $50,000 to use it and create your test suite, only to find out that it never really worked or that there were additional costs. If you decide to drop the tool (which has an annual service contract), the sales team jumps into high gear and convinces you that maybe after the next release it will pay for itself, and by the third release you'd start seeing a profit. You have invested thousands and want to believe you can make this money pit viable. You desperately believe the sales staff.

Record and playback is a losing battle. The code is just not maintainable. There is no reuse of code. It is equivalent to cut&paste, and any intermediate developer will tell you that cut&paste is a bad thing. If you have a section of code that clicks the Login button and they change the way the button is identified, you find yourself changing a thousand lines of code. If the automation used a library call to click the Login button, then one change is all you need for all the test cases which click the Login button.

And that is the trick. As an automation developer, I record a test case. I look at the code produced and refactor it so common operations are placed in a library. The next time I do a recording, if I see the same chunk of code, I replace it with the library call. The problem now is remembering everything I put into the library and refactoring recordings to use the library calls. For this, structure your library.

I like to use an object oriented language for automation. I can then use one class for each feature or 'screen' or 'window'. I basically break the application into 'screens' or states and model my test framework around that. Now my library has an easy to remember structure, just like a language's standard libraries (e.g. the Java Software Development Kit or the C++ Standard Template Library). So now I'm developing a library similar to the C++ STL or Java SDK. If I'm a failed developer forced to do test automation, though, I'm probably not a good tester and I definitely don't have the skill set to create the STL.
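
To make the one-class-per-screen idea concrete, here is a minimal sketch. The names (Browser, LoginPage, clickLogin) and the locators are my own illustration, not code from my actual framework; the Browser interface stands in for whatever tool drives the browser.

    // Minimal driver abstraction so the sketch is self-contained; a real
    // implementation would delegate to Watij, Selenium or any other tool.
    interface Browser {
        void setText(String locator, String text);
        void click(String locator);
    }

    // One class per 'screen' of the application. Every test case that
    // logs in goes through this library code; if the Login button's
    // locator changes, we fix it in exactly one place.
    public class LoginPage {
        private final Browser browser;

        public LoginPage(Browser browser) {
            this.browser = browser;
        }

        public void login(String username, String password) {
            browser.setText("id=username", username);
            browser.setText("id=password", password);
            clickLogin();
        }

        public void clickLogin() {
            // The locator lives here and only here.
            browser.click("id=loginButton");
        }
    }

A test case then reads as new LoginPage(browser).login("darrell", "secret") and survives a locator change untouched.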

What else can I do to make the math work out for test automation? Look at maintenance and reporting. If a test framework cannot tell you quickly and easily what the state of testing is, then it is not very good. You need good reporting capabilities; if the framework has this built in, all the better. If it just happens as part of the development of a test case, great. For my last test framework I added log4j messages to all the library calls. If set to OFF (log4j's silent level), it would output nothing. If set to INFO, it would print out the test steps as they occurred, with the data being used (this helps to reproduce the test manually, if necessary). If set to DEBUG, it would also print details of the inner workings of the library calls; this is more for maintaining the code than for reporting what is happening.

As for code maintenance, you want something that works in a good debugger. Running to a point and then examining the state of the application under test (AUT) is important for fixing broken automation, as is being able to quickly and easily find, manipulate and verify elements of the AUT. Tools like Watij are Java based and take full advantage of the Java programming language. If I have an xpath that matches a dozen text fields, I can ask Watij to give me a List and then use the Iterator to look at each text field.
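
In code, that pattern has roughly the following shape. The TextField and TextFieldFinder names here are stand-ins to show the pattern, not Watij's verbatim API:

    import java.util.List;

    // Stand-in types; Watij's real class and method names differ.
    interface TextField {
        String getValue();
    }

    interface TextFieldFinder {
        List<TextField> textFields(String xpath); // every match, as a List
    }

    public class IterateFields {
        static void printValues(TextFieldFinder browser) {
            // Standard Java iteration over all the matched fields.
            for (TextField field : browser.textFields("//input[@type='text']")) {
                System.out.println(field.getValue());
            }
        }
    }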

On the other hand, tools like Selenium are very popular, but Selenium's native language is HTML tables. Each row (TR) in the table is a test step: the first column is the action, the second column is the locator and the third (optional) column is any extra parameters needed.
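
For example, a login test in Selenium's table format might look like this (the locators and values are made up for illustration; open, type, clickAndWait and verifyTextPresent are real Selenium commands):

    <table>
      <tr><td colspan="3">Login test</td></tr>
      <tr><td>open</td>              <td>/login</td>         <td></td></tr>
      <tr><td>type</td>              <td>id=username</td>    <td>darrell</td></tr>
      <tr><td>type</td>              <td>id=password</td>    <td>secret</td></tr>
      <tr><td>clickAndWait</td>      <td>id=loginButton</td> <td></td></tr>
      <tr><td>verifyTextPresent</td> <td>Welcome</td>        <td></td></tr>
    </table>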

You can get Selenium-RC and a Java API, but the Java support uses String[] rather than Lists and Collections. I can find out how many text fields match a given xpath, but there is no direct way to get at each match. I often find myself adding the support I have grown used to in Watij, which means more code and more maintenance.
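
Here is the kind of helper I end up writing: a sketch using the Selenium-RC Java client, with an illustrative xpath and setup values. It uses getXpathCount() plus XPath's own 1-based indexing to build a List with one locator per matched field:

    import com.thoughtworks.selenium.DefaultSelenium;
    import com.thoughtworks.selenium.Selenium;
    import java.util.ArrayList;
    import java.util.List;

    public class XpathHelper {
        // Build one indexed locator per element matching the xpath, so
        // the matches can be walked like a Watij List.
        public static List<String> locatorsFor(Selenium selenium, String xpath) {
            int count = selenium.getXpathCount(xpath).intValue();
            List<String> locators = new ArrayList<String>();
            for (int i = 1; i <= count; i++) {
                locators.add("xpath=(" + xpath + ")[" + i + "]");
            }
            return locators;
        }

        public static void main(String[] args) {
            Selenium selenium =
                new DefaultSelenium("localhost", 4444, "*firefox", "http://example.com/");
            selenium.start();
            selenium.open("/");
            for (String locator : locatorsFor(selenium, "//input[@type='text']")) {
                System.out.println(selenium.getValue(locator)); // examine each field
            }
            selenium.stop();
        }
    }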

The unfortunate thing is that nothing is perfect. Watij is easier to use as a Java developer, but it only works with Internet Explorer (I have found that IE locks up occasionally; the status bar indicates it is waiting for one more item to load even though everything has loaded). IE is not a very stable platform, and with Watij you have no choice but to deal with it. Selenium does not have the Java structure and support of Watij, but I ran it continuously on IE and it never locked up (same AUT). Additionally, I can run Selenium on my Mac, a Linux box or Windows, and it works with Safari, Firefox and IE.

Watij is easier to maintain, but if the tests hang and never complete, Selenium seems to be the solution. Going forward, I will try to write a set of miscellaneous libraries for Selenium which mirror the Watij structure. Hopefully, once that is done, Selenium will be just as powerful as Watij and far more stable.
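
As a first cut, I imagine the mirroring looking something like this sketch: a thin facade that gives real Selenium-RC calls a Watij-flavoured shape (the SeleniumBrowser name and method names are my own invention, not an existing library):

    import com.thoughtworks.selenium.Selenium;

    // Hypothetical facade: wraps raw Selenium-RC calls in the kind of
    // structure my Watij libraries use, so the screen classes built on
    // top of it do not care which tool drives the browser.
    public class SeleniumBrowser {
        private final Selenium selenium;

        public SeleniumBrowser(Selenium selenium) {
            this.selenium = selenium;
        }

        public void setText(String locator, String text) {
            selenium.type(locator, text);    // real Selenium-RC call
        }

        public void click(String locator) {
            selenium.click(locator);         // real Selenium-RC call
        }

        public boolean isPresent(String locator) {
            return selenium.isElementPresent(locator);
        }
    }

Wired up this way, the screen-class sketch from earlier in this post would run unchanged on Selenium.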