Thursday, February 9, 2012

Automated GUI Testing

From time to time I will see an automation tool with image compare. The idea is to take a screen capture and save it as a baseline; then, when the automation runs, it takes a new screen capture and compares it to the baseline.

Originally this always failed because there would be small differences. For example, if the screen displayed the time and date, those would always be different and the compare would always fail.

The solution was to mask out the parts of the screen capture we know will not be the same, or to capture just a portion of the screen and check whether any part of the baseline matches that portion.
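
A masked compare might look something like this. This is a minimal sketch using the Pillow imaging library; the mask rectangles and file names are placeholders, not part of any particular tool:

from PIL import Image, ImageChops, ImageDraw

def images_match(baseline_path, capture_path, masks=()):
    # Compare two screen captures, ignoring the masked rectangles
    # (e.g. the region where the time and date are displayed).
    baseline = Image.open(baseline_path).convert("RGB")
    capture = Image.open(capture_path).convert("RGB")
    if baseline.size != capture.size:
        return False
    for image in (baseline, capture):
        draw = ImageDraw.Draw(image)
        for box in masks:              # box = (left, top, right, bottom)
            draw.rectangle(box, fill="black")
    # getbbox() returns None when the difference image is entirely black,
    # i.e. the captures are identical outside the masked regions.
    return ImageChops.difference(baseline, capture).getbbox() is None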

Masking was a better solution, but the video drivers from one machine to the next would create slightly different screen captures. So if you developed the baseline on one computer and ran the tests on another, there was a high probability the compare would fail. Even comparing a screen capture to a baseline created on the same machine had a high probability of failing.

So what is the alternative?

We could go back to manual testing but that would be a step in the wrong direction.

We could automate the tests but hire someone to sit and watch them. If they see something wrong, they take a screen capture and file a bug. The problem with this is that you still have one person tied up for the day watching the automation. Additionally, getting a screen shot and filing a bug while the automation is running could be a problem; ideally, you'd want to be able to pause the automation, get the screen capture, file the bug and then resume the automation.

However, there is a third alternative. When I create test cases, they will usually have points where you want to confirm the display looks okay. With manual testing we just assume the tester is constantly watching. With automation, you want to explicitly decide when the display needs to be confirmed. At those points in the automation, take a screen capture. The name of the file and the directory you store it in should carry enough information to tell you which test case created the screen capture.
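
As a rough sketch, a checkpoint helper might build the path from the test run, test case and checkpoint name. Selenium WebDriver is assumed here only because it has a built-in save_screenshot call, and the directory layout is just one possible convention:

import os

def capture_checkpoint(driver, run_dir, test_case, checkpoint):
    # Save a capture whose path identifies the test case and checkpoint,
    # e.g. results/2012-02-09/login_page/03_after_submit.png
    directory = os.path.join(run_dir, test_case)
    if not os.path.exists(directory):
        os.makedirs(directory)
    path = os.path.join(directory, checkpoint + ".png")
    driver.save_screenshot(path)
    return path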

Once you have a directory of screen captures, you can quickly scan them and see that everything looks okay.

How does this save time? Most of the time, GUI testing is 80% getting to the place where you want to confirm the display looks good and 20% actually confirming it. So if you are just reviewing the screen captures, the time needed to get to the point of creating the screen capture does not involve a tester, and you save, on average, 80% of the time.

A classic example of where this is even better is confirming different configurations with the same test cases. For example, I tested a Java library for creating graphics. I would run the tests on 5 different operating systems with 3 different versions of Java. So the test runner would essentially run:

for os in operating_systems:         # 5 different operating systems
    for jvm in java_versions:        # 3 different versions of Java
        run_all_tests(os, jvm)       # 5 x 3 = 15 test runs

This would result in 15 different test runs. I would then display all 15 screen captures for test case #1. If they all looked the same, it would be a pass. If any one of the screen captures looked wrong, I'd know which operating system, which JVM and which test case, so I could file a bug against that configuration.
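
One way to make that side-by-side review easy is to generate a simple HTML page per test case that pulls in the capture from every configuration. A rough sketch, assuming the captures are stored as results/<os>/<jvm>/<test case>.png (that layout is an assumption, not a requirement):

import glob
import os

def contact_sheet(results_dir, test_case, out_path="review.html"):
    # Collect every configuration's capture for one test case and
    # write an HTML page so all 15 images can be eyeballed together.
    rows = []
    pattern = os.path.join(results_dir, "*", "*", test_case + ".png")
    for path in sorted(glob.glob(pattern)):
        os_name, jvm = path.split(os.sep)[-3:-1]
        rows.append('<div><p>%s / %s</p><img src="%s" width="320"></div>'
                    % (os_name, jvm, path))
    with open(out_path, "w") as handle:
        handle.write("<html><body>%s</body></html>" % "".join(rows))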

You could do a similar thing for web testing. Run the same tests against the different operating systems and different web browsers. Then compare the screen captures to ensure all the configurations displayed correctly.
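
The loop for web testing could look much the same. Here is a sketch using Selenium WebDriver, where the browser list, URL and file names are all placeholders:

import os
from selenium import webdriver

browsers = {"firefox": webdriver.Firefox, "chrome": webdriver.Chrome}

if not os.path.exists("captures"):
    os.makedirs("captures")

for name, make_driver in browsers.items():
    driver = make_driver()
    try:
        driver.get("http://example.com/login")                   # placeholder URL
        driver.save_screenshot("captures/%s_login.png" % name)   # one capture per browser
    finally:
        driver.quit()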

If you wanted to, you could video capture the display as the tests ran. With a good video playback tool you could fast-forward through the parts you know won't change.
