5 Reasons to Automate Testing by Recording User Interaction

Rich Internet applications with desktop-like functionality can be very beneficial, but they pose special testing challenges. One approach is to start with a closer look at how users interact with the applications.

The following is an excerpt from Goran Begic's e-book, Why HTML5 Tests the Limits of Automated Testing Solutions.

Websites that act like desktops, with drag-and-drop functions, on-page calculators, and other interactive features, are of great benefit, giving users the accessibility and scalability of the web in a familiar application paradigm. However, these rich Internet applications (RIAs) pose special testing challenges. To ensure that the user experience is consistent and "bug-free," testers must manage multiple technologies, inconsistent browser behaviors, and highly dynamic development environments. In today's competitive software industry, teams are grappling with the additional pressure of fast-paced micro-releases on top of all this complexity. Further, testing a web page means testing the layout, the logic underneath, and multiple layers of information, which only complicates the situation.

HTML5 is intended to simplify things by incorporating functions within HTML that previously required external plug-ins. The result is a more seamless experience for most users, especially those who use multiple device types. However, without extensive testing on a variety of devices, platforms, and browsers, applications built with HTML5 can actually be less user friendly.

From a testing perspective, HTML5 introduces additional complexities. Because the standard is still evolving, browser support is inconsistent, and new elements require new kinds of tests. For example, if you want to use an HTML5 extension, you must learn the extension's coding rules and understand the ripple effects the extension may have on other technologies that the application uses. It's not just about learning and testing HTML5; it's also about evaluating its multiplier effects on all the other evolving technologies.

All of this is hard to do manually. You would need to visually inspect lines of code or write different scripts to test each function. Even if you knew all the relevant rules for all technologies and extensions and how they relate to each other in every browser, you still would have to apply that knowledge through potentially thousands of source code lines. Can you really catch each and every issue? How long will that take? Clearly, you have to automate the testing, but how? There are many styles of automation to choose from, and one of the challenges testers face is to identify the right type of automation for the task at hand.

When it comes to testing an HTML5 interface, there are several reasons why it makes sense to automate testing by recording user interaction with the software.

1. Recorded user actions make tests more "lifelike."
Recording user actions takes all the guesswork out of the equation. You can literally see every action your users make on the website, painting a very realistic picture of the user experience. This can be of enormous benefit for a number of reasons, including providing usability feedback to the user experience designers and delivering visual aids for the developers when logging defects. Many organizations simply don't have the time to use their quality assurance professionals as anything other than literal testers. Providing essential tools like recorded tests allows them to spend more time analyzing the "number of clicks" and the overall user experience. Everybody benefits from this feedback, most importantly the users. What’s more "lifelike" than that?

2. Recorded user actions require far less labor than manual code inspection and development.
It can be difficult to determine how something in the code affects something on-screen and vice versa. Even when the connection is obvious, the relevant line numbers and screen coordinates may change when developers edit the code. That makes testing by conventional methods (i.e., manual code inspection or handwritten scripts) difficult, because testers have to keep "chasing the change." Test recording, by contrast, dramatically reduces test development time: you can quickly capture the use case and refine the recording later by editing the test. The captured session also serves as a reference point if you later decide to write scripted tests or to translate the recording into functional test cases.
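To make this concrete, here is a hypothetical sketch (not from any specific tool; the step format and names are invented for illustration) of the idea that a recorded session is an editable artifact: the raw recording captures the use case, and a later cleanup pass swaps brittle recorded locators for stable ones without re-capturing anything.

```python
# Hypothetical sketch: a recorded session captured as plain data,
# so it can be replayed as-is or edited into a maintainable test later.

# Each recorded step is (action, target, value) -- the raw output a
# recorder might produce, brittle details and all.
recorded_steps = [
    ("open", "/checkout", None),
    ("type", "id=quantity", "2"),
    ("click", "xpath=//div[3]/button[1]", None),  # brittle positional locator
    ("assert_text", "id=total", "$19.98"),
]

def clean_up(steps, locator_fixes):
    """Replace brittle recorded locators with stable ones, keeping the
    captured use case itself as the reference point."""
    return [(action, locator_fixes.get(target, target), value)
            for action, target, value in steps]

# Only the locator changes; the recorded use case is untouched.
stable_steps = clean_up(
    recorded_steps,
    {"xpath=//div[3]/button[1]": "id=add-to-cart"},
)
```

The point of the sketch is the division of labor: the recording answers "what does the user do?" once, and the editing pass answers "how do we find each element robustly?" as many times as the UI changes.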

3. Recorded user actions eliminate the need for a shadow script-development effort running in parallel with application development.
Talk about "chasing the change." How about "chasing the code"? Writing automation scripts from scratch is time-consuming and usually requires that the tester begin this effort early in order to meet project deadlines. Trying to prepare test automation while the feature is still being developed often results in a lot of rework and frustration for both the developers and testers. By recording the automated test after the feature in question has stabilized, you can focus your testing where it makes sense and results in qualified defects.

4. Testing is easier for testers with a variety of skill sets, especially with screen shots to guide them.
Some developers can look at a piece of code and visualize what happens on screen in the web browser. Others cannot. Finding a testing solution that serves all testers—even those who need help "seeing" how changes in the code will actually play out when users run the application—is key. Recording user actions allows even junior testers to find and log qualified defects quickly.

5. Highly skilled developers have a starting point for tailoring test code at will by using automated object maps.
Recording user actions creates tests that actually exercise the code as written, referring to object identifiers rather than to screen positions or lines of code. The resulting script relies on an object-oriented testing model, which means the test has a greater chance of running across different browsers, operating systems, and screen configurations. The object-oriented model is independent of all the layers that would otherwise complicate RIA testing and defeat scalability: changes to standards, browsers, and even the application code. That independence makes the model readily extensible as new layers are added, and it keeps all three perspectives in sync, whether you work with code as a developer, with scripts as a tester, or with onscreen behavior as a user.
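The idea of an object map can be sketched in a few lines. This is a minimal, hypothetical illustration (the map structure, the `FakeDriver` stand-in, and all names are invented, not any vendor's API): tests refer to logical names, and when the UI layout or a locator changes, only the map entry is updated, never the tests.

```python
# Hypothetical sketch of an object map: one place that translates
# logical names into locator strategies, decoupling tests from the UI.

object_map = {
    "login_button": {"strategy": "id", "value": "btn-login"},
    "username":     {"strategy": "name", "value": "user"},
}

class FakeDriver:
    """Stand-in for a real browser driver; records what was located."""
    def __init__(self):
        self.located = []

    def find(self, strategy, value):
        self.located.append((strategy, value))
        return f"<element {strategy}={value}>"

def locate(driver, logical_name):
    """Resolve a logical name through the object map -- no screen
    coordinates or source line numbers involved."""
    entry = object_map[logical_name]
    return driver.find(entry["strategy"], entry["value"])

driver = FakeDriver()
element = locate(driver, "login_button")
```

If a developer renames `btn-login` in the application, the fix is one edit to `object_map`; every test that says `locate(driver, "login_button")` keeps working unchanged, which is the scalability argument made above.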

There is often a perception that recorded tests are less useful in today's world of rapid releases and ever-evolving Internet applications. UI changes can indeed affect a recording, but a well-designed script and a flexible recording tool allow those kinds of edits with minimal effort. Capturing true user actions in an object-oriented testing model has so many benefits that it should be an essential tool in any quality assurance toolbox.

User Comments

Jim Hazen


Yes, using the automation tool to 'prototype' a test script is a good and quick way to get it done via recording. But after that you really do need to clean it up and get it into a manageable and maintainable format/framework.

You do mention that the method you prescribe should be applied to 'stable' code; that's needed even with 'coded' scripts that interact with the UI/object layers.

My concern here is that you are talking about potentially using Record/Playback in a way that people will glom on to as a 'best practice'. As a 20+ year veteran of working with automation this scares me. We are finally getting the misconceptions of Record/Playback and "any monkey on a keyboard can do automation" under control and cleared up. Don't make us take 5 steps back.

To all who read this post I highly urge you to read it closely and know that recording a script does give you some advantages to "start" to build out the final testing script (object definitions, business logic to some degree, usage model), but there is a lot more to do under the covers in order for an automated test script to become robust, maintainable and reusable.

I admit I do use recording to 'prototype' my scripts in the beginning, but once I get the basics down I go to a coding method for the rest of my work. I do about 15% recording and 85% coding. I use frameworks to make the whole thing robust. After all, the main issue with Record/Playback is that it causes heavy rework to be incurred if changes in 'stable' code occur. And rework translates to time and money, of which we typically do not have later on when we really need it.

I'll say it now... It's Automation, Not Automagic!


October 8, 2012 - 11:01am
Goran Begic

Hello Jim, thank you very much for your comment and for your valid points on the benefits of recording and playback. More than anything, my intent is to call attention to the opportunities for test automation at the UI level, especially with cross-platform technologies, but as you said, there is no "automagic."

October 8, 2012 - 11:54pm

AgileConnection is a TechWell community.