5 Reasons to Automate Testing by Recording User Interaction

[article]
Summary:
Rich Internet applications with desktop-like functionality can be very beneficial, but they pose special testing challenges. One approach is to start with a closer look at how users interact with the applications.

The following is an excerpt from Goran Begic's e-book, Why HTML5 Tests the Limits of Automated Testing Solutions.


Websites that act like desktops, with drag-and-drop functions, on-page calculators, and other interactive features, are of great benefit to the user community, allowing users the accessibility and scalability of the web in a familiar application paradigm. However, these rich Internet applications (RIAs) pose special testing challenges. To ensure that the user experience is consistent and "bug-free," testers must manage multiple technologies, inconsistent browser behaviors, and highly dynamic development environments. In today's competitive software industry, teams are grappling with the additional pressure of fast-paced micro-releases on top of all this complexity. Further, testing a web page means testing the layout, the logic underneath, and multiple layers of information, which only complicates the situation.

HTML5 is intended to simplify things by incorporating functions within HTML that previously required external plug-ins. The result is a more seamless experience for most users, especially those who use multiple device types. However, without extensive testing on a variety of devices, platforms, and browsers, applications built with HTML5 can actually be less user friendly.

From a testing perspective, HTML5 introduces additional complexities. While the standard is still evolving, browser support is inconsistent and new elements require new kinds of tests. For example, if you want to use an HTML5 extension, then you must learn the extension's coding rules and the potential ripple effects that the extension may have on other technologies that the application uses. It's not just about learning and testing HTML5; it’s also about evaluating its multiplier effects on all the other evolving technologies.

All of this is hard to do manually. You would need to visually inspect lines of code or write different scripts to test each function. Even if you knew all the relevant rules for all technologies and extensions and how they relate to each other in every browser, you still would have to apply that knowledge through potentially thousands of source code lines. Can you really catch each and every issue? How long will that take? Clearly, you have to automate the testing, but how? There are many styles of automation to choose from, and one of the challenges testers face is to identify the right type of automation for the task at hand.

When it comes to testing an HTML5 interface, there are several reasons why it makes sense to automate testing by recording user interaction with the software.

1. Recorded user actions make tests more "lifelike."
Recording user actions takes all the guesswork out of the equation. You can literally see every action your users take on the website, painting a very realistic picture of the user experience. This can be of enormous benefit for a number of reasons, including providing usability feedback to the user experience designers and delivering visual aids for the developers when logging defects. Many organizations simply don't have the time to use their quality assurance professionals for anything beyond rote test execution. Providing essential tools like recorded tests frees them to spend more time analyzing the "number of clicks" and the overall user experience. Everybody benefits from this feedback, most importantly the users. What's more "lifelike" than that?

2. Recorded user actions require far less labor than manual code inspection and development.
It can be difficult to determine how something in the code affects something on-screen and vice versa. Even when the connection is obvious, the relevant line numbers and screen coordinates may change when developers edit the code. That can make testing by conventional methods (i.e., manual code inspection or handwritten scripts) difficult because
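The fragility described above can be sketched in a few lines of Python. This is a minimal illustration, not a real browser test: a stub class stands in for a Selenium-style WebDriver, and the locators, page names, and element ids are hypothetical. The point it demonstrates is that a raw recording hard-codes one brittle locator per step, while a thin page-object layer names each step once, so a layout change is fixed in a single place.

```python
# Sketch: a stub stands in for the browser so this runs anywhere.
# It simply knows which locators currently exist on the page.
class StubDriver:
    def __init__(self, locators):
        self.locators = set(locators)
        self.clicked = []

    def click(self, locator):
        if locator not in self.locators:
            raise LookupError(f"no element matches {locator!r}")
        self.clicked.append(locator)

# A raw recorded script hard-codes a positional locator inline.
def recorded_checkout(driver):
    driver.click("//div[3]/table/tr[2]/td/button")  # brittle absolute XPath

# A page object names the step once; only this class changes
# when developers move the button in the markup.
class CheckoutPage:
    BUY_BUTTON = "id=buy-now"  # single place to update

    def __init__(self, driver):
        self.driver = driver

    def buy(self):
        self.driver.click(self.BUY_BUTTON)

# After a redesign, the old absolute XPath no longer matches...
page_after_redesign = StubDriver(["id=buy-now"])
try:
    recorded_checkout(page_after_redesign)
    recorded_survived = True
except LookupError:
    recorded_survived = False

# ...but the page-object script still passes unchanged.
CheckoutPage(page_after_redesign).buy()

print(recorded_survived)            # False
print(page_after_redesign.clicked)  # ['id=buy-now']
```

In practice the same split applies to recorded tests: the recording captures the steps, and a maintenance pass moves the locators into a shared layer, which is exactly the cleanup the comments below describe.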

User Comments

Jim Hazen

Goran,

Yes, using the automation tool to 'prototype' a test script is a good and quick way to get it done via recording. But after that you really do need to clean it up and get it into a manageable and maintainable format/framework.

You do mention that the method you prescribe should be applied to 'stable' code; that's needed even with hand-coded scripts that interact with the UI/object layers.

My concern here is that you are talking about potentially using Record/Playback in a way that people will glom on to as a 'best practice'. As a 20+ year veteran of working with automation this scares me. We are finally getting the misconceptions of Record/Playback and "any monkey on a keyboard can do automation" under control and cleared up. Don't make us take 5 steps back.

To all who read this post I highly urge you to read it closely and know that recording a script does give you some advantages to "start" to build out the final testing script (object definitions, business logic to some degree, usage model), but there is a lot more to do under the covers in order for an automated test script to become robust, maintainable and reusable.

I admit I do use recording to 'prototype' my scripts in the beginning, but once I get the basics down I go to a coding method for the rest of my work. I do about 15% recording and 85% coding. I use frameworks to make the whole thing robust. After all, the main issue with Record/Playback is the heavy rework it incurs when even 'stable' code changes. And rework translates to time and money, which we typically do not have later on when we really need it.

I'll say it now... It's Automation, Not Automagic!

Respectfully,

Jim Hazen

October 8, 2012 - 11:01am
Goran Begic

Hello Jim, Thank you very much for your comment and for your valid points on the benefits of record and playback. More than anything, my intent is to call attention to the opportunities for test automation at the UI level, especially with cross-platform technologies, but as you said - there is no "automagic".

October 8, 2012 - 11:54pm

About the author

Goran Begic

Goran Begic brings over thirteen years of professional experience with various code and design verification tools, ranging from manual testing and test automation to static analysis and formal verification, to his position as senior product marketing manager at SmartBear Software. Goran has extensive experience with integrating development and testing tools into end-user processes, including agile and model-based design. He is an avid customer advocate and a proponent of visual thinking. Goran holds a master’s degree in electrical engineering from the University of Zagreb, Croatia. Connect with him via Twitter at @gbegicw.

