
Conference Presentations

Testing Embedded Software Using an Error Taxonomy

Just like the rest of the software world, embedded software has defects. Today, embedded software is pervasive, built into automobiles, medical diagnostic devices, telephones, airplanes, spacecraft, and almost everything else. Because defects in embedded software can cause constant customer frustration, complete product failure, and even death, it is critical to collect and categorize the types of errors typically found in embedded software. Jon Hagar describes the few error studies that have been done in the embedded domain and the work he has done to turn that data into a valuable error taxonomy. After explaining the concept of a taxonomy and how you can use it to guide test planning for embedded software, he discusses ways to design tests that exploit the taxonomy and find important defects in your embedded system.
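
To make the idea concrete, here is a minimal sketch of how an error taxonomy might be encoded and used to seed test planning. The categories and test ideas below are illustrative assumptions, not Hagar's actual taxonomy.

```python
# Illustrative taxonomy of embedded-software error categories (assumed, not Hagar's).
EMBEDDED_ERROR_TAXONOMY = {
    "timing": [
        "missed deadline under peak load",
        "race between interrupt handler and main loop",
    ],
    "resource exhaustion": [
        "heap fragmentation after a long run",
        "stack overflow in a deep call chain",
    ],
    "hardware interface": [
        "sensor returns out-of-range value",
        "bus timeout in the middle of a transfer",
    ],
    "computation": [
        "fixed-point overflow",
        "unit-conversion error",
    ],
}

def plan_tests(risk_ranked_categories):
    """Turn a risk-ranked list of taxonomy categories into draft test charters."""
    charters = []
    for category in risk_ranked_categories:
        for error in EMBEDDED_ERROR_TAXONOMY.get(category, []):
            charters.append(f"Design a test that could expose: {error} [{category}]")
    return charters

if __name__ == "__main__":
    # Rank categories by the risk they pose to this (hypothetical) product.
    for charter in plan_tests(["timing", "hardware interface"]):
        print(charter)
```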

Jon Hagar, Consultant
The Power of the Crowd: Mobile Testing for Scale and Global Coverage

Crowdsourced testing of mobile applications, a middle ground between in-house and outsourced testing, has many advantages: scale, speed, coverage, lower capital costs, reduced staffing costs, and no long-term commitments. However, crowdsourced testing of any application, mobile or not, should augment your professional testing resources, not replace them. Most importantly, crowdsourced testing has to be done well or it’s a waste of time and money. John Carpenter reviews the applications and the ways he has outsourced testing to the crowd. Focusing on adopting crowdsourcing for both functional and usability testing of mobile applications, John describes scenarios in which you can leverage the crowd to lower costs and increase product quality, including scaling the application to large populations of global users.

John Carpenter, Mob4Hire, Inc.
Test as a Service: A New Architecture for Embedded Systems

The classic models adopted in test automation today, which guarantee ease of test implementation rather than extensibility of the test architecture, are inadequate for the unprecedented complexity of today’s embedded software market. Because many embedded software solutions must be designed and developed for multiple deployments on different and rapidly changing hardware platforms, testers need something new. Raniero Virgilio describes a novel approach he calls Test as a Service (TaaS), in which test logic is implemented as self-consistent components on a shared test automation infrastructure. These test components are deployed at runtime, making the test process completely dynamic. The TaaS architecture provides specific, high-level test services to testers as they need them.
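
One way to picture the TaaS idea is as a shared infrastructure that hosts self-consistent test components registered at runtime. The sketch below is an assumption about how that might look; names such as TestInfrastructure and register_service are illustrative, not Intel's API.

```python
from typing import Callable, Dict

class TestInfrastructure:
    """Shared test automation infrastructure that hosts test services."""

    def __init__(self) -> None:
        self._services: Dict[str, Callable[..., bool]] = {}

    def register_service(self, name: str, service: Callable[..., bool]) -> None:
        # Self-consistent test components deploy themselves at runtime by registering here.
        self._services[name] = service

    def run(self, name: str, **kwargs) -> bool:
        # Testers request a high-level test service by name, only when they need it.
        return self._services[name](**kwargs)

# One self-consistent test component: it carries its own logic and reports pass/fail.
def flash_and_boot(image: str = "firmware.bin", timeout_s: int = 30) -> bool:
    print(f"flashing {image}, waiting up to {timeout_s}s for the boot banner")
    return True  # placeholder for real hardware interaction

if __name__ == "__main__":
    infra = TestInfrastructure()
    infra.register_service("flash_and_boot", flash_and_boot)
    assert infra.run("flash_and_boot", image="candidate.bin")
```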

Raniero Virgilio, Intel
Automating Embedded System Testing

Many testers believe that automating the testing of embedded and mobile phone-based systems is prohibitively difficult. By approaching the problem from a test design perspective and using that design to drive the automation initiative, William Coleman demystifies automated testing of embedded systems. He draws on experience gained on a large-scale testing project for a leading smartphone platform and on a Windows CE embedded automotive testing platform. William describes the technical side of the solution: how to set up a tethered automation agent to expose the GUI and drive tests at the device layer. Learn how to couple this technology solution with a test design methodology that helps even non-technical testers participate in automation development and execution. Take back a new approach to achieving large-scale automation coverage that is easily maintainable over the long term.
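
As a rough illustration of the tethered-agent pattern, the sketch below shows a host-side driver sending keyword-style GUI commands over a socket to an agent running on the device. The wire protocol, command names, and device address are assumptions for illustration, not the actual LogiGear solution.

```python
import json
import socket

class DeviceAgent:
    """Host-side proxy for an automation agent tethered to the device."""

    def __init__(self, host: str, port: int = 9000):
        self._sock = socket.create_connection((host, port), timeout=10)
        self._reader = self._sock.makefile("r")

    def send(self, command: str, **params) -> dict:
        # Commands such as "tap" or "read_text" are interpreted by the on-device agent,
        # which exposes the GUI at the device layer.
        payload = json.dumps({"cmd": command, "params": params}) + "\n"
        self._sock.sendall(payload.encode())
        return json.loads(self._reader.readline())

# Keyword-style steps like these could also be authored in a table by non-technical testers:
#   tap        id=dial_button
#   read_text  id=status_bar
if __name__ == "__main__":
    agent = DeviceAgent("192.168.0.50")  # address of the tethered device (hypothetical)
    agent.send("tap", id="dial_button")
    result = agent.send("read_text", id="status_bar")
    assert "Dialing" in result.get("text", "")
```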

William Coleman, LogiGear Corporation
Exploratory Testing of Mobile Applications

Exploratory testing (the process of simultaneous test design, execution, and learning) is a popular approach to testing traditional application software. Can you apply this approach to testing mobile applications? At first, it is tempting to merely employ the same methods and techniques that you would use with other software applications. Although some concepts transfer directly, testing mobile applications presents special challenges you must consider and address. Jonathan Kohl shares his experiences testing mobile apps, where smaller screens and unique input methods can cause physical strain on testers and slow down the testing effort. Smaller memory and less processing power in the device mean tests often interfere with the application’s normal operation. Network and connectivity issues can cause unexpected errors that crash mobile apps and leave testers scratching their heads.

Jonathan Kohl, Kohl Concepts, Inc.
Open Source Tools for Web Application Performance Testing

OpenSTA is a solid open source testing tool that, when used effectively, fulfills the basic needs of performance testing of Web applications. Dan Downing introduces you to the basics of OpenSTA: downloading and installing the tool, using the Script Modeler to record and customize performance test scripts, defining load scenarios, running tests with Commander, capturing the results with Collector, interpreting the results, and exporting captured performance data into Excel for analysis and reporting. As with many open source tools, self-training is the rule. Support is provided not by a big vendor staff but by fellow practitioners via email. Learn how to find critical documentation that is often hidden in FAQs and discussion forum threads. If you are up to the support challenge, OpenSTA is an excellent alternative to high-priced commercial tools.

  • Learn the capabilities of OpenSTA
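
OpenSTA itself is scripted in its own SCL language, so the sketch below covers only the last step mentioned above: analyzing response-time data after export. The column names and file name are assumptions about the exported CSV, not OpenSTA's actual export format.

```python
import csv
from statistics import mean, quantiles

def summarize(path):
    """Print count, mean, and approximate 90th percentile per timer from an exported CSV."""
    times = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            times.setdefault(row["timer_name"], []).append(float(row["elapsed_ms"]))

    for timer, samples in sorted(times.items()):
        p90 = quantiles(samples, n=10)[-1] if len(samples) > 1 else samples[0]
        print(f"{timer}: n={len(samples)} mean={mean(samples):.0f}ms p90={p90:.0f}ms")

if __name__ == "__main__":
    summarize("collector_export.csv")  # hypothetical file exported from Collector
```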
Dan Downing, Mentora Inc
Software Security Testing: It's Not Just for Functions Anymore

What makes security testing different from classical software testing? Part of the answer lies in expertise, experience, and attitude. Security testing comes in two flavors: standard functional security testing (making sure that the security apparatus works as advertised) and risk-based testing (malicious testing that simulates attacks). Risk-based security testing should be driven by architectural risk analysis, abuse and misuse cases, and attack patterns. Unfortunately, first-generation "application security" testing misses the mark on all fronts. That's because canned black-box probes can, at best, show you that things are broken but say very little about the total security posture. Join Gary McGraw to learn what software security testing should look like, what kinds of knowledge testers must have to carry it out, and what the results may say about security.
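
To illustrate the distinction, here is a minimal sketch of both flavors against a hypothetical login API (the base URL, endpoint, field names, and expected status codes are all assumptions). The second test is driven by an attack pattern rather than by advertised functionality.

```python
import requests  # third-party HTTP client

BASE = "https://example.test/api"  # hypothetical application under test

def test_functional_lockout():
    """Functional security test: the lockout mechanism works as advertised."""
    for _ in range(5):
        requests.post(f"{BASE}/login", json={"user": "alice", "password": "wrong"})
    response = requests.post(f"{BASE}/login", json={"user": "alice", "password": "right-password"})
    assert response.status_code == 423, "account should be locked after repeated failures"

def test_abuse_case_sql_injection():
    """Risk-based test: simulate an attack pattern and look for signs of weakness."""
    response = requests.post(f"{BASE}/login", json={"user": "' OR '1'='1' --", "password": "x"})
    assert response.status_code in (400, 401), "malicious input must not be accepted"
    assert "sql" not in response.text.lower(), "errors should not leak database internals"
```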

Gary McGraw, Cigital Inc
How to Build Your Own Robot Army

Software testing is tough: it can be exhausting, and there is never enough time to find all the important bugs. Wouldn't it be nice to have a staff of tireless servants working day and night to make you look good? Well, those days are here. Two decades ago, software test engineers were cheap and machine time was expensive, so test suites had to run as quickly and efficiently as possible. Today, test engineers are expensive and CPUs are cheap, so it becomes reasonable to move test creation onto the shoulders of a test machine army. But we're not talking about run-of-the-mill automated scripts that only do what you explicitly told them … we're talking about programs that create and execute tests you never thought of and find bugs you never dreamed of. In this presentation, Harry Robinson shows you how to create your own robot army using tools lying around on the Web.
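
One cheap way to field such a robot is model-based random test generation: give a program a model of the system and let it walk the model, inventing test sequences as it goes. The toy media-player model below is an illustrative assumption, not the specific tools Robinson covers.

```python
import random

# Toy behavioral model: which actions are legal in each state, and where they lead.
MODEL = {
    "stopped": ["play", "eject"],
    "playing": ["pause", "stop"],
    "paused": ["play", "stop"],
}
TRANSITIONS = {
    ("stopped", "play"): "playing", ("stopped", "eject"): "stopped",
    ("playing", "pause"): "paused", ("playing", "stop"): "stopped",
    ("paused", "play"): "playing", ("paused", "stop"): "stopped",
}

def random_walk(steps=20, seed=None):
    """Generate one test sequence nobody had to think up by hand."""
    rng = random.Random(seed)
    state, actions = "stopped", []
    for _ in range(steps):
        action = rng.choice(MODEL[state])
        actions.append(action)
        # A real robot army would also drive the system under test here and
        # compare its observed state against the model (the test oracle).
        state = TRANSITIONS[(state, action)]
    return actions

if __name__ == "__main__":
    print(random_walk(steps=10, seed=42))
```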

Harry Robinson, Google
Testing Dialogues - Technical Issues

Is there an important technical test issue bothering you? Or, as a test engineer, are you looking for some career advice? If so, join experienced facilitators Esther Derby and Johanna Rothman for "Testing Dialogues - Technical Issues." Practice the power of group problem solving and develop novel approaches to solving your big problem. This double-track session takes on technical issues such as automation challenges, model-based testing, testing immature technologies, open source test tools, testing Web services, and career development. You name it! Share your expertise and experiences, learn from the challenges and successes of others, and generate new topics in real time. Discussions are structured in a framework so that participants receive a summary of their work product after the conference.

Facilitated by Esther Derby and Johanna Rothman
Behavior Specification for Testing Embedded Systems

A behavior specification is a valuable engineering artifact for the design, review, and testing of embedded software. It is a black-box model that defines all interactions between the system and its environment and the conditional, state-based causal relationships among them. Based on work by IEEE working group P1175, Dwayne Knirk describes a new reference model for specifying the behavior of computing systems and uses an embedded software control application to illustrate it. Dwayne provides an overview of the application and shows specification examples from the requirements document. He focuses on how the specification was initially developed, analyzed for completeness and consistency, used for generating test cases, and valued by the entire development and test team throughout the project.
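
As a rough illustration of how such a specification supports test generation, the sketch below encodes a tiny state-based behavior spec and derives one test case per specified interaction. The thermostat model is an invented example, not the Sandia reference model.

```python
# Behavior specification as a black-box model: (state, stimulus) -> (response, next_state).
SPEC = {
    ("idle", "temp_below_setpoint"): ("heater_on", "heating"),
    ("idle", "temp_above_setpoint"): ("no_action", "idle"),
    ("heating", "temp_at_setpoint"): ("heater_off", "idle"),
    ("heating", "sensor_fault"): ("heater_off", "fault"),
    ("fault", "reset"): ("run_self_test", "idle"),
}

def generate_test_cases(spec):
    """Yield one test case per specified interaction, checking both response and next state."""
    for (state, stimulus), (response, next_state) in spec.items():
        yield (
            f"GIVEN the system is in state '{state}' "
            f"WHEN the environment applies '{stimulus}' "
            f"THEN the system responds '{response}' and enters '{next_state}'"
        )

if __name__ == "__main__":
    for case in generate_test_cases(SPEC):
        print(case)
```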

Dwayne Knirk, Sandia National Laboratories
