Committed to covering the latest trends and approaches for anyone investigating or implementing agile development practices, processes, technologies, and leadership principles, the Agile Development & Better Software Conference West offers its 2013 interview series.
To create better test cases, Koray Yitmen says you must know your users. The path to better test case creation in usability testing starts with the segmentation and definition of users, a concept known as personas. Contrary to common market-wise segmentation that focuses on users'...
Although usability and user experience may seem synonymous, they are separate and very different concepts. While usability is well defined in standards, UX has no agreed-upon definition because it relates to a more nebulous attribute: user satisfaction. Both are, however, key ingredients for successful system deployment. Because they don’t know how to measure and evaluate UX, many teams ignore this important attribute until the end of development. Philip Lew discusses how to model both usability and UX by breaking each attribute down into measurable characteristics: learnability, user effectiveness, user efficiency, content quality, user errors, and more. Phil shows you how to derive measurements and metrics that your development team can employ to benchmark, analyze, and improve both usability and UX.
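To make the idea of decomposing usability into measurable characteristics concrete, here is a minimal C# sketch. The effectiveness, efficiency, and error-rate definitions follow common usability-measurement conventions rather than Lew's specific model, and the task-log record format is a hypothetical one invented for the example.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// One record per observed user task attempt (hypothetical log format).
record TaskAttempt(string UserId, bool Completed, double Seconds, int Errors);

static class UsabilityMetrics
{
    // User effectiveness: share of task attempts completed successfully.
    public static double Effectiveness(IReadOnlyList<TaskAttempt> log) =>
        log.Average(a => a.Completed ? 1.0 : 0.0);

    // User efficiency: completed tasks per minute of user time spent.
    public static double Efficiency(IReadOnlyList<TaskAttempt> log) =>
        log.Count(a => a.Completed) / (log.Sum(a => a.Seconds) / 60.0);

    // User errors: average errors per attempt.
    public static double ErrorRate(IReadOnlyList<TaskAttempt> log) =>
        log.Average(a => (double)a.Errors);
}

class Demo
{
    static void Main()
    {
        var log = new List<TaskAttempt>
        {
            new("u1", true, 95, 0),
            new("u2", false, 140, 3),
            new("u3", true, 60, 1),
        };
        Console.WriteLine($"Effectiveness: {UsabilityMetrics.Effectiveness(log):P0}");
        Console.WriteLine($"Efficiency: {UsabilityMetrics.Efficiency(log):F2} tasks/min");
        Console.WriteLine($"Error rate: {UsabilityMetrics.ErrorRate(log):F1} per attempt");
    }
}
```

Once characteristics are reduced to numbers like these, the same measurements can be rerun after each release to benchmark whether usability is actually improving.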
Software development organizations adopting Scrum have struggled to apply it to big projects with multiple teams. Dan Rawsthorne is frequently asked, “What does ‘big’ Scrum look like?” Because no two organizations are alike, this simple question does not have a simple answer. However, Dan has discovered patterns that are common in organizations that successfully implement “big” Scrum. The first pattern he explores, the Product Owner Team, allows the organization to maintain agility up and down the hierarchy. Dan also discusses the Cross-cutting Teams pattern, which handles issues (architecture, usability, integration, performance, and evaluation) that the formal hierarchy can’t resolve. Finally, Dan discusses the BuddyUp pattern, which describes the best way to work with subject matter experts from dispersed parts of the organization.
Is there an important technical test issue bothering you? Or, as a test engineer, are you looking for some career advice? If so, join experienced facilitators Esther Derby and Elisabeth Hendrickson for "Testing Dialogues: Technical Issues." Practice the power of group problem solving and develop novel approaches to solving your big problem. This double-track session takes on technical issues such as automation challenges, model-based testing, testing immature technologies, open source test tools, testing web services, and career development. You name it! Share your expertise and experiences, learn from the challenges and successes of others, and generate new topics in real time. Discussions are structured in a framework so that participants receive a summary of their work product after the conference.
In addition to the efficiency improvements you expect from automated testing tools, you can, and should, expect them to provide valuable metrics to help manage your testing effort. By exploiting the programmability of automation tools, you can support your department's measurement and reporting needs. Learn how Jack Frank employs these tools with minimal effort to create test execution status reports, coverage metrics, and other key management reports. Learn what measurement data your automation tool needs to log for later reporting. See examples of the operational reports his automation tools generate, including run/re-run/not run, pass/fail, percent complete, and percent of overall system tested. Take with you examples of senior management reports, including Jack's favorite, "My Boss's Boss Test Status Report" (names will be changed to hide the guilty). Regardless of the...
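As a hedged illustration of this kind of report generation, the C# sketch below summarizes a test-execution log into the run/re-run, pass/fail, and percent-complete figures mentioned above. The one-line-per-execution CSV format is a hypothetical stand-in; Jack's actual tools and log formats aren't specified here.

```csharp
using System;
using System.IO;
using System.Linq;

// Minimal sketch: roll a test-execution log up into status-report counts.
// Assumed log format: one "testId,status" line per execution, with
// status in {Pass, Fail, NotRun}.
class TestStatusReport
{
    static void Main(string[] args)
    {
        var runs = File.ReadAllLines(args[0])
            .Select(line => line.Split(','))
            .Select(f => (TestId: f[0], Status: f[1]))
            .ToList();

        // A test that appears more than once was re-run.
        int reruns = runs.GroupBy(r => r.TestId).Count(g => g.Count() > 1);
        int passed = runs.Count(r => r.Status == "Pass");
        int failed = runs.Count(r => r.Status == "Fail");
        int notRun = runs.Count(r => r.Status == "NotRun");
        int total  = runs.Count;

        Console.WriteLine($"Executions: {total}  Re-run tests: {reruns}");
        Console.WriteLine($"Pass: {passed}  Fail: {failed}  Not run: {notRun}");
        Console.WriteLine($"Percent complete: {(double)(passed + failed) / total:P0}");
    }
}
```

Treating the execution log as the single source of truth means the same data can feed both the operational reports and the rolled-up views for senior management.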
With billions of dollars changing hands every day, financial trading systems demand extremely high accuracy and reliability. So, how do you improve test process performance in time to market and efficiency while simultaneously reducing failures? Over the past three years, using process and project measurement data as a guide, SIAC has focused on doing exactly that. Steve Boycan highlights the key elements of the process changes that led to SIAC's current performance: the use of a rigorous requirements engineering process; controlled parallel and iterative workflows; changes to the level of abstraction in test documentation; emphasis on test planning, analysis, and design; causal analysis; and improving the test team's skills.
The .NET environment provides a surprising and little-known way to create user interface (UI) test automation scripts. By employing objects in the System.Threading and System.Reflection namespaces, test engineers can write ad hoc automated UI test scenarios in minutes. James McCaffrey presents an example of a Windows-based application and creates a test program written in C# that verifies UI functionality by simulating user typing and clicking. James explains the code in detail so you can modify and extend the program to meet your own needs. Learn how to write ad hoc UI test automation for .NET-based Windows applications; a sketch of the basic technique follows the list below.
- How to use System.Threading for test harness communications in .NET
- How to simulate .NET user interactions with System.Reflection
- A look ahead to Avalon and its effect on user interface test automation
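Here is a minimal sketch of the approach, assuming a hypothetical WinForms application (AppForm, with controls named textBox1 and button1): a worker thread runs the UI message loop while the test code uses reflection to reach the form's private controls, simulate typing and clicking, and check the result. It illustrates the general technique rather than reproducing McCaffrey's exact program.

```csharp
using System;
using System.Reflection;
using System.Threading;
using System.Windows.Forms;

// Hypothetical app under test: clicking the button copies the text
// box contents into the form's title bar.
class AppForm : Form
{
    private TextBox textBox1 = new TextBox();
    private Button button1 = new Button();

    public AppForm()
    {
        button1.Click += (s, e) => this.Text = textBox1.Text;
        Controls.Add(textBox1);
        Controls.Add(button1);
    }
}

class UiTestHarness
{
    static Form theForm;

    static void Main()
    {
        // System.Threading: run the UI message loop on its own STA thread
        // so the test logic below keeps executing in parallel.
        var uiThread = new Thread(() =>
        {
            theForm = new AppForm();
            Application.Run(theForm);
        });
        uiThread.SetApartmentState(ApartmentState.STA);
        uiThread.Start();
        Thread.Sleep(1000); // crude wait for the form to initialize

        // System.Reflection: fetch the form's private control fields by name.
        var flags = BindingFlags.Instance | BindingFlags.NonPublic;
        var box = (TextBox)typeof(AppForm).GetField("textBox1", flags).GetValue(theForm);
        var btn = (Button)typeof(AppForm).GetField("button1", flags).GetValue(theForm);

        // Marshal onto the UI thread: simulate typing, then a button click.
        theForm.Invoke((MethodInvoker)(() =>
        {
            box.Text = "hello";   // simulated typing
            btn.PerformClick();   // simulated click
        }));

        // Verify the UI responded as expected, then shut the app down.
        string title = (string)theForm.Invoke(new Func<string>(() => theForm.Text));
        Console.WriteLine(title == "hello" ? "Pass" : "FAIL");
        theForm.Invoke((MethodInvoker)(() => theForm.Close()));
        uiThread.Join();
    }
}
```

Because the harness holds a live reference to the form, it can drive even controls the application never exposes publicly, which is what makes the reflection-based approach so quick for ad hoc scenarios.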
Testing an application's robustness and tolerance for failures in its natural environment can be difficult or impossible. Developers and testers buy tool suites to simulate load, write programs that fill memory, and create large files on disk, all to determine the behavior of their application under test in a hostile and unpredictable environment. Herbert Thompson describes and demonstrates new, cutting-edge methods for simulating stress that are more efficient and reliable than current industry practices. Using Windows Media Player and Winamp as examples, he demonstrates how new methods of fault injection can be used to simulate stress on Windows applications.
- Runtime fault injection as a testing and assessment tool
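Thompson's methods intercept calls between the running binary and the operating system, which is more than a short listing can show. As a simpler, source-level stand-in for the same idea, the C# sketch below injects I/O faults through a swappable writer interface so a test can observe how code copes with, say, a full disk. The interface and class names are invented for the example.

```csharp
using System;
using System.IO;

// Source-level illustration of fault injection: the code under test
// writes through an injectable interface, and the test swaps in an
// implementation that fails on demand (simulating a full disk).
interface IFileWriter
{
    void Write(string path, string data);
}

class RealFileWriter : IFileWriter
{
    public void Write(string path, string data) => File.WriteAllText(path, data);
}

class FaultyFileWriter : IFileWriter
{
    private int callsUntilFault;
    public FaultyFileWriter(int callsUntilFault) => this.callsUntilFault = callsUntilFault;

    public void Write(string path, string data)
    {
        // Inject the fault after the allotted number of successful calls.
        if (--callsUntilFault < 0)
            throw new IOException("Simulated fault: disk full");
        File.WriteAllText(path, data);
    }
}

class Demo
{
    static void Main()
    {
        IFileWriter writer = new FaultyFileWriter(callsUntilFault: 2);
        for (int i = 0; i < 5; i++)
        {
            try
            {
                writer.Write($"out{i}.txt", "payload");
                Console.WriteLine($"write {i}: ok");
            }
            catch (IOException ex)
            {
                // The test observes how the application copes with the failure.
                Console.WriteLine($"write {i}: {ex.Message}");
            }
        }
    }
}
```

Runtime injection at the OS-API level achieves the same effect without modifying or recompiling the application, which is why it works against shipping binaries like the media players Thompson uses as examples.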
Testing is a never-ending series of trade-off decisions: what to test and what not to test; when to stop testing and release the product; how to budget your testing resources between automated and manual testing; how much code coverage is good enough; and much more. To make these difficult judgment calls, we often turn to the "best practices" recommended by testing experts and others who have encountered similar problems. The key to successful implementation is matching their "best practices" to your own context (team make-up, company culture, market environment, etc.). Barry Preppernau shares insights gathered from over twenty years of testing experience at Microsoft. You'll learn about the tools and processes that have been successful within Microsoft and ways to identify, adapt, and implement successful test improvement initiatives within your organization.