Next, you pop out of the elevator and walk down a ward, your device automatically connecting to three different Wi-Fi networks along the way. Now you are in a room with a family, and you need to pull up Dad's cardio study to show them why he has been rushed into emergency surgery. Emotions run high: you are stressed, and the family members are shocked, confused, and anxious, and they want to know immediately what is wrong. You don't want an app that crashes because it couldn't handle those transitions.
Now imagine requiring access at any of the points I described above. You may be at a transition point between network types or sources, or in an area with poor signal strength. Maybe you are walking down the hall getting a second opinion from a colleague. We tested these situations and developed code in our app to handle them well, and we tested many other mobile-specific scenarios heavily. There is a whole host of other things to consider: low batteries, syncing with PCs, and different lighting conditions. (Radiologists like the dark, but we had one radiologist test the software while he was in the park with his children under bright sunlight.) This project heavily influenced my mobile testing mnemonic, which outlines many of the mobile-specific models of coverage we used.
We recorded what we did with those kinds of scenarios using session-based testing, and we had to make sure our session sheets were detailed enough to be followed by another person. The session sheets also had to follow document management guidelines, with proper oversight, storage, titles, headers, footers, and so on. So we had to have really solid debriefs and do more editing on the actual documents than I was used to.
So we had a balance to strike: we needed to stay up to date on technology, remain compliant with our regulatory oversight, and always find a happy medium between the two. We had a lot of skilled people who could work efficiently, get up to speed on technology quickly, and collaborate closely on every aspect of development and regulatory compliance. That helped tremendously. We also used mobile technology to stay in touch and communicate while on the move, to record videos for demonstration purposes, and to record bugs. We could get a message with a video or screenshot to our device from someone doing field testing within seconds, and from there figure out how to address it.
JV: You mentioned that some of the areas the FDA was concerned about were bench tests and clinical trials. How did you pull off testing when you had to meet these multiple goals/marks?
JK: They were just other models of coverage to manage and complete. As a tester, I don't just look at the requirements and test off of them; that isn't nearly enough. I always use multiple models of coverage to get the most out of our testing efforts. I also tend to do lightweight risk assessments on most projects I'm on, which help us determine what models of coverage to use to mitigate those risks. The nice thing about an FDA or other regulated process is that this work is not optional, so you get more support for being thorough and creative. So in short, as a tester, I always have multiple goals/marks in my work; it was just far easier to sell when it was a requirement.

The clinical trials were fascinating to manage. Radiologists and others who do diagnostic work are amazingly talented; they would see a pathology instantly where I would see nothing out of the ordinary. It was an honor and a pleasure to work with them and learn from them. Their input helped our other types of testing enormously, especially user scenario testing. We got a sense of their emotional state, urgency, motivations, and fears when they use this kind of software, and those perspectives informed other areas of our testing, making it much more effective.