Pam Hardy's goal in this article is to relay her experience as a new tester, in the hope that some of her perspectives will help other new testers navigate the waters of their new vocation.
Six months ago, I was guiding a five-day river trip, teaching archeology on the San Juan River in southern Utah. Things change. Now I'm starting my second project as a software tester for a Silicon Valley startup! I'm not alone; I suspect that as the world of software development outgrows its pool of technically trained employees, more people like me are finding opportunities in this industry. My goal for this article is to relay my experience as a new tester, in the hope that some of my perspectives will help other new testers navigate the waters of their new vocation.
First, let me describe my background and my current role a little more thoroughly. I am not totally inexperienced in the computer world. I grew up in Silicon Valley, and I was one of the first kids in school to turn in papers typed on a word processor rather than on a typewriter (it was, after all, way back in the 1900s). I also spent six months doing telephone technical support for a large computer manufacturer, which gave me some insight into users' perspectives. I am not a developer—I've never written a program in my life. However, I know what computers look like when they're working, and my phone support days did give me a hint about where customers tend to have difficulty. My job is black box testing of custom software applications; I'm also working on test planning for the next product and setting up the necessary test environment.
When I began in the testing world, I had no idea that many of the attitudes that served me well as a trip leader on the river would help me as a software tester. The number one carryover has been an appreciation of quality work. As the senior leader on an archeological river trip, I am ultimately responsible for everything—from the quality of the educational material (the stated requirements) to the presentation of dinner (the assumed usability standards). If any of the pieces fail to meet (or exceed) participants' expectations, it colors their experience of the entire trip. Software is very similar: great ideas have to be paired with great presentation if your product is going to satisfy customers. I've found that the hardest thing to pick up while starting as a tester has nothing to do with the technical details. It's learning what quality testing means to different members of the team.
How much feedback do these people really want?
As a new tester, I am acutely aware that my feedback could easily paint me as the "bearer of bad news," but I'd rather have my work regarded as helpful in making the product better. It can be hard on anyone to have his or her work scrutinized for the tiniest problem, and as a new person the last thing I wanted to do was get on the wrong side of the developers by seeming excessively nit-picky. But at the same time I know they are depending on me to be a fresh set of eyes. If that means that occasionally I'm seen as the naysayer, then that's the risk I'm prepared to take.
My primary challenge has been to learn what my role should be in reporting bugs and suggesting changes. Should I report only bugs that cripple functionality, or should I include usability issues? If I'm to include usability issues, should I report only serious ones, in which the intuitive customer action may result in a potentially big problem? Or am I supposed to report places where it just isn't clear how to perform a task?