Incentivizing Tester Behaviors, not Bounties: An Interview with Shaun Bradshaw

[interview]

Noel: Hi, this is Noel Wurst from TechWell, and today I’m speaking with Shaun Bradshaw, who is going to be giving the keynote at STARWEST on Thursday, October 3rd at 4:15, and the session is going to be called “The Bounty Conundrum: Incentives For Testers.” How are you doing this morning, Shaun?

Shaun: I’m doing well, Noel, thanks for asking.

Noel: I was really interested in speaking with you. I always enjoy learning what motivates people, what motivates this group as opposed to that group, and new ways to offer rewards. I noticed that the abstract for your session mentions that giving bounties as a reward can backfire, and you referred to that as the cobra effect. I was curious if you could go into what that is and what kind of detrimental effects rewarding bounties like that can have on a project or on a team.

Shaun: Sure. First of all, let me just mention where the name “cobra effect” comes from, and I will actually tell a more detailed story in the keynote, but essentially it comes from something that happened in India, when colonial Britain was in charge. One of the local governors felt there were too many cobras in the local townships, and to get rid of them, he established a bounty. If you brought in a cobra skin, you would be paid—basically enlisting the citizenry to help rid them of this problem.

The ultimate effect of this is … you are familiar with the law of unintended consequences? What people did, is they started raising cobras. They started cobra farms. In fact, people would bring in cobra skins, they would get paid and then they would immediately go to the cobra farm and buy more cobras to get the skins, to get paid to … you see where it’s going.

Essentially, they weren’t getting rid of the cobras, and what’s worse is, once the government found out about this cobra farming going on, they said, “No more, we are not paying for any more cobra skins, we are done with this program.” Now you have a market that has collapsed, and the cobra farmers have no more incentive to keep the cobras, so they released them. The net effect is essentially being worse off than where they started, and that’s the downside of a bounty.

How does that play into, say, a software development project, and particularly with testers? There are two things that I want to talk about in the keynote that I think have a similar negative effect. It’s not necessarily that more bugs could be created if we were to incentivize our internal testers this way. As a tester, I love finding bugs. In fact, just yesterday … I won’t mention the name, but I was on my mobile phone ... my iPhone actually, and I was on a website trying to book some travel arrangements, and instead of having the month listed, the drop-down showed “select=true quote 9” for September—they had left some code in the drop-down that was not supposed to be there.

That really excites me as a tester, finding bugs, and I think for some organizations it’s appropriate to incentivize testers to find bugs. But really, that’s an outcome, and what I’m more interested in doing is incentivizing behaviors. Behaviors like collaborating with the developers, or the business analysts, or the customers to ensure that every aspect of the project is as clear and as defect-free as possible. In the end, I’m not really looking for bugs; I’m looking for quality as it relates to that application, if that makes sense. What I really want to do is drive behaviors that lead to fewer defects, fewer bugs. If I incentivize finding bugs, then what I’m really doing is, number one, asking testers to potentially fake finding bugs.

This is not my expectation, but what behavior may come out of this is they may write up bugs that are actually bugs, but they may write them up in such a way that makes them more difficult to correct, so we get the bug and it’s not fixed appropriately and so another bug is found, et cetera.

I want to stay away from those types of behaviors, or incentivizing those types of behaviors, so really, I want to focus more on collaboration and communication. One other thing is, I think if we incentivize finding bugs, there tends to be a bit of a contentious relationship between testers and developers, and we’re not really doing a lot to try to get those two groups to cooperate more. So … I go into a lot more detail in the keynote, but hopefully that gives you a sense of where I’m going with this.

Noel: It does, definitely, and it leads to a question I had further down the line for you, but one that I would love to go ahead and know the answer to. I was reading a blog post the other day—the author was a developer, writing about testers—and this is a quote from him: “A good tester is yin to your yang, who doesn’t think like you do, who is desperate to prove you wrong, who feels personally responsible to the users if anything is missed.”

Then he said that developers and testers together are “a team of opposites.” It just rang with such a separatist view, and it didn’t seem to offer any possibility for the two of them to be thought of as the same. I was curious: is this part of the problem, that developers and testers think of themselves so completely differently that it’s almost hard for them to collaborate, because they see themselves coming from such different places?

Shaun: It’s an interesting thing, and for me, I actually completely understand where the blogger is coming from, but let me put a little different spin on it. When I think of yin and yang, I think of a natural balance—not so much an opposition of forces as forces in balance with each other. I think what can tend to happen is you get very strong personalities, or a lot of negativity, from either group. From the development side, “That’s not really a bug, you are just not using the system properly,” or from a testing perspective, “This is complete crap; we never get anything good.”

That’s a very negative way of looking at it, but in balance, the two together … I think that is an appropriate analogy. I’m not sure if that was the intent of what was being written, but I like that idea. And I’ll say this: when I’m talking to people who are interested in becoming software testers, the thing I like to share with them is that one of the key things that makes a good tester—part of the natural personality, the tester’s mindset—is a natural skepticism, but not pessimism.

I think sometimes you find testers who are just naturally pessimistic: “It’s never going to work.” That’s not the type of mindset that I want. What I want from my testers is, “Prove to me that it works. Here is what I’m going to look for, and if it works, great, but if it doesn’t, be aware that that’s what I’m going to be testing on.”

I actually like the idea of the balanced approach, and maybe yin and yang is a good way to do it, but I think there is still some cooperation that has to occur as well.

Noel: As I read back over the statement that the blogger made, everything fits in line with what you are saying, except for that line that says, “Who is desperate to prove you wrong.” That’s the kind of pessimism you were just talking about. If you are rewarding the collaboration and behaviors that are focused around creating a better product, I think you’d maybe eliminate some of that skepticism coming from the developers—at least in this person’s case—that testers are just there to try and prove you wrong.

Shaun: That’s, I think, a byproduct of how we were raised. I’ve been in QA and software testing for sixteen years now, and so I grew up, and a lot of us grew up, in organizations that were strictly waterfall, very siloed-type organizations. We always talk about people throwing the code over the wall and things like that, so even our language implied there was a separation. As we’ve moved toward agile concepts—even if organizations aren’t implementing agile per se, or Scrum, or lean, or any of that—I think there is more of a tendency nowadays for the cooperation to be there.

I remember some time ago doing a class, and one of the testers couldn’t believe what I was teaching them to do. She made a statement like, “I can’t talk to a developer about what I am going to test—then they’ll just code it so the test passes,” and I’m thinking, “Yeah, that’s what you want.” But that was the natural byproduct of this siloed nature between the two organizations.

Noel: Right, interesting. In regard to coming up with these better ways to incentivize testing, another thing that you mentioned was keeping the bonuses for testers a surprise. I was curious as to why that would motivate, or why that would be a perhaps more positive way to incentivize them?

Shaun: Not just testers, really anybody. There have actually been studies showing that, to a certain degree, bonuses can be counterproductive to how excited people are to do their work. When we think about a bonus, a reward of any sort, it implies that people are going above and beyond the level of work necessary just to get the job done.

What we have a tendency to do—and I’ll use a pop culture analogy here—is become very used to our bonus structure and the fact that, year in and year out, we are going to get a bonus, and it’s going to be relatively the same because the company is doing relatively the same and we are doing relatively the same work.

If you remember the movie Christmas Vacation, with Clark Griswold—it’s a hilarious movie, but it’s all based on the premise that he’s expecting this bonus because he has gotten the same thing every single year. So what’s the real incentive for him to do anything better or bigger, to think outside the box, or to take risks, when he expects it every time? That’s really why we want to keep it not a secret but a surprise. It should be a surprise in terms of when you get it, or how much you get, or some combination of the two, because in doing that, it makes you work a little bit more for it, I believe.

That’s actually not just what I believe; it’s what the studies have shown. And I put it to the test, of course, on testers: here at Zenergy, we did some experiments with one of our teams and how we did bonuses. I’ll tell some more stories during the keynote, but it was very interesting how the team reacted when they did not know how much the bonus was going to be, though they did know when it was going to be given. It was also peer-driven, so it was not based on my view of the team; it was based on the team’s view of themselves, so the winner was recognized by the team, not just by me. Contrast that with some of our people who got bonuses and how they reacted when a particular bonus was smaller than they really expected—I guess that’s the best way to put it. In the keynote, I go into what happened.

Noel: That’s interesting. Thinking back to Christmas Vacation, I never really thought about the entire movie being based around him waiting on that bonus. When he gets the smaller one, he just explodes, he goes crazy when it’s not the bonus that he’s had every single year, and there is no thought of, “What may I have done differently?” or, “What’s happening with the company?” He’s just so furious; it completely blinds him from acting rationally at all. I can see that happening very easily in a company.

Lastly, I wanted to ask you about something else you had said, which was that a great reward system is “safe to fail.” Again, that goes back to what we were just talking about: what happens when a reward system does fail, and how does that end up possibly being spun back into a future positive?

Shaun: What we mean by “safe to fail” is that as you are implementing bonuses, the potential for failure is actually pretty high. Surveys have been done, especially of IT workers, showing that 80 percent of IT workers believe they do above-average work. I don’t know how you describe average in that case, but on the bell curve, that doesn’t work. Somebody is going to be disappointed. How disappointed are they going to be, and how do you handle that?

That’s really what we mean by “safe to fail,” so what we can do is put certain mechanisms in place that make it safe to fail—for example, not having bonuses be a huge part of the expected compensation. That doesn’t mean you can’t get a huge bonus, but if part of my living wage requires that I get a big bonus, that’s not safe to fail. If for some reason I don’t get that bonus, now I’m not just disappointed, I’m unable to make a car payment or a house payment, or something like that.

That’s safe to fail from the perspective of the individual who receives the bonus. You can also flip it and ask, “What if we incentivize things in a certain way that allows for collusion?”—where individuals collude to make sure that one person gets the bonus, and it’s going to be this huge bonus, and they’ll actually share some of it with the others because (call was dropped, but then reconnected shortly after)

Shaun: … yes, so we need to protect the company as well from that type of activity. Again, keeping the bonus a surprise, keeping it so that it’s not necessarily a huge bonus—what we would tend toward is smaller, more frequent bonuses, so if something goes awry in how they are given out, we have, just like in agile, an opportunity to relatively quickly change the way we are doing it. That’s what we mean by that.

Noel: That is a really good idea as far as ... almost an iterative bonus.

Shaun: Exactly, and in our experiments here at Zenergy, that’s what we found. I’ll go ahead and share this little tidbit of information. We were driving incentives for behavior and did that over the course of four months, so every month there was a bonus given. In the first experiment that we ran with this new bonus structure, there was a pretty wide spread between the person who actually got the bonus—the winner, if you will—and the person at the bottom of the scale.

Over the course of the next three months, each time we ran the bonus, everyone shifted their behavior, and the difference between the winner and the person at the bottom became smaller and smaller. What I read from that was that I was incentivizing the team to be more like the best person, to bring everyone up together, as opposed to having someone just hang out there at the bottom.

What we really want to do is see the team, as a whole, increase in their abilities and their output. That was actually a very positive outcome of this bonus structure we put in place, which again, I’ll describe more fully in the keynote.

Noel: That should be a really interesting keynote, to have an entire room of testers like there will be at STARWEST. These all sound like really great ideas, but like you said, you are never going to make every single person completely thrilled with the way the bonus is doled out, so having an entire room of people to pitch this to—it seems like after it’s over, you can get a lot of really interesting feedback from the crowd there.

Shaun: I’m looking forward to it.

Noel: Thank you so much for speaking to me today.

Shaun: Yeah, Noel, I appreciate you giving me that opportunity to talk to you guys at TechWell.

Noel: Definitely. Again, this is Noel Wurst. I’ve been speaking with Shaun Bradshaw, and his keynote is going to be at STARWEST in Anaheim, California, on Thursday, October 3rd at 4:15, and that keynote is titled “The Bounty Conundrum: Incentives For Testers.” Thanks so much again.

Shaun: Thanks Noel, you have a good day.


For the past fifteen years Shaun Bradshaw has helped clients improve the quality of their software by advising, instructing, and mentoring them in QA and test process improvement. His focus on effective testing and test management techniques, as well as practical metric implementations, creates demand for him as a consultant and frequent speaker at major QA and testing conferences. Shaun is well known for his topics on test metrics, the S-Curve, and the Zero Bug Bounce.
