What Happens to the Code at the End of the Spike?
You’ve timeboxed your experiment. You have a proof of concept: a prototype or something you can show other people. They like it and say, “Wow, that looks great! Let’s use it. Pop it in the system and be done with it.”
You have to say, “No, it’s a proof of concept, an experiment. It’s got holes a mile wide in it. It doesn’t account for all of these cases,” and then you name several. “It needs real testing. I did an experiment.” Maybe you even experimented with another developer or tester, but the code isn’t done. If you put that code into the system, you’ll create technical debt up the wazoo. What do you do?
You can throw out the code. You can use the code as a basis for real development. Or you can refactor the code to get to something reasonable. It all depends on where you start. Any of those will change the estimate of how long the rest of the work will take.
Whatever you do, you will need to estimate how long the rest of the work will take. But you have the learning from the original timebox. You have feedback from your users or product owner, and from at least part of your team. You know enough to explain to the team what this feature will take and where the risks are, or you know enough to do another spike.
“Use the Code As Is…”
After a spike, I bet many of you hear, “Use the code as it is. It’s good enough. You can refactor it in the future.” That’s enough to make anyone concerned about planning for spikes.
Your code might be quick and dirty because you planned to throw it away. Your code might be a small proof of concept, implemented so narrowly that it would be very hard to refactor. You could do it, but the cost would be high.
The idea is that you use the learning from your spike, not your code. If you can use your code, great. But that’s not the goal.
How Many People Were Involved in the Learning?
The real question is this: How many people were involved in the spike? The more people in the spike, the more people were involved in the learning—and the more easily those people will be able to estimate the real work.
A couple of years ago, I consulted to a team about a difficult performance issue. Because their entire product was based on performance, they weren’t sure what to do. They thought it was a process problem, which is why they asked me to help.
In a sense, it was. I suggested they spike their problem and spend no more than one day as a team researching it. When they met the next day at their standup, they each had much more data. Now, they could begin formulating a solution.
They developed three candidate algorithms as developer pairs. That took two days. They developed five different test scenarios as a team, then automated the tests in tester-developer pairs. That took them two more days. They could now spend one more day running the tests against their candidate algorithms.
After one team-week, they had an answer. It was not the answer they expected, but they had the data they needed to know how to proceed. The entire team had been involved, so when the product owner said, “Just use the code,” the entire team could say, “No, we need to evolve the design and the tests,” and explain why.