Vinay Krishna explains why agile development includes testing and coding concurrently, which is also what test-driven development emphasizes. The transformation from coder to developer to tester is needed in all agile software development projects.
I started my IT journey as a coder and worked on many small and medium-sized software projects and products. In my first few years as a coder, I put most of my effort into writing code and implementing the required functionality. I tried to write code to the best of my ability, but most of the time I faced difficulties during and after QA and production releases. As a result I started stretching my working hours, along with my team, struggling to fix never-ending bugs. The whole team was working day and night, including weekends, and the output was still horrible. After every release, pressure on the development team ran very high. At the time I thought this was the result of bad estimates and poor planning. I raised this concern, and the next time I was given the time my estimate called for. To my surprise, it made very little difference. I kept stretching my working hours and ruined my personal life, as many of us do.
I am not trying to say that estimation and planning don't play a major role in the success or failure of a project. But even with proper estimation and planning, without a "developer" (not coder) mentality your adoption of an agile product development approach will fall far short. In this article I will highlight what I have learned about testing and coding from the point of view of an agile product developer.
Positive Testing
In my early days I did only positive testing, after writing the code. By positive testing I mean testing whether the functionality met the customer's requirements: I always supplied correct values in all the required fields and checked whether the new system gave the correct result. It looks funny now when I look back on it.
In those days I could not understand why someone would test by entering incorrect values, or follow steps the system did not support. My response was to spend more time training users and writing more detailed training material.
But I soon realized that positive testing alone was not the right approach, because many factors can violate the rules: users at the client end may change; people cannot always read and follow the steps in the user manual; the actual way of working may differ from the implementation; users may be more comfortable with an old legacy application; and, last but not least, plain old human error is always possible.
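The difference between the two styles can be sketched in code. This is a minimal illustration in Python (the article shows no code of its own); `parse_age` is a hypothetical function under test, not from the project:

```python
def parse_age(value: str) -> int:
    """Hypothetical function under test: parse a user-entered age field."""
    age = int(value)              # raises ValueError for non-numeric input
    if not 0 <= age <= 150:
        raise ValueError(f"age out of range: {age}")
    return age

def test_positive() -> None:
    # Positive test: a correct value in the required field gives the correct result.
    assert parse_age("42") == 42

def test_negative() -> None:
    # Negative tests: incorrect values must be rejected, not silently accepted.
    for bad in ("forty-two", "", "-5", "999"):
        try:
            parse_age(bad)
        except ValueError:
            continue              # rejection is the expected behaviour
        raise AssertionError(f"accepted invalid input: {bad!r}")

test_positive()
test_negative()
```

A coder who tests only `test_positive` ships code that still crashes or misbehaves on the inputs `test_negative` covers.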
Ad hoc Testing
I then started ad hoc testing, which was nothing but a small addition to positive testing: I did some negative or extra testing around functionality that I had found complex to implement. It was a bit better than positive testing alone, but I still struggled a great deal when integrating different modules or components and releasing them to QA and production.
I found that testing of the "whole" was missing or inadequate. I had improved my testing approach a little, but only in the case of "part testing."
Monkey Testing
To cover "whole" testing, I added another aspect to my approach. I started navigating through the various screens, checking the functionality with dummy, unformatted, and random inputs, and finding defects.
Basically it was just testing here and there, evaluating the application and trying to see whether accessing different functionality behaved as expected or caused any abnormalities. In fact it was nothing but getting a feel for the whole application by jumping around.
Later I learned that I was doing monkey testing, wow :). Whether it was sometimes dumb or smart monkey testing, I found it better than my previous approach.
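"Dumb" monkey testing can itself be automated. Here is a small Python sketch of the idea: hammer a handler with random, unformatted input and check only a broad invariant, not specific expected values. The `handle_input` function is a hypothetical stand-in for a screen's input handler:

```python
import random
import string

def handle_input(text: str) -> str:
    """Hypothetical handler under test: echoes trimmed input, rejects blanks."""
    cleaned = text.strip()
    if not cleaned:
        raise ValueError("empty input")
    return cleaned

def monkey_test(runs: int = 1000, seed: int = 0) -> None:
    """Dumb monkey testing: feed random junk and check the handler either
    rejects it cleanly (ValueError) or returns the trimmed text; any other
    outcome is a defect."""
    rng = random.Random(seed)
    for _ in range(runs):
        junk = "".join(rng.choice(string.printable)
                       for _ in range(rng.randint(0, 40)))
        try:
            assert handle_input(junk) == junk.strip()
        except ValueError:
            pass  # clean rejection of junk is acceptable

monkey_test()
```

A fixed seed keeps the random run reproducible, so any defect it finds can be replayed.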
Pseudo Unit/Integration Testing
To follow the organization's standards and best practices, I prepared unit- and integration-test documents in which I wrote the test cases along with their Pass/Fail status. In principle this is a very good practice, since it ensures that each piece of functionality has been well tested by the developer.
In practice, I observed the following with this approach:
- The coder still doesn't find much value in the document
- The coder normally prepares it only after all coding is complete
- Though the estimate allocates proper time for it, no coder pays it much attention; it is treated as a document to be produced for its own sake
- The coder spends almost all of the time allocated for unit testing on coding
- At the end, just before release, the coder prepares the document and marks every test case as Pass by default, without testing
- The coder writes test cases that do not cover all the scenarios
- The coder still relies on positive, ad hoc, and monkey testing depending on the scenario, and sometimes skips testing altogether
Transformation from Coder to Developer
I kept trying to improve, analyzing outcomes and impediments. The problem, I found, was my approach: I was focusing heavily on coding and very little on testing, when a balance between the two was required. Most importantly, I soon realized that this required changing myself, because no matter how elegant one's code is, if it cannot handle all possible positive and negative scenarios, the product has no real value.
I started respecting every aspect of testing and made it an essential part of my development work. This constituted my transition from coder to developer. I went through various sources to improve my approach, and fortunately a great person who had recently joined my organization encouraged me to learn about test-driven development (TDD). It was a totally new thing for me; I gathered more information about it and presented TDD to my team.
My first step towards TDD
I became a staunch supporter of TDD but was not sure where to start. Unfortunately, I could not use any tool from the xUnit family, since that would have required training for the team and extra time up front. But I was keen to start following TDD, so I discussed it with the team and we established the following rules:
- Write the unit test cases for a piece of functionality in the document before writing any code
- Always track changes to the document
- Mark each new test case as Fail, since no code has yet been written to implement that functionality
- Write just enough code to implement the functionality
- Run the unit test cases written for that functionality and update their status
- Use change tracking to verify that test cases were written first and tested later
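These rules are a manual version of the TDD red/green cycle. The same steps map directly onto code; here is a minimal Python sketch, where the `discount` function and its 10%-over-100 rule are hypothetical examples, not from the project:

```python
# Step 1 (red): write the test first. At this point it would fail,
# because discount() has not been implemented yet -- so its status
# in the document is recorded as Fail.
def test_discount_applies_ten_percent_over_100():
    assert discount(200) == 180
    assert discount(50) == 50   # no discount at or below the threshold

# Step 2 (green): write just enough code to implement the functionality.
def discount(amount: float) -> float:
    """Hypothetical rule: 10% off order amounts over 100."""
    return amount * 0.9 if amount > 100 else amount

# Step 3: run the test cases and update their status from Fail to Pass.
test_discount_applies_ten_percent_over_100()
```

In Python the test can be defined before the function because names are resolved when the test runs, which mirrors writing the test case in the document before any code exists.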
It was a real challenge to get the entire team to follow these rules, and as expected I met strong resistance from everyone. One strong objection from the team was: how can one write a test case without implementing the functionality? I had the same question initially, but it was misguided: writing test cases before development led us to think about the functionality in terms of the end user's expectations. Eventually we all agreed it made good sense to write tests first. Moving forward, we reviewed our progress after a couple of releases to find out whether it was really helpful and worth continuing. As it turned out, it was.
These were our findings after a couple of releases:
- Developers gained a better understanding of the functionality and could visualize its behavior more clearly. Because they had to write test cases before developing, they thought about the functionality in terms of the end user's expectations.
- Developers discovered additional test scenarios, both positive and negative, and implemented them in the code accordingly.
- Developers became more confident in their implementations because of the greater test coverage.
- After one or two releases, the team was able to identify gaps and fill them in subsequent releases. In one case, writing tests first surfaced a lack of business knowledge on our team.
The remaining challenge was regression testing: re-testing previously written code each time we responded to change, which took too long to be practical with our manual process. Even so, our releases became much more stable.
Using NUnit, a step towards automated unit testing
Until I started using NUnit, I was following traditional development methods on my projects. Eventually I got the chance to work on a project that was adopting agile development methods, and I got NUnit for automated unit testing. It was not easy to start using it, but I had already crossed the major hurdle, the change of mindset from coder to developer, so I did not find it too difficult to adopt. We also decided not to write NUnit test cases for all the existing functionality, because that would have required a lot of extra time; instead we wrote NUnit test cases only for new changes and implementations, and the suite gradually grew. A good thing about automated unit testing is that it feels less like testing and more like programming, and it eventually makes testing and code reviews much easier and faster. However, for UI-related testing, or wherever automated testing has limitations, I find the manual "first step toward TDD" approach more suitable and effective.
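NUnit is a .NET tool, and the article shows none of its code; as an illustration, here is the same xUnit-family style using Python's unittest module, for a hypothetical `add_item` function written as part of a new change (the function and test names are examples, not from the project):

```python
import unittest

def add_item(cart: list, item: str) -> list:
    """Hypothetical new functionality: return a new cart with the item added."""
    if not item:
        raise ValueError("item must be non-empty")
    return cart + [item]

class AddItemTests(unittest.TestCase):
    """xUnit-style tests written alongside the new change, not retrofitted."""

    def test_adds_item(self):
        self.assertEqual(add_item([], "book"), ["book"])

    def test_rejects_empty_item(self):
        with self.assertRaises(ValueError):
            add_item([], "")

    def test_does_not_mutate_original_cart(self):
        cart = ["pen"]
        add_item(cart, "book")
        self.assertEqual(cart, ["pen"])
```

Run with `python -m unittest`; in NUnit the equivalent would be a C# class with `[TestFixture]` and `[Test]` attributes. Because such tests are ordinary code, they can be re-run on every change, which is what made regression testing practical.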
Learning point
My development journey continues. But what I have learned, and find very appealing, about agile development is that it includes testing and coding concurrently, which is exactly what TDD emphasizes through its red/green/refactor mantra. The transformation from coder to developer is needed in all agile software development projects.