Companies that build and sell products must focus on quality to maintain and grow their customer base. A company venturing into a new market may win customers through marketing gimmicks, hype and the like, but if its products are poor in quality, the gains will be short-lived because dissatisfied customers abandon the product. I was recently reading a report about a successful company: its responsiveness to customers' requests had gained it a foothold in new markets, yet those same customers reported the product's lack of quality to an embarrassed executive. Something needs to be done. Today, we often hear about product recalls even from famous companies like Toyota - the birthplace of lean development. It goes to show how the reputation of any successful company can be ruined by poor quality.
Achieving a high level of quality is the responsibility of everyone involved in product development - not just the development team (testers and developers), but also marketing and product planning folks. Yes, there are only 24 hours in a day and 7 days in a week. But time should be allocated to doing the right things, and to doing those things right - and that requires a change, which I discuss in this article.
1. Mindset: Quality before Feature
The traditional approach is to build a product with a set of features and then hand it off for testing. The testers find bugs - some minor, some severe, some even show-stoppers. Test results are handed back to the developers for fixing. This cycle repeats until the testers can no longer find new bugs and the developers have reduced the bug count to an acceptable level. This is what I call a Feature-Before-Quality (FBQ) mindset, and it is what most teams practice.
The modern approach is instead to develop a subset of features, test them, and assure that the product has reached a predetermined level of quality before working on the next set of features. This is what I call a Quality-Before-Feature (QBF) approach. Features are added incrementally to an already high-quality product, until the product contains enough features to make it attractive and valuable to customers. The benefit is that the product can be released at any time, because it always combines a usable feature set with high quality. Product planning can easily change requirements by deciding on a different set of features to develop. True, this is not so simple, but with a Quality-Before-Feature (QBF) mindset you at least have a chance to do so. With a Feature-Before-Quality (FBQ) mindset, the chances of effectively adapting to change are near zero.
The key to making Quality-Before-Feature (QBF) work is simply to maintain and advance the quality of the product and to continuously improve the product development process, with a keen eye on allowing no bugs.
2. Mindset: A Bug Is Anything That Bugs You
There are developers who say, "If it works, who cares how I do it?" This is simply unprofessional. We need to do much better than that. We need to achieve all-round quality for the product - there must be no bugs.
When I say no bugs, I define a bug as anything that will bug you later. Of course, if a test fails or the program crashes, it will bug you immediately; there is no argument about that. But I go a step further and broaden the definition of what bugs are. If you write bad code which nobody else on the team understands, the people reading the code after you will bug you. If you are a team leader and new members keep complaining that they do not understand what the code is doing, they are bugging you. If there is no documentation about the product or its design, people bug one another. Removing bugs according to this broader definition will help you remove those belonging to the narrower one.
The agile movement amongst other things has emphasized three practices: continuous integration, test driven development and constant refactoring. I will describe these practices from the perspective of achieving quality, i.e. reducing bugs.
- Continuous Integration (CI). Continuous Integration is not just about automating the procedures that create a build. Today's CI environments can and usually do incorporate automated tests, static code analysis for code complexity, code duplication and potential bugs, code execution coverage, the presence of comments, and so on. These help detect problems early and give the team a chance to fix them early.
- Test Driven Development (TDD). Writing test code before target code may not be easy, but it is something to strive for. At the very least, developers must deliver test code together with the target code and in the same code repository. A developer's job is not done until he has proven that his code works, and there is nothing better than writing his own test code. If that is difficult, it means that either the testability of the product is poor or the developers are just plain lazy.
- Constant Refactoring. There have been many debates about the merits of up-front design versus emergent design, which I will discuss in a minute. But for now, I am talking about constant refactoring as a way of tidying, cleaning and sorting the code - to improve maintainability and extensibility, and to make this a habit, as part of lean development's 5S (sorting, straightening, sweeping, standardizing, sustaining). All code must reside in its rightful place, and so should tests and documentation.
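To make the TDD discipline concrete, here is a minimal sketch in Python. The function and its names are hypothetical, invented for illustration; the point is that the tests are written alongside the target code and checked into the same repository.

```python
def parse_version(text):
    """Parse a 'major.minor' version string into a tuple of ints."""
    major, minor = text.strip().split(".")
    return int(major), int(minor)

# Tests delivered together with the target code, in the same repository:
def test_parse_version():
    assert parse_version("2.7") == (2, 7)          # plain case
    assert parse_version(" 1.0 \n") == (1, 0)      # tolerates whitespace
    try:
        parse_version("not-a-version")             # malformed input is rejected
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")

test_parse_version()
```

A developer who writes `test_parse_version` first is forced to decide up front what correct behaviour looks like, including the error case, before a line of target code exists.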
All the above practices - CI, TDD, constant refactoring - are there precisely to help you attain high quality while still being responsive through iterative development. Iterative development without them is not sustainable.
I have worked with a number of agile teams. They all practice iterative development in a Scrum-like manner, but they lack the willpower, and perhaps the competency, to apply CI, TDD and constant refactoring effectively. Outwardly they seem "agile", with sprints, stories and all, but peel off the outer layers and you still find the same old habits.
3. Mindset: Developers and Testers are One
Rightfully, there should be no distinction between developers and testers. Developers should test their own code; they should not need other people to test what they do. Does that mean there should be no testers? Let me clarify. I distinguish two kinds of roles:
- Quality Advisors/Coaches. Of course, there need to be some requirements persons who look at the product from a customer's or user's perspective. They give advice on what quality means in the context of the product, what the key usage scenarios are, and what the expected usage environment is. They help fill gaps in the requirements.
- Naive Testers. There is another group of people who have no better understanding of the product than the developers themselves. Their job is to test the system by following a set of test procedures; they are treated like robots. Some are more technically qualified and write automated versions of these test procedures. If the automated tests break, they fix them.
Given that they know little, naive testers need to bug customer/user representatives and developers to understand what needs to be tested. In my opinion, naive testers are not necessary: they are only doing what developers should be doing. Naive testers give the testing profession a bad name. In a number of organizations, testers are those who cannot code, and quality assurance people are those who cannot lead or manage - a dumping ground. But until developers take responsibility for testing their own work, such testers are a necessary evil (overhead).
A common problem I see is the use of user-interface-driven testing tools. Testers automate functional testing by executing UI events, when a better approach is to execute via internal APIs. UI-driven testing is highly unstable: a developer might change the colour, position or identifier of a UI element; a dialog box or tooltip might show up; a button click might respond more slowly than usual; a screen might be painted differently; and so on. All of these cause UI-driven test scripts to fail. A better and more effective approach is what is commonly known as below-UI testing. What is below the UI? The code - the code which developers write, and which developers should be responsible for testing. Because many developers do not test, many organizations build testing teams and departments and equip them with testing tools, albeit sometimes the wrong ones. This does have some positive effect on product quality, but primarily because the developers do not test!
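A small sketch of what below-UI testing means in practice. The business rule and its names are hypothetical; the point is that when logic lives behind an internal API, tests can call it directly and survive cosmetic UI changes.

```python
def discounted_total(prices, loyalty_years):
    """Hypothetical business rule: 5% off per loyalty year, capped at 25%."""
    discount = min(0.05 * loyalty_years, 0.25)
    return round(sum(prices) * (1 - discount), 2)

# A UI-driven test would script clicks through several screens just to reach
# this rule, and would break when a button moves or a tooltip appears.
# A below-UI test exercises the rule directly:
assert discounted_total([10.0, 20.0], loyalty_years=0) == 30.0
assert discounted_total([10.0, 20.0], loyalty_years=2) == 27.0   # 10% off
assert discounted_total([10.0, 20.0], loyalty_years=10) == 22.5  # capped at 25%
```

The UI layer still deserves a thin layer of smoke tests, but the bulk of functional coverage belongs at this level, where the developers who wrote the rule can own the tests.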
And not just that. Because there are people testing for them, developers have even less motivation to test. To complicate things further, the testing tools used by testers are often in a different programming language from the one the developers use, so tests written by either group cannot easily be passed between them, resulting in much duplication. When there is a requirement or design change, the separation between developers and testers creates a barrier: testers are not informed of changes, or cannot keep up with them, and the automated tests gradually become ineffective.
When I coach such organizations, there is strong resistance from developers to doing their own testing. People have been cleaning up after them for too long; they have been spoiled. My advice is for organizations to think seriously about how to merge developers and testers: motivate, encourage and teach developers to think like testers, and convert naive testers into developers, or into quality advisors/coaches who teach developers how to better test the product.
4. Mindset: Coverage! Coverage! Coverage!
Even with a testing team, there is no guarantee that there will be no bugs when the product ships. I believe we have all seen obvious bugs in products, even during important product demonstrations. So, what is the matter? It is about test design; it is about test coverage. Testing is really not simple. It requires good design and development skills as well as in-depth knowledge of the product domain. That is why I am not in favour of naive testers, other than as a transitional phase until developers do their own testing.
Testing is about spanning and covering the possible scenarios. This requires understanding the test input variables along the different dimensions of the product, including internal design and state variables. Even after identifying the possible cases, there is still the step of implementing tests that hit them. With a product designed without testability in mind, this becomes challenging.
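One way to make "spanning the dimensions" concrete is to enumerate the cross-product of the input dimensions so that every combination is exercised at least once. The pricing rule below is a hypothetical toy invented for illustration.

```python
import itertools

def shipping_fee(weight_kg, express, international):
    """Hypothetical pricing rule, used only to illustrate input dimensions."""
    fee = 5.0 + 1.0 * weight_kg
    if express:
        fee *= 2
    if international:
        fee += 10.0
    return fee

# Each dimension gets a small set of representative (boundary-ish) values.
weights = [0.5, 20.0]
flags = [False, True]

cases = list(itertools.product(weights, flags, flags))
assert len(cases) == 8          # 2 x 2 x 2 combinations, none forgotten

for w, express, international in cases:
    fee = shipping_fee(w, express, international)
    assert fee > 0
    if international:
        assert fee >= 10.0      # surcharge must always be reflected
```

For a handful of dimensions full enumeration is cheap; when the cross-product explodes, techniques such as pairwise selection keep the case count manageable while still spanning the dimensions.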
Some kinds of behaviour are notoriously difficult to reproduce and hence difficult to test: behaviours that take time, behaviours that are (or seem) non-deterministic, behaviours that span threads, processes and machines, behaviours that involve hardware, behaviours that involve large data sets, behaviours that involve concurrency, synchronization and locking, asynchronous behaviours, and more.
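Time-dependent behaviour is a good example of how a small design change restores testability. In this hypothetical sketch, the clock is injected rather than read directly, so a test can "advance" time without sleeping - controllability in action.

```python
import time

class SessionToken:
    """Hypothetical token that expires after a time-to-live."""

    def __init__(self, ttl_seconds, clock=time.time):
        self._clock = clock                         # injected for testability
        self._expires_at = self._clock() + ttl_seconds

    def is_valid(self):
        return self._clock() < self._expires_at

# Production code uses the real clock by default. A test supplies a fake one:
fake_now = [1000.0]
token = SessionToken(ttl_seconds=60, clock=lambda: fake_now[0])

assert token.is_valid()      # at t=1000, the token is valid until t=1060
fake_now[0] = 1061.0         # advance time instantly, no sleep() needed
assert not token.is_valid()
```

The same injection idea applies to the other hard cases: random seeds for non-deterministic behaviour, fake drivers for hardware, and substitutable queues or schedulers for asynchronous work.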
When coaching teams about testing, one of the first things I do is evaluate the product's testability and strive towards what I call "testability coverage". This means identifying the different kinds of tests needed and enhancing the product, its test environment and its tools to cover all of them. Only after testability coverage is achieved can you write automated tests, and thereafter consider how test data coverage can be achieved.
Testability is a quality of the product that measures how easy it is to test it:
- How easy it is to control the behaviour of the product - controllability
- How easy it is to observe the behaviour of the product - observe-ability
- How easy it is to isolate the problem if one is detected - diagnose-ability
Testability is also a quality of the test cases:
- How easy is it to prepare a test case?
- How easy is it to prepare the test data and test scenario/script?
- How quickly can each test case run?
- How robust are test cases to minor changes and noise in the test environment?
Testability is also about the organization of test cases and test results:
- How organized are the test cases? Can they be found easily?
- How easy is it to evaluate test coverage (from various perspectives)?
- How easy is it to interpret the results of test execution?
- How easy is it to infer the quality of the system from the test results?
As you can see, testing is a challenging endeavour. I encourage true developers (people who love code) to start thinking like real testers, to explore the test data, and to build testability into their products and test environments. When developers love to test and put on their thinking (testing) hat, you will see a remarkable change in your product's quality. I get an indescribable sense of pleasure when I see developers write their own tests and get it. They do not complain; they welcome the new challenge. It is like an ocean of knowledge opening up for them to devour. They ask questions that challenge me, and we solve them together. This is the kind of coaching I like to do, as opposed to playing the role of a mother who nags, "please test".
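One concrete exercise that provokes this tester's mindset is tracing which lines a test actually executes. Here is a minimal sketch using only Python's standard-library trace module; the function being tested is hypothetical.

```python
import trace

def classify(n):
    """Hypothetical target code with an easily-missed branch."""
    if n < 0:
        return "negative"
    return "non-negative"

def covered_lines(*inputs):
    """Return the set of line numbers executed while testing classify."""
    tracer = trace.Trace(count=True, trace=False)
    for n in inputs:
        tracer.runfunc(classify, n)
    # counts maps (filename, lineno) -> hit count for every executed line
    return {lineno for (_, lineno) in tracer.results().counts}

only_positive = covered_lines(5)        # never reaches the negative branch
both_branches = covered_lines(5, -1)

# The second run executes strictly more lines; the gap is exactly the
# untested branch that a coverage report would flag for the developer.
assert only_positive < both_branches
```

Seeing a concrete "this line never ran" is what pushes a developer to ask why, and then to work out how to control the code's behaviour so the missing line is exercised.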
I motivate this change by getting developers to look at the results of code coverage tools. These tools show which lines of code are not executed, which encourages developers to ask why, and later to think about how to control the code's behaviour so as to execute the lines they want. This also helps developers write better code, which is itself an important aspect of quality. It is not something a typical naive tester is capable of.
5. Mindset: Effective Separation of Concerns
I mentioned earlier the debate between proponents of big up-front design and those of emergent design (which is sometimes mistaken for no up-front design at all). I believe some initial analysis and design is necessary, and that it must be followed by regularly revisiting the design for further improvement.
But whatever your stand, you eventually want a good design. So, what is a good design? How do you distinguish a good design from one that is not? The answer is effective separation of concerns. Separation of concerns (SoC) is about organizing the product (especially the software part) into modules that overlap as little as possible. SoC is the motivation behind structured programming, object-oriented programming and development, aspect-oriented programming and development, analysis and design patterns, architecture patterns, and so on.
Achieving effective separation of concerns is not easy. In general, the first release of a product is undertaken by a group of fairly knowledgeable people. They have layers, tiers, components and interfaces; the product is still relatively small; they have an architecture description, and in general the team knows the internals of the product. So far so good. But over several releases, team members get replaced, code is stuffed into the existing code, and complexity (McCabe complexity, lines of code per method/function/file) explodes and the code becomes obscure. This is no good. The initial attempt at effective separation of concerns, which I call the primary architecture, is insufficient for later releases. There needs to be a secondary architecture to deal with extensions and plug-ins, keeping the core of the product intact.
Evolving a product over releases is about dealing with changes. There are different kinds of changes:
- Modification of an Existing Feature/Component. This first case is easy: you identify the impacted components and modify the code there. This is what most developers do. There are no new components to add, and the primary architecture, which the team knows, provides sufficient guidance.
- New Feature/Component. This is slightly more complicated. Nevertheless, the required change goes into a new component, so it does not impact the existing code much. But you have to be careful how you organize the components. For example, you might have 5 components in a layer, but over time 5 becomes 50. You will most likely need to create packages to organize the components and to achieve reuse between them effectively. The primary architecture needs to be updated to reflect the new organization.
- Extension of an Existing Feature/Component. This third case is tricky, as it deals with cross-cutting concerns: a feature needs to execute in a component whose scope cannot include that feature. This requires some kind of extensibility mechanism - a framework, a design pattern, an aspect-composition mechanism, and so on. I use the term secondary architecture to highlight the need to pay close attention to such cross-cutting effects.
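The third kind of change can be sketched with a simple extension-point mechanism. Everything here is hypothetical and deliberately minimal; the point is that a later release adds a cross-cutting feature without touching the core component's code.

```python
class Exporter:
    """Core component of the primary architecture: exports a report.
    It exposes an extension point (hooks) but knows nothing about
    the features that will later plug into it."""

    _post_export_hooks = []

    @classmethod
    def register_hook(cls, hook):
        cls._post_export_hooks.append(hook)

    def export(self, report):
        output = f"REPORT: {report}"
        for hook in self._post_export_hooks:   # the extension point
            output = hook(output)
        return output

# A later release adds auditing - a concern that cuts across exporting -
# as a plug-in, leaving the Exporter core intact:
def add_audit_stamp(text):
    return text + " [audited]"

Exporter.register_hook(add_audit_stamp)

assert Exporter().export("Q3 sales") == "REPORT: Q3 sales [audited]"
```

This is the secondary architecture in miniature: the core defines where extension may happen, and new features attach there instead of being stuffed into existing code.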
Trickier still is knowing the difference between the three kinds of changes above. Effective separation of concerns can be abstract, but please persevere. Once you get the hang of it, it becomes relatively simple.
Separating these three kinds of changes applies not just to requirements, but also to design and test. This is what is termed "use case modularity" - preserving the separation of concerns from requirements to code and test. Extensions and extension points apply not just to use cases, but to test cases as well.
6. Mindset: Stay Close To Your Real Users
The key motivation of user stories is to help the development team put themselves in the shoes of real users and to describe usage scenarios that are of value to those users. The same motivation applies to use case specifications, but use case specifications have embedded in them the constructs for achieving effective separation of concerns right at requirements and analysis time - basic flows, alternate flows, use case include, generalization and extension. This makes applying use case specifications challenging for some people, and user stories have overtaken them in popularity. If you have adopted user stories, I suggest adding the constructs for achieving effective separation of concerns. This will help you organize your product documentation and your tests, and increase their longevity. (I will discuss this in a separate article.)
Let’s get back to putting yourself in the shoes of real users. Consider the context in which a user is using the system: what he is trying to achieve, what information he has, and what skill level he has.
I have seen developers who build the product but only use or test the parts they are responsible for. It is necessary to look at the product with a fresh pair of eyes. This is not just a problem with developers or testers. I have seen many projects where the people responsible for the product (product planning, marketing, and the like) have never even used the product until a product demonstration (and even that is conducted by someone else). The consequence is that nobody in the product development team has an overall understanding of the product's features, constraints and assumptions. This is very dangerous.
Stay close to your real users. This can mean being physically close to them, but it is more about truly understanding how they use your product. Developers and testers who view the application day in and day out may be immune to the user's pain. So understand that pain. If users just put the product on the shelf, understand why. If they find something cumbersome, understand why. If they feel some features are great, understand that too. Where possible, get members of your team to sit with real users and see how they use the product. If possible, let developers use the product in a real environment. I am not saying every developer in your organization needs to do this, but at least some must.
Now, here is another catch: if you want to listen to your customers, be prepared to act on their feedback. Customers are usually not unreasonable, and you will learn to recognize the difference between a real need and a long shot.
7. It Is Not Easy, but You Have No Choice
So, what development approach are you using? Are you doing:
- extreme programming, or excuse programming?
- agile development, or fragile development?
- lean development, or lame development?
Are you and your team treating quality as non-negotiable? What does it take to come to this conclusion? Does it require a product recall, or an embarrassed executive?
Achieving high quality is not easy. It involves not just a focus on testing, but also a focus on requirements and architecture. It requires changes in individual habits and in a team's way of working. It takes hard work. It takes a fresh pair of eyes. It does not come for free. If you feel you are expending a lot of energy without getting the desired effect, then perhaps you are heading in the wrong direction; perhaps you need a change of mindset. Again, achieving high quality is not easy, but you need to start somewhere, sometime - and there is no better time than now. The earlier you start, the better.
I hope the mindsets articulated in this article help guide you and your team to start thinking differently about the interconnection and interdependency between quality and agile-lean product development.
About the Author
Pan-Wei Ng, Ph.D. is a firm believer in lean and agile development. He strives to improve quality and reduce waste. Dr Ng helps companies in Asia adopt and scale lean and iterative development and other practices. He believes in practicality and leadership by example; in fact, he recently lost 20 kg in 3 months by applying a lean lifestyle. As the Asia Pacific CTO of Ivar Jacobson International (www.ivarjacobson.com), he contributes to the innovation of software engineering practices. He co-authored Aspect-Oriented Software Development with Use Cases with Dr Ivar Jacobson and believes that aspects and effective separation of concerns are important enablers of rapid lean and agile development.