- automated for these test procedures. If the automated tests break, they fix them.
Because they know little about the product, naive testers must pester customer/user representatives and developers to understand what needs to be tested. In my opinion, naive testers are not necessary; they are only doing what developers should be doing themselves. Naive testers give the testing profession a bad name. In a number of organizations, testers are those who cannot code, and quality assurance staff are those who cannot lead or manage. These roles become a dumping ground. But until developers take responsibility for testing their own work, such testers are a necessary evil, an overhead.
A common problem I see is the use of user-interface-driven testing tools. Testers automate functional testing by executing UI events when a better approach is to exercise the internal APIs. User-interface-driven testing is highly unstable. For example, a developer might change the colour, position, or identifier of a UI element; a dialog box or tooltip might appear unexpectedly; a button click might respond more slowly than usual; a screen might be painted differently; and so on. Any of these causes the UI-driven test scripts to fail. A better approach is to conduct what is commonly known as below-UI testing, which is more effective. What is below the UI? The code: the code which developers write, and which developers should be responsible for testing. Because many developers do not test, many organizations build testing teams and departments and equip them with testing tools, albeit sometimes the wrong ones. This of course has some positive effect on product quality, primarily because the developers themselves do not test!
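To make the contrast concrete, here is a minimal sketch of a below-UI test. The `Cart` class and its methods are hypothetical stand-ins for an application's internal API; the point is that the test calls the code directly, so a renamed button, a slow repaint, or a surprise tooltip cannot break it.

```python
class Cart:
    """Toy stand-in for an application's internal cart API."""

    def __init__(self):
        self._items = []

    def add_item(self, name, price, quantity=1):
        # Business rule lives in the code, not in the UI layer.
        if quantity < 1:
            raise ValueError("quantity must be at least 1")
        self._items.append((name, price, quantity))

    def total(self):
        return sum(price * qty for _, price, qty in self._items)


def test_cart_total():
    # No UI element identifiers, screen coordinates, or timing
    # assumptions: only the API contract is exercised.
    cart = Cart()
    cart.add_item("book", 12.50, quantity=2)
    cart.add_item("pen", 1.25)
    assert cart.total() == 26.25
```

A test like this runs in milliseconds and survives cosmetic UI changes; only a genuine change to the cart's behaviour or its API will make it fail, which is exactly the kind of failure worth investigating.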
Not just that: because there are people testing for them, the developers have even less motivation to test. To complicate things further, the testing tools used by testers are often in a different programming language from the one the developers use. Tests written by developers or testers cannot easily be passed between them, resulting in much duplication. When there is a requirement or design change, the separation between developers and testers creates a barrier: testers are not informed of changes, or cannot keep up with them, and gradually the automated tests become ineffective.
When coaching such organizations, I see strong resistance from developers to doing their own testing. People have been cleaning up after them for too long; they have been spoiled. My advice is for organizations to think seriously about how to merge the developers and the testers: motivate, encourage and teach developers to think like testers, and convert naive testers into developers, or into quality advisors or quality coaches who teach developers how to better test the product.
4. Mindset: Coverage! Coverage! Coverage!
Even with a testing team, there is no guarantee that the product will ship without bugs. I believe we have all seen obvious bugs in shipped products, and even during important product demonstrations. So what is the problem? It is test design; it is test coverage. Testing is really not simple: it requires good design and development skills as well as in-depth knowledge of the product domain. That is why I am not in favour of naive testers, except as a transitional phase until developers do their own testing.
Anyway, testing is about spanning and covering the possible scenarios. This requires understanding the test-data input variables along the different dimensions of the product, including its internal design and state variables. Even after identifying these possible cases, there is still a step to implement tests to hit these