Cameron: OK, what about the evolution of software development and testing? What are the highlights of the last twenty years for that?
Hung: We have advances that led to a better development lifecycle and configuration management. We have agile. We do continuous integration, we do zero-downtime deployment, and we have a lot of automation. From a people standpoint we have teams distributed all over the world, across borders, time zones, and households. And of course we also have crowdsourcing and outsourcing.
From the engineering standpoint we have a better process of work and management control, a better ability to automate testing, improved education, and things like that. We have better auto-update mechanisms. For example, you have a phone, an iPhone or Android, and the apps I am using push upgrades and updates automatically, unlike the old days, when you had to wait six months, or even twelve, to get an upgrade. All of that has changed.
We have better tools. We have full-speed Internet, virtual machines, and a range of commercial software. We have CRM, automated testing, exploratory testing, and others that are very available as well. If you talk about testing and test automation, testing practices have been pushed a lot earlier in the software development lifecycle. We have test-driven development, with tests for everything. And everyone becomes a tester. Developers become testers; they test the software they created. So developers are testers, users are testers, and testers are testers. There is a much wider range of people doing testing, and testing is very well recognized.
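The test-driven development mentioned here follows a simple rhythm: write a failing test first, then write just enough code to pass it, then refactor. A minimal sketch (the `slugify` function is a made-up example, not anything from the interview):

```python
# Step 1 (red): write the test first; it fails while slugify does not exist yet.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Agile Testing ") == "agile-testing"

# Step 2 (green): write the minimal code that makes the test pass.
def slugify(text):
    return text.strip().lower().replace(" ", "-")

# Step 3 (refactor): improve the implementation while the test keeps it safe.
test_slugify()
```

The point is the order: the test drives the code into existence, which is how developers end up being testers of their own work.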
We also have a lot of cloud-based resources and virtualization nowadays. Those help us get better software and infrastructure for testing and deployment, and they have replaced much of our dependence on traditional server configurations.
Exploratory testing is very well recognized and very well accepted now, in addition to automated testing. We now understand that there is a great challenge in dealing with the rate of change in test automation, and that it is a necessity to look at how we can improve the organization and handle everything so we can keep up. That is very well known today.
Cameron: Right. That is very cohesive, and it is impressive to be able to fit twenty years into that short a time. Very well done!
If you were to go back to 1993, a year before you cofounded LogiGear, what advice would you give yourself?
Hung: I actually am pretty happy about it, although there were a lot of ups and downs. I really enjoyed all of them: I was up some times, and I got kicked in the butt many times, so no regrets. Having said that, when I think about the work and the passion I have been part of over the past years, and the opportunity to work with Hans Buwalda and a lot of very smart and talented people, I wish I had known some things earlier so that I could have solved some problems earlier. That is the thing that I want to talk about. I got a lot of tests automated, and I wish I had understood that the creation of the tests has to be quicker and easier, and that it, too, should be automated. Today, the creation of automated tests is still a manual effort.
The second part of that is the updating of the test must be easier, and that should also be automated. We are still not there yet, and we are still realizing that.
The third part is that you have to do a lot of coding, or know how to write the right code and follow good practices in writing it, and even when the programming language is simple and English-like, all of that code needs to be reviewed. The review process must be easy and quick. The big problem before every release, and I want to mention this, is that we run thousands of automated tests at a time, and when you have all of these failures, oftentimes they are false positives, and it can be a mess just to clean them up. The analysis of these failed tests and false positives should be quick and easy, and it should also be automated.
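The automated failure analysis described here can be sketched in a few lines: group failed results by error signature so likely false positives (environment noise such as timeouts) are separated from failures that may be real bugs. The result format and the noise patterns below are hypothetical, just to show the idea:

```python
from collections import defaultdict

# Hypothetical patterns that mark a failure as likely environment noise
# rather than a product bug.
KNOWN_NOISE = ("connection timed out", "element not found: splash screen")

def triage(results):
    """Split FAILed (name, status, message) results into 'noise' and 'real'."""
    buckets = defaultdict(list)
    for name, status, message in results:
        if status != "FAIL":
            continue
        kind = "noise" if any(p in message.lower() for p in KNOWN_NOISE) else "real"
        buckets[kind].append((name, message))
    return buckets

results = [
    ("login_valid_user", "PASS", ""),
    ("checkout_total", "FAIL", "expected 19.99, got 21.59"),
    ("search_basic", "FAIL", "Connection timed out after 30s"),
]
buckets = triage(results)
```

Even a crude first pass like this shrinks the cleanup problem: out of thousands of failures, a tester only has to look closely at the "real" bucket.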
The final feeling is that I wish there was a lot more work put into the sharing of the test. The sharing of the tests must be quick and easy too. Those are the things that I wish I had known. I wish I had known a lot more of those, and then I’d have a solution to all of them by now. But that’s the thing with test automation and software development: It moves at such a fast rate that it’s hard to say what I would have changed.
Cameron: OK. What current trends in testing do you feel are, as of today, the strongest or most successful, and which ones have the biggest flaws, do you think?
Hung: Some of the technologies today, take testing using cloud-based resources, for example, really accommodate the management and testing of a lot of test environments and have made it all more accessible. So those are really good. Testing in an agile environment is another: the agile movement is great, but in my own opinion, we still have a lot of work to do on how to test effectively in an agile development lifecycle. Good testing practices and agile can be combined, and figuring out how to do that can be a lot of fun. Another one is the shift from tools to methods, such as test automation frameworks, xUnit frameworks, and test-driven development. One of the things that I am most passionate about in testing is test-design-centric automation. At the end of the day, even though we sell tools, you know very well that a tool can benefit from embedding a better test design method in it. That's what we did. I believe one of the greatest things a method can do is help you design the automated tests well.
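One way test-design-centric automation is commonly realized is keyword-driven testing: the test is written as readable action lines, and a small interpreter maps each action to code, keeping the test design separate from the automation. The action names and tiny dispatcher below are illustrative, not any particular tool's format:

```python
# A tiny keyword-driven interpreter: each action keyword maps to a function,
# so the test reads like designed steps rather than code.
state = {}

def enter(field, value):
    state[field] = value

def check_total(expected):
    # Hypothetical check: quantity times unit price must match the expected total.
    assert state["quantity"] * state["price"] == float(expected)

ACTIONS = {"enter": enter, "check total": check_total}

# The "test design": action lines a non-programmer could read and write.
test = [
    ("enter", "quantity", 3),
    ("enter", "price", 2.5),
    ("check total", 7.5),
]

def run(test):
    for keyword, *args in test:
        ACTIONS[keyword](*args)

run(test)
```

The design choice is that the action lines are the durable artifact: when the product changes, you update the small functions behind the keywords, and the test designs largely survive.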
With respect to the flaws: I think of the testing maturity model, based on the capability maturity model. You don't solve technology problems with process, especially heavyweight process. If I had to name another one, I think it is very important that we get some good metrics, whether they are business-oriented metrics or software-oriented metrics. You have to use them as part of your test philosophy so you can improve your ROI and your test efficiency, and the big selling point is being able to improve your methods for good testing.