magnitude. Methods that might be ranked as “best practices” for small programs of 1,000 function points in size may not be equally effective for large systems of 100,000 function points in size.
The second point is that software engineering is not a “one size fits all” kind of occupation. There are many different forms of software such as embedded applications, commercial software packages, information technology projects, games, military applications, outsourced applications, open-source applications, and several others. These various kinds of software applications do not necessarily use the same languages, tools, or development methods.
The third point is that tools, languages, and methods are not equally effective or important for all activities. For example, a powerful programming language such as Objective-C will obviously have beneficial effects on coding speed and code quality. But the choice of programming language has no effect on requirements creep, user documentation, or project management. Therefore, the phrase “best practice” must also identify which specific activities are improved. This is complicated because activities span development, deployment, and post-deployment maintenance and enhancements. Indeed, for large applications, development can take up to five years, installation can take up to one year, and usage can last as long as twenty-five years before the application is finally retired. Over a life span of more than thirty years, there will be hundreds of activities.
The result of these various factors is that selecting a set of “best practices for software engineering” is a fairly complicated undertaking. Each method, tool, or language needs to be evaluated in terms of its effectiveness by size, by application type, and by activity.
Overall Rankings of Methods, Practices, and Sociological Factors
In order to be considered a “best practice,” a method or tool has to have some quantitative proof that it actually provides value in terms of quality improvement, productivity improvement, maintainability improvement, or some other tangible factors.
Looking at the situation from the other end, there are also methods, practices, and social issues that have demonstrated that they are harmful and should always be avoided. For the most part, the data on harmful factors comes from depositions and court documents in litigation.
In between the “good” and “bad” ends of this spectrum are practices that might be termed “neutral.” They are sometimes marginally helpful and sometimes not. But in neither case do they seem to have much impact.
Although the author’s book Software Engineering Best Practices dealt with methods and practices by size and by type, it might be of interest to show the complete range of factors ranked in descending order, with the ones having the widest and most convincing proof of usefulness at the top of the list. Table 2 lists a total of 200 methodologies, practices, and social issues that have an impact on software applications and projects.
The average scores shown in Table 2 are based on the average of six separate evaluations:
1. Small applications < 1,000 function points
2. Medium applications between 1,000 and 10,000 function points
3. Large applications > 10,000 function points
4. Information technology and web applications
5. Commercial, systems, and embedded applications
6. Government and military applications
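The averaging described above can be sketched in a few lines of Python. The category labels follow the six evaluations listed, but the scoring function and the example scores are illustrative assumptions, not the author's actual data or tooling.

```python
# Illustrative sketch only: the category names mirror the six evaluations
# above, while the scores below are invented example values, not data
# from Table 2.

EVALUATION_CATEGORIES = [
    "small applications (<1,000 function points)",
    "medium applications (1,000-10,000 function points)",
    "large applications (>10,000 function points)",
    "information technology and web applications",
    "commercial, systems, and embedded applications",
    "government and military applications",
]

def average_score(scores):
    """Combine a method's six per-category scores into one overall score."""
    if len(scores) != len(EVALUATION_CATEGORIES):
        raise ValueError("expected one score per evaluation category")
    return sum(scores) / len(scores)

# Example: a hypothetical method that helps small projects more than large ones.
overall = average_score([9.0, 7.5, 5.0, 8.0, 6.5, 6.0])
print(overall)  # mean of the six category scores
```

Ranking methods by such an average, rather than by any single category, is what allows Table 2 to present one ordered list even though a given practice may score well for small projects and poorly for large ones.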
The data for the scoring comes from observations among about 150 Fortune 500 companies, some fifty smaller companies, and thirty government organizations. Negative scores also include data from fifteen lawsuits. The scoring method does not have high precision, and the placement is somewhat subjective. However, the scoring method does have the advantage of showing the range of impact of a great many variable factors. This article is based on the author’s two recent books: Software Engineering Best Practices