The state of software quality is pretty abysmal today because the industry is too new and too arrogant to do what is required to provide better software quality. There are many measures of software quality: correctness, whether it does what the customer wanted; completeness, whether it does everything the customer expected; reliability, whether the software is available to do specific tasks for the customer; maintainability, whether the software is easy to modify and enhance; and usability, whether the software fits seamlessly into the customer's environment. And these are just the ones that tend to get the most press.
The key lesson that the software industry has not learned yet is that whether the software quality is "good" or not with respect to any of these views depends on the processes that surround the intellectual activities of design and coding rather than on the activities themselves. With the partial exception of maintainability, the quality of the software depends far less on the intellectual content that the developer invests in the software product than on the way the developer provides that intellectual content.
Of all the things we do as software engineers, the least important is writing programs.
Sadly, too many young people come to software development believing that writing 3GL code is what software development is all about. The end product of our endeavors is, indeed, a computer program. However, in the end all that program does is move bits from one pile to another. We will be judged as professionals based upon the quality of that program. And the quality of that program will depend not on what we wrote but on how we wrote it.
In the '80s I had the privilege of working in a premier software shop doing conventional procedural development. It was a premier shop because it produced software with nearly 5-Sigma defect rates* even though it was a commercial software shop. Just as important, the shop built that software with productivity that was integer factors better than published data for similar software from places like MS and IBM. The shop routinely collected data on virtually every developer activity. One of the more interesting statistics was that developers spent only about 5% of their time actually writing 3GL code to the point where it compiled without error. Nearly 40% was pure overhead: paid time spent on activities not directly related to a specific development project. The other 55% of their time was spent on all those niggling little processes that surrounded the coding. So actually mucking in 3GL code occupied less than 10% of their project time (5% of total time, measured against the 60% that was project work).
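The arithmetic behind that sub-10% figure is worth making explicit. A quick sketch, using the shop's percentages from above (the derivation is mine, not from the shop's reports):

```python
# Time breakdown from the shop's metrics, as fractions of total paid time.
coding = 0.05     # writing 3GL code to the point it compiled without error
overhead = 0.40   # paid time not tied to any specific development project
process = 0.55    # the peripheral processes surrounding the coding

# "Project time" excludes pure overhead.
project_time = coding + process          # 0.60 of total paid time

# Coding as a share of project time.
coding_share = coding / project_time     # ~0.083
print(f"Coding is about {coding_share:.0%} of project time")
# -> Coding is about 8% of project time
```

Eight percent of project time, i.e. comfortably under the 10% quoted in the text.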
Most shops spend a much larger fraction of their project time writing code. And they are doing something wrong, because I'll give substantial odds they don't come close to the combination of defect rates and productivity that we delivered. Getting all that peripheral process stuff right is what enables a shop to be a premier shop. A by-product of that is that one spends a lot less time on coding and a lot more time on thinking.
For the rest of this category I am going to focus on the correctness view of software quality because that is where I believe there will be a major disruption in the industry over the next decade. The reason for my assertion is that software users are beginning to figure out that the only thing that is broken in their lives today is software. The software industry could get away with poor quality for many years because software was perceived by others as an arcane discipline not unlike alchemy. But now software is as ubiquitous in people's lives as automobiles and household appliances. So the users of software see a very stark contrast between software and everything else. And they see that contrast constantly. And they see that quality contrast primarily in terms of correctness and reliability.
That means that defects are no longer acceptable as a by-product of some mystical wizardry. Users are starting to demand that software provide the same quality as everything else. Unfortunately, there aren't many competitive alternatives right now because almost all popular commercial software is crappy. So the pressure is only showing up indirectly, as in a rapid increase in product liability lawsuits that have demonstrated that the "as-is" warranty disclaimers for software aren't worth the paper they are written on. [Note that the recent UCITA initiative by the software industry was aimed at putting legal roadblocks in the way of consumer redress. But that's another story...]
The lack of alternatives, though, presents an enormous opportunity for competitive advantage. When an alternative product with better quality does show up in the market, it will enjoy a huge competitive advantage. More important, that advantage will last awhile, just as it did in the '80s when the PacRim kicked the crap out of Western manufacturing paradigms, because it takes a big process investment to catch up. This is exactly what happened with WinX vs. Linux. Linux gained market share unexpectedly quickly because it initially had better security and reliability. That forced MS into a major internal catch-up effort over the past several years. Even now, when MS has pretty much caught up, the market perception is that WinX is insecure and unreliable compared to Linux. That perception has cost MS some very serious money over the past few years.
The moral is: Don't be the one playing catch-up on software quality. The next few category posts go into more detail about what needs to be done.
* Sigma is a nonlinear scale for measuring software defect rates. 5-Sigma is roughly 0.2 defects per KLOC. This was the level for mission-critical software in the '70s and '80s; currently mission-critical software is pushing 6-Sigma (~0.01 defects/KLOC). The industry average for commercial software is currently slightly better than 4-Sigma (~2 defects/KLOC).
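To make those rates concrete, here is a back-of-the-envelope sketch using the defects/KLOC figures from the footnote. The 500 KLOC product size is an arbitrary example of mine, not a figure from the text:

```python
# Approximate defect densities from the footnote (defects per KLOC).
sigma_rates = {
    "4-Sigma (commercial average)": 2.0,
    "5-Sigma ('70s/'80s mission-critical)": 0.2,
    "6-Sigma (current mission-critical)": 0.01,
}

kloc = 500  # hypothetical product size: 500,000 lines of code

for level, rate in sigma_rates.items():
    print(f"{level}: ~{rate * kloc:g} defects shipped")
# 4-Sigma -> ~1000 defects, 5-Sigma -> ~100, 6-Sigma -> ~5
```

The gap is stark: at the commercial average a mid-sized product ships with on the order of a thousand defects, versus about a hundred at the 5-Sigma level the shop above sustained.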