Software Errors
Volume Number: 6
Issue Number: 11
Column Tag: Developer's Forum
Software Errors: Prevention and Detection
By Karl E. Wiegers, Ph.D., Fairport, NY
Most programmers are rather cavalier about controlling the quality of the
software they write. They bang out some code, run it through some fairly obvious ad
hoc tests, and if it seems okay, they’re done. While this approach may work all right
for small, personal programs, it doesn’t cut the mustard for professional software
development. Modern software engineering practices include considerable effort
directed toward software quality assurance and testing. The idea, of course, is to
produce completed software systems that have a high probability of satisfying the
customer’s needs.
There are two ways to deliver software free of errors. The first is to prevent the
introduction of errors in the first place. And the second is to identify the bugs lurking
in your code, seek them out, and destroy them. Obviously, the first method is
superior. A big part of software quality comes from doing a good job of defining the
requirements for the system you’re building and designing a software solution that
will satisfy those requirements. Testing concentrates on detecting those errors that
creep in despite your best efforts to keep them out.
In this article we’ll take a look at why the issue of software quality should be on
the tip of your brain whenever you’re programming, as well as discussing some
tried-and-true methods for building high-quality software systems. Then we’ll
explore the strategies and tactics of software testing.
Why Worry About Software Quality?
The computer hobbyist doesn’t think much about software quality. We write
some little programs, experiment with graphics tricks, delve into the operating
system, and try to learn how the beast works. On occasion we write something useful,
but mostly just for our own benefit. The “quality” of one-user, short-lived programs
like these doesn’t really matter much, since they’re not for public consumption.
The professional software engineer developing commercial products or systems
for use by his employer has a more serious problem. Besides the initial effort of
writing the program, he has to worry about software maintenance. “Maintenance” is
everything that happens to a program after you thought it was done. In the real world,
software maintenance is a major issue. Industry estimates indicate that maintenance
can consume up to 80 percent of a software organization’s time and energy. As a great
example of the importance of software maintenance, consider how many versions of the
Macintosh operating system have existed prior to version 7.0. Each new system
version was built upon the previous one, probably by changing some existing code
modules, throwing out obsolete modules, and splicing in new ones.
The chore of maintenance is greatly facilitated if the software being changed is
well-structured, well-documented, and well-behaved. More often, however, code is
written sloppily and badly structured. Over time, it degenerates into a tangle of
patches, fixes, and kludges until eventually you’re better off rewriting the
whole module rather than trying to fix it once again. High-quality software is
designed to survive a lifetime of changes.
What is Software Quality?
Roger Pressman, a noted software engineering author and consultant, defines
software quality like this:
Conformance to explicitly stated functional and performance requirements, explicitly
documented development standards, and implicit characteristics that are expected of all
professionally developed software.
A shorter definition is that a high-quality software system is one that’s delivered
to the users on time, costs no more than was projected, and, most importantly, works
properly. “Working properly” implies that the software must be as nearly bug-free
as possible.
While these are workable definitions, they are not all-inclusive. For example, if
you build a software system that conforms precisely to a set of really lousy
specifications, do you have a high-quality product? Probably not. Part of our job as
software developers is to help ensure that the system specs themselves are of high
quality (i.e., that the specs properly address the user’s needs), as well as building an
application that conforms to this spec.
A couple of other important points are implied in this definition. One is that you
HAVE specifications for the programs you’re writing. Too often, we work from a fuzzy
notion of what we’re trying to do. This fuzzy image becomes refined over time, but if
you’ve been writing code during that time, you’ll probably find that much of it has to
be changed or thrown out. Wouldn’t you rather think the problem through in detail up
front, and only code it once? I’ve discovered through years of practice that this
approach results in a much better product than if I just banged out the code on the fly.
Another implication is that “software” includes more than executable code. The
“deliverables” from a software development project also include the written
specifications, system designs, test plans, source code documentation, and user
manuals. Specifications and designs might include narrative descriptions of the
program requirements and structure, graphical models of the system (such as data
flow diagrams), and process specs for the modules in your system.
Software quality impacts these other system deliverables just as it affects the
source code. The quality of documentation is particularly important. Have you ever
tried to change someone else’s code without being able to understand his mindset at the
time he wrote it? Detailed documentation about the parts of the software system, the
logic behind them, and how they fit together is extremely important. But erroneous
documentation is worse than nothing at all, since it can lead you down a blind alley.
Any time the documentation and source code don’t agree, which do you believe?
There’s a compelling economic incentive for building quality into software. The
true cost of a software development project is the base cost (what you spend to build
the system initially) PLUS the rework cost (what you spend to fix the errors in the
system). The rework cost is rarely figured into either the time or money budget,
so many projects cost much more to complete than expected and soak up still more
money as work is done to make the system truly conform to the specifications. In
too many cases, the project is delivered too late to be useful, or not at all.
Software Quality Assurance
Software quality assurance, or SQA, is the subfield of software engineering
devoted to seeing that the deliverables from a development project meet acceptable
standards of completeness and quality. The overall goal of SQA is to lower the cost of
fixing problems by detecting errors early in the development cycle. And if your SQA
efforts prevent some errors from sneaking into your code in the first place, so much
the better. SQA is a watchdog function looking over the other activities involved in
software development.
Here are some important SQA thoughts. First, you can’t test quality into a
product; you have to build it in. Testing can only reveal the presence of defects in the
product. Second, software quality assurance is not a task that’s performed at one
particular stage of the development life cycle, and most emphatically not at the very
end. Rather, SQA permeates the entire development process, as we’ll see shortly.
Third, SQA is best performed by people not directly involved in the development effort.
The responsibility of the SQA effort is to the customer, to make sure that the best
possible product is delivered, rather than to the software developers or their
management. SQA won’t succeed if it just tells the managers what they want to hear.
Testing is certainly a big part of SQA, but by no means the only part. Testing, of
course, is the process of executing a computer program with the specific intention of
finding errors in it. It’s nearly impossible to prove that a program is correct, so
instead we do our best to make it fail. Unfortunately, most of us perform testing quite
casually, without a real plan and without keeping any records of how the tests went.
Proper software testing requires a plan, or test script. It includes
documentation, sample input datasets, and records of test results. Instead of being
informal and ad hoc, good software testing is a systematic, reproducible effort with
well-defined expectations. We’ll talk more about good testing strategies later on.
Now let’s look at some goals of SQA for the various stages of structured software
development. No matter what software development life cycle model you follow, you’ll
always have to contend with requirements analysis, system specification, system
design, code implementation, testing, and maintenance, so these SQA goals are almost
universally applicable. For one-man projects, much of the formality of these stated
SQA goals is not needed. Instead, try to discipline yourself enough to meet the most
important aspects of the goals, while still having fun writing the programs.
Requirements Analysis
• Ensure that the system requested by the customer is feasible (many large
projects have a separate feasibility study phase even before gathering formal
requirements).
• Ensure that the requirements specified by the customer will in fact satisfy his
real needs, by recognizing requirements that are mutually incompatible,