By Eric Allman, Sendmail
ACM Queue vol. 5, no. 1
A sad truism is that to write code is to create bugs (at least using
today's software development technology).
The really sad part is that at least some of these are likely to be
security bugs. We know how to ameliorate those security bugs (e.g., run
your program in a virtual machine), but that does not eliminate them.
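Amelioration here means confinement: even if the bug fires, the blast radius is limited. A virtual machine is the heavyweight version; a minimal sketch of the same defense-in-depth idea, assuming a POSIX system, is to run a possibly buggy tool in a child process with hard CPU and memory limits so it cannot monopolize the host (the function name and limits below are illustrative, not from the article):

```python
import resource
import subprocess
import sys

def run_confined(cmd, cpu_seconds=2, mem_bytes=512 * 1024 * 1024):
    """Run cmd in a child process with hard CPU-time and address-space
    limits, so a compromised or runaway tool is killed by the kernel
    instead of taking the host down with it."""
    def set_limits():
        # Applied in the child just before exec; POSIX-only.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
    return subprocess.run(cmd, preexec_fn=set_limits,
                          capture_output=True, text=True)

# A well-behaved command runs normally inside the sandbox.
result = run_confined([sys.executable, "-c", "print('ok')"])
```

This is containment, not a cure: the bug is still there, and a real deployment would add filesystem and network restrictions (chroot, namespaces, seccomp) on top.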
The corollary to this is that to distribute a program means that sooner
or later you face the possibility of having to manage a security
"incident." There is no "correct" way to handle a problem - a lot will
depend on hard-to-quantify factors such as how popular the program is,
the environments that it runs in, whether it is open or closed source,
whether it was discovered internally or externally, etc.
There are several approaches to dealing with security problems. Here are
a few of the more popular (but not necessarily good) ones:
Ignore the problem.
This one is very popular, which is surprising given that it doesn't
work. The logic is that maybe if you (the software producer) don't
publicize the problem, it won't get exploited. This approach is
especially popular if the bug is discovered in-house, even more so in
closed source software. But are you willing to stake your reputation on
it? If the problem is discovered by an outside security researcher, you
might be able to get away with claiming you didn't know about it, but if
it ever becomes known that you were hiding the problem, you will be
publicly eviscerated. This is not generally considered pleasurable.
A more common variant is to silently patch the problem in the next
release. This is certainly somewhat better and can sometimes even work
if your users are the rare ones who always update when you release a new
version. This is seldom the case, however, and almost inevitably some
significant set of your users will be running old versions. When the
problem does get discovered (and it nearly always does, eventually),
your users will often blame you for not telling them that they were
exposed. In particular, many customers (quite reasonably) subscribe to
the "if it ain't broke, don't fix it" philosophy - to get them to
upgrade, you have to give them a good reason.
Patch and announce without details.
This approach involves telling the world that there is a security
problem, but doing so without divulging any details. Security groups
(both outside researchers and those within your customers'
organizations), however, understandably like to be able to verify that
the problem was appropriately addressed. In general, not announcing at
least some details has the same effect as poking at a hornet's nest.
This isn't a simple binary decision - for example, few vendors will
release an exploit script, even though they developed one in the process
of fixing the bug. Announcing a security bug marks the beginning of a
footrace between the attackers' ability to exploit and the customers'
ability to patch, and you don't want to give the attackers a head start.
Patch with full disclosure.
Particularly popular in the open source world (where releasing a patch
is tantamount to full disclosure anyway), this involves opening the
kimono and exposing everything, including a detailed description of the
problem and how the exploit works. It doesn't necessarily mean releasing
an exploit script, but sometimes this is unavoidable, particularly if
the problem was discovered by one of the more aggressive security groups.
Announce without patch.
This isn't normally popular among vendors, but it is sometimes necessary
if the problem is already known or will soon be exposed. Such an
announcement (hopefully) comes with a workaround. The worst case, of
course, is a severe vulnerability with no known workaround that you are
forced to reveal. At that point you need to be thinking in terms of
damage control.
By the way, if the problem was found externally, it's usually good
practice to give credit to the people who found the vulnerability when
you make an announcement. Some companies don't like to admit that a
problem was found externally, but it is ultimately better to build a
good relationship with the security group, and credit is part of that.
Most groups will look at the same software over and over again, so you
are likely to hear from them more than once.
Another important consideration, regardless of the approach you choose,
is the timing of a security release. Again, there is no correct answer,
but there are a few things to consider. Your general rule should be:
"Release as soon as possible, but no sooner." If your product is closed
source, not a large target, and the exploit was discovered internally,
you probably have some breathing room to do the job right, including
producing good patch scripts and documentation, coordinating with other
affected vendors, etc. On the other hand, if you learned about the
exploit because some cracker group just released a fast-spreading worm
that uses your code to propagate and gives the attackers complete
read/write access to sensitive customer data, you will be on a much
tighter schedule, and you may have to cut some corners.
How severe is the exploit?
A bug that gives an external user full control of a machine is more
critical than one that allows the external user to break into the
account of another user who opened an attachment (which that user
shouldn't have opened in the first place). Breaking into even a simple
non-admin account, however, is generally still enough to turn that
machine into a spam zombie, even if it doesn't give access to customer
credit-card numbers, so don't be too complacent.
Is the bug discovered internally or externally?
If it is discovered externally, then you may not have the option of
choosing when the problem is announced. Some security groups will give
you a deadline and sometimes will work with you to do an orderly
release. None of them will agree to keep it quiet forever, and if you
try to stall, they will generally react negatively. If the group is
legitimate (i.e., one that isn't trying to blackmail you), then you can
usually negotiate, but only up to a point. Remember, even if you
disagree with them, most of those groups are on the right side. Treat
them with respect.
Are other vendors affected?
If your code is included in other distributions or you have OEMed it to
partners, then you owe it to your partners to give them a heads up
before you go public, even if they get only a small amount of time
(blindsiding your partners is never a good idea).
Is your code open or closed source?
Generally speaking, it is easier to find bugs in open source code, and
hence the pressure to release quickly may be higher. This isn't a
condemnation of open source code. Ross Anderson of Cambridge University
found that bugs will get discovered in any case; it's just a matter of
when ("Security in Open versus Closed Systems - The Dance of Boltzmann,
Coase and Moore," 2002). In other words, there really is no security
difference between open and closed source code, but the bugs will
generally get fixed sooner in open source code than in closed source code.
Have you tested your patch?
As with any bug, the obvious solution isn't necessarily the correct one,
and even if there is an active exploit in circulation, you aren't doing
your customers any favors by making them install a patch that doesn't
protect them, forcing them to do it again. As the saying goes, sometimes
when you're up to your ass in alligators, it's hard to remember that
you're there to drain the swamp, but do try to do the job right the
first time.
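One concrete way to do the job right is to turn the exploit input itself into a regression test, so the fix is verified against the actual attack and stays verified in future releases. A minimal sketch, with an entirely hypothetical patched parser (the original bug here is imagined as a length field that was trusted blindly; all names are illustrative):

```python
def parse_record(data: bytes) -> bytes:
    """Parse a record whose first byte declares the payload length.
    The (hypothetical) vulnerable version trusted the declared length;
    the fix rejects lengths that exceed the actual payload."""
    if len(data) < 1:
        raise ValueError("truncated record")
    declared = data[0]
    payload = data[1:]
    if declared > len(payload):   # the fix: validate, don't trust
        raise ValueError("declared length exceeds payload")
    return payload[:declared]

# Regression tests: normal input still works, and the original
# exploit input (oversized declared length) is now rejected.
assert parse_record(b"\x03abcdef") == b"abc"
try:
    parse_record(b"\xffx")       # the replayed exploit input
    exploit_rejected = False
except ValueError:
    exploit_rejected = True
assert exploit_rejected
```

Keeping the exploit input in the test suite also guards against the fix being accidentally reverted later, which is exactly the kind of repeat incident that erodes customer trust.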
How bureaucratic is your company?
Unfortunately, sometimes your own company can seem to turn into the
enemy. Like it or not, you have to work within the constraints imposed
by management. If they are totally clueless, consider showing them this
magazine. If that doesn't work, get your resume in order, and good luck.
ERIC ALLMAN is the cofounder and chief science officer of Sendmail, one
of the first open source-based companies.