scotartt on Sun, 16 Dec 2001 17:14:12 +0100 (CET)


<nettime> [ CRYPTO-GRAM, December 15, 2001]

A security analysis of ID cards as proposed for the USA in light of recent
events.  It shows that, from a technical point of view, ID cards are
ineffective at achieving what is claimed for them.  Also a little piece
about legal enforcement of Internet security, plus some other
security-related matters.


----- Forwarded message from Bruce Schneier <> -----

Date: Sat, 15 Dec 2001 13:57:12 -0600
From: Bruce Schneier <>
Subject: CRYPTO-GRAM, December 15, 2001


               December 15, 2001

               by Bruce Schneier
                Founder and CTO
       Counterpane Internet Security, Inc.

A free monthly newsletter providing summaries, analyses, insights, and 
commentaries on computer security and cryptography.

Back issues are available at 
<>.  To subscribe, visit 
<> or send a blank message to

Copyright (c) 2001 by Counterpane Internet Security, Inc.

** *** ***** ******* *********** *************

In this issue:
      National ID Cards
      Judges Punish Bad Security
      Crypto-Gram Reprints
      Computer Security and Liabilities
      Counterpane News
      The Doghouse:  The State of Nevada
      Fun with Vulnerability Scanners
      Comments from Readers

** *** ***** ******* *********** *************

               National ID Cards

There's loose talk in Washington about national ID cards.  Although the 
Bush administration has said that it is not going to pursue the idea, enough 
vendors are scurrying to persuade Congress to adopt it that it is 
worth examining the security of a mandatory ID system.

A national ID card system would have four components.  First, there would 
be a physical card that contains information about the person: name, 
address, photograph, maybe a thumbprint, etc.  To be effective as a 
multi-purpose ID, of course, the card might also include place of 
employment, birth date, perhaps religion, perhaps names of children and 
spouse, and health-insurance coverage.  The information might be in text on 
the card and might be contained on a magnetic strip, a bar code, or a 
chip.  The card would also contain some sort of anti-counterfeiting 
measures: holograms, special chips, etc.  Second, there would be a database 
somewhere of card numbers and identities.  This database would be 
accessible by people needing to verify the card in some circumstances, just 
as a state's driver-license database is today.  Third, there would be a 
system for checking the card data against the database.  And fourth, there 
would be some sort of registration procedure that verifies the identity of 
the applicant and the personal information, puts it into the database, and 
issues the card.

The way to think about the security of this system is no different from any 
other security countermeasure.  One, what problem are IDs trying to 
solve?  Two, how can IDs fail in practice?  Three, given the failure modes, 
how well do IDs solve the problem?  Four, what are the costs associated 
with IDs?  And five, given the effectiveness and costs, are IDs worth it?

What problem are IDs trying to solve?  Honestly, I'm not too 
sure.  Clearly, the idea is to allow any authorized person to verify the 
identity of a person.  This would help in certain isolated situations, but 
would have only a limited effect on crime.  It certainly wouldn't have 
stopped the 9/11 terrorist attacks -- all of the terrorists showed IDs to 
board their planes, some real and some forged -- nor would it stop the 
current anthrax attacks.  Perhaps an ID card would make it easy to track 
illicit cash transactions, to discover after the fact all persons at the 
scene of a crime, to verify immediately whether an adult accompanying a 
child is a parent or legal guardian, to keep a list of suspicious persons 
in a neighborhood each night, to record who purchased a gun or knife or 
fertilizer or Satanic books, to determine who is entitled to enter a 
building, or to know who carries HIV.  In any case, let's assume 
that the problem is verifying identity.

We don't know for sure whether a national ID card would allow us to do all 
these things.  We haven't had a full airing of the issue, ever.  We do know 
that a national ID document wouldn't determine for sure whether it is safe 
to permit a known individual to board an airplane, attend a sports event, 
or visit a shopping mall.

How can IDs fail in practice?  All sorts of ways.  All four components can 
fail, individually and together.  The cards themselves can be 
counterfeited.  Yes, I know that the manufacturers of these cards claim 
that their anti-counterfeiting methods are perfect, but there hasn't been a 
card created yet that can't be forged.  Passports, driver's licenses, and 
foreign national ID cards are routinely forged.  I've seen estimates that 
10% of all IDs in the US are phony.  At least one-fourth of the president's 
own family has been known to use phony IDs.  And not everyone will have a 
card.  Foreign visitors won't have one, for example.  (Some of the 9/11 
terrorists who had stolen identities stole those identities overseas.) 
About 5% of all ID cards are lost each year; the system has to deal with 
the problems that causes.

Identity theft is already a problem; if there is a single ID card that 
signifies identity, forging that will be all the more damaging.  And there 
will be a great premium for stolen IDs (stolen U.S. passports are worth 
thousands of dollars in some Third World countries).  Biometric 
information, whether it be pictures, fingerprints, retinal scans, or 
something else, does not prevent counterfeiting; it only prevents one 
person from using another's card.  And this assumes that whoever is looking 
at the card is able to verify the biometric.  How often does a bartender 
fail to look at the picture on an ID, or a shopkeeper not bother checking 
the signature on a credit card?  How often does anybody verify a telephone 
number presented for a transaction?

The database can fail.  Large databases of information always have errors 
and outdated information.  If ID cards become ubiquitous and trusted, it 
will be harder than ever to rectify problems resulting from erroneous 
information.  And there is the very real risk that the information in the 
database will be used for unanticipated, and possibly illegal, 
purposes.  There have been several murders in the U.S. that have been aided 
by information in motor vehicle databases.  And much of the utility of the 
national ID card assumes a pre-existing database of bad guys.  We have no 
such database.  The U.S. criminal database is 33% inaccurate and out of 
date.  "Watch Lists" of suspects from abroad have surprisingly few people 
on them, certainly not enough to make real-time matching against them 
worthwhile.  The entries carry no identifiers beyond a name and a country 
of origin, and many of the names are approximations or phonetic spellings.
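The matching problem is worth making concrete.  Here is a minimal sketch of
phonetic matching -- a simplified Soundex, written purely as an illustration,
not anything any agency is known to use:

```python
def soundex(name: str) -> str:
    """Simplified Soundex: one letter plus up to three digits, so that
    alternate spellings of the same spoken name collide."""
    codes = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
             **dict.fromkeys("DT", "3"), "L": "4",
             **dict.fromkeys("MN", "5"), "R": "6"}
    name = name.upper()
    digits = []
    prev = codes.get(name[0], "")
    for ch in name[1:]:
        if ch in "HW":          # H and W do not break a run of one code
            continue
        d = codes.get(ch)       # vowels get no code and reset the run
        if d and d != prev:
            digits.append(d)
        prev = d or ""
    return (name[0] + "".join(digits) + "000")[:4]

# Two transliterations of one name produce the same code:
soundex("Mohammed")   # 'M530'
soundex("Muhamad")    # 'M530'
```

Phonetic coding catches some spelling variants, but it also collapses quite
different names into the same code -- which is why name-only watch-list
matching produces both misses and false alarms.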

Even riskier is the mechanism for querying the database.  In this country, 
there isn't a government database that hasn't been misused by the very 
people entrusted with keeping that information safe.  IRS employees have 
perused the tax records of celebrities and their friends.  State employees 
have sold driving records to private investigators.  Bank credit card 
databases have been stolen.  Sometimes the communications mechanism between 
the user terminal -- maybe a radio in a police car, or a card reader in a 
shop -- has been targeted, and personal information stolen that way.

Finally, there are insecurities in the registration mechanism.  It is 
certainly possible to get an ID in a fake name, sometimes with insider 
help.  Recently in Virginia, several motor vehicle employees were issuing 
legitimate driver's licenses in fake names for money.  (Two suspected 
terrorists were able to get Virginia driver's licenses even though they did 
not qualify for them.)  Similar abuses have occurred in other states, and 
with other ID cards.  A lot of thinking needs to go into the system that 
verifies someone's identity before a card is issued; any system I can think 
of will be fraught with these sorts of problems and abuses.  Most 
important, the database has to be interactive so that, in real time, 
authorized persons may alter entries to indicate that an ID holder is no 
longer qualified for access -- because of death or criminal activity, or 
even a change of residence.  Because an estimated five percent of identity 
documents are reported lost or stolen, the database must be designed to 
re-issue cards promptly and reconfirm the person's identity and continued 
qualification for the card.

Given the failure modes, how well do IDs solve the problem?  Not very 
well.  They're prone to errors and misuse, and are likely to be blindly 
trusted even when wrong.

What are the costs associated with IDs?  Cards with a chip and some 
anti-counterfeiting features are likely to cost at least a dollar each, 
creating and maintaining the database will cost a few times that, and 
registration will cost many times that -- multiplied by 286 million 
Americans.  Add database terminals at every police station -- presumably 
we're going to want them in police cars, too -- and the financial costs 
easily balloon to many billions.  As expensive as the financial costs are, 
the social costs are worse.  Forcing Americans to carry something that 
could be used as an "internal passport" is an enormous blow to our rights 
of freedom and privacy, and something that I am very leery of but not 
really qualified to comment on.  Great Britain discontinued its wartime ID 
cards -- eight years after World War II ended -- precisely because they 
gave unfettered opportunities for police "to stop or interrogate for any 
reason."

I am not saying that national IDs are completely ineffective, or that they 
are useless.  That's not the question.  But given the effectiveness and the 
costs, are IDs worth it? Hell, no.

Privacy International's fine resource on the topic.  Their FAQ is excellent:

EPIC's national ID card site:

Other essays:

** *** ***** ******* *********** *************

          Judges Punish Bad Security

I have two stories with a common theme.

The first involves the U.S. Department of Interior.  There's an ongoing 
litigation between Native Americans and the U.S. Government regarding 
mishandling of funds.  After seeing for himself how insecure the 
Department's computers were, and that it was possible for someone to alter 
records and divert funds, a U.S. District Judge ordered the department to 
disconnect its computers from the Internet until its network is secured.

The second involves a couple of Web hosting companies.  One day, C.I. Host 
was hit with a denial-of-service attack.  They traced at least part of the 
attack to companies hosted by Exodus Communications.  C.I. Host sought an 
injunction against Exodus, alleging that it committed, or allowed a third 
party to commit, a DoS attack.  A Texas judge issued a temporary restraining 
order against three of Exodus's customers, forcing them to disconnect from 
the Internet until they could prove that the vulnerabilities leading to the 
DoS attack had been fixed.

I like this kind of stuff.  It forces responsibility.  It tells companies 
that if they can't make their networks secure, they have no business being 
on the Internet.  It may be Draconian, but it gets the message across.

On the Internet, as on any connected system, security has a ripple 
effect.  Your security depends on the actions of others, often of others 
you can't control.  This is the moral of the widely reported distributed 
denial-of-service attacks in February 2000: the security of the computers 
at eBay, Amazon, Yahoo, and others depended on the security of the 
computers at the University of California at Santa Barbara.  If Eli Lilly 
has bad computer security, then your identity as a Prozac user may be 
compromised.  If Microsoft can't keep your Passport data secure, then your 
online identity can be compromised.  It's hard enough making your own 
computers secure; now you're expected to police the security of everyone 
else's networks.

This is where the legal system can step in.  I like to see companies told 
that they have no business putting the security of others at risk.  If a 
company's computers are so insecure that hackers routinely break in and use 
them as a launching pad for further attacks, get them off the Internet.  If 
a company can't secure the personal information it is entrusted with, why 
should it be allowed to have that information?  If a company produces a 
software product that compromises the security of thousands of users, maybe 
they should be prohibited from selling it.

I know there are more instances of this happening.  I've seen it, and some 
of my colleagues have too.  Counterpane acquired two customers recently, 
both of whom needed us to improve their network's security within hours, in 
response to this sort of legal threat.  We came in and installed our 
monitoring service, and they were able to convince a judge that they should 
not be turned off.  I see this as a trend that will increase, as attacked 
companies look around for someone to share fault with.

This kind of thing certainly won't solve our computer security problems, 
but at least it will remind companies that they can't dodge responsibility 
forever.  The Internet is a vast commons, and the actions of one affect the 
security of us all.

Dept. of Interior story:

Exodus story:

** *** ***** ******* *********** *************

             Crypto-Gram Reprints

Voting and Technology:

"Security Is Not a Product; It's a Process"

Echelon Technology:

European Digital Cellular Algorithms:

The Fallacy of Cracking Contests:

How to Recognize Plaintext:

** *** ***** ******* *********** *************

       Computer Security and Liabilities

Some months ago I did a Q&A for some Web site.  One of the questions is 
worth reprinting here.

Question:  Your book, Secrets and Lies, identifies four parties who are 
largely responsible for network-based attacks: 1) individuals who purposely 
seek to cause damage to institutions, 2) "hacker wannabes" or script 
kiddies, who help proliferate the exploits of bad hackers, 3) businesses 
and governments who allow themselves to be exploited, jeopardizing the 
rights of their customers, clients, and citizens, and  4) software 
companies who knowingly fail to reform their architectures to make them 
less vulnerable.  If you were to impose blame, who -- in your opinion -- 
are the most liable of this group?

Answer: Allocating liability among responsible parties always depends on 
the specific facts; blame or liability cannot be assigned in general terms 
or in a vacuum.  For example, assume we are Coke and we have a secret 
formula.  We store it in a safe in our corporate headquarters.  We bought 
the safe from the Acme Safe Company.

One evening, the CEO of the company takes the secret formula out of the 
safe and is reviewing it on her desk.  Her phone rings and she is 
distracted, and while she is on the phone, the guy who empties the waste 
baskets sees the formula, knows what it is, steals it, and sells it to 
Pepsi for $1 billion.  Or there is a guy in the building who is an agent 
for Pepsi who has been trying for months to get the secret formula, and 
when he notices she has it out, he makes the phone call, distracts her and 
steals the formula.  Or the thief is a college kid who just wants to know 
the formula; he forms the same plan to steal the formula, reads it, and 
returns it with a note -- hah hah I know the formula -- and never sells it 
to anyone or tells anyone the formula.

Under criminal law, all three thieves are criminals.  The janitor or the 
kid may get off easier because they didn't plan the crime (often an 
aggravating factor) and the kid didn't do it for financial gain (often an 
aggravating factor).  Under tort law, the janitor and the agent for Pepsi 
would be liable to Coke for whatever damages ensued from stealing the 
formula.  It wouldn't really matter that the CEO could have been more 
careful.  Intentional torts usually trump negligence.  The kid is also 
culpable, but Coke may have no damages.  If Coke does have damages, laws 
about juveniles may protect him, but if he's an adult, he is just as liable 
as the Pepsi agent.  Thus, I see the hackers and the kids as the same in 
the question.  The only issues are the damage they cause and whether the 
kids are young enough to be protected by virtue of their age.  Furthermore, 
Coke may still want to fire the CEO for leaving the formula on her desk, 
but the thieves can't limit their liability by pointing to her negligence.

Now imagine that the formula is not stolen from her desk, but instead 
stolen from the safe supplied by the Acme Safe Company.  As above, the 
thieves should not be able to reduce their liability by saying the safe 
should have been tougher to crack.  The interesting question is, can Coke 
get money from Acme?  That depends on whether there was a defect in the 
safe, whether Acme made any misrepresentations about the safe, whether the 
safe was being used as it was intended, and whether Acme supplied any 
warranties with the safe.  If Acme's safe performed entirely as advertised, 
then they aren't liable, even if the safe was defeated.  If, on the other 
hand, Acme had warranted that the safe was uncrackable and it was cracked, 
then Acme has breached its contract.  Coke's damages, however, may be 
limited to the return of the amount paid for the safe, depending on the 
language of the contract.  Clearly, Coke would not be made whole by this 
amount.  If Acme made negligent or knowing misrepresentations about the 
qualities of its safe, then we are back in tort territory, and Acme may be 
liable for greater damages.

Figuring out who to blame is easy; assigning proportions of blame is very 
difficult.  And it can't be done in the general case.

** *** ***** ******* *********** *************


Remember the great Windows XP anti-pirating features?  Well, they didn't 
make a bit of difference:

The European Parliament is allowing anti-terrorist investigators to 
eavesdrop on private data on the Internet:

Cyclone is a "safe" dialect of C.  The goal is to be as C-like as possible 
while preventing unsafe behavior (buffer overflows, dangling pointers, 
format string attacks, etc.).

More on the full-disclosure argument:

This is what REAL software pirates look like:

The Federal Trade Commission is cracking down on marketers of bogus 
bioterrorism defense products:
The reverse is probably more important: consumer experts need to be warning 
government agencies to avoid bogus anti-terrorism products.

Here's some positive Microsoft news.  They have acknowledged a security 
mistake and apologized for it.  Now, if we can only get them to responsibly 
fix mistakes.

The FBI is talking about its key-logging technology, called "Magic 
Lantern."  Near as I can tell, it works something like Back Orifice: it 
infects a computer remotely, and then sniffs passwords and keys and other 
interesting bits of data for the FBI.  Nothing new, here.
The scariest bit of news revolves around whether anti-virus companies will 
detect Magic Lantern or ignore it.  I don't think that the anti-virus 
companies should be making decisions about which viruses and Trojans they 
detect and which they don't.  Aside from the obvious problems of betraying 
the trust of the user, there's the additional complexity of a mechanism for 
detecting malware and then not doing anything about it.  Any hacker who 
reverse engineers the anti-virus product can design a Trojan that looks 
like the FBI's Magic Lantern and escapes detection.
Latest news is that the anti-virus companies will detect it.

Interesting CERT white paper on trends in denial-of-service 
attacks.  Bottom line: the tools are getting cleverer.

Developments in quantum cryptography:

Search engines are getting smarter and smarter, finding files in all sorts 
of formats.  These days some of them can find passwords, credit card 
numbers, confidential documents, and even computer vulnerabilities that can 
be exploited by hackers.

Thirty countries have signed the cybercrime treaty:

A little-noticed provision in the new anti-terrorism act imposes U.S. 
cybercrime laws on other nations, whether they like it or not.

Excellent, excellent article on Microsoft's push for bug secrecy.  So good 
I wish I had written it:

A very interesting Web page from David Wheeler, giving measurements
that suggest open source operating systems may have an advantage with
respect to security:

Interesting Q&A with Gary McGraw on software security:

Heap overflows:

Bad news.  2600 DeCSS appeal denied; Felten SDMI suit rejected:

Good story about a company's physical security:

This isn't big enough for a doghouse entry, but it is funny enough to 
mention.  Norton SystemWorks 2002 includes a file erasure program called 
Wipe Info.  In the manual (page 160), we learn that "Wipe Info uses 
hexadecimal values to wipe files.  This provides more security than wiping 
with decimal values."  Who writes this stuff?

Security vulnerabilities in Nokia cellphones.

Microsoft bundling anti-virus software with the OS?

SatireWire has the best take on the recent arrests of software pirates:
There are two basic kinds of pirates.  There are the kids and hobbyists, who 
do this for fun and don't cost companies revenue.  And there are the 
professionals, who sell pirated software for profit.  My fear is that 
we're arresting the former while ignoring the latter.

Problems of PKI.  Last year the author railed against my "10 Myths of PKI" 
essay.  This is his retraction.  He learned the hard way: his PKI project 
failed and his PKI vendor is going under.

The Goner worm has come, been over-hyped by the anti-virus companies, and 
has gone; its Israeli authors have been arrested.
Here's an interesting footnote:  chat channel volunteers took over the IRC 
channel.  It's the first time I've ever heard of a peacekeeping force 
moving in to keep out digital insurgents.

** *** ***** ******* *********** *************

               Counterpane News

Happy Holidays to everyone.  Since I don't have your addresses, here's a 
virtual holiday card:

Counterpane has been chosen as one of ComputerWorld magazine's "Top 100 
Emerging Companies to Watch for 2002":

Schneier is giving the keynote speech at the CyberCrime on Wall Street 
conference on January 10.

** *** ***** ******* *********** *************

        The Doghouse: The State of Nevada

In the spirit of the Indiana bill that would have legislated the value of 
pi, the State of Nevada has defined encryption.  According to a 1999 law, 
Nev. St. 205.4742:

"Encryption" means the use of any protective or disruptive measure, 
including, without limitation, cryptography, enciphering, encoding or a 
computer contaminant, to:
1. Prevent, impede, delay or disrupt access to any data, information, 
image, program, signal or sound;
2. Cause or make any data, information, image, program, signal or sound 
unintelligible or unusable; or
3. Prevent, impede, delay or disrupt the normal operation or use of any 
component, device, equipment, system or network.

Note that encryption may involve "cryptography, enciphering, encoding," but 
that it doesn't have to.  Note that encryption includes any "protective or 
disruptive measure, without limitation."  If you smash a computer to bits 
with a mallet, that appears to count as encryption in the state of Nevada.

** *** ***** ******* *********** *************


We have an AES.  FIPS-197 was signed on November 26, 
2001.  Congratulations, NIST, on a multi-year job well done.

NIST special publication SP 800-38A, "Recommendation for Block Cipher Modes 
of Operation," is also available.  The initial modes are ECB, CBC, CFB, 
OFB, and CTR.  Other modes will be added at a later time.
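CTR mode is simple enough to sketch.  The following toy illustration shows
the structure -- encrypt a counter block, XOR the result with the data, and
note that decryption is the same operation.  The "block cipher" here is a
stand-in built from a hash (CTR only ever uses the cipher in the forward
direction); it is for exposition only and is emphatically not AES:

```python
import hashlib

BLOCK = 16  # bytes, matching AES's block size

def toy_block_cipher(key: bytes, block: bytes) -> bytes:
    """Stand-in for a real block cipher: a keyed pseudorandom function
    modeled with a hash.  Illustration only; use a real cipher in practice."""
    return hashlib.sha256(key + block).digest()[:BLOCK]

def ctr_xcrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """CTR mode: encrypt successive counter blocks and XOR the resulting
    keystream with the data.  Encryption and decryption are identical."""
    assert len(nonce) == BLOCK // 2
    out = bytearray()
    for i in range(0, len(data), BLOCK):
        counter = nonce + (i // BLOCK).to_bytes(BLOCK // 2, "big")
        keystream = toy_block_cipher(key, counter)
        chunk = data[i:i + BLOCK]
        out.extend(b ^ k for b, k in zip(chunk, keystream))
    return bytes(out)

key = b"sixteen byte key"
nonce = b"\x00" * 8
ct = ctr_xcrypt(key, nonce, b"Attack at dawn")
pt = ctr_xcrypt(key, nonce, ct)   # the same call decrypts
```

Note that, as with any stream-cipher construction, reusing a (key, nonce)
pair reuses the keystream -- which is why SP 800-38A spends so much care on
counter-block uniqueness.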

The Second Key Management Workshop was held in early November.  Information 
is available on the NIST Web site.

AES info:


SP 800-38A:

Key-management info:

** *** ***** ******* *********** *************

        Fun with Vulnerability Scanners

It used to be that when you connected to one of Counterpane's mailers, it 
responded with a standard SMTP banner that read something like:

         220 ESMTP Sendmail 8.8.8/8.7.5;
         Mon, 7 May 2001 21:13:35 -0600 (MDT)

Because this information includes a sendmail version number, some people 
sent us mail that read, loosely interpreted:  "Heh heh heh.  Bruce's 
company runs a stupid sendmail!"

Until recently, the standard response of Counterpane's IT staff was to 
smile and say "Yes, that certainly is what the banner says," leaving the 
original respondent to wonder why we didn't care.  (There are a bunch of 
reasons we don't care, and explaining them would take both the amusement 
and the security out of it all.)

However, we were getting a bit tired of the whole thing.  We have companies 
run penetration tests against us on a regular basis, and more often than 
not they complained that every one of our publicly available SMTP servers 
had the same stupid version of sendmail on it.

Then, we got the results of a vulnerability scanner run against our Sentry, 
a special-purpose device.  The scanner complained that 1) the Sentry's SMTP 
service produced a banner, and 2) SMTP banners usually contain version 
information.  Hence, there was a potential security vulnerability.  The 
banner in question was:

         220 natasha ESMTP Sentry

As you can tell, this banner contains no version information at all.  The 
scanner blindly alerts every time an SMTP server returns a banner.  This is 
the equivalent of those envelopes that say "YOU MAY ALREADY HAVE WON!" in 
big red letters on the outside.  You might have a vulnerability.  Probably 
not; but you never know: you're telling people something, and they might be 
able to get information out of it.

Unfortunately, RFC 821 *requires* an SMTP server to return a banner.  The 
original RFC calls for a banner that starts with 220 and the official name 
of the machine; the rest of the banner is up to the user.  It's traditional 
for servers that support ESMTP to mention this in their banner.  Now, many 
RFCs are more honored in the breach than in the observance, but in pure 
practical terms, if your SMTP server doesn't say something that starts with 
220, it won't work.  No banner, no mail.
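As a small illustration of what a scanner could check instead of alerting on 
every banner, here is a sketch that flags only greetings that actually carry 
a version number.  (The regex and function are hypothetical, written for 
this example; they are not anything a real scanner uses.)

```python
import re

# Heuristic: dotted number sequences like "8.8.8" are what leak product
# versions; a bare "220" reply code has no dot and won't match.
VERSION_RE = re.compile(r"\b\d+\.\d+(\.\d+)*\b")

def banner_leaks_version(banner: str) -> bool:
    """RFC 821 requires the greeting to start with 220; everything after
    the hostname is up to the server, and that's where versions leak."""
    if not banner.startswith("220"):
        raise ValueError("not a valid SMTP greeting")
    return bool(VERSION_RE.search(banner))

banner_leaks_version("220 ESMTP Sendmail 8.8.8/8.7.5")  # True
banner_leaks_version("220 natasha ESMTP Sentry")        # False
```

Under this check the Sentry's banner above would pass quietly, and only 
banners that volunteer a version string would trip the alarm.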

This means that it's impossible to avoid setting off the vulnerability 
scanner.  It is, however, possible to avoid actually giving out useful 
information.  There are a lot of approaches to this: the strong, silent 
type that our second example almost achieves (220 hostname ESMTP); the 
deceptive type, which our first example achieves (give out a banner that 
implies vulnerabilities you don't have -- for maximum amusement value, pick 
one from ancient times); the confusing type, which gives a different banner 
every time (some hosts do really funny versions of this).  However, none of 
these solves the basic problem of getting people to stop complaining, and 
the complainers are a much bigger problem for us than the attackers.

Attackers are going to figure out what SMTP server is running, regardless 
of its banner.  They can simply try all the vulnerabilities.  Therefore, 
you get rid of attackers by getting rid of vulnerabilities.  A lot of 
attackers are just running scripts, so you can reduce the annoyance value 
by running a banner that doesn't match the script, but almost any approach 
will achieve that.

The human beings who complain, however, are unwilling to beat on your SMTP 
server to figure out what it is.  Deceptive banners fool them reliably, 
wasting your precious time dealing with them.  Empty banners don't get rid 
of them reliably.  We have therefore moved to the amusing defense; our new 
banners read:

	220 This ESMTP banner intentionally left blank.

Scanners will still go off, but pretty much anybody can tell that this 
doesn't contain useful information.

Technically, this isn't RFC compliant, unless you name your host 
"This".  We would worry about this more if we hadn't already been running a 
single host under multiple IP addresses, each with a different name 
attached, each with exactly the same banner, hostname and all.  Nothing 
ever complained that the name in the banner didn't match the hostname.  No 
penetration test ever even noticed that all these "different" machines were 
the same, even when the testers complained about the informative banner 
that said so.  We figure "This" will do us just as well as a name.  (You could 
just as well put the hostname after the 220 if you feel compliant or use a 
mail system that cares.)

This is all very amusing, and it reduces letters of complaint that people 
actually bother to write, but it doesn't do a thing for scanners.  And it 
isn't just SMTP that the scanners complain about; no, they complain about 
SSH (it has a banner too, which is equally required), and they complain 
about our mail server accepting EXPN (it doesn't return an error; it 
doesn't return any information, either, but you have to look at the output 
to tell that).  They often complain that the Sentry accepts EXPN, even 
though it doesn't respond to the command at all.  All in all, the scanner 
output is all too much like my mail; the important bills are in danger of 
being buried by the junk mail.

This was written with Elizabeth Zwicky.  It appeared in the Nov 2001 Dr. 
Dobb's Journal.

** *** ***** ******* *********** *************

             Comments from Readers

From: Nathan Myers <>
Subject: Full Disclosure

Your message is consistent, effective, and helpful.  However, one remark 
you often repeat is being used to justify harmful practices, and even 
harmful legislation.  It plays into the hands of Microsoft and those like them.

In your essay on full disclosure, you wrote: "the sheer complexity of 
modern software and networks means that vulnerabilities, lots of 
vulnerabilities, are inevitable."  Microsoft's Scott Culp had written, "all 
non-trivial software contains bugs."  The difference between the two 
statements is probably too subtle for most of your readers.  As you say, 
almost all software vendors do very shoddy work, and most large systems are 
riddled with holes.  Still, the step from "almost all" to "all" is much 
larger than it might seem.

From the standpoint of a judge or legislator, this makes all the 
difference in the world.  If reliable software really cannot be written, 
then Microsoft and its ilk must be forgiven their sloppiness at the outset; 
it would be wrong to hold them to an impossible standard.  If in fact 
reliable software can be written, then such ilk are negligent in failing to 
produce it.

This is not an academic point.  It affects your argument, and 
Microsoft's.  If a software system will always be full of holes no matter 
how many patches are applied, publicizing holes just makes it harder for 
network administrators to keep up.  It is the availability of reliable 
alternatives that cinches the full disclosure argument: users can get off 
the patch treadmill by switching to software that's not buggy.  The extra 
work done to ensure reliability pays off when users switch to it, or find 
they needn't.  Full disclosure punishes the sloppy (and their customers) and 
rewards the careful (and their customers).

It doesn't take many examples of truly reliable software to make the point, 
in principle.  How many bugs remain in Donald Knuth's TeX?  In Dan 
Bernstein's qmail?  These were not billion-dollar efforts.

Once it's demonstrated that reliability is possible, getting it becomes a 
matter of economics.  Microsoft, rather than saying reliable software is 
impossible, is forced to admit instead ($40 billion in the bank 
notwithstanding) that they simply cannot afford to write reliable software, 
or that their customers don't want it, or, more plausibly, that they just 
can't be bothered to write any, customers be damned.

Instead of promoting a destructive fatalism about the software components 
we rely on, you would do better to say simply that current economic 
conditions lead most organizations to deploy systems known to be full of 
vulnerabilities.  Leave open the possibility that slightly different 
circumstances would allow for a reliable infrastructure.  Reliability is no 
substitute for effective response, but it just might be what it takes to 
make effective response possible.

From: "John.Deters" <>
Subject: Full Disclosure

There are known cases of thieves stealing credit card information from 
e-commerce Web sites.  The criminal who calls himself Maxus tried 
blackmailing CD Universe (and allegedly others) into paying him to not 
reveal the credit card data he had stolen; when that failed he used those 
stolen account numbers to commit theft.  He was never caught.  What if the 
exploit he used was the same exploit that was used by Code Red, or Nimda, 
or any of the other worms-of-the-week?  He's now out of business, as are 
any other un-named, un-identified, un-caught e-commerce thieves.

I think that exploits such as Code Red, while harmful in the short term, 
cause appropriate fires to be lit beneath the appropriate sysadmins.  They 
bring security issues to the radar screens of upper management.  I'm not 
condoning the use of malicious worms as security testers, but rather 
recognizing that their existence causes people to reevaluate their systems' 
security.  Maxus didn't release info regarding his exploit -- it was more 
valuable for him to steal.  How many other thieves were using these same 
exploits?  Without Code Red, perhaps hundreds of e-commerce Web sites would 
have remained unpatched, and would still be vulnerable to the professional 
thieves.  Perhaps many still are, but certainly not as many as there would 
have been without Code Red.

From: "Gregory H Westerwick" <>
Subject: Full Disclosure

Your discussion in the latest newsletter is really a special case of a 
debate that has been going on in democratic and open societies since their 
inception.  Recent examples in the national press are the FBI's warnings 
about impending terrorist activities and Governor Davis's warning about 
possible attacks on bridges in California.  Was the public better served by 
knowing of credible evidence of terrorist plans?  Did this elicit 
unnecessary panic?  What would have been the political fallout if there had 
been an attack, the state knew that it was likely and didn't warn the 
public?  Did the warning change the terrorist plans?  I don't think we will 
ever have a good answer to these questions, and will probably be debating 
the merits of each side into the next century.

Problem denial is not restricted to the computer industry.  Several years 
ago, I almost swallowed a piece of plastic while drinking a Pepsi.  It 
looked like a small piece of nozzle about the size of a fingernail.  I 
called the Pepsi consumer 800 number, told them about it and gave them the 
batch number from the can so they could go back through the quality records 
and determine if they had an endemic problem with their filling 
machinery.  The next day I got a very apologetic Pepsi rep on the phone 
thanking me for the information, and a week later received an envelope full 
of free six-pack coupons in the mail.

A few months later, the country had another round of "things in the can" 
lawsuits, where crackpots were allegedly finding mice, syringes, and other 
assorted evil junk in drink cans.  Shortly after that, at the behest of the 
bottling industry, came laws that made it a felony to falsely claim finding 
foreign objects in drink cans.  If I found that same piece of plastic in a 
can today, I'm not sure I would let Pepsi know, and they would have one 
less piece of information with which to improve their processes and 
machinery.  They would probably have one less customer as well.

From: "Charles L. Jackson" <>
Subject: Full Disclosure

I think that the responsibility for reliability and security should fall on 
the user.  If the lawyer wants to be protected against crashes, he could go 
to MS and negotiate a contract outside the shrink-wrap license.  He could 
go to a trusted supplier who would design a more reliable system (mirrored 
disks, off-site backups, version-by-version backups, automatic save) and 
who would warrant that system.  He could buy insurance.

MS's disclaimer of liability is necessary.  Maybe we should change the law 
and not let them disclaim the first $1,000 or first $1,000,000 of a loss.

Similarly, I think that much of the current and proposed law regarding 
hacking and intrusion into computer systems is counterproductive.  Today 
network administrators and CEOs can say "Our system was penetrated, and the 
FBI is after the criminals."  CEO's cannot say, "We left the merger plans 
and our secret list of herbs and spices on the table in the restaurant 
while we went to the bathroom, and the FBI is after the criminals who read 
the secrets."  I think that the current set of incentives lets the people 
with the final authority and the responsibility for security off the hook.

Thus, I propose that the law should be that, "If an outsider breaks into 
your computer system over the Internet or via a dial-up connection and 
steals something, they get to keep it."

A weakened variant of my rule might be: "If a script kiddie breaks into 
your system and steals something using a hole for which a patch has been 
available for 30 days, they get to keep it."

A third, still weaker variant is, "If someone logs into your system using 
either a manufacturer's default password or the user name, password pair 
(guest, guest) and steals something, they get to keep it."

My purpose with these policy proposals is twofold.  One, to strengthen the 
incentives for individuals and firms to both buy and use good 
security.  Two, to create an environment where the user or user 
organization understands that it bears the responsibility for security.

Notice that my argument does not apply to DOS attacks, physical intrusion,
etc.  But, if somebody can telnet to your server, log in, and have the
system mail them 500 copies of AC, 2nd Ed., that should be your problem, 
not Microsoft's or Oracle's.

From: Pekka Pihlajasaari <>
Subject: Full Disclosure

I feel that it is simplistic to consider vulnerabilities to be the result 
of a programming mistake.  This reduces the complex problem of correct 
systems development to an interpretation error in implementation.  I would 
suspect that more vulnerabilities are a result of incorrect requirements 
capture through specifications development, into design, and occasionally a 
programming bug.  Relegating the problem to programming means that the bug 
is identifiable through correct testing to meet requirements, whereas a 
higher-level error will not become visible without actually questioning 
system requirements.  The acceptance of a more holistic source of 
vulnerabilities will move emphasis away from the developer responsible for 
implementation and more into the hands of management responsible for a project.

From: Greg Guerin <>
Subject: Full Disclosure

Your article reminded me of two quotations:

"No one pretends that democracy is perfect or all-wise. Indeed, it has been 
said that democracy is the worst form of Government except all those other 
forms that have been tried from time to time."


"I know no safe depository of the ultimate powers of the society but the 
people themselves; and if we think them not enlightened enough to exercise 
their control with a wholesome discretion, the remedy is not to take it 
from them, but to inform their discretion."

I would argue that full disclosure is not perfect, and is even the worst 
form of reporting security flaws, except for all the others.  I would 
further argue that if ordinary people are unable to handle full disclosure 
with the necessary understanding and discretion, the remedy is not to keep 
the information from them, but to enlighten them.

And the authors of those two quotations?  None other than those infamous 
troublemakers and malcontents, Winston Churchill and Thomas Jefferson, 
respectively.

From: "Marcus de Geus" <>
Subject: Full Disclosure

I feel there is one aspect that is receiving too little attention in this 
debate: the effect that full public disclosure of software bugs would have 
on buyers of new computer software.

The current debate focuses on the security risk (whether perceived or 
actual) that results from publishing a security-related bug rather than 
keeping its existence under wraps for as long as possible.  Consequently, 
the central issue of the debate has become the risk that full disclosure 
brings *to* existing systems, rather than the risk constituted *by* these 
systems themselves.

Whereas the proponents of full disclosure consider the benefits of forced 
changes, i.e., software evolution, the champions of bug secrecy like to 
point out the threat that full disclosure brings to the large installed 
base.  The latter is a bit like arguing that the use of tanks in WWI was a 
waste of perfectly good trenches.

I have a sneaking suspicion that one of the parties in this debate has a 
hidden agenda, which is to divert attention from the effect that full 
disclosure of security bug information to *the general public* (as opposed 
to systems administrators, knowledgeable users, crackers, and script 
kiddies) would have on the sales figures of certain software.  In other 
words, the object of the current exercise is to prevent "evolution through 
natural selection" by avoiding normal market mechanisms.  (Now where did 
this come up before?)

Consider a scenario in which notices of defective software are routinely 
published in the daily press, the way recall notices from manufacturers of 
motorcars or household appliances are.  Considering the economic 
consequences of certain security defects, not to mention the risk to life 
and limb (e.g., in hospital IT systems), there is a case to be made for 
compulsory publication of such notices.

I suspect that the manufacturer mentioned most often in such notices (say, 
twice a week) would find himself having to either face plummeting sales 
figures or rapidly 
improve the quality of his products.  (In either case, the security 
problems at issue would be resolved, by the way.)

I wonder, could the studious avoidance of this subject be the result of 
fears that evolution may just prefer quality over quantity?

From: Mike Bursell <>
Subject: Window of Exposure -- area to volume

 >You can think of this as a graph of danger versus time,
 >and the Window of Exposure as the area under the graph.

It occurred to me that if we add another dimension -- "number of systems 
affected" or "vulnerable install base" -- we can move to volume, rather 
than just area.  This could be useful for corporations and communities to 
allow some degree of risk management via scenario analysis, particularly if 
trend analysis of how quickly companies get to the inflection point (if at 
all) is taken into account.
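The area-to-volume idea above can be sketched numerically.  This is a 
hypothetical illustration of the suggestion in the letter, not anything from 
the newsletter itself: the function names and all the sample figures 
(danger levels, install counts) are made-up assumptions.

```python
# Hypothetical sketch: "Window of Exposure" as the area under a
# danger-vs-time curve for one system, extended to a volume by weighting
# each time interval by the number of still-vulnerable installed systems.
# All numbers below are illustrative assumptions, not real data.

def window_area(times, danger):
    """Trapezoidal area under a danger-vs-time curve (one system)."""
    area = 0.0
    for (t0, d0), (t1, d1) in zip(zip(times, danger),
                                  zip(times[1:], danger[1:])):
        area += (t1 - t0) * (d0 + d1) / 2.0
    return area

def window_volume(times, danger, installs):
    """Extend area to volume: weight each interval by the average
    vulnerable install base during that interval."""
    volume = 0.0
    for i in range(len(times) - 1):
        dt = times[i + 1] - times[i]
        avg_danger = (danger[i] + danger[i + 1]) / 2.0
        avg_installs = (installs[i] + installs[i + 1]) / 2.0
        volume += dt * avg_danger * avg_installs
    return volume

# Days since disclosure, danger level, and unpatched systems remaining
# (all invented for the example):
times    = [0, 7, 14, 30]
danger   = [0.2, 0.9, 0.5, 0.1]   # rises at disclosure, falls as patches spread
installs = [1000, 900, 400, 50]   # vulnerable install base shrinking over time

print(window_area(times, danger))
print(window_volume(times, danger, installs))
```

Comparing two scenarios' volumes (e.g., a homogeneous network versus a 
heterogeneous one with a smaller vulnerable install base per flaw) is one 
way to make the risk-spreading argument in the next paragraph quantitative.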

Although we hear lots about the low maintenance costs of homogeneous 
networks or administrative domains, one way of attempting to reduce the 
risk on your systems is to spread the risk by having heterogeneous systems 
-- but quantifying this can be difficult.  This sort of analysis might make 
that quantification possible.

From: Martin Dickopp <>
Subject: Linux and the DMCA

On Thu, Nov 15, 2001 at 01:45:27AM -0600, Bruce Schneier wrote:
 > A new version of Linux is being released without security
 > information, out of fear of the DMCA.  Honestly, I don't see how
 > the DMCA applies here, but this is a good indication of the level
 > of fear in the community.

One of the bugs being fixed in the Linux kernel allowed users to circumvent 
local file permissions.  Kernel programmer Alan Cox probably assumes that, 
since local file permissions can be used to protect copyrighted material, 
disclosing details about the bug would be illegal under the DMCA.

Cox is a U.K. citizen who wants to be able to enter the U.S. without becoming 
a second Sklyarov.  Why should he undergo the trouble and cost to consult 
an expert on U.S. law about the issue at hand? Instead, he just stays on 
the safe side, understandably.

There seems to be little public awareness of the implications of the DMCA, 
because most people don't see themselves affected.  Therefore, the right 
reaction to the DMCA is not to ignore it, but to abide by it, even in cases 
where it can merely be assumed (without taking legal advice) to 
apply.  This is what Cox did, and I fully appreciate his reaction.

From: (Terence Green)
Subject: Microsoft Security Patches

XP is not the first time Microsoft has bundled a stealth update in a 
security patch.  Such actions seriously undermine confidence in Microsoft's 
understanding of security but rarely receive the attention they deserve.

Microsoft Security Bulletin MS01-046 (Access Violation in Windows 2000 IrDA 
Driver Can Cause System to Restart) dated August 21, 2001 patches a 
vulnerability in Windows 2000 (an unchecked buffer).


The new functionality is not mentioned in the bulletin itself but the link 
to the patch leads to a page bearing the following note:

"Note: This update also includes functionality that allows Windows 2000 to 
communicate with infrared-enabled mobile devices in order to establish a 
dial-up networking connection via an infrared port.  For more information 
about this issue, read Microsoft Knowledge Base (KB) Article Q252795."


Q252795 explains how, when releasing Windows 2000, Microsoft removed the 
ability to support virtual serial ports so that Windows 2000 would not 
"inherit limitations."

One effect was to orphan IrDA-enabled mobile phones with modems that could 
otherwise have been used with Windows 2000 to make dial-up connections.  It 
also prevented Palm organizers that were able to connect with Windows 98 
via IrDA from doing the same with Windows 2000.  The functionality in 
security patch MS01-046 addresses the mobile phone issue.

** *** ***** ******* *********** *************

CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, 
insights, and commentaries on computer security and cryptography.  Back 
issues are available on <>.

To subscribe, visit <> or send a 
blank message to  To unsubscribe, 
visit <>.

Please feel free to forward CRYPTO-GRAM to colleagues and friends who will 
find it valuable.  Permission is granted to reprint CRYPTO-GRAM, as long as 
it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier.  Schneier is founder and CTO of 
Counterpane Internet Security Inc., the author of "Secrets and Lies" and 
"Applied Cryptography," and an inventor of the Blowfish, Twofish, and 
Yarrow algorithms.  He is a member of the Advisory Board of the Electronic 
Privacy Information Center (EPIC).  He is a frequent writer and lecturer on 
computer security and cryptography.

Counterpane Internet Security, Inc. is the world leader in Managed Security 
Monitoring.  Counterpane's expert security analysts protect networks for 
Fortune 1000 companies world-wide.


Copyright (c) 2001 by Counterpane Internet Security, Inc.

----- End forwarded message -----


#  distributed via <nettime>: no commercial use without permission
#  <nettime> is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: and "info nettime-l" in the msg body
#  archive: contact: