By Patrick Grey
August 8, 2006
THE 21st-century hacker has three options upon discovery of a
vulnerability in popular software: sell it to a security company; give
details of the bug to the company that makes the software; or sell it to
the criminal underground.
Legitimate security companies are bidding against criminal syndicates to
buy the hackers' handiwork, experts say. Security specialist iDefense
actively markets its links to independent bug hunters, offering top-dollar
to hackers for information it can pass to its vulnerable customers.
"If you had a serious vulnerability you may get, say, $10,000 from
iDefense. I guarantee you that if it's a big enough vulnerability, someone
who wants to use it for less (legitimate) purposes will give you $20,000
for it," says Steve Manzuik, a vulnerability research manager at US-based
firm eEye Digital Security.
"Can I say I have proof of this happening? No, but we hear the rumours."
The online theft of sensitive information - anything from your online
banking details to your tax file number - is a booming business. Identity
theft is easy money for the unscrupulous, and the problem will become
worse before it gets better, says iDefense research labs director Michael
Sutton. "To me, that's a really scary scenario," Mr Sutton says. "We're
going to keep seeing more and more of it because it's a business not just
for us, it's a business in the (criminal) 'underground'."
One underground source, speaking on condition of anonymity, says he'd been
offered cash by other hackers seeking vulnerability information. He
declined to sell information because "you don't know who you're dealing
with" but says it's a common occurrence, with up to $US15,000 being
offered. He adds he did not ask those who had sought information from him
what they would use it for because he did not want to know.
Although iDefense is a legitimate company, part of the Verisign group,
eEye's Mr Manzuik fears its purchases of information from individual
researchers foster a culture of competition between legitimate enterprises
and the criminal underground.
"The purchasing of vulnerabilities kind of gives legitimacy to the
underground; to guys who've always sort of done it on the sly," Mr Manzuik
says. "Now you can get a legitimate bidding war between some shady dude
and a legitimate company."
The community of enthusiast, amateur hackers that formed today's
legitimate bug-hunting industry has straddled the line between
transparency and secrecy for years. Independent researchers, frustrated by slow
responses from vendors they had reported bugs to, would often release
sensitive details of vulnerabilities to the public in an attempt to force
vendors to respond. These actions were often met with threats of legal
action, forcing many researchers underground.
Today the fight between legitimate hackers and software vendors is
increasingly open. Bugs are big business for everyone; Verisign's $US40
million acquisition of iDefense last year proved it.
It is certain that criminal syndicates are seeking the latest
vulnerability information to drive the software they use to harvest
identity information, but views differ on the disclosure practices of
legitimate researchers.
Bug hunter David Litchfield has been critical of software giant Oracle for
several years. When the database maker boasted in 2001 that its software
was "unbreakable", Mr Litchfield did some research and soon found 24
vulnerabilities, some critical.
The 30-year-old security researcher targeted Oracle because he thought the
world's biggest database vendor's ongoing "unbreakable" marketing campaign
- intended to persuade customers security was a top priority
- was deceptive.
Oracle's chief security officer Mary Ann Davidson conceded the next year
that "calling your code 'unbreakable' is like having a big bullseye on
your products and your firewall".
"You must admit, from a marketing standpoint, it has a punchy sound. It's
a lot better than 'Pretty Darned Good Security'," she said at the time.
Mr Litchfield says he is in business "to whip vendors into shape". He
whipped Microsoft by reporting dozens of vulnerabilities to the company,
which responded in 2003 by becoming a client of Mr Litchfield's family
business, NGS Software, which has its head office in Britain and recently
opened an office in Sydney. He says he is part of a movement that makes
security a boardroom agenda.
"I feel (I am) 0.01 per cent responsible for where Microsoft is today,"
says Mr Litchfield. "They are (now) the epitome of what good security
procedures and processes should be for a vendor."
Meantime, iDefense labs' Michael Sutton believes Oracle still has a "see
no evil, hear no evil, speak no evil approach to security".
"If nobody talks about a vulnerability, it doesn't exist," Mr Sutton says.
"So they don't want to work with the researcher, they want to quiet you,
they want to shut you up."
However, Oracle has not been able to silence Mr Litchfield. In a posting
to a security mailing list in February, he fired a salvo in an ongoing
security war. He had reported to Oracle an application server
vulnerability that allowed normal database users to gain administrator
privileges. He says he was distressed that Oracle's patches did not appear
to fix the flaw; Mr Litchfield circumvented each patch easily.
"This email will show that after four years of waiting for Oracle to try
to get it right, I eventually decided to take matters into my own hands,"
he wrote. "(I want to) provide Oracle customers with more help than Oracle
is currently doing. Oracle - shame on you."
Last year Oracle started issuing critical patches every three months, and
recently began using code-scanning analysis software.
Oracle defended its security approach in April. "Oracle's top priority is
to protect its customers," the company said. "We are continually
evaluating our security development processes, as well as looking at ways
to further strengthen our overall product security."
Last month Oracle rolled out its second-quarter batch of bug fixes,
covering 65 vulnerabilities, more than two dozen of them in its flagship
database software. The fixes span Oracle Database, E-Business Suite and
Application Server, as well as software acquired when it bought rivals
PeopleSoft and JD Edwards.
Steve Manzuik puts it simply: "I'd definitely say Oracle is the worst"
vendor for security fixes.
Contrasting with the pariah status of Oracle is Microsoft, a company
cursed in the 1990s for its lax attitude to security. The company has
reformed, says iDefense's Mr Sutton.
"Microsoft has taken it very seriously," he says. "I agree that they do
have things that they need to improve, and they always will. But they are
working very hard, they have people dedicated to it and have money
dedicated to it. And they're working very hard to work with the
researchers. What a 180-degree turn from where they used to be."
But resentment lingers between Microsoft and the hacker community. On
complaints from Microsoft delegates, Mr Manzuik says organisers of a
recent security conference threatened to cancel his speaking credentials
after he installed the Firefox Web browser on computers at the Microsoft
stand. On reflection he says it was childish and unprofessional: "I
wouldn't have been happy if someone messed with an eEye booth."
These experts are unified on one front: beyond threats to server and
operating system software, the biggest emerging threat to internet
commerce lies in the applications that run on desktops, they say.
"Finding those (application) bugs is a lot easier because that's where the
vulnerabilities have moved to," Mr Manzuik says. Consider the Mdropper
worm, which spread through a malicious Microsoft Word document. The Word
file arrived in an email inbox and, if opened, installed the Ginwui.C
trojan by exploiting a previously undisclosed security flaw in the word
processor. There was no fix for the flaw when the worm was released.
TODAY's most worrisome viruses don't disrupt PCs as Blaster and Code Red
did. Like Mdropper, they allow criminals to siphon information such as
internet banking passwords, says Mr Manzuik.
Armed with an application flaw, the virus writer has only to trick users
into opening a file or visiting a malicious website. It was once common to
issue a proof-of-concept to illustrate the vulnerability; anyone with the
code could use it against real targets on the internet. Researchers fear
that such code will fall into the wrong hands.
"We don't need more proof-of-concept code. If we look at historical data
on this, most large exploits that have been turned into (worms) have come
from proof-of-concept code," Mr Manzuik says.
Often security researchers released the code as a confrontational strategy
to get software makers to take action, Mr Manzuik says. And it alerted
tardy administrators to install security patches. Now, there's a
fraternity between the groups; everyone's on the same team.
"Vendors receive tremendous value from vulnerabilities; they're the only
ones that receive a benefit, but they don't pay for it," Mr Sutton says.
"(That's) never going to change until vulnerability discovery moves
earlier in the software-development lifecycle."
Developers and quality assurance researchers working in house at software
companies should be looking for security vulnerabilities, he says.
Microsoft encourages its coders to write secure software, rather than get
the product out the door as fast as they can, Mr Sutton says.
"They're switching to a model where their developers are getting dinged on
vulnerabilities that are found," he says. "In the end, money talks. If my
bonus is based on vulnerabilities in my software, all of a sudden I'm
going to wake up and go 'Oh, I actually need to learn this and pay
attention'."
However, Mr Litchfield says even that approach has its problems. "The
danger with that of course is you don't want to swing to the other extreme
where you inhibit innovation," he says. "(Where) developers are fearful of
doing something interesting simply because they're fearful of retribution
from their company."
Mr Litchfield says the status quo - security bug hunters unveiling flaws
that are later patched by vendors - actually works. "Patch and fix does
work and eventually what you get left with is a bloody secure product," he
says. "Take SQL Server 2000 for an example. When was the last time a bug
was found in that product? Probably 2003, or maybe early 2004?"
Pressure from the research community is what drives product improvements,
he argues. "It depends on when the product is released and how much
scrutiny it gets from the research community, and eventually you get left
with a secure product."
-- Additional reporting Nick Miller