Linux Vulnerabilities More Numerous And Severe Than Windows
Report: Linux Vulnerabilities More Numerous And Severe Than Windows
The report was Microsoft-funded, but researchers are providing the full methodology and challenging Linux advocates to prove them wrong.
By Michael Cohn, Security Pipeline
Red Hat Enterprise Linux ES 3 has more high-severity risks than Windows Server 2003, and users are exposed to them for a longer period, according to a report released Tuesday.
A draft of the report was released last month and quickly attracted controversy for its methodology as well as allegations of ties between Microsoft and its researchers.
The full report confirms that Microsoft funded the study, and is sure to prompt further accusations of bias. But the researchers are providing the full methodology and challenging other security experts to test the legitimacy of their results.
Richard Ford, a research professor in the computer sciences department at the Florida Institute of Technology's College of Engineering, and Herbert Thompson, director of research and training at Security Innovation, a security technology provider, conducted the study. They used the ICAT Metabase, a database of vulnerabilities from the National Institute of Standards and Technology to measure the severity of the various vulnerabilities identified over the course of 2004. The report also tabulated the "days of risk" from the time vulnerabilities were publicly identified to the time they were fixed.
The report drew criticism from Red Hat. The head of the company's Security Response Team, Mark Cox, said on his blog, "Red Hat was not given an opportunity to examine the 'Role Comparison Report' or its data in advance of publication and we believe there to be inaccuracies in the published 'days of risk' metrics. These metrics are significantly different from our own findings based on data sets made publicly available by our Security Response Team."
Researchers analyzed the two systems configured as Web servers with add-on software.
- Researchers found that Red Hat Enterprise Linux had 3,893 total days of risk across all vulnerabilities classified as high severity, compared to 1,145 for Windows Server 2003.
- The average days of risk per vulnerability were 71.4 for Red Hat Enterprise Linux, compared with 31.3 for Windows Server.
- The team also looked at the vulnerability of the two systems to a port scan. They found that Red Hat Enterprise Linux had 77 high-severity vulnerabilities in its default configuration compared to 33 for Windows Server 2003, out of a total vulnerability count of 174 for Red Hat vs. 52 for Windows.
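The "days of risk" figures above are straightforward to reproduce in principle: for each vulnerability, count the days between public disclosure and the vendor's fix, then sum and average. A minimal sketch of that calculation, using invented dates rather than the report's actual data:

```python
from datetime import date

# Hypothetical vulnerability records: (publicly disclosed, patched).
# These dates are illustrative only, not taken from the report.
vulns = [
    (date(2004, 1, 10), date(2004, 2, 14)),
    (date(2004, 3, 1),  date(2004, 3, 31)),
    (date(2004, 6, 5),  date(2004, 9, 20)),
]

# "Days of risk" for one vulnerability: days from disclosure to fix.
days_of_risk = [(fixed - disclosed).days for disclosed, fixed in vulns]

total_days = sum(days_of_risk)                 # the report's "total days of risk"
average_days = total_days / len(days_of_risk)  # the report's "average days of risk"

print(total_days, round(average_days, 1))
```

Note that the metric depends entirely on how "publicly identified" and "fixed" are dated, which is exactly where Red Hat says its own data disagrees with the report's.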
However, Thompson admitted that the relative severity of a vulnerability doesn't necessarily correlate with how much damage an attack can cause. "I have seen multiple instances where 15 low-severity vulnerabilities have been combined into an attack that would have done damage as bad as a high-severity attack," he said. He also cautioned that the "attack surface" of both systems could be reduced simply by turning on the firewalls that ship with both Windows Server and Enterprise Linux.
In addition, Thompson admitted that the vulnerability counts lumped together flaws in Linux itself with those in add-on open-source software: the Apache web server, the PHP scripting platform, and the MySQL database. The report noted, though, that MySQL had five vulnerabilities that took more than 90 days to fix.
One critic of the report said it's difficult to measure the relative severity of vulnerabilities.
"There are so many ways to rate vulnerabilities and severities," said Johannes Ullrich, chief technology officer of the SANS Internet Storm Center, a service that reports on security vulnerabilities. "It's hard to come up with an objective measure."
He also noted that a complete Linux distribution comes with a greater variety of software than Windows, making it larger, more complex, and more prone to vulnerabilities.
And the skills of the person running the system are extremely important to how secure that system is, Ullrich added. "No operating system is secure unless you know how to apply the patches, configure the passwords, and disable services you don't need. You can't rely on a single security measure. You have to use firewalls and such to build up layered defenses. If you don't do that right, any operating system is vulnerable," he said.
Thompson expects that he and his co-researcher will face accusations of bias toward Microsoft because the company funded the study. "One of the big issues was to get the methodology out there. We knew people would question the results because of Microsoft's involvement in funding," he said.
He and Ford submitted their research proposal to Microsoft, which evaluated it and decided to fund it. Thompson said the researchers also sent the methodology to various analysts, including Charles Kolodgy of IDC, and had it vetted by various academics as well as people at the RSA Conference.
Asked if the study would have been published if the results had come out in favor of Linux, Thompson responded, "They certainly gave us input but I'm sure the results would ultimately have been published no matter what the outcome was."
In the report, the researchers cited an earlier study by Forrester Research that also attracted a fair amount of criticism from Linux proponents. Thompson expects to hear reaction from them again. "I'm sure we'll get a fair amount of creative input based on who funded this study," he said. He pointed out, however, that Security Innovation has a wide range of clients, including Hewlett-Packard, Cisco, and IBM, and his aim was to encourage feedback from the technology community about how the methodology can be optimized for future studies. "Certainly I hope that when the criticism comes, it comes on the methodology and our acts instead of loud commentary on who funded this particular study," he said.
While the current study examines Windows Server and Red Hat Enterprise Linux in Web server configurations, Thompson and Ford plan to conduct future comparisons of database server and workstation roles.
How can you compare a Linux system running MySQL and acting as a server with a home system?
If Linux SERVERS are more vulnerable out of the box than M$ SERVERS, so what? Real Administrators are always going to customize them with the latest security techniques.
Comparing home Windows and Linux distributions out of the box would have had some meaning, because that's often just how home users leave them for months on end while surfing the net.
Finally, lumping statistics all together is a ridiculously unscientific methodology. 'Average' days until a security hole is plugged has no statistical meaning, nor any predictive value for any given instance of a working system. What matters is not the 'average' length of an open security window, but always the actual time the security holes in any given real system stay open.
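The objection about averages is easy to make concrete: two hypothetical sets of fix times (invented numbers, not the report's data) can share exactly the same mean while implying very different exposure for any individual system.

```python
# Illustrative only: two made-up distributions of fix times in days.
uniform = [70, 71, 72, 71, 71]   # every hole stays open about 71 days
skewed  = [5, 5, 5, 5, 335]      # most holes fixed in days, one open nearly a year

mean = lambda xs: sum(xs) / len(xs)

# Both averages come out to 71.0, yet the risk profile a real
# administrator faces under each distribution is entirely different.
print(mean(uniform), mean(skewed))
```

A single mean tells you nothing about which of these worlds you are in; that is the commenter's point.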
Real probability theory presupposes actual physical cases that can be investigated and concerning which results can be predicted. While a random group of 'SERVERS' with a distributed set of security holes can be 'penetrated' and the 'average' time for a repair presumably measured, this would take decades of hard trials.
The very word 'penetration' here has no tangible meaning, since a random selection of servers will contain sites and systems with no valuable information or sensitive control equipment that might be breached. Some 'worthless' or unimportant servers could be penetrated easily and repeatedly, but these 'penetrations' would be harmless and act as statistical 'red herrings'. You can't say, for instance, that a nuclear plant has been 'penetrated' when all that has happened is that you've trashed a junior clerk's desktop while he was surfing porn.
One will in fact expect that important sites and systems will by default have better security measures, and better personnel, while other cheap systems will not even be intelligently monitored.
What happens if, in your statistical sample, all the M$ machines are brand-new, expensive sites and systems with highly paid security personnel, whereas the Linux group includes older systems, or OS instances spread across a wider range of site quality and importance?
'Probability' and 'Statistics' are two distinct fields, and each presents its own unique problems.
Last edited by penguinlnx; 04-04-2005 at 05:11 AM.