Open Source Is Fertile Ground for Foul Play

The nature of open source makes security problems an inevitable concern. There are a handful of ways that malicious code can make its way into open source and avoid detection during security testing, making government adoption of open source particularly worrisome.


An old adage that governments would be well served to heed is: you get what you pay for. When you rely on free or low-cost products, you often get the shaft, and that, in my opinion, is exactly what governments are on track to get. Perhaps not today, nor even tomorrow, and not because open source products are less capable or less efficient than commercial products, but because sooner or later, governments that rely on free open source software will put their country's and their citizens' data in harm's way. Eventually—and inevitably—an open source product will be found to contain a security breach—not one discovered by hackers, security personnel, or a CS student or professor. Instead, the security breach will be placed into the open source software from the inside, by someone working on the project.

This will happen because the open source model, which lets anyone modify source code and sell or distribute the results, virtually guarantees that someone, somewhere, will insert malicious code into the source. Malevolent code can enter open source software at several levels. First, and least worrisome, the core project code could be compromised by the inclusion of source contributed as a fix or extension. Because the core Linux code is carefully scrutinized, that's not terribly likely. Second, and much more likely, distributions could be created and advertised for free, or created with the express purpose of marketing them to governments at cut-rate pricing. Because anyone can create and market a distribution, it's not far-fetched to imagine a version subsidized and supported by organizations that may not have U.S. or other government interests at heart.

Third, an individual or group of IT insiders could target a single organization by obtaining a good copy of Linux and then customizing it for that organization, including malevolent code as they do so. That version would then become the organization's standard version. Given the prevalence of inter-corporate and inter-governmental spying, and the relatively large number of people in a position to accomplish such subterfuge, this last scenario is virtually certain to occur. Worse, these risks aren't limited to Linux itself; the same possibilities (and probabilities) exist for every open source software package installed and used on the machines.



How Can This Happen?
The products of the open source software development model have become increasingly entrenched in large organizations and governments, primarily in the form of Linux, a free open-source operating system; the free open-source Apache Web server; and open source office suites. There are several reasons that open source software—and Linux in particular—is seeing such a dramatic uptick in use, including IBM's extensive Linux support effort over the past several years and the widespread perception that Linux is more secure than Windows, despite the fact that both products are riddled with software security holes. (Security watchdog group Secunia publishes vulnerability counts that allow an OS-by-OS comparison.)

Editor's Note: (Added Feb. 16) The Secunia data, the link to which was added to this article in post-editing, does not include vulnerabilities for applications that install with Windows. Readers should also see CERT's most recent quarterly summary of vulnerabilities or search its vulnerability database to research and compare OS vulnerabilities.

So far, major Linux distributions such as Debian and others have been able to discover and remedy attacks on their core source-code servers. The distributions point to the fact that they discovered and openly discussed these breaches as evidence that their security measures work. Call me paranoid, but such attacks, however well handled, serve to raise the question of whether other such attacks have been more successful (in other words, undiscovered). Because anyone can create and market—or give away—a Linux distribution, there's also a reasonably high risk that someone will create a distribution specifically intended to subvert security. And how would anyone know?
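One such attack was caught in 2003, when someone altered the Linux kernel's public CVS mirror to slip a privilege-escalation back door into a system call: a single `=` where `==` belonged, disguised as an options check. The sketch below is a simplified, hypothetical reconstruction of that pattern; the struct, function name, and option value are illustrative stand-ins, not the actual kernel code.

```c
/* Simplified reconstruction of the 2003 kernel back-door pattern.
   The names and values here are hypothetical, for illustration only. */
struct task { int uid; };

static int check_options(struct task *current, int options) {
    /* Reads like a validity check, but (current->uid = 0) ASSIGNS
       zero (root) rather than comparing, then evaluates to 0, so
       the branch never fires and no error is ever reported. */
    if (options == 3 && (current->uid = 0))
        return -1;  /* unreachable: the assignment yields 0 */
    return 0;
}
```

To a reviewer skimming a diff, the line looks like ordinary error handling; it takes a careful eye to spot that the condition mutates state. That is exactly the kind of insertion that casual review misses and that only a suspicious maintainer, comparing against a trusted repository, will catch.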

Open source advocates rightfully maintain that the sheer number of eyes looking at the source tends to rapidly find and repair problems as well as inefficiencies—and that those same eyes would find and repair maliciously inserted code as well. Unfortunately, the model breaks down as soon as the core group involved in a project or distribution decides to corrupt the source, because they simply won't make the corrupted version public. Therefore, security problems for governments begin with knowing which distributions they can trust.

Can Self-Policing Work?
The open source model does a good job of finding and winnowing out malicious code submitted to a project, provided that the people in charge of the true project source are both actively looking for potential security problems and not themselves attempting to subvert the model. At any of the large, well-run projects, for example, someone is likely to notice such obvious attempts. Still, I'd be very surprised if some open source software doesn't already contain well-hidden malicious code. It's an unsettling thought, but many programmers will tell you that the temptation to build in special debugging and monitoring capabilities, or to write back-door code, is powerful. The temptation for businesses is, in my opinion, even more powerful. If businesses think they can gain a competitive advantage by altering their software to report on other, competing products within an organization, marketing pressures will eventually force them to do exactly that.

This problem (which I hope remains only potential) isn't limited to open source software, but open source certainly has far fewer inherent barriers than commercial software. The easier it is to access the source code, alter it, and recompile it for custom uses, the more likely such tampering becomes; at that point you have no security, because any security checks performed on the software before the source was delivered no longer apply. That means many of the advantages individuals gain from open source software, specifically choice and the ability to alter their software to better suit their own needs, can't apply in a secure government setting. To limit their vulnerability, governments can't afford to give everyone a choice, nor can they afford to provide access to the source code for their software.

Open source software goes through rigorous security testing, but such testing guards only against known outside threats. The fact that security holes continue to appear should be enough to deter governments from jumping on this bandwagon, but it won't be. Worse, I don't think security testing can be made robust enough to protect against someone injecting dangerous code into the software from the inside—and inside, for open source, means anyone who cares to join the project or create their own distribution.

I'm not naïve



   
A. Russell Jones is the Executive Editor at DevX. Reach him by e-mail.