
Open Source and Security: Letters to the Editor

To call last week’s featured opinion unpopular would be an understatement, to say the least. In his commentary, DevX Executive Editor A. Russell Jones opined that open source software is more likely to be altered by those with ill intentions than is closed source, because such people will capitalize on the availability of the source to alter, recompile, and distribute malicious versions. He felt that the rising profile of open source projects, particularly in governments, offers a tempting target to those with malicious intent.

DevX received scores of letters that questioned the author’s arguments, his understanding of the open source process, and, in some cases, the motivations of DevX. DevX published one rebuttal last week (“Who’s Guarding the Guards? We Are”), but we’d also like to bring our readers’ attention to a few of the responses published on other respected online developer sites and Web logs:
“Is Open Source Secure?” by Mark Stone
“Refuting the FUD at DevX,” by Joe Barr

And on the pages that follow, we present a sampling of comments from the many letters we received.

Usually, we try to answer all readers’ e-mails personally, but in this case that’s going to be difficult, if not impossible. As a general response, I would first like to thank those who took the time to write. Feedback, criticism, and even censure play an important role in reminding us of our responsibility to our readership. We were aware that the opinions expressed would be controversial; our motivation, as with all published opinion, is to expose ideas for debate. It goes without saying that the merit of Russell Jones’ opinions is for each reader to decide on his or her own. I wish only to assure you that the reputation of DevX and our responsibility to our readership are of the utmost importance to every person on the editorial team.

As promised, here are some excerpts from letters we received this week that are a representative sampling of the comments. All letters are published with permission.

Lori Piquet
Editor-in-chief

Editor,
I am concerned that Mr. Jones’s column of February 11th, “Open Source Is Fertile Ground for Foul Play,” indicates a significant misunderstanding of open source development processes. The argument presented is that all software development carries the risk that malicious code will be inserted by insiders, and that open source is especially vulnerable because more people are insiders. The first part is absolutely true, and applies to both closed and open source development, as Mr. Jones acknowledges, but the second part does not stand up to scrutiny.

Most open source projects have only a small group of “core developers” who have the ability to modify the official source code, just as is the case with proprietary software development. Any malicious person could insert destructive code into his or her own copy, but not back into the official version. That leaves the possibility of intentional compromise by the core developers, or by subsequent distributors. The first is a risk, but less so than with proprietary software: The number of people in a position to corrupt the source is similar in both models, but the possibility of outside review reduces the danger for open source software. Mr. Jones posits that core developers could avoid such scrutiny by not making the corrupted version public, but this is nonsensical: The version of the source code available for use is by definition also available for review.

The other concern raised is that distributors who repackage open source software could add vulnerabilities. Again, this is possible, but no more so than with proprietary software. It’s easy for an attacker to add malicious code to compiled binaries; indeed, much pirated software is reported to contain viruses or Trojan horses. For both open source and proprietary software, the solution is the same: Be careful who you get your software from. Downloading open source software directly from the public sources or buying a packaged version from a trustworthy distributor is no riskier than buying, for example, Windows directly from Microsoft or a system integrator such as IBM. If a consumer buys either open or closed source software from Bob’s Back-Alley Software and Pawn Shop, well, it’s a bad idea either way.
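
Editor’s Note: As a minimal illustration of that advice, the Python sketch below compares the checksum of a downloaded package against a digest published by the project. The file name, the digest argument, and the assumption that the project publishes SHA-256 checksums over a trusted channel are illustrative placeholders, not a reference to any specific package.

    import hashlib
    import sys

    def sha256_of(path, chunk_size=1 << 20):
        """Compute the SHA-256 digest of a file, reading it in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if __name__ == "__main__":
        # Usage: python verify_download.py <package-file> <published-digest>
        # The published digest should come from the project's official release
        # page, not from the same mirror that served the package itself.
        path, expected = sys.argv[1], sys.argv[2].strip().lower()
        actual = sha256_of(path)
        if actual == expected:
            print("OK: checksum matches the published value")
        else:
            print("WARNING: checksum mismatch; do not install this package")
            print("expected:", expected)
            print("     got:", actual)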

Open source is not the security panacea that some advocates make it out to be, but it doesn’t incur the added risks that Mr. Jones attributes to it, either. A government or other user who applies common sense to its software acquisition is no more at risk from open source software than closed source, and may even be a bit safer.

Eric Anderson

Editor,
I recently read the opinion piece written by A. Russell Jones entitled “Open Source Is Fertile Ground for Foul Play.” This piece is just the type of Fear, Uncertainty, and Doubt (FUD) that I would expect to see from a journalistic institution that caters to corporate developers. Without actually presenting any evidence, open source developers are accused of producing malicious code. As a one-time corporate developer turned open source programmer, I felt incensed at the insinuation that corporate developers are somehow more principled than their open source counterparts, and so felt that I should put into words some of my impressions regarding the article.

Mr. Jones’ fears regarding back door programs are justified. In fact, users of any software, be it closed or open source, should be concerned about such things. Just look at the number of “spyware” programs that are found in the wild. Many corporations are using such programs to gather information on their end users, including a certain operating systems company based in Redmond, WA. Often they justify this intrusion into our private lives by saying that it allows them to determine what end users want in their software. They claim that the data gathered by this software is only for demographic purposes and that individual data is not saved, but the fact of the matter is, it is transmitted somewhere. Those transmissions may be intercepted and used in a malicious manner, and no guarantees are given that the companies aggregating the data won’t use it for other purposes at a later date.

I personally feel that the risk of back doors and spyware is much greater in closed source programs than in open source programs. When I purchase a closed source program, I have no way of verifying what the program is doing behind the scenes. My trust in the software rests completely on my belief that the company that produced it is acting in an ethical and safe manner. Sometimes that trust is misplaced. I have seen developers put in back doors to simplify development and debugging. I have heard of developers putting back doors into software with malicious intent. Sometimes these are reported to their supervisors, sometimes not. And sometimes they don’t get removed, whether by oversight or by intent. These back doors, malignant or benign, are difficult to discover because closed source companies are reluctant to allow people access to their intellectual property. Doing so usually involves getting a court order or signing a prohibitive non-disclosure agreement, which limits the ability of the entities gaining access to the code to do anything about what they see.

However, in open source software, full disclosure is the name of the game. Most open source projects track exactly who was responsible for each update to the software. It is usually easily verifiable, and the source code is available for perusal. Additionally, if security holes exist in open source software and are found, they are often fixed in a matter of days (depending on the severity and complexity of the issue). Compare that to weeks or even months in the closed source world.
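
Editor’s Note: As one illustration of the accountability described above, the rough Python sketch below lists the author recorded for each revision of a file in a project’s CVS history. It assumes a CVS working copy with the cvs client on the PATH, uses a hypothetical file name, and parses the default cvs log output loosely; it is meant only to show that this information is readily available, not to serve as a robust tool.

    import re
    import subprocess

    def revision_authors(path):
        """Return (revision, author) pairs recorded in a file's CVS history.

        Assumes this runs inside a CVS working copy with the `cvs` client on
        the PATH; the parsing is a loose sketch of the default `cvs log`
        output format, not a full parser.
        """
        output = subprocess.run(
            ["cvs", "log", path], capture_output=True, text=True, check=True
        ).stdout
        pairs = []
        revision = None
        for line in output.splitlines():
            if line.startswith("revision "):
                revision = line.split()[1]
            elif revision is not None:
                match = re.search(r"author:\s*([^;]+);", line)
                if match:
                    pairs.append((revision, match.group(1).strip()))
                    revision = None
        return pairs

    if __name__ == "__main__":
        # "src/main.c" is a hypothetical file in a hypothetical checkout.
        for rev, author in revision_authors("src/main.c"):
            print(rev, author)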

One concern that Mr. Jones neglects is the fact that many companies are outsourcing their development to countries outside the U.S. Admittedly, this is also a concern for open source software, since many developers of said software reside outside the U.S. However, this is mitigated somewhat by the “many eyes” theory of software security. In the open source community, it is often easy to identify who was responsible for modifying the code for a particular piece of software. But if the government should be concerned about people in the open source community putting malicious code in programs, it should be doubly concerned about companies that outsource their closed source development to other nations. Currently there is no regulation requiring companies to disclose where closed source software is actually developed when the government (or anyone else, for that matter) purchases it.

Mr. Jones, for the most part, gave a reasoned explanation of some of the issues that the government should be aware of, but I disagree with his conclusion that open source software is somehow more of a risk than closed source software. There are risks in both. However, I feel that the risks associated with open source software are more easily mitigated than those associated with closed source.

Mr. Jones concludes by attempting to downplay the methods that are used to mitigate risks involved in using open source software. Mr. Jones states in his final section “You can set up as many layers of security as you like, but at some point, you have to trust the layers themselves.” This is not necessarily true. While many people choose to trust the layers of security in open source software, this choice is not forced on them. In contrast, the layers of security in closed source software are hidden from the consumer. In fact, even the depth of those layers is not disclosed.

Mr. Jones uses the question “Who will watch the watchers?” to conclude his article, and I would like to answer it. With open source software, the answer could be you, or me, or anyone for that matter. What is done in open source software is done in the open. The watchers cannot hide their actions. With closed source software, sadly, the answer is often no one.

Nathan Tenney

Editor,
As Mr. Jones notes, there is a serious danger with any software that someone on the inside will corrupt it. However, putting this down as a weakness of open source is incorrect. With open source software, the source must be made available (and is frequently used to build the binary after receipt). Injecting tainted source into an open source project is difficult and risky. Difficult, because you have to have sufficient access to make the changes. Risky, because anyone (literally, since anyone can access the source, although fewer will have the knowledge to do so) can find the offending code and report it. Once reported, the person who inserted it can be found, as all major open source projects use something like CVS to track changes to the code (and who made them!). Because of this, such a compromise is impractical.

In fact, Mr. Jones’ article anticipates this. What it actually discusses is not the possibility that open source code could be compromised but the possibility that someone will not publish the source with their exploit but will hide or close it instead. This narrows down the number of people who could do this to those who have access to the binaries of the code. As a former system administrator, I can tell you that I primarily installed software by compiling from source (and so should anyone who installs software in a situation where security is important, much less paramount, as with Defense and agencies that manage personal data). Thus, I would have had to have been the one to install the offending security hole.

Now, Mr. Jones would argue that a system administrator would have a much easier time doing this with access to the source. That may be true, but is not really important. One, the system administrator already has sufficient authority to compromise the system, whether open or closed source. Two, in closed source projects, someone still builds the binary. In fact, the same person builds the binary for all uses of the software. The temptation must be far greater to add in a backdoor at that level, particularly since there is no way to check for it. Heck, they put “Easter eggs” like pinball into their software. Why not malicious Easter eggs? How would we ever know?

If I compile from source, my employer can do whatever checking is necessary to determine if I am a moral and ethical person. In government, they already have such security checks in place. Can they do the same checking on all the people involved with making the Microsoft Windows binary? It only takes one hole to open the system. Any DLL could be suspect. With open source, you can narrow this down to the people who actually create the binaries. One person can easily build an OS and a complete set of software.

Now what about smaller organizations that can’t afford to keep a full-time system administrator? Won’t they be at the mercy of an outside organization installing software in pure binary form, with no way to know what it is doing? Possibly, but if that software is open source (at least with the GNU GPL), then they must be given the modified source as well. If they ever have reason to recompile from source, the problem disappears. Further, if recompiling from source were to produce a different binary (ignoring information like compile time, etc.), that would mean the distributed source does not match the binary, a violation of the license that would expose the distributor to legal action even without proving maliciousness.
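
Editor’s Note: A minimal sketch of the kind of check described above, with placeholder file names: compare a vendor-shipped binary against one rebuilt locally from the distributed source and report where they first differ. In practice, compilers embed timestamps, paths, and other build metadata, so a difference is a prompt for investigation rather than proof of tampering, and a bit-for-bit match generally requires a deliberately reproducible build.

    def compare_binaries(shipped, rebuilt, context=16):
        """Return the first differing byte offset (with surrounding bytes),
        or None if the two files are identical."""
        with open(shipped, "rb") as f:
            a = f.read()
        with open(rebuilt, "rb") as f:
            b = f.read()
        if a == b:
            return None
        limit = min(len(a), len(b))
        for offset in range(limit):
            if a[offset] != b[offset]:
                break
        else:
            offset = limit  # one file is a prefix of the other
        return offset, a[offset:offset + context], b[offset:offset + context]

    if __name__ == "__main__":
        # Placeholder paths: a binary as shipped by the distributor and one
        # rebuilt locally from the source they provided.
        result = compare_binaries("vendor/prog", "rebuilt/prog")
        if result is None:
            print("Binaries are bit-for-bit identical")
        else:
            offset, shipped_bytes, rebuilt_bytes = result
            print("First difference at byte offset", offset)
            print("shipped:", shipped_bytes.hex())
            print("rebuilt:", rebuilt_bytes.hex())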

You have the same problem with an outside organization installing pure (closed source) binaries. Contrary to Mr. Jones’ claims, the barrier is not significantly higher for a dedicated organization to do this with closed source projects. All that is needed is a contact with access to the source (and even that isn’t necessary; look at Worm.Gibe variants, which pretend to be the Windows Update client; looking like a familiar program is enough). With Microsoft Windows, that could be a sub-contractor (Microsoft subcontracts a great deal of its programming work) or an employee of Microsoft or an institution with access to the source code (see http://www.microsoft.com/resources/sharedsource/default.mspx). Any of those people are fully capable of accessing the source (at least for their piece) and modifying it.

For that matter, why bother with source? Get an assembly-level editor and modify the binary directly. All the attacker needs to do is tack some code onto the end and modify a jump or subroutine call to go to the new code instead of the current code. Save the current state. When finished running the new code, restore the previous state and jump to where the original call would have gone. Or rename some basic piece of software and put a loader program in its place.

Even better: Instead of playing with someone else’s software, they put it in the piece that they wrote. Then they have total control, and with closed source there is no way to check their work. At least with open source, the customer could (potentially) recompile the program from the sources provided (which cannot conceal such additions). With closed source, even if an exploit was found via suspicious network traffic, how do you respond? You can’t fix the problem; you don’t have the source. If it’s mission-critical software that was infected at the source, how do you replace it? What if they used a proprietary encryption format on their data stores? How do you get your data back?

To summarize, the problem of potential malware installed by insiders is a problem of closed source software. With any reasonable security precautions, open source software users can at least respond to (if not prevent) the problem. The weakest link is the person who compiles/installs the software. With open source, that person can be (if you choose) a member of your organization. With closed source, that could be any number of people in the organization that wrote the software, plus the person who compiles and installs software in your organization (who can still add malware to the system). With open source, you can rewrite the code to remove the malware. With closed source, you can’t. You have to go back to the people who wrote the software; the same people who most likely wrote the malware component.

Mr. Jones mentions that this is hopefully just a potential problem with open source. I know that it is an existing problem with closed source. I once worked at a place that gave me a piece of software for which they had a site license and told me I could install it at home. What did it do? Well, beyond its obvious purpose, it installed an extra piece to the printer driver that sent anything that I printed to them (I found this out by playing with firewall software). What was the name of the software? Microsoft Office.

Matt Fletcher

Editor,
In his article, “Open Source Is Fertile Ground for Foul Play,” Russell Jones seems to have missed the whole point of open source. That is quite simply that open source is open. Not only is the code freely distributed, it is also freely discussed on the Internet. In addition to possible criminal penalties, anyone who is discovered to have deliberately submitted malicious code to an open source project will certainly be discussed at length and dismissed from any projects they’ve joined. The programmer who did such a thing would be committing social and professional suicide.

On the other hand, closed source programs are obviously dangerous. To give a real world example, we just learned that Microsoft sat on a security vulnerability for six months. This would simply be impossible in the open source world, which usually issues patches within 48 hours. Worse, consider the TimeLine license issue:
http://news.com.com/2100-1001-985359.html

This article demonstrates that closed source is very dangerous. If the TimeLine software had been distributed under a truly open license, it could be used without fear of legal entanglements. There’s probably no need to mention worms, viruses, Trojans, adware, or closed source software that phones home.

Mr. Jones should also note that most open source projects have only a few members who are actually allowed to commit patches to the source tree, and most of these alpha-geeks carefully read all the submissions they receive. This means, of course, that open source projects are not as vulnerable as Mr. Jones imagines them to be. Someone has authority and actually reads the code before committing it.

Mr. Jones might also consider the obvious fact that anyone who wants to use open source code for an important project is fully capable of auditing that code to whatever depth pleases them, something they can’t do with code from Microsoft or Sun. Sure, some clueless criminal who hasn’t considered the issue can try giving a malicious open source package to a government, but what happens when that government has its programmers look the package over for Trojans? In addition to committing professional suicide, such a criminal might be indicted for any of several crimes, ranging from “unlawful access” to treason.

Further, when it comes to examining code, let’s actually look at some real numbers. Imagine an organization purchasing MS Office Pro and XP for a thousand users. Even with volume discounts, they can expect to pay around half a million dollars for the privilege. Or they can install Linux and OpenOffice for free, hire one programmer to add custom features and another to inspect the open source code for vulnerabilities. Total cost: perhaps $200,000. In other words, that organization can get free software, a year of code auditing, and a year of customization for less than half what it costs to buy a Windows solution. Go to India and you can get 10 programmers for a year for that same price.

As Mr. Jones notes, an inside job is possible, but this is an extremely weak argument. The sysadmin for any organization can install back doors, keystroke loggers, Trojans, malware, and viruses, and it doesn’t matter what brand of software is being run. It’s also important to remember that 90 percent of the programmers out there don’t work for software manufacturers, either open or closed. They work creating and maintaining some big company’s custom codebase. These programmers have the capability, and possibly even the motivation, to create malicious code. Once again, it doesn’t matter what operating system is being used.

Lastly, Mr. Jones’ comparison of Windows and Linux security vulnerabilities is deeply flawed. Let’s examine the site he recommends and compare Red Hat 9:
http://secunia.com/product/1343/
to Windows XP Professional:
http://secunia.com/product/22/

Editor’s Note: The Secunia link was not included in the original submission by Dr. Jones. It was added in post-editing by me. Links to other third-party vulnerability data should have been included. These have recently been added to the original article. --Lori Piquet

Someone who doesn’t understand the way Linux is packaged and delivered might take a look at the data and assume that Windows, with only 34 security advisories in 2003, was a better operating system than Red Hat, with 72 security advisories, but to someone with even a tyro’s knowledge of Linux, the Secunia data is deeply flawed. Let’s take a look at the man behind the curtain.

Windows XP comes on one CD. It includes only the core operating system, a few games, some small but useful programs, and the two most insecure programs on the planet: Outlook Express and Internet Explorer, adding up to a total of perhaps 300 executable programs. But on the list Mr. Jones recommends, the vulnerabilities for IE and Outlook Express do not appear. Merely adding the 2003 vulnerabilities for these two programs would make the list of Windows security problems larger than the list for Red Hat. Oops. The list Mr. Jones suggested your readers peruse is dishonest.

But it gets worse. Red Hat 9 comes on four CDs, contains somewhere in the neighborhood of 1,500 separate executable programs, and the vulnerabilities for all of these programs are listed. For example, we see two different mail packages, Sendmail and SquirrelMail, on the Secunia list. No real-life server installation would contain more than one mail-serving program. The same is true of CUPS and LPR, two listed printer daemons. We also see several other server programs listed, including Samba, PHP, Apache (listed as httpd), PostgreSQL, and iproute. There are also several userspace programs, such as Eye of GNOME, Pan, unzip, Ghostscript, Netscape, xpdf, tcpdump, up2date, and so on, listed in the Red Hat section.

In other words, Secunia is comparing a completely bare Windows XP box to a Linux box that is fully loaded with both server and userspace programs.

To make the comparison fair, you’d have to add around a thousand programs to the Windows box. First, install gobs of server software, all of it on the same machine: programs such as SQL Server, Exchange Server, IIS, an ASP application server, two different network printing programs, etc. Then install a bunch of userspace software such as WinZip, Adobe Acrobat, IE, Outlook, and Eudora, and then include a bunch of utilities not normally found on Windows machines. Now make the comparison. It doesn’t look nearly so good, does it? In fact, the Red Hat box is much more secure. Go a step further and consider that it takes Microsoft months to patch a vulnerable piece of software, while the open source community usually patches within 48 hours.

Now let’s do some math. Divide the 34 Windows vulnerabilities into 300 programs and you end up with one vulnerability for every 8.8 programs on the Windows install disk. Now divide the 72 Linux vulnerabilities into the 1,500 executable programs on the Linux install disks: Red Hat 9 has one vulnerability for every 20.8 programs. In other words, Windows is roughly 2.4 times as insecure as Linux.
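
Editor’s Note: The arithmetic above, restated as a short Python snippet using the letter’s own rough counts (34 advisories across roughly 300 programs versus 72 advisories across roughly 1,500):

    # Rough figures cited in the letter: 2003 Secunia advisories and an
    # estimated count of executable programs on each install medium.
    windows_advisories, windows_programs = 34, 300
    redhat_advisories, redhat_programs = 72, 1500

    print("Windows: one advisory per %.1f programs"
          % (windows_programs / windows_advisories))   # ~8.8
    print("Red Hat: one advisory per %.1f programs"
          % (redhat_programs / redhat_advisories))     # ~20.8

    ratio = (windows_advisories / windows_programs) / (redhat_advisories / redhat_programs)
    print("Advisory density, Windows relative to Red Hat: %.1fx" % ratio)  # ~2.4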

So there it is. Jones doesn’t understand the way Linux is distributed well enough to interpret the Secunia data, he didn’t consider the financial numbers, and he clearly doesn’t understand the open source culture. Why a knowledgeable reader would take his piece seriously is beyond me.

Alex Roston
