

Open Source and Security: Letters to the Editor

Scores of readers responded angrily to our featured opinion last week, "Open Source Is Fertile Ground for Foul Play." See a sampling from our mail bag.





As Mr. Jones notes, there is a serious danger, when working with software, that someone on the inside will corrupt it. However, putting this down as a weakness of open source is incorrect. With open source software, the source must be made available (and is frequently used to build the binary after receipt). Injecting tainted source into an open source project is both difficult and risky. Difficult, because you must have sufficient access to make the changes. Risky, because anyone (literally, since anyone can access the source, although fewer will have the knowledge to do so) can find the malicious change and report it. Once it is reported, the person who inserted the offending source can be identified, because all major open source projects use something like CVS to track changes to the code (and who made them!). Because of this, such a compromise is impractical.

In fact, Mr. Jones' article anticipates this. What it actually discusses is not the possibility that open source code could be compromised, but the possibility that someone will hide or close the source containing their exploit rather than publish it. This narrows the pool of people who could do this to those with access to the binaries of the code. As a former system administrator, I can tell you that I primarily installed software by compiling from source (as should anyone who installs software where security is important, let alone paramount, as with Defense and agencies that manage personal data). Thus, I would have had to be the one to install the offending security hole.
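Compiling from source only helps if the source itself is verified before building. A minimal sketch in Python of that first step, checking a downloaded archive against the checksum a project publishes (the function names and any filenames or digests used with them are hypothetical):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_source(archive: str, published_digest: str) -> bool:
    """True only if the local archive matches the published digest."""
    return sha256_of(archive) == published_digest.lower()
```

In practice you would also verify the project's GPG signature on the checksum file, since a compromised download site can alter both the archive and its checksum together.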

Now, Mr. Jones would argue that a system administrator would have a much easier time doing this with access to the source. That may be true, but it is not really important. One, the system administrator already has sufficient authority to compromise the system, whether the source is open or closed. Two, in closed source projects, someone still builds the binary. In fact, the same person builds the binary for all users of the software. The temptation must be far greater to add a backdoor at that level, particularly since there is no way to check for it. Heck, they put "Easter eggs" like pinball into their software. Why not malicious Easter eggs? How would we ever know?


If I compile from source, my employer can do whatever checking is necessary to determine whether I am a moral and ethical person. In government, such security checks are already in place. Can they do the same checking on all the people involved in producing the Microsoft Windows binary? It only takes one hole to open the system; any DLL could be suspect. With open source, you can narrow the trusted set to the people who actually create the binaries. One person can easily build an OS and a complete set of software.

Now what about smaller organizations that can't afford to keep a full-time system administrator? Won't they be at the mercy of an outside organization installing software in pure binary form, with no way to know what it is doing? Possibly, but if that software is open source (at least under the GNU GPL), then they must receive the modified source as well. If they ever have reason to recompile from source, the problem disappears. Further, if recompiling from source were to produce a different binary (ignoring embedded information such as compile timestamps), that would be a violation of the license and subject the outside organization to legal action, even without proving malice.
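The recompile-and-compare check above can be sketched directly: diff the shipped binary against the one you rebuilt, and see whether the differences are a handful of localized bytes (plausibly embedded build timestamps) or widespread (suggesting the binary was not built from the shipped source). This is my own illustrative sketch, not any particular tool:

```python
def differing_offsets(path_a: str, path_b: str, limit: int = 16) -> list:
    """Return up to `limit` byte offsets at which two binaries differ.
    A length mismatch is reported as one extra difference at the point
    where the shorter file ends."""
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        data_a, data_b = fa.read(), fb.read()
    diffs = [i for i, (x, y) in enumerate(zip(data_a, data_b)) if x != y]
    if len(data_a) != len(data_b):
        diffs.append(min(len(data_a), len(data_b)))
    return diffs[:limit]
```

Real reproducible-build tooling goes further, normalizing timestamps and build paths before comparing, but the principle is the same: if the rebuilt binary matches, the shipped source accounts for everything you are running.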

You have the same problem with an outside organization installing pure (closed source) binaries. Contrary to Mr. Jones' claims, the barrier is not significantly higher for a dedicated organization to do this with closed source projects. All that is needed is a contact with access to the source (and even that isn't necessary; look at the Worm.Gibe variants, which pretend to be the Windows Update client; looking like a familiar program is enough). With Microsoft Windows, that contact could be a subcontractor (Microsoft subcontracts a great deal of its programming work), an employee of Microsoft, or an institution with access to the source code (see http://www.microsoft.com/resources/sharedsource/default.mspx). Any of those people are fully capable of accessing the source (at least for their piece) and modifying it.

For that matter, why bother with source at all? Get an assembly-level editor and modify the binary directly. All the attacker needs to do is append some code to the end and modify a subroutine call to jump to the new code instead. The new code saves the current state, runs its payload, restores the previous state, and jumps to where the original subroutine call would have gone. Or rename some basic piece of software and put a loader program in its place.
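The jump-and-return trick just described is the classic hooking pattern, and it is easiest to illustrate at the source level. Here is a Python analogue in which a call is redirected to injected code that runs its payload and then resumes the original routine (all names here are hypothetical stand-ins):

```python
# A stand-in for some routine in the installed software.
def save_document(text: str) -> str:
    return "saved:" + text

_original_save = save_document   # remember where the original code lives
intercepted = []                 # the payload's side channel

def patched_save_document(text: str) -> str:
    intercepted.append(text)     # run the injected payload first...
    return _original_save(text)  # ...then jump back to the original

# Redirect the "call site": callers now reach the patched version.
save_document = patched_save_document
```

At the binary level the same effect comes from overwriting a call target so it lands in appended code that ends with a jump back, which is why callers see no change in behavior.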

Even better: Instead of tampering with someone else's software, they put the exploit in the piece that they wrote. Then they have total control, and with closed source there is no way to check their work. At least with open source, the customer could (potentially) recompile the program from the sources provided, which cannot conceal the exploit. With closed source, even if an exploit were found via suspicious network traffic, how do you respond? You can't fix the problem; you don't have the source. If mission-critical software was infected at the source, how do you replace it? What if they used a proprietary encryption format on their data stores? How do you get your data back?

To summarize, the problem of potential malware installed by insiders is really a problem of closed source software. With any reasonable security precautions, open source software users can at least respond to (if not prevent) the problem. The weakest link is the person who compiles and installs the software. With open source, that person can be (if you choose) a member of your organization. With closed source, it could be any number of people in the organization that wrote the software, plus the person in your organization who installs it (who can still add malware to the system). With open source, you can rewrite the code to remove the malware. With closed source, you can't. You have to go back to the people who wrote the software: the same people who most likely wrote the malware component.

Mr. Jones suggests that this is, hopefully, just a potential problem with open source. I know that it is an existing problem with closed source. I once worked at a place that gave me a piece of software for which they had a site license and told me I could install it at home. What did it do? Well, beyond its obvious purpose, it installed an extra piece into the printer driver that sent anything I printed back to them (I found this out by playing with firewall software). What was the name of the software? Microsoft Office.

Matt Fletcher
