Mock Pugilists Raise Fists Over Pet Shop

The polarization of Java and .NET continues apace even as the technological gap between them slams shut. Is there any real, pragmatic benefit to a benchmark between two highly competent platforms, particularly now that we have interoperability with Web services?


As is often the case in title fights, it didn't really get interesting until the last round.

On an otherwise uninspiring Monday night, in the grand ballroom of a Redwood City, CA hotel, I was listening, against my will, to every bad song that ever made it into a "Rocky" movie soundtrack. The event was a boxing-themed "Smackdown" between J2EE and Microsoft .NET, organized and presented with admirable amiability and nonpartisanship by the Software Development Forum. It was ostensibly intended to focus on Web services but only isolated portions of the 2+ hour match were Web services-specific.

Three team members each from Sun Microsystems and Microsoft were sent to opposite "corners." During the event a series of questions was read aloud; one member from each team had five minutes to address each question. Each question and its two five-minute answers constituted a round. Though it was a little heavy-handed, the boxing metaphor was actually sort of cute and helped take the sharp edges off the deep-seated animosities in the room. Unfortunately, they really could have saved us all some time by getting right to the last question, which, of course, was about Pet Shop.

I was eager to hear both sides debate the highly controversial results of Microsoft's "benchmark" of .NET based on the Java Pet Store application (a rather infamous quasi-standard traditionally used for testing and evangelizing best practices for multi-tier Java applications), but in the end I was left only with a strong feeling of futility and deja vu.

Microsoft's argument: In the absence of any industry-standard benchmark that can be used for a fair side-by-side comparison of Java and .NET applications, Java Pet Store seemed the best chance for Microsoft to perform a comparative test using an industry-recognized standard. After all, as Dino Chiesa, Strategist for Microsoft's .NET Developer Solutions Group, pointed out, Oracle recently used Pet Store as a de facto benchmark, and Sun itself calls Java Pet Store a "blueprint application." While the results have been controversial, the data, Chiesa contended, shows a 10x performance advantage for .NET over J2EE and dramatic cost/performance advantages for .NET. In short, Chiesa implied, the data speaks for itself: .NET is faster and provides a better return on investment.

Sun's argument: There are so many flaws in the way the .NET version of Pet Store was handled that the results are devoid of credibility. First, Sun argues, Microsoft funded the report by sponsoring the costs of The Middleware Company, which performed the testing. "Funny thing about benchmarks that you fund," said Tom Daly, Staff Engineer for Sun, "they tend to come out in your favor."

Among the other problems with Microsoft's benchmark, according to Daly: the Java implementation was based on JDK 1.3 instead of the faster version 1.4, and there were no run rules, no peer reviews, and very little disclosure. Sun emphasized that the Java Pet Store is not a benchmark, and Daly said emphatically that the "lines of code comparison is just plain wrong." The .NET code, he said, was optimized in several ways that stacked the results.

My summary is that Microsoft (and The Middleware Company) made some errors in judgment, of arguable severity. Whether those errors are material or superficial does not change the fact that the credibility of this .NET Pet Shop benchmark is damaged beyond repair.

At the "Smackdown" many attendees and at least one member of the panel of judges volubly advocated a rematch. In other words, an independent, mutually agreeable benchmark carried out by one or more third parties—one that, win or lose, the vendors agree to ratify as fair and impartial.

I think almost everyone likes that idea, including me. The unfortunate reality is that the cost and logistics of performing such a benchmark make it highly unlikely. Even if both companies were willing to split the considerable cost of doing so (and only Microsoft would likely want to; after all, unlike with Java, Microsoft is virtually the sole beneficiary of .NET's financial success), the chances that they could ever agree upon a testbed are a million to one.

But in the end, it doesn't matter. Customers and implementers get no direct benefit from a successfully executed benchmark. If we could all sit down and design the perfect, fair, infallible benchmark, invulnerable to any accusation of bias, what would happen? Well, one platform or the other would be negligibly faster than the other one. Not incredibly faster, not orders of magnitude faster: a little faster, and even then only in that specific configuration. Give the product to a customer and let them optimize it for their specific environment, and that negligible performance gap can become a significant performance advantage, in either direction.

What is it that we think we might get out of a benchmark? Both platforms are highly competent and high performing. The great news for the business world is that you can't really mess up this decision too badly. If one platform makes more sense for you, based on other business issues, you already know what your choice is. If you're one of the relatively few organizations that's on the fence, well, your decision boils down to, not the choice of two evils, but the choice of two solutions. A benchmark would certainly be interesting to someone who's making "the big decision," but in the end it's what you do with it that matters, not what they do with it in some lab. Finally, Web services interoperability ensures that no matter which decision you make now, with a little planning you'll still be able to use the fruits of that decision in the future, even if your company makes dramatic technological changes down the line.

Will the results of a benchmark matter to those who have already planted their flag in one camp or the other? If their platform loses, will they run screaming to their ISVs, demanding an explanation? Will they frantically rip applications out by their figurative roots? Nope. It is the very rare enterprise that makes a technology investment based on transactions per second alone. And a very myopic IT manager who'd second-guess his choice based on a single benchmark.

There are only two groups of people with a vested interest in a completed benchmark: 1) employees of Microsoft, Sun, and the other major vendors who have staked their businesses on Java (Oracle, BEA, IBM), and 2) each platform's fanatically partisan proponents. As partisanship increases among the mainstream proponents of each platform, it becomes less likely that even the fiction of an infallible benchmark would have any notable effect. At worst, the response will be: We'll get 'em next time.

So, in the end, the Pet Shop controversy boils down to a lot of childish arguing about who's got the bigger bat. Oops, I mean ball. No, I mean ... oh, never mind.
