Another Review of "The Petstore Revisited: J2EE vs .NET Application Server Performance Benchmark"

Dion Almaer

Nov. 01, 2002 08:17 AM


When I woke up at the beginning of the week and saw the report that The Middleware Company came out with, I just couldn't believe it. I didn't understand why they had done this, and *I work* for this company! I was as clueless as you.

There have been a lot of responses to the report. A lot of people are on the warpath... holding their axes up high as they charge at TMC and TSS.

As I have read the report, people's comments, and other posts on the subject, I see two key issues that upset me:

  • Lack of Full Disclosure
  • Unfair Benchmark

Lack of Full Disclosure

Let's face it: the report does not fully disclose who did the report, who was involved in the benchmarking, why certain "rules" were chosen, and who paid for it (directly or indirectly). When you don't have full disclosure, you don't show the whole picture. We are getting information out of TMC now, but it should have been there from the beginning, and they need to come out with all of the information.

Unfair Benchmark

Benchmarks suck. Let's face it. Even the most well-designed benchmark only does a performance comparison within strict bounds. In a running race this is fine, as there are not that many variables. In the enterprise computing world, benchmarks mean next to nothing. It is very easy to come up with a set of rules under which you come out on top. This happens all the time. I think it happened here.

What was the point of this report/benchmark? It seems unclear to me. Was it just about performance? Was it about ease of development (LOC)? The lines blur.


If it was about performance, then the vendors of Server A and B should have been able to come in and tweak the code, deployment descriptors, OS settings, hardware, and whatever else they needed to get the highest numbers for this particular test. Microsoft was the only company allowed to do this. That isn't fair at all. We have seen that an older version of PetStore was used, and certain optimizations were not part of the app:

  • EJB 2.0 Local Interfaces
  • EJB 2.0 CMP
  • How about taking EJB out altogether, as a comparison?
  • It would be nice to try JDO
  • Try other JVMs (e.g. 1.4, JRockIt)
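To give a feel for the first two items above, here is a sketch of what exposing an EJB 2.0 CMP entity through local interfaces looks like in ejb-jar.xml (the bean and package names here are hypothetical, not taken from PetStore):

```xml
<!-- Hypothetical ejb-jar.xml fragment: an EJB 2.0 CMP entity declaring
     local interfaces, so callers in the same JVM get pass-by-reference
     calls instead of remote-call (stub/serialization) overhead. -->
<entity>
  <ejb-name>ItemEJB</ejb-name>
  <local-home>com.example.petstore.ItemLocalHome</local-home>
  <local>com.example.petstore.ItemLocal</local>
  <ejb-class>com.example.petstore.ItemBean</ejb-class>
  <persistence-type>Container</persistence-type>
  <prim-key-class>java.lang.String</prim-key-class>
  <reentrant>False</reentrant>
  <cmp-version>2.x</cmp-version>
  <abstract-schema-name>Item</abstract-schema-name>
</entity>
```

A same-JVM call through a local interface skips stubs and argument copying entirely, which is exactly the kind of change whose effect on the numbers the benchmark never measured.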

If this had been done, not only would we have seen more results, but THESE ARE THE THINGS THAT MATTER! Raw bbops numbers on their own don't mean much to me. I want to see the effects of CMP 2.0, local interfaces, straight servlet frameworks, and other persistence mechanisms. This is knowledge that I can use in my projects. I will not be able to tweak the system like the vendors can, so give me what I care about. The TMC guys spent a lot of time tweaking this and that, so there is a LOT of good information in their brains. What did they change, and why? How did it help performance?

We also learn that the benchmark used XA transactions, and other "rules" like this. I want to know how tweaking these "rules" affects the results. I try very hard to stay well away from distributed transactions unless I *really* need them. The TMC engineers have stated that they tried some of the optimizations mentioned above and that .NET still would have "won", but again, let's move away from this winning-and-losing talk.
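To make the distributed-transaction point concrete, here is a toy, self-contained sketch (plain Java, not any J2EE API; all names are hypothetical) of why XA-style commits cost more than local ones: with a single resource you commit in one step, but once two resource managers are enlisted, every participant must first vote in a prepare phase before anyone is allowed to commit.

```java
import java.util.List;

public class TwoPhaseSketch {

    /** Stand-in for a transactional resource manager (e.g. a database). */
    interface Resource {
        boolean prepare();  // phase 1: vote on whether commit can succeed
        void commit();      // phase 2: make the work durable
        void rollback();
    }

    static class InMemoryResource implements Resource {
        final boolean healthy;  // whether this resource will vote "yes"
        String state = "active";
        InMemoryResource(boolean healthy) { this.healthy = healthy; }
        public boolean prepare() { return healthy; }
        public void commit()   { state = "committed"; }
        public void rollback() { state = "rolled-back"; }
    }

    /** One resource: a plain local commit, no voting round needed. */
    static void localCommit(Resource r) {
        r.commit();
    }

    /** Several resources: all must vote yes before any may commit. */
    static boolean distributedCommit(List<? extends Resource> resources) {
        for (Resource r : resources) {           // phase 1: prepare
            if (!r.prepare()) {
                for (Resource x : resources) x.rollback();
                return false;
            }
        }
        for (Resource r : resources) r.commit(); // phase 2: commit
        return true;
    }

    public static void main(String[] args) {
        InMemoryResource db = new InMemoryResource(true);
        localCommit(db);                 // one step, one resource
        System.out.println("local: " + db.state);

        InMemoryResource a = new InMemoryResource(true);
        InMemoryResource b = new InMemoryResource(false); // votes no
        boolean ok = distributedCommit(List.of(a, b));
        System.out.println("distributed ok=" + ok + ", a=" + a.state);
    }
}
```

The extra prepare round (plus the logging a real transaction manager does around it) is the overhead you pay for XA, which is why a benchmark "rule" requiring it is worth questioning when one resource would do.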

Ease of Development

LOC? Who gives a damn. "This app has 10,000 lines of code." What does that mean to me? How much of it was generated with tools (XDoclet, IDEs, etc.)? I don't write an application and say to myself, "Oh, I will take out this cool class that helps out, as it is another 500 lines of code to worry about." Are we also counting framework code that we use? How would things be affected if we used Struts or WebWork, OSCore/OSCache, OJB, and so on? LOC by itself means nothing. I want to see how the architecture looks and how it feels; that will give me the information I want to know. NOTE: Rickard comments on the LOC discussion in his commentary.

So what does this report mean?

Ok, so after all of this, what did the report mean? That .NET is better than J2EE? Not at all. It showed nothing of the sort. It just showed that, for one PARTICULAR set of rules, Microsoft managed to get better numbers than a couple of TMC employees who used a couple of J2EE app servers, trying to optimize old code on non-bleeding-edge servers (whereas MS could even tweak their .NET core! Do you think you could download the .NET runtime they were using at the time?)
[CLARIFICATION: I am not saying that the CLR team were there tweaking their VM to get better results on this benchmark. I know they have better things to be doing.... I am just trying to make a point about the access.]

Well, when I look at that, all the report has taught me is that I would love to see more information from people trying out different architectures, and how performance and ease of development are really affected. I am a firm believer in KISS and "Fast Enough" principles, but this stuff is just interesting to read about.

Let's try to get some real information on this topic, shall we?

There is a huge set of reasons why someone would choose J2EE over .NET, and I am looking forward to seeing the community and the J2EE vendors hit back at this.

Also, maybe something will come of the Rematch. Again, though, I don't want this rematch to just be about numbers; let's change the objectives here and just learn about the technology.


These are my thoughts and have nothing to do with my employers. I work for TSS, and all of this came as a huge surprise to me. It was as big a shock for us as for you guys.


Here are links to all of the information regarding the report, and feedback:
The Petstore Revisited: J2EE vs .NET Application Server Performance Benchmark:
Discussion on TheServerSide.Com:
Rickard's Review:
Cedric's Comments:
Slashdot's Posting:

Dion Almaer is a Principal Technologist for The Middleware Company, and Chief Architect of TheServerSide.Com J2EE Community.