Formula 1 of Open Source ESBs?



Felipe Massa won the Formula 1 race in France this past weekend with an impressive performance over Hamilton. But we're not here to talk about the fast world of Formula 1; this is a blog about open source ESBs and performance, after all.

Like you, I am highly skeptical of vendors' proclamations that they are the fastest and greatest thing since sliced bread. In my past life as a software architect, every vendor I dealt with would throw such numbers at me, IBM being the worst offender of the lot. They would do anything they could to persuade me that JBoss was a bad choice for an app server since it was 'unreliable', 'too slow', or 'too risky'. I asked them many times to show me the numbers to back up those claims, and either they never would, because no such numbers existed, or they would show me benchmarks so obviously skewed that they would have been laughed out of court if ever offered as evidence. Hilarious stuff.

So, how does this relate to open source ESBs and performance testing? Well, a while back, when WSO2 first put out some benchmarks, I was still working at MuleSource. As usual, when I first read about performance tests pitting WSO2 against Mule, I laughed it off as just another vendor ploy. We all did. That is, until a customer prospect I was working with actually took the time to dig deep into the tests himself and used the same code to run his own numbers. His results were virtually identical to what was published. I was intrigued and did a little digging myself. It turns out the guys at WSO2 were pretty open about their tests. They even contacted Ross at Mule to get help tuning the tests, to make sure Mule was configured as optimally as possible, so that they could not be accused of degrading Mule's performance or using 'default' settings. No response from Mule was given at the time. I had no clue any of this had taken place until only a few weeks ago, since none of it was shared either internally at MuleSource or in any public forum I ran into. The whole discussion came to light on a Mule community forum, which you can read here.

Apr 13, 2008; 11:50pm Ross Mason
Hi Ruwan,

Thanks for reaching out. We will run these benchmarks alongside our own and publish the results...

Cheers,

Ross Mason
CTO, Co-Founder
MuleSource Inc.


So basically, since April 13, when Ross promised to run the benchmarks themselves and comment on the forum, not a peep has been heard from them on the subject.

Well, a lot has changed since then, and yet a lot hasn't...
- Mule is on the 2.0 platform, and the proprietary version is at 1.5 EE
- WSO2 has shipped version 1.7 of its ESB
- ServiceMix is on 3.2.1

It was time for another set of tests.

Just how do you benchmark an ESB? The simplest way is to create a common set of scenarios and put all the ESBs through the same tests on the same hardware, tweaking each until you get the last bit of juice out of it, all the while documenting everything and making it available so anyone can repeat the tests on their own boxes. WSO2 has done just that. You can see the full test suite and data here.
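To make the mechanics concrete, here is a minimal sketch of the kind of load driver such a test relies on: fire concurrent POSTs of a fixed SOAP payload at a single ESB endpoint, then report throughput (TPS) and failures. To be clear, this is not the WSO2 harness; the class name, the endpoint URL (http://localhost:8280/services/echo), the payload, and the thread and request counts are all placeholders you would adjust to match the proxy or service configured in your own ESB.

import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal throughput sketch: hammer one ESB endpoint with concurrent
// POSTs of a fixed payload and report transactions per second (TPS).
// Endpoint, payload, and thread/request counts are hypothetical.
public class EsbThroughputTest {

    private static final String ENDPOINT = "http://localhost:8280/services/echo"; // hypothetical proxy
    private static final String PAYLOAD =
        "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\">"
      + "<soapenv:Body><echo>hello</echo></soapenv:Body></soapenv:Envelope>";
    private static final int THREADS = 20;
    private static final int REQUESTS_PER_THREAD = 500;

    public static void main(String[] args) throws Exception {
        byte[] body = PAYLOAD.getBytes(StandardCharsets.UTF_8);
        AtomicInteger ok = new AtomicInteger();
        AtomicInteger failed = new AtomicInteger();

        ExecutorService pool = Executors.newFixedThreadPool(THREADS);
        long start = System.nanoTime();
        for (int t = 0; t < THREADS; t++) {
            pool.submit(() -> {
                for (int i = 0; i < REQUESTS_PER_THREAD; i++) {
                    try {
                        HttpURLConnection conn =
                            (HttpURLConnection) new URL(ENDPOINT).openConnection();
                        conn.setDoOutput(true);
                        conn.setRequestMethod("POST");
                        conn.setRequestProperty("Content-Type", "text/xml; charset=UTF-8");
                        try (OutputStream out = conn.getOutputStream()) {
                            out.write(body);
                        }
                        // Drain the response so keep-alive connections get reused.
                        try (InputStream in = conn.getInputStream()) {
                            while (in.read() != -1) { /* discard */ }
                        }
                        if (conn.getResponseCode() == 200) ok.incrementAndGet();
                        else failed.incrementAndGet();
                    } catch (Exception e) {
                        failed.incrementAndGet(); // counts toward the failure rate
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);

        double seconds = (System.nanoTime() - start) / 1_000_000_000.0;
        int total = ok.get() + failed.get();
        System.out.printf("%d requests in %.1fs -> %.1f TPS (%d failures)%n",
                total, seconds, total / seconds, failed.get());
    }
}

A real benchmark would layer warm-up runs, varying payload sizes, and per-request latency percentiles on top of this, but even a crude driver like the above makes the key point: the same client, payload, and hardware have to be used against every ESB, or the numbers mean nothing.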

So just how did WSO2 do against the others tested? You might think: they won, they do it all, they are the best. Well, not quite... yes, WSO2 did well, but we didn't come out on top in everything. We beat every other open source ESB hands down in every test, but the proprietary guys were very good as well. Some observations:

- WSO2 ESB beat all other open source ESBs in performance by a significant margin
- WSO2 ESB lost the TPS test to a closed, proprietary ESB
- ServiceMix was better than Mule in every scenario tested (this was surprising to me, given the JBI overhead)
- Mule had a failure rate of about 1% on the payload tests

In the spirit of openness and transparency, all of these tests and the raw data in the reports are available for you to download and verify yourself. This is the only published, open set of benchmarks I have seen yet that puts the major open source ESBs through a set of repeatable performance tests.

1 comment:

Holger Hoffstätte said...

"We all did."

There were exceptions. :)
