I am having a discussion with someone who thinks it is impossible for ESBs to scale. Does anyone have any examples of particularly large message volumes and nodes that I can cite?
Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice http://www.cintegrity.com
Unfortunately, I don't currently have numbers, but Heathrow Terminal 5 uses an ESB for its baggage handling system. I'll contact a guy I know at Progress UK and see if he has any information on message counts, or if he knows of any other large-volume sites.
EDIT: No idea on T5, but BA (British Airways) are doing approx 1 million messages a day, as is First4Farming here in the UK.
I already mentioned CERN with 2.6 million messages a day and he wasn't impressed.
tamhas wrote: I am having a discussion with someone who thinks it is impossible for ESBs to scale. Does anyone have any examples of particularly large message volumes and nodes that I can cite?
That's a very generalized statement. TIBCO runs at Vodafone with over 800 million events (messages) per day. Another one I think is really interesting is the FAA choosing FUSE for its needs. Sonic has a list of customers longer than my arm who rely on huge volumes of transactions being processed at a time. Even Oracle (and this sticks in my throat to say) can handle huge volumes of transactions per day. Each of these companies' sites has case studies that you can use as examples.
Now let's get specific on what the scalability issues are, because I think you *can* make an argument that a large number of services with a large number of disparate data models with transformations through a canonical model can cause scalability issues. That can also be easily addressed, though.
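To make that concrete, here's a toy sketch (all field names are hypothetical, not any vendor's schema) of why mediation through a canonical model trades per-message transformation cost for fewer mappings:

```python
# Toy sketch of ESB-style mediation through a canonical model.
# Field names are made up for illustration.

def order_a_to_canonical(msg):
    # Service A's native format -> shared canonical model
    return {"customerId": msg["cust_id"], "amount": msg["amt"]}

def canonical_to_order_b(canonical):
    # Canonical model -> Service B's native format
    return {"customer": canonical["customerId"], "total": canonical["amount"]}

def route(msg):
    # Every hop pays for two transformations (source -> canonical -> target);
    # the payoff is roughly 2*N mappings for N services instead of
    # N*(N-1) point-to-point mappings.
    return canonical_to_order_b(order_a_to_canonical(msg))

print(route({"cust_id": "C42", "amt": 99.5}))
```

The two-transformations-per-hop cost is exactly the scalability concern above; one common way to address it is pass-through routing for messages that need no mediation.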
The person with whom I was having the discussion believes that ESBs can't possibly be scalable because of the overhead of encoding and decoding XML messages and the extremely general-purpose nature of the bus connection. He comes from an RT/E environment, i.e., a cycle-counting world, in which messages are bit streams of native data types and the only encoding and decoding is to handle endianness/byte-order differences between architectures, if any. He has worked on some very high message count systems and is generally critical of "convenience" tools, which he sees as ways to test Moore's Law, i.e., "we can be sloppy because we can always put more iron under it, and next year we can put even more". While there are domains in which he has a point, I was hoping for a Really Big Number that would make it clear that the ceiling was very high.
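For what it's worth, his overhead point is easy to demonstrate in miniature. This is a toy comparison (made-up field names and values) of one sensor reading packed as native binary versus serialized as XML:

```python
import struct
import xml.etree.ElementTree as ET

# Toy illustration of the encoding-overhead argument: the same reading
# as native binary vs. XML. Field names and values are hypothetical.
sensor_id, value = 1042, 3.14159

# Little-endian int32 + float64: 12 bytes on the wire.
binary = struct.pack("<id", sensor_id, value)

root = ET.Element("reading")
ET.SubElement(root, "sensorId").text = str(sensor_id)
ET.SubElement(root, "value").text = repr(value)
xml_bytes = ET.tostring(root)

# XML carries the field names and markup in every message,
# plus the CPU cost of text conversion and parsing on each end.
print(len(binary), len(xml_bytes))
```

The size gap (and the parse cost it implies) is real; the scalability question is whether it actually becomes the bottleneck at the volumes the vendors are demonstrating.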
What is a "Really Big Number"? I'd say 800 million messages a day at Vodafone (roughly 9,300 messages per second as a sustained average) is pretty significant, no?
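Back-of-the-envelope, in case anyone wants to check the per-second figure:

```python
# 800 million messages per day, averaged over a 24-hour day.
messages_per_day = 800_000_000
seconds_per_day = 24 * 60 * 60  # 86,400

print(messages_per_day / seconds_per_day)  # ~9,259 messages/s sustained
```

And that's the average; peak-hour rates would be considerably higher.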
How much bigger do you want it to scale than supporting the entire air traffic control system of the United States?
What is TIBCO's technology compared to Sonic or Fuse?
I'm not sure what you mean by "What's TIBCO's technology?"
They have an ESB called ActiveMatrix. They also have a significant technology stack that has been very successful on Wall Street, where it has been used for financial trading systems. I can't point you at any customer comments about it, but I know that they were involved with NASDAQ about 3 years back. I don't know if that is still true.
I was involved in a very substantial product evaluation of both TIBCO and Sonic at that time. BEA's AquaLogic (which Oracle has now acquired) was also in the game. Sonic came out head and shoulders above the others, both technically and from an ROI point of view. TIBCO came in a strong second because of what it adds on top of the stack, although I believe Sonic has closed that gap now with Apama, DataXtend and some of its other recent acquisitions.
OK, that's basically what I wanted to know ... i.e., that it was also an ESB: XML messages, routing, queues, and that sort of thing. I haven't paid much attention to TIBCO in many a year, and when I did they had a hub-and-spoke EAI solution that competed with Forte's Fusion product (which was basically an ESB).
The example of a big number he cited was "there are thousands of geophones spread all over the West Coast that continuously monitor seismic activity on the San Andreas fault zone. They gather tens of thousands of values per second from each geophone in real time to be centrally analyzed." But, I have no knowledge of how these are connected or whether the numbers are accurate.
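Taking his figures at face value (and, as noted, I don't know whether they're accurate, so the inputs below are order-of-magnitude assumptions), the aggregate rate works out to something like:

```python
# Rough bound on the geophone example. All inputs are assumptions
# picked from the quoted phrases, not measured figures.
geophones = 2_000          # "thousands" of geophones - assumed
samples_per_sec = 20_000   # "tens of thousands of values per second" - assumed
bytes_per_sample = 8       # e.g. one double-precision value - assumed

aggregate_samples = geophones * samples_per_sec
print(aggregate_samples)                     # 40,000,000 samples/s
print(aggregate_samples * bytes_per_sample)  # 320,000,000 bytes/s of raw payload
```

That's a raw telemetry firehose rather than discrete business messages, which is part of why the comparison with ESB message counts is apples-to-oranges.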
TIBCO was very much a hub-and-spoke system, and this was one of the problems that we had with it technically when I did the evaluation 3 years back. ActiveMatrix is a fairly recent technology that seems to decentralize much of the processing. Ultimately, though, whether it is a hub-and-spoke system or a federated system has no impact on its ability to handle high throughput. At the time, TIBCO's TPS volumes at its big customers were far higher than Sonic's, and its up-time requirement at those sites was far more stringent. That does not mean that Sonic could not meet those requirements - just that the requirements for TIBCO in those environments were far higher. They were contractually bound to meet five nines (99.999% uptime).
Sonic's federated model is more scalable, more redundant and allows for greater throughput. So we had no doubt about it meeting those requirements. It just did not have the same kinds of customers to compare against.