Monday, January 30, 2006

Benchmarks: Numbers Don't Lie, but Liars Use Numbers

Can industry standard benchmarks, or even application benchmarks like SAP's, be relied upon to make technology choices? I have come to the conclusion that benchmarks are not reliable measuring sticks for decision makers, regardless of whether they are a mythical application, like the SPECjAppServer2004 benchmark, or an ISV-specific application benchmark like SAP's (http://www.sap.com/solutions/benchmark/index.epx). Why do I believe this is true?

With the industry standard benchmarks, whether from SPEC or the Transaction Processing Performance Council (TPC), the applications are far too simple to simulate a real-world workload. Real-world applications have far more complex business logic, and are usually highly data driven. When I say data driven, I mean that the application's logic branches are almost always determined by querying a database for what to do under certain business cases; they are really automated business processes. I have seen cases where the customer setup in an application had over 60,000 locations, and another where a customer contract listed over 100,000 specific products. These are but two simple examples, and what they lead to is a read/write ratio that is heavily tilted to the read side. In two major applications I have been involved with, the read/write ratios were 98% read / 2% write and 93% read / 7% write. Industry standard benchmarks do not have such ratios because they don't simulate these kinds of complex, data driven, large-dataset applications.
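To make "data driven" concrete, here is a minimal sketch. The table, columns, and pricing rules are all hypothetical, and JDBC is just a convenient way to show the idea: the application doesn't hard-code how to price an order line, it asks the database, and the answer determines which branch is taken. Every one of those decisions is another read, which is how ratios like 98% read come about.

```java
import java.sql.*;

// A sketch of data-driven branching: the pricing decision for an order line
// is looked up in the database rather than hard-coded. Table and column
// names (contract_items, discount_pct, pricing_rule) are hypothetical.
public class ContractPricing {
    public double priceFor(Connection db, long customerId, long productId,
                           double listPrice) throws SQLException {
        String sql = "SELECT discount_pct, pricing_rule FROM contract_items "
                   + "WHERE customer_id = ? AND product_id = ?";
        try (PreparedStatement ps = db.prepareStatement(sql)) {
            ps.setLong(1, customerId);
            ps.setLong(2, productId);
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) {
                    return listPrice; // no contract entry: charge list price
                }
                // The branch taken depends on data, not on code.
                switch (rs.getString("pricing_rule")) {
                    case "FLAT_DISCOUNT":
                        return listPrice * (1 - rs.getDouble("discount_pct") / 100);
                    default:
                        return listPrice;
                }
            }
        }
    }
}
```

Multiply that lookup by 100,000 contract items or 60,000 locations, touched on nearly every transaction, and the read-dominated ratios above follow naturally.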

With ISV-specific benchmarks, even though they are running a real business application, they don't represent a customized deployment of the technology. They are specifically crafted to produce the highest possible numbers, because they are actually marketing tools, not something that can be relied upon for your own implementation. If you look at some of the SAP benchmark results, they show numbers like 29 million or more dialog steps per hour. That is a dead giveaway to anyone who has half a brain: 29 million per hour works out to roughly 8,000 dialog steps per second, sustained. Does anyone's SAP implementation in the world do 29 million of anything in one hour? I think not! My entire career (until just recently) has been spent in high-volume, transaction-oriented businesses, and believe me, these kinds of numbers are completely off the charts, and meaningless.

There is one other aspect to both types of benchmarks: the configurations used are ones that no customer in their right mind would deploy in a production environment. You will see things like raw disk being used with RAID level 0 (just striping, with no redundancy). Undocumented database features being turned on that exist specifically for benchmarks, but leave the database unsupported by the vendor in production. Data being striped over hundreds or even thousands of disk drives. All logging of any kind, whether in the database, the application servers, the OS, etc., being turned off to lower overhead as much as possible. These are but some of the tricks used in so-called audited benchmark results. Where does that leave us where these benchmarks are concerned?

It leads us to one place and one place only: "numbers don't lie, but liars use numbers"! These benchmarks are marketing tools, and no more. They don't represent anything remotely close to a production deployment, and the numbers will always be higher than what can be achieved in a real-world deployment that can actually be managed. Don't rely on these marketing ploys to make decisions; instead, run your own workload in a proper, production-like configuration, and make your decisions based on facts, not fiction.
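"Run your own workload" doesn't have to mean a big project to start. As a rough illustration only (the endpoint URL, thread count, and run duration below are all assumptions, and a real harness would also track errors and response times), a crude throughput driver fits on a page:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;

// A bare-bones load driver: N threads replay a representative request
// against your own deployment for a fixed interval, then report throughput.
public class OwnWorkload {
    public static void main(String[] args) throws Exception {
        final int threads = 32;                  // assumed concurrency
        final long durationMs = 60_000;          // one-minute run
        final URI target = URI.create("http://localhost:8080/order"); // hypothetical endpoint
        final HttpClient client = HttpClient.newHttpClient();
        final AtomicLong completed = new AtomicLong();
        final long deadline = System.currentTimeMillis() + durationMs;

        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < threads; i++) {
            pool.submit(() -> {
                HttpRequest req = HttpRequest.newBuilder(target).GET().build();
                while (System.currentTimeMillis() < deadline) {
                    try {
                        client.send(req, HttpResponse.BodyHandlers.discarding());
                        completed.incrementAndGet();
                    } catch (Exception e) {
                        // a real harness would count and report failures
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(durationMs + 10_000, TimeUnit.MILLISECONDS);
        System.out.printf("%d requests in %ds = %.0f req/s%n",
                completed.get(), durationMs / 1000,
                completed.get() * 1000.0 / durationMs);
    }
}
```

Crude as it is, a driver like this, pointed at your own application on a configuration you would actually operate, tells you more than any vendor's audited result ever will.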

Monday, January 23, 2006

A New Beginning

I have recently changed jobs, and have gone from a traditional internal IT shop to an open source company. Friday was my last day at my old job, and today was my first at my new one. What I find most interesting about the difference is that the passion so often drained from employees in traditional IT shops is alive and well in my new position.

People really care about what they are doing, and it shows in everything that I have experienced so far. A successful endeavor, no matter what its purpose, has to involve people who care. What a refreshing difference! It is wonderful to be involved in something where people say what they mean, and mean what they say. No hidden agendas, no politics, just a spirit of let's do the right thing.

I think I have found a position where I can turn my passion into my vocation, and you can never go wrong with that.

Monday, January 16, 2006

The Myth of "One Throat to Choke"

When decision makers start to compare various technology solutions, one thing that inevitably comes up is the notion of a single vendor solution, with one support organization, versus a best-of-breed solution with multiple support organizations. The so-called "One Throat to Choke" support model.

I call this a myth for several reasons. Yes, we can all recall situations where multiple vendors pointed fingers at each other instead of helping solve our problem. But I can also recall situations where multiple vendors worked quite well together to solve problems, just as we can all recall a single vendor not addressing a problem even though it was clearly theirs to deal with.

First, most single-vendor solutions with anything more than one moving part, so to speak, have proprietary features of the integrated solution that are intended to lock you in. They also make it very difficult to get value out of the solution without using those proprietary features. Once you have landed in that trap, the switching costs start to mount, and for some conservative organizations they become insurmountable. Once a vendor has you in that situation, their support really doesn't have to be very good. So now you have "one throat to choke", and you are just there choking it with no results! This is especially true in the software industry with "stacks" or "suites" that are supposed to save you from all the integration costs because they are pretested and certified together.

Second, most mature technologies today are based on a set of open standards. With open standards, the integration costs aren't as high, and in some cases are downright non-existent. With standardized interfaces and protocols between the various pieces of a best-of-breed solution, it is often quite easy to determine where a problem lies, which lessens the finger-pointing and makes it clear who needs to be involved to fix it. Also, when vendors are put into a competitive situation, they will often work harder to solve your problems than vendors that have you locked in!

Finally, with many technology combinations in a best-of-breed solution, the vendors have predefined cross-support relationships, and if they don't, they are often willing to put them in place for you.

While it may seem alluring to have "one throat to choke", I think the difference in problem resolution is minimal, at best, compared to a multi-vendor solution. And with the lock-in strategy of "stack" or "suite" products, you are often left with an inferior solution, with none of the competitive pressure that helps you as a customer.

Saturday, January 14, 2006

When Technology Evaluations Go Awry

Recently, I have been witness to a technology evaluation that has been a real eye-opener. You would like to believe that the individuals involved in an evaluation will keep an open mind toward all the solutions, and that they will not try to hide the weaknesses and problems of one solution versus another.

I guess I shouldn't be surprised that human nature has reared its ugly head in this. Although I would like to think that people will be honest and have everyone's best interests in mind, I have uncovered multiple instances where individuals covered things up, and outright rated specific features that they never even observed. All because they don't believe in an open standards and open source approach to technology. They believe that traditional commercial ISV solutions are inherently better, so they set out to "prove it", and in so doing they intentionally skewed test results, hid problems, or tried to explain them away.

Now, what is the result of all of this? The ultimate result is that a company will pay more money for a solution that has no additional benefits over the open solution, has poor technical support, doesn't truly work as well as the evaluation seems to state, and will take their entire organization backward instead of forward!

This is the saddest thing I have ever witnessed in my career: that people would put their own "beliefs" and "pride" ahead of the best interests of the organization they are a part of.

I believe that what I just witnessed marks the beginning of the end of what used to be a successful organization. I can only hope that this kind of behavior is eventually rooted out.