I love the late evening banter on Twitter, where a conversation between a number of individuals turns into a personal rant from yours truly. Tonight’s subject – performance management of Microsoft Exchange and overconfiguration of storage for email.
Some four years ago, I was working for a large investment bank (which may now be defunct), where I did the storage configuration and testing for a new Exchange deployment. Having been called in at the last minute, I had to take the storage configuration as delivered by the previous experts and the vendor: a DMX1000-P2 (the performance model), using only the fastest 50% of the drives.
As the pre-deployment testing progressed, all the Exchange servers were installed, configured and loaded with Microsoft's Jetstress software to test storage performance. Unsurprisingly, given how hideously over-configured the setup was, the testing passed with flying colours. Yet when I checked the individual servers, I found wide variations in their setup: HBAs negotiated at 1Gb/s rather than 2Gb/s (some servers had HBAs running at different speeds); inconsistent drivers and firmware; differences in the host logical volume layout. Despite all this, the configuration worked flawlessly, even with all of the intended production servers running stress loads at the same time.
This isn’t the only over-configured Exchange implementation I’ve seen; another springs to mind that short-stroked 300GB drives down to 146GB. I’ve also seen the same over-provisioning applied to Notes. In that instance, however, common sense prevailed: it became clear very quickly that each Notes server could carry far more data and that there was no need to short-stroke the drives to achieve the desired throughput. Performance/capacity logic was applied and the configuration streamlined.
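That performance/capacity logic is simple enough to sketch: work out how many spindles the workload needs for IOPS and how many it needs for raw capacity, and size to whichever is larger. If capacity is the binding constraint, short-stroking the drives buys you nothing. A minimal illustration, using made-up numbers rather than anything from the deployments above:

```python
from math import ceil

def spindles_needed(required_iops, required_gb, iops_per_drive, gb_per_drive):
    """Return (drives for performance, drives for capacity, drives to buy).

    Whichever count is larger is the real constraint; if capacity wins,
    short-stroking the drives gains nothing.
    """
    by_perf = ceil(required_iops / iops_per_drive)
    by_cap = ceil(required_gb / gb_per_drive)
    return by_perf, by_cap, max(by_perf, by_cap)

# Hypothetical mail store: 9,000 IOPS and 20 TB on 15k drives
# rated at ~180 IOPS and 300 GB usable each.
perf, cap, total = spindles_needed(9000, 20000, 180, 300)
# 50 drives satisfy the IOPS; 67 are needed for capacity,
# so capacity, not performance, sets the spindle count.
```

With numbers like these, restricting each drive to its fastest tracks only forces you to buy more spindles for the same capacity.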
The moral of this story? (a) Don’t over-configure purely on the vendor’s recommendation; chances are they’re doing CYA so they can’t be blamed for poor response times and throughput. (b) Review your configuration regularly, and if response times are overly good, tune things down: use that extra disk space and load the servers more heavily.
Don’t just assume that because everything works normally there’s no extra performance to squeeze from the configuration.