These E6500s are overloaded during peak hours (most of the time 0% idle CPU, a dozen threads queued waiting to run, some network packet drops, etc. - you get the idea). Well, the T2000 with exactly the same production workload is loaded at about 20% at peak, with no network packet drops and no threads queued. So there's still a lot of head-room.
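For reference, the numbers above are the kind you get from the standard Solaris stat tools; a minimal sketch (nothing here is specific to our setup):

    # run queue (kthr:r column) and idle CPU (cpu:id column), sampled every 5s
    vmstat 5

    # per-CPU utilization - on the T2000 this shows all 32 hardware threads
    mpstat 5

    # watch for Ierrs/Oerrs climbing on the NICs
    netstat -i 5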
In order to see how capable the T2000 is at doing IO, I increased some parameters in our applications so the data processing was more aggressive - more NFS traffic and more CPU processing - all in production with real data and a real workload. Well, the T2000 was reading almost 500Mb/s from the NFS servers, writing another 200Mb/s to the NFS servers, and communicating with the frontend servers at about 260Mb/s. And still no network packet drops, no threads queued up, and the server was loaded at about 30% (CPU) at peak. So there's still plenty of head-room. And all of this traffic went over the internal on-board interfaces. When you add the numbers up you get almost 1Gb/s of real production traffic.
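If you want to watch that kind of traffic yourself on Solaris 10, something like the following works; the interface name is just an example (the T2000 on-board ports may appear as ipge or e1000g depending on the Solaris 10 update):

    # per-NIC statistics, sampled every 10 seconds
    dladm show-link -s -i 10 e1000g0

    # NFS client operation mix - reads/writes plus the metadata ops
    # (lookup, rename, remove, create) mentioned below
    nfsstat -c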
Unfortunately our T2000 has only 16GB of memory, which was a little bit problematic, so I couldn't push it even more. I wish I had a T2000 with 32GB of RAM and a 1.2GHz UltraSPARC T1 - I could try to consolidate even more gear and try more data processing.
P.S. We're definitely buying more T2000s and putting them in place of the E6500s, E4500s, ...
The applications weren't recompiled for the UltraSPARC T1 - we use the same binaries as on the E6500s, and the applications were configured exactly the same. The NFS traffic goes to a really large number of small files, with hundreds of threads doing so concurrently and with a lot of metadata manipulation (renaming, removing files, creating new ones, etc.) - so it's not simple sequential reading of big files. The on-board GbE NICs were used on the T2000. No special tuning was done specifically for the T2000 - the same tunables as on the E6500s (larger TCP buffers, backlog queues, more NFS client threads per filesystem, etc.; see the sketch below). Solaris 10 was used.
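For the curious, the tunables mentioned above are nothing exotic; a rough sketch of the sort of settings involved (the values below are illustrative, not what we actually run):

    # larger TCP buffers and backlog queue (ndd settings don't survive
    # a reboot - put them in a boot script)
    ndd -set /dev/tcp tcp_max_buf 4194304
    ndd -set /dev/tcp tcp_xmit_hiwat 1048576
    ndd -set /dev/tcp tcp_recv_hiwat 1048576
    ndd -set /dev/tcp tcp_conn_req_max_q 4096

    # in /etc/system: more NFS client async threads per mounted filesystem
    set nfs:nfs3_max_threads = 32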