Friday, March 24, 2006
So, after a week with the Niagara T2000, I've managed to find some time to do some more detailed benchmarks, and the results are very impressive. The T2000 is definitely an impressive piece of equipment - it seems very capable - and we may very well end up going with the platform for our mirror server. Bottom line: running identical software, the T2000 was able to handle over 3 times the number of transactions per second and about 60% more concurrent downloads than the current ftp.heanet.ie machine (a dual Itanium with 32GB of memory) can. Its advantage was even bigger again when compared to a well-specced x86 machine. Not bad!
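For anyone curious how this sort of comparison might be reproduced, here is a minimal sketch of a concurrent-download load generator, assuming a plain HTTP mirror; the URL, concurrency level and chunk size below are placeholders, not the actual ftp.heanet.ie test parameters.

```python
# Minimal sketch of a concurrent-download benchmark against an HTTP
# mirror. The URL, concurrency and chunk size are hypothetical; they
# are not the parameters used in the ftp.heanet.ie tests.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://mirror.example.org/pub/testfile.iso"  # hypothetical target
CONCURRENCY = 200   # simultaneous downloads to attempt
CHUNK = 64 * 1024   # bytes read per recv

def download(url):
    """Fetch one file, returning the number of bytes transferred."""
    total = 0
    with urlopen(url) as resp:
        while True:
            chunk = resp.read(CHUNK)
            if not chunk:
                break
            total += len(chunk)
    return total

start = time.time()
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    sizes = list(pool.map(download, [URL] * CONCURRENCY))
elapsed = time.time() - start

print(f"{len(sizes)} downloads in {elapsed:.1f}s, "
      f"{sum(sizes) * 8 / elapsed / 1e6:.0f} Mbit/s aggregate")
```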
Wednesday, March 15, 2006
The Rock processor - due out in 2008 - will have four cores or 16 cores, depending on how you slice the product. By that, we mean that Sun has divided the Rock CPU into four separate cores, each with four processing engines. Each core also has four FGUs (floating point/graphics units). Each processing engine will be able to crank two threads, giving you - 4 x 4 x 2 - 32 threads per chip.
Sun appears to have a couple of flavors of Rock - Pebble and Boulder. Our information on Pebble is pretty thin, although it appears to be the flavor of Rock meant to sit in one-socket servers. Boulder then powers two-socket, four-socket and eight-socket servers. The servers have been code-named "Supernova" and appear impressive indeed. A two-socket box - with 32 cores - will support up to 128 FB-DIMMs. The eight-socket boxes will support a whopping 512 FB-DIMMs. Sun appears to have some fancy shared-memory tricks up its sleeve with this kit.
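As a quick sanity check, the thread and FB-DIMM arithmetic quoted above works out like this (a sketch using only the figures from the article):

```python
# Back-of-the-envelope check of the Rock figures quoted above.
cores_per_chip = 4        # four separate cores per Rock CPU
engines_per_core = 4      # four processing engines per core
threads_per_engine = 2    # each engine can crank two threads

print(cores_per_chip * engines_per_core * threads_per_engine)  # 4 x 4 x 2 = 32 threads per chip

# "Supernova" memory scaling: both quoted configurations come out at
# a constant 64 FB-DIMMs per socket.
for sockets, dimms in ((2, 128), (8, 512)):
    print(f"{sockets}-socket: {dimms} FB-DIMMs = {dimms // sockets} per socket")
```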
Monday, March 13, 2006
Thanks to extraordinary efforts from David Miller, the Ubuntu SPARC team and the entire Linux-on-SPARC community, it should now be possible to test out the complete Ubuntu installer and environment on Niagara machines. As of today, the unofficial community port of Ubuntu to SPARC should be installable on Niagara, and we would love to hear reports of success or failure (and love them more if they come with patches for performance or features :-)).
Thursday, March 02, 2006
During peak hours these E6500s are overloaded (most of the time 0% idle CPU, a dozen threads queued waiting to run, some network packet drops, etc. - you get the idea). Well, the T2000 with exactly the same production workload is loaded at about 20% at peak, with no network packet drops and no threads queued. So there's still a lot of headroom.
In order to see how much I/O the T2000 is capable of, I increased some parameters in our applications so that data processing was more aggressive - more NFS traffic and more CPU processing - all in production, with real data and a real workload. Well, the T2000 was reading almost 500Mb/s from the NFS servers, writing another 200Mb/s to the NFS servers, and communicating with the frontend servers at about 260Mb/s. And still no network packet drops, no threads queued up, and the server was loaded at about 30% (CPU) at peak. So there's still large headroom. And all of this traffic went over the internal on-board interfaces. When you add the numbers up you get almost 1Gb/s of real production traffic.
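Adding those three streams up confirms the headline figure (all numbers as quoted above):

```python
# Quick check of the traffic figures quoted above (all in Mbit/s).
nfs_read, nfs_write, frontend = 500, 200, 260
print(nfs_read + nfs_write + frontend)  # 960 Mbit/s: "almost 1Gb/s"
```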
Unfortunately our T2000 has only 16GB of memory, which was a little bit problematic and meant I couldn't push it even further. I wish I had a T2000 with 32GB of RAM and a 1.2GHz UltraSPARC T1 - I could try to consolidate even more gear and try more data processing.
P.S. Well, we're definitely buying more T2000s and putting them in place of the E6500s, E4500s, ...
The applications weren't recompiled for the UltraSPARC T1 - we use the same binaries as on the E6500s, and the applications were configured exactly the same. The NFS traffic goes to a really large number of small files, with hundreds of threads working concurrently and a lot of metadata manipulation (renaming files, removing them, creating new ones, etc.) - so it's not simple sequential reading of big files. The on-board GbE NICs were used on the T2000. No special tuning was done for the T2000 - the same tunables as for the E6500s (larger TCP buffers, bigger backlog queues, more NFS client threads per filesystem, etc.). Solaris 10 was used.
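For illustration, here is a minimal sketch of that kind of metadata-heavy small-file workload: many threads concurrently creating, reading, renaming and removing small files under an NFS mount. The mount point, thread count, file size and operation counts are placeholders, not the actual production parameters.

```python
# Minimal sketch of the metadata-heavy small-file workload described
# above: many threads concurrently creating, reading, renaming and
# removing small files on an NFS mount. All parameters below are
# hypothetical, not the actual production settings.
import os
import uuid
from concurrent.futures import ThreadPoolExecutor

NFS_ROOT = "/mnt/nfs/worker"   # hypothetical NFS mount point
THREADS = 200                  # "hundreds of threads" in the original
OPS_PER_THREAD = 1000
PAYLOAD = b"x" * 4096          # small files, not big sequential reads

def churn(worker_id):
    """Create, read back, rename and delete small files in a loop."""
    workdir = os.path.join(NFS_ROOT, f"t{worker_id}")
    os.makedirs(workdir, exist_ok=True)
    for _ in range(OPS_PER_THREAD):
        path = os.path.join(workdir, f"{uuid.uuid4().hex}.dat")
        with open(path, "wb") as f:
            f.write(PAYLOAD)
        with open(path, "rb") as f:
            f.read()
        os.rename(path, path + ".done")    # metadata manipulation
        os.remove(path + ".done")

with ThreadPoolExecutor(max_workers=THREADS) as pool:
    list(pool.map(churn, range(THREADS)))  # surface any worker errors
```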