I think I am suffering from bottleneck issues. What I am witnessing is downright dismal performance from our server. I have dealt with ZFS in the past, and these are by far the worst performance numbers I have seen. Now, with that being said, I know it's a configuration issue, and that's why we test these sorts of things in our development rack. The ZFS filesystem and architecture can handle what I'm doing with ease, so the issue here has to be drive placement and RAID configuration. In my previous post you will see that the 24-bay SATA enclosure contains drives with white labels. These are part of a RAID 1 array that serves as the OS drive shown in the BIOS. All drives in the enclosure are connected to an HP SAS Expander, which connects to an Adaptec 5445 in the main chassis.

Now, when performing an rsync or a disk usage check (du), I get slow responses and ultimately unresponsiveness from the box. I can't even log in after a while. The most recent issue was that, after a reboot, a service was stuck in maintenance mode, which halted the startup procedure entirely and dropped me to a maintenance shell. I eventually got it going, but this is odd. I do see that one of the drives in the Pink array is throwing a S.M.A.R.T. error, but I don't see it failing yet. It will be replaced soon. I believe I am seeing some serious I/O bottlenecks from this system's current configuration.
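
Before the new parts go in I want to grab some numbers while an rsync or du is actually running, just to see where the time is going. A rough sketch of what I have in mind (the pool name tank is just a stand-in for whatever the real pool is called):

    # Any pools reporting problems, plus per-vdev health
    zpool status -xv

    # Per-device service times and utilization while the rsync/du runs
    iostat -xn 5

    # Per-vdev throughput and IOPS for the pool
    zpool iostat -v tank 5

    # Check the fault manager's error log for the suspect Pink drive
    fmdump -eV | grep -i disk

In the meantime, I ordered the following to add to the mix: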

2x quad-core 2.33GHz SLAEJ processors
8GB of ECC FB-DIMM PC2-5300
The final 146GB 15K SAS drive for the Yellow array
A Chenbro 12803 SAS expander (I have been looking for this!)
2x 32GB SSDs (YAY!) for ZIL/L2ARC (rough plan sketched below)
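
The idea for the two SSDs is roughly the following. The pool and device names are placeholders; the real ones will come out of format once the drives are installed.

    # Dedicate one SSD to the ZIL (separate log device) and the other to L2ARC
    zpool add tank log c7t0d0
    zpool add tank cache c7t1d0

    # Confirm the log and cache vdevs show up and watch them work
    zpool status tank
    zpool iostat -v tank 5

Losing an unmirrored slog on older pool versions can be painful, so mirroring the log (zpool add tank log mirror c7t0d0 c7t1d0) would be the safer route, but then there is nothing left over for L2ARC. On the dev rack I will probably just accept the risk.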

I may have to reformat once these parts are added and things are moved around, just for peace of mind. I want to use the SSDs to get some sort of increase in IOPS and test how much they improve performance. So far I'm seeing 9 to 32 MB/s in the 146GB 10K U320 RAID-Z2 array (Orange) and 20 to 40 MB/s in the 500GB 7.2K SATA array (Pink). I will add the 1.5TB 7.2K SATA drives (Yellow label, Pink dots) later, once I destroy the array they are currently in.
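
For a more repeatable number than watching rsync, I will probably run a quick sequential test against each array once things are rearranged. Something along these lines (the dataset path is a placeholder, and if compression is enabled on it, writing zeroes will give meaningless numbers, so I would turn it off for the test):

    # Sequential write: stream 4GB of zeroes into the pool
    dd if=/dev/zero of=/tank/bench/testfile bs=1024k count=4096

    # Sequential read of the same file; the ARC will cache it, so either
    # reboot first or read back something much larger than RAM
    dd if=/tank/bench/testfile of=/dev/null bs=1024k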

I know that ZFS can perform better than this, and I will make it better once I figure out the best configuration or the reason for the poor performance. For shits and giggles I may just put the OS on its own set of SSDs and velcro them to the case (don't judge me!) so I can utilize the internal onboard SATA connectors, ensuring that the OS partition has its own personal bandwidth highway, I guess.
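
If I do go the velcro route, the rough plan would be to install to one SSD and then attach the second as a mirror of the root pool, something like the following. Device names are placeholders, and on OpenSolaris x86 the GRUB boot blocks have to be put on the newly attached half by hand.

    # Attach the second SSD as a mirror of the existing rpool device
    zpool attach rpool c3d0s0 c4d0s0

    # Install the GRUB boot blocks on the new disk so either SSD can boot
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c4d0s0

    # Watch the resilver finish before rebooting onto it
    zpool status rpool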

All in all the experience is great, since I am trying to learn OpenSolaris more. This is exactly why we have a dev rack to test this on. I love it!