Archive for January, 2012

I/O Whoas :(

I think I am suffering from bottleneck issues. What I am witnessing is downright dismal performance from our server. I have dealt with ZFS in the past and these are by far the worst performance numbers I have seen. That being said, I know it's a configuration issue, and that's why we test these sorts of things in our development rack. ZFS itself can handle what I'm doing with ease, so the issue here has to be drive placement and RAID configuration. In my previous post you will see that in the 24-bay SATA enclosure there are drives that have white labels. These are part of a RAID 1 array that serves as the OS drive shown in the BIOS. All drives in the enclosure are connected to an HP SAS Expander that connects to an Adaptec 5445 in the main chassis.

Now when performing an rsync or a disk usage check (du) I get slow responses and ultimately unresponsiveness from the box; I can't even log in after a while. The most recent issue was that after a reboot, a service was in maintenance mode, which halted the startup procedure entirely and dropped me to a maintenance shell. I eventually got it going, but this is odd. I do see that one of the drives in the Pink array is throwing a S.M.A.R.T. error, but I don't see it failing yet; it will be replaced soon. I believe I am seeing some serious I/O bottlenecks from this system's current configuration. I ordered the following to add to the mix:

2x quad-core 2.33GHz SLAEJ processors
8GB ECC FB-DIMM PC2-5300
Final 146GB 15K SAS drive for the Yellow array
Finally, a Chenbro 12803 SAS expander (I have been looking for this!)
2x 32GB SSDs (YAY!) for ZIL/L2ARC
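The rough plan for the two SSDs would be one as a dedicated ZIL (SLOG) and one as L2ARC. A dry-run sketch of the zpool commands involved is below — note the pool name "tank" and the cXtXdX device IDs are placeholders, not the real ones on this box:

```shell
# Dry run: print the zpool commands for attaching the SSDs.
# Pool name "tank" and the device IDs are placeholders.
POOL=tank
SLOG_DEV=c3t0d0   # SSD #1: dedicated ZIL (SLOG) to absorb synchronous writes
L2ARC_DEV=c3t1d0  # SSD #2: L2ARC to extend the read cache onto flash
echo "zpool add $POOL log $SLOG_DEV"
echo "zpool add $POOL cache $L2ARC_DEV"
```

The SLOG only helps synchronous writes (NFS, databases) and the L2ARC only helps repeated reads, so neither is guaranteed to fix a sequential-throughput problem — which is part of what I want to test.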

I may have to reformat once these parts are added and things are moved around, just for peace of mind. I want to use the SSDs to get some sort of IOPS increase and test how much they improve performance. So far I'm seeing 9 to 32 MB/s on the 146GB 10K U320 RAID-Z2 array (Orange) and 20 to 40 MB/s on the 500GB 7.2K SATA array (Pink). I will add the 1.5TB 7.2K SATA drives (Yellow label, Pink dots) later once I destroy the array they are currently in.
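Those numbers come from crude sequential-write probes. A safe-to-paste version writes to a temp file — on the server you would point F at a file inside the pool you are measuring:

```shell
# Crude sequential-write probe: stream 64MB of zeros into a scratch file.
# F here is a temp file; on the server, point it at a file on the pool
# under test. GNU dd prints the MB/s itself; on Solaris, wrap it in time(1).
F=$(mktemp)
dd if=/dev/zero of="$F" bs=1024k count=64
sync    # flush cached writes before the next run
rm -f "$F"
```

Two caveats: writes land in the ARC first, so small test files flatter the number (use a file larger than RAM for an honest figure), and streaming zeros will wildly overstate throughput on a compressed dataset, so use real data there.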

I know that ZFS can perform better than this, and I will make it better once I figure out the best configuration or the reason for the poor performance. For shits and giggles I may just put the OS on its own set of SSDs and velcro them to the case (don't judge me!) so I can utilize the internal onboard SATA connectors, ensuring the OS partition has its own personal bandwidth highway, I guess.

All in all the experience is great, since I am trying to learn OpenSolaris better. This is exactly why we have a dev rack to test this on. I love it!

Email Migration – Done

Billing System – Done

Now on to the next project: Rack Consolidation. We have a list of projects that we want to get done, and this one is more of a spring cleaning for our development rack. The goal is to…well, consolidate and virtualize some of our servers into one system. We chose OpenSolaris and we are still learning it, but it offers us the capability to run storage and virtualization in one box. Of course there is the almighty ZFS offering, but the virtualization aspect is new for us on this platform and will just add to our knowledge. Sadly the OpenSolaris project is a bit dead, but thankfully it has been forked into OpenIndiana, which is based on illumos. We mainly like it for the storage aspect, though.

Storage arrays will be color-coded to keep track of each disk's speed and which array it is a part of.

Yellow = 146GB 15K SAS (High Speed VM Storage)
Orange = 146GB 10K U320 SCSI (Moderate Speed Unimportant Storage/Secondary Data Backup/Moderately Compressed)
Pink = 500GB/1.5TB 7.2K SATA (Moderate Speed Important Storage/Primary Data Backup)
Green = 1.5TB 5.4K SATA (Archive Storage/Tertiary Data Backup/Highly Compressed)
White = 160GB 7.2K SATA (OS)

Yellow is for VMs that require high speed writes/reads.
Pink will hold most of our software repository, like OS images, some VMs, and primary backups of the VMs from the Yellow array.
Orange will hold everyday unimportant/light-use data like music, documents, temporary customer backups, and whatever else, and will be compressed to conserve space.
Green will be archive storage for the other arrays' data and will be highly compressed, making it a good place for old stuff we keep around just to have, like older copies of Ubuntu, our documents, and what have you.
White is the main root pool that will not store anything but the OS and its necessary files.
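On the compression tiers: ZFS offers lzjb for the lightly compressed datasets and gzip-1 through gzip-9 for heavy archive compression (hypothetical dataset names, but something like `zfs set compression=lzjb pool/orange` versus `zfs set compression=gzip-9 pool/green`). The space/CPU tradeoff between levels is easy to demo with plain gzip — the sample data here is artificially repetitive, so real ratios will be far less dramatic:

```shell
# Space/CPU tradeoff behind the "moderately" vs "highly" compressed tiers:
# compare gzip level 1 against level 9 on 1MB of compressible sample data.
F=$(mktemp)
yes "sample backup data" | head -c 1048576 > "$F"
LIGHT=$(gzip -1 -c "$F" | wc -c)   # fast, larger output
HEAVY=$(gzip -9 -c "$F" | wc -c)   # slow, smaller output
echo "gzip-1: $LIGHT bytes, gzip-9: $HEAVY bytes"
rm -f "$F"
```

The same idea motivates the tiering above: heavy compression on Green, where nothing is in the hot path, and lighter (or no) compression where speed matters.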

While designing and planning this, we will also work to get our phone system up and running again, as we feel it is time to start using it again.