Archive for the 'MJNS' Category

After talking with others and realizing how shitty my numbers were when it came to disk/array performance testing, I decided to get back into bonnie++ testing. I feel much better about these numbers, though, because I know the power of bonnie along with its simplicity...well, this wasn't so simple, but it is a bit more automated. Also, while running this I saw some very interesting numbers in zpool iostat, with 1K+ write operations, which was nice. These numbers are more realistic and can be used in a conversation to describe performance, since they come from a fairly well-known tool. It will become my standard, along with the usual test of dd'ing zeros...I still like to see those big numbers. :)

Source: http://www.krazyworks.com/testing-filesystem-performance-with-bonnie/
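
The bonnie++ invocation used for all of the runs below sizes the test data at twice the installed RAM so the page cache can't flatter the results. Spread out for readability, it's equivalent to this (a sketch; swap in the mount point of the array under test):

# -n 0 : skip the file-creation tests
# -u 0 : run as root
# -r   : tell bonnie++ how much RAM is installed (MB)
# -s   : test size of 2x RAM (MB) so results reflect the disks, not the cache
# -f   : skip the slow per-character tests
# -b   : no write buffering (fsync after every write)
# -d   : directory on the array under test
RAM_MB=$(free -m | awk '/Mem:/ {print $2}')
bonnie++ -n 0 -u 0 -r "$RAM_MB" -s $((RAM_MB * 2)) -f -b -d /var/lib/vz/r6-15k-146-array/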

8x146GB 15k with LVM on top using ext4 via Hardware RAID6

bonnie++ -n 0 -u 0 -r `free -m | grep 'Mem:' | awk '{print $2}'` -s $(echo "scale=0;`free -m | grep 'Mem:' | awk '{print $2}'`*2" | bc -l) -f -b -d /var/lib/vz/r6-15k-146-array/
Using uid:0, gid:0.
Writing intelligently…done
Rewriting…done
Reading intelligently…done
start 'em…done…done…done…done…done…

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
h1           32052M           467993  60 177575  23           415967  25 970.4  42
Latency                         326ms     543ms               119ms     408ms

Seq Write: 457MB/s
Seq Re-Write: 173MB/s
Seq Read: 406MB/s

8x1TB 7.2k using ZFS RZ2 (RAID6) with lzjb compression enabled

First run on new array

bonnie++ -n 0 -u 0 -r `free -m | grep 'Mem:' | awk '{print $2}'` -s $(echo "scale=0;`free -m | grep 'Mem:' | awk '{print $2}'`*2" | bc -l) -f -b -d /mnt/datapool2/
Using uid:0, gid:0.
Writing intelligently…done
Rewriting…done
Reading intelligently…done
start 'em…done…done…done…done…done…
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
h1           32052M           92609  16 81752  14           568497  31 310.7   3
Latency                         610ms     694ms             57049us     388ms

Seq Write: 90MB/s
Seq Re-Write: 79.8MB/s
Seq Read: 555MB/s

Second run on new array

Using uid:0, gid:0.
Writing intelligently…done
Rewriting…done
Reading intelligently…done
start 'em…done…done…done…done…done…
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
h1           32052M           94759  15 81940  14           557672  31 313.9   3
Latency                         613ms     524ms             83193us     339ms

Seq Write: 92.5MB/s
Seq Re-Write: 80MB/s
Seq Read: 544MB/s

During Writing intelligently

zpool iostat datapool2 1
              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
datapool2   1.99T  5.26T      0      0      0      0
datapool2   1.99T  5.26T      0      0      0      0
datapool2   1.99T  5.26T      0      0      0      0
datapool2   1.99T  5.26T      0      0      0      0
datapool2   1.99T  5.26T      0      0      0  4.00K
datapool2   1.99T  5.26T      0    684      0  2.60M
datapool2   1.99T  5.26T      0  1.15K      0  4.77M
datapool2   1.99T  5.26T      5    633  5.00K  2.46M
datapool2   1.99T  5.26T      0    602      0  2.41M
datapool2   1.99T  5.26T      0  1.15K      0  4.79M
datapool2   1.99T  5.26T      0    654      0  2.48M
datapool2   1.99T  5.26T      0    955      0  3.92M
datapool2   1.99T  5.26T      0    858      0  3.30M

During Rewriting

zpool iostat datapool2 1
              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
datapool2   1.99T  5.26T    521  1.18K  2.28M  4.84M
datapool2   1.99T  5.26T    778    604  3.41M  2.42M
datapool2   1.99T  5.26T    783    604  3.42M  2.42M
datapool2   1.99T  5.26T    520  1.18K  2.27M  4.85M
datapool2   1.99T  5.26T    640    604  2.80M  2.43M
datapool2   1.99T  5.26T    658    604  2.88M  2.43M
datapool2   1.99T  5.26T    774    633  3.41M  2.55M
datapool2   1.99T  5.26T    524  1.16K  2.28M  4.75M
datapool2   1.99T  5.26T    778    604  3.41M  2.42M
datapool2   1.99T  5.26T    776    604  3.41M  2.43M
datapool2   1.99T  5.26T    525  1.18K  2.28M  4.85M

During Reading intelligently

zpool iostat datapool2 1
              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
datapool2   1.99T  5.26T  4.06K      0  18.3M      0
datapool2   1.99T  5.26T  4.03K      0  18.2M      0
datapool2   1.99T  5.26T  4.03K      0  18.2M      0
datapool2   1.99T  5.26T  4.10K      0  18.5M      0
datapool2   1.99T  5.26T  4.10K      0  18.5M      0
datapool2   1.99T  5.26T  4.14K      0  18.7M      0
datapool2   1.99T  5.26T  4.03K      0  18.2M      0
datapool2   1.99T  5.26T  4.03K      0  18.2M      0
datapool2   1.99T  5.26T  4.24K      0  19.1M      0
datapool2   1.99T  5.26T  4.07K      0  18.3M      0
datapool2   1.99T  5.26T  4.03K      0  18.2M      0

During start 'em

zpool iostat datapool2 1
              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
datapool2   1.99T  5.26T    311     31  1.48M   384K
datapool2   1.99T  5.26T    318     27  1.50M   336K
datapool2   1.99T  5.26T    305     29  1.43M   360K
datapool2   1.99T  5.26T    321     35  1.51M   432K
datapool2   1.99T  5.26T    301     41  1.43M   504K
datapool2   1.99T  5.26T    329     31  1.55M   384K
datapool2   1.99T  5.26T    308     20  1.46M   252K
datapool2   1.99T  5.26T    316     36  1.49M   444K
datapool2   1.99T  5.26T    326     28  1.55M   348K

Transferring ISO between arrays using rsync

FROM 8x1TB 7.2k (ZFS RZ2)
TO 8x146GB 15k (Hardware RAID6 w/ LVM on top)

rsync -a --progress /mnt/datapool2/vm/template/iso /var/lib/vz/r6-15k-146-array/test/
sending incremental file list
created directory /var/lib/vz/r6-15k-146-array/test
iso/
iso/FreePBX-1.1007.210.58-x86_64-Full-1344904533.iso
827643904 100%  131.17MB/s    0:00:06 (xfer#1, to-check=14/16)
iso/FreePBX-1.815.210.58-x86_64-Full-1344903580.iso
827457536 100%  135.75MB/s    0:00:05 (xfer#2, to-check=13/16)
iso/Windows-7-Home-Premium-x64.iso
3731709952 100%  130.39MB/s    0:00:27 (xfer#3, to-check=12/16)
iso/Windows-XP-SP3-Automated.iso
648937472 100%  130.04MB/s    0:00:04 (xfer#4, to-check=11/16)
iso/clearos-community-6.3.0-x86_64.iso
663922688 100%  117.69MB/s    0:00:05 (xfer#5, to-check=10/16)
iso/oi-dev-151a-text-x86.iso

…..

sent 12959018903 bytes  received 301 bytes  141628625.18 bytes/sec
total size is 12957435904  speedup is 1.00

TO 8x1TB 7.2k (ZFS RZ2)
FROM 8x146GB 15k (Hardware RAID6 w/ LVM on top)

rsync -a --progress /var/lib/vz/r6-15k-146-array/test/ /mnt/datapool2/
sending incremental file list
./
iso/
iso/FreePBX-1.1007.210.58-x86_64-Full-1344904533.iso
827643904 100%   78.86MB/s    0:00:10 (xfer#1, to-check=14/17)
iso/FreePBX-1.815.210.58-x86_64-Full-1344903580.iso
827457536 100%   78.87MB/s    0:00:10 (xfer#2, to-check=13/17)
iso/Windows-7-Home-Premium-x64.iso
3731709952 100%   80.21MB/s    0:00:44 (xfer#3, to-check=12/17)
iso/Windows-XP-SP3-Automated.iso
648937472 100%   68.44MB/s    0:00:09 (xfer#4, to-check=11/17)
iso/clearos-community-6.3.0-x86_64.iso
663922688 100%   77.94MB/s    0:00:08 (xfer#5, to-check=10/17)
iso/oi-dev-151a-text-x86.iso
514420736 100%   75.98MB/s    0:00:06 (xfer#6, to-check=9/17)

NEW STORAGE A-YAY! (Array)

New array built from a SAS/SATA JBOD unit w/ 8x 1TB Seagate Constellation ES drives connected to an Adaptec 6445 card, in RZ2 (RAID6)
root@h1:~# dd if=/dev/zero of=/mnt/datapool2/test000 bs=1G count=4
4+0 records in
4+0 records out
4294967296 bytes (4.3 GB) copied, 28.622 s, 150 MB/s
Old array built of Dell 220S DAS units w/ 30x 146GB 10k U320 drives in RZ2 (RAID6)
This one has compression enabled.
root@h1:~# dd if=/dev/zero of=/mnt/datapool1/test000 bs=1G count=4
4+0 records in
4+0 records out
4294967296 bytes (4.3 GB) copied, 55.3248 s, 77.6 MB/s
So to be fair or make sure everything is the same I enabled compression on the new array as well.
root@h1:~# zfs set compression=lzjb datapool2
root@h1:~# dd if=/dev/zero of=/mnt/datapool2/test000 bs=1G count=4
4+0 records in
4+0 records out
4294967296 bytes (4.3 GB) copied, 16.6144 s, 259 MB/s
A bit unbelievable (zeros compress down to almost nothing under lzjb, so this mostly exercises the compression path), so let's make that file twice the size.
root@h1:~# dd if=/dev/zero of=/mnt/datapool2/test000 bs=1G count=8
8+0 records in
8+0 records out
8589934592 bytes (8.6 GB) copied, 32.7574 s, 262 MB/s
THAT'S CAPTAIN FUCKING INSANO, SON!
Spent about $1,400 on this project to get rid of the power-hungry Dell 220S units and gain more storage space. Splurged on the drives by going with the higher-end enterprise SATA series (Seagate Constellation ES). These drives and the card can run faster at 6Gb/s, but the SAS enclosure is 3Gb/s, which still gives each drive far more bandwidth than an old-school shared U320 SCSI bus. If it were 6Gb/s end to end I would just have to change my pants and start wearing diapers every time I transfer files. Plus I haven't even added the SSD drives as a ZIL device, which will speed things up even more!
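
For when the SSDs do get added, attaching one to the pool as a ZIL (separate log) device is roughly this (a sketch; the device path is a placeholder, not the real SSD):

# Add an SSD as a dedicated log device for synchronous writes (placeholder device path)
zpool add datapool2 log /dev/disk/by-id/ata-EXAMPLE-SSD
# Watch per-vdev activity to see the log device being used
zpool iostat -v datapool2 1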
PICs Coming soon.

Yes, I know this is a company blog and it represents who we are as a collective, but have you seen the outrageous price of a Cisco access server, the Cisco 2511? It's kind of ridiculous, and since we are constructing a lab for certification purposes we don't feel like spending $180-plus on a device that just lets us log into the console ports of our routers/switches. So we thought: why not build our own Cisco access server? We don't need anything fancy, just some computer with a lot of serial ports. LOL...I had to laugh at that last line, "We don't need anything fancy...", because we used a Dell 2950 with 8GB of RAM, 2x quad-core Xeon processors, and the standard RAID 1 setup of 250GB drives, buuuutttt you can do it with whatever. This PC is running Ubuntu Desktop so we can log into it anytime via VNC to play with the setup. This also lets us leverage a 64-bit OS that can have packages installed like:

TFTP – For conf and IOS backup practice
NMAP – To test ACLs/Port Forwards
VBox – Why simulate the functions of a Windows box when you can run one in a VM if you REALLY want to.
Media Access – We have tons of tutorials, videos, and other things on the NAS at our disposal that can be accessed from this box.
Putty/Minicom – Lets us access the devices via the serial ports with either a GUI or CLI (see the sketch after this list). PuTTY is just more popular, but there are others.
Packet Tracer – I have to find a 64-bit version of this, or we can stick it in a VM.
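
Once the ports are mapped, connecting to a router console from this box looks something like the following (a sketch; the ttyS number depends on how your card's ports got mapped, and the baud rate assumes the Cisco default of 9600):

# Open a console session on one of the card's serial ports (device number is an example)
minicom -D /dev/ttyS2 -b 9600
# or, without minicom installed:
screen /dev/ttyS2 9600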

So we found a 6-port serial PCI card on eBay for $28 (YEAH BABY!). It was the SD-PCI-6S by Syba. There were cheaper sellers, but I didn't feel like waiting a month for shipping from overseas. It arrived in no time, and installation didn't require anything in the BIOS, just a minor software tweak that I figured out on Ubuntu.

I had some trouble getting all of the PCI card's serial ports recognized and mapped to /dev files. Only the first 2 ports on the card were accessible, though I could remap any port to a ttySx device file (where 'x' represents a number). It turned out to be something that had to be added to the grub config file, like so:

nano /etc/default/grub

Find the line:
GRUB_CMDLINE_LINUX_DEFAULT="" <-- may be blank between the quotes; if not, add a space between the existing arguments

change it to:
GRUB_CMDLINE_LINUX_DEFAULT="8250.nr_uarts=8" <-- 8 represents the total number of serial ports you have; I added 6, so 8 was correct

commit the change by rebuilding the grub menu with the new arguments in place so it survives reboots:
sudo update-grub

Taken from “Beyond Bits and Bytes” Blog
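
To confirm that all of the UARTs actually registered after the reboot, something like this should do it (a sketch; setserial may need to be installed separately):

# Kernel messages for the registered serial ports
dmesg | grep ttyS
# List the UART type and I/O port for every ttyS device
setserial -g /dev/ttyS*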

Now all the ports work after a reboot. I added labels to the ports on the back of the server, then got VNC to run when the machine starts, which is essential for remote lab time. This will be great for us and our friends who want some lab time for certifications. Smarter friends are always a plus.

As far as cost savings go, this was a very good move if I must say so myself. We get all the ports we need for the lab plus the added functionality of a full-blown server that is over-spec'd for this project. Granted, the Dell 2950 is overkill, but it was going to be used for development purposes anyway. So, $28 for a PCI card versus $180+ for a console port switch...SO WORTH IT. This card can be used in a desktop box as well with no issue, and the extra money saved went toward buying another 3550-24 PWR to complete the lab.

With jumbo frames enabled on the new gig switch, the storage server configured, and the host machines clustered, the dev rack is moving toward stability. Now that we're almost done with testing, it's time to get the cabling of this bad boy a little neater. With this being a dev rack, testing is never truly over and improvements will always be made, but it will look nice doing it from now on. We're getting great numbers from the NFS shares as far as transfer speeds, without any tweaks. Running a quick backup in Proxmox 2.0, I get 87MB/s from the yellow array (RAIDZ 146GB 15K SAS), which is currently degraded (replacement drive en route), to the orange array (RAIDZ2 146GB 10K U320 SCSI w/ gzip compression enabled). So far so good, but the next project is some sort of off-site tape backup for the data. FreeNAS is great with its daily logs and prompt warnings when it finds issues. More to come...

Next plan of action

We have been working with our customers a lot lately, which is good for us. Some require more servers, and we are happy to oblige them, of course.

So far FreeNAS is our distro of choice for storage systems on the Dell PowerEdge 840s. Hot swap is nice if you can get it. A low-end model with a dual-core Xeon, 1GB of RAM, no HDDs, and no hot swap starts at $100 with shipping. This is great for small offices. Then all that is needed is an 8GB flash drive for ~$10 off Newegg and some HDDs of your choice, using ZFS to RAID the drives from there (a sketch of that below). Enabling compression makes the disk space last longer as well, which is a plus for small businesses wanting to get the most from their storage. We are also going to start working with the RD1000 for take-home backups of the most important data on the storage server.
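
For reference, "using ZFS to RAID the drives" with compression on one of these little FreeNAS boxes boils down to roughly this from a shell (a sketch; the pool name and ada device names are made up for the example):

# Build a single RAIDZ vdev out of four data disks (device names are placeholders)
zpool create officepool raidz ada1 ada2 ada3 ada4
# Turn on lightweight compression to stretch the usable space
zfs set compression=lzjb officepool
# Sanity check
zpool status officepool
zfs list officepool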

Next is more monitoring and log shipping for our users' systems. We will offer it as a free service for simple monitoring, like logs emailed out and interface outages on important ports. With Cacti and an SMTP relay this isn't hard to do; we just need the back end built out for it, which is the reason for the dev rack's VM servers.

Our documentation skills have improved as well. We document in detail, and it has helped us out a great deal when troubleshooting. We use Open Atrium with a group for each customer to help us keep the info separate and to allow for customer interaction later if we choose.

So we're hanging in there and making improvements to the company, our services, and ourselves. PEACE, LOVE, and HIGH SPEED DATA TRANSFERS

JUMBO FRAMES

So a new friend of mine explained jumbo frames to me. I had heard about them before, but his explanation was super simple. It goes a little something like this.

Say you've got 6 packets with an MTU of 1500 apiece, which is the default for Fast Ethernet (100Mbps). A jumbo frame is really anything larger than that value; on Gigabit Ethernet (1000Mbps) switches the common maximum is 9000, with some higher-end switches supporting larger sizes. With CPU, TCP/IP, and wire-speed overhead, transferring those 6 packets takes time and resources. Of course, this overhead and processing is minuscule in most environments. Now, with jumbo frames on gigabit interfaces at an MTU of 9000, those 6 packets become just one big packet. You could say that's roughly 1/6 the CPU, TCP/IP, and wire-speed resources it takes to transfer the same amount of information. Real-world calculations weren't done on that, simply because I don't feel like figuring it out, but you get the point. So the hosts are working less while transferring the same payload, and the switch is working less as well because it has fewer packets to push/process.
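
For the curious, a rough back-of-the-envelope version of that 1/6 claim, assuming roughly 58 bytes of Ethernet + IP + TCP framing per packet (an illustrative figure, not a measurement):

# 9000 bytes of payload at MTU 1500 -> 6 packets; at MTU 9000 -> 1 packet
echo "MTU 1500: $((6 * 58)) bytes of header overhead"   # ~348 bytes
echo "MTU 9000: $((1 * 58)) bytes of header overhead"   # ~58 bytes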

To sum it up: you ship 1 big elephant instead of 6 baby elephants (damn, that was pretty good)...but if you're like me you're saying, "So what, will my data transfer speeds be higher?" I soon found that they actually are, my friend.

Background/Environment:

- Cisco 2970G-24T coming in at a whopping $15 per port…oooo so worth it!
- FreeNAS with 2x gig ports, with LACP and jumbo frames enabled on the interface and the switch ports, connected to a RAIDZ2 pool of 14x 146GB 10K U320 SCSI drives, known as "orange". Damn right I color code my arrays!

- Sharing Orange via NFS

- 2x Proxmox 2.0 Beta systems with 2x gig ports, with LACP and jumbo frames enabled on the interface and the switch ports, mounted to orange via NFS.

This took a bit of time to get right, and I will include the confs and links that I used to get it done.

host01: change mtu/confirm mtu/test writing a 4gig file to the NFS share

root@host01:~# ifconfig bond0.3 mtu 1500
root@host01:~# ip route get 172.18.3.9
172.18.3.9 dev vmbr3 src 172.18.3.15
cache mtu 1500 advmss 1460 hoplimit 64
root@host01:~# dd if=/dev/zero of=/mnt/pve/vm-orange/test0001 bs=1G count=4
4+0 records in
4+0 records out
4294967296 bytes (4.3 GB) copied, 50.1927 s, 85.6 MB/s
root@host01:~# ifconfig bond0.3 mtu 9000
root@host01:~# ip route get 172.18.3.9
172.18.3.9 dev vmbr3 src 172.18.3.15
cache mtu 9000 advmss 8960 hoplimit 64
root@host01:~# dd if=/dev/zero of=/mnt/pve/vm-orange/test0001 bs=1G count=4
4+0 records in
4+0 records out
4294967296 bytes (4.3 GB) copied, 38.5115 s, 112 MB/s

host02: change mtu/confirm mtu/test writing a 4gig file to the NFS share

root@host02:~# ifconfig bond0.3 mtu 1500
root@host02:~# ip route get 172.18.3.9
172.18.3.9 dev vmbr3 src 172.18.3.16
cache mtu 1500 advmss 1460 hoplimit 64
root@host02:~# dd if=/dev/zero of=/mnt/pve/vm-orange/test0002 bs=1G count=4
4+0 records in
4+0 records out
4294967296 bytes (4.3 GB) copied, 62.7919 s, 68.4 MB/s
root@host02:~# ifconfig bond0.3 mtu 9000
root@host02:~# ip route get 172.18.3.9
172.18.3.9 dev vmbr3 src 172.18.3.16
cache mtu 9000 advmss 8960 hoplimit 64
root@host02:~# dd if=/dev/zero of=/mnt/pve/vm-orange/test0002 bs=1G count=4
4+0 records in
4+0 records out
4294967296 bytes (4.3 GB) copied, 57.3596 s, 74.9 MB/s

Not sure yet why I get different speeds from the two boxes, but I assume it may have to do with the amount of memory in each system: host01 has 8GB while host02 has 4GB. I will deal with that in the future though $$$.

You can see that after the change we get an increase in transfer speeds, so the benefits are apparent, and with faster arrays, more memory, and a possible jump to 10Gbps interfaces (I FUCKING WISH) we can squeeze even more out of this. Also don't forget about tweaks to NFS and SMB; these can really give you improvements without the expensive upgrades (a sketch of the NFS side below).
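
The kind of NFS client tweak meant here looks something like this (a sketch; the rsize/wsize values are illustrative and the export path is assumed, not taken from the actual FreeNAS config):

# Mount the orange share with larger read/write block sizes and no atime updates
mount -t nfs -o vers=3,rsize=32768,wsize=32768,noatime 172.18.3.9:/mnt/orange /mnt/pve/vm-orange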

Confs and articles:

Cisco 2970 – Enable jumbo frames on all gigabit interfaces. This requires a reload to take effect.
(config)#system mtu jumbo 9000

FreeNAS-8.0.3-RELEASE-p1-x64 (9591) – see the documentation (read it like a bible) and the video (older version but good detail).
Go to Network > Link Aggregation > View Link Aggregations > Edit Interface > in the options field enter "mtu 9000" > save and reboot to make sure it stuck.

Proxmox 2.0 beta – This makes the MTU change survive reboots; the same applies to Debian/Ubuntu.
Edit the /etc/rc.local file with the following format: ifconfig <interface name, physical and/or VLAN> mtu 9000
Ex:
ifconfig eth0 mtu 9000
ifconfig eth1 mtu 9000
ifconfig bond0 mtu 9000
ifconfig bond0.3 mtu 9000

This storage server has been a back-and-forth ordeal, and I have been kicking myself profusely trying to find what the hell has gone awry here. After spending more money :( on another HBA for the SCSI enclosure, I found that the card doesn't work for my purpose, which was a serious downer. I just ordered the PCI-X riser for the 2950 so I can use some of the old cards I have lying around. In the meantime I started getting errors about memory and the CPU on the LCD on the front...WTF!!!! I just got these procs (at a bargain) and one is bad. Sorry to say I got them from two different vendors, so the packaging is mixed up and scattered around my work space. It was confirmed when I ran memtest and the box froze completely after the first pass. I took out one proc, reran the test, and left it going. I will check on it later, but that's crazy.

This might explain the I/O Whoas issue with the storage array, meaning the first RAID card may not have been the problem after all. Hmmmm...DAMMIT!!! Well, I shall move forward and replace the proc after I run a multi-pass memtest to make sure the current proc is still good.

UPDATE:

Well, a little after I wrote the first portion of this, I found that the 8GB of memory I had may have been the issue, but further testing suggested one of the DIMM slots was the problem. So I reseated everything, decided to repurpose the box, and tested again; oddly enough it passed 4 passes of memtest with no issues and nothing on the Dell LCD. Oh well, I don't care to make this a storage server anymore, so it will do as is, and since it passed memtest I guess it's good as a dev box. :)

NEVER!! As long as I keep learning something new I will continually update and change the configuration of this rack. I have found myself asking that question, and finally I understand why it's never finished. I'm always finding new projects, and as a company we are continually tested by our customers.

So this will never be done in the sense that nothing more will be added, but some day it won't be changed ever again...like when I'm dead, for instance :). With this current change I am adding two SunFire X4100 M2 servers, converting our old production VM server to a storage server using FreeNAS w/ ZFS, Adaptec U320 SCSI and 5 series SAS controllers, and a Chenbro SAS expander that will be housed in a Supermicro case later on with a JBOD board. Then I will use the Dell 2950 as a test bed for my new venture into programming/scripting.

I finally stopped fighting it and said FUCK IT, DO IT LIVE! It's time to roll with the big boys. So that box will be my learning platform for OpenSolaris and Ruby shell scripting. I'm hoping to create a search engine like Google's Search Appliance that will index all the information across the various systems we use to run our business, from file shares to web-based information. Should be easy, right? Well, I'm gonna take baby steps and do a few Hello World programs first.

The SunFire boxes are awesome, with iLOM boards that let me do remote power functions and even watch the server boot from the BIOS as if a screen were directly attached to it, OUT OF THE BOX! I know, a lot of exclamation points, but I'm excited. This will be a great learning platform for us, and we will build on it as always.

Time for show and tell

(click on pics for a bit of a description)

I/O Whoas :(

I think I am suffering from bottleneck issues. What I am witnessing is downright dismal performance from our server. I have dealt with ZFS in the past, and these are by far the worst performance numbers I have seen. With that being said, I know it's a configuration issue, and that's why we test these sorts of things in our development rack. The ZFS filesystem and architecture can handle what I'm doing with ease, so the issue here has to be drive placement and RAID config. In my previous post you will see that in the 24-bay SATA enclosure there are drives with white labels. These are part of a RAID 1 array that serves as the OS drive shown in the BIOS. All drives in the enclosure are connected to an HP SAS expander that connects to an Adaptec 5445 in the main chassis.

Now, when performing an rsync or a disk usage (du), I get slow responses and ultimately unresponsiveness from the box. I can't even log in after a while. The most recent issue was that after a reboot there was a service in maintenance mode that halted the startup procedure entirely and dropped to a maintenance shell. I eventually got it going, but this is odd. I do see that one of the drives in the pink array is throwing a S.M.A.R.T. error, but I don't see it failing yet; it will be replaced soon. I believe I am seeing some serious I/O bottlenecks from this system's current config. I ordered the following to add to the mix:

2x Quad core 2.33 SLAEJ processors
8GB ECC FB Dimm PC2-5300
Final 146GB 15k SAS drive for the Yellow array
Finally, a Chenbro 12803 SAS expander (I have been looking for this!)
2x 32GB SSDs (YAY!) for ZIL/L2ARC

I may have to reformat once these parts are added and things are moved around, just for peace of mind. I want to use the SSDs to get some sort of increase in IOPS and test how much they improve performance. So far I'm seeing 9 to 32 MB/s on the 146GB 10K U320 RZ2 array (orange) and 20 to 40 MB/s on the 500GB 7.2K SATA array (pink). I will add the 1.5TB 7.2K SATA drives (yellow label, pink dots) later, once I destroy the array they were in.
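
When the SSDs show up, wiring them in as ZIL and L2ARC should look roughly like this (a sketch; the pool and device names are placeholders for whatever the color-coded pools end up being called):

# Add one SSD as a separate intent log (ZIL) and the other as an L2ARC read cache
zpool add yellow log c2t0d0
zpool add yellow cache c2t1d0
# Confirm the new log and cache vdevs show up
zpool status yellow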

I know that ZFS can perform better than this, and I will make it better once I figure out the best configuration or the reason for the poor performance. For shits and giggles I may just put the OS on its own set of SSDs and velcro them to the case (don't judge me!) so I can utilize the internal onboard SATA connectors, thus ensuring that the OS partition has its own personal bandwidth highway, I guess.

All in all the experience is great, since I am trying to learn OpenSolaris more. This is exactly why we have a dev rack to test this on. I love it!

Email Migration – Done

Billing System – Done

Now on to the next project: Rack Consolidation. We have a list of projects that we want to get done, and this is more of a spring cleaning for our development rack. The goal is to...well, consolidate and virtualize some of our servers into one system. We chose OpenSolaris, and we are still learning it, but it offers us the capability to run storage and virtualization in one box. Of course there is the almighty ZFS offering, but the virtualization aspect is new for us on this platform and will just add to our knowledge. Sadly the OpenSolaris project is a bit dead, but thankfully it has forked into OpenIndiana, which is based on illumos. We mainly like it for the storage aspect, though.

Storage arrays will be color coded to keep track of all the disks, both by speed and by which array they are a part of.

Yellow = 146GB 15K SAS (High Speed VM Storage)
Orange = 146GB 10K U320 SCSI (Moderate Speed Unimportant Storage/Secondary Data Backup/Moderately Compressed)
Pink = 500GB/1.5TB 7.2K SATA (Moderate Speed Important Storage/Primary Data Backup)
Green = 1.5TB 5.4K SATA (Archive Storage/Tertiary Data Backup/Highly Compressed)
White = 160GB 7.2K SATA (OS)

Yellow is for VMs that require high speed writes/reads.
Pink will hold most of our software repository, like OS images, some VMs, and primary backups of VMs from the Yellow array.
Orange will hold the everyday unimportant/light-use data like music, documents, temporary customer backups, and whatever else, and will be compressed to conserve space.
Green will be archive storage for the other arrays' data and will be highly compressed, making it a good place for old stuff we keep around just to have, like older copies of Ubuntu or our documents and what have you (compression settings are sketched after this list).
White is the main root pool that will not store anything but the OS and its necessary files.
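
Assuming the pools end up literally named after their colors, the compression plan above would translate to something like this (a sketch, not the final settings):

# No compression on the high-speed VM pool, moderate on orange, heavy on the archive pool
zfs set compression=off yellow
zfs set compression=lzjb orange
zfs set compression=gzip-9 green
# Check what each pool ended up with
zfs get compression yellow orange pink green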

While designing and planning this, we will work to get our phone system up and running again, as we feel it is time to begin using it again.