Archive for March, 2011

We are in the planning stages of rolling out our own line of custom-built systems. Developing this computer-building division will help us gain a few more clients and offer a one-stop shop for customers. Affordable systems for home users and business…blah blah blah.

OK, now let's get to the cool stuff. My business partner presented the idea of putting our own logo on the BIOS splash screen. A novel idea, I said. Then I thought of how I have always wanted a cool utility partition for diagnostic work or disaster recovery of files. You know, instead of breaking out the Ubuntu CD/USB toolkit for Memtest86+ or downloading programs like chntpw and ddrescue, only to have it all disappear at reboot. Which sucks, especially if you weren't finished with the process. Most power users dual boot to get the best of both worlds: speed, open source, and of course compatibility.

Basically the idea is to install Ubuntu Desktop on another partition of 13 to 15 GB, depending on the Windows partition's image compression. Yeah, I know that sounds like a lot, but here is what will be on it. A boot screen with our logo that gives you the choice of Windows, the MJNS Utility Partition (Ubuntu Desktop 10.10), or Memtest86+. Memtest alone is great to have because I have found memory to be a major issue in a few customer systems. On the utility partition we will have the slew of tools listed below. The most useful among them are partimage, gddrescue, photorec, ClamAV, TeamViewer, and chntpw. Then add the fact that the partition is a full OS with a browser, office suite, and more.
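
Here is roughly what the branded boot menu could look like on GRUB 2, which is what Ubuntu 10.10 ships with. This is only a sketch: the partition numbers, kernel version, and paths are placeholders, not our final layout.

    # /etc/grub.d/40_custom -- example entries (partition numbers and kernel
    # version are placeholders; adjust to the machine's real layout)
    menuentry "Microsoft Windows 7" {
        insmod part_msdos
        insmod ntfs
        set root=(hd0,1)            # Windows partition
        chainloader +1
    }
    menuentry "MJNS Utility Partition (Ubuntu Desktop 10.10)" {
        set root=(hd0,3)            # the utility partition
        linux /boot/vmlinuz-2.6.35-22-generic root=/dev/sda3 ro quiet splash
        initrd /boot/initrd.img-2.6.35-22-generic
    }
    menuentry "Memtest86+" {
        set root=(hd0,3)
        linux16 /boot/memtest86+.bin
    }

Running update-grub afterwards regenerates the menu; the logo itself would come from GRUB's background/theme settings, which differ a bit between versions, so I won't pretend to quote those here.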

We, along with most other IT shops, use Ubuntu to diagnose issues and recover data, so this seems like a no-brainer. Simply one of those things you think of and then say, "WHY DIDN'T I THINK OF THIS SOONER!" We plan to roll this out not only on custom-built systems, but on any computer we reformat. Being able to restore the Windows partition on the fly makes the turnaround time for reformats shorter than the usual 2 to 3 days, a time frame that is sometimes hard to meet depending on our load. I mean jeeeezzzzz, it just makes sense.

Tools:

  • Partimage (image/restore a partition for quick reformats; see the sketch after this list)
  • TeamViewer (remote-controlled disaster recovery)
  • ClamAV with the ClamTk GUI (others can be added of course)
  • chntpw (OMG I forgot my password….Again!)
  • NTFS-3G/ntfsprogs (with ntfsundelete)
  • Recovery programs (photorec, foremost, gddrescue)
  • GParted
  • Nmap/Whois
  • Bonnie++ (disk benchmarking)
  • SSH server
  • Samba (for quick shares and drag/drop of files)
  • FileZilla (getting files from our support server)
  • Google Chrome ('cause it's cool)
  • Customized welcome music, possibly a video, and our site as the home page…..BRANDING!
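
As a rough idea of the quick-reformat workflow, here is how the partimage side could look when run from the utility partition. The device names and image path are examples only, and the exact flags can vary a bit by version.

    # Save a gzip-compressed image of the Windows partition (sda1 is an example);
    # partimage normally appends a .000 volume suffix to the file it writes
    partimage -z1 -d save /dev/sda1 /images/win7-base.partimg.gz

    # Restore it later for an on-the-fly "reformat" (partition must be unmounted)
    partimage -b restore /dev/sda1 /images/win7-base.partimg.gz.000

The image lives on the utility partition (or an external drive), which is where most of that 13 to 15 GB goes.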

Most of these recovery/boot/utility partitions are locked down, with little to nothing in the way of functionality; this will be far more robust. Testing begins soon. I spoke to a friend about this and he mentioned that HP acquired a small open-source distro and plans to implement something like this in their new systems. I may be bugged or something, but it's all good when it's open source.

While we were writing about 2 TB of data from a customer's external drive, an internal drive failed but came back to life after a restart. Most likely it's a case of an I/O timeout or a failing drive. No problem! Because the pool is configured as RAIDZ2 (ZFS's equivalent of RAID 6), quite a bit of data was still written to the array while it was degraded, at speeds close to normal operation. We informed the customer of the drive failure, then assured them no data was lost. We rebooted to make sure everything came up, and guess who decided to join the party again. The drive came up and ZFS began the resilvering (resyncing/rebuilding) process. It was estimated to take 10 hours, but to my glee and surprise it finished in 4 hours 13 minutes with 251 GB of data resilvered. We will still order a new drive on Monday to be sure, but for now we will begin a scrub on the array to make sure no data corruption occurred. The zpool scrub should take a few hours as well, but we can wait. It's better than having a corrupted set of data.
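
For anyone following along, the whole episode is driven by a couple of commands; "tank" below is just a stand-in for the real pool name.

    # Check pool health and watch the resilver progress/estimate
    zpool status -v tank

    # Kick off a full checksum verification pass once the resilver completes
    zpool scrub tank

    # The scrub's progress shows up in the same place
    zpool status tank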

Whoa, that's slow. Now, that's not to say deduplication isn't worth doing, it's just probably not the best fit for the hardware in use. I still believe in it because of its potential use in a long-term data retention scheme. A picture is below. What you will see is slow writes caused by the deduplication process's constant reads of the hash table for each block of data, making writes to the array a measly 1 to 1.5 MB/s. Even with compression lowered to lzjb or a lower level of gzip we still saw speeds around 1.6 MB/s. Soooooo, after waiting over 10 hours to write nearly 600 MB, we decided to turn deduplication off, and what do yah know, we're back to a more respectable 24 to 28 MB/s. We first suspected the gzip-9 compression, but we felt like, "NAH! You crazy. Dedupe wouldn't do that." This is from an rsync from a Mac Pro's RAID 0 array to an Ubuntu box with ZFS-FUSE installed. We are a lot happier with this for now because we are doing the initial transfer to the box from the customer's internal storage. The gzip-9 compression does a great job on most data, and the write speed is needed since the customer wants to edit from the box in the future. More to come if there is anything else to report.
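
The change itself is a one-liner per property. Roughly what we ran, with "tank" again standing in for the real pool name; note these properties only affect newly written data.

    # Stop deduplicating new writes (existing dedup'd blocks stay as they are)
    zfs set dedup=off tank

    # Keep the heavy compression, or fall back to the cheaper default
    zfs set compression=gzip-9 tank
    #zfs set compression=lzjb tank

    # See how much the compression is actually buying
    zfs get compressratio tank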

 

dedup enabled = slow | dedup disabled = faster (gzip-9 enabled on a RAIDZ2 pool)

Parts of the server

A little bit of history and thoughts for improvements:

Since we had limited funds from the customer for this build, we could have had better performance using higher-end parts like a motherboard that takes more RAM, dual sockets, and better NICs. To improve the speed of this build as-is, we could add more memory and see a large performance boost by putting SSDs in as log/cache devices for the storage pool. The new SATA 6 Gb/s drives are out too; we had 3 Gb/s drives spinning at 5900 RPM. So "spend more, get more" would probably be words to keep in mind for a faster production box.

The CPU, to my surprise, hasn't been greatly taxed yet, which is awesome. It can be upgraded to a quad-core or something, but it really isn't needed yet. The real bottleneck here is the pool's lack of dedicated log/cache devices; putting those on SSDs would surely speed things up a great deal. Faster drives can be attached later with a DAS unit via an HBA or a RAID card that can connect to SAS expanders. "Spend more, get more."
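
If and when the SSDs show up, bolting them onto the existing pool is straightforward. A sketch with placeholder device and pool names:

    # Mirrored SSDs as a dedicated intent log (ZIL) to help synchronous writes
    zpool add tank log mirror /dev/sdg /dev/sdh

    # A single SSD as an L2ARC read cache
    zpool add tank cache /dev/sdi

    # Confirm the new log and cache vdevs are attached
    zpool status tank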

With our latest network revision we are focusing more on network security and lowering the number of unprotected, unsecured open ports. Some ports you can't help but open, like FTP and HTTP, which are inherently internet facing. There are still things that can be done to tweak the underlying daemon to be more secure, and there are modules and encrypted versions of the protocols that use certificates and strong algorithms to protect the data being transferred between hosts. For instance, HTTPS uses certificates to authenticate the server and set up an encrypted session between client browsers and web servers, protecting requests to banking sites, web-based email, and other web-based resources.

We have grown too accustomed to opening ports for various services like camera servers, A/C controllers, and HTTP services on odd ports. Our goal is to have fewer ports open for internet-facing services. Attackers can still ride an allowed incoming stream to exploit any vulnerabilities on the other side, so for now we are working on edge security by implementing tools like intrusion detection and prevention systems (IDS/IPS) such as Snort. Snort can be run as an add-on package on routers like pfSense (which is what we use), installed as a daemon on a server that has been turned into a router, or on a separate box watching a mirrored port so it monitors traffic transparently.
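
For the standalone-sensor case, getting Snort watching a mirrored port comes down to something like the following; the interface name and paths are examples, and on pfSense all of this is handled through the package GUI instead.

    # Run Snort as an IDS daemon on the interface plugged into the mirror/SPAN port
    snort -c /etc/snort/snort.conf -i eth1 -A fast -D -l /var/log/snort

    # Keep an eye on the alerts it generates
    tail -f /var/log/snort/alert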

So the first step to minimizing our attack surface is to use a VPN tunnel to control/manage servers behind the firewall instead of opening a port. Even remote management of the firewall itself is a security risk; for example, DD-WRT's web-based management tool was attacked when it was accessible from the internet. A workaround is to use a VPN connection, or a system behind the firewall, to manage the firewall. Remote-control protocols are always targeted too, especially RDP on the well-known port 3389. We chose IPsec for our VPN this time around instead of OpenVPN or PPTP (which only works for a while, if at all). We didn't feel like messing with CAs and keys this time, though we may implement them later if we choose to. We are still researching vulnerabilities in IPsec as configured, along with best practices for this setup, on our development network. Yes, we test things before deploying them in the wild. It gives us time to learn and figure out the best method for implementing services like VPNs or network topologies.

So far I like it. Using the Shrew Soft IPsec client and pfSense, I get to the web management GUI, and it works with my redundant firewalls; we just have to reconnect when one goes down. We will test the ability to reach other devices later on, when we add more boxes to the network. We like the speed, even over wireless, from BellSouth to Comcast. We will also test data transmission speeds once more boxes are added, since loading a basic webpage isn't a great test of connection speed.

So to summarize: no more Swiss-cheese firewalls, all open holes are monitored and flagged if suspicious traffic is discovered, and management is done over encrypted tunnels instead of internet-facing management GUIs that may be vulnerable. We will be doing more of this in future deployments of any network we roll out.

OOO, you thought this was a game. LOL, nah buddy. We recently attended a talk at SFISSA (South Florida Information Systems Security Association), and we definitely plan to join them again this month. At the last talk we expressed interest in donating some old equipment to the cause, and we were told the fine guys at Hack Miami might be interested. We got in contact with James C., aka Hat Trick, and linked up. James works for <can't say>, where he tests web page vulnerabilities to protect against hackers like himself getting at important customer data. We donated an HP ML350 and a Cisco 3500 24-port switch to the cause, talked for a bit, and showed him around the home office and the development network we are working on. A very cool and interesting dude; you can tell he has been through some networks. He has done work on hacking into terrorist networks, which actually seem to be very vulnerable from his perspective. Did I mention he was a super cool dude? We went into it quite a bit, and we will be doing more with Hack Miami and other organizations. He will list us as sponsors for the group, so that's always a good look for us.

Once the latest revision of our network is done we will definitely ask these guys to try and hack us. Why, you ask? It is an important and necessary proactive step to ensuring that the networks we build and design are safe to use. There is no point these days in sitting back and waiting to be hacked, as if "no one will ever find us over here" were a reasonable mantra for designing a lackluster security scheme for an internet-facing server or network. Even internal networks can be hacked to bits by John in accounting who just got laid off. Security is important, and without it in mind you may end up on the bad end of identity theft. Sounds fun, doesn't it?

Ask your web design guy about %27

We got a little insight into web programming and other network-security-centric ideas, along with some hilarious photos. We hope to be more involved with local computer organizations. It helps us with exposure among our peers, and we get a lot of info on network security, new technologies, and best practices for enterprise organizations, along with cool stuff other NERDS are doing.

Ummm, there isn't any. What I mean is that most businesses don't care; they feel the expense for such security is too high, and to the age-old question "who would attack me?" the answer these days is just about anyone. With the recession we seem to be in, crime is at an all-time high, both in the street and on the Internet. There is a myriad of methods to collect or steal data, from constant phishing emails telling you to renew or update account info, all the way to site hijackers posing as online banking institutions that harvest your login info and drain you dry. It has only become easier with the explosion of wireless networks being put up without proper protection, or without updating the systems sharing the network. Inferior wireless security and encryption methods are still used to block access. Little do most businesses and wireless users know that there are people out there who look for these vulnerabilities and exploit them, harvesting the ill-gotten gains (Magic: The Gathering reference for the nerds).

Say, for instance, this was a financial institution or even a store. These places store customers' financial information, whether it's in QuickBooks or through a credit card terminal. So if one were to break through that and take a look at the computers on the network, I'm sure you would find systems that haven't been updated with even the latest service pack. There are sites where vulnerabilities are cataloged, categorized, and archived so they are easily searched, which helps a hacker exploit your system(s). Cracked versions of software are no different. Sureeee, it's free, and even though it can't be updated it's cool, right? NOPE! Some of these warez have built-in vulnerabilities and hacks that go unnoticed by antivirus/antimalware programs. One document type known for its numerous exploits is the PDF format. Used in the past to jailbreak iPhones and penetrate system security, it, along with other popular formats like doc/docx (Word) and xls/xlsx (Excel), has been a transporter of nasty malware/virus attacks. They install, infect, and repeat on any other system on your network.

Some would say, man, you're just being paranoid, but when your system gets infected, or when financial data that is critical to keeping your company going could potentially be compromised, you tend to protect it at any cost.

The reason for this post is that I was recently getting my vehicle repaired at the dealer. While looking to connect to their wireless, I noticed their Wi-Fi encryption was WEP, which is known to be cracked in no time and is not allowed on networks with credit card terminals by some merchant services vendors. I informed them that their network was easily hackable. They kinda shrugged it off as if I were some sort of salesman. Which I guess I was, in a way, because I told them I could fix it :D The lady who I think ran the place declined, even when I briefly explained what could happen. I walked away and told my partner the story. He said, "Wow, and she didn't care?" I replied, "No." "Well, I guess she doesn't want to keep her money then."….."YEP!"

For example, if I were to hack into this dealer's network and access one of their terminals/PCs, letting me make, say, MINOR changes to my payments, we could make a Jaguar XJ or BMW M3 a biiiiiit more affordable. Now, a good hacker doesn't do this drastically. No, a good hacker takes their time, makes payments on time, and stays under the radar, finding the best way to remain unnoticed. Leave some fruit on the tree for the winter, in a sense. A friend and I spoke about this and came up with this plan.

THE PLAN – CODE NAME: NICE AND EASY

Make payments for a year. Don't be late or cause any discrepancy. Then go back and change their payment logs, altering them to show that you have paid in full within that one-year term. Yeah, that's it, niceeee and easy. They got something right.

This is how some hackers squat in the bushes and work until they are found out, like infecting a heavily visited government site or blog with a Java exploit. I would personally feel bad doing something like this; the person who actually penetrates their network, on the other hand…..well, I'm sure they won't. Security is hard, and computer/network security is even harder. Exploits are patched and then exploited again. There are many flaws in computer systems, and they are all brought to light sooner or later. I'm just trying to find better ways to protect my little piece of the CLOUD.

 

Some good reading/watching

Old PDF Exploit

WEP Cracking Step-by-Step

WEP Hacking VIDEO!

 

 

 

The Evolution of A Network

Just as we evolve and our needs and wants grow, so should the things around us. Have you ever seen a forest with only one color of plant, or one type of plant for that matter? This is one of many examples of a growing network. As the forest grows, it goes through changes and change-outs: old leaves are shed and new foliage begins to grow, assimilating with its surroundings to take advantage of the resources provided. The roots act as the cables that transfer important nutrients, or information as we say, and the leaves store the data sent by the sunlight to help power growth and evolution. It grows stronger with each passing moment. So the forest is also a network that undergoes changes and upgrades, and augments itself based on its surroundings.

We follow this same principle when it comes to our own network. When we learn new things, we love to add them to our existing setup, or build new networks around them if need be. We are moving towards data storage here at MJ Network Solutions, and we need to get familiar with what enterprise-level customers require. They all need storage that is fast, reliable, and scalable, and we have been studying and researching ways to offer it. We have begun to make purchases toward this goal. In a previous post we mentioned the Cisco MDS 9216A SAN fabric switch; well, we got one. We are waiting on a few more parts, but in a few weeks our SAN will be operational and we cannot wait to see the numbers from the test results.

This is a small piece of the 4th revision of our humble test/development/production network. We do this to make answering our customers' requests effortless on our part. We have been through times where we gave a solution only to find there was a better, more affordable, and easier way to do something. Nothing is worse than not having the answer, so we are proactive in our approach in certain areas. Learning new things, albeit with old equipment, does nothing but help us in the long run.

Here is to growth in both the forest and your own piece of the CLOUD!

SAN Fabric Switch

Network storage upgrade never looked so good

Update when it asks

When Java, Flash, or Adobe Reader prompts you for an update, you should do it. Recently a customer needed malware removed; even with the latest OS and up-to-date antivirus, the system was still infected because of an out-of-date Java plugin. It was minor and thankfully easy to remedy, but it could have been avoided by updating the software on a regular basis, or at least when it prompts you to. Out-of-date software creates vulnerabilities for malware and widespread infections. Sooo yeah, update when it says so, OK? If you're unsure, then post a question here or on our Facebook page. An opening like this can destroy a network over time, sometimes in no time at all. We run through system updates on a regular basis to make sure we are in good standing, but even with our best efforts we occasionally get caught out too.
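
On the Ubuntu side of our own shop, "run through system updates on a regular basis" boils down to something like this; a generic example, not a prescription for any particular customer box.

    # Refresh the package lists and pull in pending security and application updates
    sudo apt-get update && sudo apt-get upgrade

    # Quick check of which Java runtime is actually installed
    java -version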

Heed its warning

 

 

So I have good news and I have bad news. The good news is that my feeling about ZFS thus far is simply… OMFG, this freaking filesystem rocks. The bad news is that my Openfiler storage box is about to get the freaking axe. A couple of months ago my Openfiler box had a failing drive that caused "silent data corruption", meaning the drive was seen as good but had bad sectors. Those sectors were striped across the array, which caused crazy data corruption throughout my LVM2 set, built from two RAID 6 arrays, one with 500 GB drives and the other with 1.5 TB drives. Of course Openfiler had nothing to do with this, and my Adaptec RAID card could have handled it better, for instance by marking the drive as failed. Openfiler is still great for easy iSCSI, FTP, and SMB, though, and still has a place in my heart.

Anyway, the box used for all of this was built from off-the-shelf parts from good ol' newegg.com; see the attached cart. Disregard the RAM, since I ordered the wrong type, which is OK because CompUSA had it. After the build we added more fans for the case and a pair of 80 GB drives to act as the boot drives in a software RAID 1 configuration. More pics of the build to come. The server OS is Ubuntu 10.10 64-bit Server. No special configs except for NIC bonding, which is in mode 6 for fault tolerance and load balancing to the network.
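
For reference, mode 6 is balance-alb. On Ubuntu 10.10 the bonding setup is a few lines in /etc/network/interfaces once the ifenslave package is installed; the interface names and addressing below are examples, not our actual network.

    # /etc/network/interfaces -- bond eth0 and eth1 into bond0 (mode 6 / balance-alb)
    auto bond0
    iface bond0 inet static
        address 192.168.1.50
        netmask 255.255.255.0
        gateway 192.168.1.1
        bond-mode balance-alb
        bond-miimon 100
        bond-slaves eth0 eth1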

The zfs pool is configured as raidz2 with deduplication and compression enabled across five 2 TB drives, giving me 5.4 TB of usable space. After running the commands and some tests, "zpool list" now shows an effective space of 9.06 TB. WTF!! I couldn't believe it. When writing sequential data of all zeros, the writes get faster thanks to the dedup'n going on, starting at 98 MB/s and increasing, without even taking up extra space. When writing files from /dev/urandom we get a miserable 5 to 6 MB/s, but with compression we still see some resource savings. Network transfers are a steady 10 MB/s with both the all-zero and the random files. If these numbers scale linearly, you would see a consistent savings of 23 to 40 percent, depending on the data written of course. That is awesome all the same, because even at the low end of that scale we see the benefit of this configuration for archive storage, saving us nearly 1.35 TB of space. Though I'm personally horrible at math, so do the calculations yourself. I'm still a bit worried about how well this will do with other data like ISO files, music, and document files, but there is more work to be done with some real data. Pictures below.
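
Roughly how the pool was built and how we've been poking at it; the pool name, device names, and file sizes are placeholders.

    # Create a raidz2 pool across five drives, then enable dedup and compression
    zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
    zfs set dedup=on tank
    zfs set compression=on tank

    # Best case for dedup/compression: sequential zeros
    dd if=/dev/zero of=/tank/zeros.bin bs=1M count=4096

    # Worst case: incompressible random data
    dd if=/dev/urandom of=/tank/random.bin bs=1M count=1024

    # Raw vs. effective capacity, dedup ratio, and compression ratio
    zpool list tank
    zfs get compressratio tank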

The CPU hit was minimal, and it sure did eat up the memory for caching, which was expected. The minimal CPU usage is a relief, as that was one thing we worried about: utilization was no more than 30% for the zfs process; in fact, dd took up 99% of one core. We'll probably do some tuning of the process's "nice" value to give it more attention and relieve any bottlenecks in resource allocation to the ZFS process.
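
Since this build runs ZFS-FUSE, that tuning would just mean bumping the daemon's scheduling priority, something along these lines (the zfs-fuse process name matches the daemon on our box; the priority value is arbitrary).

    # Give the ZFS-FUSE daemon a higher scheduling priority (lower nice value)
    sudo renice -n -5 -p $(pgrep zfs-fuse)

    # Confirm the new priority
    ps -o pid,ni,comm -C zfs-fuse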

zfs-cart-selection

I'm still a bit confused, or better yet, not fully educated on ZFS and how it does things, but these numbers are promising to say the least for the current purpose of archive storage.

Drive failure has been tested, and it's astounding how fast ZFS recovers. We unplugged a drive's data cable, and at first ZFS didn't even notice for a while, until we ran the "zpool scrub" command, which does checksum consistency checks on the array and repairs any corruption it finds. Once it realized the drive was gone, we reconnected it and rebooted, since plugging it back in changed the device mapping to a different letter (/dev/sdg to /dev/sdh). After the reboot, a simple "zpool scrub" resilvered (resynced) the array. If we had wanted to replace the drive, a "zpool replace <pool name> <old drive> <new drive>" command would have done it. Since no new data had been written to the array, the resilvering process took less than 2 minutes for a pool made up of 2 TB drives….If we are lying, I'm dying! That's huge, because resyncing such large drives in an array would take hours if not days for software RAID 6, and a few hours to a day for a RAID card. Trust me, we have seen syncing and initialization of 1.5 TB drives take so long and be so stressful that a drive would fail before even being put into production.
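
Pulling the commands from that drill together in one place; the pool and device names are examples.

    # See which device dropped out and the overall pool state
    zpool status -v tank

    # Re-check the data and resilver once the original drive is back
    zpool scrub tank

    # Or, if the drive is really dead, swap in a replacement
    zpool replace tank /dev/sdg /dev/sdh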

Yes, we know it was a bit long-winded, but this is amazing after dealing with software/hardware RAID combined with LVM for large storage pools. ZFS is fast and offers a lot of cool features. A truly advanced file system.