I’m a bit of a “gadget geek” – I’ve been working with and on computers since 1982, when I got my first Atari 1200XL. I was hooked almost immediately. I spent hours typing in code listings and hours more waiting for them to save ( and load ) from cassette tapes.
Nowadays ( 2011 ), I’m a Windows developer turned database administrator. I’ve got an MSDN subscription as well as my still very intact “love of computers.” I have a pile of “occasionally” working systems ( PCs and Macs ) from friends, family and work “discards” that end up scattered throughout the office with bits and pieces of code, virtual machines, development configurations and of course – lots ‘o Linux installations all over the place.
Being a Linux geek, I started thinking about centralized storage and figured – hey, why not build a Linux server system with network shares for all this data? Then I could take images of these old systems, create virtual machines from them and reclaim some closet space!
That’s when I found out about Drobo – and soon after that, FreeNAS. I think it was about version 0.65 or so when I first read about FreeNAS. I tried installing it on my Linux server, but not all of my hardware was FreeBSD compatible. So I started saving some $, learning more about NAS technology, looking at my own needs and reading up on FreeBSD-compatible hardware. Before I knew it, 5 years had passed and I had saved a chunk of change toward this project. It was time to implement it.
In October, I started reading everything I could find and watching videos on how easy a FreeNAS install had become. I also found out it ran off of a USB stick, that the interface had been redone / simplified, and that it now supported ZFS as well as dual-disk redundancy! ( Hey, Drobo does that too – but this does it for free. ) I built up a VM in Hyper-V and installed it – it ran great and configuration was a snap!
So before diving in, I made up some goals – chief among them, keeping this thing as “green” ( low-power ) as possible.
Misconceptions – we all have them, and with this project I definitely did! But I figured it was better to accept them and document them rather than build up my FreeNAS server in denial.
My first misconception was – “I’ll build a case that can hold 10 to 15 drives, but I’ll start with 4 or 5 drives and expand as I can!” Well, thanks to the FreeNAS forum, I learned that while I could do that, each “expansion” would need to be a new RAID set ( a new vdev, in ZFS terms ) – you can’t just grow an existing set one drive at a time. That was something I didn’t realize right away.
My second misconception was – “A drive is a drive – as long as it’s SATA, I’m all set.” Let me just say that while you can find some long “which drive is best?” arguments on the FreeNAS forums, you’ll want to take the time to read some of them anyway. There are SATA and SATA II drives, which transfer data at different rates, and one thing I found for this project is that you want consistency across the drives in a set.
My third misconception was – “Now I have to find an affordable RAID controller.” That would have been a miracle in itself. The good RAID controllers with battery backups run 500$ ( US ) and up! Holy smokes! Again, I turned to the FreeNAS forums and read what I could find on this – lo and behold, you don’t really NEED a RAID controller. In fact, as software RAID systems go, ZFS performs very well without one, and in some cases you can’t even use a hardware RAID controller with it.
While I did end up using a “RAID” controller – I used it in JBOD mode just to give me access to up to 16 drives on one controller card.
I said a few times above that I “read and read” in the FreeNAS forums. I did just that – there was so much useful information there that I did not post a single question / comment to the forums until I was done with the project. Take the time and read – if you don’t understand something, post and ask. The FreeNAS forums are loaded with helpful and knowledgeable people. It just so happened that someone else had already asked each of the questions I would have asked anyway – so it saved me a few “steps”.
This was the hardest part for me. I started looking at hotswap RAID cases – 99% of which seem to be for rack-mount server systems. While that is pretty cool, this is again for my house, not a server room, so I wanted something a little more “PC” and a little less “server room”.
Here’s the list – I’ve included links so you can check out the parts if you’d like. Most of this came from Newegg.com – some was on sale. Some was just normal pricing:
Parts list -
NOTE: Many people in the FreeNAS forums say that you should stay away from the “Green” WD hard drives – they run at odd / variable spindle speeds. But for my project it was about SPACE, not speed, so I knew going in that it might be slower and accepted that.
1. Server Motherboard - 249.99
2. Case w/9 exposed drive bays - 59.99
3. (2) 4gb kits @ 59.99 ea. - 119.98
4. Dual-core Pentium CPU - 77.99
5. Ultra Quiet CPU FAN - 56.99
6. 650w Power Supply - 65.00 ( guess )
7. (3) Hot-Swap Cage @ 109.99 ea - 329.97
8. (8) 3TB HDs @ 124.99 ea - 999.92
9. (7) 2TB HDs @ 69.99 ea - 489.93
10. RocketRAID 2240 Controller - 249.99
11. (4) Infiniband to (4) SATA cables @ 25.00 ea - 100.00
Raw storage capacity: ( 8 x 3TB) + ( 7 x 2TB ) = 38 TB
That seems a bit much, but when you consider the following option, it’s not looking so bad!
1. Drobo 1200i ( empty? ) - 2199.00 ( guess )
2. (8) 3TB HDs @ 124.99 ea - 999.92
3. (4) 2TB HDs @ 69.99 ea - 279.96
The guess price is based on a review (http://news.cnet.com/8301-30685_3-20030860-264.html?tag=mncol;1n) which listed the 8-bay version “starting” at that price. They didn’t list a price for an empty 12-bay version, so I’m being generous.
Raw storage capacity: ( 8 x 3TB) + ( 4 x 2TB ) = 29.1 TB*
*This number is according to the capacity calculator on the Drobo website.
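If you want to put hard numbers on the comparison, here’s a quick back-of-the-envelope script using the prices from the two lists above ( the Drobo chassis price is the same guess as in the text ):

```python
# Rough cost / raw-capacity comparison of the two options, using the
# prices listed above. The Drobo chassis price is a guess, as noted.
freenas_parts = {
    "motherboard": 249.99, "case": 59.99, "ram": 119.98, "cpu": 77.99,
    "cpu_fan": 56.99, "psu": 65.00, "cages": 329.97, "3tb_x8": 999.92,
    "2tb_x7": 489.93, "raid_card": 249.99, "cables": 100.00,
}
drobo_parts = {"chassis": 2199.00, "3tb_x8": 999.92, "2tb_x4": 279.96}

freenas_total = round(sum(freenas_parts.values()), 2)
drobo_total = round(sum(drobo_parts.values()), 2)

freenas_raw_tb = 8 * 3 + 7 * 2   # 38 TB raw
drobo_raw_tb = 8 * 3 + 4 * 2     # 32 TB raw

print(f"FreeNAS build: ${freenas_total} for {freenas_raw_tb} TB raw")
print(f"Drobo option:  ${drobo_total} for {drobo_raw_tb} TB raw")
```

That works out to roughly $2,800 for the build versus about $3,479 for the Drobo route – with more raw capacity on the build side.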
Many online orders later – and I actually saved a little, because I had some “old” 2TB SATA drives lying around that were the same “green” model. Here’s everything except the few 2TB drives that I did have to order.
So I pulled all 15 of the drive sleds out and mounted the first (8) 3TB drives in them and set all of those aside. I mounted the remaining (7) 2TB drives and set those aside as well.
Next – it was on to the case.
This case was chosen specifically because it has 9 exposed 5.25” bays. But like many PC cases, there were metal tabs in the front and back of each bay to “support” the drive in that bay. So I had to use needle nose pliers to bend down the tabs for 2 of the bays, then skip a bay and repeat. Once all 6 of the required bays were “modified”, the drive cages fit in nicely.
This particular case has those yellow buttons on the drive bays. Once the cage is in position, you can “press and lock” the button. This extends “prongs” in place of the drive mounting screws to hold the cage in place. One thing to note – while you can’t tell in this photo, you will need both sides of the case off to install the cages, so you can lock the pins on the other side.
The “pins” left a little “give” in the drive cages, which I didn’t really like ( vibration! ) – but I let that go for now and moved on.
Next up was the motherboard. I picked this one specifically because it has (2) 133 MHz PCI-X slots. It actually has a total of 4, but the others run at 100 MHz. I needed these for the RAID controller.
Research paid off here – I initially found a board I liked a LOT better: cheaper, and it LOOKED right. But taking the time to print out the specs and make sure everything was compatible with each other and with FreeBSD saved me money, time and heartache in the long run.
Above, the 8gb of RAM and the dual-core Pentium are installed. This board did have an odd processor socket – it was offset at a 45 degree angle. I had some concerns about the fan covering the entire top of the processor, but some Googling of the various model numbers together showed that people who had used this same processor / motherboard / fan combination had it running just fine without any heat issues.
Lastly, I popped the fan onto the MB and installed the whole mess in the case. I also put the power supply into the case ( not connected ) at this point.
The fan that I used was HEAVY for a fan – it has a LOT of heat-dissipating surface on it – but that means weight – I almost dropped the motherboard putting it in place because the added weight made it a bit awkward to handle.
I want to mention the power supply here. I found that this was somewhat important during my research. This power supply has (4) independent 12v “rails”. What that means is that there are groups of power cables and each group has its own power – it’s not shared. From what I found online, this helps evenly distribute the power load during times like “peak” usage, powering up, etc.
I split those “rails” like this:
(1) Motherboard connector ( this DOES count as 1 rail – keep that in mind if you’re looking for more ).
(2) Raid cage 1
(3) Raid cage 2
(4) Raid cage 3
You may recall from my goals that anything over 400w was going to be too much for me – because I want this thing as “green” as possible. Well, I had to go up to 650w to find a power supply that met my needs. Live, learn and compromise.
So the camera flash did not do me a great service here. The cages all have power, the motherboard has power and the RAID controller is in place. I’ve also got the cages’ SATA connectors in place.
By the way, this RAID card is only 3Gb/s. The cages will support 6Gb/s, as will the motherboard, but the price difference for RAID cards was large enough that I opted again for a little less speed in favor of more storage capacity. It’s a balancing act, right?
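As a sanity check on that trade-off: a SATA link uses 8b/10b encoding, so 10 line bits carry 8 data bits, and a 3 Gb/s link tops out around 300 MB/s of payload per port – more than a single spinning “green” drive can sustain anyway. A quick sketch of the math:

```python
# Effective per-port payload bandwidth of a SATA link after 8b/10b encoding.
def sata_payload_mb_s(line_rate_gbps: float) -> float:
    bits_per_s = line_rate_gbps * 1e9
    payload_bits = bits_per_s * 8 / 10   # 8b/10b: 10 line bits carry 8 data bits
    return payload_bits / 8 / 1e6        # bits/s -> bytes/s -> MB/s

print(sata_payload_mb_s(3.0))  # 300.0 MB/s ( SATA II )
print(sata_payload_mb_s(6.0))  # 600.0 MB/s ( SATA III )
```

So for a bulk-storage box full of slower drives, the cheaper 3Gb/s card leaves little real-world performance on the table.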
I connected up the “front panel” USB and eSATA connectors here as well – though again, it’s harder to see. It’s that mess of black smaller cables – next to the top of the bottom cage.
This motherboard also has a feature that I hadn’t seen on any other motherboard – a USB connector right on the board. I actually did not catch this in the specifications, but for a FreeNAS server, this was a serendipitous bonus! I could stick my USB stick inside the case and not have to worry about it snapping off the front or back if someone got too close to the server.
So I installed the FreeNAS image to the USB stick, slid all those drives into place and locked them in nice and snug and got ready to boot the beast up!
I will say, it was surprising how “front heavy” the case was with 15 drives in it – it weighs maybe 50 lbs, but it’s awkward to move around.
I created my 2 RAID sets, the first one from the 3TB drives. Total size – 15.2 TiB. The second RAID set from the 2TB drives. Total size – 10.1 TiB.
Here, I’d already done some file copies to it for speed testing, but with RAID-Z2 on both volumes ( dual-disk redundancy ), I ended up with a total of 25.3 TiB of available, usable storage!
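Those usable numbers are close to what you’d expect from RAID-Z2: two drives’ worth of each set goes to parity, drives are sold in decimal TB ( 10^12 bytes ) but reported in binary TiB ( 2^40 bytes ), and ZFS keeps a bit for itself. Here’s a rough estimate – note this is naive math that ignores ZFS metadata overhead and rounding, which is why it doesn’t land exactly on the numbers the UI reported:

```python
# Naive RAID-Z2 usable-capacity estimate: n drives, 2 devoted to parity.
# Drive sizes are decimal TB (10^12 bytes); ZFS reports binary TiB (2^40 bytes).
TIB = 2 ** 40

def raidz2_usable_tib(n_drives: int, drive_tb: float) -> float:
    data_drives = n_drives - 2            # RAID-Z2 keeps 2 drives of parity
    usable_bytes = data_drives * drive_tb * 1e12
    return usable_bytes / TIB

set1 = raidz2_usable_tib(8, 3)   # the (8) 3TB set
set2 = raidz2_usable_tib(7, 2)   # the (7) 2TB set
print(round(set1, 1), round(set2, 1), round(set1 + set2, 1))
```

The estimate totals about 25.5 TiB before overhead, against the 25.3 TiB actually reported.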
One final nice little surprise was that the power draw on my UPS showed that the system was only pulling 155w when it was idle! When it was writing, it would spike up to 186w or so, but I haven’t seen it go above that yet.
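For the curious, that idle draw translates into a pretty modest power bill. A quick estimate – the $0.12/kWh electricity rate here is my assumption, so plug in your own:

```python
# Rough annual energy use and cost at the measured idle draw.
# The rate per kWh is an assumed figure - substitute your local rate.
idle_watts = 155
hours_per_year = 24 * 365
kwh_per_year = idle_watts * hours_per_year / 1000   # watt-hours -> kWh
rate_per_kwh = 0.12                                 # assumption, USD
annual_cost = kwh_per_year * rate_per_kwh
print(round(kwh_per_year, 1), round(annual_cost, 2))
```

At that assumed rate, it’s roughly 1,358 kWh and $160-ish a year if left idling 24/7 – not bad for 25+ TiB of always-on storage.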
Time is a bit tight around the holidays, but "next time", I'll try to put together a post showing how I created a Hyper-V SQL Server using iSCSI "drives" on the FreeNAS server as the underlying data drives.
Usually, I end my posts or messages with something like "Thanks for your help" or a "Hope it helps!" -type of thought.
In this case, I just hope you enjoyed something a little different than the "normal."
I hope you and your family have a Merry Christmas!