Bits dropped from st0w's various technical experimentations
  • Solaris + ZFS = The Perfect Home File/Media Server

    Posted on May 16th, 2009 st0w 35 comments

    For a while I’ve been wanting to build the perfect home file server.  When I last lived with a couple good friends, we had a FreeBSD box set up for serving content.  Keep in mind this was back in 2002, so the whopping 300GB we had online was fairly sizable.  The disks were configured in a RAID 1 array, to ensure we wouldn’t lose data if we lost a drive.

    Well, we did.  But that was because one drive died and we (ok, I) didn’t replace it before the second one in the mirror also died.  Since then, I’ve been keeping my data on an external 1TB Western Digital drive, all the while being worried that if anything happened to the drive, I would again lose everything.  I needed a file server to maintain all my various data, and it needed to be both flexible and powerful.  Those cheapo NAS devices don’t offer the level of control and access that I wanted.

    My requirements for building a server were:

    • Secure – I want to be able to tweak and control access at every level
    • Able to easily add to and extend existing storage
    • Very fast network performance.  I use and love my Popcorn Hour to watch video in HD, so the server should be able to stream 1080p while doing any number of other tasks without issue
    • Ability to serve all different types of hosts: NFS, AFP, SMB, SCP/SFTP
    • Extremely fault-tolerant.  I’d like to be able to lose two disks and still be ok
    • Flexible. I do a number of other things/experiments, and I’d like to be able to use it for more than just serving files.
    • Inexpensive.  Self-explanatory, I’m a poor student.
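    A quick sanity check on the streaming requirement (the bitrates here are ballpark figures, not measurements: high-quality 1080p peaks around 40 Mbit/s, and gigabit Ethernet delivers roughly 940 Mbit/s of usable throughput):

```shell
# Rough headroom check: how many peak-rate 1080p streams fit in gigabit Ethernet?
STREAM_MBPS=40      # ballpark peak bitrate for high-quality 1080p video
LINK_MBPS=940       # realistic usable throughput of gigabit Ethernet
echo "concurrent streams: $(( LINK_MBPS / STREAM_MBPS ))"
```

    So a single 1080p stream is nowhere near saturating the network; the disks and CPU have plenty of headroom for other tasks.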

    That kind of storage flexibility has been available for ages on expensive SAN hardware from vendors like EMC. I’ve worked pretty extensively with Solaris over the years and have been closely following OpenSolaris, and it checks every box on that list.  With ZFS as the filesystem of choice, that same flexibility can be had for a much cheaper price.  ZFS is just the coolest thing to happen to data storage in ages. Read more about ZFS to see just how cool it is. It brings the ‘I,’ which stands for ‘Inexpensive,’ back into RAID.

    I ordered the following parts to come up with a fast, fault-tolerant, flexible server:

    1 × SUPERMICRO MBD-X7SBL-LN2 Motherboard — $204.99
    1 × COOLER MASTER 4-in-3 Hard Disk Cage — $21.99
    1 × Intel Xeon E3110 3.0GHz Dual-Core CPU — $169.99
    1 × LG 22X DVD±R SATA DVD Burner — $25.99
    2 × Kingston 4GB (2 x 2GB) DDR2 800 (PC2 6400) ECC DRAM — $106.98 ($53.49 ea)
    6 × Western Digital Caviar Black 1TB 7200 RPM SATA 3.0Gb/s Hard Drives — $629.94 ($104.99 ea)
    1 × Rosewill RC-216 PCI Express SATA card — $24.99
    1 × Western Digital Caviar SE 320GB 7200 RPM SATA 3.0Gb/s Hard Drive — $49.99
    1 × CORSAIR CMPSU-650TX 650W Power Supply — $99.99

    That comes to $1,334.85 before tax and shipping, which I think is pretty inexpensive for what you’re getting.  A few notes:

    ECC memory.  You don’t strictly need ECC memory, but running ZFS without it is a bad idea.  ECC catches and corrects single-bit memory errors before they become silent corruption on disk, and data integrity is the whole point here.  You’re spending all this money on redundant disks to prevent data loss; why wouldn’t you build the same redundancy into memory?  So don’t run ZFS without ECC.

    You’ll see there’s no case on that list.  I reused an old ATX case I had around, which works fine with the microATX Super Micro motherboard.

    The motherboard comes with six SATA cables; you may want to order a few extra if you don’t already have some lying around. The power supply has eight SATA power connections, all of which will be consumed by the drives in this setup. So if you ever intend to connect additional drives, you’ll need Molex-to-SATA converters to power them.

    The power supply.  Make sure whatever power supply you use has enough juice to power all the drives.  Do a little research if you’re unsure; you don’t want a system with drives that aren’t getting enough power.

    The extra SATA card.  The mobo only has six SATA ports, and this build has eight SATA devices (seven drives plus the DVD burner).  This was the cheapest decent SATA II card I could find, with the downside being that I’ll have to replace it if I want to add more drives in the future.

    Once it’s all built, download the latest OpenSolaris ISO and follow the install guide. I won’t go over it here, as it’s very straightforward. One thing you should note is that when Solaris installs, it will require you to select an entire disk for the OS to use.  This is where you pick the smaller 320GB drive and let it do its thing.

    Once it reboots, it’s time to setup a big ZFS file system from the other drives. You have to decide whether you want to use RAID-Z or RAID-Z2.  The difference between the two is that RAID-Z uses one disk for parity, and RAID-Z2 uses two disks for parity.  If you go with RAID-Z, you’ll have more usable space at the cost of less fault-tolerance: lose more than one disk and you’re toast.  RAID-Z2 allows you to still retain data if you lose two disks, at the cost of losing one more disk’s worth of storage.  After my previous difficulties, and considering disks are so cheap these days, I opted to go with RAID-Z2. Keep in mind that with ZFS if your file system is ever running out of space, you can just toss in another disk, add it to the pool, and you’re done.
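    The tradeoff is easy to put in numbers. A quick sketch, assuming the six 1TB data drives from the parts list:

```shell
# Usable capacity of an N-drive raidz vdev: (N - parity_drives) * drive_size
N=6; SIZE_TB=1
RAIDZ1_TB=$(( (N - 1) * SIZE_TB ))   # raidz: one drive of parity
RAIDZ2_TB=$(( (N - 2) * SIZE_TB ))   # raidz2: two drives of parity
echo "raidz:  ${RAIDZ1_TB} TB usable, survives 1 failed drive"
echo "raidz2: ${RAIDZ2_TB} TB usable, survives 2 failed drives"
```

    One extra terabyte of usable space versus surviving a second drive failure: after losing that old mirror, the choice was easy.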

    CLARIFICATION (updated 5/19/2009): Thank you to Steve and Grant, who correctly pointed out in the comments that this last statement about simply tossing in another disk does not apply to RAID-Z and RAID-Z2 pools. Because of how ZFS is built, you cannot extend an existing RAID-Z/Z2 set by adding a single drive; although this feature has been discussed, it has not yet been implemented. The same applies to shrinking RAID-Z stripes. Steve and Grant are correct: if you wish to maintain the redundancy provided by RAID-Z/Z2, you must add another whole set of drives to the pool, which requires at bare minimum two drives for RAID-Z or three for RAID-Z2.
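    If the pool ever does need to grow while keeping RAID-Z2 redundancy, the expansion would look something like this. This is a sketch I haven’t run on this box, and the device names (c8d0 and up) are hypothetical placeholders:

```shell
# Hypothetical: grow the pool by adding a second raidz2 vdev (minimum three drives).
# The device names below are placeholders -- check `format` output for real ones.
zpool add zpool raidz2 c8d0 c9d0 c10d0
zpool status zpool
```

    After this, the pool stripes data across both raidz2 vdevs, and either vdev can still survive two drive failures.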

    So let’s get to actually creating that ZFS file system, with the examples using RAID-Z2.

    First, identify the disks in the system:

    .oO(root@st0wlaris ~) format
    Searching for disks...done
           0. c3t0d0 <DEFAULT cyl 60798 alt 2 hd 255 sec 126>
           1. c3t1d0 <DEFAULT cyl 60798 alt 2 hd 255 sec 126>
           2. c4d1 <DEFAULT cyl 60798 alt 2 hd 255 sec 126>
           3. c5d0 <DEFAULT cyl 60798 alt 2 hd 255 sec 126>
           4. c5d1 <DEFAULT cyl 60798 alt 2 hd 255 sec 126>
           5. c6d0 <DEFAULT cyl 38910 alt 2 hd 255 sec 63>
           6. c7d0 <DEFAULT cyl 60798 alt 2 hd 255 sec 126>
    Specify disk (enter its number): ^C
    .oO(root@st0wlaris ~)

    Since c6d0 is the only disk currently in use (it’s the only differently-sized disk), all the others will go into my raidz2 setup. But to be sure, first verify the existing pool’s contents:

    .oO(root@st0wlaris ~) zpool status
    pool: rpool
    state: ONLINE
    scrub: none requested
           NAME        STATE     READ WRITE CKSUM
           rpool       ONLINE       0     0     0
             c6d0s0    ONLINE       0     0     0
    errors: No known data errors

    Yep, c6d0 is in use, so all the other disks are fair game. First, create the storage pool:

    .oO(root@st0wlaris ~) zpool create zpool raidz2 c3t0d0 c3t1d0 c4d1 c5d0 c5d1 c7d0
    .oO(root@st0wlaris ~) zpool status
    pool: rpool
    state: ONLINE
    scrub: none requested
           NAME        STATE     READ WRITE CKSUM
           rpool       ONLINE       0     0     0
             c6d0s0    ONLINE       0     0     0
    errors: No known data errors
    pool: zpool
    state: ONLINE
    scrub: none requested
           NAME        STATE     READ WRITE CKSUM
           zpool       ONLINE       0     0     0
             raidz2    ONLINE       0     0     0
               c3t0d0  ONLINE       0     0     0
               c3t1d0  ONLINE       0     0     0
               c4d1    ONLINE       0     0     0
               c5d0    ONLINE       0     0     0
               c5d1    ONLINE       0     0     0
               c7d0    ONLINE       0     0     0
    errors: No known data errors
    .oO(root@st0wlaris ~)

    Next, check the available pool and file system space:

    .oO(root@st0wlaris ~) zpool list
    NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
    rpool   298G  7.33G   291G     2%  ONLINE  -
    zpool  5.44T   216K  5.44T     0%  ONLINE  -
    .oO(root@st0wlaris ~) zfs list
    NAME                     USED  AVAIL  REFER  MOUNTPOINT
    rpool                   11.3G   282G    72K  /rpool
    rpool/ROOT              3.31G   282G    18K  legacy
    rpool/ROOT/opensolaris  3.31G   282G  3.24G  /
    rpool/dump              4.00G   282G  4.00G  -
    rpool/export            23.2M   282G    19K  /export
    rpool/export/home       23.1M   282G    19K  /export/home
    rpool/export/home/st0w  23.1M   282G  23.1M  /export/home/st0w
    rpool/swap              4.00G   286G    16K  -
    zpool                    120K  3.56T  36.0K  /zpool

    Notice the disparity between the two numbers for zpool: 5.44T of raw space in the pool, but only 3.56T of usable space in the file system. This is expected, as two of the six 1TB drives have been effectively sacrificed to parity.
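    Incidentally, the numbers themselves deserve a note: the drives are sold as 1TB = 10^12 bytes, while these tools report binary units (2^40 bytes per “T”). Assuming that conversion, the figures land close to what’s reported:

```shell
# Convert marketing terabytes (10^12 bytes) to the binary units zpool reports (2^40 bytes)
awk 'BEGIN {
    tib = 2 ^ 40
    printf "raw:    %.2f TiB\n", 6 * 10^12 / tib   # six 1TB drives
    printf "usable: %.2f TiB\n", 4 * 10^12 / tib   # four data drives after raidz2 parity
}'
```

    The small remaining gap between this estimate and the reported 5.44T/3.56T comes from the drives’ exact sector counts and the space ZFS reserves for its own metadata.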

    Next just create any file systems you want, and you’re done. I know. Criminally easy, isn’t it?

    .oO(root@st0wlaris ~) zfs create zpool/media
    .oO(root@st0wlaris ~) zfs create zpool/software

    And now make them available read-only via NFS:

    .oO(root@st0wlaris ~) zfs set sharenfs=ro zpool/media
    .oO(root@st0wlaris ~) zfs set sharenfs=ro zpool/software
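    From a client, consuming the share is one command. A sketch assuming a Linux client and that the server resolves by the hostname st0wlaris (both names are illustrative):

```shell
# Mount the read-only media share on a Linux client (run as root)
mkdir -p /mnt/media
mount -t nfs st0wlaris:/zpool/media /mnt/media
```

    A Solaris client would use `mount -F nfs` instead; either way, no per-client configuration lives on the server beyond the sharenfs property.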

    Every file system draws from the same pool of storage as the others in that pool, but the big advantage is that each one behaves, and can be controlled, like a fully independent file system. So you can set quotas, different rules, whatever you so desire.
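    As a taste of that per-file-system control, here are a couple of property settings (the values are illustrative examples, not what I actually run):

```shell
# Cap the media file system at 2TB and compress the software archive
zfs set quota=2T zpool/media
zfs set compression=on zpool/software
zfs get quota,compression zpool/media zpool/software
```

    Properties like these are inherited by any child file systems you create later, so you can set a policy once at the top of a tree.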

    There is a lot you can do with ZFS, and this just deals with getting it setup in the first place. I’ll be posting more as I do different things with the server.



    25 responses to “Solaris + ZFS = The Perfect Home File/Media Server”

    • I’ve built something similar myself and posted my experiences at I also found the following site to be invaluable:

    • My ZFS experiences are at ( / blog / file-server in case the url gets truncated again)

    • May I also strongly recommend Openfiler or even FreeNAS.

      In 2002, technologies like block replication, web-based storage management, and failover support were almost nonexistent.

      It might be overkill for your setup, but for an investment of $1,334.85 you may enjoy these additional features.

      • Leon -

        Hadn’t heard of Openfiler, but it looks quite robust. Would definitely be interesting to experiment with if file serving was the only consideration (truth be told, I do a lot of other tasks and having the ability to do things like run VMs under Solaris’ Xen-based xVM was a big part of the decision for me). FreeNAS is a very solid product, I’ve been a FreeBSD fan for years and have been following FreeNAS for a while.


    • What about encryption? I bet that RIAA won’t be very happy when it sees all those movies (for Popcorn Hour) stored on your server :-)

      • Cristian -

        Hah! Well, everything I have on there is either a rip of a DVD I physically own or have legally purchased online, or something I’ve recorded with a separate DVR (MythTV) box. But ZFS crypto support is actually in the works, targeted for later this year.

    • I have a very similar setup. I can’t get the drive temperatures in OpenSolaris, though. That would help a lot in seeing how much I’m stressing the drives. Do you know how to do it?

    • It looks like a nice setup; the only thing I would change would be to use the Western Digital RE3 1TB enterprise drive. It has a 1.2 million hour mean time between failures. Thanks for the great article.

      • Martin -

        Thanks! The RE3 drives are amazing, but they drive the cost up about $50/drive, which makes the whole setup $300 more. They do provide more features for RAID arrays, though, and are a great option if the budget is there.

    • “Keep in mind that with ZFS if your file system is ever running out of space, you can just toss in another disk, add it to the pool, and you’re done.”

      Not if you want the data on that drive to be stored with redundancy. RAIDZ arrays cannot be grown. Yes, you can add a drive to the pool, but that drive alone will have no redundancy, and since files will be spread across the whole pool, losing that drive could lead to huge data loss if you don’t manage your pool right.

      Your new drive must be part of a RAID1 or RAIDZ array to give redundancy. If you want to continue with two drives’ redundancy, then that means that growing your pool will require a minimum of 3 drives.

    • Unless there’s a new feature in the latest ZFS version that I’m unaware of, there’s something that needs to be clarified regarding growing the size of the zfs pool.

      (In ZFS, the storage pool is made up of virtual devices, or “vdev”s.)

      A raidz2 vdev cannot change in size. Once you create it, it’s set in stone. If you “toss in another disk”, you can add it to the overall pool, but it won’t be part of the original raidz2. You can add the new disk as just a stand-alone vdev (or better, buy two new disks and add them to the pool as a mirrored vdev.) The consequence of this is that now you have a pool with different redundancy levels, so you’ve reduced the fault tolerance of the system to that of the weakest vdev.

      It’s recommended to always add vdevs of the same redundancy level. So in your case, to expand the pool, you would need to add a 3-disk raidz2 vdev.

    • Can’t you take like $200-$300 off by not going with a Xeon?

      Would it really be that much slower if you went with the cheapest motherboard you could find?

      • It wouldn’t be slower at all. The cost resulted from my choice to use ECC memory in the system. ECC is generally considered “server-grade” by manufacturers, and thus less expensive “workstation” grade boards/CPUs don’t support it. The mobo I chose was the least expensive board I could find that supported ECC memory. The board and CPU come to about $375, and you could probably get that down to $100-125 with an AMD or Intel Core i7.

    • For that price, I think a Synology RS509 or similar would be better. And you’ll save over $80 per year on your power bill.

      • mitu – I checked out Synology’s product line and saw the DS509 (was that the one you’re referencing, or did I fail at Google?) It seems to start around $1100 without any drives, so the cost would be significantly higher once you pop the $600 for drives, and the solution would be a lot less flexible. Although you would definitely save on the power costs, for sure.

    • I’m curious… should your 320GB fail, how easy is it to point a freshly built So