bitdroppings

Bits dropped from st0w's various technical experimentations
  • Python: Extracting a file from a zip file with a different name

    Posted on July 23rd, 2010 st0w No comments

    When dealing with zip files, sometimes you may want to alter the filename before extraction. Whether it is to run the name through some kind of sanitization routine or adjust it in some other way, the docs don’t seem to offer a clear way to do this. Instead, I’ve found the following works quite nicely:

    import zipfile
     
    zipdata = zipfile.ZipFile('somefile.zip')
    zipinfos = zipdata.infolist()
     
    for zipinfo in zipinfos:
        zipinfo.filename = do_something_to(zipinfo.filename)
        zipdata.extract(zipinfo)

    The Python zipfile module exposes the contents of an archive through ZipInfo objects. Each one is bound to a particular archive entry by position, not by filename, and extract() accepts either an archive filename or a ZipInfo object. So if you alter a ZipInfo object’s filename attribute before extraction, extract(zipinfo) will write the entry out under the new name while still extracting the original data.
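    As a self-contained sketch of the whole round trip (the sanitize routine here is just a stand-in for whatever cleanup you need; stripping directory components also guards against entries like ../../etc/passwd escaping the target directory):

    ```python
    import os
    import tempfile
    import zipfile

    # Stand-in sanitizer: drop any directory components from the entry name.
    def sanitize(name):
        return os.path.basename(name)

    # Build a throwaway archive to demonstrate the rename-on-extract trick.
    workdir = tempfile.mkdtemp()
    archive = os.path.join(workdir, 'demo.zip')
    with zipfile.ZipFile(archive, 'w') as zf:
        zf.writestr('some/dir/report.txt', 'hello')

    with zipfile.ZipFile(archive) as zf:
        for info in zf.infolist():
            info.filename = sanitize(info.filename)  # rename before extracting
            zf.extract(info, workdir)

    # The data lands under the new name, original contents intact.
    print(open(os.path.join(workdir, 'report.txt')).read())
    ```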

  • Setting Up a New FreeBSD Server for Easy Maintenance

    Posted on May 18th, 2009 st0w 1 comment

    I’ve been a big fan of using FreeBSD as a server platform for a very long time. Since about 1995 or so, I would guess. This blog is running on FreeBSD, and I’ve always found it to be an incredibly well-performing, robust, stable, and easy-to-manage server platform.  But there are several pieces of software that can make your life much easier in keeping a system up-to-date, and I figured I’d document them for new FreeBSD users as I go through setting up a new system.

    A big part of that flexibility is the ports and packages system.  I won’t go into what it is in extreme depth, but suffice it to say that it allows you to easily install software from binary packages if you don’t have a specific need to compile them, or to alternatively easily configure and compile applications from source to fit your unique needs.  The ports system allows you to configure every little detail about how software will be built, and you can do that configuration only once in /etc/make.conf. Any subsequent builds or upgrades will automatically use those same configuration parameters.  But I digress.

    Before you do anything else, you should set up an alternate user account for yourself. It goes without saying that you should NEVER use the root account except for individual tasks for which you explicitly need it.  But even after all these years, I still see admins logging in as root all the time. Don’t do it. Install sudo.  For more details, check out this article on Softpanorama.

    Portsnap

    The ports collection is distributed as a massive hierarchy of directories and Makefiles that are used to configure the 20,207 (as of May 18, 2009) pieces of software offered through the ports system. You can install the ports tree during your initial FBSD install, which many people do, or you can install it after the fact. Keeping that hierarchy up-to-date is critical to maintaining a secure system.  In the old days, we used cvsup and synced to the ports repository.  Now, you should be using portsnap.

    Since FreeBSD 6.0, portsnap has been included by default.  When first setting up your ports tree, portsnap can download and install everything for you.  It’s as simple as:

    hostname59826# portsnap fetch
    Looking up portsnap.FreeBSD.org mirrors... 3 mirrors found.
    Fetching public key from portsnap2.FreeBSD.org... done.
    Fetching snapshot tag from portsnap2.FreeBSD.org... done.
    Fetching snapshot metadata... done.
    Fetching snapshot generated at Sun May 17 20:06:01 CDT 2009:
    3df0dd0aa6718a7ab040d0f48b789d2170ed64adf142a2100% of   56 MB 4674 kBps 00m00s
    Extracting snapshot... done.
    Verifying snapshot integrity... done.
    Fetching snapshot tag from portsnap2.FreeBSD.org... done.
    Fetching snapshot metadata... done.
    Updating from Sun May 17 20:06:01 CDT 2009 to Mon May 18 16:29:58 CDT 2009.
    Fetching 3 metadata patches.. done.
    Applying metadata patches... done.
    Fetching 0 metadata files... done.
    Fetching 88 patches.....10....20....30....40....50....60....70....80.... done.
    Applying patches... done.
    Fetching 5 new ports or files... done.
    hostname59826# portsnap extract
    /usr/ports/.cvsignore
    /usr/ports/CHANGES
    /usr/ports/COPYRIGHT
    /usr/ports/GIDs
    /usr/ports/KNOBS
    [... whole bunch of output deleted here ...]
    /usr/ports/x11/yakuake/
    /usr/ports/x11/yalias/
    /usr/ports/x11/yeahconsole/
    /usr/ports/x11/yelp/
    /usr/ports/x11/zenity/
    Building new INDEX files... done.
    hostname59826#

    And that’s it. When you’re updating it during routine times, you just use the fetch and update commands, as follows (note that you can combine them into a single command line):

    hostname59826# portsnap fetch update
    Looking up portsnap.FreeBSD.org mirrors... 3 mirrors found.
    Fetching snapshot tag from portsnap1.FreeBSD.org... done.
    Fetching snapshot metadata... done.
    Updating from Mon May 18 01:46:40 EDT 2009 to Mon May 18 17:29:58 EDT 2009.
    Fetching 3 metadata patches.. done.
    Applying metadata patches... done.
    Fetching 0 metadata files... done.
    Fetching 88 patches.....10....20....30....40....50....60....70....80.... done.
    Applying patches... done.
    Fetching 2 new ports or files... done.
    Removing old files and directories... done.
    Extracting new files:
    /usr/ports/CHANGES
    /usr/ports/KNOBS
    [... whole bunch of output deleted here ...]
    /usr/ports/x11/gdm/
    /usr/ports/x11/kde4/
    /usr/ports/x11/kdelibs3/
    /usr/ports/x11/xfce4-clipman-plugin/
    Building new INDEX files... done.
    hostname59826#

    Seems like it would be a good command to toss into cron, right? Well, in order to avoid the servers getting assaulted on a daily basis, ‘portsnap fetch’ won’t run from cron. Instead, you have to run ‘portsnap cron’ which waits a random amount of time, up to one hour, before syncing. You also shouldn’t cron the update portion, as you could run into problems if the tree attempts to update while you’re doing something with the ports tree. And if you’ve ever had to pull an all-nighter fixing something, you know this kind of thing can really confuse you at 3:00am when strange things start happening. Instead, add the following entry to cron:

    0       4       *       *       *       portsnap -I cron update && pkg_version -vIL'=>'

    This will download changes to the ports tree, as well as the compiled index files. The pkg_version command will list all out-of-date ports, and its output will be e-mailed to you. Then you can update them using portupgrade.

    Portupgrade

    Much has been written about portupgrade; ONLamp and the FreeBSDwiki have some good reviews. I won’t go into depth here, save for installing it and reviewing the single command you can use to upgrade all of your installed ports.

    First, install it:

    hostname59826# cd /usr/ports/ports-mgmt/portupgrade
    hostname59826# make install clean

    Accept the default options (or change them if you wish), and let it compile and install. Before using it, you’ll want to set up /etc/make.conf to contain whatever options you prefer. Here’s what I typically use on new boxes:

    # It's a server.  We don't need GUIs... except Cairo needs to generate a single lib header that gtk needs..
    .if ${.CURDIR:M*/graphics/cairo}
    .else
    WITHOUT_GUI=yes
    WITHOUT_X11=yes
    .endif

    All it does is ensure that X11 support doesn’t get built into any port, with graphics/cairo carved out as the one exception noted in the comment.

    To use portupgrade, the commands are simply:

    hostname59826# portupgrade -varR

    You can omit the ‘v’ for less verbose output. I typically issue the command whenever I get daily e-mails from pkg_version about updated ports being available.

    Portaudit

    Portaudit is an awesome little utility. It compares your installed port versions to its regularly updated database of known vulnerabilities. It can quickly and easily alert you to any insecure software that you have on your system. The whole thing can be set to run from cron, so all you have to do is look at an e-mail once a day and make any necessary updates. There’s no excuse to not be running it on a FreeBSD box.

    Install it from ports:

    hostname59826# cd /usr/ports/ports-mgmt/portaudit
    hostname59826# make install clean
    ===>  Vulnerability check disabled, database not found
    ===>  Extracting for portaudit-0.5.12
    ===>  Patching for portaudit-0.5.12
    ===>  Configuring for portaudit-0.5.12
    ===>  Building for portaudit-0.5.12
    ===>  Installing for portaudit-0.5.12
    ===>   Generating temporary packing list
    ===>  Checking if ports-mgmt/portaudit already installed
     
    ===>  To check your installed ports for known vulnerabilities now, do:
     
          /usr/local/sbin/portaudit -Fda
     
    ===>   Compressing manual pages for portaudit-0.5.12
    ===>   Registering installation for portaudit-0.5.12
    ===>  Cleaning for portaudit-0.5.12
    hostname59826# /usr/local/sbin/portaudit -Fda
    auditfile.tbz                                 100% of   55 kB  205 kBps
    New database installed.
    Database created: Mon May 18 20:10:02 CDT 2009
    0 problem(s) in your installed packages found.

    Done. Then anytime you want to check for vulnerabilities, just issue the command

    hostname59826# portaudit   
    0 problem(s) in your installed packages found.

    Once installed, portaudit will be run automatically every day by FreeBSD’s periodic(8) facility, which is invoked from the default /etc/crontab. So your job is almost done. Now you should make sure that somebody is receiving and reviewing the daily security run output. And all your daily root mail. If you aren’t, edit /etc/mail/aliases and add a line like the following:

    root: me@mydomain.com

    Then run

    hostname59826# newaliases

    So now not only are you notified daily of any security issues; portaudit will also refuse to let you install any port with a known vulnerability. The ports collection is aware of it and will automatically invoke it before building or installing software.

    FreeBSD-update

    Keeping FreeBSD up to date used to be a real pain, as you had to either continually rebuild everything from source or update using install CDs. It. Was. Awful. Then Colin Percival wrote a nice little utility called freebsd-update, which provided the ability to do binary diff updates for security updates, and life got a whole lot easier.

    Now, freebsd-update is distributed as part of the base FBSD install and it’s to your advantage to use it. Security patches can be downloaded by:

    hostname59826# freebsd-update fetch
    Looking up update.FreeBSD.org mirrors... 2 mirrors found.
    Fetching public key from update5.FreeBSD.org... done.
    Fetching metadata signature for 7.1-RELEASE from update5.FreeBSD.org... done.
    Fetching metadata index... done.
    Fetching 2 metadata files... done.
    Inspecting system... done.
    Preparing to download files... done.
    Fetching 32 patches.....10....20....30. done.
    Applying patches... done.
     
    The following files will be updated as part of updating to 7.1-RELEASE-p5:
    /boot/kernel/kernel
    /boot/kernel/kernel.symbols
    /lib/libc.so.7
    /lib/libcrypto.so.5
    /rescue/[
    [... lots of extraneous junk deleted ...]
    /usr/src/sys/kern/kern_environment.c
    /usr/src/sys/kern/kern_time.c

    and then installed:

    hostname59826# freebsd-update install
    Installing updates... done.
    hostname59826#

    If any kernel patches were installed, you’ll have to reboot the box. You can cron the check for and download of new updates by adding the following to /etc/crontab:

    @daily                                  root    freebsd-update cron

    When run from cron, freebsd-update will only download updates and it won’t install them. It will e-mail you to notify you that updates are ready to be installed. This is a good thing. :) (You did configure your system to send root mail to you, right?)

    For more information, including all the various configuration options for freebsd-update and how to use it to do minor or major version upgrades (from 7.1 to 7.2, or from 6.0 to 7.0, for example), read the FreeBSD Handbook entry on freebsd-update.

    Feel free to leave comments with any other things you’d recommend be done when setting up a new FreeBSD box!

    Update: For some strange reason, FreeBSD 7.1 is still shipping with perl 5.6 as the default. Meanwhile, the 5.6 version is scheduled to be removed by the end of May, 2009. You’ll have to replace it with either 5.8 or 5.10; the choice is yours, so do some research. Some people claim 5.10 isn’t stable or is not a “true” release, merely a path to perl 6; others have no qualms and use 5.10 on production machines.

    You can use portupgrade to replace perl5.6 with either of the two as follows:

    hostname59826# portupgrade -o lang/perl5.8 perl

    or

    hostname59826# portupgrade -o lang/perl5.10 perl
  • Fixing e1000g Network Glitches in Solaris by Upgrading to the Latest Dev Version

    Posted on May 16th, 2009 st0w No comments

    In my last post, I discussed building my home file server around OpenSolaris and ZFS. ZFS is trivial to set up, and after building it, I started moving over all my data from my external 1TB drive on my old computer. I got excited, and wanted to test the performance of the disks and the network, so while copying data I NFS mounted the new server from one of my other machines and tried to copy data off it as fast as possible. Much to my dismay, network access on the machine promptly became sluggish and then stopped working altogether, and I was left quite ticked off at having built this powerful file server only to have it be unable to serve files properly.

    I did some quick Googling, and found this defect at OpenSolaris.org, which links to this underlying bug. The fix is either to upgrade Solaris to build nv_103 or newer, or as a workaround, add this to /kernel/drv/e1000g.conf:

    tx_hcksum_enable=0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0;
    lso_enable=0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0;

    Then do “update_drv e1000g” and unplumb and replumb the interfaces, or reboot.  Unfortunately, I also discovered there are A LOT of bugs related to the e1000g, so I decided to upgrade to a more recent kernel with all the fixes integrated.  Checking the system, I found kernel rev snv_101 was installed:

    .oO(st0w@st0wlaris ~) uname -a
    SunOS st0wlaris 5.11 snv_101a i86pc i386 i86pc Solaris

    So the first thing I tried was updating OpenSolaris, hoping that would give me kernel build 103 or higher. Updating Solaris is trivially easy, requiring only the following commands:

    .oO(st0w@st0wlaris ~) pfexec pkg refresh
    .oO(st0w@st0wlaris ~) pfexec pkg image-update

    And that’s literally it; just reboot after it completes and you’re done. One of the cool things about Solaris/pkg/ZFS is the concept of boot environments. Because ZFS supports snapshots and clones, pkg effectively makes a writable copy of your current system (a clone, though not technically a snapshot), applies the upgrade to that copy, and registers it as a new boot environment (BE). After the image-update completes, you can view the BEs on your system as follows:

    .oO(st0w@st0wlaris ~) beadm list
     
    BE            Active Active on Mountpoint Space
    Name                 reboot               Used
    ----          ------ --------- ---------- -----
    opensolaris   yes    no        legacy     57.5K
    opensolaris-1 no     yes       -          2.59G

    Here you can see the opensolaris-1 BE that’s been created and will be active upon the next reboot. OpenSolaris.org has more documentation on BEs, and it’s worth reading. They’re pretty damn cool. Unfortunately upon rebooting, the kernel rev was the same. You can actually verify the packages that will be installed by pkg as follows:

    .oO(st0w@st0wlaris ~) pfexec pkg refresh
    .oO(st0w@st0wlaris ~) pfexec pkg image-update -nv

    Had I done that before, I would have seen that there wasn’t an update to the kernel available. Live and learn. I then discovered that the preview of OpenSolaris 2009.06 is available, and based on snv_111a. You can upgrade from OpenSolaris 2008.11 to the preview of 2009.06 fairly easily. Just change pkg’s origin repository to the developer repository, image-update, reboot, and you’re done.

    .oO(st0w@st0wlaris ~) pfexec pkg set-authority -O http://pkg.opensolaris.org/dev/ opensolaris.org
    .oO(st0w@st0wlaris ~) pfexec pkg refresh
    .oO(st0w@st0wlaris ~) pfexec pkg image-update
    .oO(st0w@st0wlaris ~) pfexec reboot

    Really. That’s it. It will create a new BE for you and upgrade that to OpenSolaris 2009.06, which is based on snv_111a. I can confirm that it does fix the network issues. Once OpenSolaris 2009.06 becomes an official release, you can just change your repository back to the release:

    .oO(st0w@st0wlaris ~) pfexec pkg set-authority -O http://pkg.opensolaris.org/ opensolaris.org
    .oO(st0w@st0wlaris ~) pfexec pkg refresh

    UPDATE: Note that as per the release of OpenSolaris 2009.06, which has recently occurred, the syntax for the above command has changed slightly. Before being able to do any updates, you will need to issue the following instead, if you are staying on the dev path:

    .oO(st0w@st0wlaris ~) pfexec pkg set-publisher -O http://pkg.opensolaris.org/dev opensolaris.org
    .oO(st0w@st0wlaris ~) pfexec pkg refresh
    .oO(st0w@st0wlaris ~) pfexec pkg image-update

    You’ll probably get a note that you have to update pkg, so go ahead and follow the instructions, then repeat the image-update command.

    If you want to switch to the production-grade release path, use the following commands and you’ll be all set:

    .oO(st0w@st0wlaris ~) pfexec pkg set-publisher -O http://pkg.opensolaris.org/ opensolaris.org
    .oO(st0w@st0wlaris ~) pfexec pkg refresh
    .oO(st0w@st0wlaris ~) pfexec pkg image-update

    But if you stay at the dev repository, you’ll continue to get new releases as they’re put live, roughly every two weeks. Dev releases can be buggy, so unless you have a reason to upgrade to dev releases, like a broken network driver ;) or software that’s not in the mainline repository, it’s probably best to switch back to the release repository.

  • Solaris + ZFS = The Perfect Home File/Media Server

    Posted on May 16th, 2009 st0w 35 comments

    For a while I’ve been wanting to build the perfect home file server.  When I last lived with a couple good friends, we had a FreeBSD box set up for serving content.  Now keep in mind this was back in 2002, so the whopping 300GB we had online was fairly sizable.  The disks were configured in a RAID 1 array, so as to ensure we wouldn’t lose data if we lost a drive.

    Well, we did.  But that was because one drive died and we (ok, I) didn’t replace it before the second one in the mirror also died.  Since then, I’ve been keeping my data on an external 1TB Western Digital drive, all the while being worried that if anything happened to the drive, I would again lose everything.  I needed a file server to maintain all my various data, and it needed to be both flexible and powerful.  Those cheapo NAS devices don’t offer the level of control and access that I wanted.

    My requirements for building a server were:

    • Secure – I want to be able to tweak and control access at every level
    • Able to easily add to and extend existing storage
    • Very fast network performance.  I use and love my Popcorn Hour to watch video in HD, so the server should be able to stream 1080p while doing any number of other tasks without issue
    • Ability to serve all different types of hosts: NFS, AFP, SMB, SCP/SFTP
    • Extremely fault-tolerant.  I’d like to be able to lose two disks and still be ok
    • Flexible. I do a number of other things/experiments, and I’d like to be able to use it for more than just serving files.
    • Inexpensive.  Self-explanatory, I’m a poor student.

    The storage flexibility is something that has been around for ages on expensive SAN solutions from vendors like EMC. I’ve worked pretty extensively with Solaris over the years, and have been closely following OpenSolaris over the past few years.  Solaris is all of these things, and with ZFS as the filesystem of choice, that same flexibility can be had at a much lower price.  ZFS is just the coolest thing to happen to data storage in ages. Read more about ZFS to see just how cool it is. It brings the ‘I,’ which stands for ‘Inexpensive,’ back into RAID.

    I ordered the following parts to come up with a fast, fault-tolerant, flexible server:

    Qty  Description                                                         Price
    1    SUPERMICRO MBD-X7SBL-LN2 Motherboard                                $204.99
    1    COOLER MASTER 4-in-3 Hardisk Cage                                   $21.99
    1    Intel Xeon E3110 3.0GHz Dual-Core CPU                               $169.99
    1    LG 22X DVD±R SATA DVD Burner                                        $25.99
    2    Kingston 4GB (2 x 2GB) DDR2 800 (PC2 6400) ECC DRAM                 $106.98 ($53.49 ea)
    6    Western Digital Caviar Black 1TB 7200 RPM SATA 3.0Gb/s Hard Drives  $629.94 ($104.99 ea)
    1    Rosewill RC-216 PCI Express SATA card                               $24.99
    1    Western Digital Caviar SE 320GB 7200 RPM SATA 3.0Gb/s Hard Drive    $49.99
    1    CORSAIR CMPSU-650TX 650W Power Supply                               $99.99

    For a total of $1,334.85, pre-tax/shipping.  Which I think is pretty inexpensive for what you’re getting.  A few notes:

    ECC memory.  You don’t strictly need ECC memory, but it’s a really bad idea to run ZFS without it. ECC helps prevent silent in-memory corruption, and since data integrity is the whole point, it’s worth it. You’re spending all this money to build a file server with disk redundancy to help prevent data loss; why wouldn’t you build redundancy into memory as well? So don’t run ZFS without ECC.

    You’ll see there’s no case on that list.  I reused an old ATX case I had around, which works fine with the microATX Super Micro motherboard.

    The motherboard comes with six SATA cables; you may want to order a few extra to have on hand in case you don’t already have some lying around. The power supply has eight SATA power connectors, all of which will be consumed by the drives in this setup. So if you ever intend to connect additional drives, you’ll have to get a converter to power them from the Molex connectors.

    The power supply.  Make sure you have enough juice from whatever power supply you use to power all the drives.  Do a little research if you’re unsure, as you don’t want to have a system with drives that aren’t getting enough power.

    The extra SATA card.  The mobo only has six SATA ports on it, and the system has eight drives.  This was the cheapest decent SATA II card I could find, with the downside being that I’ll have to replace it if I want to add more drives in the future.

    Once it’s all built, download the latest OpenSolaris ISO and follow the install guide. I won’t go over it here, as it’s very straightforward. One thing you should note is that when Solaris installs, it will require you to select an entire disk for the OS to use.  This is where you pick the smaller 320GB drive and let it do its thing.

    Once it reboots, it’s time to setup a big ZFS file system from the other drives. You have to decide whether you want to use RAID-Z or RAID-Z2.  The difference between the two is that RAID-Z uses one disk for parity, and RAID-Z2 uses two disks for parity.  If you go with RAID-Z, you’ll have more usable space at the cost of less fault-tolerance: lose more than one disk and you’re toast.  RAID-Z2 allows you to still retain data if you lose two disks, at the cost of losing one more disk’s worth of storage.  After my previous difficulties, and considering disks are so cheap these days, I opted to go with RAID-Z2. Keep in mind that with ZFS if your file system is ever running out of space, you can just toss in another disk, add it to the pool, and you’re done.

    CLARIFICATION (updated 5/19/2009): Thank you to Steve and Grant, who correctly pointed out in the comments that this last statement about simply tossing in another disk does not apply to RAID-Z and RAID-Z2 pools. Because of how ZFS is built, it is currently not possible to just extend a RAID-Z/2 set by adding a single drive. Although there has been discussion of how this could be implemented, it currently hasn’t been done. The same applies to shrinking RAID-Z stripes. Steve and Grant are correct, if you wish to maintain the redundancy provided by RAID-Z/2, you must add another set of drives to the pool, thus requiring at bare minimum two drives for RAID-Z or three for RAID-Z2.
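    The capacity trade-off is easy to put into numbers. A rough sketch (usable_tb is a hypothetical helper; it ignores ZFS metadata overhead and TB/TiB rounding):

    ```python
    # Usable capacity of a single RAID-Z vdev: total disks minus the
    # disks consumed by parity (parity=1 for raidz, parity=2 for raidz2).
    def usable_tb(disks, disk_tb, parity):
        return (disks - parity) * disk_tb

    print(usable_tb(6, 1, 1))  # raidz:  5 TB usable, survives one failure
    print(usable_tb(6, 1, 2))  # raidz2: 4 TB usable, survives two failures
    ```

    With six 1TB drives, raidz2 gives up one extra drive’s worth of space in exchange for surviving a second failure.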

    So let’s get to actually creating that ZFS file system, with the examples using RAID-Z2.

    First, identify the disks in the system:

    .oO(root@st0wlaris ~) format
    Searching for disks...done
     
    AVAILABLE DISK SELECTIONS:
           0. c3t0d0 <DEFAULT cyl 60798 alt 2 hd 255 sec 126>
              /pci@0,0/pci8086,2940@1c/pci197b,2363@0/disk@0,0
           1. c3t1d0 <DEFAULT cyl 60798 alt 2 hd 255 sec 126>
              /pci@0,0/pci8086,2940@1c/pci197b,2363@0/disk@1,0
           2. c4d1 <DEFAULT cyl 60798 alt 2 hd 255 sec 126>
              /pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0
           3. c5d0 <DEFAULT cyl 60798 alt 2 hd 255 sec 126>
              /pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0
           4. c5d1 <DEFAULT cyl 60798 alt 2 hd 255 sec 126>
              /pci@0,0/pci-ide@1f,2/ide@1/cmdk@1,0
           5. c6d0 <DEFAULT cyl 38910 alt 2 hd 255 sec 63>
              /pci@0,0/pci-ide@1f,5/ide@0/cmdk@0,0
           6. c7d0 <DEFAULT cyl 60798 alt 2 hd 255 sec 126>
              /pci@0,0/pci-ide@1f,5/ide@1/cmdk@0,0
    Specify disk (enter its number): ^C
    .oO(root@st0wlaris ~)

    Since I know c6d0 is the only disk currently in use (it’s the only differently-sized disk), I know all the others will be used in my raidz2 setup. But to be sure, first verify the existing pool’s contents:

    .oO(root@st0wlaris ~) zpool status
    pool: rpool
    state: ONLINE
    scrub: none requested
    config:
     
           NAME        STATE     READ WRITE CKSUM
           rpool       ONLINE       0     0     0
             c6d0s0    ONLINE       0     0     0
     
    errors: No known data errors

    Yep, c6d0 is in use, so we’ll use all the other disks. First, create the storage pool:

    .oO(root@st0wlaris ~) zpool create zpool raidz2 c3t0d0 c3t1d0 c4d1 c5d0 c5d1 c7d0
    .oO(root@st0wlaris ~) zpool status
    pool: rpool
    state: ONLINE
    scrub: none requested
    config:
     
           NAME        STATE     READ WRITE CKSUM
           rpool       ONLINE       0     0     0
             c6d0s0    ONLINE       0     0     0
     
    errors: No known data errors
     
    pool: zpool
    state: ONLINE
    scrub: none requested
    config:
     
           NAME        STATE     READ WRITE CKSUM
           zpool       ONLINE       0     0     0
             raidz2    ONLINE       0     0     0
               c3t0d0  ONLINE       0     0     0
               c3t1d0  ONLINE       0     0     0
               c4d1    ONLINE       0     0     0
               c5d0    ONLINE       0     0     0
               c5d1    ONLINE       0     0     0
               c7d0    ONLINE       0     0     0
     
    errors: No known data errors
    .oO(root@st0wlaris ~)

    Then identify the available pool and file system space:

    .oO(root@st0wlaris ~) zpool list
    NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
    rpool   298G  7.33G   291G     2%  ONLINE  -
    zpool  5.44T   216K  5.44T     0%  ONLINE  -
    .oO(root@st0wlaris ~) zfs list
    NAME                     USED  AVAIL  REFER  MOUNTPOINT
    rpool                   11.3G   282G    72K  /rpool
    rpool/ROOT              3.31G   282G    18K  legacy
    rpool/ROOT/opensolaris  3.31G   282G  3.24G  /
    rpool/dump              4.00G   282G  4.00G  -
    rpool/export            23.2M   282G    19K  /export
    rpool/export/home       23.1M   282G    19K  /export/home
    rpool/export/home/st0w  23.1M   282G  23.1M  /export/home/st0w
    rpool/swap              4.00G   286G    16K  -
    zpool                    120K  3.56T  36.0K  /zpool

    You can see the disparity between the zpool list and zfs list numbers for the new pool: 5.44TB of raw space, but only 3.56TB actually usable. This is expected, as two of the six 1TB drives have been effectively sacrificed to parity.
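    Part of the gap is also units: drive makers count in powers of ten, while zpool and zfs report in powers of two. A quick sketch of the arithmetic (ignoring ZFS metadata overhead):

    ```python
    # Vendor terabytes (10**12 bytes) vs. binary terabytes (2**40 bytes).
    TB, TiB = 10**12, 2**40

    raw = 6 * TB / TiB      # all six data disks, as zpool list counts them
    usable = 4 * TB / TiB   # six disks minus two parity disks
    print(round(raw, 2))    # close to the 5.44T zpool list reports
    print(round(usable, 2)) # close to the 3.56T zfs list reports
    ```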

    Next just create any file systems you want, and you’re done. I know. Criminally easy, isn’t it?

    .oO(root@st0wlaris ~) zfs create zpool/media
    .oO(root@st0wlaris ~) zfs create zpool/software

    And now make them available read-only via NFS:

    .oO(root@st0wlaris ~) zfs set sharenfs=ro zpool/media
    .oO(root@st0wlaris ~) zfs set sharenfs=ro zpool/software

    Every file system draws from the same pool of storage as every other file system in that pool. But the big advantage is that each one behaves, and can be controlled, like a fully independent file system. So you can set quotas, different rules, whatever you so desire, on a per-file-system basis.
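    For example, quotas and compression are per-file-system ZFS properties (the property names below are standard; the values are just illustrative):

    ```shell
    # Cap the media file system at 2TB without touching its siblings
    zfs set quota=2T zpool/media

    # Compress only the software tree
    zfs set compression=on zpool/software

    # Review what's set where
    zfs get quota,compression zpool/media zpool/software
    ```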

    There is a lot you can do with ZFS, and this just deals with getting it setup in the first place. I’ll be posting more as I do different things with the server.
