You can boot FreeBSD from ZFS, but my understanding is that, like
Solaris, you can only boot from a zfs mirror or a lower raid level, not
raidz or above. So if you are still stuck with a mirror plus a raidz,
what does it matter whether that mirror is zfs or something else? Plus
we are only talking about a /boot or / partition; you could settle for
a non-raid /boot and put /usr and the gang on zfs. Also, zfs prefers a
64-bit platform with plenty of memory (1G or more is optimal), and it
tends to have stability problems on any 32-bit OS (related to limited
kernel address space) without tweaking.
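For what it's worth, the usual tweak on 32-bit FreeBSD is to grow the
kernel memory map and cap the ARC via /boot/loader.conf; the exact
values below are only a guess for illustration, not a recommendation:
vm.kmem_size="512M"
vm.kmem_size_max="512M"
vfs.zfs.arc_max="160M"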
Some of the things I appreciate most about zfs are the ease of use and
the abundance of volume/fs features you usually only get out of
something like a Netapp (with or without additional licensing!):
directory-based snapshots, snapshot import/export to other systems,
snapshot rollback, raw image creation and export (iscsi, etc.), and of
course flexible volume sizes, where you can cap the maximum usage
and/or guarantee an amount of free space for each volume if you don't
want every volume sharing all of the free space. Plus it allocates
"inodes" (for lack of the proper term coming to mind) on the fly, so
you don't have to worry about having too few at creation time or later.
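To give a flavor of those features, the commands look roughly like
this (the pool/dataset names here are made up for illustration):
zfs snapshot z/data@nightly (take a snapshot)
zfs rollback z/data@nightly (roll the fs back to that snapshot)
zfs send z/data@nightly | ssh otherhost zfs receive backup/data
zfs create -V 10g z/rawvol (raw volume you could export via iscsi)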
I've also heard appreciation for the checksumming of individual files,
because in the case of a raid failure with data loss, it is still
possible to positively verify the integrity of the individual files
that may need to be copied elsewhere (recovered).
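You can also tell the pool to walk and verify every block, and it will
name any files it could not repair (using the pool name "z" from the
examples below):
zpool scrub z (verify all checksums in the background)
zpool status -v z (lists any files with unrecoverable errors)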
In the ease of use department, you can go from empty disks to having
the newly created volume mounted and ready to use with one simple
command: no extensive waits, work, or brain cells required. They have
really gone above and beyond the normal expectations of admin work by
practically obsoleting fstab, exports/dfstab, mount, tunefs,
exportfs/share, newfs, (insert volume manager here), fsck, fdisk,
disklabel/format, volume/fs resizers, ... To experienced admins,
using the above commands is not "hard" by any stretch, but it can be
refreshing to take advantage of the forethought that eliminates the
busywork and takes care of the obvious. For example, if I wanted to
remount /data on /data2 instead, I would normally have to do:
umount /data
mv /data /data2 (or mkdir + chmod)
vi /etc/fstab (..... don't make mistakes here)
mount /data2
versus:
zfs set mountpoint=/data2 z/data (done! assuming not in use.)
and, if you want, you can still instruct it to let you mount it
traditionally.
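For example, going from bare disks to a pool that is created, mounted,
and sharing all its space is a one-liner, and the traditional route is
just one property away (the device names below are hypothetical):
zpool create z mirror da0 da1 (pool "z" created and mounted at /z)
zfs set mountpoint=legacy z/data (now it's yours to manage...)
mount -t zfs z/data /data2 (...with fstab/mount as usual)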
Example system 1:
Filesystem 1024-blocks Used Avail Capacity Mounted on
z/backups 276837120 0 276837120 0% /backups
z/data 375196672 98359552 276837120 26% /data
z 276837120 0 276837120 0% /z
z/backups/host1 280055552 3218432 276837120 1% /backups/host1
z/backups/host2 376857728 100020608 276837120 27% /backups/host2
z/backups/host3 276837120 0 276837120 0% /backups/host3
z/backups/host4 276837120 0 276837120 0% /backups/freespace
# zfs get compressratio z/backups/host1
NAME PROPERTY VALUE SOURCE
z/backups/host1 compressratio 1.41x -
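(That 1.41x is the payoff from turning compression on for the backup
volumes; my guess is it was enabled with something like:)
# zfs set compression=on z/backups/host1
or, trading cpu time for a better ratio:
# zfs set compression=gzip z/backups/host1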
Example system 2:
Filesystem 1024-blocks Used Avail Capacity Mounted on
z 472478976 0 472478976 0% /z
z/10g 10485760 2046464 8439296 20% /z/10g
z/400g 419430400 128 419430272 0% /z/400g
z/dvarchive 474987776 2508800 472478976 1% /z/dvarchive
z/gzip 472579328 100352 472478976 0% /z/gzip
z/obj 472970624 491648 472478976 0% /z/obj
z/ports 472654080 175104 472478976 0% /z/ports
z/ports/distfiles 472520192 41216 472478976 0% /z/ports/distfiles
z/ports2 472753920 274944 472478976 0% /z/ports2
z/ports2/distfiles 472520192 41216 472478976 0% /z/ports2/distfiles
z/src 472788608 309632 472478976 0% /z/src
Here, /z/10g and /z/400g are limited (via quotas; see the example after
the listing below) to 10g and 400g of usage respectively, with no free
space guarantee, so if someone fills up /z/obj completely, /z/10g will
also be full. If I apply a 10g free space reserve to /z/10g:
# zfs set reservation=10g z/10g
then the shared free space decreases by 10g, and df shows the change
for every volume whose own size limit is not smaller than the shared
free space (the 400g volume is the counter example: its 400g cap is
what shows up as Avail, so it doesn't move):
Filesystem 1024-blocks Used Avail Capacity Mounted on
z 464039552 0 464039552 0% /z
z/10g 10485760 2046464 8439296 20% /z/10g
z/400g 419430400 128 419430272 0% /z/400g
z/dvarchive 466548480 2508928 464039552 1% /z/dvarchive
z/gzip 464139904 100352 464039552 0% /z/gzip
z/obj 464531200 491648 464039552 0% /z/obj
z/ports 464214656 175104 464039552 0% /z/ports
z/ports/distfiles 464080768 41216 464039552 0% /z/ports/distfiles
z/ports2 464314496 274944 464039552 0% /z/ports2
z/ports2/distfiles 464080768 41216 464039552 0% /z/ports2/distfiles
z/src 464349312 309760 464039552 0% /z/src
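(For reference, the 10g and 400g caps themselves are just the quota
property; I'd guess they were set with something like:)
# zfs set quota=10g z/10g
# zfs set quota=400g z/400g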
The "Used" amount reflects the space actually used on disk,
so it could be smaller than what "du" or "ls" report if the files
are compressed such as the difference between /z/ports and
/z/ports2 which have the same contents but different compression levels,
or the "Used" could reflect 2 or 3x as much as your files consume
if you instructed zfs to store more than one copy of each file on disk.
Telling zfs to store more than one copy may help read performance
and may help ensure a file is readable incase of a raid redundancy
failure (too many disks failing at once).
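(The multiple-copies knob is the "copies" property, which only affects
newly written data; a hypothetical example:)
# zfs set copies=2 z/backups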
Have I explored zfs? Yes ;) Have I used it yet on Solaris? Nope!
See this post for a quick start guide to zfs if you want to see how
easy and powerful it can be:
http://docs.freebsd.org/cgi/getmsg.cgi?fetch=1402809+0+/usr/local/www/db/text/2007/freebsd-current/20070408.freebsd-current
(if that doesn't work, click
http://docs.freebsd.org/cgi/mid.cgi?20070406214325.GB61039
and go to the "freebsd-current" link)
On Wed, Aug 15, 2007 at 04:19:22PM -0400, Peter Cole wrote:
That is good stuff. With the upcoming release of FreeBSD including
support for ZFS, it will hopefully only be a matter of time before it,
too, can boot from ZFS.
Peter Cole
Information Technologist
Michigan State University Press
-----Original Message-----
From: MSU Network Administrators Group [mailto:[log in to unmask]] On
Behalf Of Matt Kolb
Sent: Wednesday, August 15, 2007 2:07 PM
To: [log in to unmask]
Subject: [MSUNAG] ZFS boot
If any of you are using OpenSolaris, here is a great blog post about
converting to a ZFS root (/):
http://blogs.sun.com/timf/date/20070329
If you're unfamiliar with ZFS but want to learn more, check here:
http://www.opensolaris.org/os/community/zfs/whatis/
Now take what you've read there and imagine you can boot off of it.
No more wasted space or being locked into a bad decision made when you
(or your predecessor) initially set up the box.
Good stuff!
./mk
--
Matt Kolb <[log in to unmask]>
Academic Computing & Network Services
Michigan State University