Old 06-18-2012, 06:51 AM   #1
vermaden
Member
 
Registered: Jan 2006
Location: pl_PL.lodz
Distribution: FreeBSD
Posts: 406

Rep: Reputation: 89
HOWTO: ZFS Madness (BEADM on FreeBSD)


0. This is SPARTA!

Some time ago I found a good, reliable way of installing and using FreeBSD and described it in my Modern FreeBSD Install [1] [2] HOWTO. Now, more than a year later, I come back with my experiences with that setup and a proposal for a newer and probably better way of doing it.

1. Introduction

As a year ago, I assume that You want to create a fresh installation of FreeBSD using one or more hard disks, either with GELI-based full disk encryption (for laptops) or without it.

This guide was written when FreeBSD 9.0 and 8.3 were available and definitely works on 9.0, but I did not try all of this on the older 8.3. If You find some issues on 8.3, let me know and I will try to address them in this guide.

Earlier, I was not that confident about booting from a ZFS pool, but there is one very neat feature that made me think that ZFS boot is now mandatory. If You just smiled, You know that I am thinking about the Boot Environments feature from the Illumos/Solaris systems.

In case You are not familiar with the Boot Environments feature, check the Managing Boot Environments with Solaris 11 Express white paper [3]. Illumos/Solaris has the beadm(1M) [4] utility, and while Philipp Wuensche wrote the manageBE script as a replacement [5], it uses the older style from the times when OpenSolaris (and SUN) were still having a great time.
I spent the last couple of days writing an up-to-date, FreeBSD-compatible replacement for the beadm utility, and with some tweaks from today I just made it available at SourceForge [6] if You wish to test it. Currently it is about 200 lines long, so it should be pretty simple to take a look at it. I tried to make it as compatible as possible with the 'upstream' version, along with some small improvements; it currently supports the basic functions like list, create, destroy and activate.

Code:
# beadm
usage:
  beadm subcommand cmd_options

  subcommands:

  beadm activate beName
  beadm create [-e nonActiveBe | beName@snapshot] beName
  beadm create beName@snapshot
  beadm destroy beName
  beadm destroy beName@snapshot
  beadm list
There are several subtle differences between my implementation and Philipp's. He defines and then relies upon a ZFS property called freebsd:boot-environment=1 for each boot environment; I do not set any additional ZFS properties. There is already the org.freebsd:swap property used for SWAP on FreeBSD, so we may use org.freebsd:be in the future, but that is just a thought, right now it is not used. My version also supports activating boot environments received with the zfs recv command from other systems (it just updates the appropriate /boot/zfs/zpool.cache file).
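
If You would like to peek at such properties Yourself, zfs get shows them; for example (the sys/swap dataset exists only if You create the SWAP volume as shown later in section 4.):

Code:
# zfs get -s local all sys/ROOT/default
# zfs get org.freebsd:swap sys/swap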

My implementation is also output-compatible with the current Illumos/Solaris beadm(1M), as in the example below.
Code:
# beadm create -e default upgrade-test
Created successfully

# beadm list
BE           Active Mountpoint Space Policy Created
default      N      /          1.06M static 2012-02-03 15:08
upgrade-test R      -           560M static 2012-04-24 22:22
new          -      -             8K static 2012-04-24 23:40

# zfs list -r sys/ROOT
NAME                    USED  AVAIL  REFER  MOUNTPOINT
sys/ROOT                562M  8.15G   144K  none
sys/ROOT/default       1.48M  8.15G   558M  legacy
sys/ROOT/new              8K  8.15G   558M  none
sys/ROOT/upgrade-test   560M  8.15G   558M  none

# beadm activate default
Activated successfully

# beadm list
BE           Active Mountpoint Space Policy Created
default      NR     /          1.06M static 2012-02-03 15:08
upgrade-test -      -           560M static 2012-04-24 22:22
new          -      -             8K static 2012-04-24 23:40
The boot environments are located in the same place as in Illumos/Solaris, under pool/ROOT/environment.

2. Now You're Thinking with Portals

The main purpose of the Boot Environments concept is to make all risky tasks harmless and to provide an easy way back from possible troubles. Think about upgrading the system to a newer version, updating 30+ installed packages to their latest versions, testing software or various solutions before taking the final decision, and much more. All these tasks are now harmless thanks to the Boot Environments, but this is just the tip of the iceberg.

You can now move a desired boot environment to another machine, physical or virtual, and check how it will behave there, verify hardware support on different hardware for example, or make a painless hardware upgrade. You may also clone Your desired boot environment and ... start it as a Jail for some more experiments, or move Your old physical server install into a FreeBSD Jail because it is not that heavily used anymore but it still has to be available.

Another good example is a server freshly created on Your laptop inside a VirtualBox virtual machine. After You finish the creation process and tests, You may move this boot environment to the real server and put it into production, or even move it into a VMware ESX/vSphere virtual machine and use it there.
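
Moving a boot environment around boils down to zfs send/recv; for example, to dump one into a file that can be carried over to a virtual machine (the snapshot, file and target dataset names below are only illustrative):

Code:
# zfs snapshot sys/ROOT/default@migrate
# zfs send sys/ROOT/default@migrate > /storage/default-be.zfs
... and on the target system:
# zfs recv -u sys/ROOT/imported < /storage/default-be.zfs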

As You can see, the possibilities with Boot Environments are almost unlimited.

3. The Install Process

I have created 3 possible schemes below which should cover most demands; choose one and continue to the next step.

3.1. Server with Two Disks

I assume that this server has 2 disks and that we will create a ZFS mirror across them, so if either of them dies the system will still work as usual. I also assume that these disks are ada0 and ada1. If You have SCSI/SAS drives there, they may be named da0 and da1 accordingly. The procedures below will wipe all data on these disks, You have been warned.
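
If You are not sure how Your disks are named, You can list them first from the Live CD shell; both tools come in the base system:

Code:
# camcontrol devlist
# gpart show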

Code:
 1. Boot from the FreeBSD USB/DVD.
 2. Select the 'Live CD' option.
 3. login: root
 4. # sh
 5. # DISKS="ada0 ada1"
 6. # for I in ${DISKS}; do
    > NUMBER=$( echo ${I} | tr -c -d '0-9' )
    > gpart destroy -F ${I}
    > gpart create -s GPT ${I}
    > gpart add -t freebsd-boot -l bootcode${NUMBER} -s 128k ${I}
    > gpart add -t freebsd-zfs -l sys${NUMBER} ${I}
    > gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ${I}
    > done
 7. # zpool create -f -o cachefile=/tmp/zpool.cache sys mirror /dev/gpt/sys*
 8. # zfs set mountpoint=none sys
 9. # zfs set checksum=fletcher4 sys
10. # zfs set atime=off sys
11. # zfs create sys/ROOT
12. # zfs create -o mountpoint=/mnt sys/ROOT/default
13. # zpool set bootfs=sys/ROOT/default sys
14. # cd /usr/freebsd-dist/
15. # for I in base.txz kernel.txz; do
    > tar --unlink -xvpJf ${I} -C /mnt
    > done
16. # cp /tmp/zpool.cache /mnt/boot/zfs/
17. # cat << EOF >> /mnt/boot/loader.conf
    > zfs_load=YES
    > vfs.root.mountfrom="zfs:sys/ROOT/default"
    > EOF
18. # cat << EOF >> /mnt/etc/rc.conf
    > zfs_enable=YES
    > EOF
19. # :> /mnt/etc/fstab
20. # zfs umount -a
21. # zfs set mountpoint=legacy sys/ROOT/default
22. # reboot
After these instructions and a reboot, we have the GPT partitions below; this example uses 512MB disks.

Code:
# gpart show
=>     34  1048509  ada0  GPT  (512M)
       34      256     1  freebsd-boot  (128k)
      290  1048253     2  freebsd-zfs  (511M)

=>     34  1048509  ada1  GPT  (512M)
       34      256     1  freebsd-boot  (128k)
      290  1048253     2  freebsd-zfs  (511M)

# gpart list | grep label
   label: bootcode0
   label: sys0
   label: bootcode1
   label: sys1

# zpool status
  pool: sys
 state: ONLINE
 scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        sys           ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            gpt/sys0  ONLINE       0     0     0
            gpt/sys1  ONLINE       0     0     0

errors: No known data errors
3.2. Server with One Disk

If Your server has only one disk, let's assume it is ada0, then You only need different steps 5. and 7.; use these instead of the ones above.

Code:
5. # DISKS="ada0"
7. # zpool create -f -o cachefile=/tmp/zpool.cache sys /dev/gpt/sys*
All other steps are the same.

3.3. Road Warrior Laptop

The procedure is quite different for a laptop, because we will use the full disk encryption mechanism provided by GELI and then set up the ZFS pool on top of it. It is not currently possible to boot from a ZFS pool on top of an encrypted GELI provider, so we will use a setup similar to the Server with One Disk one, but with an additional local pool for the /home and /root filesystems. It will be password based and You will be asked to type in that password at every boot. The install process is generally the same, with new instructions added for the GELI encrypted local pool: the additional local partition in step 6., steps 14.-21., the geom_eli_load line in step 25. and the mountpoints in steps 30.-31.

Code:
 1. Boot from the FreeBSD USB/DVD.
 2. Select the 'Live CD' option.
 3. login: root
 4. # sh
 5. # DISKS="ada0"
 6. # for I in ${DISKS}; do
    > NUMBER=$( echo ${I} | tr -c -d '0-9' )
    > gpart destroy -F ${I}
    > gpart create -s GPT ${I}
    > gpart add -t freebsd-boot -l bootcode${NUMBER} -s 128k ${I}
    > gpart add -t freebsd-zfs -l sys${NUMBER} -s 10G ${I}
    > gpart add -t freebsd-zfs -l local${NUMBER} ${I}
    > gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ${I}
    > done
 7. # zpool create -f -o cachefile=/tmp/zpool.cache sys /dev/gpt/sys0
 8. # zfs set mountpoint=none sys
 9. # zfs set checksum=fletcher4 sys
10. # zfs set atime=off sys
11. # zfs create sys/ROOT
12. # zfs create -o mountpoint=/mnt sys/ROOT/default
13. # zpool set bootfs=sys/ROOT/default sys
14. # geli init -b -s 4096 -e AES-CBC -l 128 /dev/gpt/local0
15. # geli attach /dev/gpt/local0
16. # zpool create -f -o cachefile=/tmp/zpool.cache local /dev/gpt/local0.eli
17. # zfs set mountpoint=none local
18. # zfs set checksum=fletcher4 local
19. # zfs set atime=off local
20. # zfs create local/home
21. # zfs create -o mountpoint=/mnt/root local/root
22. # cd /usr/freebsd-dist/
23. # for I in base.txz kernel.txz; do
    > tar --unlink -xvpJf ${I} -C /mnt
    > done
24. # cp /tmp/zpool.cache /mnt/boot/zfs/
25. # cat << EOF >> /mnt/boot/loader.conf
    > zfs_load=YES
    > geom_eli_load=YES
    > vfs.root.mountfrom="zfs:sys/ROOT/default"
    > EOF
26. # cat << EOF >> /mnt/etc/rc.conf
    > zfs_enable=YES
    > EOF
27. # :> /mnt/etc/fstab
28. # zfs umount -a
29. # zfs set mountpoint=legacy sys/ROOT/default
30. # zfs set mountpoint=/home local/home
31. # zfs set mountpoint=/root local/root
32. # reboot
After these instructions and a reboot, we have the GPT partitions below; this example uses a 4GB disk.

Code:
# gpart show
=>     34  8388541  ada0  GPT  (4.0G)
       34      256     1  freebsd-boot  (128k)
      290  2097152     2  freebsd-zfs  (1.0G)
  2097442  6291133     3  freebsd-zfs  (3G)

# gpart list | grep label
   label: bootcode0
   label: sys0
   label: local0

# zpool status
  pool: local
 state: ONLINE
 scan: none requested
config:

        NAME              STATE     READ WRITE CKSUM
        local             ONLINE       0     0     0
          gpt/local0.eli  ONLINE       0     0     0

errors: No known data errors

  pool: sys
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        sys         ONLINE       0     0     0
          gpt/sys0  ONLINE       0     0     0

errors: No known data errors
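
One additional step that I would suggest for the encrypted setup (it is not part of the original procedure): back up the GELI metadata of the local0 provider with geli backup, so a damaged metadata sector will not lock You out of Your data for good; geli restore brings it back when needed.

Code:
# geli backup /dev/gpt/local0 /root/local0.eli.backup
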
4. Basic Setup after Install

1. Login as root with empty password.
login: root
password: [ENTER]


2. Create initial snapshot after install.
# zfs snapshot -r sys/ROOT/default@install

3. Set new root password.
# passwd

4. Set machine's hostname.
# echo hostname=hostname.domain.com >> /etc/rc.conf

5. Set proper timezone.
# tzsetup

6. Add some swap space.
If You used one of the Server with ... schemes, then use this to add swap.

Code:
# zfs create -V 1G -o org.freebsd:swap=on \
                   -o checksum=off \
                   -o sync=disabled \
                   -o primarycache=none \
                   -o secondarycache=none sys/swap
# swapon /dev/zvol/sys/swap
If You used the Road Warrior Laptop one, then use the one below; this way the swap space will also be encrypted.

Code:
# zfs create -V 1G -o org.freebsd:swap=on \
                   -o checksum=off \
                   -o sync=disabled \
                   -o primarycache=none \
                   -o secondarycache=none local/swap
# swapon /dev/zvol/local/swap
7. Create a snapshot called configured or production.
After You have configured Your fresh FreeBSD system and added the needed packages and services, create a snapshot called configured or production, so if You mess something up later, You can always go back in time to bring the working configuration back.

# zfs snapshot -r sys/ROOT/default@configured
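
Should You ever need it, rolling back to that snapshot is a single command. A word of warning: rolling back the currently running root filesystem is best done from another boot environment or from single user mode, and the -r flag destroys any snapshots newer than the given one.

Code:
# zfs rollback -r sys/ROOT/default@configured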

5. Enable Boot Environments

Here are some simple instructions on how to download and enable the beadm command line utility for easy Boot Environments administration.

Code:
# fetch -o /usr/sbin/beadm https://downloads.sourceforge.net/project/beadm/beadm
# chmod +x /usr/sbin/beadm
# rehash
# beadm list
BE      Active Mountpoint Space Policy Created
default NR     /           592M static 2012-04-25 02:03
6. WYSIWTF

Now that we have a working ZFS-only FreeBSD system, I will show some examples of what You can do with this type of installation and, of course, with the Boot Environments feature.

6.1. Create New Boot Environment Before Upgrade

1. Create new environment from the current one.
# beadm create upgrade
Created successfully


2. Activate it.
# beadm activate upgrade
Activated successfully


3. Reboot into it.
# shutdown -r now

4. Mess with it.

You are now free to do anything You like for the upgrade process; even if You break everything, You still have the working default environment.
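
If things go wrong, the way back is just as simple - activate the old environment again and reboot:

Code:
# beadm activate default
Activated successfully
# shutdown -r now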

6.2. Perform Upgrade within a Jail

This concept is about creating a new boot environment from the desired one, let's call it jailed, then starting that new environment inside a FreeBSD Jail and performing the upgrade there. After You have finished all tasks related to the upgrade and You are satisfied with the achieved results, shut down that Jail, activate the just upgraded boot environment called jailed and reboot into the upgraded system without any risks.

1. Create new boot environment called jailed.
# beadm create -e default jailed
Created successfully


2. Create /usr/jails directory.
# mkdir /usr/jails

3. Set mount point of new boot environment to /usr/jails/jailed dir.
# zfs set mountpoint=/usr/jails/jailed sys/ROOT/jailed

3.1. Make new Jail dataset mountable.
# zfs set canmount=noauto sys/ROOT/jailed

3.2. Mount new Jail dataset.
# zfs mount sys/ROOT/jailed

4. Enable FreeBSD Jails mechanism and the jailed Jail in /etc/rc.conf file.
# cat << EOF >> /etc/rc.conf
> jail_enable=YES
> jail_list="jailed"
> jail_jailed_rootdir="/usr/jails/jailed"
> jail_jailed_hostname="jailed"
> jail_jailed_ip="10.20.30.40"
> jail_jailed_devfs_enable="YES"
> EOF


5. Start the Jails mechanism.
# /etc/rc.d/jail start
Configuring jails:.
Starting jails: jailed.


6. Check if the jailed Jail started.
Code:
# jls
   JID  IP Address      Hostname                      Path
     1  10.20.30.40     jailed                        /usr/jails/jailed
7. Log into the jailed Jail.
# jexec 1 tcsh

8. PERFORM ACTUAL UPGRADE.
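
The actual upgrade method is up to You. Just as an example (not part of the original procedure), minor/security updates of a 9.0 system could be applied to the Jail's root from the host with freebsd-update(8) and its -b basedir option:

Code:
# freebsd-update -b /usr/jails/jailed fetch install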

9. Stop the jailed Jail.
# /etc/rc.d/jail stop
Stopping jails: jailed.


10. Disable Jails mechanism in /etc/rc.conf file.
# sed -i '' -E s/"^jail_enable.*$"/"jail_enable=NO"/g /etc/rc.conf

11. Activate just upgraded jailed boot environment.
# beadm activate jailed
Activated successfully


12. Reboot into upgraded system.

6.3. Import Boot Environment from Other Machine

Let's assume that You need to upgrade or do some major modification to one of Your servers. You will then create a new boot environment from the default one, move it to another 'free' machine, perform these tasks there, and after everything is done, move the modified boot environment back to production without any risks. You may as well transport that environment onto Your laptop/workstation and upgrade it in a Jail as in section 6.2 of this guide.
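
This assumes that the test machine also boots from a ZFS pool laid out as in this guide, with a sys/ROOT container for the boot environments. A quick sanity check before sending anything (TEST is just a placeholder for the target host):

Code:
# ssh TEST zfs list -r sys/ROOT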

1. Create new environment on the production server.
# beadm create upgrade
Created successfully.


2. Send the upgrade environment to the test server. A snapshot is needed first, as zfs send operates on snapshots.
# zfs snapshot sys/ROOT/upgrade@migrate
# zfs send sys/ROOT/upgrade@migrate | ssh TEST zfs recv -u sys/ROOT/upgrade

3. Activate the upgrade environment on the test server.
# beadm activate upgrade
Activated successfully.


4. Reboot into the upgrade environment on the test server.
# shutdown -r now

5. PERFORM ACTUAL UPGRADE AFTER REBOOT.

6. Send the upgraded upgrade environment back onto the production server, again via a snapshot; since sys/ROOT/upgrade already exists there from step 1., it is received with -F to overwrite it.
# zfs snapshot sys/ROOT/upgrade@done
# zfs send sys/ROOT/upgrade@done | ssh PRODUCTION zfs recv -uF sys/ROOT/upgrade

7. Activate upgraded upgrade environment on the production server.
# beadm activate upgrade
Activated successfully.


8. Reboot into the upgrade environment on the production server.
# shutdown -r now


7. References

[1] http://forums.freebsd.org/showthread.php?t=10334
[2] http://forums.freebsd.org/showthread.php?t=12082
[3] http://docs.oracle.com/cd/E19963-01/pdf/820-6565.pdf
[4] http://docs.oracle.com/cd/E19963-01/.../beadm-1m.html
[5] http://anonsvn.h3q.com/projects/free.../wiki/manageBE
[6] https://sourceforge.net/projects/beadm/


The last part of the HOWTO remains the same as a year ago ...

You can now add Your users, services and packages as usual on any FreeBSD system. Have fun!
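
For example, a first session could look like this (only an illustration; the package name is made up, and pkg_add -r was the binary package tool of the FreeBSD 9.0 era):

Code:
# adduser
# pkg_add -r sudo
# echo 'sshd_enable=YES' >> /etc/rc.conf
# /etc/rc.d/sshd start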

Last edited by vermaden; 06-20-2012 at 07:21 AM.
 
Old 06-19-2012, 04:39 PM   #2
nixblog
Member
 
Registered: May 2012
Posts: 426

Rep: Reputation: 53
Thanks for sharing
 
Old 06-19-2012, 05:04 PM   #3
vermaden
Member
 
Registered: Jan 2006
Location: pl_PL.lodz
Distribution: FreeBSD
Posts: 406

Original Poster
Rep: Reputation: 89
Welcome
 
1 member found this post helpful.
Old 06-19-2012, 11:22 PM   #4
hitest
Guru
 
Registered: Mar 2004
Location: Canada
Distribution: Void, Debian, Slackware
Posts: 7,342

Rep: Reputation: 3746
vermaden,

That is awesome. Thank you!
 
Old 06-20-2012, 07:22 AM   #5
vermaden
Member
 
Registered: Jan 2006
Location: pl_PL.lodz
Distribution: FreeBSD
Posts: 406

Original Poster
Rep: Reputation: 89
A little ERRATA, thanks to srivo:

Quote:
3.1. Make new Jail dataset mountable.
# zfs set canmount=noauto sys/ROOT/jailed

3.2. Mount new Jail dataset.
# zfs mount sys/ROOT/jailed
 
Old 06-23-2012, 11:21 PM   #6
vermaden
Member
 
Registered: Jan 2006
Location: pl_PL.lodz
Distribution: FreeBSD
Posts: 406

Original Poster
Rep: Reputation: 89
Updates to the beadm utility:

- minor fixes and cleanups
- added a -F switch for the destroy option - it skips the confirmation upon destroy
- implemented the umount option with a -f switch for umount -f (force)
- implemented the mount option with several variants of usage, examples:

Code:
# beadm
usage:
  beadm subcommand cmd_options

  subcommands:

  beadm activate beName
  beadm create [-e nonActiveBe | -e beName@snapshot] beName
  beadm create beName@snapshot
  beadm destroy [-F] beName | beName@snapshot
  beadm list
  beadm mount
  beadm mount beName [mountpoint]
  beadm umount [-f] beName
  beadm rename origBeName newBeName

# beadm mount
update
  sys/ROOT/update  /

# beadm mount test /test
Mounted successfully on '/test'

# beadm mount default
Mounted successfully on '/tmp/tmp.KhAtHe'

# beadm mount
default
  sys/ROOT/default  /tmp/tmp.KhAtHe

test
  sys/ROOT/test            /test
  sys/ROOT/test/SOMETHING  /test/test

update
  sys/ROOT/update  /

# beadm umount test
Unmounted successfully

# beadm umount -f default
Unmounted successfully
Please report all problems and BUGs.
 
2 members found this post helpful.
Old 09-06-2012, 08:07 AM   #7
vermaden
Member
 
Registered: Jan 2006
Location: pl_PL.lodz
Distribution: FreeBSD
Posts: 406

Original Poster
Rep: Reputation: 89
beadm 0.8 has just been committed to the Ports tree:

http://freshports.org/sysutils/beadm

Changelog:

Code:
-- Introduce proper space calculation by each boot environment in *beadm list*
-- Rework the *beadm destroy* command so no orphans are left after destroying boot environment.
-- Fix the *beadm mount* and *beadm umount* commands error handling.
-- Rework consistency of all error and informational messages.
-- Simplify and cleanup code where possible.
-- Fix *beadm destroy* for 'static' (not cloned) boot environments received by *zfs receive* command.
-- Use mktemp(1) where possible.
-- Implement *beadm list -a* option to list all datasets and snapshots of boot environments.
-- Add proper mountpoint listing to the *beadm list* command.
   % beadm list
   BE      Active Mountpoint       Space Created
   default NR     /                11.0G 2012-07-28 00:01
   test1   -      /tmp/tmp.IUQuFO  41.2M 2012-08-27 21:20
   test2   -      -                56.6M 2012-08-27 21:20

-- Change snapshot format to the one used by original *beadm* command
(%Y-%m-%d-%H:%M:%S).
   % zfs list -t snapshot -o name -r sys/ROOT/default
   NAME
   sys/ROOT/default@2012-08-27-21:20:00
   sys/ROOT/default@2012-08-27-21:20:18

-- Implement *beadm list -D* option to display space that would be consumed by single boot environment if all other boot environments will be destroyed.
   % beadm list -D
   BE      Active Mountpoint       Space Created
   default NR     /                 9.4G 2012-07-28 00:01
   test1   -      /tmp/tmp.IUQuFO   8.7G 2012-08-27 21:20
   test2   -                        8.7G 2012-08-27 21:20

-- Add an option to BEADM DESTROY command to not destroy manually created snapshots used for boot environment.

   # beadm destroy test1
   Are you sure you want to destroy 'test1'?
   This action cannot be undone (y/[n]): y
   Boot environment 'test1' was created from existing snapshot
   Destroy 'default@test1' snapshot? (y/[n]): y
   Destroyed successfully

   # beadm destroy test1
   Are you sure you want to destroy 'test1'?
   This action cannot be undone (y/[n]): y
   Boot environment 'test1' was created from existing snapshot
   Destroy 'default@test1' snapshot? (y/[n]): n
   Origin snapshot 'default@test1' will be preserved
   Destroyed successfully
 
  

