Solaris 11: configuring a mirrored ZFS root pool

Boris HUISGEN October 6, 2012

system administration solaris zfs

This procedure details how to mirror the ZFS root pool of a Solaris 11 x86 installation; the same steps also apply to Solaris 10. The system was installed on the first disk, c4t0d0. Below is a memo of the required commands.

root@solaris:~# cfgadm -s 'select=type(disk)'

Ap_Id                          Type         Receptacle   Occupant     Condition
sata7/0::dsk/c4t0d0            disk         connected    configured   ok
sata7/2::dsk/c4t2d0            disk         connected    configured   ok

root@solaris:~# fdisk -W - c4t0d0p0

* /dev/rdsk/c4t0d0p0 default fdisk table
* Dimensions:
*    512 bytes/sector
*     63 sectors/track
*    255 tracks/cylinder
*   2088 cylinders
*

* Id    Act  Bhead  Bsect  Bcyl    Ehead  Esect  Ecyl    Rsect      Numsect
191   128  0      1      1       254    63     1023    16065      33527655
0     0    0      0      0       0      0      0       0          0
0     0    0      0      0       0      0      0       0          0
0     0    0      0      0       0      0      0       0          0

root@solaris:~# fdisk -B c4t2d0p0
root@solaris:~# fdisk -W - c4t2d0p0

* /dev/rdsk/c4t2d0p0 default fdisk table
* Dimensions:
*    512 bytes/sector
*     63 sectors/track
*    255 tracks/cylinder
*   2088 cylinders
*

* Id    Act  Bhead  Bsect  Bcyl    Ehead  Esect  Ecyl    Rsect      Numsect
191   128  0      1      1       254    63     1023    16065      33527655
0     0    0      0      0       0      0      0       0          0
0     0    0      0      0       0      0      0       0          0
0     0    0      0      0       0      0      0       0          0
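Both disks should now carry an identical fdisk table. As a quick sanity check (a sketch; the temporary file names are arbitrary), the two tables can be dumped and compared, stripping the comment lines that embed the device path and would always differ:

```shell
# Sketch: compare the fdisk tables of both disks.
# Comment lines start with '*' and contain the device name, so they
# are filtered out before the comparison.
fdisk -W - c4t0d0p0 | grep -v '^\*' > /tmp/fdisk.c4t0d0
fdisk -W - c4t2d0p0 | grep -v '^\*' > /tmp/fdisk.c4t2d0
diff /tmp/fdisk.c4t0d0 /tmp/fdisk.c4t2d0 && echo "fdisk tables match"
```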

root@solaris:~# prtvtoc /dev/rdsk/c4t0d0s0 | fmthard -s - /dev/rdsk/c4t2d0s0
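The command above reads the VTOC (the Solaris slice table) of the source disk with prtvtoc and replays it onto the target disk with fmthard -s -. A hedged verification sketch, analogous to the fdisk check, compares the two slice tables while ignoring the prtvtoc comment lines that carry the device path:

```shell
# Sketch: verify the slice table was replicated to the second disk.
# prtvtoc comment lines (starting with '*') embed the device path,
# so strip them before comparing the slice entries themselves.
prtvtoc /dev/rdsk/c4t0d0s0 | grep -v '^\*' > /tmp/vtoc.src
prtvtoc /dev/rdsk/c4t2d0s0 | grep -v '^\*' > /tmp/vtoc.dst
diff /tmp/vtoc.src /tmp/vtoc.dst && echo "slice tables match"
```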
root@solaris:~# format

Searching for disks...done

AVAILABLE DISK SELECTIONS:
0. c4t0d0 <ATA-VBOX HARDDISK-1.0 cyl 2085 alt 2 hd 255 sec 63>
/pci@0,0/pci8086,2829@d/disk@0,0
1. c4t2d0 <ATA-VBOX HARDDISK-1.0 cyl 2085 alt 2 hd 255 sec 63>
/pci@0,0/pci8086,2829@d/disk@2,0
Specify disk (enter its number): 0
selecting c4t0d0
[disk formatted]
/dev/dsk/c4t0d0s0 is part of active ZFS pool rpool. Please see zpool(1M).

format> verify

Primary label contents:

Volume name = <        >
ascii name  = <ATA-VBOX HARDDISK-1.0 cyl 2085 alt 2 hd 255 sec 63>
pcyl        = 2087
ncyl        = 2085
acyl        =    2
bcyl        =    0
nhead       =  255
nsect       =   63
Part      Tag    Flag     Cylinders        Size            Blocks
0       root    wm       1 - 2084       15.96GB    (2084/0/0) 33479460
1 unassigned    wm       0               0         (0/0/0)           0
2     backup    wu       0 - 2086       15.99GB    (2087/0/0) 33527655
3 unassigned    wm       0               0         (0/0/0)           0
4 unassigned    wm       0               0         (0/0/0)           0
5 unassigned    wm       0               0         (0/0/0)           0
6 unassigned    wm       0               0         (0/0/0)           0
7 unassigned    wm       0               0         (0/0/0)           0
8       boot    wu       0 -    0        7.84MB    (1/0/0)       16065
9 unassigned    wm       0               0         (0/0/0)           0

format> disk

selecting c4t2d0
[disk formatted]

format> verify

Primary label contents:

Volume name = <        >
ascii name  = <DEFAULT cyl 2085 alt 2 hd 255 sec 63>
pcyl        = 2087
ncyl        = 2085
acyl        =    2
bcyl        =    0
nhead       =  255
nsect       =   63
Part      Tag    Flag     Cylinders        Size            Blocks
0       root    wm       1 - 2084       15.96GB    (2084/0/0) 33479460
1 unassigned    wu       0               0         (0/0/0)           0
2     backup    wu       0 - 2086       15.99GB    (2087/0/0) 33527655
3 unassigned    wu       0               0         (0/0/0)           0
4 unassigned    wu       0               0         (0/0/0)           0
5 unassigned    wu       0               0         (0/0/0)           0
6 unassigned    wu       0               0         (0/0/0)           0
7 unassigned    wu       0               0         (0/0/0)           0
8       boot    wu       0 -    0        7.84MB    (1/0/0)       16065
9 unassigned    wu       0               0         (0/0/0)           0
root@solaris:~# zpool status
pool: rpool
state: ONLINE
scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c4t0d0s0  ONLINE       0     0     0

errors: No known data errors
root@solaris:~# zpool attach -f rpool c4t0d0s0 c4t2d0s0
Make sure to wait until resilver is done before rebooting.
root@solaris:~# zpool status
  pool: rpool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Mon Oct  1 23:30:04 2012
    921M scanned out of 5.90G at 19.6M/s, 0h4m to go
    921M resilvered, 15.23% done
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c4t0d0s0  ONLINE       0     0     0
            c4t2d0s0  ONLINE       0     0     0  (resilvering)

errors: No known data errors
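As the message from zpool attach warns, the machine must not be rebooted before the resilver finishes. Rather than polling zpool status by hand, a small loop can wait for completion; this is a sketch that assumes the scan line reads "resilver in progress" while the copy is running, as in the output above:

```shell
# Sketch: block until the resilver of rpool completes before rebooting.
# Assumes "resilver in progress" appears in the `zpool status` scan
# line while the resilver is running.
while zpool status rpool | grep -q 'resilver in progress'; do
    sleep 60
done
echo "resilver finished"
```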

root@solaris:~# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c4t2d0s0
stage2 written to partition 0, 282 sectors starting at 50 (abs 16115)
stage1 written to partition 0 sector 0 (abs 16065)
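installgrub is x86-specific: it writes the GRUB stage1/stage2 boot blocks so that the machine can also boot from the new mirror member. On SPARC the equivalent step uses installboot instead. A hedged sketch, assuming the stock Solaris bootblk location under /usr/platform:

```shell
# Sketch: on SPARC, write the ZFS boot block with installboot instead
# of installgrub (which is x86 only). The bootblk path assumes the
# standard Solaris layout.
case "$(uname -p)" in
    sparc)
        installboot -F zfs "/usr/platform/$(uname -i)/lib/fs/zfs/bootblk" \
            /dev/rdsk/c4t2d0s0
        ;;
    *)
        echo "x86 system: installgrub already done"
        ;;
esac
```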

See also

FreeBSD: cloning a live server with ZFS
FreeBSD: clean install with a ZFS root pool
FreeBSD: booting an alternate kernel