Patching and OS upgrades with live upgrade
 

First, upgrade the Live Upgrade packages themselves from the install media of the target OS:


lofiadm -a /export/install/wip/sol-u6.iso
mount -F hsfs /dev/lofi/1 /mnt
/mnt/Solaris_10/Tools/Installers/liveupgrade20
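The Live Upgrade packages on the running system must match the target release, so the old copies should be removed before running the installer from the new media. A sketch of that step (the installer's -noconsole -nodisplay flags and the SUNWlucfg package are assumptions here; SUNWlucfg only exists on newer media, so pkgrm may complain harmlessly on older systems):

```shell
# Remove the old Live Upgrade packages, then install the ones
# shipped on the target release's (already lofi-mounted) media.
pkgrm SUNWlucfg SUNWlur SUNWluu
/mnt/Solaris_10/Tools/Installers/liveupgrade20 -noconsole -nodisplay
```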

Find a target slice or volume for Live Upgrade; it can be a split mirror, a spare slice, or a different disk.


metastat -p shows our target mirror is d4
verify with format as per your own tastes
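If you would rather split an existing root mirror than sacrifice a spare metadevice, a hedged sketch of that variant (device names d0/d20/c1t1d0s0 match the metastat layout on this box; adjust to yours, and note this halves root redundancy until you reattach the submirror):

```shell
# Detach one half of the root mirror and hand its slice to lucreate.
metadetach d0 d20      # split the mirror; root keeps running on d10
metaclear d20          # free the metadevice so the raw slice is usable
lucreate -n s10u6 -c s10u5 -m /:/dev/dsk/c1t1d0s0:ufs
```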

-bash-3.00# metastat -p
d5 -m d15 d25 1
d15 1 1 c1t0d0s5
d25 1 1 c1t1d0s5
d4 -m d40 d41 1
d40 1 1 c1t0d0s4
d41 1 1 c1t1d0s4
d1 -m d11 d21 1
d11 1 1 c1t0d0s1
d21 1 1 c1t1d0s1
d0 -m d10 d20 1
d10 1 1 c1t0d0s0
d20 1 1 c1t1d0s0
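Reading metastat -p: each mirror prints as "dX -m submirror1 submirror2 pass", followed by one "dN 1 1 slice" line per submirror. A quick awk pulls the physical slices behind a given mirror; it is run here against the d4 lines above as sample data (on a live system, replace the printf with `metastat -p`):

```shell
# List the physical slices behind mirror d4 from metastat -p output.
printf 'd4 -m d40 d41 1\nd40 1 1 c1t0d0s4\nd41 1 1 c1t1d0s4\n' |
awk '
  $1 == "d4" && $2 == "-m" { for (i = 3; i < NF; i++) subm[$i] = 1 }
  $1 in subm               { print $NF }
'
# prints:
# c1t0d0s4
# c1t1d0s4
```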


-bash-3.00# df -k
Filesystem            kbytes    used   avail capacity  Mounted on
/dev/md/dsk/d0       20655025 10553221 9895254    52%    /
/devices                   0       0       0     0%    /devices
ctfs                       0       0       0     0%    /system/contract
proc                       0       0       0     0%    /proc
mnttab                     0       0       0     0%    /etc/mnttab
swap                 3386496    1456 3385040     1%    /etc/svc/volatile
objfs                      0       0       0     0%    /system/object
/platform/SUNW,SPARC-Enterprise-T5120/lib/libc_psr/libc_psr_hwcap2.so.1
                     20655025 10553221 9895254    52%    /platform/sun4v/lib/libc_psr.so.1
/platform/SUNW,SPARC-Enterprise-T5120/lib/sparcv9/libc_psr/libc_psr_hwcap2.so.1
                     20655025 10553221 9895254    52%    /platform/sun4v/lib/sparcv9/libc_psr.so.1
fd                         0       0       0     0%    /dev/fd
/dev/md/dsk/d5       8167685  667790 7418219     9%    /var
swap                  524288       0  524288     0%    /tmp
swap                 3385056      16 3385040     1%    /var/run
/dev/lofi/1          2599020 2599020       0   100%    /mnt

 

   OS upgrades


-bash-3.00# lucreate -n s10u6 -c s10u5 -m /:/dev/md/dsk/d4:ufs
Discovering physical storage devices
Discovering logical storage devices
Cross referencing storage devices with boot environment configurations
Determining types of file systems supported
Validating file system requests
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <s10u5>.
Creating initial configuration for primary boot environment <s10u5>.
WARNING: The device </dev/md/dsk/d0> for the root file system mount point </> is not a physical device.
WARNING: The system boot prom identifies the physical device </dev/dsk/c1t0d0s0> as the system boot device.
Is the physical device </dev/dsk/c1t0d0s0> the boot device for the logical device </dev/md/dsk/d0>? (yes or no) yes
INFORMATION: Assuming the boot device </dev/dsk/c1t0d0s0> obtained from the system boot prom is the physical boot device for logical device </dev/md/dsk/d0>.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <s10u5> PBE Boot Device </dev/dsk/c1t0d0s0>.
Comparing source boot environment <s10u5> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Searching /dev for possible boot environment filesystem devices

Updating system configuration files.
The device </dev/dsk/c1t0d0s4> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <s10u6>.
Source boot environment is <s10u5>.
Creating boot environment <s10u6>.
Creating file systems on boot environment <s10u6>.
Creating <ufs> file system for </> in zone <global> on </dev/md/dsk/d4>.
Mounting file systems for boot environment <s10u6>.
Calculating required sizes of file systems for boot environment <s10u6>.
Populating file systems on boot environment <s10u6>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
Creating compare databases for boot environment <s10u6>.
Creating compare database for file system </var>.
Creating compare database for file system </>.
Updating compare databases on boot environment <s10u6>.
Making boot environment <s10u6> bootable.
Setting root slice to Solaris Volume Manager metadevice </dev/md/dsk/d4>.
Population of boot environment <s10u6> successful.
Creation of boot environment <s10u6> successful.


Verify boot environment:

-bash-3.00# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
s10u5                      yes      yes    yes       no     -
s10u6                      yes      no     no        yes    -

Upgrade or patch the system (this assumes the install media is still mounted at /mnt from the lofiadm step above).


Upgrade OS
-bash-3.00# luupgrade -u -n s10u6 -s /mnt


42092 blocks
miniroot filesystem is <lofs>
Mounting miniroot at </mnt/Solaris_10/Tools/Boot>
Validating the contents of the media </mnt>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains <Solaris> version <10>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <s10u6>.
Determining packages to install or upgrade for BE <s10u6>.
Performing the operating system upgrade of the BE <s10u6>.
CAUTION: Interrupting this process may leave the boot environment unstable
or unbootable.
Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Updating package information on boot environment <s10u6>.
Package information successfully updated on boot environment <s10u6>.
Adding operating system patches to the BE <s10u6>.
The operating system patch installation is complete.
INFORMATION: The file </var/sadm/system/logs/upgrade_log> on boot
environment <s10u6> contains a log of the upgrade operation.
INFORMATION: The file </var/sadm/system/data/upgrade_cleanup> on boot
environment <s10u6> contains a log of cleanup operations required.
INFORMATION: Review the files listed above. Remember that all of the files
are located on boot environment <s10u6>. Before you activate boot
environment <s10u6>, determine if any additional system maintenance is
required or if additional media of the software distribution must be
installed.
The Solaris upgrade of the boot environment <s10u6> is complete.

Now we activate the new BE

luactivate "s10u6"
A Live Upgrade Sync operation will be performed on startup of boot environment <s10u6>.


**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Enter the PROM monitor (ok prompt).

2. Boot the machine to Single User mode using a different boot device
(like the Solaris Install CD or Network). Examples:

     At the PROM monitor (ok prompt):
     For boot to Solaris CD:  boot cdrom -s
     For boot to network:     boot net -s

3. Mount the Current boot environment root slice to some directory (like
/mnt). You can use the following command to mount:

     mount -Fufs /dev/dsk/c1t0d0s0 /mnt

4. Run <luactivate> utility with out any arguments from the current boot
environment root slice, as shown below:

     /mnt/sbin/luactivate

5. luactivate, activates the previous working boot environment and
indicates the result.

6. Exit Single User mode and reboot the machine.

**********************************************************************

Modifying boot archive service
Activation of boot environment <s10u6> successful.

Reboot with init 6 or shutdown (not reboot or halt, per the warning above).
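After the reboot it is worth confirming the system really came up on the new BE before anything else; a short sketch of the check and the eventual cleanup (s10u5 is the old BE name from the example above; run ludelete only once you are certain you will never fall back):

```shell
uname -v       # kernel version string should match the new release
lustatus       # s10u6 should now show Active Now and Active On Reboot
# once the new BE has proven itself, reclaim the old one's slice:
ludelete s10u5
```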

Patch OS

Create the new BE just as for an upgrade:


-bash-3.00# lucreate -n patch08 -c prepatch -m /:/dev/md/dsk/d4:ufs
Discovering physical storage devices
Discovering logical storage devices
Cross referencing storage devices with boot environment configurations
Determining types of file systems supported
Validating file system requests
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <prepatch>.
Creating initial configuration for primary boot environment <prepatch>.
WARNING: The device </dev/md/dsk/d0> for the root file system mount point </> is not a physical device.
WARNING: The system boot prom identifies the physical device </dev/dsk/c1t0d0s0> as the system boot device.
Is the physical device </dev/dsk/c1t0d0s0> the boot device for the logical device </dev/md/dsk/d0>? (yes or no) yes
INFORMATION: Assuming the boot device </dev/dsk/c1t0d0s0> obtained from the system boot prom is the physical boot device for logical device </dev/md/dsk/d0>.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <prepatch> PBE Boot Device </dev/dsk/c1t0d0s0>.
Comparing source boot environment <prepatch> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Searching /dev for possible boot environment filesystem devices

Updating system configuration files.
The device </dev/dsk/c1t0d0s4> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <patch08>.
Source boot environment is <prepatch>.
Creating boot environment <patch08>.
Creating file systems on boot environment <patch08>.
Creating <ufs> file system for </> in zone <global> on </dev/md/dsk/d4>.
Mounting file systems for boot environment <patch08>.
Calculating required sizes of file systems for boot environment <patch08>.
Populating file systems on boot environment <patch08>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
Creating compare databases for boot environment <patch08>.
Creating compare database for file system </var>.
Creating compare database for file system </>.
Updating compare databases on boot environment <patch08>.
Making boot environment <patch08> bootable.
Setting root slice to Solaris Volume Manager metadevice </dev/md/dsk/d4>.
Population of boot environment <patch08> successful.
Creation of boot environment <patch08> successful.

-bash-3.00# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
prepatch                   yes      yes    yes       no     -
patch08                    yes      no     no        yes    -

lumount the alt root (I know, I stole the name from IBM):
-bash-3.00# lumount patch08
/.alt.patch08

Now patch the system.
For patch clusters:
-bash-3.00# cd /patch
-bash-3.00# ./install_cluster -R /.alt.patch08

For single patches:
-bash-3.00# patchadd -R /.alt.patch08 patchnumber
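patchadd's -M form handles patch ordering for you; if you apply patches one at a time instead, read the cluster's patch_order file so the sequence is preserved. A sketch, shown as a dry run with echo (drop the echo to actually install; the alternate root /.alt.patch08 is from the example above, and the two patch IDs are samples, since a real cluster ships its own patch_order):

```shell
# Build an ordered list of patchadd commands from a patch_order file.
printf '116340-05\n118833-36\n' > patch_order   # sample file contents
while read -r patch; do
  echo patchadd -R /.alt.patch08 "$patch"
done < patch_order
# prints:
# patchadd -R /.alt.patch08 116340-05
# patchadd -R /.alt.patch08 118833-36
```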

When complete, luactivate the patched BE and reboot with init 6 or shutdown.

example using a cluster:

-bash-3.00# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
d0                         yes      yes    yes       no     -
patch08                    yes      no     no        yes    -
-bash-3.00# lumount patch08
-bash-3.00# df -h
/dev/md/dsk/d4          10G   1.4G   8.6G    31%    /.alt.patch08
/dev/md/dsk/d5           5G    29M   4.9G     2%    /.alt.patch08/var
-bash-3.00# pwd
/patch/9_Recommended
-bash-3.00# patchadd -R /.alt.patch08 -M /patch/9_Recommended ./patch_order
Checking installed patches...
Verifying sufficient filesystem capacity (dry run method)...
Installing patch packages...
...
Patch number 116340-05 has been successfully installed.
See /.alt.patch08/var/sadm/patch/116340-05/log for details.
Patch packages installed:
SUNWgzip
SUNWsfinf
Checking installed patches...
Verifying sufficient filesystem capacity (dry run method)...

<output snip>

-bash-3.00# luactivate patch08

-bash-3.00# init 6