
Patch Kit Updates

 


The information in this section pertains to the patch kits available for Version 5.1B.

All V5.1B Patch Kits

The following notes pertain to all patch kits.

Some Patch Kits Cannot Be Removed (Feb. 2006)

You cannot remove a patch kit from systems that have the New Hardware Delivery 7 (NHD-7) kit installed when either of the following conditions exists:

  • The patch kit you want to remove was installed before the NHD kit.

    For example, if you installed Patch Kit 2 and then installed NHD-7, you cannot remove that patch kit. However, if you later installed Patch Kit 4, you can remove that patch kit.

  • The patch kit was installed with NHD-7.

    Beginning with the release of Patch Kit 3, patch kits were incorporated into the NHD-7 kits. As a result, when you installed NHD-7, you automatically installed the current patch kit. These patch kits cannot be removed. However, you can remove any subsequent patch kits. For example, if you installed NHD-7 with Patch Kit 4 and later installed Patch Kit 5, you cannot remove Patch Kit 4, but can remove Patch Kit 5.

If you must remove the patch kit, the only solution is to rebuild your system environment by reinstalling the Version 5.1B operating system and then restoring your system to its state before you installed NHD-7 with the unwanted patch kit.
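
To confirm which patch kits are installed before you attempt a removal, you can list the system patch history from the dupatch menus. The following is a minimal sketch; the menu numbers match the dupatch Main Menu shown later in this section, and the exact submenu wording may differ:

# ./dupatch
Enter your choice: 4    <-- Patch Tracking; then select the option that
                            lists the system patch history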

Version 5.1B-6 (Patch Kit 8)

The following notes pertain to Version 5.1B-6.

This release contains only defect fixes; it adds no new features or enhancements.

Version 5.1B-5 (Patch Kit 7)

The following notes pertain to Version 5.1B-5.

Support for 2 TB LUNs (April 2009)

The Tru64 UNIX CAM subsystem now supports LUN sizes of up to 2 TB, up from the previous maximum of 1 TB. This allows Tru64 UNIX to take advantage of the larger LUN sizes supported by storage arrays.
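
As a quick check after presenting a LUN larger than 1 TB from the array, you can rescan the bus and read back the disk label to confirm that CAM sees the full capacity. This is a sketch only; the device name dsk10 is a hypothetical example:

# hwmgr -scan scsi
# hwmgr -view devices    <-- confirm that the new LUN is listed
# disklabel -r dsk10     <-- the label should report the full capacity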

Version 5.1B-4 (Patch Kit 6)

The following notes pertain to Version 5.1B-4.

Manually Cloning a Cluster Using dupclone (May 2007)

This note describes how to manually clone a cluster prior to using the dupclone procedure included on the Version 5.1B-4 CD. See the Patch Kit Installation Instructions (PDF version) for information on using the dupclone script.

Before you begin the cloning procedure, ensure that the cluster passes all dupatch pre-installation checks. If a pre-installation check fails, correct the problem; failing to do so will prevent dupclone from installing the patch kit. The following example shows how to perform the pre-installation check:

NOTE: In the examples presented in this note, text preceded by <-- is a comment and should not be entered.

# pwd
/usr/pk6/patch_kit

# ./dupatch
Enter path to the top of the patch distribution, or enter "q" to quit : /usr/pk6/patch_kit

        * Previous session logs saved in session.log.[1-25]

Tru64 UNIX Patch Utility (Rev. 52-00)
==========================
        - This dupatch session is logged in /var/adm/patch/log/session.log

    Main Menu:
    ---------

    1)  Patch Installation
    2)  Patch Deletion
    3)  Patch Documentation

    4)  Patch Tracking
    5)  Patch Baseline Analysis/Adjustment

    h)  Help on Command Line Interface

    q)  Quit
Enter your choice: 1

Checking Cluster State...done

This system is part of a cluster which has not been prepared to do a 
rolling patch installation or deletion. Do you wish to perform this 
patch operation cluster-wide without using the rolling-patch mechanism?

Please answer y or n ? [y/n]: y

You have chosen to perform this patch operation using the no-roll patch 
method. No-roll patch is designed to patch the cluster as a unit. As a 
result, the entire cluster will be brought to init state 2. After the 
patch operation is completed, the entire cluster will be rebooted. In 
order for the reboot operation to complete correctly, it is necessary 
that cluster quorum be configured to allow for members to be rebooted 
without losing quorum. Do you wish to continue?

Please answer y or n ? [y/n]: y

Tru64 UNIX Patch Utility (Rev. 52-00)
 ==========================
         - This dupatch session is logged in /var/adm/patch/log/session.log

     Patch Installation Menu:
     -----------------------

    1)  Pre-Installation Check ONLY

    3)  Check & Install in Multi-User mode

    b)  Back to Main Menu
    q)  Quit

Enter your choice: 1
NOTE: The “Pre-Installation Check ONLY” procedure does not force the cluster to run-level 2; it remains at run-level 3 (Multiuser mode).

The following steps show you how to clone a two-member cluster with a quorum disk. The cluster is configured as follows:

Current disk set:

root1_domain -> dsk0a (member1 swap dsk0b)
root2_domain -> dsk1a (member2 swap dsk1b)
cluster_root -> dsk2a
cluster_usr -> dsk2g
cluster_var -> dsk2h
quorum disk -> dsk3

Alternative disk set:

alt_root1_domain -> dsk100a (member1 swap dsk100b)
alt_root2_domain -> dsk101a (member2 swap dsk101b)
alt_cluster_root -> dsk102a
alt_cluster_usr -> dsk102g
alt_cluster_var -> dsk102h
alt quorum disk -> dsk103

The steps for the procedure follow:

  1. When the cluster is booted, add additional storage, scan the bus, and gather wwid information. In the following example, the hwmgr -show scsi -full command lets you verify that both members see the new storage.

    # hwmgr -scan scsi
    # hwmgr -show scsi -full 
    
            SCSI                DEVICE    DEVICE  DRIVER NUM  DEVICE FIRST
     HWID:  DEVICEID HOSTNAME   TYPE      SUBTYPE OWNER  PATH FILE   VALID PATH
    -------------------------------------------------------------------------
      297:  7        oscar      disk      none    2      4    dsk100 [2/2/10]
    
          WWID:01000010:6005-08b4-0001-48cd-0003-a000-0b03-0000 
    
    
          BUS   TARGET  LUN   PATH STATE
          ---------------------------------
          2     2       10    valid
          2     3       10    valid
          2     4       10    valid
          2     5       10    valid
    

  2. Clone the cluster common disk (cluster_root, cluster_usr and cluster_var).

    The following example assumes that the cloned disks are exactly the same size as the originals. If they are not, use disklabel -e in place of disklabel -R to write a customized partition table. Each partition used for the alternative cluster common disk must be greater than or equal in size to the corresponding partition on the original cluster common disk.

    # disklabel -r dsk2 > /tmp/dsk2.lbl
    # disklabel -rw dsk102        
    # disklabel -R dsk102 /tmp/dsk2.lbl
    # disklabel -e dsk102   <-- Change the fstype field for 
                                partitions a, g, and h to “unused”. 
                                Otherwise, mkfdmn may fail with
                                “Memory fault(coredump)”
    
    # mkfdmn /dev/disk/dsk102a alt_cluster_root 
    # mkfset alt_cluster_root root 
    # mkfdmn /dev/disk/dsk102g alt_cluster_usr 
    # mkfset alt_cluster_usr usr 
    # mkfdmn /dev/disk/dsk102h alt_cluster_var 
    # mkfset alt_cluster_var var 
    
    # mkdir /clone  <-- create directory for mounting 
                        alternative disk set. 
    
    # mount alt_cluster_root#root /clone    
    # vdump -0f - / |vrestore -xf - -D /clone 
    # mount alt_cluster_usr#usr /clone/usr 
    # mount alt_cluster_var#var /clone/var 
    # vdump -0f - /usr |vrestore -xf - -D /clone/usr 
    # vdump -0f - /var |vrestore -xf - -D /clone/var
    

  3. Clone the member1 boot disk:

    # disklabel -r dsk0 > /tmp/dsk0.lbl
    # disklabel -rw -t advfs dsk100 
    # disklabel -R dsk100 /tmp/dsk0.lbl 

    You must use the -t option to install boot blocks on the member boot disk.

    If the disks are different sizes, use the disklabel -e command to modify the partition table size and offset for the root1 boot partition and swap partition. Modify the h partition to use the last 2048 blocks of the disk and set the fstype field to cnx. The a partition must be equal to or larger than that of the original root1 boot disk.

    WARNING! Do not change the label field in the header. It must remain “clu_member1” or the CNX partition will not be created with type m (member boot disk) when you run the clu_partmgr command later in this procedure.
    # 
              size     offset    fstype  fsize  bsize  cpg  # ~Cyl values
      a:    524288          0    unused                     #      0 - 31
      b:  16777216     524288      swap                     #     32 - 1055
      c:  20971520          0    unused      0      0       #      0 - 1279
      d:         0          0    unused      0      0       #      0 - 0
      e:         0          0    unused      0      0       #      0 - 0
      f:         0          0    unused      0      0       #      0 - 0
      g:  10289152     393216    unused      0      0       #     24 - 651
      h:      2048   20969472       cnx                     #   1279*- 1279
    
    # disklabel -e dsk100   <-- Change the fstype field for 
                                partitions a, g, and h to “unused”. 
                                Otherwise, mkfdmn may fail with
                                “Memory fault(coredump)”
    
    # mkfdmn -o -r /dev/disk/dsk100a alt_root1_domain 
    # mkfset alt_root1_domain root 
    # mount alt_root1_domain#root /clone/cluster/members/member1/boot_partition
    # vdump -0f - /cluster/members/member1/boot_partition |vrestore -xf - -D \ 
      /clone/cluster/members/member1/boot_partition
    
    # /usr/sbin/cluster/clu_partmgr -v dsk1 > /tmp/clu_bdmgr.conf.alt-m1
    

    Modify the clu_bdmgr.conf.alt-m1 file to reflect the new cluster_root disk (/dev/disk/dsk102a). No other fields should be changed. For example:

    # vi /tmp/clu_bdmgr.conf.alt-m1
    

    NOTE: Disregard the line in the file that tells you not to edit it. The following list shows the file as it appears and what you should change it to:
    • Original

      DO NOT EDIT THIS FILE
      ::TYP:m:CFS:/dev/disk/dsk2a:LSM:0::
    • Make this revision

      DO NOT EDIT THIS FILE
      ::TYP:m:CFS:/dev/disk/dsk102a:LSM:0::

    Enter the clu_partmgr command as follows:

    # /usr/sbin/cluster/clu_partmgr -m /tmp/clu_bdmgr.conf.alt-m1 dsk100 # Member1

  4. Clone the member2 boot disk:

    # disklabel -r dsk1 > /tmp/dsk1.lbl
    # disklabel -rw -t advfs dsk101 
    # disklabel -R dsk101 /tmp/dsk1.lbl  

    You must use the -t option to install boot blocks on the member boot disk.

    If the disks are different sizes, use the disklabel -e command to modify the partition table size and offset for the root2 boot partition and swap partition. Modify the h partition to use the last 2048 blocks of the disk and set the fstype field to cnx. The a partition must be equal to or larger than that of the original root2 boot disk.

    WARNING! Do not change the label field in the header. It must remain “clu_member2” or the CNX partition will not be created with type m (member boot disk) when you run the clu_partmgr command later in this procedure.
    # disklabel -e dsk101   <-- Change the fstype field for 
                                partitions a, g, and h to “unused”. 
                                Otherwise, mkfdmn may fail with
                                “Memory fault(coredump)”
    
    # mkfdmn -o -r /dev/disk/dsk101a alt_root2_domain 
    # mkfset alt_root2_domain root 
    # mount alt_root2_domain#root /clone/cluster/members/member2/boot_partition
    # vdump -0f - /cluster/members/member2/boot_partition |vrestore -xf - -D \ 
      /clone/cluster/members/member2/boot_partition
    

    Create and modify a clu_bdmgr.conf.alt-m2 file for member2, as you did for member1 in step 3, to reflect the new cluster_root disk (/dev/disk/dsk102a). No other fields should be changed:

    NOTE: Disregard the line in the file that tells you not to edit it. The following list shows the file as it appears and what you should change it to:
    • Original

      DO NOT EDIT THIS FILE
      ::TYP:m:CFS:/dev/disk/dsk2a:LSM:1::
    • Make this revision

      DO NOT EDIT THIS FILE
      ::TYP:m:CFS:/dev/disk/dsk102a:LSM:1::

    Enter the clu_partmgr command as follows:

    # /usr/sbin/cluster/clu_partmgr -m /tmp/clu_bdmgr.conf.alt-m2 dsk101 # Member2

  5. Modify the /etc/fdmns links to reflect new disks:

    # cd /clone/etc/fdmns/cluster_root
    # rm dsk2a
    # ln -s /dev/disk/dsk102a
    # cd /clone/etc/fdmns/cluster_usr
    # rm dsk2g
    # ln -s /dev/disk/dsk102g
    # cd /clone/etc/fdmns/cluster_var
    # rm dsk2h
    # ln -s /dev/disk/dsk102h
    # cd /clone/etc/fdmns/root1_domain
    # rm dsk0a
    # ln -s /dev/disk/dsk100a
    # cd /clone/etc/fdmns/root2_domain
    # rm dsk1a
    # ln -s /dev/disk/dsk101a
    
  6. Configure the alternative quorum disk:

    # disklabel -r dsk3 > /tmp/dsk3.lbl
    # disklabel -rw dsk103                   <--   alt quorum disk
    # disklabel -R dsk103 /tmp/dsk3.lbl
    
    If the disks are different sizes, use the disklabel -e command to modify the partition table. Modify the h partition to use the last 2048 blocks of the disk and set the fstype field to cnx.

    NOTE: Create and modify a clu_bdmgr.conf.alt-q file for the quorum disk as in the previous steps, disregarding the line in the file that tells you not to edit it. The following list shows the file as it appears and what you should change it to:
    • Original

       DO NOT EDIT THIS FILE
      ::TYP:m:CFS:/dev/disk/dsk2a:LSM:1::
    • Make this revision

      DO NOT EDIT THIS FILE
      ::TYP:m:CFS:/dev/disk/dsk102a:LSM:1::

    Enter the clu_partmgr command as follows:

    # /usr/sbin/cluster/clu_partmgr -q /tmp/clu_bdmgr.conf.alt-q dsk103 # Quorum

  7. Modify each member's sysconfigtab to reflect the cloned configuration.

    # file /dev/disk/dsk100h    <-- H partition of alternative member1 boot disk - seqdisk
    /dev/disk/dsk100h:        block special (19/834)
    
    # file /dev/disk/dsk101h    <-- H partition of alternative member2 boot disk - seqdisk
    /dev/disk/dsk101h:        block special (19/818)
    
    # file /dev/disk/dsk103h    <-- H partition of alternative qdisk
    /dev/disk/dsk103h:        block special (19/802)
    
    # file /dev/disk/dsk102a    <-- alternative cluster_root partition
    /dev/disk/dsk102a:        block special (19/723)
    NOTE: Write down the major and minor numbers; you will need them to update each member's sysconfigtab and to update the cnx partition at boot time.
    • For member1:

      # cd /clone/cluster/members/member1/boot_partition/etc
      # cp sysconfigtab sysconfigtab.PreClone     <-- Make backup file
      # vi sysconfigtab

      Change the following entries to reflect the correct major and minor device numbers:

      clubase:
                      cluster_seqdisk_major=19
                      cluster_seqdisk_minor=834  <-- member1's boot disk H partition
                      cluster_qdisk_major=19
                      cluster_qdisk_minor=802    <-- qdisk H partition
                      .
      
      vm:
                      swapdevices=/dev/disk/dsk100b  <-- swap disk for member1
      
    • For member2:

      # cd /clone/cluster/members/member2/boot_partition/etc
      # cp sysconfigtab sysconfigtab.PreClone     <-- Make backup file 
      # vi sysconfigtab
      

      Change the following entries to reflect the correct major and minor device numbers:

      clubase:
                      cluster_seqdisk_major=19
                      cluster_seqdisk_minor=818  <-- member2's boot disk H partition
                      cluster_qdisk_major=19
                      cluster_qdisk_minor=802    <-- qdisk H partition
                      
      vm:
                      swapdevices=/dev/disk/dsk101b  <-- swap disk for member2
      
  8. This optional but recommended step tests the newly cloned disks. If you choose not to test them, skip to step 9.

    • On both members:

      Shut down the system:

      >>> init

      Define the alternate (cloned) member boot disk by using the wwidmgr console command on each member:

      >>> wwidmgr -show wwid
      >>> wwidmgr -quickset -item x -unit y
      >>> set bootdef_dev  dga1.1001.0.1.2   <-- alternative member1 boot disk
      

    • Boot each member:

      >>> boot

  9. Install Tru64 UNIX Version 5.1B-4 (Patch Kit 6). You can do this in either of the following ways:

    • Install Patch Kit 6 using the dupclone feature on the currently booted disk set (primary set).

    • Boot the newly created disk set (alternate set) and run the dupatch script to install the desired patch kit.

    The following example uses the dupclone feature.

    # ./dupclone -r /clone -k /usr/PK6/patch_kit -license

    Define the alternate (cloned) member boot disk by using the wwidmgr console command on each member:

    >>> wwidmgr -show wwid
    >>> wwidmgr -quickset -item x -unit y
    >>> set bootdef_dev  dga1.1001.0.1.2   <--- alternative member1 boot disk
    
    Boot each member:

    >>> boot

Installing V5.1B-4 to an Alternate Root (Jan. 2007)

To use the dupatch root option to install Version 5.1B-4 to an alternate root path, you must first install the latest patch tools. To do this:

  1. Download the Version 5.1B-4 kit.

  2. Extract the kit into a convenient directory.

  3. Change directory to the one containing the kit and run dupatch. This automatically installs the latest tools if they are not already installed on the system.

  4. Select quit from the Main Menu.

  5. Rerun dupatch with the root option to install Version 5.1B-4 to an alternate root path.
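
The following is a minimal sketch of those steps; the staging directory /usr/pk6 and the placeholder tar file name are assumptions:

# mkdir /usr/pk6
# cd /usr/pk6
# tar -xvf <patch-kit-tar-file>    <-- substitute the name of the kit
                                       you downloaded
# cd patch_kit
# ./dupatch    <-- installs the latest tools; select q at the Main Menu
# ./dupatch    <-- rerun with the root option to patch the alternate root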

Version 5.1B-3 (Patch Kit 5)

The following notes pertain to Version 5.1B-3.

Script Displays Nonexistent Command Name to Remove Cluster Member (May 2005)

When installing a Tru64 UNIX patch kit using the no-roll procedure and a cluster member is down, the noroll_versw script may display a message containing a nonexistent command to be used for removing a cluster member.

This occurs if the patch kit installs a patch that requires the use of a version switch. In that case, you would see a message similar to the following at the end of the dupatch session:

Patch OSFPAT00074200510 has been identified as needing a version
switch. Once the following reboot is complete, please enter the
"/var/adm/patch/noroll/noroll_versw" command from any cluster member.  

Because the noroll_versw script expects all members to be up, running it when a member is down causes a message similar to the following to be displayed:

The noroll_versw command cannot continue because members 'member1' 
of this  cluster are not up. Please reboot these members before 
re-attempting this  command or remove the members using the 
clu_remove_member(8) command

However, there is no clu_remove_member command. The message should refer to the clu_delete_member command.

Version 5.1B-2 (Patch Kit 4)

The following notes pertain to Version 5.1B-2 / Patch Kit 4.

Login Failure Possible with Rolling Upgrade and C2 Security Enabled (Mar. 2005)

Login failures may occur as a result of a rolling upgrade on systems with Enhanced Security (C2) enabled. The failures may appear in two ways:

  • With the following error message:

    Can't rewrite protected password entry for user
  • With the following set of error messages:

    login: Ignoring log file: /var/tcb/files/dblogs/log.00001: magic number 0, not 8
    login: log_get: read: I/O error
    Can't rewrite protected password entry for user

The problem may occur after the initial reboot of the lead cluster member or after the rolling upgrade is completed and the clu_upgrade switch procedure has been run. The following sections describe the steps you can take to prevent the problem or correct it after it occurs.

Preventing the problem

You can prevent this problem by performing the following steps before beginning the rolling upgrade:

  1. Disable the prpasswdd daemon from running on the cluster:

    # rcmgr -c set PRPASSWDD_ARGS \
    "`rcmgr get PRPASSWDD_ARGS` -disable"
  2. Stop the prpasswdd daemon on every node in the cluster:

    # /sbin/init.d/prpasswd stop
  3. Perform the rolling upgrade procedure through the clu_upgrade switch step and reboot all the cluster members.

  4. Perform one of the following actions:

    • If PRPASSWDD_ARGS did not exist before this upgrade (that is, if rcmgr get PRPASSWDD_ARGS at this point shows only -disable), then delete PRPASSWDD_ARGS:

      # rcmgr -c delete PRPASSWDD_ARGS
    • If PRPASSWDD_ARGS existed before this upgrade, then reset PRPASSWDD_ARGS to the original string:

      # rcmgr -c set PRPASSWDD_ARGS \
      "`rcmgr get PRPASSWDD_ARGS | sed 's/ -disable//'`"
  5. Check that PRPASSWDD_ARGS is now set to what you expect:

    # rcmgr get PRPASSWDD_ARGS
  6. Start the prpasswdd daemon on every node in the cluster:

    # /sbin/init.d/prpasswd start
  7. Complete the rolling upgrade.
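
Step 3 refers to the stages of the standard rolling upgrade; the switch stage itself is run with the clu_upgrade command. A minimal sketch, assuming all earlier roll stages have completed:

# clu_upgrade switch    <-- run the version switch stage
# shutdown -r now       <-- then reboot each cluster member in turn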

Correcting the problem

If you have already encountered the problem, perform the following steps to clear it:

  1. Restart the prpasswdd daemon on every node in the cluster:

    # /sbin/init.d/prpasswd restart
  2. Reboot the lead cluster member.

  3. Check to see if the problem has been resolved. If it has been resolved, you are finished. If you still see the problem, continue to step 4.

  4. Try to force a change to the auth database by performing the following steps:

    1. Use edauth to add a harmless field to an account; the exact commands depend on your editor. For example, pick an account that does not have a vacation period set and add u_vacation_end:

      # edauth
      s/:u_lock@:/u_vacation_end#0:u_lock@:/
      w
      q
    2. Check to see that the u_vacation_end#0 field was added to the account:

      # edauth -g
    3. Use edauth to remove the u_vacation_end#0 field from the account.

    If the edauth commands fail, do not stop. Continue with the following instructions.

  5. Check to see if the problem has been resolved. If it has been resolved, you are finished.

    If you still see the problem, observe the following warning and continue to step 6.

    Warning!

    Continue with the following steps only if the following conditions are met:

    • You encountered the described problem while doing a rolling upgrade of a cluster running Enhanced Security.

    • You performed all previous steps.

    • All user authentications (logins) still fail.

  6. Disable logins on the cluster by creating the file /etc/nologin:

    # touch /etc/nologin
  7. Disable the prpasswdd daemon from running on the cluster:

    # rcmgr -c set PRPASSWDD_ARGS \
    "`rcmgr get PRPASSWDD_ARGS` -disable"
  8. Stop the prpasswdd daemon on every node in the cluster:

    # /sbin/init.d/prpasswd stop
  9. Force a database checkpoint by using the db_checkpoint command with the -1 (number one) option:

    # /usr/tcb/bin/db_checkpoint -1 -h /var/tcb/files

    Continue with the instructions even if this command fails.

  10. Delete the files in the dblogs directory:

    # rm -f /var/tcb/files/dblogs/*
  11. Force a change to the auth database, as follows:

    • Use the edauth command to add a harmless field to an account; the exact commands depend on your editor. For example, pick an account that does not have a vacation period set and enter the following:

      # edauth
      s/:u_lock@:/u_vacation_end#0:u_lock@:/
      w
      q
    • Check to see that the u_vacation_end#0 field was added to the account:

      # edauth -g
    • Use the edauth command to remove the u_vacation_end#0 field from the account.

    Warning!

    If the edauth command fails, do not proceed further. Contact HP support.

  12. If the edauth command was successful, perform one of the following actions:

    • If PRPASSWDD_ARGS did not exist before this upgrade (that is, if rcmgr get PRPASSWDD_ARGS at this point shows only -disable), then delete PRPASSWDD_ARGS:

      # rcmgr -c delete PRPASSWDD_ARGS
    • If PRPASSWDD_ARGS existed before this upgrade, then reset PRPASSWDD_ARGS to the original string:

      # rcmgr -c set PRPASSWDD_ARGS \
      "`rcmgr get PRPASSWDD_ARGS | sed 's/ -disable//'`"
  13. Check that PRPASSWDD_ARGS is now set to what you expect:

    # rcmgr get PRPASSWDD_ARGS
  14. Start the prpasswdd daemon on every node in the cluster:

    # /sbin/init.d/prpasswd start
  15. Re-enable logins on the cluster by deleting the file /etc/nologin:

    # rm /etc/nologin
  16. Check to see if the problem has been resolved. If it has not, contact HP support.

New Response to Message Suggested (Feb. 2005)

Section 2.1.3.6 in the Patch Summary and Release Notes for Patch Kit 4 document says that you can ignore an error message about a missing ladebug.cat file and the warning message that follows. We now suggest that if you see these messages during the setup stage, you verify that the tagged files were properly created when you execute the preinstall stage.

In cases where the tagged files are not created, you can repeat the setup stage.

During the preinstall stage of a rolling upgrade, you have the option of checking tagged files. We suggest that you override the default setting and select the check-tag option.

Do Not Install Prior NHD Kits on a Patched System (Feb. 2005)

Do not install the NHD-5 or NHD-6 kits on your TruCluster system if you have installed this patch kit or earlier patch kits. Doing so may cause an incorrect system configuration. The installation code for these new hardware delivery kits does not correctly preserve some cluster subset files.

Turning on the TruCluster Server Sticky Connection Feature (Oct. 2004)

The sticky connection feature included in this kit enables cluster members to remember the servicing cluster member of a connection requested from a specific client. Whenever a new connection comes from an existing client, the cluster router recognizes the client and dispatches that packet to the same member that was previously servicing this client. In this way, all connections from a single client are serviced on a single cluster member.

By default, the sticky connection feature is disabled. You enable it by specifying the following tuning attribute in the /etc/sysconfigtab file:

clua: sticky_net_enabled=1

When enabled, the feature operates on a per-port basis. You can then configure each port by specifying the “sticky” option in the /etc/clua_services file. For example, the following line in the /etc/clua_services file configures TCP port 3000 as a sticky port:

    echome          3000/tcp        in_multi,static,sticky
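
The sysconfigtab change takes effect at the next reboot. If the clua subsystem supports run-time modification of this attribute (an assumption, not a documented guarantee), you may also be able to set and verify it immediately with sysconfig:

# sysconfig -r clua sticky_net_enabled=1
# sysconfig -q clua sticky_net_enabled    <-- verify the current value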

This feature is described in the clua_services(4) and sys_attrs_clua(5) reference pages, which you can download by clicking on the following link:

sticky_manpages.tar

Save this file to a directory of your choice; for example, /tmp. The tar file contains both reference pages in gzip format.

To install them on your system, untar the file and move the gzip files to the appropriate reference page locations. The default locations are /usr/share/man/man4 and /usr/share/man/man5. The procedure is as follows:

  1. Change to the directory where you downloaded the sticky_manpages.tar file. For example:

    # cd /tmp
  2. Untar the file:

    # tar -xvf sticky_manpages.tar
    blocksize = 40
    x sys_attrs_clua.5.gz, 1792 bytes, 4 tape blocks
    x clua_services.4.gz, 6561 bytes, 13 tape blocks
  3. Copy the untarred files to the appropriate reference page directory. For example:

    # cp clua_services.4.gz /usr/share/man/man4
    # cp sys_attrs_clua.5.gz /usr/share/man/man5

After you move the files to the reference page directories, they are available through the man command. You can then remove the sticky_manpages.tar file and its contents from the /tmp directory.

Upgrades from pre-V5.1 Systems May Prevent Patch Installation (Oct. 2004)

If your system was upgraded to Version 5.1B or higher using the update installation procedure, attempts to install Version 5.1B patch kits may fail on a standalone (non-cluster) system. The problem is that some expected values may be missing or incorrect in the /etc/sysconfigtab file.

When you run the dupatch utility, the installation process may run through its preliminary stages, and may erroneously leave the impression that the patch kit has been installed.

To work around this problem, take the following steps:

  1. If you are uncertain whether the patch kit was installed, run the dupatch Patch Tracking -> List Release System Patch History menu items. If the dates shown indicate that the patches were not installed as expected, continue to step 2.

  2. Check whether your /etc/sysconfigtab file contains the entries new_vers_low and new_vers_high and whether the values of those entries are nonzero. If either entry is missing or zero, edit your sysconfigtab file to add or modify the entries as follows (see the example after this list):

    new_vers_high = 100
    new_vers_low = 1
  3. Reboot your system.

You should then be able to install the patch kit.
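
A quick way to perform the check in step 2 is to search the file directly. For example, on a correctly configured system:

# grep -E 'new_vers_(high|low)' /etc/sysconfigtab
new_vers_high = 100
new_vers_low = 1

If the grep command produces no output or shows zero values, edit the file as described in step 2.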

Keypad Input Fails in Mozilla (Sept. 21, 2004)

Turning on NumLock on a PC keyboard causes Mozilla to not accept input from the keypad. To correct the problem, turn off NumLock when you use Mozilla.

caa_stat Command Request from Non-Root User Fails with Error (Aug. 2004)

Attempting to run the caa_stat command as a non-root user fails with the following error message:

    The CAA message catalog entry is missing.   

The installation of Version 5.1B-3 corrects this problem.

Missing Reference Pages Available for Download (Aug. 2004)

The following reference pages were not included in this patch kit, but are available for downloading:

netstat(1)
ip6rtrd.conf(4)
sys_attrs_ipv6(5)
sys_attrs_ee(5)
sys_attrs_vm(5)
nifftmt(7)
disklabel(8)
ifconfig(8)
ip6rtrd(8)
kdbx(8)

To download these reference pages, save the file pk4_manpages.tar to a directory of your choice; for example, /tmp. The pk4_manpages.tar file contains all ten reference pages in gzip format.

To install them on your system, untar the file and move the gzip files to the appropriate reference page location. The default location on your Tru64 UNIX system is /usr/share/man, which contains subdirectories such as man1, man2, and so on, corresponding to the reference page sections. The procedure is as follows:

  1. Change to the directory where you downloaded the pk4_manpages.tar file. For example:

    # cd /tmp
  2. Untar the file:

    # tar -xpvf pk4_manpages.tar
  3. Copy the untarred files to the appropriate reference page directory. For example:

    # cp netstat.1.gz /usr/share/man/man1
    # cp ip6rtrd.conf.4.gz /usr/share/man/man4
    # cp *.5.gz /usr/share/man/man5
    # cp nifftmt.7.gz /usr/share/man/man7
    # cp *.8.gz /usr/share/man/man8

Once you have moved the files to the reference page directories, they are available through the man command. You can then remove the pk4_manpages.tar file and its contents from the /tmp directory.

Restriction on Setting Max_LSM_IO_PERFORMANCE Variable (Aug. 2004)

Do not set the sysconfigtab variable Max_LSM_IO_PERFORMANCE = 1 when the root or cluster root domain is under LSM control. Doing so will cause the system to hang during the boot process. To enable this feature, which was added to improve performance on multiprocessor systems running heavy I/O loads, remove the root or cluster root domain from LSM control before setting the Max_LSM_IO_PERFORMANCE variable.

See sys_attrs_lsm(5) for more information.
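
Before you set the variable, you can check whether the root domain is under LSM control; rootvol is the conventional name for an LSM root volume, so this check is an assumption about your configuration:

# volprint | grep rootvol    <-- any output means root is under LSM control

After you remove the root domain from LSM control, add the attribute to /etc/sysconfigtab. The stanza below assumes the attribute belongs to the lsm subsystem, as the sys_attrs_lsm(5) reference suggests:

lsm:
        Max_LSM_IO_PERFORMANCE = 1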

Version 5.1B-1 (Patch Kit 3)

The following notes pertain to Version 5.1B-1 / Patch Kit 3.

Login Failure Possible with Rolling Upgrade and C2 Security Enabled (Mar. 2005)

See “Login Failure Possible with Rolling Upgrade and C2 Security Enabled (Mar. 2005)” for information about this problem.

Do Not Install Prior NHD Kits on a Patched System (Mar. 2005)

See “Do Not Install Prior NHD Kits on a Patched System (Feb. 2005)” for information about this problem.

Restriction on Setting Max_LSM_IO_PERFORMANCE Variable (Aug. 2004)

Do not set the sysconfigtab variable Max_LSM_IO_PERFORMANCE = 1 when the root or cluster root domain is under LSM control. Doing so will cause the system to hang during the boot process. To enable this feature, which was added to improve performance on multiprocessor systems running heavy I/O loads, remove the root or cluster root domain from LSM control before setting the Max_LSM_IO_PERFORMANCE variable.

Shell Argument List Exceeds Limit (Dec. 2003)

When installing a release patch kit using dupatch from the command line, rather than from the dupatch menus, you may see a message such as /sbin/ls: arg list too long. This occurs because the large number of patch names, when concatenated on the command line, exceeds the shell's argument-list limit. To work around this problem, temporarily change the sysconfigtab value by entering the following command:

# sysconfig -r proc exec_disable_arg_limit=1 

You can then continue the command-line patch installation.
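
A change made with sysconfig -r affects only the running kernel and does not persist across reboots. After the installation completes, you can restore the default behavior; this assumes 0 was your previous setting:

# sysconfig -r proc exec_disable_arg_limit=0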

Multiple sys_check Utility Versions (Nov. 2003)

This patch kit contains a version of the sys_check utility that may be lower than the one installed on your system if you downloaded and installed the sys_check Web kit. If that is the case, installing this patch kit will downgrade the version of sys_check that is being used by the system. You can, however, restore the later version.

To determine the version recognized by your system, run the following command:

# /usr/sbin/use_sys_check -v

If the version displayed is lower than the version you downloaded from the Web, you can set the system default to the higher version. In the following example, the sys_check version you installed from the Web is 126:

# /usr/sbin/use_sys_check 126 

Note: To download the sys_check Web kit, you will be required to create a login account if you have not already done so. If you use the site's search facility, do not specify the .tar file extension.

This problem will be fixed in the next Version 5.1B patch kit.

If you plan to remove the patch kit, make sure that the system default version of sys_check is set to the version installed with the patch kit before removing the patch kit. After you delete the patch kit, the system default version of sys_check will automatically be set to the version of sys_check that you downloaded from the Web. This is because dupatch saves the symbolic links that point to the Web kit location when the patch gets installed and will restore these symbolic links when the patch gets deleted.

You will encounter problems if you delete the sys_check Web kit and then delete this patch kit, because dupatch will restore the symbolic links to the Web kit location when the patch is deleted. If you have deleted the Web kit, then the symbolic links will point to nonexistent files. You can fix this problem by reinstalling the sys_check Web kit.

Regression in find Command (Nov. 2003)

The find command delivered in Patch 1197.00 contains a regression that affects the traversal of very large directory structures. The following is an example of the error messages generated if you experience the problem:

$ find . -name abc
find: Cannot open file ./path/filename/ 
: Too many open files 

find: cannot open < ./aa5142>
find: cannot open < ./aa5143>
find: cannot open < ./aa5144>
find: cannot open < ./aa5144>

To correct this problem, you must install the following Early Release Patch (ERP) kit:

    T64KIT0020545-V51BB24-E-20031104.tar

You can find this ERP kit on the following Web site:

http://www.itrc.hp.com/service/patch/mainPage.do
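
ERP kits are installed with dupatch in the same way as mainstream patch kits. A minimal sketch, assuming the kit was downloaded to /tmp; the extracted directory name patch_kit is an assumption:

# cd /tmp
# tar -xvf T64KIT0020545-V51BB24-E-20031104.tar
# cd patch_kit
# ./dupatch    <-- select 1) Patch Installation from the Main Menu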

Limited Access for Two Tunable Variables (Nov. 2003)

The pfilt_loopback and pfilt_physaddr tunable variables (delivered in Patch 1414.00) are only accessible in the kernel by using the dbx debugger.

In a future patch kit, these tunable variables will be implemented as kernel subsystem configurable attributes, accessible through the sysconfig command.
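
Until then, you can examine or set the variables only from the kernel debugger. The following is a sketch of a typical dbx -k session against the running kernel; setting pfilt_loopback to 1 is purely illustrative:

# dbx -k /vmunix /dev/mem
(dbx) print pfilt_loopback
(dbx) assign pfilt_loopback = 1
(dbx) quit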

Patch Kit 2

The following notes pertain to Patch Kit 2.

Installation Problem on Cluster Systems Running Patch Kit 1 (Aug. 2003)

TruCluster Patch 7.00 will not be installed if you install Patch Kit 2 on a TruCluster system running Patch Kit 1. To work around this problem, take one of the following actions:

  • If you have installed Patch Kit 1 but have not yet installed Patch Kit 2, remove TruCluster Patch 7.00 before installing Patch Kit 2.

  • If you have installed Patch Kit 2, remove TruCluster Patch 7.00 (which was installed with Patch Kit 1) and then install Patch 7.00 from Patch Kit 2.
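
In both cases, you remove the patch from the dupatch Main Menu. A minimal sketch; the selection prompts that follow are abbreviated:

# ./dupatch
Enter your choice: 2    <-- Patch Deletion; then select TruCluster
                            Patch 7.00 from the list of patches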

Removing Patch May Create Problem with vrestore Archive (July 2003)

Patch 685.00 in Patch Kit 2 installs Version 5.1 of the vdump and vrestore utilities. If you use that version to create an archive and then uninstall that patch, attempting to restore the archive will fail with the following message:

   vrestore: Need vrestore V5.1 to restore contents: terminating

This occurs because removing the patch reverts vdump and vrestore to an earlier version, which does not recognize the Version 5.1 archive.

To restore the archive, you will have to reinstall Patch 685.00 or install the binary file for vrestore included in Patch Kit 2.

Problems with /sbin/dsfmgr Versions (Mar. 2003)

When downgrading to an earlier version of the Tru64 UNIX operating system (for example, by uninstalling a patch kit or installing an earlier version of the operating system), an older version of the device special files manager, dsfmgr, may have problems reading the newer version of the dsfmgr .dat files. When that happens, you will see messages such as the following during the boot process:

   dsfmgr: Error: cannot open and read file /etc/dfsc.dat
   Bad status file data/format
   dsfmgr: Error: cannot open and read file /etc/dfsl.dat
   Bad status file data/format
   bcheckrc: Device naming failed to boot configure or verify.
   Please correct the problem and continue or reboot.

Uninstalling the Patch Kit

If you made the following changes to your system after installing the patch kit, you will have to undo those changes before you can uninstall the patch kit:

  • If you changed your hardware configuration (for example, by adding a new disk), the system configuration that existed prior to installing the patch kit might not recognize the new devices or may not provide the necessary support for them.

  • If you added new cluster members, the new members will not have an older state to revert to if you attempt to uninstall the patch kit.

To uninstall the patch kit, do the following:

  1. Remove all new hardware and new cluster members that you added after installing the patch kit.

  2. Run dupatch to uninstall the patch kit.

  3. Verify that the patch kit was successfully uninstalled.

You can now add the cluster members you removed and reinstall the hardware you removed, as long as the support for it existed in the pre-patched system. You can also reinstall the patch kit.

Actions You Can Take

The following list describes several actions you can take, followed by detailed steps for performing those actions:

  • The best workaround for this problem is to manually make backup copies of the /etc/dfsc.dat and /etc/dfsl.dat data files before you install the patch kit, and then manually restore them before rebooting the earlier version.

  • If you do not do that, you may still be able to retrieve the Version 1.0 files if you have not made too many disk configuration changes since the installation of the patch kit. The dsfmgr program saves up to eight earlier copies of the files.

  • If the Version 1.0 files are gone, you may be able to edit the existing files so they can be read by Version 1.0 of dsfmgr.

Typically, /etc/dfsc.dat is a clusterwide file, while /etc/dfsl.dat is a per-member Context Dependent Symbolic Link (CDSL). For example:

#  ls -l /etc/dfs?.dat
/etc/dfsc.dat
/etc/dfsl.dat -> ../cluster/members/{memb}/etc/dfsl.dat

The /sbin/dsfmgr command makes backup copies that have the same file name but a new file extension: .h00, .h01, and so on, where .h00 is the latest backup. For example:

/etc/dfsc.h00
/etc/dfsl.h00 -> ../cluster/members/{memb}/etc/dfsl.h00

Backing Up Original Files

To make backup copies of the original Version 1.0 files before installing the kit, you must be logged in as root. Then proceed as follows:

  1. Copy the dfsc.dat file:

    #  cp /etc/dfsc.dat /etc/dfsc.dat.v10

  2. Determine how many members are in the cluster and make backup copies of each member's dfsl.dat file. (Noncluster systems have only member0.) This procedure requires that the cluster has booted far enough that all member directories are visible. The following example cluster has three members: member0, member1, and member2. On a typical cluster, member0 is the leftover standalone disk from the original system, so do not be surprised if it contains out-of-date files.

    #  ls /cluster/members
    member member0 member1 member2
    #  cp /cluster/members/member0/etc/dfsl.dat /cluster/members/member0/etc/dfsl.dat.v10
    #  cp /cluster/members/member1/etc/dfsl.dat /cluster/members/member1/etc/dfsl.dat.v10
    #  cp /cluster/members/member2/etc/dfsl.dat /cluster/members/member2/etc/dfsl.dat.v10

Determining Which Files Are Version 1.0

The script /etc/dn_fix_dat.sh restores Version 1.0 of dfsc.dat and dfsl.dat on the member on which it is run. Run /etc/dn_fix_dat.sh on each cluster member before you reboot that member. If /etc/dn_fix_dat.sh is missing or fails, perform the following manual steps.

If you have already performed the installation, you must determine which of the backup files, if any, are still Version 1.0. Search for files that begin with # 1.0 (the number of spaces may vary). For example:

#  grep -E '^#( *)1.0' /etc/dfsc.dat /etc/dfsc.h??
dfsc.h02

Of the Version 1.0 files you find, pick the .dat file if you can; otherwise, use the dfsc.h?? file with the lowest number and make a copy of it. For example:

#  cp -p /etc/dfsc.h02 /etc/dfsc.dat.v10

Do the same for each of the member files. Note that you may find different .h?? extensions depending on the past history of the cluster:

#  grep -E '^#( *)1.0' /cluster/members/member0/etc/dfsl.dat /cluster/members/member0/etc/dfsl.h??
#  cp /cluster/members/member0/etc/dfsl.h02 /cluster/members/member0/etc/dfsl.dat.v10

.
.
.

When you uninstall the kit or downgrade the operating system, restore the files you saved before rebooting. If you forget, you can still restore them after receiving dsfmgr error messages. For example:

#  cp /etc/dfsc.dat /etc/dfsc.dat.v11
#  cp /etc/dfsc.dat.v10 /etc/dfsc.dat
#  cp /cluster/members/member0/etc/dfsl.dat /cluster/members/member0/etc/dfsl.dat.v11
#  cp /cluster/members/member0/etc/dfsl.dat.v10 /cluster/members/member0/etc/dfsl.dat

Repeat for other members as needed.

Editing Existing Files

If you did not make any copies and you cannot find Version 1.0 files, you can edit the latest version of the files as follows:

Make backup copies of the files you are going to edit.

Search for the line that begins with V: 1.1 and change it to begin with # 1.0. Use the sed editor as follows (note that there are two spaces between V: and 1.1; type carefully):

#  cp /etc/dfsc.dat /etc/dfsc.dat.v11
#  sed 's/V:  1.1/#  1.0/' /etc/dfsc.dat.v11 > /etc/dfsc.dat
#  cp /cluster/members/member0/etc/dfsl.dat /cluster/members/member0/etc/dfsl.dat.v11
#  sed 's/V:  1.1/#  1.0/' /cluster/members/member0/etc/dfsl.dat.v11 > /cluster/members/member0/etc/dfsl.dat

Repeat for other members as needed, and then reboot your system. If you continue to receive dsfmgr error messages at boot time after performing this procedure, you may have to perform a clean installation of the operating system. To prevent the data files from being copied and used again, you must clear the bootdef_dev console environment variable before the installation:

 >>> set bootdef_dev ""

You will then have to configure your system.

Upgrade Problem with NHD6 (May 2003)

An interaction problem between New Hardware Delivery Kit 6 (NHD6) and Patch Kit 2 may prevent the patch kit from installing correctly if the following conditions exist:

  • Your cluster is running Version 5.1A with Patch Kit 4 and NHD6 installed.

  • You attempt to perform a rolling upgrade to Version 5.1B, NHD6, and V5.1B Patch Kit 2.

To work around this problem, install the software as follows:

  1. Perform the upgrade by rolling Version 5.1B and NHD6. Complete the roll, including all cleanup steps.

  2. Roll NHD6 (again) and Patch Kit 2, making sure you roll NHD6 first.

  3. After you invoke ./nhd_install, the following message and prompt will appear:

         Using kit at /mnt/540
         NHD version 542 is already installed
         Do you really want to install it again (y or n)
    

    Enter y followed by a carriage return.

  4. Install Patch Kit 2 using dupatch, and perform the steps to complete the roll.

This problem will be corrected in the NHD7 release.

Patch Kit 1

The following notes pertain to Patch Kit 1.

File Permission Error Caused by the /etc/.mrg..protocols.sh Script (Apr. 2003)

The installation of this kit causes a problem with the /etc/.mrg..protocols.sh script: the script creates a temporary protocols file in /tmp and moves it to /etc instead of copying it there. The result is a change in the file permissions for /etc/protocols; for example, the permissions might change from 755 to 600.

The workaround is to check and, if necessary, change the permissions on /etc/protocols after installing Patch Kit 1.
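
For example, assuming the permissions were 755 before the installation, as in the example above:

# ls -l /etc/protocols        <-- check the current permissions
# chmod 755 /etc/protocols    <-- restore the original permissions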

Enabling envmond on AlphaServer ES47/ES80/GS1280 systems (Mar. 2003)

After you install this patch kit, the envmond daemon is disabled on AlphaServer ES47, ES80, and GS1280 systems.

To enable environmental monitoring, change the entry ENVMON_CONFIGURED=0 to ENVMON_CONFIGURED=1 in the /etc/rc.config file. You can do this by using one of the following commands:

/usr/sbin/envconfig -c ENVMON_CONFIGURED=1
/usr/sbin/rcmgr set ENVMON_CONFIGURED 1

Problems with /sbin/dsfmgr Versions (Mar. 2003)

See “Problems with /sbin/dsfmgr Versions (Mar. 2003)” for details.

Documentation Error in Release Notes (Feb. 2003)

In Section 1.1.5 of the release notes that shipped with the kit, the reference to “audit's -m switch” is incorrect. The reference should be to the auditd -d option. The workaround to the problem described in that section should read as follows:

“...boot to single-user mode and remove the -d option from the audit configuration stored in /etc/rc.config.common or /etc/rc.config.”

Message when Installing ssh V3.2 (Jan. 2003)

When you install ssh V3.2 (Patch 389.00), you may see messages similar to the following. You can ignore them:

AllowCshrcSourcingWithSubsystems is not valid
ForcePTTYAllocation is not valid
IdentityFile is not valid
AuthorizationFile is not valid
Secure Shell daemon (sshd2) started.