5    Managing LSM Objects

This chapter describes how to manage LSM objects using LSM commands. You can also accomplish the tasks described in this chapter using:

For more information on an LSM command, see the reference page corresponding to its name. For example, for more information on the volassist command, enter:

#  man volassist

5.1    Managing LSM Disks

The following sections describe how to use LSM commands to manage LSM disks.

5.1.1    Creating an LSM Disk

You create an LSM disk when you initialize a disk or partition for LSM use. When you initialize a disk or partition for LSM use, LSM:

You can configure an LSM disk in a disk group or as a spare disk. If you configure the LSM disk in a disk group, LSM uses it to store data. If you configure an LSM disk as a spare, LSM uses it as a replacement for a failed LSM disk that contains a mirror or RAID 5 plex.

If the disk is new to the system, enter the voldctl enable command after entering the hwmgr -scan scsi command to make LSM recognize the disk.
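
For example, enter:

# hwmgr -scan scsi
# voldctl enable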

To initialize a disk or partition as an LSM disk, you can use the voldiskadd script (Section 4.1.2) or the voldisksetup command.
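
For example, assuming the nconfig attribute described in the voldisksetup(8) reference page, a command like the following initializes a disk called dsk5 and creates two copies of the configuration database on it (see also the note in Section 5.2.2):

# voldisksetup -i dsk5 nconfig=2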

5.1.2    Displaying LSM Disk Information

To display detailed information for an LSM disk, enter:

# voldisk list disk

The following example contains information for an LSM disk called dsk5:
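
# voldisk list dsk5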

Device:    dsk5
devicetag: dsk5
type:      sliced
hostid:    servername
disk:      name=dsk5 id=942260116.1188.servername
group:     name=dg1 id=951155418.1233.servername
flags:     online ready autoimport imported
pubpaths:  block=/dev/disk/dsk5g char=/dev/rdisk/dsk5g
privpaths: block=/dev/disk/dsk5h char=/dev/rdisk/dsk5h
version:   n.n
iosize:    min=512 (bytes) max=2048 (blocks)
public:    slice=6 offset=16 len=2046748
private:   slice=7 offset=0 len=4096
update:    time=952956192 seqno=0.11
headers:   0 248
configs:   count=1 len=2993
logs:      count=1 len=453
Defined regions:
 config   priv     17-   247[   231]: copy=01 offset=000000 enabled
 config   priv    249-  3010[  2762]: copy=01 offset=000231 enabled
 log      priv   3011-  3463[   453]: copy=01 offset=000000 enabled

5.1.3    Renaming an LSM Disk

When you initialize an LSM disk, you can assign it a disk media name or use the default disk media name, which is the same as the disk access name assigned by the operating system software.

Caution

Each disk in a disk group must have a unique name. To avoid confusion, you might want to ensure that no two disk groups contain disks with the same name. For example, both the rootdg disk group and another disk group could contain disks with a disk media name of dsk3. Because most LSM commands operate on the rootdg disk group unless you specify otherwise, you might perform operations on the wrong disk if multiple disk groups contain identically named disks.

The voldisk list command displays a list of all the LSM disks on the system, in all disk groups.

To rename an LSM disk, enter:

# voledit rename old_dm_name new_dm_name

For example, to rename an LSM disk called disk03 to disk01, enter:

# voledit rename disk03 disk01

5.1.4    Placing an LSM Disk Off Line

You can place an LSM disk off line to:

Placing a disk off line closes its device file. You cannot place an LSM disk off line if it is in use.

To place an LSM disk off line:

  1. Remove the LSM disk from its disk group:

    # voldg -g disk_group rmdisk disk
    

  2. Place the LSM disk off line:

    # voldisk offline disk
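
    For example, to remove an LSM disk called dsk8 from a disk group called dg1 and then place it off line:

    # voldg -g dg1 rmdisk dsk8
    # voldisk offline dsk8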
    

5.1.5    Placing an LSM Disk On Line

To restore access to an LSM disk that you placed off line, you must place it on line. Doing so returns the disk to the free disk pool, where it is accessible to LSM again. After placing an LSM disk on line, you must add it to a disk group before an LSM volume can use it. If the disk previously belonged to a disk group, you can add it back to the same disk group.

To place an LSM disk on line, enter:

# voldisk online disk
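
For example, to place an LSM disk called dsk8 on line, enter:

# voldisk online dsk8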

See Section 5.2.3 for information on adding an LSM disk to a disk group.

5.1.6    Moving Data from an LSM Disk

You can move (evacuate) LSM volume data to other LSM disks in the same disk group if there is sufficient free space. If you do not specify a target LSM disk, LSM uses any available LSM disk in the disk group that has sufficient free space. Moving data off an LSM disk is useful in the event of disk failure.

If the LSM disk contains part of a mirror plex or a RAID 5 column, do not move its contents to another LSM disk that contains data from the same volume.

To move data off an LSM disk, enter:

# volevac [-g disk_group] source_disk target_disk

For example, to move data off an LSM disk called dsk8 and onto an LSM disk called dsk9, enter:

# volevac dsk8 dsk9

5.1.7    Removing an LSM Disk from LSM Control

You can remove an LSM disk from LSM control if you removed the disk from its disk group or deported its disk group.

See Section 5.2.7 for information on removing an LSM disk from a disk group. See Section 5.2.4 for information on deporting a disk group.

To remove an LSM disk, enter:

# voldisk rm disk

For example, to remove an LSM disk called dsk8, enter:

# voldisk rm dsk8

If you want to use the disk after it is removed from LSM control, you must initialize it using the disklabel command. See the disklabel(8) reference page for more information on the disklabel command.

5.2    Managing Disk Groups

The following sections describe how to use LSM commands to manage disk groups.

5.2.1    Displaying Disk Group Information

There are three common ways to display disk group information. You can display:

  • A list of all LSM disks and the disk group to which each belongs (Section 5.2.1.1)

  • The free space in each disk group (Section 5.2.1.2)

  • The maximum size for an LSM volume that you can create in a disk group (Section 5.2.1.3)

5.2.1.1    Displaying All LSM Disks

To display a list of all LSM disks and the disk group to which each belongs, enter:

# voldisk [-g disk_group] list

Information similar to the following is displayed:

DEVICE       TYPE      DISK         GROUP        STATUS  
dsk0         sliced    -            -            unknown  
dsk1         sliced    -            -            unknown  
dsk2         sliced    dsk2         rootdg       online  
dsk3         sliced    dsk3         rootdg       online  
dsk4         sliced    dsk4         rootdg       online  
dsk5         sliced    dsk5         rootdg       online  
dsk6         sliced    dsk6         dg1          online  
dsk7         sliced    -            -            unknown  
dsk8         sliced    dsk8         dg1          online  
dsk9         sliced    -            -            unknown  
dsk10        sliced    -            -            unknown  
dsk11        sliced    -            -            unknown  
dsk12        sliced    -            -            unknown  
dsk13        sliced    -            -            unknown

The following list describes the preceding information categories:

DEVICE The disk access name assigned by the operating system software.
TYPE The LSM disk type: sliced, simple, or nopriv.
DISK The LSM disk media name. An LSM disk media name is displayed only if the disk is in a disk group.
GROUP The disk group to which the disk belongs. A disk group name is displayed only if the disk is in a disk group.
STATUS The status of the LSM disk:

  • online -- The disk is detected by LSM and running.

  • offline -- The disk has not been detected or was put off line manually.

  • unknown -- The disk was detected but is not initialized for use by LSM.

  • error -- The disk is detected but has experienced I/O errors.

  • failed was -- An LSM disk media name exists, but the disk is no longer associated with a device. The status displays the last device associated with this name.

5.2.1.2    Displaying Free Space in Disk Groups

To display the free space in one or all disk groups, enter:

# voldg [-g disk_group] free

Information similar to the following is displayed:

GROUP        DISK         DEVICE       TAG          OFFSET    LENGTH    FLAGS  
rootdg       dsk2         dsk2         dsk2         2097217   2009151   -  
rootdg       dsk3         dsk3         dsk3         2097152   2009216   -  
rootdg       dsk4         dsk4         dsk4         0         4106368   -  
rootdg       dsk5         dsk5         dsk5         0         4106368   -  
dg1          dsk6         dsk6         dsk6         0         2046748   -  
dg1          dsk8         dsk8         dsk8         0         2046748   -  
 

The value in the LENGTH column indicates the amount of free disk space in 512-byte blocks. (2048 blocks equal 1 MB.)

5.2.1.3    Displaying the Maximum Size for an LSM Volume in a Disk Group

To display the maximum size for an LSM volume that you can create in a disk group, enter:

# volassist [-g disk_group] maxsize

The following example displays the maximum size for an LSM volume that you can create in a disk group called dg1:

# volassist -g dg1 maxsize
Maximum volume size: 6139904 (2998Mb)

5.2.2    Creating a Disk Group

The default rootdg disk group is created when you install LSM and always exists on a system running LSM. You can create additional disk groups to organize your disks into logical sets. Each disk group that you create must have a unique name and contain at least one simple or sliced LSM disk. An LSM disk can belong to only one disk group. An LSM volume can use disks from only one disk group.

If you want to initialize LSM disks and create a new disk group at the same time, you can use the voldiskadd script. (See Section 4.1.2 for more information.)

Note

By default, LSM initializes each disk with one copy of the configuration database. If a disk group will have fewer than four disks, you should initialize each disk to have two copies of the disk group's configuration database to ensure that the disk group has multiple copies in case one or more disks fail. You must use the voldisksetup command to enable more than one copy of the configuration database (Section 5.1.1).

To create a new disk group using LSM disks, enter:

# voldg init new_disk_group disk ...

For example, to create a disk group called newdg using LSM disks called dsk100, dsk101, and dsk102, enter:

# voldg init newdg dsk100 dsk101 dsk102

5.2.3    Adding a Disk to a Disk Group

To add a disk to an existing disk group, enter:

# voldg [-g disk_group] adddisk disk

For example, to add the disk called dsk10 to a disk group called dg1, enter:

# voldg -g dg1 adddisk dsk10

5.2.4    Deporting a Disk Group

You can deport a disk group to make its volumes inaccessible. This enables you to:

You cannot deport the rootdg disk group.

Caution

Although LSM displays the disks in a deported disk group as available, removing or reusing the disks in a deported disk group results in data loss.

You must import a deported disk group before it can be used. See Section 5.2.5 for more information on importing a disk group.

To deport a disk group:

  1. If volumes in the disk group are in use, stop the volumes:

    # volume [-g disk_group] stopall
    

  2. Deport the disk group:

    # voldg deport disk_group
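
    For example, to deport a disk group called dg1, enter:

    # voldg deport dg1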
    

    If you no longer need the disk group, you can:

5.2.5    Importing a Disk Group

You can import a disk group to make a deported (inaccessible) disk group and its volumes accessible again. You cannot import a disk group if you reused any of its associated disks while it was deported.

To import a disk group:

  1. Specify the disk group to import:

    # voldg import disk_group
    

  2. Start all volumes within the disk group:

    # volume [-g disk_group] startall
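
    For example, to import a disk group called dg1 and restart its volumes:

    # voldg import dg1
    # volume -g dg1 startall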
    

5.2.6    Moving a Disk Group to Another System

You might want to move a set of disks from one system to another and retain the LSM objects and data on those disks. You can move any disk group except the rootdg disk group.

To move a disk group to another system:

  1. Stop all activity on the volumes in the disk group and unmount any file systems.

  2. Deport the disk group from the originating system:

    # voldg deport disk_group
    

  3. Physically move the disks to the new host system.

  4. Enter the following command on the new host system to scan for the disks:

    # hwmgr -scan scsi
    

  5. Import the disk group:

    # voldg import disk_group
    

  6. Start the volumes:

    # volume -g disk_group startall
    

5.2.7    Removing an LSM Disk from a Disk Group

You can remove an LSM disk from a disk group; however, you cannot remove:

To remove an LSM disk from a disk group:

  1. Verify that the LSM disk is not in use by listing all subdisks:

    # volprint -st
    

    Information similar to the following is displayed:

    Disk group: rootdg
     
    SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
     
    sd dsk1-01      klavol-01    dsk1     0        1408     0/0       dsk1     ENA
    sd dsk2-02      klavol-03    dsk2     0        65       LOG       dsk2     ENA
    sd dsk2-01      klavol-01    dsk2     65       1408     1/0       dsk2     ENA
    sd dsk3-01      klavol-01    dsk3     0        1408     2/0       dsk3     ENA
    sd dsk4-01      klavol-02    dsk4     0        1408     0/0       dsk4     ENA
    sd dsk5-01      klavol-02    dsk5     0        1408     1/0       dsk5     ENA
    sd dsk6-01      klavol-02    dsk6     0        1408     2/0       dsk6     ENA
     
    

    The disks in the DISK column are currently in use by LSM volumes, and therefore you cannot remove those disks from a disk group.

  2. Remove the LSM disk from the disk group:

    # voldg [-g disk_group] rmdisk disk
    

    For example, to remove an LSM disk called dsk8 from the rootdg disk group, enter:

    # voldg rmdisk dsk8
    

The disk remains under LSM control. You can:

5.3    Managing the LSM Configuration Database

This section describes how to manage the LSM configuration database, including:

  • Backing up the configuration database (Section 5.3.1)

  • Restoring the configuration database from a backup (Section 5.3.2)

  • Changing the size and number of configuration database copies (Section 5.3.3)

5.3.1    Backing Up the LSM Configuration Database

One important responsibility in managing a system with LSM is to periodically make a backup copy of the LSM configuration database. This helps you:

The saved configuration database (also called a description set) is a record of the objects in the LSM configuration (the LSM disks, subdisks, plexes, and volumes) and of the disk group to which each object belongs.

Whenever you make a change to the LSM configuration, the backup copy becomes obsolete. As with any backup, the content is useful only as long as it accurately represents the current information. Any time the number, nature, or name of LSM objects changes, consider making a new backup of the LSM configuration database. The following list describes some of the changes that invalidate a configuration database backup:

Note

Backing up the configuration database does not save the data in the volumes and does not save the configuration data for any volumes associated with the boot disk, if you encapsulated the boot disk.

Depending on the nature of a boot disk failure, you might need to restore the system partitions from backups or installation media to return to a state where the system partitions are not under LSM control. From there, you can redo the procedures to encapsulate the boot disk partitions into LSM volumes and add mirror plexes to those volumes.

See Section 6.5.6 for more information about recovering from a boot disk failure under LSM control.

See Section 5.4.2 for information on backing up volume data.

By default, LSM saves the entire configuration database to a timestamped directory called /usr/var/lsm/db/LSM.date.hostname. You can specify a different location for the backup, but the directory must not already exist.

In the directory, the backup procedure creates:

To back up the LSM configuration database:

  1. Enter the following command, optionally specifying a directory location other than the default to store the LSM configuration database:

    # volsave [-d directory]
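
    For example, to save the configuration database to a directory called /backup/lsm (the directory must not already exist), enter:

    # volsave -d /backup/lsm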
    

  2. Save the backup to tape or other removable media.

You can save multiple versions of the configuration database; each new backup is saved in the /usr/var/lsm/db directory with its own date and time stamp, as shown in the following example:

dr-xr-x---   3 root     system      8192 May  5 09:36 LSM.20000505093612.hostname
dr-xr-x---   3 root     system      8192 May 10 10:53 LSM.20000510105256.hostname

5.3.2    Restoring the LSM Configuration Database from Backup

You can restore the configuration database of a specific disk group or volume or the entire configuration (all disk groups and volumes except those associated with the boot disk). If you have saved multiple versions of the configuration, you can choose a specific one to restore. If you do not choose one, LSM restores the most recent version.

Note

Restoring the configuration database does not restore data in the LSM volumes. See Section 5.4.3 for information on restoring volume data.

To restore a backed-up LSM configuration database:

  1. Optionally, display a list of all available database backups:

    # ls /usr/var/lsm/db
    

    If you saved the configuration database to a different directory, specify that directory.

  2. Restore the chosen configuration database:
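
    The exact command depends on what you want to restore; see the volrestore(8) reference page for the supported options. For example, a command like the following restores the most recent saved configuration for a disk group called dg1 from the default backup location:

    # volrestore -g dg1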

  3. Start the restored LSM volumes:

    # volume -g disk_group startall
    

    If the volumes will not start, you might need to manually edit the plex state. See Section 6.4.3.

  4. If necessary, restore the volume contents (data) from backup. See Section 5.4.3 for more information.

5.3.3    Changing the Size and Number of Configuration Database Copies

LSM maintains copies of the configuration database on separate physical disks within each disk group. When the disk group runs out of space in the configuration database, LSM displays a message similar to the following:

volmake: No more space in disk group configuration

This might happen in an LSM configuration that you restored from a system running a version of the operating system prior to Version 5.0. Earlier versions of LSM have smaller configuration databases.

If a disk group runs out of space in its configuration database, you can reduce the number of copies of the configuration database on some of its disks. However, make sure that sufficient copies remain available for redundancy.

To reduce the number of configuration copies:

  1. Display information about the disk group's configuration database:

    # voldg [-g disk_group] list
    

    The following example displays the number, size, and disk location of the configuration database information in the rootdg disk group:

    Group:   rootdg
    dgid:   783105689.1025.lsm
    import-id: 0.1
    flags:
    config:  seqno=0.1112 permlen=173 free=166 templen=6 loglen=26
    config disk dsk13 copy 1 len=173 state=clean online
    config disk dsk13 copy 2 len=173 state=clean online
    config disk dsk11g copy 1 len=347 state=clean online
    config disk dsk10g copy 1 len=347 state=clean online
    log disk dsk11g copy 1 len=52
    log disk dsk13 copy 1 len=26
    log disk dsk13 copy 2 len=26
    log disk dsk10g copy 1 len=52
    

    In the previous example:

  2. Reduce the number of configuration copies on a disk that has more than one:

    # voldisk moddb disk nconfig=1
    

    For example, to reduce the number of configuration copies on dsk13 from two to one, enter:

    # voldisk moddb dsk13 nconfig=1
    

5.4    Managing Volumes

The following sections describe how to use LSM commands to manage LSM volumes. See Chapter 4 for information on creating LSM volumes.

5.4.1    Displaying LSM Volume Information

The volprint command displays information about LSM objects that make up an LSM volume. The following table lists the abbreviations used in volprint output:

Abbreviation   Specifies
dg             Disk group name
dm             Disk media name
pl             Plex name
sd             Subdisk name
v              LSM volume name

To display LSM object information for an LSM volume, enter:

# volprint [-g disk_group] -ht volume

Information similar to the following is displayed:

Disk group: rootdg  [1]
 
V  NAME         USETYPE      KSTATE   STATE    LENGTH   READPOL   PREFPLEX
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
 
v  klavol       fsgen        ENABLED  ACTIVE   4096     SELECT    -  [2]
pl klavol-01    klavol       ENABLED  ACTIVE   4224     STRIPE    3/128    RW  [3]
sd dsk1-01      klavol-01    dsk1     0        1408     0/0       dsk1     ENA  [4]
sd dsk2-01      klavol-01    dsk2     65       1408     1/0       dsk2     ENA
sd dsk3-01      klavol-01    dsk3     0        1408     2/0       dsk3     ENA
pl klavol-02    klavol       ENABLED  ACTIVE   4224     STRIPE    3/128    RW
sd dsk4-01      klavol-02    dsk4     0        1408     0/0       dsk4     ENA
sd dsk5-01      klavol-02    dsk5     0        1408     1/0       dsk5     ENA
sd dsk6-01      klavol-02    dsk6     0        1408     2/0       dsk6     ENA
pl klavol-03    klavol       ENABLED  ACTIVE   LOGONLY  CONCAT    -        RW
sd dsk2-02      klavol-03    dsk2     0        65       LOG       dsk2     ENA
 

This example shows output for a volume with two three-column striped data plexes (one mirroring the other) and a DRL plex.

  1. Disk group name

  2. Volume name (klavol), usage type (fsgen), state (ENABLED ACTIVE), and size (4096) information.

  3. Plex information. This volume has two data plexes, klavol-01 and klavol-02, and a DRL plex, klavol-03.

  4. Subdisk information for the plex klavol-01.

5.4.2    Backing Up an LSM Volume

One of the more common tasks of a system administrator is helping users recover lost or corrupted files. To perform that task effectively, you must set up procedures for backing up LSM volumes and the LSM configuration database at frequent and regular intervals. If you need to restore a volume after a major failure (for example, multiple disks in the same volume failed, and those disks contained the active configuration records for the disk group), you will need the saved configuration database as well as the backed-up data.

See Section 5.3.1 for information on backing up the LSM configuration database.

Note

If the volume is part of an Advanced File System domain, use the AdvFS backup utilities instead of LSM to back up the volume. See your AdvFS documentation for more information on the backup utilities available. See Section 5.3.1 for more information on backing up the LSM configuration database.

The way you back up an LSM volume depends on the number and type of plexes in the volume:

  • For a volume with a single concatenated or striped plex, see Section 5.4.2.1.

  • For a volume with mirror plexes, see Section 5.4.2.2.

  • For a volume with a RAID 5 plex, see Section 5.4.2.3.

5.4.2.1    Backing Up a Volume with a Single Concatenated or Striped Plex

To back up an LSM volume that has a single plex:

  1. If necessary, select a convenient time, and ask users to save their files and refrain from using the volume (that is, the application or file system that uses the volume) while you back it up.

  2. Determine the size of the LSM volume and which disks it uses:

    # volprint -v [-g disk_group] volume
    

  3. Ensure there is enough free space in the disk group to create a temporary copy of the LSM volume. The free space must be on disks that are not used in the volume you want to back up:

    # voldg [-g disk_group] free
    

  4. If the volume contains a UNIX File System, unmount it.

  5. Create a temporary mirror plex for the LSM volume, running this operation in the background:

    # volassist snapstart volume &
    

  6. Create a new volume from the temporary plex. (The snapshot keyword automatically uses the temporary plex to create the new volume.)

    # volassist snapshot volume temp_volume
    

    The following example creates a temporary LSM volume called vol1_backup for an LSM volume called vol1:

    # volassist snapshot vol1 vol1_backup
    

  7. Remount and resume use of the original LSM volume.

  8. Start the temporary LSM volume:

    # volume start temp_volume
    

  9. Back up the temporary LSM volume to your default backup device:

    # dump 0 /dev/rvol/disk_group/temp_volume
    

    The following example backs up an LSM volume called vol1_backup in the rootdg disk group:

    # dump 0 /dev/rvol/rootdg/vol1_backup
    

  10. Stop and remove the temporary LSM volume:

    # volume stop temp_volume
    # voledit -r rm temp_volume
    

See the dump(8) reference page for more information about the dump command.

5.4.2.2    Backing Up a Volume with Mirror Plexes

Volumes with mirror plexes can remain in use while you back up their data, but any writes to the volume during the backup might result in inconsistency between the volume's data and the data that was backed up.

Caution

If the LSM volume has only two data plexes, redundancy is not available during the backup.

To back up an LSM volume that has mirror plexes:

  1. Dissociate one of the volume's plexes, which leaves the plex as an image of the LSM volume at the time of dissociation:

    # volplex dis plex
    

    The following example dissociates a plex called vol01-02:

    # volplex dis vol01-02
    

  2. Create a temporary LSM volume using the dissociated plex. Run the command in the background, as it might take a long time depending on the size of the plex:

    # volmake -U fsgen vol temp_volume plex=plex &
    

    The following example creates an LSM volume called vol01-temp using a plex called vol01-02:

    #  volmake -U fsgen vol vol01-temp plex=vol01-02 &
    

  3. Start the temporary volume:

    # volume start temp_volume
    

  4. Back up the temporary LSM volume to your default backup device:

    # dump 0 /dev/rvol/disk_group/temp_volume
    

    The following example backs up an LSM volume called vol01-temp in the rootdg disk group:

    # dump 0 /dev/rvol/rootdg/vol01-temp
    

  5. Stop and remove the temporary LSM volume:

    # volume stop temp_volume
    # voledit -r rm temp_volume
    

  6. Reattach the dissociated plex to the original volume. If the volume is very large, you can run this operation in the background:

    # volplex att volume plex &
    

    LSM automatically resynchronizes the plexes when you reattach the dissociated plex. This operation might take a long time, depending on the size of the volume. Running this process in the background returns control of the system to you immediately instead of after the resynchronization is complete.

See the dump(8) reference page for more information about the dump command.

5.4.2.3    Backing Up a Volume with a RAID 5 Plex

You can back up a volume that uses a RAID 5 plex. Either stop all applications from using the volume while the backup is in progress, or allow the backup to proceed while the volume remains in use.

If the volume remains in use during the backup, the volume data might change before the backup completes, so the backup will not be an exact copy of the volume's contents.

To back up a volume with a RAID 5 plex, enter:

# dump 0 /dev/rvol/disk_group/volume
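
For example, to back up a volume called vol5 in the rootdg disk group, enter:

# dump 0 /dev/rvol/rootdg/vol5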

5.4.3    Restoring an LSM Volume from Backup

The way you restore an LSM volume depends on what the volume is used for and on whether the volume is configured and active.

Note

If the volume is part of an AdvFS domain, consult your AdvFS documentation for the best method of restoring backed-up data.

If the volume is used for an application such as a database, see that application's documentation for the recommended method for restoring backed-up data.

To restore a backed-up volume:

5.4.4    Starting an LSM Volume

LSM automatically starts volumes when the system boots. You can manually start an LSM volume if you:

To start an LSM volume, enter:

# volume start [-g disk_group] volume

To start all volumes in a disk group (for example, after importing the disk group), enter:

# volume [-g disk_group] startall
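
For example, to start an LSM volume called vol1 in a disk group called dg1, enter:

# volume -g dg1 start vol1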

5.4.5    Stopping an LSM Volume

LSM automatically stops LSM volumes when the system shuts down. When you no longer need an LSM volume, you can stop it and then remove it. You cannot stop an LSM volume while a file system is using it.

To stop an LSM volume:

  1. If applicable, stop a file system from using the LSM volume.

  2. Stop the LSM volume:

    # volume [-g disk_group] stop volume
    

    For example, to stop an LSM volume called vol1 in the dg1 disk group, enter:

    # volume -g dg1 stop vol1
    

    To stop all volumes, enter:

    # volume stopall
    

5.4.6    Removing an LSM Volume

Removing an LSM volume destroys the data in that volume. Remove an LSM volume only if you are sure that you do not need the data or that it is backed up elsewhere. When you remove an LSM volume, the space it occupied is returned to the free space pool.

The following procedure also unencapsulates UNIX File Systems or Advanced File Systems.

To remove an LSM volume:

  1. If applicable, stop a file system from using the LSM volume.

  2. If the volume was configured as secondary swap, remove references to the LSM volume from the vm:swapdevice entry in the sysconfigtab file. If the swap space was configured using the /etc/fstab file, update this file accordingly. These changes are effective on the next reboot.

    See the System Administration guide and the swapon(8) reference page for more information.

  3. Stop the LSM volume:

    # volume [-g disk_group] stop volume
    

  4. Remove the LSM volume:

    # voledit -r rm volume
    

    This step removes the volume's plexes and subdisks, as well as the volume itself.

  5. If the volume contained an encapsulated file system, do one of the following:

5.4.7    Recovering an LSM Volume

You might need to recover an LSM volume that has become disabled. Alert icons and the Alert Monitor window might provide information when an LSM volume recovery is needed. (See the System Administration guide for more information about the Alert Monitor.) Recovering an LSM volume starts the disabled volume and, if applicable, resynchronizes mirror plexes or RAID 5 parity.

To recover an LSM volume, enter the following command, specifying either the volume name or a disk name (specify a disk when several volumes that use it need recovery):

# volrecover [-g disk_group] -sb volume|disk

(The -s option starts all disabled volumes, and the -b option runs the command in the background.)

For example, to recover an LSM volume called vol01, enter:

# volrecover -sb vol01

To recover all LSM objects (subdisks, plexes, or volumes) that use a disk called dsk5, enter:

# volrecover -sb dsk5

If you do not specify a disk group, LSM volume name, or disk name, all volumes are recovered. If recovery of an LSM volume is not possible, restore the LSM volume from backup.

5.4.8    Renaming an LSM Volume

You can rename an LSM volume. The new LSM volume name must be unique within the disk group. If the LSM volume has a file system or is part of an AdvFS domain, you must update the /etc/fstab file and the /etc/fdmns directory accordingly.

To rename an LSM volume, enter:

# voledit rename old_volume new_volume
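
For example, to rename an LSM volume called vol1 to vol2, enter:

# voledit rename vol1 vol2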

Be sure to update the relevant files in the /etc directory. If you do not, then after the system reboots, commands that refer to a volume by its previous name will fail.

5.4.9    Changing LSM Volume Permission, User, and Group Attributes

By default, the device special files for LSM volumes are created with read and write permissions granted only to the owner. Databases or other applications that perform raw I/O might require device special files to have other settings for the permission, user, and group attributes.

You must use LSM commands to change the permission, user, and group attributes for LSM volumes. The LSM commands ensure that settings for these attributes are stored in the LSM database, which keeps track of all settings for LSM objects.

Do not change the permission, user, or group attributes by using the chmod, chown, or chgrp commands directly on the device special files associated with LSM volumes. These standard UNIX commands do not store the required values in the LSM configuration database.

To change Tru64 UNIX user, group, and permission attributes, enter:

# voledit [-g disk_group] set \
user=username group=groupname mode=permission volume

The following example changes the user, group, and permission attributes for an LSM volume called vol1:

# voledit set user=new_user group=admin mode=0600 vol1

5.5    Managing Plexes

The following sections describe how to use LSM commands to manage plexes.

5.5.1    Displaying Plex Information

You can display information about all plexes or about one specific plex.

5.5.1.1    Displaying General Plex Information

To display general information for all plexes, enter:

# volprint -pt

Information similar to the following is displayed:

Disk group: rootdg
 
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
 
pl tst-01       tst          ENABLED  ACTIVE   262144   CONCAT    -        RW
pl tst-02       tst          DETACHED STALE    262144   CONCAT    -        RW
pl vol5-01      vol5         ENABLED  ACTIVE   409696   RAID      8/32     RW
pl vol5-02      vol5         ENABLED  LOG      2560     CONCAT    -        RW
 

5.5.1.2    Displaying Detailed Plex Information

To display detailed information about a specific plex, enter:

# volprint -lp [plex]

Information similar to the following is displayed:

Disk group: rootdg
 
Plex:   p1
info:   len=500
type:   layout=CONCAT
state:  state=EMPTY kernel=DISABLED io=read-write
assoc:  vol=v1 sd=dsk4-01
flags:  complete
 
Plex:   p2
info:   len=1000
type:   layout=CONCAT
state:  state=EMPTY kernel=DISABLED io=read-write
assoc:  vol=v2 sd=dsk4-02
flags:  complete
 
Plex:   vol_mir-01
info:   len=256
type:   layout=CONCAT
state:  state=ACTIVE kernel=ENABLED io=read-write
assoc:  vol=vol_mir sd=dsk2-01
flags:  complete
 
Plex:   vol_mir-02
info:   len=256
type:   layout=CONCAT
state:  state=ACTIVE kernel=ENABLED io=read-write
assoc:  vol=vol_mir sd=dsk3-01
flags:  complete
 
Plex:   vol_mir-03
info:   len=0 (sparse)
type:   layout=CONCAT
state:  state=ACTIVE kernel=ENABLED io=read-write
assoc:  vol=vol_mir sd=(none)
flags:
logging: logsd=dsk3-02 (enabled)

5.5.2    Adding a Data Plex

You can add a data plex to a volume to create a mirror data plex. You cannot create a mirror data plex on a disk that already contains a data plex for the volume.

The data from the original plex is copied to the added plex, and the plexes are synchronized. This process can take a long time depending on the size of the volume, so you should run the command in the background (using the & operator).

Note

Adding a data plex does not add a DRL plex to the volume. It is highly recommended that volumes with mirror plexes have a DRL plex. See Section 5.5.3 for more information on adding a log plex to a volume.

To add a data plex, enter:

#  volassist mirror volume [disk] &
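
For example, to add a data plex to an LSM volume called vol1, using an LSM disk called dsk10, enter:

# volassist mirror vol1 dsk10 &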

5.5.3    Adding a Log Plex

You can add a log plex (DRL plex or RAID 5 log plex) to a volume that has mirrored data plexes or a RAID 5 data plex. However, if the volume is used for secondary swap, it should not have a DRL. You use the same command to add both DRL plexes and RAID 5 logs.

To improve performance, the DRL plex should not be on the same disk as any of the volume's data plexes. To ensure that LSM does not create the DRL plex on such a disk, use the volprint -ht command to display the volume's configuration, identify an LSM disk that is not part of the volume, and specify that disk when you add the log.

To add a log plex to a volume, enter:

#  volassist addlog volume disk
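
For example, to add a log plex to an LSM volume called vol1 on an LSM disk called dsk9, enter:

# volassist addlog vol1 dsk9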

5.5.4    Moving Data to a New Plex

You can move the data from a striped or concatenated plex to a new plex to:

For a move operation to be successful:

To move data from one plex to another:

  1. Display the size of the plex you want to move:

    # volprint -ht volume
    

    Information similar to the following is displayed:

    Disk group: rootdg
     
    V  NAME         USETYPE      KSTATE   STATE    LENGTH   READPOL   PREFPLEX
    PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
    SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
     
    v  DataVol      fsgen        ENABLED  ACTIVE   204800   SELECT    -
    pl DataVol-01   DataVol      ENABLED  ACTIVE   204800   STRIPE    8/128    RW
    sd dsk0-01      DataVol-01   dsk0     0        25600    0/0       dsk0     ENA
    sd dsk1-01      DataVol-01   dsk1     0        25600    1/0       dsk1     ENA
    sd dsk2-01      DataVol-01   dsk2     0        25600    2/0       dsk2     ENA
    sd dsk3-01      DataVol-01   dsk3     0        25600    3/0       dsk3     ENA
    sd dsk4-01      DataVol-01   dsk4     0        25600    4/0       dsk4     ENA
    sd dsk6-01      DataVol-01   dsk6     65       25600    5/0       dsk6     ENA
    sd dsk7-01      DataVol-01   dsk7     0        25600    6/0       dsk7     ENA
    sd dsk8-01      DataVol-01   dsk8     0        25600    7/0       dsk8     ENA
    pl DataVol-02   DataVol      ENABLED  ACTIVE   204800   STRIPE    8/128    RW
    sd dsk10-01     DataVol-02   dsk10    0        25600    0/0       dsk10    ENA
    sd dsk11-01     DataVol-02   dsk11    0        25600    1/0       dsk11    ENA
    sd dsk12-01     DataVol-02   dsk12    0        25600    2/0       dsk12    ENA
    sd dsk13-01     DataVol-02   dsk13    0        25600    3/0       dsk13    ENA
    sd dsk14-01     DataVol-02   dsk14    0        25600    4/0       dsk14    ENA
    sd dsk15-01     DataVol-02   dsk15    0        25600    5/0       dsk15    ENA
    sd dsk18-01     DataVol-02   dsk18    0        25600    6/0       dsk18    ENA
    sd dsk19-01     DataVol-02   dsk19    0        25600    7/0       dsk19    ENA
    pl DataVol-03   DataVol      ENABLED  ACTIVE   LOGONLY  CONCAT    -        RW
    sd dsk6-02      DataVol-03   dsk6     0        65       LOG       dsk6     ENA
     
    

    In this example, the volume has two striped data plexes of 204800 sectors (100 MB).

  2. Ensure there is enough space on other LSM disks to move the plex's data.

  3. Create a new plex with the characteristics you want.

  4. Enter the following command line (set to run in the background) to attach the new plex to the volume and move the data from the old plex to the new plex, optionally removing the old plex upon successful completion of the move:

    # volplex [-o rm] mv old_plex new_plex &
    

    The volume remains active and usable during this operation.

5.5.5    Reattaching a Plex

If you dissociated a plex from a volume and did not remove it and its associated objects, you can reattach the plex to the volume.

To reattach a plex to a volume, enter:

# volplex att volume plex
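
For example, to reattach a plex called vol01-02 to an LSM volume called vol01, enter:

# volplex att vol01 vol01-02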

5.5.6    Removing a Plex

You can remove a plex from an LSM volume to reduce the number of plexes in a volume.

Note

The following restrictions apply:

To remove a data plex from a volume with mirror plexes:

  1. Dissociate the plex from its volume, and optionally remove the old plex after successful completion of the dissociation:

    # volplex [-o rm] dis plex
    

  2. If you did not use the option in step 1, remove the plex:

    # voledit -r rm plex
    

Removing the plex also removes all associated subdisks in that plex. The disks remain under LSM control, and you can use them for other volumes or remove them from LSM control.
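
For example, to dissociate and remove a plex called vol01-02 in one operation, enter:

# volplex -o rm dis vol01-02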

To remove the log plex from a RAID 5 volume:

  1. Dissociate the log plex from the RAID 5 volume (using the -o force option):

    # volplex -o force dis log_plex
    

  2. Remove the plex and its subdisks:

    # voledit -r rm log_plex
    

5.6    Managing Subdisks

The following sections describe how to use LSM commands to manage subdisks.

5.6.1    Displaying Subdisk Information

You can display information about all subdisks or one specific subdisk.

5.6.1.1    Displaying General Subdisk Information

To display general information for all subdisks, enter:

# volprint -st

Information similar to the following is displayed:

Disk group: rootdg
 
SD NAME      PLEX        DISK   DISKOFFS    LENGTH  [COL/]OFF DEVICE  MODE
 
sd dsk2-01   vol_mir-01  dsk2   0           256        0      dsk2    ENA
sd dsk3-02   vol_mir-03  dsk3   0            65      LOG      dsk3    ENA
sd dsk3-01   vol_mir-02  dsk3   65          256        0      dsk3    ENA
sd dsk4-01   p1          dsk4   17          500        0      dsk4    ENA
sd dsk4-02   p2          dsk4   518        1000        0      dsk4    ENA
 

5.6.1.2    Displaying Detailed Subdisk Information

To display detailed information about a specific subdisk, enter:

# volprint -l subdisk

The following example shows information about a subdisk called dsk12-01:

Disk group: rootdg
 
Subdisk:  dsk12-01
info:     disk=dsk12 offset=0 len=2560
assoc:    vol=vol5 plex=vol5-02 (offset=0)
flags:    enabled
device:   device=dsk12 path=/dev/disk/dsk12g diskdev=82/838

5.6.2    Joining Subdisks

You can join two or more subdisks to form a single, larger subdisk. Subdisks can be joined only if they belong to the same LSM volume and occupy adjacent regions of the same disk. For a volume with striped plexes, the subdisks must be in the same column. The joined subdisk can have a new subdisk name or retain the name of one of the subdisks being joined.

To join subdisks, enter:

# volsd join subdisk1 subdisk2 new_subdisk
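
For example, to join subdisks called dsk5-01 and dsk5-02 into a new subdisk called dsk5-03, enter:

# volsd join dsk5-01 dsk5-02 dsk5-03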

5.6.3    Splitting Subdisks

You can divide a subdisk into two smaller subdisks. Once split, you can move the data in the smaller subdisks to different disks. This is useful for reorganizing volumes or improving performance. The new, smaller subdisks occupy adjacent regions within the same disk space that the original subdisk occupied.

You must specify a size for the first subdisk; the second subdisk consists of the rest of the space in the original subdisk.

If the subdisk to be split is associated with a plex, both of the resultant subdisks are associated with the same plex. You cannot split a log subdisk.

To split a subdisk and assign each subdisk a new name, enter:

# volsd -s size split original_subdisk new_subdisk1 new_subdisk2

To split a subdisk and retain the original name for the first subdisk and assign a new name to the second subdisk, enter:

# volsd -s size split original_subdisk new_subdisk
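
For example, to split a subdisk called dsk5-01 into subdisks called dsk5-02 and dsk5-03, where the first new subdisk is 1000 blocks, enter:

# volsd -s 1000 split dsk5-01 dsk5-02 dsk5-03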

5.6.4    Moving Subdisks

You can move the data in subdisks to a different disk to improve performance. The disk space occupied by the data in the original subdisk is returned to the free space pool.

Ensure that the following conditions are met before you move data in a subdisk:

To move data from one subdisk to another, enter:

# volsd mv source_subdisk target_subdisk
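
For example, to move the data from a subdisk called dsk5-01 to a subdisk called dsk6-01, enter:

# volsd mv dsk5-01 dsk6-01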

5.6.5    Removing a Subdisk

You can remove a subdisk that is not associated with or needed by an LSM volume. Removing a subdisk returns its disk space to the free space pool in the disk group. To remove a subdisk, you must dissociate it from its plex and then remove it.

To remove a subdisk:

  1. Display information about the subdisk to identify any volume or plex associations:

    # volprint -l subdisk
    

  2. Do one of the following to remove the subdisk: