Configuring LSM for maximum performance


What is LSM?

The host-based storage management utility for Tru64 UNIX is the Logical Storage Manager, or LSM.

LSM allows system administrators to take advantage of software RAID, be it concatenation, striping (RAID0), mirroring (RAID1) or RAID5. To do so, administrators create volumes (virtual disks) using available physical disks. An LSM volume is treated just like any other disk, but its data can be spread across multiple physical disks in whatever RAID fashion suits the associated application.

[Figure: LSM volume]
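
For illustration, the following sketch shows how a volume of each type might be created with the volassist command. The disk group name (datadg), volume names, and sizes are assumptions, and the exact layout attribute spellings can vary by LSM version; see volassist(8).

    # Concatenated volume (default layout)
    volassist -g datadg make concatvol 2g

    # Striped (RAID0) volume across four disks
    volassist -g datadg make stripevol 2g layout=stripe nstripe=4

    # Mirrored (RAID1) volume with two plexes
    volassist -g datadg make mirrorvol 2g layout=mirror nmirror=2

    # RAID5 volume
    volassist -g datadg make raid5vol 2g layout=raid5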

LSM configuration changes can be made on-line without disrupting users, thus allowing an I/O workload to be immediately re-balanced as needed.

LSM works in both single-system and TruCluster environments, allowing for uninterrupted service if a cluster member leaves the cluster.

   

How should I configure LSM for maximum performance?

Configuring LSM for maximum performance depends on the volume type and the underlying hardware.

The following are some general guidelines to keep in mind when creating an LSM volume.

  • Choose the type of volume based on application requirements and available hardware. Note the tradeoffs between performance, price and availability.

[Figure: LSM volume type]

  • For LSM mirrored volumes, mirror across disk controllers for high availability. If each mirror of a mirrored volume is striped, then stripe down controllers, that is, keep each striped plex on disks behind its own controller so the plexes remain on separate controllers.
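
    As a sketch, you might restrict allocation to disks known to sit behind different controllers; the disk group and disk names below (dsk10 assumed to be on one controller, dsk20 on another) are assumptions:

        # Create a two-way mirror, one plex per controller
        volassist -g datadg make mirvol 4g layout=mirror nmirror=2 dsk10 dsk20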
     
  • If you are configuring Dirty-Region-Logging (DRL) for mirrored volumes to improve mirrored volume recovery time after a system crash, avoid creating DRLs on the same disks that store volume data.
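
    For example, a DRL log might be added on a disk that holds none of the volume's data; the command form and the disk name (dsk30) are assumptions based on volassist(8):

        # Add a dirty-region log on dsk30, which stores no data for mirvol
        volassist -g datadg addlog mirvol dsk30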
     
  • If the application typically issues multiple simultaneous I/Os to an LSM striped volume (mirrored or not), avoid splitting individual I/Os: choose a stripe size equal to the application's I/O transfer size, so each I/O is serviced by a single disk and the concurrent I/Os are balanced across the disks in the stripe set. Otherwise, for large single-stream I/Os, choose a stripe size that splits each I/O evenly across all the disks in the stripe set (for example, the transfer size divided by the number of disks). Note that LSM's default stripe width, 64 KB, works well for most environments. Workloads often change or are not easily characterized, so do not over-optimize for such applications.
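
    To make the arithmetic concrete (volume names, disk group, and the stripe-width attribute spelling are assumptions; check volassist(8) on your version): for many concurrent 64 KB I/Os, a 64 KB stripe width lets each I/O be serviced by a single disk, while for single-stream 1 MB I/Os over four disks, a 256 KB stripe width (1 MB / 4) splits each I/O evenly across all four disks:

        # Many simultaneous small I/Os: stripe width = I/O size
        volassist -g datadg make oltpvol 8g layout=stripe nstripe=4 stripewidth=64k

        # Large single-stream I/Os: stripe width = 1 MB / 4 disks = 256 KB
        volassist -g datadg make seqvol 8g layout=stripe nstripe=4 stripewidth=256k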
     
  • If the application typically issues a large number of small I/Os to an LSM striped, non-mirrored volume on a system with multiple CPUs (GS1280, for example), use the LSM sysconfig variable, Max_LSM_IO_PERFORMANCE, to reduce the need for certain spinlocks, thus further improving performance. Note that this sysconfig variable is available in V5.1B of Tru64 UNIX.
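
    A sketch of querying and setting the attribute follows; the runtime value shown (1) is an assumption, so check sys_attrs_lsm(5) for the supported values on your system:

        # Show the current LSM subsystem attributes
        sysconfig -q lsm

        # Set the attribute at runtime (value is an assumed example)
        sysconfig -r lsm Max_LSM_IO_PERFORMANCE=1

    To keep the setting across reboots, add the attribute to the lsm stanza in /etc/sysconfigtab (for example, with sysconfigdb).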
     
  • For LSM RAID5 volumes, choose a stripe size such that typical writes cover a full stripe, avoiding the read-modify-write cycle required for partial-stripe writes. The default stripe size, 16 KB, works well for most environments.
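
    As a worked example (names and attribute spellings assumed): a five-column RAID5 volume holds four data chunks plus one parity chunk per stripe, so a 16 KB stripe size yields a full-stripe write of 4 x 16 KB = 64 KB. An application issuing aligned 64 KB writes then avoids the read-modify-write cycle:

        # 5 columns = 4 data + 1 parity; full-stripe write = 4 * 16 KB = 64 KB
        volassist -g datadg make r5vol 8g layout=raid5 nstripe=5 stripewidth=16k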
     
  • If you are using LSM along with hardware RAID, avoid striping at both the LSM and hardware RAID levels unless you configure a large LSM stripe width over multiple hardware RAID units. For a performance benefit, stripe the LSM volume over multiple RAID sets on different controllers, with the LSM stripe size set to a multiple of the full hardware RAID stripe size. The following diagram illustrates this configuration, and a command sketch follows it.

[Figure: LSM RAID]

Otherwise, use LSM to mirror across hardware RAID units for high availability.
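
Both cases can be sketched with volassist; the LUN names (dsk40 and dsk50, assumed to be hardware RAID units on different controllers), sizes, and attribute spellings are assumptions. If each unit is an 8-disk stripe with a 64 KB chunk, the full hardware stripe is 8 x 64 KB = 512 KB, so the LSM stripe size should be a multiple of 512 KB:

    # Stripe over two hardware RAID units (full hardware stripe = 512 KB)
    volassist -g datadg make hwstripe 50g layout=stripe nstripe=2 stripewidth=512k dsk40 dsk50

    # Or mirror across the two hardware RAID units for high availability
    volassist -g datadg make hwmirror 50g layout=mirror nmirror=2 dsk40 dsk50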

  • If in a single-system environment, use LSM's volstat command to help benchmark performance. If in a TruCluster environment, run volstat from the cluster member driving the I/O.
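
    For example (disk group name assumed), reset the counters before a benchmark run and then sample at fixed intervals; the option letters shown are assumptions patterned on volstat(8), so verify them on your system:

        # Clear statistics, then report every 5 seconds, 10 times
        volstat -g datadg -r
        volstat -g datadg -i 5 -c 10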


LSM maintains four to eight copies of the configuration database, which describes the LSM configuration. Spread these copies across controllers for high availability; LSM cannot run without loading its configuration database.

The following are some general guidelines to keep in mind when initializing LSM.

  • Ensure configuration database copies are spread across controllers. In the V5.X stream of Tru64 UNIX, LSM will do this for you for storage that is not fibre-connected. For fibre-connected storage, manually spread LSM configuration databases across controllers within the fabric for high availability.
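
    To verify the spread, you can inspect each disk's private region; the disk name below is an assumption:

        # List all LSM disks and their disk groups
        voldisk list

        # Detailed view of one disk; the output includes the state
        # of any configuration database copies on that disk
        voldisk list dsk10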
     
  • Disks under LSM control have a private region, which in part stores the LSM configuration database. In the V5.X stream of Tru64 UNIX, the default private region size was increased from 1024 sectors to 4096 sectors. If you are creating a small LSM configuration (generally fewer than 10 physical disks under LSM control), consider using a smaller private region size, because the private region size affects how long it takes to start up LSM and to make configuration changes. Although the difference is only a matter of seconds, if every second counts, a smaller private region size is the right choice.
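
    A sketch of initializing a disk with a smaller private region; the privlen attribute name and its unit (sectors) are assumptions based on voldisksetup(8), so verify them before use:

        # Initialize dsk5 for LSM with a 1024-sector private region
        # instead of the V5.X default of 4096 sectors
        voldisksetup -i dsk5 privlen=1024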
     
  • When physical disks are added to LSM, they are organized into disk groups. In a TruCluster environment, keep LSM disk groups entirely on shared storage whenever possible, avoiding hybrid disk groups, in which more than one cluster member must be available to access the group's volumes. Hybrid disk groups present a host of problems when cluster members leave and join the cluster while configuration changes are occurring. The following diagrams illustrate these configurations; a command sketch follows them.

    [Figure: LSM highly available disk group]

    [Figure: LSM hybrid disk group (avoid!)]
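
As a sketch (disk group and disk names assumed; the voldg init argument form may vary, see voldg(8)), a highly available disk group is built only from disks on the cluster's shared bus, so any member can access its volumes:

    # Create a disk group entirely on shared storage
    voldg init shareddg dsk10 dsk11 dsk12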