Chapter 9. XVM Volume Manager Operation

This chapter describes various aspects of the way the XVM Volume Manager operates. It includes the following sections:

  • "Cluster System Startup"

  • "Mirror Revives"

  • "Mirror Revives on Recovery in a Cluster"

  • "XVM Mirror Revive Resources"

  • "XVM Subsystem Parameters"

Cluster System Startup

When you boot a cluster system that includes XVM logical volumes, the following operations take place:

  1. The system boots and probes all disks (SGI SAN disks, FC-hub disks, internal SCSI disks, and so on).

  2. If booting from an XVM system disk, the XVM Volume Manager reads all the XVM labels and creates a local view of all volumes. Cluster volumes are not visible at this point.

  3. An rc script initializes third-party SAN devices, such as PRISA.

  4. If not booting from an XVM system disk, an rc script initiates the reading of all the labels and creates a view of all local volumes. Cluster volumes are not visible at this point.

  5. The cluster is initialized.

  6. On each node in the cluster, the XVM Volume Manager reads all of the labels on the disks and creates a cluster-wide view of all volumes, including the third-party SAN volumes.

Note that this procedure implies that XVM cluster volumes are not visible until the cluster has been initialized (i.e., cluster volumes are unavailable in single-user mode). Because of the order of XVM initialization, your root device cannot be on a third-party SAN disk.
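Once the cluster has been initialized, you can verify that cluster volumes are visible by switching the xvm command-line interface to the cluster domain and displaying a volume. This is a minimal sketch; it assumes that your release of the xvm CLI supports the set domain command and that a cluster volume named vol1 exists (substitute one of your own volume names):

xvm:local> set domain cluster
xvm:cluster> show vol/vol1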

Mirror Revives

A mirror revive is the process of synchronizing data on the members of a mirror. A mirror revive is initiated at the following times:

  • A mirror with more than one piece is initially constructed.

  • A piece is attached to a mirror.

  • The system is booted with mirrors that are not synchronized.

  • A node in a cluster crashes when the mirror is open. For information on this situation, see “XVM Mirror Revive Resources”.

A message is written to the SYSLOG when a mirror begins reviving. Another message is written to the SYSLOG when this process is complete. Should the revive fail for any reason, a message will be written to the system console as well as to the SYSLOG.
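If you need to confirm after the fact whether a revive started, completed, or failed, you can search the system log for revive messages. The path shown below is the default IRIX SYSLOG location, and the grep pattern is only a rough filter; adjust both for your system:

# grep -i revive /var/adm/SYSLOG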

For large mirror components, the process of reviving may take a long time. You cannot halt a mirror revive once it has begun except by detaching all but one of the pieces of the mirror.

There are some mirrors that may not need to revive on creation or when the system reboots. For information on creating these mirrors, see ??? and “The -norevive Mirror Creation Option” in Chapter 2.

While a mirror is in the process of reviving, you can configure the XVM logical volume that contains the mirror, and you can perform I/O to the mirror. Displaying the mirror volume element will show what percentage of the mirror blocks have been synchronized.

If a mirror revive is required while a previously initiated mirror revive is still occurring, the new revive is queued; this queued state is displayed as the state of the mirror when you display its topology.
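For example, you can check the revive progress or the queued state by displaying the mirror's topology with the show command. The volume name mirrorvol below is hypothetical, and this sketch assumes the -top option of show, which displays the topology of an object:

xvm:local> show -top vol/mirrorvol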

You can modify the system performance of mirror revives with the XVM tunable parameters. For information on the XVM tunable parameters that affect mirror revives, see “XVM Mirror Revive Resources”.

Mirror Revives on Recovery in a Cluster

When a node in a cluster crashes, a mirror that the crashed node was using may start reviving. This happens because the crashed node may have left the mirror in a dirty state, with the pieces of the mirror unequal. When this occurs, the XVM Volume Manager must forcibly resynchronize all of the pieces of the mirror.

A full mirror resynchronization is performed whenever a node crashes while it was using a mirror. This may take a significant amount of time.

XVM Mirror Revive Resources

If the performance of mirror revives on your system seems slow, you may need to reconfigure the mirror revive resources. The mirror revive resources are dynamic variables that are set by XVM tunable parameters.

You can increase or decrease the number of parallel I/O processes that are used for the revive process; this number is controlled by the xvm_max_revive_rsc parameter. Decreasing the resources causes less interference with an open file system at the cost of increasing the total time to revive the data.

Under the IRIX operating system, you should decrease the number of threads available to do work if you are sharing XLV and XVM mirrors on the same system; this number is controlled by the xvm_max_revive_threads parameter. This will prevent the XVM Volume Manager from stealing too many resources from the XLV Volume Manager. You can increase the number of threads if you want more revives to run in parallel.

As a general guideline:

  • Increase the xvm_max_revive_rsc variable if you want to revive as quickly as possible and do not mind the performance impact on normal I/O processes.

  • Decrease the xvm_max_revive_rsc variable if you want revives to have a smaller impact on a particular filesystem.

  • Under IRIX, decrease the xvm_max_revive_threads parameter if the XLV and XVM Volume Managers are sharing the same system.

Modifying Mirror Revive Resources under IRIX

Under IRIX, the mirror revive resources are in the /var/sysgen/mtune/xvm file. Use the systune(1M) command to see the current value of a resource, as in the following example:

# systune xvm_max_revive_threads
     xvm_max_revive_threads = 1 (0x1)

To set a new value for a resource, use the systune(1M) command as in the following example:

# systune xvm_max_revive_threads 2
     xvm_max_revive_threads = 1 (0x1)
     Do you really want to change xvm_max_revive_threads to 2 (0x2)? (y/n) y

The change takes effect immediately, and lasts across reboots. If you want the change to last only until the next reboot, use the -r option of the systune(1M) command.
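For example, the following sketch applies the change only to the currently running kernel, so that it lasts only until the next reboot (check systune(1M) for the exact syntax on your release):

# systune -r xvm_max_revive_threads 2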

Modifying Mirror Revive Resources under Linux

Under Linux, you can execute the modinfo(1M) command on the XVM module to view descriptions of the XVM tunable parameters, along with their minimum, maximum, and default values.
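For example, to list only the parameter descriptions you might use the -p option of modinfo; the module path below is illustrative and depends on where the XVM module is installed on your system:

# modinfo -p xvm-standalone.o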

To change the values of the mirror revive resources while loading the XVM module, you add the tunable parameters to the insmod(1M) command. For example, to change the values of xvm_max_revive_rsc in the xvm-standalone.o module, use the following command:

insmod xvm-standalone.o xvm_max_revive_rsc=8

To view the current values of the tunable parameters, you can view the contents of the files in /proc/sys/dev/xvm, using the cat or the sysctl command:

# cat /proc/sys/dev/xvm/xvm_max_revive_rsc
4
# sysctl dev.xvm.xvm_max_revive_rsc
dev.xvm.xvm_max_revive_rsc = 4

If the values are dynamically tunable, then you can change the value by writing to those /proc/sys/dev/xvm files:

# echo 6 > /proc/sys/dev/xvm/xvm_maxfs_revive
# cat /proc/sys/dev/xvm/xvm_maxfs_revive
6

You can also use the sysctl command to change the value:

# sysctl -w dev.xvm.xvm_maxfs_revive=6
dev.xvm.xvm_maxfs_revive = 6
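Changes made by writing to /proc or with sysctl -w do not persist across a reboot. If you want a value reapplied automatically at boot, one common approach (assuming your distribution processes /etc/sysctl.conf at startup) is to add a line such as the following to that file:

dev.xvm.xvm_maxfs_revive = 6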

XVM Subsystem Parameters

The XVM subsystem maintains a set of subsystem parameters that reflect aspects of the XVM kernel that is currently running. These parameters are as follows:

apivers 

The version of the library that the kernel is compatible with

config gen 

A marker that indicates whether the XVM configuration has changed since the last time the subsystem information was checked

privileged 

Indicates whether the current invocation of the XVM CLI is privileged and thus capable of making configuration changes (otherwise only viewing is permitted)

clustered 

Indicates whether the kernel is cluster-aware

cluster initialized  

Indicates whether the cluster services have been initialized

You can view the status of these parameters by using the -subsystem option of the show command, as in the following example:

xvm:local> show -subsystem
XVM Subsystem Information:
--------------------------

apivers:              19
config gen:           15
privileged:           1
clustered:            0
cluster initialized:  0