Clusterworx System Administrators Guide.book - Abel Keogh


Clusterworx 3.4.2
System Administrator's Guide

A guide to reducing the total cost of cluster ownership by simplifying, streamlining, and automating all aspects of cluster management.

DOC-CWX3XSA-A
03.16.07


Notice

This manual and the product(s) described herein are furnished under license and may be used or copied only in accordance with the terms of such license. The content of this manual is furnished for informational use only, is subject to change without notice, and should not be construed as a commitment or obligation by Linux Networx, Inc. Linux Networx, Inc. assumes no responsibility or liability for any errors or inaccuracies that may appear in this manual, but invites users to contact us with any discrepancies or for additional clarification.

Except as permitted by such license, no part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, or otherwise, without the prior written permission of Linux Networx, Inc.

Linux Networx, the cube logo, and Clusterworx are registered trademarks of Linux Networx, Inc. LS-1 and Icebox are trademarks of Linux Networx, Inc. Linux is a trademark of Linus Torvalds. Other company, product, and service names may be trademarks or service marks of other companies or individuals.

© 2007 Linux Networx, Inc. All rights reserved.

Linux Networx, Inc.
14944 Pony Express Road
Bluffdale, Utah 84065
USA
801.562.1010

Part number: DOC-CWX3XSA-A
Revision: 03.16.07


Preface

Introduction

The Clusterworx System Administrator's Guide is written in a modular style where each section builds upon another to deliver progressively advanced scenarios and configurations. Depending on your system configuration and implementation, certain sections of this guide may be optional, but they warrant your attention as the needs of your system evolve. This guide assumes that you, the reader, have a working knowledge of Linux.

Audience

This guide's intended audience is the system administrator who will be working with the Clusterworx software to manage and control the cluster system.

Linux Networx Documentation on the Web

This and all Linux Networx technical documentation is available via the Linux Networx support website at http://www.linuxnetworx.com. Access to documentation is limited to Linux Networx customers only.

Related Documentation

Please refer to the following for information on related Linux Networx products:

Icebox User's Guide

Note
SUSE Linux documentation is available at http://www.novell.com/documentation/suse.html.

Feedback

Linux Networx welcomes your feedback. If you have any questions, comments, or requests concerning this document, please e-mail us at writer@lnxi.com. Please include the document's title, part number, and revision information.


Annotations

This guide uses the following annotations throughout the text:

Electric Shock!
Indicates impending danger. Ignoring these messages may result in serious injury or death.

Warning!
Warns users about how to prevent equipment damage and avoid future problems.

Note
Informs users of related information and provides details to enhance or clarify user activities.

Tip
Identifies techniques or approaches that simplify a process or enhance performance.


Customer Education

Training from Linux Networx provides system administrators, developers, and other IT professionals with the education, skill, and tools needed to successfully manage Linux supercomputing cluster systems. Courses help increase productivity and allow your technical team to train in a Linux cluster environment at our state-of-the-art Solutions Center.

Linux Networx training helps keep you abreast of new technology, teaches you how to integrate systems into your existing infrastructure, and can even provide you with the ability to learn the intricacies of your system before it arrives at your site, allowing you to develop the skills necessary to maximize your technology investment from day one.

For details regarding course availability and enrollment, please visit www.linuxnetworx.com.

Technical Support

Linux Networx support technicians are available by phone or online to answer any questions you have about your system. Our online support database includes thousands of articles created by certified Linux Networx support professionals to resolve customer issues. This database is regularly updated, expanded, and refined to ensure that you have access to the very latest information. From our online support you can also find updates, download patches, or post specific questions about your system.

Support Options

Linux Networx offers three levels of support:

Priority Support: Supplies basic support needs, including unlimited online support and upgrades.

Priority-plus Support: Upgraded basic support, including regular system checks and reviews.

Premium Support: The most comprehensive support package. Includes training, qualified kernel updates, and round-the-clock phone support.

To learn more about our support options, or to upgrade or add support, contact Linux Networx at www.linuxnetworx.com.

Contact Information

Linux Networx Technical Support:
1-800-459-7138
(6:00 a.m. to 6:00 p.m. MST)
www.linuxnetworx.com


Table of Contents<br />

Notice .............................................................................................................................. i<br />

Preface ..........................................................................................................................ii<br />

Introduction ....................................................................................................................ii<br />

Audience .....................................................................................................................ii<br />

Linux Networx Documentation on the Web ..........................................................ii<br />

Related Documentation.............................................................................................ii<br />

Feedback .....................................................................................................................ii<br />

Annotations................................................................................................................iii<br />

Customer Education .................................................................................................... iv<br />

Technical Support ........................................................................................................ iv<br />

Support Options ........................................................................................................ iv<br />

Contact Information ................................................................................................. iv<br />

Chapter 1<br />

Getting Started ............................................................................................................ 1<br />

<strong>System</strong> Requirements .................................................................................................1<br />

Hardware Requirements ........................................................................................... 1<br />

Operating <strong>System</strong> Requirements ............................................................................. 2<br />

Software Requirements ............................................................................................. 2<br />

Upgrades .......................................................................................................................3<br />

Installing <strong>Clusterworx</strong> ..................................................................................................4<br />

Setting Up a <strong>Clusterworx</strong> Master Host.................................................................... 4<br />

Migration Utility....................................................................................................... 13<br />

<strong>Clusterworx</strong> Services ............................................................................................... 14<br />

Chapter 2<br />

Licensing .................................................................................................................... 15<br />

Overview ..................................................................................................................... 15<br />

License Installation .................................................................................................... 15<br />

License Authentication ............................................................................................. 17<br />

License Administration ............................................................................................. 17<br />

<strong>Clusterworx</strong> <strong>System</strong> Administrator’s <strong>Guide</strong><br />

vi


Table of Contents<br />

vii<br />

License Viewer..........................................................................................................17<br />

Chapter 3<br />

Introduction to <strong>Clusterworx</strong> ...................................................................................... 19<br />

Overview ..................................................................................................................... 19<br />

Comprehensive <strong>System</strong> Monitoring.......................................................................19<br />

Version Controlled Image Management................................................................19<br />

Fast Multicast Provisioning.....................................................................................20<br />

<strong>Clusterworx</strong> Interface ................................................................................................ 21<br />

Customizing the Interface .......................................................................................22<br />

Chapter 4<br />

Host Administration .................................................................................................. 23<br />

Clustered Environments ........................................................................................... 23<br />

Host Configuration .................................................................................................... 23<br />

Hosts ........................................................................................................................... 25<br />

Add a Host .................................................................................................................25<br />

Edit a Host .................................................................................................................28<br />

Disable a Host ...........................................................................................................30<br />

Delete a Host .............................................................................................................31<br />

Partitions .................................................................................................................... 32<br />

Add a Partition..........................................................................................................32<br />

Edit a Partition ..........................................................................................................34<br />

Disable a Partition ....................................................................................................35<br />

Delete a Partition ......................................................................................................35<br />

Regions ....................................................................................................................... 36<br />

Add a Region .............................................................................................................36<br />

Edit a Region .............................................................................................................38<br />

Delete a Region .........................................................................................................39<br />

Instrumentation ......................................................................................................... 40<br />

States ..........................................................................................................................40<br />

<strong>Clusterworx</strong> Message Log........................................................................................41<br />

Menu Controls ..........................................................................................................41<br />

General Tab ...............................................................................................................42<br />

CPU Tab.....................................................................................................................45<br />

Memory Tab ..............................................................................................................45<br />

Disk Tab .....................................................................................................................46<br />

Network Tab..............................................................................................................46<br />

Kernel Tab .................................................................................................................47<br />

Load Tab ....................................................................................................................47<br />

Temperature Tab ......................................................................................................48<br />

Chapter 5<br />

User Administration ................................................................................................... 49<br />

Working Environment ................................................................................................ 49<br />

Default User Administration Settings .................................................................... 51<br />

User Configuration...................................................................................................51<br />

<strong>Clusterworx</strong> <strong>System</strong> Administrator’s <strong>Guide</strong>


Roles ........................................................................................................................... 52<br />

Add a Role................................................................................................................. 52<br />

Edit a Role ................................................................................................................. 54<br />

Delete a Role ............................................................................................................. 55<br />

Privileges ................................................................................................................... 56<br />

Groups .........................................................................................................................57<br />

Add a Group.............................................................................................................. 57<br />

Edit a Group.............................................................................................................. 60<br />

Delete a Group.......................................................................................................... 61<br />

Users ........................................................................................................................... 62<br />

Add a User................................................................................................................. 62<br />

Edit a User Account ................................................................................................. 64<br />

Disable a User Account ........................................................................................... 65<br />

Delete a User Account ............................................................................................. 66<br />

Chapter 6<br />

Power Control ............................................................................................................ 67<br />

Icebox Administration ............................................................................................... 67<br />

Add an Icebox........................................................................................................... 68<br />

Power Management ................................................................................................. 77<br />

Hosts Subtab ............................................................................................................. 77<br />

Iceboxes Subtab........................................................................................................ 80<br />

Chapter 7<br />

Imaging ...................................................................................................................... 83<br />

Overview ..................................................................................................................... 83<br />

Payload Management ............................................................................................... 84<br />

Linux Distributions .................................................................................................. 84<br />

Create a Payload....................................................................................................... 85<br />

Add a Package to an Existing Payload .................................................................. 92<br />

Remove a Payload Package .................................................................................... 94<br />

Payload Authentication Management................................................................... 98<br />

Payload Local User and Group Account Management.....................................101<br />

Payload File Configuration...................................................................................106<br />

Edit a Payload File with the Text Editor .............................................................107<br />

Add and Update Payload Files or Directories ....................................................108<br />

Delete Payload Files...............................................................................................109<br />

Delete a Payload.....................................................................................................109<br />

Install <strong>Clusterworx</strong> into the Payload ...................................................................110<br />

Kernel Management ................................................................................................ 112<br />

Create a Kernel.......................................................................................................112<br />

Edit a Kernel ...........................................................................................................117<br />

Delete a Kernel from VCS .....................................................................................119<br />

Image Management ................................................................................................ 120<br />

Create an Image......................................................................................................120<br />

Delete an Image from VCS....................................................................................123<br />

Managing Partitions...............................................................................................124<br />

RAID Partitions.......................................................................................................127<br />

Table of Contents<br />

<strong>Clusterworx</strong> <strong>System</strong> Administrator’s <strong>Guide</strong><br />

viii


Table of Contents<br />

ix<br />

Edit a Partition ....................................................................................................... 129<br />

Delete a Partition ................................................................................................... 131<br />

User-Defined File <strong>System</strong>s ................................................................................... 132<br />

Diskless Hosts ........................................................................................................ 135<br />

RAM Disk................................................................................................................ 138<br />

Plug-ins for the Boot Process ............................................................................... 140<br />

Version Control <strong>System</strong> (VCS) ...............................................................................144<br />

Version Branching................................................................................................. 144<br />

Version Control Check-in ..................................................................................... 146<br />

Version Control Check-out................................................................................... 147<br />

VCS Management .................................................................................................. 148<br />

Version Status ........................................................................................................ 149<br />

VCS Host Compare................................................................................................ 150<br />

Chapter 8<br />

Provisioning ............................................................................................................. 153<br />

Overview ...................................................................................................................153<br />

Selecting an Image ..................................................................................................154<br />

Advanced Provisioning Options.......................................................................... 156<br />

Configuring DHCP................................................................................................. 158<br />

Provisioning Channels .............................................................................................160<br />

DistributionService.profile................................................................................... 161<br />

Chapter 9<br />

Runner ...................................................................................................................... 163<br />

Overview ...................................................................................................................163<br />

Connect to a Host ...................................................................................................164<br />

View Host Output ....................................................................................................166<br />

Execute Commands on Hosts ...............................................................................167<br />

Disconnect from a Host ..........................................................................................169<br />

Chapter 10<br />

Instrumentation Service .......................................................................................... 171<br />

<strong>Clusterworx</strong> Monitoring and Event Subsystem ..................................................171<br />

Monitors ....................................................................................................................172<br />

Custom Monitors ................................................................................................... 172<br />

Metrics .......................................................................................................................175<br />

Metric Selector....................................................................................................... 176<br />

Listeners and Loggers ............................................................................................179<br />

Listeners.................................................................................................................. 179<br />

Loggers .................................................................................................................... 181<br />

Chapter 11<br />

Command-Line Interface ......................................................................................... 185<br />

Command-Line Syntax and Conventions .............................................................185<br />

CLI Commands ...................................................................................................... 186<br />

<strong>Clusterworx</strong> <strong>System</strong> Administrator’s <strong>Guide</strong>


ccp .............................................................................................................................. 192<br />

conman ..................................................................................................................... 193<br />

cwhost ...................................................................................................................... 196<br />

cwpower ................................................................................................................... 204<br />

cwprovision .............................................................................................................. 206<br />

cwuser ....................................................................................................................... 209<br />

dbix ............................................................................................................................ 215<br />

dbx ............................................................................................................................. 216<br />

imgr ............................................................................................................................ 217<br />

kmgr ........................................................................................................................... 218<br />

Example 1................................................................................................................218<br />

Example 2................................................................................................................218<br />

pdcp ........................................................................................................................... 219<br />

pdsh ........................................................................................................................... 222<br />

pmgr .......................................................................................................................... 225<br />

powerman ................................................................................................................. 226<br />

vcs .............................................................................................................................. 228<br />

xms ............................................................................................................................ 231<br />

Glossary ....................................................................................................................233<br />

Appendix ..................................................................................................................237<br />

Pre-configured Metrics ............................................................................................ 237<br />

CPU ..........................................................................................................................237<br />

Disk ..........................................................................................................................239<br />

Icebox ......................................................................................................................240<br />

Image .......................................................................................................................240<br />

Kernel ......................................................................................................................240<br />

Load .........................................................................................................................241<br />

LinuxBIOS...............................................................................................................241<br />

LS-1 1950i and 1435a ............................................................................................242<br />

LS-1 2950i................................................................................................................243<br />

LS-1 1435a Only .....................................................................................................243<br />

Memory ...................................................................................................................244<br />

Network ...................................................................................................................246<br />

OS.............................................................................................................................247<br />

Payload ....................................................................................................................248<br />

Index .........................................................................................................................249<br />

<strong>Clusterworx</strong> End User License Agreement ..............................................................259<br />

Table of Contents<br />

<strong>Clusterworx</strong> <strong>System</strong> Administrator’s <strong>Guide</strong><br />

x




<strong>System</strong> Requirements<br />


Operating <strong>System</strong> Requirements<br />

Warning!<br />

Please consult Linux Networx before upgrading your Linux distribution or kernel. Upgrading to a<br />

distribution or kernel not approved for use on your system may render <strong>Clusterworx</strong> inoperable or<br />

otherwise impair system functionality. Technical Support is not provided for unapproved system<br />

configurations.<br />

<strong>Clusterworx</strong> Hosts<br />

This version of <strong>Clusterworx</strong> runs on the following operating systems and architectures:<br />

SUSE LINUX ENTERPRISE SERVER 10<br />

64-bit AMD64/EM64T hardware.<br />

SUSE LINUX ENTERPRISE SERVER 9 (SP1-3)<br />

32-bit x86 Intel or AMD hardware.<br />

64-bit AMD64/EM64T hardware.<br />

REDHAT ENTERPRISE LINUX 4 (UPDATES 1-4)<br />

32-bit x86 Intel or AMD hardware.<br />

64-bit AMD64/EM64T hardware.<br />

REDHAT ENTERPRISE LINUX 3 (UPDATES 1-8)<br />

32-bit x86 Intel or AMD hardware.<br />

64-bit AMD64/EM64T hardware.<br />

WINDOWS XP<br />

32-bit x86 Intel or AMD Hardware (<strong>Clusterworx</strong> client only).<br />

WINDOWS 2000<br />

32-bit x86 Intel or AMD Hardware (<strong>Clusterworx</strong> client only).<br />

<strong>Clusterworx</strong> Clients<br />

For Linux client installations, <strong>Clusterworx</strong> supports the same platforms as host installations. Windows client installations require Windows 2000 or Windows XP.<br />

<strong>Clusterworx</strong> Kernel Support<br />

Linux Networx recommends using the kernel that shipped with your version of Linux.<br />

Software Requirements<br />

<strong>Clusterworx</strong> requires the installation of the following RPM packages:<br />

DHCP (included with your distribution)<br />

Mkelfimage (available on the <strong>Clusterworx</strong> CD or via ftp://ftp.lnxi.com/)<br />

If you require PXE Boot support, you must also install:<br />

TFTP<br />
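Before running the installer, you can confirm that these packages are present. The following is a minimal sketch; exact package names (e.g., tftp vs. tftp-server) vary by distribution and are assumptions here.<br />

```shell
# Check whether the prerequisite packages are installed.
# Package names are assumptions and may differ on your distribution.
for pkg in dhcp tftp mkelfImage; do
    if rpm -q "$pkg" >/dev/null 2>&1; then
        echo "$pkg: installed"
    else
        echo "$pkg: MISSING"
    fi
done
```

Any package reported MISSING should be installed from your distribution media or the <strong>Clusterworx</strong> CD before continuing.<br />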



Upgrades<br />

Why should I upgrade my version of <strong>Clusterworx</strong>?<br />

Improvements to monitoring, provisioning (formerly cloning), and image management.<br />

New host status monitors and events.<br />

User, group, and role management.<br />

Cluster partitions and regions for host organization.<br />

Version-controlled image management.<br />

Free updates for eligible customers (see list of requirements below).<br />

How do I qualify for a free <strong>Clusterworx</strong> upgrade?<br />

You have a current support contract and are in good financial standing with Linux Networx.<br />

You purchased the previous version of <strong>Clusterworx</strong>.<br />

You are willing to upgrade to a supported distribution.<br />

Your system meets the minimum system requirements (see <strong>System</strong> Requirements on page 1).<br />

Your system meets the minimum network requirements (Multicast, Etherboot).<br />

Should I update my version of Linux?<br />


For many companies, updating a production system may not be cost-effective. However, updated drivers and<br />

libraries may significantly increase performance over older Linux releases.<br />

Note<br />

If you are running third-party software applications, contact the application manufacturers about<br />

support for newer versions of Linux.<br />




Installing <strong>Clusterworx</strong><br />

With your Linux distribution installed and your hardware and software qualified, you are ready to begin<br />

installing <strong>Clusterworx</strong>. (See Operating <strong>System</strong> Requirements on page 2 for a list of supported distributions.)<br />

Note<br />

To upgrade your version of <strong>Clusterworx</strong>, see Migration Utility on page 13.<br />

Setting Up a <strong>Clusterworx</strong> Master Host<br />

Warning!<br />

A <strong>Clusterworx</strong> Master Host installation requires that you install the mkelfimage package. This package is<br />

available in RPM, SRPM, and source packages and may be obtained from your <strong>Clusterworx</strong> CD or the<br />

Linux Networx FTP site:<br />

ftp://ftp.lnxi.com/pub/mkelfImage<br />

1. Generate or convert the /etc/hosts file for the servers and the compute hosts, then place this file on all<br />

management hosts.<br />

2. Install <strong>Clusterworx</strong> on the primary and secondary management hosts:<br />

A. Log into Linux as the root user on the <strong>Clusterworx</strong> Master Host, then insert the <strong>Clusterworx</strong> CD. If the installation does not begin automatically, run /install.sh from the command line.<br />




B. Select <strong>Clusterworx</strong> Server, then enter the install directory (by default, /opt/cwx) and Master Host<br />

name. The Master Host name (e.g., cwxhost) must be on the cluster management network and be<br />

identified in /etc/hosts. For information on changing the name of the Master Host, see Rename the<br />

<strong>Clusterworx</strong> Master Host on page 29.<br />

Warning!<br />

To prevent accidentally overwriting <strong>Clusterworx</strong>, install <strong>Clusterworx</strong> on a local file system that is not<br />

shared with any other hosts.<br />

C. Run /install.sh on each secondary management host (or on the Master Host<br />

if your system uses only one management host) and install each <strong>Clusterworx</strong> component.<br />

D. Select Server (cwxhost), install directory (/opt/cwx), and enter the name of the new host. Click<br />

Next.<br />

Note<br />

If you have only one management host on your cluster, install the <strong>Clusterworx</strong> Server option. Following<br />

installation, <strong>Clusterworx</strong> will ask you to install the Server option on the same host. Select Yes. If more<br />

than one management host is installed on your cluster, install the <strong>Clusterworx</strong> Server option on the<br />


primary management host. Then install Server on each additional management host. Use the primary<br />

host name as the <strong>Clusterworx</strong> Server name.<br />

Note<br />

If the management network uses a subnet other than 192.168.0.0, update the /opt/cwx/dhcp/dhcpd.conf.template file after installation or upgrade.<br />



E. Select the options to install on this host.<br />

Tip<br />


To disable unwanted components (e.g., Version Control Server), edit /opt/cwx/etc/<strong>Clusterworx</strong>.profile and set them to false.<br />

F. If the management network is different from 192.168.0.xxx, change the /opt/cwx/dhcp/<br />

dhcpd.conf.template file.<br />

3. Install mkelfImage:<br />

A. Download ftp://ftp.lnxi.com/pub/mkelfImage/mkelfImage-2.5-0.i386.rpm<br />

B. Install the downloaded RPM: rpm -ivh mkelfImage-2.5-0.i386.rpm<br />

4. Log out and log in again on all host management machines on which you installed part of <strong>Clusterworx</strong>.<br />

Note<br />

Failure to log out and log in may prevent <strong>Clusterworx</strong> from launching.<br />

5. Install the software license obtained from Linux Networx on the Master Host by running deploy<br />

license.jar /opt/cwx. Verify that there were no errors.<br />

6. After deploying the license, run the cwxlicense utility program to verify that all of the cluster’s MAC<br />

addresses are in the license.<br />




7. Log in as “root” and use “root” as the password. The License Viewer appears. The list of licensed MAC<br />

addresses appears under Authorized Hosts (this list should match the MAC Address list you gathered<br />

previously).<br />

Note<br />

If you find a discrepancy in your Authorized Hosts list (MAC Addresses), contact Linux Networx<br />

Technical Support.<br />

8. Run the /etc/init.d/cwx status command to verify that the following services are running:<br />

AuthenticationService<br />

DHCPService<br />

DNA<br />

DatabaseService<br />

FileService<br />

HostAdministrationService<br />

IceboxAdministrationService<br />

ImageAdministrationService<br />

InstrumentationService<br />

KernelAdministrationService<br />

LicenseAdministrationService<br />

LicenseUpdateService<br />

NotificationService<br />



PayloadAdministrationService<br />

ProvisioningService<br />

RNA<br />

SynchronizationService<br />

VersionService<br />


9. Import configuration files from the backup configuration or from the new configuration files produced<br />

by Linux Networx. You may also set up hosts, Iceboxes, users, and other related information (including<br />

partitions, regions, groups, and roles) manually through the <strong>Clusterworx</strong> GUI.<br />

Note<br />

You must obtain a copy of the converter script from Linux Networx—this utility creates output that you<br />

can import into <strong>Clusterworx</strong> using the dbix utility.<br />

A. Import files to dbix:<br />

Icebox.conf (Converts the icebox.conf file to icebox.dbix)<br />

converter -i=&lt;icebox.conf file&gt; -o=dbix > /tmp/icebox.dbix<br />

Example: converter -i=/opt/lnxi/var/cwx/icebox.conf -o=dbix > /tmp/icebox.dbix<br />

cluster.xml (Converts the cluster.xml file to cluster.uber)<br />

converter -c=&lt;cluster.xml file&gt; -o=uber > /tmp/cluster.uber<br />

Example: converter -c=/etc/cluster.xml -o=uber > /tmp/cluster.uber<br />

powerman.conf (Converts the cluster.uber file to powerman.conf)<br />

converter -u=&lt;cluster.uber file&gt; -o=powerman > /tmp/powerman.conf<br />

Example: converter -u=/tmp/cluster.uber -o=powerman > /tmp/powerman.conf<br />

conman.conf (Converts the cluster.uber file to conman.conf)<br />

converter -u=&lt;cluster.uber file&gt; -o=conman > /tmp/conman.conf<br />

Example: converter -u=/tmp/cluster.uber -o=conman > /tmp/conman.conf<br />

/etc/hosts (Converts the cluster.uber file to /etc/hosts)<br />

converter -u=&lt;cluster.uber file&gt; -o=hosts > /tmp/hosts<br />

Example: converter -u=/tmp/cluster.uber -o=hosts > /tmp/hosts<br />

rhosts (Converts the cluster.uber file to rhosts)<br />

converter -u=&lt;cluster.uber file&gt; -o=rhosts > /tmp/rhosts<br />

Example: converter -u=/tmp/cluster.uber -o=rhosts > /tmp/rhosts<br />

nodes.conf (Converts the nodes.conf file to nodes.dbix)<br />

converter -n=&lt;nodes.conf file&gt; -o=dbix > /tmp/nodes.dbix<br />

Example: converter -n=/opt/lnxi/var/netboot/nodes.conf -o=dbix > /tmp/nodes.dbix<br />

B. Update <strong>Clusterworx</strong>. Verify the integrity of all converted files before importing into the dbix database.<br />

If the data does not import as expected, delete the imported data through the <strong>Clusterworx</strong><br />

GUI and try again.<br />

Run the dbix utility, passing the recently created dbix files:<br />

dbix < /tmp/icebox.dbix<br />

dbix < /tmp/cluster.dbix<br />

Note<br />

Running this step on the nodes.dbix file is not always required if you have imported from the cluster.dbix<br />

file. Log on to <strong>Clusterworx</strong> and check your hosts. If they do not exist, run dbix < /tmp/nodes.dbix.<br />


C. Update configuration files.<br />

(Verify the integrity of all converted files before implementing.)<br />

/etc/hosts<br />

back up original file<br />

cp /etc/hosts /tmp/hosts.orig<br />

install converted file<br />

cp /tmp/hosts /etc/hosts<br />

powerman.conf<br />

back up original file<br />

cp /etc/powerman/powerman.conf /tmp/powerman.conf.orig<br />

install converted file<br />

cp /tmp/powerman.conf /etc/powerman/powerman.conf<br />

conman.conf<br />

back up original file<br />

cp /etc/conman.conf /tmp/conman.conf.orig<br />

install converted file<br />

cp /tmp/conman.conf /etc/conman.conf<br />

Note<br />

By default, the <strong>Clusterworx</strong> password is “root”. For information on how to change this password, see Edit a User Account on page 64. When you provision a host, <strong>Clusterworx</strong> sets up a root account for your hosts.<br />

10. Type cwx to launch the <strong>Clusterworx</strong> client software, then log in as “root” using “root” as the password.<br />

Tip<br />

To run <strong>Clusterworx</strong> from a remote share, map the network drive where you installed <strong>Clusterworx</strong> and<br />

create a copy of the shortcut on your local machine.<br />
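The verification steps above (services and license) can be combined into a quick post-install check. This is a sketch under the assumption that the paths and command names match those given in this guide.<br />

```shell
# Post-installation check: confirm the Clusterworx init script responds,
# then open the License Viewer to inspect the authorized hosts.
if [ -x /etc/init.d/cwx ]; then
    /etc/init.d/cwx status
else
    echo "/etc/init.d/cwx not found; is Clusterworx installed?"
fi
if command -v cwxlicense >/dev/null 2>&1; then
    cwxlicense    # opens the License Viewer GUI
else
    echo "cwxlicense not on PATH"
fi
```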



Client Installation<br />


Select the Client installation option to allow access to <strong>Clusterworx</strong> from a local PC that is not part of the<br />

cluster. The client install offers superior performance because it significantly reduces network traffic.<br />

However, the client should be able to connect to every host and must be on the cluster’s internal network.<br />

Tip<br />

A VPN offers secure remote access to the cluster. VPN options include hardware appliances (e.g., D-Link, Cisco) and software solutions such as PPTP. For information about specific VPN capabilities, please contact technical support.<br />

LINUX CLIENT<br />

1. Select Client from the installation options dialog.<br />

2. Specify the Installation Directory and Host Name, then click Next.<br />

Note<br />

The name of the installation directory may not contain spaces (e.g., “C:\Program Files”).<br />

Furthermore, the <strong>Clusterworx</strong> Server or Master Host must use a valid host name that is resolvable<br />

through name resolution (i.e., DNS, /etc/hosts). For information on changing the name of the Master<br />

Host, see Rename the <strong>Clusterworx</strong> Master Host on page 29.<br />


WINDOWS CLIENT<br />

1. Insert the <strong>Clusterworx</strong> CD in your CD/DVD-ROM drive and allow the <strong>Clusterworx</strong> installer to launch. If<br />

the installer does not start automatically, launch the autorun.cmd:<br />

d:\autorun.cmd<br />

2. Select Client from the installation options dialog.<br />

3. Specify the Installation Directory and Host Name, then click Next.<br />

4. After the installation is complete, use Explorer to navigate to the installation directory.<br />

5. Copy the <strong>Clusterworx</strong> shortcut to your desktop. You will use this shortcut to launch <strong>Clusterworx</strong>.<br />

Tip<br />

You may also start <strong>Clusterworx</strong> from the command-line interface. For example:<br />

c:\cwx\bin\cloak.exe c:\cwx\bin\cwx.cmd<br />

or<br />

c:\cwx.cmd<br />



Migration Utility<br />

Note<br />

When upgrading to a new version of <strong>Clusterworx</strong>, you must also upgrade any Client and Payload<br />

installations to prevent abnormal operations.<br />

Upgrading to <strong>Clusterworx</strong> 3.4.2<br />

Warning!<br />


If you are using third-party IPMI metrics and are upgrading to <strong>Clusterworx</strong> 3.4.2, you must de-select all custom IPMI metrics in the Metrics Selector dialog. After upgrading, re-populate the Metrics.profile.<br />

Upgrading from <strong>Clusterworx</strong> 3.3.x<br />

The process of upgrading to the latest version of <strong>Clusterworx</strong> is almost completely automatic. However, to<br />

prevent data loss or disruption of activity on your cluster, please back up the following before installing<br />

<strong>Clusterworx</strong>:<br />

Your <strong>Clusterworx</strong> database and all system data located in $CWXHOME/etc.<br />

InstrumentationMonitors.profile, InstrumentationListeners.profile, Logging.profile, and<br />

DistributionService.profile from /opt/cwx/etc.<br />

Note<br />

After installing <strong>Clusterworx</strong>, you may apply any special modifications or enhancements you made to<br />

these profiles.<br />
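The backup step above can be sketched as a short script. The paths assume a default /opt/cwx installation; adjust CWXHOME for your site.<br />

```shell
# Back up the Clusterworx database export and the listed profiles
# before upgrading. CWXHOME defaults to /opt/cwx (an assumption).
CWXHOME=${CWXHOME:-/opt/cwx}
backup="/tmp/cwx-backup-$(date +%Y%m%d)"
mkdir -p "$backup"
if command -v dbix >/dev/null 2>&1; then
    dbix -x > "$backup/cwx.db"
else
    echo "dbix not on PATH; database export skipped"
fi
for f in InstrumentationMonitors.profile InstrumentationListeners.profile \
         Logging.profile DistributionService.profile; do
    cp "$CWXHOME/etc/$f" "$backup/" 2>/dev/null || echo "$f not found; skipped"
done
echo "Backup written to $backup"
```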

Upgrading from <strong>Clusterworx</strong> 3.1.x to <strong>Clusterworx</strong> 3.3.0<br />

Warning!<br />

If you are upgrading from <strong>Clusterworx</strong> 3.1.2 to 3.3.0, you must change the default ramdisk block size in<br />

any kernels you will continue to use. Change this variable from 4096 to 1024<br />

(i.e., ramdisk_blocksize=1024). To change this variable, see To Edit a Kernel on page 117.<br />

When upgrading <strong>Clusterworx</strong>, do not run the installation script from the current installation directory.<br />

To upgrade the Payload or Client, you must delete the installation directory (i.e., /opt/cwx) prior to reinstalling.<br />

1. Open a console as the root user.<br />

2. Back up your <strong>Clusterworx</strong> settings by entering the following from the command line:<br />

dbix -x > cwx-3.1.2.db<br />

3. Stop the <strong>Clusterworx</strong> services on the host:<br />

/etc/init.d/cwx stop<br />

4. Move the bin, lib, and etc directories:<br />

mkdir /tmp/cwx.old<br />

mv /opt/cwx/bin /tmp/cwx.old<br />


mv /opt/cwx/lib /tmp/cwx.old<br />

mv /opt/cwx/etc /tmp/cwx.old<br />

5. Perform the CD installation procedure for the <strong>Clusterworx</strong> Server Installation (see Setting Up a<br />

<strong>Clusterworx</strong> Master Host on page 4).<br />

6. Re-import your original settings:<br />

cwmigration3_3_0 cwx-3.1.2.db > cwx-3.3.0.db<br />

dbix -d<br />

dbix -i < cwx-3.3.0.db<br />

Warning!<br />

Executing the dbix -d command will delete your database. Back up cwx-3.1.2.db and cwx-3.3.0.db before proceeding.<br />

7. Deploy a new license.jar (obtained from Technical Support).<br />

8. Upgrade <strong>Clusterworx</strong> in the payload:<br />

A. Remove (or move) the <strong>Clusterworx</strong> directory from the payload.<br />

B. Perform a standard payload installation (see Install <strong>Clusterworx</strong> into the Payload on page 110).<br />

cd /opt/cwx/imaging/&lt;username&gt;/payloads/&lt;payload name&gt;/<br />

/mnt/cdrom/install.sh<br />

C. Check-in the changes (if desired) and re-provision the hosts.<br />

Note<br />

If you are upgrading from <strong>Clusterworx</strong> 3.0 to <strong>Clusterworx</strong> 3.3.0, you must first migrate your system from<br />

3.0 to 3.1.x. After moving to 3.1.x, you may safely upgrade to 3.3.0.<br />

Tip<br />

If you are upgrading from 3.2.1 to 3.3.0, remove all users’ license-administration privileges from the<br />

database by entering the following commands from the CLI:<br />

for user in `dbix -x privilege-roles.license-administration | cut -c40- | cut -d. -f1 | uniq`; do dbix -d role-privileges.$user.license-administration; done<br />

dbix -d privilege-roles.license-administration<br />

<strong>Clusterworx</strong> Services<br />

Linux employs several services that perform a variety of tasks and act as the nucleus of the system. These<br />

services are started and stopped from scripts that usually exist in /etc/init.d, but the services themselves may<br />

exist in other locations. <strong>Clusterworx</strong>, typically installed in /opt/cwx, is controlled by one of these services—<br />

this allows you to manage <strong>Clusterworx</strong> services using standard Linux tools such as chkconfig and service.<br />

Standard functions for services include start, stop, restart, and status. For example:<br />

cd /etc/init.d; ./cwx status<br />

/etc/init.d/cwx stop<br />

/etc/init.d/cwx start<br />

chkconfig --list cwx<br />
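Building on the commands above, the service checklist from the installation chapter can be verified in one pass. The exact output format of the cwx init script is an assumption; the grep below only checks that each service name appears.<br />

```shell
# Confirm that key Clusterworx services appear in the status output.
status=$(/etc/init.d/cwx status 2>/dev/null)
for svc in DatabaseService DHCPService InstrumentationService ProvisioningService; do
    if printf '%s\n' "$status" | grep -q "$svc"; then
        echo "$svc: reported"
    else
        echo "$svc: not reported"
    fi
done
```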



Chapter 2<br />

Licensing<br />

Overview<br />

The <strong>Clusterworx</strong> system license is a digitally signed file containing details about what modules are installed<br />

in your system, each module's feature set, license expiration settings (if defined), the hosts available to the<br />

cluster, and the customer for whom the license was created. The Licensing module serves three primary<br />

functions:<br />

License Installation<br />

License Authentication<br />

License Administration<br />

License Installation<br />

The license is packaged as a <strong>Clusterworx</strong>-deployable Java Archive (JAR) file and is usually named license.jar.<br />

This self-extracting and installable file is deployed into the <strong>Clusterworx</strong> home directory through the same<br />

process used to install other modules; however, <strong>Clusterworx</strong> and the LicenseAdministrationService must be<br />

running on the Master Host when you deploy the license file.<br />

Note<br />

If you are upgrading from a previous version of <strong>Clusterworx</strong>, a new license is required.<br />

To Install the License<br />

1. Open a console to the Master Host.<br />

2. Verify that the <strong>Clusterworx</strong> services are running on the Master Host, noting that the<br />

LicenseAdministrationService is in the list of running services:<br />

# /etc/init.d/cwx status<br />

(Alternatively, enter service cwx status.)<br />

3. If <strong>Clusterworx</strong> is stopped, you can start it by running the following command as root:<br />

# /etc/init.d/cwx start<br />

(Alternatively, enter service cwx start.)<br />


4. Deploy the license.jar file to the <strong>Clusterworx</strong> home directory (usually /opt/cwx).<br />

Note<br />

If you need to reinstall the license, enter the following:<br />

# deploy license.jar /opt/cwx<br />

5. Verify your output to ensure the license was successfully installed (see License Viewer on page 17).<br />



License Authentication<br />

When you deploy a new license, <strong>Clusterworx</strong> verifies that the new license contains a valid Linux Networx<br />

signature.<br />

License Administration<br />

After installing a license, you may view license details with the License Viewer. The License Viewer is a GUI<br />

application that provides a visual representation of the <strong>Clusterworx</strong> system license. The License Viewer does<br />

not allow you to edit license data.<br />

License Viewer<br />

To launch the viewer, enter the following command as root from a console prompt:<br />

# cwxlicense<br />



Chapter 3<br />

Introduction to <strong>Clusterworx</strong><br />

Overview<br />

<strong>Clusterworx</strong> reduces the total cost of cluster ownership by streamlining and simplifying all aspects of cluster<br />

management. Through a single point of control, you can automate repetitive installation and configuration<br />

tasks. <strong>Clusterworx</strong> automates problem determination and system recovery, and monitors and reports health<br />

information and resource utilization.<br />

<strong>Clusterworx</strong> provides administrators with increased power and flexibility in controlling cluster system<br />

resources, and improved scalability and performance allows <strong>Clusterworx</strong> to manage cluster systems of any<br />

size. Version-controlled provisioning allows administrators to easily install the operating system (OS) and<br />

applications to all hosts in the cluster and facilitates changes to an individual host or group of hosts. Changes<br />

are saved automatically.<br />

Comprehensive <strong>System</strong> Monitoring<br />

<strong>Clusterworx</strong> uses multiple monitoring features to improve system efficiency. These monitors allow you to<br />

examine system functionality from individual host components to the application level and help track system<br />

health, trends, and bottlenecks. With the information collected through these monitors, you can more easily<br />

plan for future computing needs—the more efficiently your cluster system operates, the more jobs it can run.<br />

Over the life of your system, you can accelerate research and time-to-market.<br />

<strong>Clusterworx</strong> provides results in near real-time and uses only a minute amount of the CPU. All data is<br />

displayed in a portable and easy-to-deploy Java-based GUI that runs on both Linux and Windows. Monitoring<br />

values include CPU usage, disk I/O, file system usage, kernel and operating system information, CPU load,<br />

memory usage, network information and bandwidth, and swap usage. <strong>Administrators</strong> may also write plug-ins<br />

to add functionality or monitor a specific device or application.<br />

Version Controlled Image Management<br />

Until recently, version-controlled image management was unavailable for cluster systems. Version control<br />

greatly simplifies the task of cluster administration by allowing system administrators to track upgrades and<br />

changes to the system image. If a problem arises with a system image, system administrators can even revert<br />

to a previous, more robust version of the image. By allowing system administrators to update the operating<br />


system and other applications both quickly and efficiently, version control ensures that organizations<br />

receive the highest return on their cluster system investment.<br />

Fast Multicast Provisioning<br />

Thanks to fast multicast provisioning, <strong>Clusterworx</strong> can add or update new images in a matter of minutes—no<br />

matter how many hosts your system contains. This saves time by allowing system administrators to quickly<br />

provision and incrementally update the cluster system as needed; and since updates take only a few minutes,<br />

this means less down-time and fewer system administration headaches.<br />



<strong>Clusterworx</strong> Interface<br />

The <strong>Clusterworx</strong> interface is composed primarily of a navigation tree and a series of tabbed dialogs that<br />

allow you to navigate and configure the cluster.<br />


<strong>Server Name</strong>: The name of the server on which <strong>Clusterworx</strong> is running.<br />

<strong>Navigation Tree</strong>: The navigation tree contains an expandable list of cluster elements (e.g., hosts, partitions, groups, users, images). Because the tree is tab-specific, it displays only those elements that pertain to the selected tab.<br />

<strong>Tabs</strong>: Tabs appear along the top of a pane and allow you to navigate and configure cluster elements.<br />

<strong>Subtabs</strong>: Subtabs perform the same function as tabs but appear along the bottom of a pane.<br />

<strong>Upper/Lower Panes</strong>: The upper and lower panes allow you to view cluster information in a structured environment.<br />



Customizing the Interface<br />

<strong>Clusterworx</strong> supports the use of the native Java LookAndFeel for each of the supported platforms. This<br />

allows you to configure <strong>Clusterworx</strong> to look like other applications already running on your system (i.e., the<br />

GTK look and feel for Linux users or the standard Windows interface for Windows users). By default,<br />

<strong>Clusterworx</strong> uses the MetalLookAndFeel and a custom Linux Networx color scheme for all platforms.<br />

Tip<br />

Using the default look and feel reduces memory consumption and improves performance.<br />

To Configure the GTK Look and Feel<br />

1. Open your user profile:<br />

$CWXHOME/etc/user-&lt;username&gt;.profile<br />

2. Add the following line to your user profile:<br />

ui.manager: com.sun.java.swing.plaf.gtk.GTKLookAndFeel<br />

To Configure the Windows Look and Feel<br />

1. Open your user profile:<br />

$CWXHOME/etc/user-&lt;username&gt;.profile<br />

2. Add the following line to your user profile:<br />

ui.manager: com.sun.java.swing.plaf.windows.WindowsLookAndFeel<br />

Note<br />

The interface look and feel can be set globally by commenting out the ui.manager setting contained in<br />

$CWXHOME/etc/system-<strong>Clusterworx</strong>.profile and adding one of the above lines to the profile. However, it<br />

is recommended that interface customizations be configured on a per-user basis.<br />

To override global settings for an individual user, add a new ui.manager configuration to the user profile.<br />



Chapter 4<br />

Host Administration<br />

Clustered Environments<br />

In a clustered environment, there is always at least one host that acts as the master of the remaining hosts<br />

(for large systems, multiple masters may be required). This host, commonly referred to as the <strong>Clusterworx</strong><br />

Master Host, is reserved exclusively for managing the cluster and is not typically available to perform tasks<br />

assigned to the remaining hosts.<br />

To manage the use of the remaining hosts in the cluster, you can divide the hosts (as needed) into partitions<br />

and regions. Partitions include a strict set of hosts that may not be shared with other partitions. Regions are a<br />

subset of a partition and may share any hosts that belong to the same partition. Hosts contained within a<br />

partition may belong to a single region or may be shared with multiple regions. Dividing up the system can<br />

help simplify cluster management and allows you to have different privileges on various parts of the system.<br />

[Diagram: a cluster divides into partitions; each partition contains regions; regions within the same partition may share hosts]<br />

Host Configuration<br />

<strong>Clusterworx</strong> provides configuration, power management, and real-time monitoring (instrumentation) for all<br />

hosts in the cluster. The Host tab allows you to:<br />

Add hosts to partitions and regions.<br />

Define each host (including the <strong>Clusterworx</strong> Master Host).<br />

Assign hosts to Iceboxes (see Power Control on page 67).<br />

Edit an existing host.<br />


Delete a host(s).<br />

Define, edit, and delete partitions.<br />

Define partition relationships to regions and hosts.<br />

Define, edit, and delete regions.<br />

Define region relationships to groups (see User Administration on page 49).<br />

View statistical data for the system (see Instrumentation on page 40).<br />



Hosts<br />

The following sections outline the fundamentals of adding, editing, and deleting hosts.<br />

Add a Host<br />

Adding a host is as simple as describing it to <strong>Clusterworx</strong>. To add a host, you must provide the host name,<br />

description, MAC address, IP address, and the partition and region to which the host belongs. Hosts may be<br />

added only after you have set up a Master Host (see Setting Up a <strong>Clusterworx</strong> Master Host on page 4).<br />

To Add a Host<br />

1. Select the Hosts tab, then select the Configuration subtab.<br />

2. Select New Host from the File menu or right-click on the cluster icon in the navigation tree and select<br />

New Host. A new host pane appears.<br />

3. Enter the name of the new host in the Name field.<br />

4. (Optional) Enter a description of the new host in the Description field.<br />

5. (Optional) Select the name of the partition to which this host belongs from the drop-down menu.<br />

6. Create Regions, Interfaces, and Icebox assignments as needed, then click Apply to create the new host or<br />

click Close to abort this action.<br />


ASSIGN REGIONS<br />

The Regions subtab allows you to identify any regions to which the host belongs.<br />

1. (Optional) Select the Regions subtab and click Add. The Select Regions dialog appears.<br />

2. Select the region to which the host belongs. To select multiple regions, use the Shift or Ctrl keys.<br />

3. Click OK.<br />

CREATE INTERFACES<br />

The Interfaces subtab allows you to create new interfaces and assign management responsibilities.<br />

1. Click the Interfaces subtab at the bottom of the host pane. The New Interface dialog appears.<br />

2. Enter the host’s MAC and IP addresses.<br />

Tip<br />

To find the MAC address of a new, un-provisioned host, you must watch the output from the serial<br />

console. Etherboot displays the host’s MAC address on the console when the host first boots. For<br />

example:<br />

Etherboot 5.1.2rc5.eb7 (GPL) Tagged ELF64 ELF (Multiboot) for [EEPRO100]<br />

Relocating _text from: [000242d8,00034028) to [17fdc2b0,17fec000)<br />

Boot from (N)etwork (D)isk (F)loppy or from (L)ocal?<br />

Probing net...<br />

Probing pci...Found EEPRO100 ROM address 0x0000<br />



[EEPRO100]Ethernet addr: 00:02:B3:11:03:77<br />

Searching for server (DHCP)...<br />

(If conman is set up and working, this information is also contained in the conman file.)<br />

To find the MAC address on a host that is already running, enter ifconfig -a from the CLI and look for the<br />

HWaddr of the management interface.<br />

3. If <strong>Clusterworx</strong> will use this interface to manage the host (i.e., provisioning will use this interface and<br />

monitoring data will transmit from this interface), check the Management option.<br />

4. Click OK.<br />
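The tip's ifconfig advice amounts to pulling the HWaddr field out of the interface listing. A minimal sketch, where the sample line stands in for live output so the pipeline is visible:

```shell
# Extract the hardware address from ifconfig-style output.
# On a running host you would pipe the real listing instead:
#     ifconfig -a | awk '/HWaddr/ {print $NF}'
# The sample below mirrors the Etherboot console output shown above.
sample='eth0      Link encap:Ethernet  HWaddr 00:02:B3:11:03:77'
printf '%s\n' "$sample" | awk '/HWaddr/ {print $NF}'   # -> 00:02:B3:11:03:77
```

On distributions that ship iproute2 instead of net-tools, `ip link show` reports the same address in its link/ether field.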

ICEBOX PORT ASSIGNMENTS<br />

1. (Optional) Select the Iceboxes subtab and click Add to assign which Icebox will control the host (you can<br />

select only one Icebox at a time). The Icebox Selection dialog appears.<br />

2. Specify the port (1–10) through which the host is attached to the Icebox.<br />

3. (Optional) Select Primary if the host is connected to multiple Iceboxes. <strong>Clusterworx</strong> uses the serial access<br />

and temperature from the primary connection.<br />

4. Click OK.<br />


Edit a Host<br />

Editing hosts allows you to change information previously saved about a host, edit host configurations, or<br />

move hosts in and out of partitions and regions.<br />

To Edit a Host<br />

1. Select the Hosts tab, then select the Configuration subtab.<br />

2. Select a host from the navigation tree. To select multiple hosts, use the Shift or Ctrl keys.<br />

3. Select Edit from the Edit menu or right-click on the host(s) in the navigation tree and select Edit.<br />

<strong>Clusterworx</strong> displays the host pane for each selected host. From this view, you can make changes to the<br />

host(s).<br />

Warning!<br />

Changing the name of the Master Host may prevent the cluster from functioning correctly. For<br />

information on changing the name of the Master Host, see Rename the <strong>Clusterworx</strong> Master Host on<br />

page 29.<br />

4. Click Apply to accept the changes or click Close to abort this action.<br />



RENAME THE CLUSTERWORX MASTER HOST<br />

Changing the name of the <strong>Clusterworx</strong> Master Host may prevent applications such as <strong>Clusterworx</strong> and Host<br />

Failover from operating correctly. Before changing the name of the Master Host, you should always consider<br />

any applications that require the use of this name (i.e., job schedulers, MPI “machines” files, and other<br />

third-party software). In some cases, you may need to consult with application vendors regarding special<br />

instructions on changing the host name.<br />

When you change the host name, all <strong>Clusterworx</strong> services, hosts, and clients must be able to resolve the new<br />

name. To ensure that your system functions properly after renaming the Master Host, you must update the<br />

host name in several files.<br />

Note<br />

Before you begin, enter /etc/init.d/cwx stop to shut down <strong>Clusterworx</strong> on the system.<br />

1. On the Master Host, edit the following files:<br />

/opt/cwx/@genesis.profile (host & system.rna.host)<br />

/etc/sysconfig/network/profiles/*<br />

Tip<br />

If you are running a version of <strong>Clusterworx</strong> older than 3.3.x, you must also edit the Activator profile:<br />

/opt/cwx/etc/Activator.profile (*.host=)<br />

2. On the compute hosts, in payloads, and on clients:<br />

/opt/cwx/@genesis.profile (system.rna.host)<br />

3. Additionally, the new host name must be resolvable. This means that the local /etc/hosts or DNS settings<br />

must be updated on all <strong>Clusterworx</strong> servers, hosts, payloads, clients, and other vendor-specific<br />

configuration files that contain host name information.<br />


Disable a Host<br />

Disabling a host allows you to render the host temporarily inoperative without removing it from the cluster.<br />

To Disable a Host<br />

1. Select the Hosts tab, then select the Configuration subtab.<br />

2. Select a host from the navigation tree.<br />

3. Select Edit from the Edit menu or right-click on the host in the navigation tree and select Edit.<br />

<strong>Clusterworx</strong> displays the host pane.<br />

4. Check the Disable Host option.<br />



Delete a Host<br />

Deleting a host will remove it from the cluster.<br />

To Delete a Host<br />

1. Select the Hosts tab, then select the Configuration subtab.<br />

2. Select the host you want to delete from the navigation tree. To select multiple hosts, use the Shift or Ctrl<br />

keys.<br />

3. Select Delete from the File menu or right-click on the host(s) in the navigation tree and select Delete.<br />

<strong>Clusterworx</strong> asks you to confirm your action.<br />

4. Click OK to remove the host(s) or click Cancel to abort this action.<br />

Note<br />

To recover a host deleted from a cluster, see Add a Host on page 25. To disable a host temporarily, see<br />

Disable a Host on page 30.<br />




Partitions<br />

The following sections outline the fundamentals of adding, editing, and deleting partitions.<br />

Add a Partition<br />

Partitions are used to separate clusters into non-overlapping collections of hosts. Hosts that belong to a<br />

partition may not be used by anyone who is not authorized to access the partition. Within the partition, host<br />

access may be shared between regions of users.<br />

To Add a Partition<br />

1. Select the Hosts tab, then select the Configuration subtab.<br />

2. Select New Partition from the File menu or right-click the cluster icon in the navigation tree and select<br />

New Partition. A new partition pane appears.<br />

3. Enter the name of the new partition in the Name field.<br />

4. (Optional) Enter a description of the new partition in the Description field.<br />



5. Select the Regions tab and click Add. The Select Regions dialog appears.<br />

6. Select region(s) you want to include in this partition and click OK. Use the Shift and Ctrl keys to select<br />

multiple regions.<br />

7. Select the Hosts tab and click Add to display the Select Hosts dialog.<br />

8. Select a host(s) to add to this partition and click OK. Use the Shift and Ctrl keys to select multiple hosts.<br />

9. Click Apply to accept the changes or click Close to abort this action.<br />




Edit a Partition<br />

Editing a partition allows you to change previously saved information about a partition. You can edit or<br />

remove regions, alter partition configurations, disable partitions, or remove partitions from the host.<br />

To Edit a Partition<br />

1. Select the Hosts tab, then select the Configuration subtab.<br />

2. Select a partition from the navigation tree. To select multiple partitions, use the Shift or Ctrl keys.<br />

3. Select Edit from the Edit menu or right-click on the partition(s) in the navigation tree and select Edit.<br />

<strong>Clusterworx</strong> displays the partition pane. From this view, you may make changes to the partition.<br />

4. Click Apply to accept the changes or click Close to abort this action.<br />



Disable a Partition<br />

Temporarily disabling a partition allows you to take the partition out of service without returning the hosts<br />

associated with it to the default partition.<br />

To Disable a Partition<br />

1. Select the Hosts tab, then select the Configuration subtab.<br />

2. Select a partition from the navigation tree.<br />

3. Select Edit from the Edit menu or right-click on the partition in the navigation tree and select Edit.<br />

<strong>Clusterworx</strong> displays the partition pane.<br />

4. Check the Disable Partition option.<br />

5. Click Apply to accept the changes or click Close to abort this action.<br />

Delete a Partition<br />

Deleting a partition allows you to remove unused partitions from the system. In some cases, you may prefer<br />

to temporarily disable a partition.<br />

Note<br />

If you delete a partition, all regions and hosts associated with the partition will move to the default<br />

partition. To delete regions and hosts, refer to Regions on page 36 and Hosts on page 25.<br />

To Delete a Partition<br />

1. Select the Hosts tab, then select the Configuration subtab.<br />

2. Select the partition you want to delete from the navigation tree. To select multiple partitions, use the Shift<br />

or Ctrl keys.<br />

3. Select Delete from the File menu or right-click on the partition(s) in the navigation tree and select Delete.<br />

<strong>Clusterworx</strong> asks you to confirm your action.<br />




Regions<br />

The following sections outline the fundamentals of adding, editing, and deleting regions.<br />

Add a Region<br />

A region is a subset of a partition and may share any hosts that belong to the same partition—even if the hosts<br />

are currently used by another region. Adding a region(s) allows you to more closely allocate resources to<br />

specific groups and users.<br />

To Add a Region<br />

1. Select the Hosts tab, then select the Configuration subtab.<br />

2. Select New Region from the File menu or right-click on a partition in the navigation tree and select New<br />

Region. A new region pane appears.<br />

3. Enter the name of the new region in the Name field.<br />

4. (Optional) Enter a description of the new region in the Description field.<br />

5. (Optional) Select the name of the partition to which to assign the region from the drop-down menu.<br />

Note<br />

Regions not assigned to a partition become part of the default or unassigned partition.<br />

6. Select the Hosts subtab, then click Add to assign a host(s) to the region. The Select Hosts dialog appears.<br />



7. Select the host(s) you want to add to the region from the Select Hosts dialog (use the Shift or Ctrl keys to<br />

select multiple hosts).<br />

8. Click OK to add the host(s) or click Cancel to abort this action.<br />

9. Select the Groups subtab, then click Add. The Select Groups dialog appears.<br />


10. From the Select Groups dialog, select the group(s) you want to add to the region (use the Shift or Ctrl keys<br />

to select multiple groups). Adding groups to the region defines which users may access the hosts assigned<br />

to the region.<br />

11. Click OK to add the group(s) or click Cancel to abort this action.<br />

Tip<br />

A common mistake made when defining regions is forgetting to assign groups to the region. If you forget<br />

to assign groups, the hosts appear to be non-existent.<br />

12. Click Apply to add the new region or click Close to abort this action.<br />




Edit a Region<br />

Editing regions allows you to change previously saved information about a region or to modify region<br />

memberships by adding or removing groups or hosts.<br />

To Edit a Region<br />

1. Select the Hosts tab, then select the Configuration subtab.<br />

2. Select a region from the navigation tree. To select multiple regions, use the Shift or Ctrl keys.<br />

3. Select Edit from the Edit menu or right-click on the region(s) in the navigation tree and select Edit. The<br />

region pane appears.<br />

4. From this view, you may make changes to the Region(s).<br />

5. Click Apply to accept the changes or click Close to abort this action.<br />



Delete a Region<br />

Deleting a region allows you to remove unused regions from the system.<br />

To Delete a Region<br />

1. Select the Hosts tab, then select the Configuration subtab.<br />


2. Select the region you want to delete from the navigation tree. To select multiple regions, use the Shift or<br />

Ctrl keys.<br />

3. Select Delete from the File menu or right-click on the region(s) in the navigation tree and select Delete.<br />

<strong>Clusterworx</strong> asks you to confirm your action.<br />

4. Click OK to remove the region(s) or click Cancel to abort this action.<br />

Note<br />

If you delete a region, all hosts associated with the region return to the partition (or parent region) to<br />

which the region belonged. If the region was not part of a partition, the hosts move to the default<br />

partition.<br />




Instrumentation<br />

The <strong>Clusterworx</strong> instrumentation service provides the ability to monitor system health and activity for every<br />

host in the cluster. Hosts may be monitored collectively to provide a general system overview, or individually<br />

to allow you to view the configuration of a particular host (useful when diagnosing problems with a<br />

particular host or configuration). From the Instrumentation tab, you can view statistical data for the<br />

following areas:<br />

General<br />

CPU<br />

Memory<br />

Disk<br />

Network<br />

Kernel<br />

Load<br />

Temperature<br />

Tip<br />

When using the <strong>Clusterworx</strong> client by exporting an X session over an SSH connection, enabling the<br />

gradient fill and anti-aliasing options for instrumentation may adversely affect the performance of the<br />

GUI. This is common on slower systems. To improve system performance, disable the Gradient Fill and<br />

Anti-Aliasing options under the View menu. For best performance, install a <strong>Clusterworx</strong> Client.<br />

States<br />

<strong>Clusterworx</strong> uses the following icons to provide visual cues about system status. These icons appear next to<br />

each host viewed with the instrumentation service or from the navigation tree. Similar icons appear next to<br />

clusters, partitions, and regions to indicate the status of hosts contained therein.<br />

The host-state icons are: Healthy, Unknown, Off, Provisioning, Disabled, and Logging. The aggregate status<br />

icons shown for clusters, partitions, and regions are: Healthy, Informational, Warning, Critical, and Error.<br />



<strong>Clusterworx</strong> Message Log<br />

<strong>Clusterworx</strong> also tracks messages logged for each host in the cluster. The <strong>Clusterworx</strong> message log is located<br />

on the instrumentation overview screen. If you select multiple hosts (or a container such as a cluster,<br />

partition, or region), the log shows messages for any host in the selection. If you select a single host, the<br />

message log shows messages for this host only. Messages have three severity levels: error, warning, and<br />

information. For details on instrumentation event monitoring, see Instrumentation Service on page 171.<br />

Menu Controls<br />

The output for the instrumentation service is easily configured and displayed using menu controls. These<br />

controls are divided between the Edit and View menus.<br />

Edit Menu<br />

Interval Set the frequency (in seconds) with which to gather and display data—10, 5, or 1.<br />

Metrics Select and display custom metrics defined for your system—this option is not available to all tab<br />

views. See Metrics on page 175 for information on defining metrics.<br />

Filter List hosts that are in specific states (general tab only).<br />

View Menu<br />

Overview Display all statistical data for the selected host(s).<br />

Thumbnail Display a simplified, graphical status for the selected host(s). Each thumbnail includes CPU,<br />

memory, and disk statistics.<br />

List View pre-configured and user-defined metrics for the selected host(s) in tabular form.<br />

Sort Organize and display statistical data according to the name or state of the host(s).<br />

Size Change the display size of thumbnails (Small, Medium, Large).<br />

Anti-Aliasing Apply smoothing to line graphs.<br />

Gradient Fill Apply fill colors to line graphs.<br />




General Tab<br />

The General tab provides details about the health, system configuration, and resource utilization of the<br />

host(s) selected in the navigation tree. You may compile and display system data into an overview, thumbnail<br />

view, or list view by selecting the corresponding option from the View menu.<br />

Overview<br />

The Overview option displays the overall status of the selected host(s), including system health, usage<br />

statistics, and any messages generated by the host(s). See States on page 40 for a list of system health<br />

indicators and <strong>Clusterworx</strong> Message Log on page 41 for information regarding generated messages.<br />




THUMBNAIL VIEW<br />

The Thumbnail view is available only for the General tab and displays a graphical representation of the<br />

system health, CPU usage, memory availability, and disk space. From this view you may sort the hosts by<br />

name or state (from the View menu), or define a filter in the Edit menu to display only those hosts that are in<br />

a specific state. See States on page 40 for a list of system health indicators.<br />




LIST VIEW<br />

Available only for the General tab, the List view displays all pre-configured and custom metrics being<br />

observed by the instrumentation service. To add metrics, see Instrumentation Service on page 171.<br />

Tip<br />

You may copy and paste the contents of list view tables for use in other applications.<br />



CPU Tab<br />

Select the CPU tab to monitor the CPU utilization for the selected host(s).<br />

Memory Tab<br />

Select the Memory tab to monitor the physical and virtual memory utilization for the selected host(s).<br />




Disk Tab<br />

Select the Disk tab to monitor the disk I/O and usage for the selected host(s).<br />

Network Tab<br />

Select the Network tab to monitor packet transmissions and errors for the selected host(s).<br />



Kernel Tab<br />

Select the Kernel tab to monitor the kernel information for the selected host(s).<br />

Load Tab<br />

Select the Load tab to monitor the load placed on the selected host(s).<br />




Temperature Tab<br />

Select the Temperature tab to view the temperature readings for the selected host(s). Each temperature<br />

summary contains up to five temperature readings—four processor temperatures followed by the ambient<br />

host temperature. On hosts that support IPMI, these temperature readings differ slightly—two processor<br />

temperatures, two power supply temperatures, and the ambient host temperature.<br />

Note<br />

The processor temperature readings for IPMI-based hosts indicate the amount of temperature change<br />

that must occur before the CPU’s thermal control circuitry activates to prevent damage to the CPU.<br />

These are not actual CPU temperatures.<br />



Chapter 5<br />

User Administration<br />

Working Environment<br />

<strong>Clusterworx</strong> allows you to configure groups, users, roles, and privileges to establish a working environment<br />

on the cluster. A group refers to an organization with shared or similar needs that is structured using specific<br />

roles (permissions and privileges) and region access that may be unique to the group or shared with other<br />

groups. Members of a group (users) inherit all rights and privileges defined for the group(s) to which they<br />

belong.<br />

[Diagram: a group is assigned roles and regions; users belong to one or more groups]<br />




For example, a user assigned to multiple groups (as indicated by the following diagram) has different rights<br />

and privileges within each group. This flexibility allows you to establish several types of user roles: full<br />

administration, group administration, user, or guest.<br />

[Diagram: Multi-Group Users]<br />

Note<br />

<strong>Clusterworx</strong> currently supports adding users and groups to payloads only—it does not support the<br />

management of local users and groups on the Master Host. Users with local Unix accounts do not<br />

automatically have <strong>Clusterworx</strong> accounts, and this information cannot be imported into <strong>Clusterworx</strong>.<br />

If you are using local authentication in your payloads and intend to add <strong>Clusterworx</strong> users or groups,<br />

ensure that the user and group IDs (UIDs and GIDs, respectively) match up between the accounts on the<br />

Master Host and <strong>Clusterworx</strong>. Otherwise, NFS and Runner may not work properly.<br />
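The UID/GID consistency check described in this note can be scripted as a small comparison. A minimal sketch; the user name and payload path in the usage comment are illustrative:

```shell
# Minimal sketch: confirm a user's UID and GID match between two passwd
# files (e.g., the Master Host's /etc/passwd and a payload copy).
# The user name and file paths in the usage comment are illustrative.
check_ids() {
    user=$1; file_a=$2; file_b=$3
    a=$(awk -F: -v u="$user" '$1 == u {print $3 ":" $4}' "$file_a")
    b=$(awk -F: -v u="$user" '$1 == u {print $3 ":" $4}' "$file_b")
    # Succeed only when the user exists in file_a and UID:GID match.
    [ -n "$a" ] && [ "$a" = "$b" ]
}

# Usage (hypothetical names/paths):
#   check_ids jdoe /etc/passwd /path/to/payload/etc/passwd && echo "UID/GID match"
```

Run a check like this for every Clusterworx user before enabling NFS or Runner, so mismatched IDs are caught early.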



Default User Administration Settings<br />

<strong>Clusterworx</strong> implements the following structure during the installation process:<br />

The root and guest user accounts are created.<br />

The root, guest, power, and users groups are created.<br />

The root and user roles are created.<br />

All privileges allowed by the installed license are created.<br />

User Configuration<br />

The Configuration subtab allows users to:<br />

Create, modify, or delete groups.<br />

Create, modify, or delete users.<br />

Create, modify, or delete roles.<br />

Assign or delete privileges.<br />




Roles<br />

The following sections outline the fundamentals of adding, editing, and deleting roles. Roles are associated<br />

with groups and privileges, and define the functionality assigned to each group. Several groups can use the<br />

same role.<br />

Add a Role<br />

Adding a role to <strong>Clusterworx</strong> allows you to define and grant system privileges to groups.<br />

To Add a Role<br />

1. Select the Users tab.<br />

2. Select New Role from the File menu or right-click on the Roles icon in the navigation tree and select New<br />

Role. A new role pane appears.<br />

3. Enter the name of the new role in the Name field.<br />

4. (Optional) Enter a description of the role in the Description field.<br />

5. Click Apply to save the role or click Close to abort this action.<br />

Note<br />

Adding or revoking privileges will not affect users that are currently logged into <strong>Clusterworx</strong>. Changes<br />

will take effect only after the affected users restart the <strong>Clusterworx</strong> client.<br />



GRANT PRIVILEGES<br />

The Privileges subtab allows you to assign permissions to a role. Any user with the role will have these<br />

permissions in the system. See Privileges on page 56.<br />

1. To add privileges to a role(s), select the Privileges subtab and click Add.<br />

2. Select the privilege(s) to grant the current role (use the Shift or Ctrl keys to select multiple privileges).<br />

3. Click OK to apply the privileges to the role or click Cancel to abort this action.<br />

ASSIGN ROLES TO GROUPS<br />

<strong>Clusterworx</strong> allows you to assign a role(s) to multiple groups. This permits users to have varied levels of<br />

access throughout the system.<br />

1. To assign a role(s) to a group, click Add.<br />

2. Select the group(s) to which to assign the role(s). Use the Shift or Ctrl keys to select multiple groups.<br />

3. Click OK to apply the role to the group(s) or click Cancel to abort this action.<br />




Edit a Role<br />

Editing roles allows you to modify privileges defined for a group.<br />

To Edit a Role<br />

1. Select the Users tab.<br />

2. Select a role from the navigation tree. To select multiple roles, use the Shift or Ctrl keys.<br />

3. Select Edit from the Edit menu or right-click on the role(s) in the navigation tree and select Edit.<br />

<strong>Clusterworx</strong> displays the role pane for each selected role.<br />

4. From this view, you may make changes to the role.<br />

5. Click Apply to accept the changes or click Close to abort this action.<br />

Note<br />

Editing a role will not affect the privileges of a user that is currently logged into <strong>Clusterworx</strong>. Changes<br />

will take effect only after you restart the <strong>Clusterworx</strong> client.<br />



Delete a Role<br />

Deleting a role removes user privileges assigned to the role.<br />

To Delete a Role<br />

1. Select the Users tab.<br />

2. Select the role you want to delete from the navigation tree. To select multiple roles, use the Shift or Ctrl<br />

keys.<br />

3. Select Delete from the File menu or right-click on the role(s) in the navigation tree and select Delete.<br />

<strong>Clusterworx</strong> asks you to confirm your action.<br />

4. Click OK to remove the role(s) or click Cancel to abort this action.<br />

Note<br />

Deleting a role will not affect the privileges of a user that is currently logged into <strong>Clusterworx</strong>. Changes<br />

will take effect only after you restart the <strong>Clusterworx</strong> client.<br />


Privileges<br />

Privileges are permissions or rights that grant varying levels of access to system users. <strong>Clusterworx</strong> allows<br />

you to assign privileges as part of a role, then assign the role to specific user groups. Users assigned to<br />

multiple groups will have different roles and access within each group. This flexibility allows you to establish<br />

several types of roles you can assign to users: full administration, group administration, user, or guest. See<br />

User Administration on page 49. The following table lists the privileges established for various <strong>Clusterworx</strong><br />

modules at the function and sub-function levels:<br />

<strong>Host Administration</strong>: Ability to administer cluster resources.<br />

Charting: Ability to configure cluster historical charting.<br />

Configuration: Ability to configure resource partition, region, and host settings.<br />

Instrumentation: Ability to configure cluster monitors.<br />

Power: Ability to manage the cluster power settings.<br />

Shell: Ability to execute parallel remote commands.<br />

<strong>Icebox Administration</strong>: Ability to manage the cluster Iceboxes.<br />

Charting: Ability to configure Icebox historical charting.<br />

Configuration: Ability to configure cluster Iceboxes.<br />

Instrumentation: Ability to configure Icebox monitors.<br />

Power: Ability to manage the Icebox power settings.<br />

Shell: Ability to manage Icebox remote connections.<br />

<strong>Image Administration</strong>: Ability to manage host images.<br />

Provisioning: Ability to deploy host images.<br />

<strong>User Administration</strong>: Ability to administer users.<br />

Configuration: Ability to configure user settings, groups, and roles.<br />


Groups<br />

The following sections outline the fundamentals of adding, editing, and deleting groups.<br />

Add a Group<br />

Adding a group creates a collection of users with shared or similar needs (e.g., an engineering, testing, or<br />

administrative group).<br />

To Add a Group<br />

1. Select the Users tab.<br />

2. Select New Group from the File menu or right-click on the Groups icon in the navigation tree and select<br />

New Group. A new group pane appears.<br />

3. Enter the group name in the Name field.<br />

4. (Optional) <strong>Clusterworx</strong> assigns a system-generated Group ID. Enter any changes to the ID in the Group<br />

ID field.<br />

5. (Optional) Enter a description for the group in the Description field.<br />

6. After making all configurations, click Apply to add the group or click Close to abort this action.<br />


ADD USERS<br />

The Users subtab allows you to identify the users that belong to the current group. Users are allowed to be<br />

part of any number of groups, but granting access to multiple groups may allow users unnecessary privileges<br />

to various parts of the system. See Roles on page 52.<br />

1. To add a user to the group, click Add. The Add Users dialog appears.<br />

2. Select the user(s) to add to the group (use the Shift or Ctrl keys to select multiple users).<br />

3. Click OK to add the user to the group(s) or click Cancel to abort this action.<br />

ASSIGN ROLES<br />

The Roles subtab allows you to assign specific roles to the group.<br />

1. To assign a role(s) to the group, click Add. The Add Roles dialog appears.<br />

2. Select the role(s) to assign the group (use the Shift or Ctrl keys to select multiple roles).<br />

3. Click OK to assign the role(s) to the group or click Cancel to abort this action.<br />


ASSIGN REGIONS<br />

The Regions subtab allows you to grant a group access to specific regions of the system. See Host<br />

Administration on page 23.<br />

1. To assign a region(s) to the group, click Add. The Add Regions dialog appears.<br />

2. Select the region(s) to assign the group (use the Shift or Ctrl keys to select multiple regions).<br />

3. Click OK to assign the region(s) to the group or click Cancel to abort this action.<br />


Edit a Group<br />

Editing a group allows you to change previously saved information about a group or modify group<br />

memberships by adding or removing users.<br />

To Edit a Group<br />

1. Select the Users tab.<br />

2. Select a group from the navigation tree. To select multiple groups, use the Shift or Ctrl keys.<br />

3. Select Edit from the Edit menu or right-click on the group(s) in the navigation tree and select Edit.<br />

<strong>Clusterworx</strong> displays the group pane for each selected group.<br />

4. From this view, you may make changes to the Group.<br />

5. Click Apply to accept the changes or click Close to abort this action.<br />


Delete a Group<br />

Deleting a group allows you to remove unused groups from the system.<br />

To Delete a Group<br />

1. Select the Users tab.<br />

2. Select the group you want to delete from the navigation tree. To select multiple groups, use the Shift or<br />

Ctrl keys.<br />

3. Select Delete from the File menu or right-click on the group(s) in the navigation tree and select Delete.<br />

<strong>Clusterworx</strong> asks you to confirm your action.<br />

4. Click OK to remove the group(s) or click Cancel to abort this action.<br />


Users<br />

The following sections outline the fundamentals of adding, editing, and deleting users.<br />

Add a User<br />

Adding a user to <strong>Clusterworx</strong> creates an account for the user and grants access to the system.<br />

To Add a User<br />

1. Select the Users tab.<br />

2. Select New User from the File menu or right-click a region and select New User. A new user pane<br />

appears.<br />

3. Enter the user’s login name in the User Name field.<br />

4. (Optional) <strong>Clusterworx</strong> assigns a system-generated user ID. Enter any changes to the ID in the User ID<br />

field.<br />

5. Enter the user’s first and last name in the Full Name field.<br />

6. Enter a new password for the user in the Password field.<br />

7. Re-enter the password in the Confirm Password field.<br />

8. (Optional) Specify the user’s home directory in the Home Directory field (e.g., /home/username).<br />

9. (Optional) Enter a shell for this user or click the drop-down menu to select an existing shell. By default,<br />

<strong>Clusterworx</strong> uses /bin/bash.<br />

10. After making all configurations, click Apply to make the changes or click Close to abort this action.<br />


GROUPS SUBTAB<br />

The Groups subtab allows you to identify the groups to which the user belongs. Users are allowed to be part of<br />

any number of groups, but granting access to multiple groups may allow users unnecessary privileges to<br />

various parts of the system. See Roles on page 52.<br />

1. To add the user to a group(s), click Add.<br />

2. Select the group(s) to which to assign the user (use the Shift or Ctrl keys to select multiple groups).<br />

3. Click OK to add the user to the group(s) or click Cancel to abort this action.<br />

EDIT BUTTON<br />

Use the Edit button to designate the selected group as the user’s primary group.<br />

PRIMARY GROUP<br />

Each user must have a single primary group. If you do not specify a primary group, <strong>Clusterworx</strong><br />

automatically assigns the user to the “users” group. If the “Create a private group for the user” checkbox is<br />

selected, <strong>Clusterworx</strong> adds a new group with the same name as the user. If you are using third-party power<br />

controls, the power group must be the primary group for at least one user.<br />

DISABLE ACCOUNT<br />

This check box indicates whether the user can log into this account. Selecting this option signifies that<br />

the user account will not be available for inclusion in payloads. This allows you to temporarily disable the<br />

user account without deleting it from <strong>Clusterworx</strong>.<br />


Edit a User Account<br />

Editing a user account allows you to change information previously saved about a user.<br />

To Edit a User<br />

1. Select the Users tab.<br />

2. Select a user from the navigation tree. To select multiple users, use the Shift or Ctrl keys.<br />

3. Select Edit from the Edit menu or right-click on the user(s) in the navigation tree and select Edit.<br />

<strong>Clusterworx</strong> displays a user pane for each user selected. From this view, you may make changes to the<br />

user account.<br />

4. Click Apply to accept the changes or click Close to abort this action.<br />


Disable a User Account<br />

Disabling a user account allows you to render the account temporarily inoperative without removing it.<br />

To Disable a User Account<br />

1. Select the Users tab.<br />

2. Select a user from the navigation tree.<br />

3. Select Edit from the Edit menu or right-click on the user in the navigation tree and select Edit.<br />

<strong>Clusterworx</strong> displays the user pane.<br />

4. Check the Disable Account option.<br />

5. Click Apply. The user icon in the navigation tree changes status to disabled.<br />


Delete a User Account<br />

Deleting a user allows you to remove unused user accounts from the system. To temporarily disable a user<br />

account, see Disable a User Account on page 65.<br />

To Delete a User<br />

1. Select the Users tab.<br />

2. Select the user you want to delete from the navigation tree. To select multiple users, use the Shift or Ctrl<br />

keys.<br />

3. Select Delete from the File menu or right-click on the user(s) in the navigation tree and select Delete.<br />

<strong>Clusterworx</strong> asks you to confirm your action.<br />

4. Click OK to remove the user(s) or click Cancel to abort this action.<br />


Chapter 6<br />

Power Control<br />

Icebox Administration<br />

The Icebox administration feature provides you with the ability to add an Icebox, view or edit the Icebox<br />

configuration, and control Icebox functions. <strong>Clusterworx</strong> is integrated with the Icebox to provide power<br />

management, remote reset, architecture-independent temperature monitoring, and serial access for each host<br />

installed in the cluster. For specific information about the Icebox, please refer to the Icebox User's <strong>Guide</strong> (see<br />

Linux Networx Documentation on the Web on page ii).<br />

Note<br />

Ensure that the Icebox is at least version 3.1, build 53 (or higher). To verify your version and build, use<br />

the version command on the Icebox. Refer to the Icebox User's <strong>Guide</strong> for details.<br />
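As a sketch, a version string captured from the Icebox console could be checked before proceeding. The exact format printed by the <code>version</code> command is an assumption here; adjust the parsing to match your firmware:

```shell
# Hypothetical version string as captured from the Icebox console;
# the real `version` output format may differ.
icebox_ok() {
  # $1 = version string such as "3.1 build 53"
  major=${1%%.*}
  rest=${1#*.}
  minor=${rest%% *}
  build=${1##* }
  # Require at least version 3.1, build 53.
  [ "$major" -gt 3 ] && return 0
  [ "$major" -eq 3 ] && [ "$minor" -gt 1 ] && return 0
  [ "$major" -eq 3 ] && [ "$minor" -eq 1 ] && [ "$build" -ge 53 ]
}
icebox_ok "3.1 build 53" && echo "Icebox firmware is new enough"   # -> Icebox firmware is new enough
```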

The following is a list of common site-specific configuration items referenced in this section of the guide:<br />

IP Address<br />

Host name<br />

Netmask<br />

SSH<br />


Add an Icebox<br />

To Add an Icebox<br />

1. Select the Power Control tab.<br />

2. Select the Configuration subtab.<br />

3. Select New Icebox from the File menu or right-click on the cluster icon in the navigation tree and select<br />

New Icebox. An Icebox pane appears.<br />

4. Enter the name of the new Icebox in the Name field.<br />

5. (Optional) Enter a description of the Icebox in the Description field.<br />

6. Enter a new password for the Icebox in the Password field (this is the administrative password).<br />

7. Re-enter the password in the Confirm Password field.<br />

8. Enter the IP address for the Icebox in the Address field.<br />


9. Enter the MAC address for the Icebox in the MAC field.<br />

Tip<br />

After entering a name and an IP address, click Connect to connect to the Icebox. <strong>Clusterworx</strong> sets the<br />

MAC address for you.<br />

Note<br />

When gathering data from an Icebox, you must first connect to the Icebox by clicking Connect.<br />

Otherwise, you are communicating only with the <strong>Clusterworx</strong> database.<br />

10. Click Connect. A group of subtabs appears along the bottom of the Icebox pane.<br />

11. Use the subtabs to configure the Icebox. When finished, click Apply to add the Icebox or click Close to<br />

abort this action.<br />


GENERAL SUBTAB<br />

The Configuration General subtab allows you to select various Icebox settings. These include enabling port<br />

history, enabling port authentication, enabling concurrent ports, and enabling temperature shutdowns.<br />

<strong>Enable Port History</strong>: Instructs the Icebox to automatically output the last 16k of data transferred on the<br />

console port. If you connect to any of the console ports (1-10 or Aux 1-2), the Icebox outputs the last 16k of<br />

data rather than simply displaying a blank screen. This is beneficial if you want to display boot messages when<br />

you connect to a port. For information on how to view a history of all console port activity, see the Icebox<br />

User’s <strong>Guide</strong>.<br />

<strong>Enable Port Authentication</strong>: Select this option to increase security for remote access to console ports. With<br />

this option enabled, users that connect to consoles by Telnetting to a specific TCP port must authenticate<br />

before gaining access to the console. If this option is disabled, users gain immediate access to the console and<br />

are not required to enter a password. The need to use this setting depends on your specific application and<br />

how you typically access hosts in your system. If you use Conman and are behind a secure network, you may<br />

elect to leave this option disabled.<br />

<strong>Enable Concurrent Ports</strong>: Select this option to enable multiple simultaneous connections to a serial port. See<br />

the Icebox User’s <strong>Guide</strong> for additional information.<br />

<strong>Enable Temperature Shutdown</strong>: Enables the ability to shut down hosts whose temperature readings exceed<br />

the temperature thresholds setting (strongly recommended). See Temperature Thresholds on page 72.<br />

Note<br />

Icebox temperature shutdown settings do not apply to IPMI-based hosts.<br />


PORTS SUBTAB<br />

The Configuration Ports subtab allows you to change port settings. This feature, originally configured in host<br />

administration, also allows you to view which ports are connected to which hosts (see Add a Host on<br />

page 25).<br />

To edit port settings, double-click the port or select a port and click Edit. The Edit Port dialog appears.<br />


Note<br />

The Edit Port dialog does not allow you to edit a host-port association designated as a Serial port. See Add<br />

a Host on page 25 to modify these settings.<br />

<strong>Auto Power On</strong>: This option allows you to configure any of the host ports (1-10) to power on automatically<br />

when the Icebox is turned on. For example, if the system loses power, all ports with Auto Power On enabled<br />

turn on once power is restored. Otherwise, you must manually restore power to the ports.<br />

<strong>Enable Flow Control</strong>: Configure the hardware flow control setting for each of the 10 host and 2 auxiliary<br />

console ports. Hardware flow control allows the transaction receiver to tell the transmitter to stop sending<br />

data (e.g., if the receiver’s buffer is getting too full). This can eliminate dropped data due to buffer overflow.<br />

When this option is enabled, it is important to ensure that the host’s control software is configured to support<br />

it. Typically, this option is disabled unless critical data is being transmitted at the fastest baud rate (115200).<br />

<strong>Baud Rate</strong>: Configure the baud rate for each of the 10 host and 2 auxiliary console ports. In order to<br />

establish proper console communication with hosts, this setting must match on both the Icebox port and the<br />

host. In situations where third-party peripherals such as switches and UPS equipment function only at a<br />

slower baud rate, lower the baud setting to match. The fastest setting (115200) is recommended whenever<br />

supported.<br />

Note<br />

Baud settings must be the same for the kernel, the Icebox, and LinuxBIOS.<br />
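To make the Note above concrete: on the host side, the serial console rate is typically fixed on the kernel command line. The parameter syntax below is standard Linux; the device name ttyS0 is an assumption about your host wiring:

```
console=ttyS0,115200n8
```

The Icebox port's Baud Rate setting and, where used, the LinuxBIOS serial rate would then be set to 115200 as well.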

<strong>Temperature Thresholds</strong>: This option allows you to set up to five temperature thresholds for each host (four<br />

processor temperatures followed by the ambient host temperature). Should the host’s temperature exceed<br />

any of these thresholds, the Icebox will shut off power to the host (i.e., a hard power off). This option requires<br />

that Enable Temperature Shutdown is enabled (see Enable Temperature Shutdown on page 70).<br />

Temperatures are monitored internally.<br />

Note<br />

Icebox temperature thresholds do not apply to IPMI-based hosts.<br />
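The shutdown rule described above amounts to a simple comparison of each reading against its limit: four processor temperatures followed by the ambient temperature. A minimal sketch, with illustrative values (the Icebox performs this check internally):

```shell
# Compare readings against per-host limits: four processor temperatures,
# then the ambient host temperature (order per the Edit Port dialog).
over_threshold() {
  limits=$1
  readings=$2
  i=1
  for limit in $limits; do
    reading=$(echo "$readings" | cut -d' ' -f"$i")
    if [ "$reading" -gt "$limit" ]; then
      echo "sensor $i: $reading C exceeds limit $limit C -- hard power off"
    fi
    i=$((i+1))
  done
}
# Illustrative limits: 70 C per CPU, 45 C ambient.
over_threshold "70 70 70 70 45" "62 64 61 63 47"   # -> sensor 5: 47 C exceeds limit 45 C -- hard power off
```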


NETWORK SUBTAB<br />

The Configuration Network subtab allows you to view the Icebox network settings (e.g., the DHCP status,<br />

address, and gateway).<br />

1. If the DHCP Status is OFF, the Icebox uses locally stored static IP information to configure its network.<br />

You can change this information by editing the address, netmask, and gateway settings in the upper and<br />

lower panes.<br />

2. If the DHCP Status is ON, the Icebox requests its IP configuration from the DHCP server—otherwise, you<br />

must modify these settings from the Hosts Administration tab or directly from the Icebox.<br />

3. After making changes to the configuration, click Apply to save changes or click Close to abort this action.<br />


SNMP SUBTAB<br />

The Configuration SNMP subtab allows you to view or modify Icebox SNMP settings. The Simple Network<br />

Management Protocol (SNMP) allows you to monitor and control all managed devices through a common<br />

interface. The protocol consists of Get, Set, and Trap operations on the Management Information Base (MIB).<br />

The MIB is a tree-shaped information structure that defines what sort of data can be manipulated via SNMP.<br />

1. To modify SNMP settings or traps, click the respective checkbox to enable or disable each item.<br />

2. Click Apply to save changes to these settings or click Cancel to abort this action.<br />


FILTERS SUBTAB<br />

The Configuration Filters subtab allows you to set the Icebox filter settings. IP filtering allows you to grant or<br />

deny an IP address (or range of addresses) access to a particular service on the Icebox. By default,<br />

<strong>Clusterworx</strong> provides all connections with access to any available services.<br />

1. To modify the Filter Policy, click the pull-down menu and select Deny or Allow filtering.<br />

Warning!<br />

Deny applies to all filtering policies. If you select Deny but do not apply any rules, you can accidentally<br />

lock yourself out of your Iceboxes. To regain access, you must connect to the Icebox directly through a<br />

serial cable.<br />

2. To add a filter, click New to launch the Add Filter dialog.<br />

3. Select a filter service—available services include Dport, NIMP, SNMP, SSH, and Telnet.<br />


4. Enter the IP address of the Icebox.<br />

5. Click the Mask pull-down menu to select the Net Mask.<br />

6. Click OK to save changes to these settings or click Cancel to abort this action.<br />

7. If you are finished configuring the Icebox, click Apply to save these settings or click Close to abort this<br />

action.<br />
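The effect of an allow or deny rule over an address range reduces to a netmask comparison. A minimal sketch of that decision (addresses and prefix length are illustrative, not taken from any real Icebox configuration):

```shell
# Convert a dotted-quad address to an integer.
ip_to_int() {
  old_ifs=$IFS; IFS=.
  set -- $1
  IFS=$old_ifs
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

# Succeed if address $1 falls inside network $2 with a prefix of $3 bits.
in_subnet() {
  mask=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$2") & mask )) ]
}

# An Allow rule for 192.168.1.0/24: this client matches, so access is granted.
in_subnet 192.168.1.42 192.168.1.0 24 && echo allow || echo deny   # -> allow
```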


Power Management<br />

The Power Management feature provides you with the ability to remotely reset, power up, power down, and<br />

cycle power on various hosts in the system. The Power Management subtab uses the Host and Icebox views<br />

to examine and modify Icebox power administration.<br />

Hosts Subtab<br />

The Power Management Hosts subtab allows you to control power for each host connected to the Icebox (the<br />

controls in this pane are essentially the same as they are when you connect directly to the Icebox). To modify<br />

hosts associated with Icebox ports, see Hosts on page 25.<br />

Host Controls<br />

To begin using this feature, select a host(s) from the navigation tree (use the Shift and Ctrl keys to select<br />

multiple hosts). A power icon appears for each selected host.<br />

BEACON ON<br />

To identify a specific host in a cluster for troubleshooting purposes, click Beacon On to flash a light from the<br />

host. The host indicator on your screen flashes red.<br />

Note<br />

When you turn on a beacon for an IPMI-supported host, the beacon will flash for only a few seconds. To<br />

extend the beacon duration, see cwpower on page 204.<br />


BEACON OFF<br />

Turn off the beacon.<br />

ON<br />

Turn on power to the host.<br />

Note<br />

If you are unable to power a host on or off, the port may be locked. See the Icebox User’s <strong>Guide</strong> for<br />

information on port locking.<br />

OFF<br />

Immediately turn off power to the host.<br />

RESET<br />

Send a signal to the motherboard to perform a soft boot of the host.<br />

CYCLE<br />

Turn the power off, then on. This is especially useful for multiple hosts.<br />

Note<br />

The different colors of Icebox ports on your screen signify the following:<br />

Bright green: Port is on.<br />

Dark green: Port is off.<br />

Red: Beacon is flashing.<br />

Grayed out: Status unavailable. The host may be connected to an auxiliary port.<br />

SHUTDOWN<br />

Halt all applications and services running on the host and, if the hardware allows, power the host off. If you<br />

have successfully used the /sbin/shutdown command to shut down hosts and have them boot at the next power<br />

cycle, it is safe to enable this option. To enable shutdown, set the shutdown.button.enable option in<br />

HostAdministrationService.profile to true.<br />

Warning!<br />

Using the shutdown option requires that the BIOS is enabled to support boot at power up—the default<br />

behavior for LinuxBIOS. This setting, also referred to as Power State Control or Power On Boot, is<br />

typically enabled for most server-type motherboards.<br />

If you do not enable this BIOS setting, hosts that are shut down may become unusable until you press the<br />

power button on each host. For the location of your host power switch, please consult your host<br />

installation documentation.<br />

Note<br />

The power connection to the host remains active unless you click Off.<br />
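Assuming the profile stores the option as a simple key=value line (the file format and its installed location are not documented here), enabling shutdown might look like this, demonstrated on a local copy of the file:

```shell
# Work on a local copy; the real HostAdministrationService.profile location
# in your Clusterworx installation may differ, and key=value is an assumption.
profile=./HostAdministrationService.profile
printf 'shutdown.button.enable=false\n' > "$profile"

# Flip the option on.
sed -i 's/^shutdown\.button\.enable=.*/shutdown.button.enable=true/' "$profile"
grep '^shutdown.button.enable' "$profile"   # -> shutdown.button.enable=true
```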


Tip<br />

To return the host to normal operational status, cycle the power.<br />

REBOOT<br />

Shuts down and restarts all applications and services on the host.<br />


Iceboxes Subtab<br />

The Power Management Iceboxes subtab allows you to control power for each port on the Icebox (the<br />

controls in this pane are essentially the same as they are when you connect directly to the Icebox). To begin<br />

using this feature, select an Icebox from the navigation tree. A power icon appears for each port on the<br />

Icebox.<br />

BEACON ON<br />

To identify a host connected to a particular port on the Icebox, select the port and click Beacon On. The host<br />

flashes a beacon light and the port indicator on screen flashes red.<br />

Note<br />

The beacon function works only if the hardware installed in your cluster supports Icecards. Hosts<br />

without Icecards do not support beacons.<br />

BEACON OFF<br />

Turn off the beacon.<br />

ON<br />

Turn on power to the host.<br />

OFF<br />

Turn off power to the host.<br />


RESET<br />

Send a signal to the motherboard to perform a soft boot of the host.<br />

CYCLE<br />

Turn the power off, then on. This is especially useful for multiple hosts.<br />

Note<br />

The different colors of Icebox ports on your screen signify the following:<br />

Bright green: Port is on.<br />

Dark green: Port is off.<br />

Red: Beacon is flashing.<br />

Grayed out: Status unavailable.<br />




Chapter 7<br />

Imaging<br />

Overview<br />

<strong>Clusterworx</strong> version-controlled image management allows you to create and store images that can be used to<br />

install and configure hosts in your system. An image may contain file system information, utilities used for<br />

provisioning, one payload, and one kernel—although you may create and store many payloads and kernels.<br />

The payload contains the operating system, applications, libraries, configuration files, locale and time zone<br />

settings, file system structure, selected local user and group accounts (managed by <strong>Clusterworx</strong>), and any<br />

centralized user authentication settings to install on each host (e.g., NIS, LDAP, and Kerberos). The kernel is<br />

the Linux kernel.<br />

Note<br />

For a list of <strong>Clusterworx</strong>-supported operating systems, see Operating <strong>System</strong> Requirements on page 2.<br />

[Figure: one payload (from the stored payloads) + one kernel (from the stored kernels) = an image.]<br />

This chapter provides both GUI and command-line interface directions to assist you in building and<br />

configuring an image. The image configuration process allows you to select a kernel and payload, and also<br />

configures the boot utilities and partition layout. Once the new image is complete, you can use provisioning<br />

to apply the image to the host(s). See Provisioning on page 153.<br />


Payload Management<br />

Payloads are stored versions of the operating system and any applications installed on the hosts. Payloads are<br />

compressed and transferred to the hosts via multicast during the provisioning process.<br />

Linux Distributions<br />

Before you can begin working with payloads, you must first ensure that your Linux distribution is installed<br />

and available for use:<br />

Red Hat Installations<br />

1. Mount disk 1 and copy the contents of the entire disk to a location on the hard drive:<br />

mount /mnt/cdrom<br />

or<br />

mount -o loop RHEL4-x86_64-WS-disc1.iso /mnt/cdrom<br />

mkdir /mnt/redhat<br />

cp -r /mnt/cdrom/* /mnt/redhat<br />

2. Mount disk 2 and copy the *.rpm files from the RPMS directory to the RPMS directory on the hard drive:<br />

cp /mnt/cdrom/RedHat/RPMS/*.rpm /mnt/redhat/RedHat/RPMS<br />

3. Repeat this process with all other binary CD-ROMs.<br />

SuSE Linux Enterprise Server Installations<br />

1. Mount disk 1 and copy the contents of the entire disk to a location on the hard drive:<br />

mount /media/cdrom<br />

or<br />

mount -o loop SLES-9-x86-64-CD1.iso /media/cdrom<br />

mkdir /mnt/suse<br />

cp -r /media/cdrom/* /mnt/suse<br />

2. Mount disk 2 and copy the SuSE directory that contains the RPMs to the SuSE directory on the hard<br />

drive:<br />

cp -r /media/cdrom/suse/* /mnt/suse/suse<br />

3. Repeat this process with all other binary CD-ROMs.<br />


Create a Payload<br />

Payloads are initially created from supported Linux distribution installation media (CD-ROM, FTP, or NFS) to<br />

build a base payload (see Operating <strong>System</strong> Requirements on page 2 for a list of supported distributions) or by<br />

importing a payload from a previously provisioned host. Additions and changes are applied by adding or<br />

removing packages, or by editing files through the GUI or CLI. Changes to the payload are managed by the<br />

<strong>Clusterworx</strong> Version Control <strong>System</strong> (VCS). Package information and files are stored and may be browsed<br />

through <strong>Clusterworx</strong>.<br />

Warning!<br />

Please consult Linux Networx before upgrading your Linux distribution or kernel. Upgrading to a<br />

distribution or kernel not approved for use on your system may render <strong>Clusterworx</strong> inoperable or<br />

otherwise impair system functionality. Technical Support is not provided for unapproved system<br />

configurations.<br />

To Create a New Payload<br />

To create a new payload from a Linux distribution, do the following:<br />

1. Select the Imaging tab.<br />

2. Select New Payload from the File menu or right-click on the Payloads entry in the navigation tree and<br />

select New Payload. A new payload pane appears.<br />

Note<br />

To create a new payload from an existing host, see To Create a Payload from an Existing Host on page 90.<br />




3. Enter the name of the new payload in the Name field.<br />

4. (Optional) Enter a description of the new payload in the Description field.<br />

5. Click Select to display the Package Source dialog.<br />

6. Select the Scheme desired.<br />

7. Enter the Location of the top level directory for the Linux distribution or, if the File scheme is selected,<br />

click Browse to locate the directory.<br />

Tip<br />

If you are creating multiple payloads from the same distribution source, it may be faster and easier to<br />

copy the distribution onto the hard drive. This also prevents you from having to switch CD-ROMs during<br />

the payload creation process. See Red Hat Installations on page 84 and SuSE Linux Enterprise Server<br />

Installations on page 84 for specific details on installing these distributions.<br />

8. (Optional) Enter a Host (only if the selected scheme is http:// or ftp://).<br />

9. (Optional) Enter Username and Password (only if you selected Use Authentication).<br />

10. Click OK to continue or click Cancel to abort this action.<br />




11. As the distribution loads, the Task Progress dialog appears. This dialog displays the status of the payload<br />

and identifies any errors that occur during the load process.<br />

Tip<br />

Select Hide on Completion to close the Task Progress dialog if no errors or warnings occur.<br />

Note<br />

If <strong>Clusterworx</strong> is unable to detect payload attributes, the Distribution Unknown dialog appears. From this<br />

dialog, select the distribution type that most closely resembles your distribution and <strong>Clusterworx</strong> will<br />

attempt to create your payload.<br />

12. (Optional) Modify the Architecture field.<br />

13. (Optional) Set the Locale and Time Zone.<br />

14. Click the Packages subtab.<br />




15. (Optional) Click Add. The Select Categories dialog appears.<br />

16. Select which payload categories to install or remove by clicking the checkbox next to the category.<br />

Note<br />

When you select a “core” category to include in a payload, <strong>Clusterworx</strong> automatically selects the packages<br />

essential for that capability to run. However, you may include additional packages at any<br />

time. See Add a Package to an Existing Payload on page 92.<br />

17. Click OK to continue or click Cancel to abort this action.<br />




18. Click Apply to save changes and build the payload. Click Revert or Close to abort this action. The payload<br />

progress dialog appears.<br />

Tip<br />

If an RPM installation error occurs during the payload creation process, <strong>Clusterworx</strong> enables the Details<br />

button and allows you to view which RPM produced the error.<br />

To view error information about a failed command, click the command description field. You may copy<br />

the contents of this field and run it from the CLI to view specific details about the error.<br />

19. Click Check In to import the new payload into VCS. See also Version Control <strong>System</strong> (VCS) on page 144.<br />

20. (Optional) Select the Authentication subtab to add configurations for centralized authentication services.<br />

See Payload Authentication Management on page 98 for more information.<br />




To Create a Copy of an Existing Payload<br />

1. Select the Imaging tab.<br />

2. Select a payload from the navigation tree, then right-click on the payload and select Copy.<br />

Tip<br />

You may also open a payload for editing, then click the Copy button.<br />

3. <strong>Clusterworx</strong> prompts you for the name of the new payload.<br />

4. Enter the name of the new payload and click OK. Click Cancel to abort this action.<br />

To Create a Payload from an Existing Host<br />

Creating a payload from an existing host is helpful in situations where a specific host is already configured<br />

the way you want it. This feature allows you to create new payloads that use the configuration and distribute<br />

the image to other hosts.<br />

Note<br />

On RHEL4, temporarily disable SELinux while importing the payload. If you do not require SELinux,<br />

you may want to leave it disabled.<br />

To disable SELinux:<br />

1. Navigate to the Imaging tab.<br />

2. Select the kernel you are using and edit the kernel parameters.<br />

3. Add selinux=0 as a parameter.<br />

4. Reboot the host and import the payload.<br />
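To verify that the parameter took effect after the reboot, you can inspect the running kernel’s command line on the host. The helper below is an illustrative sketch, not a Clusterworx command (on the host itself, getenforce should also report “Disabled”):<br />

```shell
# Succeeds if the given kernel command line contains selinux=0.
selinux_param_disabled() {
    printf '%s\n' "$1" | grep -qw 'selinux=0'
}

# On the RHEL4 host, before importing the payload:
#   selinux_param_disabled "$(cat /proc/cmdline)" && echo "SELinux is off"
```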

1. Select the Imaging tab.<br />

2. Click Import Payload from the File menu. The Create Payload from Existing Installation dialog appears.<br />

3. Enter the name of the host to use to create a payload or select a host from the pull-down menu.<br />

4. Enter a name for the payload in the Name field.<br />



5. (Optional) Enter a description of the payload in the Description field.<br />


6. (Optional) Review the Excluded Files list and remove any files you want to include in the payload (use the<br />

Shift or Ctrl keys to select multiple files).<br />

Warning!<br />

If you include a symlink when creating a payload, excluding the target produces a dangling symbolic link.<br />

This link may cause an exception and abort payload creation when <strong>Clusterworx</strong> attempts to repair<br />

missing directories.<br />

7. (Optional) Enter the location of any file you want to exclude from the payload and click Add. Click<br />

Browse to locate a file on your system.<br />

8. Click OK to create the payload or click Cancel to abort this action.<br />




Add a Package to an Existing Payload<br />

Adding a package to a payload allows you to make additions or changes to the default Linux installation. For<br />

a list of supported distributions, see Operating <strong>System</strong> Requirements on page 2.<br />

To Add a Package to an Existing Payload<br />

1. Select the Imaging tab.<br />

2. Select a payload from the navigation tree.<br />

3. Select Edit from the Edit menu or right-click on the payload in the navigation tree and select Edit.<br />

4. Click the Packages subtab.<br />

5. Click Add. The Package Source dialog appears.<br />

6. Select the Scheme desired.<br />

7. Enter the Location of the top level directory for the Linux distribution, a directory containing RPM<br />

packages, or the location of an individual package. If you selected the File scheme, click Browse to locate<br />

the package.<br />

Note<br />

If the Browse button does not launch a dialog, a DNS name resolution error may exist. The client must be<br />

configured with the DNS-resolvable server name, not the IP address.<br />

If you have several packages in a directory, select the directory. <strong>Clusterworx</strong> displays all packages in the<br />

directory—from here, you can choose which packages you want to install (see page 93). <strong>Clusterworx</strong><br />

resolves package dependencies (see Payload Package Dependency Checks on page 96).<br />

8. (Optional) Enter a Host (only if the selected scheme is http:// or ftp://).<br />

9. (Optional) Enter Username and Password (only if you selected Use Authentication).<br />

10. Click OK to continue (or click Cancel to abort this action and return to the Payload pane).<br />



11. The Select Packages dialog appears.<br />

12. Select which package(s) to install by clicking the checkbox next to the package.<br />

13. Click OK to continue or click Cancel to abort this action.<br />

14. Click Apply to save changes. Click Revert or Close to abort this action.<br />

Note<br />


Before adding the package, <strong>Clusterworx</strong> performs a package dependency check. See Payload Package<br />

Dependency Checks on page 96 for information about dependency errors.<br />

15. Click Check In to check the payload into VCS.<br />

16. Update the image to use the new payload.<br />

17. Re-provision the hosts with the new image.<br />




Remove a Payload Package<br />

The Packages subtab provides a view into the current packages installed in the payload. See also Payload<br />

Package Dependency Checks on page 96.<br />

To Remove a Payload Package<br />

1. Select the Imaging tab.<br />

2. Select a payload from the navigation tree.<br />

3. Select Edit from the Edit menu or right-click on the payload in the navigation tree and select Edit.<br />

4. Click the Packages subtab.<br />

5. From the package list, select one or more package groups, or expand a group to view individual packages. To<br />

select multiple items, use the Shift or Ctrl keys.<br />

Tip<br />

To view individual packages instead of package groups, change the View Packages By option.<br />

6. Click Delete.<br />



7. <strong>Clusterworx</strong> asks you to confirm your action.<br />

8. Click OK to remove the package(s) or click Cancel to abort this action.<br />

9. Click Apply to save changes. Click Revert or Close to abort this action.<br />

Note<br />


Before removing the package(s), <strong>Clusterworx</strong> performs a package dependency check. See Payload Package<br />

Dependency Checks on page 96 for information about dependency errors.<br />




Payload Package Dependency Checks<br />

Before performing package addition, update, or removal, <strong>Clusterworx</strong> performs a package dependency<br />

check. Any failures identified through the dependency check are displayed in the Resolve Dependency<br />

Failures dialog. From this dialog, you can choose a course of action to address the failure(s).<br />

ADDING A PACKAGE<br />

When adding a package, you may correct dependency failures by selecting one of the following options:<br />

Add packages needed to resolve dependency failures.<br />

Ignore packages that have dependency failures.<br />

Force package installation, ignoring dependency failures.<br />
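These choices roughly parallel rpm’s own installation options, so you can preview the same failures from the CLI before adding a package. A hedged sketch of the equivalent rpm commands; mypackage.rpm is a placeholder name:<br />

```shell
# Dry run: report dependency failures without installing anything.
#   rpm -Uvh --test mypackage.rpm

# Install despite failures (parallels "Force package installation,
# ignoring dependency failures"):
#   rpm -Uvh --nodeps mypackage.rpm
```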



REMOVING A PACKAGE<br />


When removing a package, you may correct dependency failures by selecting one of the following options:<br />

Ignore packages that have dependency failures.<br />

Force package deletion, ignoring dependency failures.<br />




Payload Authentication Management<br />

The Payload Authentication subtab manages the authentication settings for the payload. This tab allows you<br />

to enable, disable, or modify the settings for supported remote authentication schemes. <strong>Clusterworx</strong> supports<br />

the following remote authentication schemes:<br />

Network Information Service (NIS)<br />

Lightweight Directory Access Protocol (LDAP)<br />

Kerberos (a network authentication protocol)<br />

To Configure NIS Authentication<br />

1. Select the Imaging tab.<br />

2. Select a payload from the navigation tree.<br />

3. Select Edit from the Edit menu or right-click on the payload in the navigation tree and select Edit.<br />

4. Select the Authentication subtab.<br />

5. Select the NIS tab.<br />

6. Click the Use NIS option.<br />

7. Enter the NIS domain.<br />

8. (Optional) Enter the NIS Server.<br />

9. Click Apply to save changes. Click Revert or Close to abort this action.<br />



To Configure LDAP Authentication<br />

1. Select the Imaging tab.<br />

2. Select a payload from the navigation tree.<br />


3. Select Edit from the Edit menu or right-click on the payload in the navigation tree and select Edit.<br />

4. Select the Authentication subtab.<br />

5. Select the LDAP tab.<br />

6. Click the Use LDAP option.<br />

7. Enter the LDAP Base DN (Distinguished Name).<br />

8. Enter the LDAP Server.<br />

9. (Optional) Click Use SSL connections if you want to connect to the LDAP server via SSL.<br />

10. Click Apply to save changes. Click Revert or Close to abort this action.<br />




To Configure Kerberos Authentication<br />

1. Select the Imaging tab.<br />

2. Select a payload from the navigation tree.<br />

3. Select Edit from the Edit menu or right-click on the payload in the navigation tree and select Edit.<br />

4. Select the Authentication subtab.<br />

5. Select the Kerberos tab.<br />

6. Click the Use Kerberos option.<br />

7. Enter the Kerberos Realm.<br />

8. Enter the Kerberos KDC (Key Distribution Center).<br />

9. Enter the Kerberos Server.<br />

10. Click Apply to save changes. Click Revert or Close to abort this action.<br />



Payload Local User and Group Account Management<br />


The Local Accounts payload management tab provides a means for managing local accounts in payloads.<br />

This tab allows you to:<br />

Add a local user or group account known to <strong>Clusterworx</strong> to the payload (see User Administration on<br />

page 49).<br />

Delete a local user or group account from the payload.<br />

Note<br />

Local account management does not support moving local accounts from the host.<br />

Local user and group accounts that are reserved for system use do not display and cannot be added or<br />

deleted. The root account is added automatically. <strong>Clusterworx</strong> handles group dependencies.<br />

Tip<br />

Software that requires you to add groups (e.g., Myrinet Group) can be managed through user accounts.<br />

Add a Local User Account to a Payload<br />

1. Select the Imaging tab.<br />

2. Select a payload from the navigation tree.<br />

3. Select Edit from the Edit menu or right-click on the payload in the navigation tree and select Edit.<br />

4. Select the Local Accounts subtab.<br />

5. Select the Users tab.<br />




6. Click Add. The Add User dialog appears.<br />

7. Select the user(s) to add to the payload (use the Shift or Ctrl keys to select multiple users).<br />

8. Click OK to add the user(s) or click Cancel to abort this action.<br />

9. Click Apply to complete the process. Click Revert or Close to abort this action.<br />



Delete a Local User Account from a Payload<br />

1. Select the Imaging tab.<br />

2. Select a payload in the navigation tree.<br />


3. Select Edit from the Edit menu or right-click on the payload in the navigation tree and select Edit.<br />

4. Select the Local Accounts subtab.<br />

5. Select the Users tab.<br />

6. Select the user(s) to remove from the payload (use the Shift or Ctrl keys to select multiple users).<br />

7. Click Delete to remove the user(s).<br />

8. Click Apply to complete the process. Click Revert or Close to abort this action.<br />




Add a Group User Account to a Payload<br />

1. Select the Imaging tab.<br />

2. Select a payload from the navigation tree.<br />

3. Select Edit from the Edit menu or right-click on the payload in the navigation tree and select Edit.<br />

4. Select the Local Accounts subtab.<br />

5. Select the Groups tab.<br />

6. Click Add. The Add Group dialog appears.<br />

7. Select the group(s) to add to the payload (use the Shift or Ctrl keys to select multiple groups).<br />

8. Click OK to add the group(s) or click Cancel to abort this action.<br />

9. Click Apply to complete the process. Click Revert or Close to abort this action.<br />



Delete a Group User Account from a Payload<br />

1. Select the Imaging tab.<br />

2. Select a payload from the navigation tree.<br />


3. Select Edit from the Edit menu or right-click on the payload in the navigation tree and select Edit.<br />

4. Select the Local Accounts subtab.<br />

5. Select the Groups tab.<br />

6. Select the group(s) to remove from the payload (use the Shift or Ctrl keys to select multiple groups).<br />

7. Click Delete to remove the group(s).<br />

8. Click Apply to complete the process. Click Revert or Close to abort this action.<br />




Payload File Configuration<br />

The Configuration tab allows you to set up configuration options when creating or editing a payload,<br />

including DHCP Network, Network, Serial Console, Virtual Console, and more. When you click Apply, the<br />

scripts that correspond to the selected item(s) run on the payload. It is important to note that the selected<br />

script(s) run at the time you click Apply—this list is not an indication of scripts that have run at some point on<br />

the system.<br />

Note<br />

The list of options available is based on the distribution selected. The options displayed in the example<br />

below are for a SuSE-based distribution (SuSE Linux Enterprise Server 9).<br />

To Configure a Payload<br />

1. Select the Imaging tab.<br />

2. Select a payload from the navigation tree.<br />

3. Select Edit from the Edit menu or right-click on the payload in the navigation tree and select Edit.<br />

4. Select the Configuration subtab.<br />

5. Click the check box by each option you want to configure.<br />

6. Click Apply to complete the configuration. Click Revert or Close to abort this action.<br />



Edit a Payload File with the Text Editor<br />


<strong>Clusterworx</strong> allows you to edit payload files with a text editor. Files edited in this manner are treated as plain<br />

text and only basic editing tools such as insert, cut, and paste are available.<br />

To Edit a Payload File with the Text Editor<br />

1. Select the Imaging Tab.<br />

2. Select and load a payload from the navigation tree.<br />

3. Select Edit file from the Edit menu. The Remote File Chooser appears.<br />

4. Select the file to edit and click Open. The text editor window appears.<br />

5. Edit the file as necessary, then click OK to save changes or click Cancel to abort this action.<br />

6. Click Apply to complete the configuration. Click Revert or Close to abort this action.<br />




Add and Update Payload Files or Directories<br />

Adding and updating payload files allows you to select a file or directory from the local file system and copy it<br />

into the payload.<br />

To Add or Update a Payload File or Directory<br />

1. Select the Imaging Tab.<br />

2. Select a payload from the navigation tree.<br />

3. Select Edit from the Edit menu or right-click on the payload and select Edit.<br />

4. Select Add File from the Edit menu. The Add File or Directory dialog appears.<br />

5. Enter the source for the new file in the Source field or click Browse to locate the source.<br />

6. Enter the destination for the new file in the Destination field or click Browse to select the destination.<br />

Note<br />

The destination specified is relative to the payload root.<br />

7. Click OK to save changes or click Cancel to abort this action.<br />

8. Click Apply to complete the process. Click Revert or Close to abort this action.<br />

Tip<br />

If a working copy of a payload is available, you can enter the payload directory and make changes to the<br />

payload manually from the CLI. Working copies of payloads are stored at:<br />

/opt/cwx/imaging//payloads/<br />

From this directory, run the chroot command to make the payload tree your root (/) directory. After making changes,<br />

check the payload into VCS.<br />
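As a minimal sketch of that workflow (run as root; the working-copy path segments are abbreviated here, as in the path above):<br />

```shell
# cd into the payload's working copy under /opt/cwx/imaging/
#   cd /opt/cwx/imaging/<working copy path>    # placeholder path
# chroot . /bin/sh        # the payload tree becomes / for this shell
#   ...edit files, inspect packages, etc....
# exit                    # leave the chroot, then check the payload into VCS
```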



Delete Payload Files<br />

Deleting a payload file allows you to exclude a specific file(s) from a payload.<br />

To Delete a File from a Payload<br />

1. Select the Imaging Tab.<br />

2. Select a payload from the navigation tree.<br />

3. Select Delete File(s) from the Edit menu. The Remote File Chooser appears.<br />


4. Select the file(s) you want to remove, then click Delete to remove the files or Cancel to abort this action.<br />

5. Click Apply to complete the process. Click Revert or Close to abort this action.<br />

Delete a Payload<br />

To Delete a Working Copy of a Payload<br />

1. Select the Imaging tab.<br />

2. Select a payload from the navigation tree.<br />

3. Select Delete from the File menu or right-click on the payload in the navigation tree and select Delete.<br />

Tip<br />

Once you check the payload into VCS, you may remove the directory from within your working user<br />

directory (e.g., to save space):<br />

/opt/cwx/imaging///<br />

To verify that your changes were checked in, use the VCS status option. See Version Control <strong>System</strong><br />

(VCS) on page 144 for details on using the version control system.<br />




Install <strong>Clusterworx</strong> into the Payload<br />

When working with payloads, <strong>Clusterworx</strong> requires that each payload contain some basic <strong>Clusterworx</strong><br />

services. These services allow <strong>Clusterworx</strong> to control various parts of the system, including instrumentation<br />

services.<br />

Warning!<br />

Installing a <strong>Clusterworx</strong> payload in a RedHat Enterprise Linux 3 update 3 or 4 environment produces an<br />

out-of-memory error. To correct this issue, you must install the prefinalize script contained in the /misc<br />

directory. See To Install the <strong>Clusterworx</strong> Prefinalize Script on page 143.<br />

To Install <strong>Clusterworx</strong> into the Payload<br />

1. Access the payload’s root directory:<br />

cd /opt/cwx/imaging//payloads/<br />

2. Run the install script from the <strong>Clusterworx</strong> Installation CD (e.g., /mnt/cdrom/install.sh).<br />

3. Select the Payload option.<br />

4. Enter the name of the <strong>Clusterworx</strong> Server.<br />

Note<br />

The <strong>Clusterworx</strong> Server must be a valid host name that is resolvable through name resolution (e.g., DNS,<br />

/etc/hosts).<br />



5. Enter the installation directory (e.g., /opt/cwx).<br />

6. Click Next.<br />

Tip<br />


If the hosts will communicate with one another, you may prefer to use host names (e.g., n1, n2, n3) rather<br />

than IP addresses (e.g., 192.168.0.1, 192.168.0.2, 192.168.0.3). Computers use “name resolution” to convert<br />

between numbers and names—most commonly through a local /etc/hosts file or with a Domain Name<br />

Service (DNS).<br />

For typical <strong>Clusterworx</strong> users, the local /etc/hosts file already exists on the Master Host. To make this file<br />

available to all hosts, copy the file into the payload (or simply edit the file in the payload). If you need to<br />

create this file, use the dbx command to create an /etc/hosts formatted list of the hosts in your cluster. To<br />

save this list, redirect the output to a file:<br />

dbx -f:hosts<br />
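An /etc/hosts-formatted list is simply one “address host-name” pair per line. The following hedged sketch prints such a list for hosts n1, n2, n3 on the 192.168.0.x addresses used above; make_hosts is an illustrative helper, not a Clusterworx or dbx feature:<br />

```shell
# Print an /etc/hosts-formatted line (IP, tab, name) for each host
# name given, numbering addresses 192.168.0.1, 192.168.0.2, ...
make_hosts() {
    n=1
    for name in "$@"; do
        printf '192.168.0.%d\t%s\n' "$n" "$name"
        n=$((n + 1))
    done
}

# Save the list for copying into the payload:
#   make_hosts n1 n2 n3 > hosts.list
# Or, using dbx itself:
#   dbx -f:hosts > hosts.list
```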




Kernel Management<br />

Kernels may be customized for particular applications and used on specific hosts to achieve optimal system<br />

performance. <strong>Clusterworx</strong> uses VCS to help you manage kernels used on your system.<br />

Create a Kernel<br />

The following sections review the steps necessary to create a kernel for use in provisioning your cluster.<br />

To Create a Kernel Using an Existing Binary<br />

Note<br />

For information on building a new kernel from source, see To Build a New Kernel from Source on<br />

page 114.<br />

1. Select the Imaging tab.<br />

2. Select New Kernel from the File menu or right-click on the Kernels entry in the navigation tree and<br />

select New Kernel. A new kernel pane appears.<br />

3. Enter the name of the Kernel.<br />

4. (Optional) Enter a description of the kernel.<br />

5. Select the hardware architecture.<br />



6. Enter the location of the kernel binary or click Browse to open the Remote File Chooser.<br />


7. Specify the location of the modules directory (e.g., /lib/modules) or click Browse to open the Remote File<br />

Chooser.<br />

8. Click Apply to create the kernel. Click Revert or Close to abort this action.<br />

9. (Optional) Click Check In to import the kernel into VCS.<br />

10. Click Close.<br />




To Create a Copy of an Existing Kernel<br />

1. Select the Imaging tab.<br />

2. Select a kernel from the navigation tree, then right-click on the kernel and select Copy.<br />

Tip<br />

You may also open a kernel for editing, then click the Copy button.<br />

3. <strong>Clusterworx</strong> prompts you for the name of the new kernel.<br />

4. Enter the name of the new kernel and click OK. Click Cancel to abort this action.<br />

To Build a New Kernel from Source<br />

If you want to use a stock vendor kernel already loaded on your system, see To Create a Kernel Using an<br />

Existing Binary on page 112. Otherwise, use the following procedure to build a new kernel from source:<br />

Warning!<br />

Please consult Linux Networx before upgrading your Linux distribution or kernel. Upgrading to a<br />

distribution or kernel not approved for use on your system may render <strong>Clusterworx</strong> inoperable or<br />

otherwise impair system functionality. Technical Support is not provided for unapproved system<br />

configurations.<br />

1. Obtain and install the kernel source RPM for your distribution from the distribution CD-ROMs or from your<br />

vendor. This places the kernel source code under /usr/src, typically in a directory named<br />

linux-2..- (if building a Red Hat Enterprise Linux 4 kernel, <strong>Clusterworx</strong> places<br />

the source code into /usr/src/kernels/2..-).<br />

Tip<br />

Because you don’t need the kernel source RPM in your payload, install the RPM on the host.<br />

2. If present, review the README file inside the kernel source for instructions on how to build and<br />

configure the kernel.<br />

Note<br />

It is highly recommended that you use, or at least base your configuration on, one of the vendor’s standard<br />

kernel configurations.<br />




3. Typically, a standard configuration file is installed in the /boot directory, usually as<br />

config-2..-. You may also use a stock configuration file installed as .config in<br />

the kernel source directory or available in a sub-directory (typically /configs) of the kernel source<br />

directory.<br />

Tip<br />

To use a stock configuration, copy it to the kernel source directory and run make oldconfig.<br />

4. Build the kernel and its modules using the make bzImage && make modules command. If your<br />

distribution uses the Linux 2.4 kernel, use make dep && make bzImage && make modules but DO NOT<br />

install the kernel.<br />

5. Open <strong>Clusterworx</strong>.<br />

6. Select the Imaging tab.<br />

7. Select Source Kernel from the File menu.<br />

8. Enter the name of the Kernel.<br />

9. (Optional) Enter a description of the kernel.<br />

10. Select the hardware architecture.<br />


11. Enter the location of the kernel source (i.e., where you unpacked the kernel source) or click Browse to<br />

open the Remote File Chooser. By default, kernel source files are located in /usr/src.<br />

12. (Optional) Enter the binary path of the kernel (e.g., arch/i386/boot/bzImage).<br />

13. Click Apply to complete the process. Click Revert or Close to abort this action.<br />

14. Click Check In to import the new kernel into VCS.<br />



Edit a Kernel<br />

To Edit a Kernel<br />

1. Select the Imaging tab.<br />

2. Select a kernel from the navigation tree.<br />

3. Select Edit from the Edit menu or right-click on the kernel in the navigation tree and select Edit.<br />

4. (Optional) Edit the kernel’s description in the Description field.<br />

5. (Optional) Edit the Parameters field.<br />


6. (Optional) Click Update to update a kernel that has been recompiled for some reason (e.g., a change in<br />

kernel configuration). <strong>Clusterworx</strong> updates the kernel based on the Source Directory and Binary Path<br />

used when you created the kernel. See To Create a Kernel Using an Existing Binary on page 112.<br />

7. (Optional) Click Properties to view the “.config” and “System.map” files for the kernel (if they existed<br />

when you imported the kernel).<br />

8. (Optional) Click Add to include new modules in this kernel. You may select modules individually (files<br />

ending in *.o) or you can add a directory and allow <strong>Clusterworx</strong> to automatically select all modules and<br />

directories recursively. See also Modules Subtab on page 118.<br />

9. (Optional) Select modules to remove from the kernel and click Delete.<br />

10. Click Apply to complete the process. Click Revert or Close to abort this action.<br />

11. Click Check In to commit changes to the kernel into VCS.<br />


MODULES SUBTAB<br />

Many provisioning systems use a basic kernel to boot and provision the host, then reboot with an optimized<br />

kernel that will run on the host. <strong>Clusterworx</strong> requires only a single kernel to boot and run; however, you<br />

must compile any additional functionality into the kernel (i.e., monolithic) or add loadable kernel modules to<br />

the kernel (i.e., modular). <strong>Clusterworx</strong> loads the modules during the provisioning process.<br />

Note<br />

If you encounter problems when provisioning hosts on your cluster, check that you compiled your<br />

kernel correctly. If you compiled a modular kernel, you must include the necessary Ethernet and file system<br />

modules before the host can provision properly. Use the Icebox serial console to watch the host boot.<br />

Tip<br />

In some cases, it may be necessary to install kernel modules on a host during the provisioning process,<br />

but not load them at boot time. Because an image ties a kernel and payload together, modules can be<br />

copied to the host by adding them to an image rather than adding them to a payload.<br />

To add modules to an image, run mkdir -p ramdisk/lib/modules from the images directory. For example,<br />

if you were running as root and your image name were ComputeHost:<br />

cd /opt/cwx/imaging/root/images/ComputeHost<br />

mkdir -p ramdisk/lib/modules//kernel/<br />

mkdir -p ramdisk/lib/modules//kernel/net/e1000<br />

Then copy the modules you want to an appropriate subdirectory of the modules directory:<br />

cp /usr/src/linux/drivers/net/e1000/e1000.o ramdisk/lib/modules//kernel/net/e1000/<br />

You may wish to look at your local /lib/modules directory if you have questions about the directory<br />

structure. During the boot process, the kernel automatically loads the modules that were selected in the<br />

kernel configuration screen. The additional modules will be copied to the host during the finalize stage.<br />

This method keeps the payload independent from the kernel and allows you to load the modules after the<br />

host boots.<br />
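The staging commands above can be exercised safely in a scratch directory first. In the sketch below, KVER and the module file are illustrative placeholders (the guide’s own paths elide the kernel version), and the mktemp directory stands in for your real image directory such as /opt/cwx/imaging/root/images/ComputeHost.<br />

```shell
# Safe sketch of the module-staging layout; substitute real values before use.
KVER="2.6.9"                         # placeholder kernel version
IMAGE="$(mktemp -d)"                 # stands in for the real image directory
DEST="$IMAGE/ramdisk/lib/modules/$KVER/kernel/net/e1000"
mkdir -p "$DEST"
touch "$DEST/e1000.o"                # in practice: cp the built e1000.o here
find "$IMAGE/ramdisk" -name '*.o'    # verify the module landed where expected
```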



Delete a Kernel from VCS<br />

To Delete a Working Copy of a Kernel<br />

1. Select the Imaging tab.<br />

2. Select a kernel from the navigation tree.<br />


3. Select Delete from the File menu or right-click on the kernel in the navigation tree and select Delete.<br />

Note<br />

Before you delete the working copy of your kernel, use the VCS status option to verify that the kernel is<br />

checked in. See Version Control <strong>System</strong> (VCS) on page 144 for details on using version control.<br />

Tip<br />

Once you check the kernel into VCS, you may delete the working copy of the kernel from your working<br />

directory (e.g., to save space).<br />

/opt/cwx/imaging///<br />




Image Management<br />

Images contain exactly one payload and one kernel, and allow you to implement tailored configurations on<br />

various hosts throughout the cluster.<br />

Warning!<br />

Please consult Linux Networx before upgrading your Linux distribution or kernel. Upgrading to a<br />

distribution or kernel not approved for use on your system may render <strong>Clusterworx</strong> inoperable or<br />

otherwise impair system functionality. Technical Support is not provided for unapproved system<br />

configurations.<br />

Create an Image<br />

To Create an Image<br />

1. Select the Imaging tab.<br />

2. Select New Image from the File menu or right-click on the Images entry in the navigation tree and select<br />

New Image. A New Image pane appears.<br />

3. Enter the name of the new image in the Name field.<br />

4. (Optional) Enter a description of the new image in the Description field.<br />

5. Define a Kernel by clicking Browse. To install additional kernel modules that do not load at boot time,<br />

see Modules Subtab on page 118.<br />



6. Define a Payload by clicking Browse.<br />

7. Define the partition scheme used for the compute hosts—the partition scheme must include a root (/)<br />

partition. See To Create a Partition for an Image on page 124.<br />

Note<br />

Kernel support for selected file systems must be included in the selected kernel (or as modules).<br />


8. (Optional) Click the Advanced button to display the Advanced Options dialog. This dialog allows you to<br />

configure partitioning behavior and payload download settings (see Advanced Imaging Options on<br />

page 121).<br />

9. (Optional) Implement RAID. See Managing Partitions on page 124.<br />

10. (Optional) If you need to make modifications to the way hosts boot during the provisioning process,<br />

select the RAM Disk tab. See RAM Disk on page 138.<br />

11. Click Apply to complete the process. Click Revert or Close to abort this action.<br />

Advanced Imaging Options<br />

The Advanced Options dialog allows you to configure partitioning behavior and payload download settings.<br />

These settings are persistent, but may be overridden from the Advanced Provisioning Options dialog. See<br />

Advanced Provisioning Options on page 156.<br />

PARTITIONING BEHAVIOR<br />

This option allows you to configure the partition settings used when provisioning a host. You may<br />

automatically partition a host if the partitioning scheme changes, always re-create all partitions (including<br />

those that are exempt from being overwritten), or never partition the host. See Managing Partitions<br />

on page 124.<br />


PAYLOAD DOWNLOAD<br />

The payload options allow you to automatically download a payload if a newer version is available (or if the<br />

current payload is not identical to that contained in the image), always download the payload, or choose to<br />

never download a payload.<br />

boot.profile<br />

<strong>Clusterworx</strong> generates the boot.profile file each time you save an image, overwriting the previous file at<br />

/etc/boot.profile. The boot profile contains information about the image and is required for the boot process<br />

to function properly. You may configure the following temporary parameters:<br />

dmesg.level The verbosity level (1-8) of the kernel—1 (the default) is the least verbose and 8 is<br />

the most.<br />

partition Configure the hard drive re-partitioning status (Automatic, Always, Never). By<br />

default, Automatic.<br />

partition.once Override the current drive re-partitioning status (Default, On, Off). By default,<br />

Default.<br />

image Configure the image download behavior (Automatic, Always, Never). By default,<br />

Automatic. Always downloads the image even if it is up-to-date.<br />

image.once Override the current image download behavior (Default, On, Off). By default,<br />

Default. To view the current download behavior, see Advanced Imaging Options<br />

on page 121.<br />

image.path Specifies where to store the downloaded image. By default, /mnt.<br />

To change the configuration of one of these parameters, add the parameter (e.g., dmesg.level: 7) to the<br />

boot.profile and provision using that image. You may also configure most of these values from the GUI. See<br />

Selecting an Image on page 154.<br />
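For example, a boot.profile that raises kernel verbosity and forces a one-time repartition could contain lines like the following (parameter names are from the table above; the values are illustrative):<br />

```text
dmesg.level: 7
partition.once: On
image: Automatic
image.path: /mnt
```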

Note<br />

Changes made to image settings remain in effect until the next time you save the image.<br />

To Create a Copy of an Existing Image<br />

1. Select the Imaging tab.<br />

2. Select an image from the navigation tree, then right-click on the image and select Copy.<br />

Tip<br />

You may also open an image for editing, then click the Copy button.<br />

3. <strong>Clusterworx</strong> prompts you for the name of the new image.<br />

4. Enter the name of the new image and click OK. Click Cancel to abort this action.<br />



Delete an Image from VCS<br />

To Delete a Working Copy of an Image<br />

1. Select the Imaging tab.<br />


2. Select the image you want to delete from the navigation tree. To select multiple images, use the Shift or<br />

Ctrl keys.<br />

3. Select Delete from the File menu or right-click on the image(s) in the navigation tree and select Delete.<br />

Tip<br />

Once you check the image into VCS, you may remove the directory from within your working user<br />

directory (e.g., to save space).<br />

/opt/cwx/imaging///<br />

To verify that your changes were checked in, use the VCS status option. See Version Control <strong>System</strong><br />

(VCS) on page 144 for details on using version control.<br />


Managing Partitions<br />

To Create a Partition for an Image<br />

1. Select the Imaging tab.<br />

2. Select an image from the navigation tree.<br />

3. Select Edit from the Edit menu or right-click on the image in the navigation tree and select Edit.<br />

4. Select the Partitions subtab.<br />



5. Click Add. The New Partition dialog appears.<br />


6. Select a file system type from the Filesystem pull-down menu. To create a diskless host, see Diskless<br />

Hosts on page 135.<br />

7. Enter the device on which to add the partition or select a device from the pull-down menu. Supported<br />

devices include the following, but the most common is /dev/hda because hosts typically have only one<br />

disk and use IDE:<br />

/dev/hda—Primary IDE Disk<br />

/dev/hdb—Secondary IDE Disk<br />

/dev/sda—Primary SCSI Disk<br />

/dev/sdb—Secondary SCSI Disk<br />

8. Enter a Mount Point or select one from the pull-down menu.<br />

9. (Optional) Enter the fstab options. The /etc/fstab file controls where directories are mounted and,<br />

because <strong>Clusterworx</strong> writes and manages the fstab on the hosts, any changes made on the hosts are<br />

overwritten during provisioning.<br />

10. (Optional) Enter the mkfs options to use when creating the file system (e.g., file size limits, symlinks,<br />

journaling). For example, to change the default block size for ext3 to 4096, enter -b 4096 in the mkfs<br />

options field.<br />

11. (Optional) If creating an NFS mount, enter the NFS host.<br />

12. (Optional) If creating an NFS mount, enter the NFS share.<br />

13. (Optional) Uncheck the Format option to make the partition exempt from being overwritten or formatted<br />

when you provision the host. This may be overridden by the Force Partitioning option or from the<br />

boot.profile (see Selecting an Image on page 154 and boot.profile on page 122).<br />

Note<br />

After partitioning the hard disk(s) on a host for the first time, you can make a partition on the disk<br />

exempt from being overwritten or formatted when you provision the host. However, deciding not to<br />

format the partition may have an adverse effect on future payloads—some files may remain from<br />

previous payloads. This option is not allowed if the partition sizes change when you provision the host.<br />

14. Select the partition size:<br />

Fixed size allows you to define the size of the partition (in MBs).<br />

Fill to end of disk allows you to create a partition that uses any space that remains after defining partitions<br />

with fixed sizes.<br />

Tip<br />

It is wise to allocate slightly more space than is required on some partitions. To estimate the amount of<br />

space needed by a partition, use the du -hc command.<br />
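For instance, to total the space a directory tree will occupy before sizing its partition:<br />

```shell
# du -c appends a grand-total line; -h prints human-readable sizes.
du -hc /tmp | tail -n 1
```

Run this against the directory the partition will hold (for example, the payload’s /usr tree), then size the partition slightly larger than the reported total.<br />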

15. Click Apply to save changes or click Cancel to abort this action.<br />

16. (Optional) Click Check In to import the image into VCS.<br />

17. Click Apply to complete the process. Click Revert or Close to abort this action.<br />

Note<br />

<strong>Clusterworx</strong> generates the file, boot.profile, each time you save an image. For a description of the<br />

information contained in this file, see boot.profile on page 122.<br />



RAID Partitions<br />

To Create a RAID Partition<br />

When adding a RAID partition, the host typically requires two disks and at least two previously created<br />

software RAID partitions (one per disk).<br />

1. Select the Imaging tab.<br />

2. Select an image from the navigation tree.<br />

3. Select Edit from the Edit menu or right-click on the image in the navigation tree and select Edit.<br />

4. Select the Partitions subtab.<br />


5. Click Add to create the appropriate number of software RAID partitions for the RAID you are creating.<br />

See To Create a Partition for an Image on page 124.<br />

Note<br />

The RAID button is disabled until you create at least two RAID partitions.<br />


6. Click the RAID button to assign the partitions a file system, mount point, and RAID level. The Add RAID<br />

dialog appears.<br />

7. Select a file system type from the Filesystem pull-down menu.<br />

8. Enter a Mount point or select one from the pull-down menu.<br />

9. Select a RAID level from the RAID Level pull-down menu. This level affects the size of the resulting RAID<br />

and the number of RAID partitions required to create it (e.g., RAID0 and RAID1 require 2 RAID<br />

partitions, RAID5 requires 3 RAID partitions).<br />

10. (Optional) Enter the fstab options. The /etc/fstab file controls where directories are mounted and,<br />

because <strong>Clusterworx</strong> writes and manages the fstab on the hosts, any changes made on the hosts are<br />

overwritten during provisioning.<br />

11. (Optional) Enter the mkfs options to use when creating the file system (e.g., file size limits, symlinks,<br />

journaling). For example, to change the default block size for ext3 to 4096, enter -b 4096 in the mkfs field.<br />

12. From the RAID Members list, select the currently unused RAID partitions to include in this RAID.<br />

13. Click OK to save changes or click Cancel to abort this action.<br />

14. Click Apply to complete the process. Click Revert or Close to abort this action.<br />



Edit a Partition<br />

To Edit a Partition on an Image<br />

1. Select the Imaging tab.<br />

2. Select an image from the navigation tree.<br />

3. Select Edit from the Edit menu or right-click on the image in the navigation tree and select Edit.<br />

4. Select the Partitions subtab.<br />

5. Select the partition you want to edit from the list.<br />




6. Click Edit. The Edit Partition dialog appears.<br />

7. Make any necessary changes to the partition, then click Apply to accept the changes. Click Cancel to<br />

abort this action.<br />



Delete a Partition<br />

To Delete a Partition from an Image<br />

1. Select the Imaging tab.<br />

2. Select an image from the navigation tree.<br />

3. Select Edit from the Edit menu or right-click on the image in the navigation tree and select Edit.<br />

4. Select the Partitions subtab.<br />

5. Select the partition you want to delete from the list. To select multiple partitions, use the Shift or Ctrl<br />

keys.<br />

6. Click Delete.<br />




User-Defined File <strong>System</strong>s<br />

Establishing a user-defined file system allows you to create a raw partition that you may format with a file<br />

system not supported by <strong>Clusterworx</strong>.<br />

To Create a Partition with a User-defined File <strong>System</strong><br />

1. Select the Imaging tab.<br />

2. Select an image from the navigation tree.<br />

3. Select Edit from the Edit menu or right-click on the image in the navigation tree and select Edit.<br />

4. Select the Partitions subtab.<br />



5. Click Add. The New Partition dialog appears.<br />

6. Select User Defined from the Filesystem pull-down menu.<br />


7. Enter the device on which to add the partition or select a device from the pull-down menu. Supported<br />

devices include the following, but the most common is /dev/hda because hosts typically have only one<br />

disk and use IDE:<br />

/dev/hda—Primary IDE Disk<br />

/dev/hdb—Secondary IDE Disk<br />

/dev/sda—Primary SCSI Disk<br />

/dev/sdb—Secondary SCSI Disk<br />

8. Create a plug-in to write the line for /etc/fstab during the boot process. See Plug-ins for the Boot Process<br />

on page 140.<br />

9. Select the partition size:<br />

Fixed partition size allows you to define the size of the partition (in MBs).<br />

Fill to end of disk allows you to create a partition that uses any space that remains after defining partitions<br />

with fixed sizes.<br />

Tip<br />

When working with diskless hosts, it is wise to allocate slightly more memory than is required on some<br />

partitions. To estimate the amount of memory needed by a partition, use the du -hc command.<br />

It is important to note that memory allocated to a partition is not permanently consumed. For example,<br />

consider programs that need to write temporary files in a /tmp partition. Although you may configure the<br />

partition to use a maximum of 50 MB of memory, the actual amount used depends on the contents of the<br />

partition. If the /tmp partition is empty, the amount of memory used is 0 MB.<br />

10. Click Apply to save changes or click Cancel to abort this action.<br />

11. Click Check In to import the image into VCS.<br />


12. Click Apply to complete the process. Click Revert or Close to abort this action.<br />

Note<br />

<strong>Clusterworx</strong> generates the file, boot.profile, each time you save an image. See boot.profile on page 122 for<br />

a description of the information contained in this file.<br />



Diskless Hosts<br />


<strong>Clusterworx</strong> provides support for diskless hosts. For optimal performance, <strong>Clusterworx</strong> implements diskless<br />

hosts by installing the operating system into the host’s physical memory, generally referred to as RAMfs or<br />

TmpFS. Because the OS is stored in memory, it is recommended that you use a minimal Linux installation to<br />

prevent consuming excess memory. An optimized Linux installation is typically around 100-150MB, but may<br />

be as small as 30MB depending on which libraries are installed. <strong>Clusterworx</strong> also supports local scratch or<br />

swap space on the hosts.<br />

Note<br />

Potentially large directories like /home should never be stored in RAM. Rather, they should be shared<br />

through a global storage solution.<br />

Warning!<br />

When using diskless hosts, the file system is stored in memory. Changes made to the host’s file system<br />

will be lost when the host reboots. If changes are required, make them in the payload first.<br />

To Configure a Diskless Host<br />

1. Select the Imaging tab.<br />

2. Select an image from the navigation tree.<br />

3. Select Edit from the Edit menu or right-click on the image in the navigation tree and select Edit.<br />

4. Select the Partitions subtab.<br />

5. Click Add. The New Partition dialog appears.<br />

6. Select the tmpfs or nfs file system type from the Filesystem pull-down menu.<br />


Note<br />

Although diskless hosts may use either tmpfs or nfs partitions, they must use only one type. If you are<br />

converting or editing a diskless host, change all partitions to the same type.<br />

7. Enter the Mount Point or select one from the pull-down menu (diskless hosts use root “/” as the mount<br />

point).<br />

Tip<br />

In most Linux installations, the majority of the OS is stored in the /usr directory. To help conserve<br />

memory, you may elect to share the /usr directory via NFS or another global file system.<br />

8. (Optional) Enter the fstab options. The /etc/fstab file controls where directories are mounted.<br />

Note<br />

Because <strong>Clusterworx</strong> writes and manages the fstab on the hosts, any changes made on the hosts are<br />

overwritten during provisioning.<br />

9. Select the partition size:<br />

Fixed partition size allows you to define the size of the partition (in MBs).<br />

Fill to end of disk allows you to create a partition that uses any space that remains after defining partitions<br />

with fixed sizes.<br />

Tip<br />

It is wise to allocate slightly more memory than is required on some partitions. To estimate the amount of<br />

memory needed by a partition, use the du -hc command.<br />

It is important to note that memory allocated to a partition is not permanently consumed. For example,<br />

consider programs that need to write temporary files in a /tmp partition. Although you may configure the<br />

partition to use a maximum of 50 MB of memory, the actual amount used depends on the contents of the<br />

partition. If the /tmp partition is empty, the amount of memory used is 0 MB.<br />
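As a reading aid, the 50 MB /tmp example corresponds to an fstab entry like the one below. <strong>Clusterworx</strong> generates the actual fstab during provisioning, so this line is illustrative only:<br />

```text
tmpfs   /tmp   tmpfs   size=50m   0   0
```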

10. Click Apply to save changes or click Cancel to abort this action.<br />



11. Click Check In to import the image into VCS.<br />

12. Click Apply to complete the process. Click Revert or Close to abort this action.<br />

Note<br />


<strong>Clusterworx</strong> generates the file, boot.profile, each time you save an image. See boot.profile on page 122 for<br />

a description of the information contained in this file.<br />


RAM Disk<br />

The RAM Disk is a small disk image that is created and loaded with the utilities required to provision the<br />

host. When the host first powers on, it loads the kernel and mounts the RAM Disk as the root file system. In<br />

order for host provisioning to succeed, the RAM Disk must contain specific boot utilities. Under typical<br />

circumstances, you will not need to add boot utilities unless you are creating something such as a custom<br />

prefinalize script that needs utilities not included in standard Linux installations (e.g., modprobe).<br />

Note<br />

<strong>Clusterworx</strong> uses two “skeleton” RAM Disks—one for ia32 and another for both AMD-64 and EM64T.<br />

These skeleton disks are located in /opt/cwx/ramdisks and should never be modified manually. All<br />

changes must be performed through <strong>Clusterworx</strong> or in /opt/cwx/imaging//images//ramdisk.<br />

To Add Boot Utilities<br />

Adding boot utilities to the RAM Disk allows you to create such things as custom prefinalize scripts using<br />

utilities that are not included in standard Linux installations.<br />

1. Select the Imaging tab.<br />

2. Select an image from the navigation tree.<br />

3. Select Edit from the Edit menu or right-click on the image in the navigation tree and select Edit.<br />

4. Click the RAM Disk subtab. Default files from the skeleton RAM Disk are grayed out—any changes or<br />

updates appear in black.<br />



5. Click Add. The Add File to RAM Disk dialog appears.<br />

6. Enter the boot utility path in the Source field or click Browse to locate a utility.<br />

7. Specify the Destination location in which to install the boot utility in the RAM Disk file system.<br />

8. Click OK to install the boot utility or click Cancel to abort this action.<br />

9. (Optional) Select Add Debug Utilities to apply additional debugging utilities to the RAM Disk.<br />

10. Click Apply to complete the process. Click Revert or Close to abort this action.<br />

Note<br />


<strong>Clusterworx</strong> generates the file, boot.profile, each time you save an image. See boot.profile on page 122 for<br />

a description of the information contained in this file.<br />


Plug-ins for the Boot Process<br />

A host requires a boot process to initialize hardware, load drivers, and complete the necessary tasks to<br />

initiate a login prompt. The boot process is composed of five main stages and allows you to include additional<br />

plug-ins at each stage to expand system capabilities. During the boot process, the system moves from stage to<br />

stage installing any plug-ins specified. If you do not specify any plug-ins, the host will boot using the built-in<br />

boot process. The boot process is as follows:<br />

initialize Stage one creates writable directories and loads any kernel modules.<br />

identify Stage two uses DHCP to get the IP address and host name.<br />

partition Stage three creates partitions and file systems.<br />

image Stage four downloads and extracts the payload.<br />

finalize Stage five configures <strong>Clusterworx</strong> services to run with the host name retrieved from<br />

DHCP.<br />

Note<br />

All plug-ins must be added inside the RAM Disk under /plugins/.<br />

The five stages, and the plug-in hooks that run between them, execute in the following order:<br />

initialize → /plugins/postinitialize → /plugins/preidentify → identify →<br />

/plugins/postidentify → /plugins/prepartition → partition → /plugins/postpartition →<br />

/plugins/preimage → image → /plugins/postimage → /plugins/prefinalize → finalize<br />

Warning!<br />

When working with prefinalize scripts in a Red Hat Enterprise Linux 3 update 3 or 4 environment, special<br />

considerations apply. To ensure that <strong>Clusterworx</strong> payload installations work properly, you may need to<br />

merge your script with the prefinalize script contained in the /misc directory. See To Install the<br />

<strong>Clusterworx</strong> Prefinalize Script on page 143.<br />

Although you can add or override functionality during the pre or post stages of the boot process,<br />

overriding stages other than pre or post may cause the boot process to fail.<br />


To Add a Plug-in<br />

The following example depicts how to run a script during the boot process.<br />


1. Write a shell or Perl script to run during the boot process. For example, to run a script immediately after<br />

partitioning a drive, name the script postpartition and add it to the plugins directory in the RAM Disk<br />

(i.e., /plugins/).<br />
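A minimal postpartition plug-in might look like the following; the log path and message are purely illustrative:<br />

```shell
#!/bin/sh
# Hypothetical postpartition plug-in: record that partitioning completed.
# Install as 'postpartition' under /plugins/ in the image's RAM Disk.
echo "partitioning completed at $(date)" > /tmp/postpartition.log
```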

Note<br />

You must add all necessary utilities for your plug-in script to the RAM Disk. For example, if you use a Perl<br />

script as a plug-in, you must add the Perl binary and all necessary shared libraries and modules to the<br />

RAM Disk. The shared libraries for a utility may be determined using the ldd(1) command. Please note<br />

that adding these items significantly increases the size of the RAM Disk. See To Add Boot Utilities on<br />

page 138.<br />

2. Select the Imaging tab.<br />

3. Select an image from the navigation tree.<br />

4. Select Edit from the Edit menu or right-click on the image in the navigation tree and select Edit.<br />

5. Click the RAM Disk subtab.<br />

6. Click Add. The Add File to RAM Disk dialog appears.<br />


7. Enter the boot utility path in the Source field or click Browse to locate a plug-in.<br />

8. Specify the install location in the Destination field.<br />

Note<br />

All scripts must be installed in the /plugins/ directory. However, you can overwrite other utilities.<br />

9. Click OK to install the utility or click Cancel to abort this action.<br />

10. (Optional) Select Add Debug Utilities to apply additional debugging utilities to the RAM Disk.<br />

11. Click Apply to complete the process. Click Revert or Close to abort this action.<br />

Note<br />

<strong>Clusterworx</strong> generates the file, boot.profile, each time you save an image. See boot.profile on page 122 for<br />

a description of the information contained in this file.<br />
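To make the mechanics concrete, the following is a minimal sketch of what a /plugins/postpartition<br />
plug-in might contain. Only the script name and the /plugins/ location come from the procedure above;<br />
the log paths are illustrative, and the ldd line simply demonstrates how to list the shared libraries a<br />
bundled utility would need:<br />

```shell
#!/bin/sh
# Sketch of a /plugins/postpartition plug-in: runs immediately after the
# partition stage. Every command used here must itself exist in the RAM Disk.
echo "postpartition: partitioning finished" >> /tmp/postpartition.log

# When bundling a larger utility (such as perl) into the RAM Disk, list the
# shared libraries that must be copied in alongside it:
ldd /bin/sh > /tmp/postpartition-libs.txt 2>&1 || true
```

Remember that the script, plus any binary it calls, must already be inside the RAM Disk when the host boots.<br />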

<strong>Clusterworx</strong> <strong>System</strong> Administrator’s <strong>Guide</strong>


To Install the <strong>Clusterworx</strong> Prefinalize Script<br />


In a RedHat Enterprise Linux 3 update 3 or 4 environment, you must add the prefinalize script to ensure that<br />

the <strong>Clusterworx</strong> payload installation functions correctly.<br />

1. Copy the prefinalize script from the /misc directory on the <strong>Clusterworx</strong> ISO to your hard drive:<br />

cp /mnt/cdrom/misc/prefinalize /root/prefinalize<br />

2. Make the script executable:<br />

chmod +x prefinalize<br />

3. Select the Imaging tab and create a new image or edit an existing RedHat Enterprise Linux 3 or 4 image.<br />

4. Under the RAM Disk subtab, click the Add button.<br />

5. Click Browse to select the prefinalize script as the source and enter /plugins/prefinalize as the<br />

destination.<br />

6. Click OK.<br />

7. Click Apply to save changes or Revert or Close to abort this action.<br />

Tip<br />

If a prefinalize script already exists, you can merge the scripts or contact support for additional<br />

assistance. By default, no additional prefinalize scripts exist.<br />
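As an illustration of merging, assuming both scripts are plain shell, one approach is to append your<br />
commands to the end of the <strong>Clusterworx</strong> script. The /tmp paths below are stand-ins for the real files:<br />

```shell
# Stand-ins for the two scripts (paths are illustrative only):
printf '#!/bin/sh\necho clusterworx prefinalize steps\n' > /tmp/prefinalize
printf '#!/bin/sh\necho site-specific steps\n' > /tmp/site-prefinalize

# Merge: start from the Clusterworx script, then append the site commands,
# skipping the site script's duplicate shebang line.
cp /tmp/prefinalize /tmp/prefinalize.merged
tail -n +2 /tmp/site-prefinalize >> /tmp/prefinalize.merged
chmod +x /tmp/prefinalize.merged
sh /tmp/prefinalize.merged
```

The merged script then replaces /plugins/prefinalize in the RAM Disk.<br />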


Version Control <strong>System</strong> (VCS)<br />

The <strong>Clusterworx</strong> Version Control <strong>System</strong> allows users with privileges to manage changes to payloads,<br />

kernels, or images (similar in nature to managing changes in source code with a version control system). The<br />

Version Control <strong>System</strong> is accessed via the VCS menu and supports common Check-Out and Check-In<br />

operations. Items are version controlled by the user—when an item is checked out, it can be modified locally<br />

and checked back in. For information on initially placing a payload, kernel, or image under version control,<br />

see Payload Management on page 84, Kernel Management on page 112, or Image Management on page 120.<br />

Version Branching<br />

Image management works with VCS to allow you to branch any payload, kernel, or image under version<br />

control arbitrarily from any version. The following diagram illustrates version branching for a kernel. The<br />

process begins with a working copy of a kernel that is checked into VCS as a versioned kernel. The kernel is<br />

then checked out of VCS, modified (as a working copy of the kernel), and checked back into VCS as a new,<br />

versioned branch of the original kernel.<br />

(Diagram: a Working Copy, version 0, is checked into VCS as revision 1; checking revision 1 out produces<br />
Working Copy version 1, which is checked back in as revision 2 on a new branch.)<br />
Tip<br />
If another user checks out a copy of the same item you are working with and checks it back into VCS<br />
before you do, you must either discard your changes and check out the latest version of the item or<br />
create a new branch that does not contain the items checked in by the other user.<br />
Note<br />
A Working Copy of a payload, kernel, or image is currently present in the working area (e.g., /opt/cwx/<br />
imaging//payloads). A Versioned payload, kernel, or image is a revision of a payload, kernel, or<br />
image stored in VCS.<br />


Version Branching Example<br />


Suppose, for example, that a payload under version control was gradually optimized to suit specific hardware<br />

contained in a cluster. If the optimization were performed in stages (where each stage was a different VCS<br />

revision), VCS would contain multiple versions of the payload. Now suppose that you added some new hosts<br />

with slightly different hardware specifications to the cluster, but the last few revisions of the payload use<br />

optimizations that are incompatible with the new hardware. Using the version branching feature, you could<br />

create a new branch of the payload based on an older version that does not contain the offending<br />

optimizations. The new branch could be used with the new hosts, while the remaining hosts could use the<br />

original payload.<br />


Version Control Check-in<br />

To Check In a Payload, Kernel, or Image<br />

1. After making changes to a payload, kernel, or image, click Check In or select Check In from the VCS<br />

menu. The VCS Import dialog appears.<br />

2. (Optional) Enter an alias to use when referring to this version. The alias is the name displayed in the VCS<br />

Log between the parentheses:<br />

1()<br />

February 26, 2004 9:14:17 AM MST, root<br />

Description of changes...<br />

3. (Optional) Select Branch to create a new branch of this item. Do not select this option if you want<br />

<strong>Clusterworx</strong> to create a new revision on the current branch.<br />

Note<br />

If another user checks out a copy of the same item you are working with and checks it back into VCS<br />

before you do, you must either discard your changes and check out the latest version of the item or<br />

create a new branch that does not contain the items checked in by the other user.<br />

4. (Optional) Click Status to view information about the item (i.e., repository, module, location, revision).<br />

See also Version Status on page 149.<br />

5. Click OK to continue or click Cancel to abort this action.<br />

Tip<br />

VCS Check In may fail if you have insufficient disk space. To monitor the amount of available disk space,<br />

configure the disk space monitor to log this information, e-mail the administrator, or run a script when<br />

disk space is low. See <strong>Clusterworx</strong> Monitoring and Event Subsystem on page 171 for details.<br />
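As a sketch of the kind of script such a monitor might invoke when disk space runs low (the threshold and<br />
output path here are illustrative, not <strong>Clusterworx</strong> settings):<br />

```shell
#!/bin/sh
# Illustrative low-disk check: list mount points at or above a usage
# threshold so the administrator can clear space before check-ins fail.
THRESHOLD=90
df -P | awk -v limit="$THRESHOLD" \
  'NR > 1 { gsub(/%/, "", $5); if ($5 + 0 >= limit) print $6 }' \
  > /tmp/full-filesystems.txt
```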



Version Control Check-out<br />

To Check Out a Payload, Kernel, or Image<br />

1. Select the Imaging tab.<br />

2. Select Check Out from the VCS Menu. The VCS Check Out dialog appears.<br />


3. Select the payload, kernel, or image you want to check out of VCS (use the Shift or Ctrl keys to select<br />

multiple items).<br />

Note<br />

When you check out a payload, kernel, or image, <strong>Clusterworx</strong> creates a working copy of the item. If you<br />

check out the root of a payload, kernel, or image, <strong>Clusterworx</strong> selects the tip revision.<br />

Warning!<br />

Every time a user creates a payload (or checks a payload out of VCS), <strong>Clusterworx</strong> stores a working copy<br />

of the payload in the user’s /opt/cwx/imaging directory. To accommodate this process, <strong>Clusterworx</strong><br />

requires a minimum of 10 GB of disk space. Once the payload is checked into VCS, the user may safely<br />

remove the contents of the imaging directory.<br />

4. Click OK. <strong>Clusterworx</strong> places the item(s) into a working directory where you may make changes. Click<br />

Cancel to abort this action.<br />


VCS Management<br />

The VCS management option allows you to view the change history for a particular payload, kernel, or<br />
image.<br />

To Launch the VCS Management Console<br />

1. Select the Imaging tab.<br />

2. Select VCS Management from the VCS menu. The VCS Management dialog appears.<br />

3. Select a payload, kernel, or image for which to display a change history.<br />

Tip<br />

Click the A (Add), M (Modify), or D (Delete) options to include or exclude specific information.<br />

4. To remove a payload, kernel, or image, select the item from the navigation tree and click Delete. When<br />

deleting a version of any item, all subsequent versions are also deleted (i.e., deleting version 4 also<br />

removes versions 5, 6, and so on).<br />

Warning!<br />

If you select Payloads, Kernels, or Images from the navigation tree, clicking Delete will remove ALL<br />

payloads, kernels, or images from the system.<br />

5. To copy a payload, kernel, or image, right-click on the item in the navigation tree and select Copy.<br />

<strong>Clusterworx</strong> prompts you for a new name, then creates a new copy of the item in VCS.<br />



Version Status<br />


In the event that a payload, kernel, or image is already under version control, you may view its version status<br />

through the VCS menu.<br />

Tip<br />

To view a summary of changes made to an item since it was last checked into VCS, select Check Out<br />

from the VCS menu. See VCS Management on page 148.<br />

To View Version Status<br />

1. Select Status from the VCS menu. The VCS Status dialog appears.<br />

2. When finished, click OK to close the dialog.<br />


VCS Host Compare<br />

The Host Compare feature allows you to compare the payload currently installed on a host with the latest<br />

version of the payload stored in VCS. This is useful when determining whether or not to re-provision a host<br />

with a new payload. Similar to the VCS Management Console, this option displays all additions,<br />

modifications, and deletions made to the payload since you last used it to provision the host.<br />

TO EXCLUDE FILES FROM THE COMPARISON LIST<br />

1. Open the file, /opt/cwx/etc/exclude.files (a copy of this file should exist on all hosts):<br />

proc<br />

dev/pts<br />

etc/ssh/ssh_host_dsa_key<br />

etc/ssh/ssh_host_dsa_key.pub<br />

etc/ssh/ssh_host_key<br />

etc/ssh/ssh_host_key.pub<br />

etc/ssh/ssh_host_rsa_key<br />

etc/ssh/ssh_host_rsa_key.pub<br />

media<br />

mnt<br />

root/.ssh<br />

scratch<br />

sys<br />

tmp<br />

usr/local/src<br />

usr/share/doc<br />

usr/src<br />

var/cache/<br />

var/lock<br />



var/log<br />

var/run<br />

var/spool/anacron<br />

var/spool/at<br />

var/spool/atjobs<br />

var/spool/atspool<br />

var/spool/clientmqueue<br />

var/spool/cron<br />

var/spool/mail<br />

var/spool/mqueue<br />

var/tmp<br />

2. Edit the file as needed, then save your changes.<br />

Tip<br />

It is best to edit this file while it is in the payload so it can be copied to all hosts.<br />
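Because exclude.files lists one path per line, the edit can be scripted. In this sketch, the payload root<br />
directory and the var/lib/mydata entry are illustrative; only the opt/cwx/etc/exclude.files location<br />
comes from the text above:<br />

```shell
# Stand-in for a payload working copy (illustrative path):
PAYLOAD_ROOT=/tmp/payload-demo
mkdir -p "$PAYLOAD_ROOT/opt/cwx/etc"
printf 'proc\ntmp\n' > "$PAYLOAD_ROOT/opt/cwx/etc/exclude.files"

# Append a new exclusion only if it is not already listed:
grep -qx 'var/lib/mydata' "$PAYLOAD_ROOT/opt/cwx/etc/exclude.files" ||
  echo 'var/lib/mydata' >> "$PAYLOAD_ROOT/opt/cwx/etc/exclude.files"
```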

VersionControlService.profile<br />


<strong>Clusterworx</strong> uses VersionControlService.profile, a global default exclude list that is not distribution-specific.<br />

You may add files or directories to this list to prevent <strong>Clusterworx</strong> from checking them into VCS—particularly<br />

helpful when importing payloads from the working directory. To remove items from the exclusion list,<br />

comment them out of the profile.<br />

Also contained in the VersionControlService.profile, the deflate.temp:/ parameter allows you to specify<br />

an alternate path for large files created while importing a payload.<br />



Chapter 8<br />

Provisioning<br />

Overview<br />

The <strong>Clusterworx</strong> provisioning capability allows you to create an image, then apply the image to multiple<br />

hosts. The following illustration depicts an image that is provisioned to multiple hosts.<br />

(Diagram: a payload plus a kernel make up an image, which is then provisioned to multiple hosts.)<br />


Selecting an Image<br />

The <strong>Clusterworx</strong> provisioning service allows you to select a versioned image from VCS or a working copy of<br />

an image from your working directory.<br />

To Select an Image<br />

1. Select the Provisioning tab.<br />

2. Select the host(s) you want to provision from the navigation tree (use the Shift or Ctrl keys to select<br />

multiple hosts).<br />

3. Select the Versioned Images or Working Images subtab.<br />

Note<br />

A Working Copy of an image is currently present in the working area (e.g., /opt/cwx/imaging//<br />

payloads). A Versioned image is a revision of an image stored in VCS. See Version Control <strong>System</strong> (VCS)<br />

on page 144 for details on using the version control system.<br />

4. Select the image you want to use to provision the host(s).<br />

5. (Optional) Click the Advanced button to display the Advanced Options dialog (see Advanced<br />
Provisioning Options on page 156). This dialog allows you to override partitioning, payload, and kernel<br />
verbosity settings.<br />

6. Click Provision to distribute the image to the selected hosts. <strong>Clusterworx</strong> asks you to confirm your action.<br />



7. Click Yes to provision the host(s) or click No to abort this action.<br />

Warning!<br />

If you click Yes, <strong>Clusterworx</strong> re-provisions the hosts using the new image. Any pending or running jobs<br />

on the selected host(s) are lost.<br />

Tip<br />

To conserve space in the /opt directory, you may delete cached .ebi and .payload files from<br />

/opt/cwx/provision/cache/.<br />

Please note, however, that removing ALL cached files may require more time to provision hosts. To<br />

speed up the provisioning process, do not delete the latest cached file—removing cached payloads that<br />

are in use on hosts forces <strong>Clusterworx</strong> to re-image the hosts the next time they are provisioned.<br />

Tip<br />

To disable the provisioning confirmation dialog, edit the ProvisioningService.profile and set the<br />
provisioning.confirm option to false:<br />
# Default value for the presence of a confirmation dialog<br />
# before provisioning (in the Provisioning tab)<br />
# true: Show the confirmation dialog<br />
# false: Skip the confirmation dialog<br />
provisioning.confirm: false<br />


Advanced Provisioning Options<br />

The Advanced Options dialog allows you to temporarily configure partitioning behavior, payload download<br />

settings, and kernel verbosity. These settings are not persistent; they simply override those configurations<br />

made using the Advanced Image Options dialog. See Advanced Imaging Options on page 121.<br />

USE WORKING COPY OF KERNEL<br />

Enable this option to use the working copy of the kernel in place of its version-controlled equivalent. Because<br />

working copies of kernels are often shared, hosts associated with the working copy are updated to use the<br />

latest version when they reboot—but only if the kernel was modified or used to provision other hosts.<br />

USE WORKING COPY OF PAYLOAD<br />

Enable this option to use the working copy of the payload in place of its version-controlled equivalent.<br />

Because working copies of payloads are often shared, hosts associated with the working copy are updated to<br />

use the latest version when they reboot—but only if the payload was modified or used to provision other<br />

hosts.<br />



SCHEDULE PROVISION AT NEXT REBOOT<br />


Enable this option to postpone provisioning until the next time you reboot the hosts. Provisioning channels<br />

are created and hosts are assigned to the new image, but the hosts cannot reboot or cycle power without<br />

being provisioned.<br />

Tip<br />

To change the default scheduled provisioning setting, edit $CWXHOME/etc/ProvisioningService.profile<br />

as follows:<br />

provisioning.nextreboot:{true|false}<br />

Scheduling a provision at next reboot can be especially useful when used with PBS. For example, you may<br />

make updates to a payload, then schedule provisioning to occur only after the current tasks are complete. To<br />

do this, the root user (who must be allowed to submit jobs) can submit a job to each host instructing it to<br />

reboot.<br />

Root can submit jobs to PBS only if acl_roots is configured. To configure acl_roots, run qmgr and enter the<br />

following from the qmgr prompt:<br />

qmgr: set server acl_roots += root<br />

If you already set up additional ACLs, you will also need to add root to those ACLs. For example, suppose you<br />

have an acl_users list that allows access to a queue, workq. The command to add root to the ACL would be:<br />

# set queue workq acl_users += root<br />

The following is a sample PBS script you might use to reboot hosts:<br />

#################################################<br />

#!/bin/bash<br />
for i in `seq 1 64`<br />
do<br />
echo \#PBS -N Reboot_n$i > Reboot_n$i.pbs<br />
echo \#PBS -j oe >> Reboot_n$i.pbs<br />
echo \#PBS -V >> Reboot_n$i.pbs<br />
echo \#PBS -l nodes=n$i >> Reboot_n$i.pbs<br />
echo \#PBS -q workq >> Reboot_n$i.pbs<br />
echo \#PBS -o /dev/null >> Reboot_n$i.pbs<br />
echo /sbin/reboot >> Reboot_n$i.pbs<br />
qsub < Reboot_n$i.pbs<br />
rm Reboot_n$i.pbs<br />
done<br />

#################################################<br />

PARTITION THIS TIME<br />

This option allows you to override the current partition settings. You may automatically partition an image if<br />

the partition changed, force partitioning to re-create all partitions—including those that are exempt from<br />

being overwritten (see Managing Partitions on page 124), or choose not to partition the host.<br />

DOWNLOAD PAYLOAD THIS TIME<br />

The payload options allow you to automatically download a payload if a newer version is available (or if the<br />

current payload is not identical to that contained in the image), force <strong>Clusterworx</strong> to download a new copy of<br />

the payload—regardless of its status, or choose not to download a payload.<br />


KERNEL VERBOSITY<br />

The kernel verbosity level (1-8) allows you to control debug messages displayed by the kernel during<br />

provisioning. The default value, 1, is the least verbose and 8 is the most.<br />

Configuring DHCP<br />

Provisioning also allows you to modify DHCP settings. By default, when provisioning occurs, <strong>Clusterworx</strong><br />

automatically modifies DHCP settings and restarts the protocol. If you make manual DHCP modifications<br />

and want <strong>Clusterworx</strong> to stop, start, restart, or reload DHCP, use the controls in the DHCP menu.<br />

Note<br />

When working with DHCP, ensure that the server installation includes DHCP and, if the subnet on<br />

which the cluster will run differs from 192.168.0.0, edit the file in the <strong>Clusterworx</strong> DHCP installation<br />

directory (i.e., /opt/cwx/dhcp/dhcpd.conf.template).<br />

If you update your existing <strong>Clusterworx</strong> installation to a newer version on the <strong>Clusterworx</strong> Master Host,<br />

any changes to the dhcpd.conf.template file will be lost and will need to be updated again.<br />

To Configure DHCP Settings<br />

The DHCP menu allows you to perform the following operations:<br />

Stop<br />

Start<br />

Restart<br />

Reload (re-creates the dhcpd.conf file)<br />

Tip<br />

Changes made to /etc/dhcpd.conf are overwritten when you provision the host. Any changes to DHCP<br />

should be made to /opt/cwx/dhcp/dhcpd.conf.template.<br />

To Configure Multicast Routes<br />

Note<br />

When provisioning with SuSE Linux Enterprise Server 9 (or newer) or RHEL4, the default multicast<br />

configuration may not work properly. Ensure that multicast routing is configured to use the management<br />

interface.<br />

SLES<br />

1. Enter the following from the CLI to temporarily add the route (where eth0 is the management interface):<br />

route add -net 239.192.0.0 netmask 255.255.255.0 dev eth0<br />

2. Make the change persistent by entering the following:<br />

vi /etc/sysconfig/network/routes<br />

Then add:<br />

239.192.0.0 0.0.0.0 255.255.255.0 eth0 multicast<br />



RHEL4<br />


1. Enter the following from the CLI to temporarily add the route (where eth0 is the management interface):<br />

route add -net 239.192.0.0 netmask 255.255.255.0 dev eth0<br />

2. Make the change persistent by entering the following:<br />

vi /etc/sysconfig/network-scripts/route-eth0<br />

Then add:<br />

239.192.0.0/24 dev eth0<br />


Provisioning Channels<br />

On certain occasions, you may need to use more than 20 provisioning channels. For example, you may need<br />

to use more than 10 images on your cluster. Because each image requires the use of two channels (one for the<br />

EBI and one for the payload), you must create more channels to accommodate the additional images. In such<br />

situations, you must edit /opt/cwx/etc/DistributionService.profile, /opt/cwx/etc/Activator.profile, and<br />

/opt/cwx/etc/system-<strong>Clusterworx</strong>.profile.<br />

Note<br />

By default, 20 provisioning channels (00-19) are defined.<br />

Add Provisioning Channels<br />

1. In the DistributionService.profile, copy the last two channels and change all ports, IP addresses, and<br />

numbers. For an explanation of the contents of this file, see DistributionService.profile on page 161.<br />

Note<br />

The multicast IP addresses are the same for the channel pairs but the ports are different.<br />

2. In the Activator.profile, add a host and command entry for the new channels. For each additional<br />

multicast channel added to DistributionService.profile, you must include the following in<br />

Activator.profile:<br />

DistributionService.provisioning-.host=<br />

DistributionService.provisioning-.command=DistributionService<br />

-channel\:provisioning-<br />

For example:<br />

DistributionService.provisioning-20.host=cwxhost<br />

DistributionService.provisioning-20.command=DistributionService<br />

-channel\:provisioning-20<br />

3. Locate the system.dna.activate line in the system-<strong>Clusterworx</strong>.profile and add the new channel names to<br />

the end of that line.<br />

4. Restart <strong>Clusterworx</strong>.<br />

5. (Optional) To view the additional channels, run executive list DistributionService.<br />
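For illustration only, a new channel pair (one channel for the EBI, one for the payload) modeled on the<br />
provisioning-00 entries might look like the following. The multicast address and ports shown are examples;<br />
copy the last pair in your file and adjust, giving the pair one new shared multicast address and unique<br />
ports throughout. The remaining keys (multicast.size, ttl, throttle, wastegate) are copied from the<br />
existing channels unchanged:<br />

```
channels.provisioning-20.file: {system.home}/distribution/provisioning-20
channels.provisioning-20.interface: {host}
channels.provisioning-20.registrar.address: {host.address}
channels.provisioning-20.registrar.port: 10020
channels.provisioning-20.multicast.address: 239.192.0.138
channels.provisioning-20.multicast.port: 10020

channels.provisioning-21.file: {system.home}/distribution/provisioning-21
channels.provisioning-21.interface: {host}
channels.provisioning-21.registrar.address: {host.address}
channels.provisioning-21.registrar.port: 10021
channels.provisioning-21.multicast.address: 239.192.0.138
channels.provisioning-21.multicast.port: 10021
```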



DistributionService.profile<br />


/opt/cwx/etc/DistributionService.profile contains the options used to set the <strong>Clusterworx</strong> provisioning<br />

channels. You may create additional channels or change multicast options by altering the contents of this<br />

file. If multiple <strong>Clusterworx</strong> hosts exist in a heterogeneous network, you may need to define several multicast<br />

sub-classes. The structure of DistributionService.profile is as follows:<br />


channels.provisioning-00.file: {system.home}/distribution/provisioning-00<br />

<strong>Clusterworx</strong> automatically creates symbolic links as required.<br />

channels.provisioning-00.interface: {host}<br />

The host name of the <strong>Clusterworx</strong> Server (signified by the {host} variable).<br />

channels.provisioning-00.registrar.address: {host.address}<br />

The IP address of the <strong>Clusterworx</strong> Server (signified by the {host.address} variable).<br />

channels.provisioning-00.registrar.port: 10000<br />

Valid ports are currently set at 10000-10019. Additional multicast addresses can be added if more than 20<br />

provisioning channels are required.<br />

channels.provisioning-00.multicast.address: 239.192.0.128<br />

Valid multicast addresses are 224.0.0.0 through 239.255.255.255; the 239.x.x.x range is reserved for use within an organization.<br />

If multiple <strong>Clusterworx</strong> installations exist on the same network, you must specify different multicast<br />

ranges.<br />

channels.provisioning-00.multicast.port: 10000<br />

Multicast transmission port.<br />

channels.provisioning-00.multicast.size: 1446<br />

Multicast packet size. Packet size must not exceed 1446 for EBIs. For non-EBIs (payloads), this value should<br />

be between 4096 bytes (4K) and 32767 bytes (32K-1).<br />

channels.provisioning-00.multicast.ttl: 1<br />

Packet Time-To-Live. The number of router hops (0–255) the packet is allowed to make.<br />

channels.provisioning-00.multicast.throttle: 10000000<br />

The maximum number of bytes allowed to transfer per second. By default, the DistributionService allows a<br />

data transfer rate of up to 10MB per second (e.g., a 100Mb Ethernet network has a theoretical maximum<br />

bandwidth of 12.5MB). You may increase or decrease this value to increase or decrease the amount of bandwidth<br />

consumed by the multicast transmission.<br />

See warning on page 161.<br />

channels.provisioning-00.multicast.wastegate: 100000<br />

The number of nanoseconds between multicast packets (check with your switch manufacturer). A high-speed<br />
switch should use 0; low-quality switches require a higher value. By default, this value is 10000.<br />

See warning below.<br />

Warning!<br />

Do not change the multicast throttle or wastegate values without consulting Technical Support for<br />

assistance.<br />


Note<br />

When provisioning with SuSE Linux Enterprise Server 9 (or newer) or RedHat 4, the default multicast<br />

configuration may not work properly. Ensure that multicast routing is configured to use the management<br />

interface. See To Configure Multicast Routes on page 158.<br />



Chapter 9<br />

Runner<br />

Overview<br />

The <strong>Clusterworx</strong> Runner allows you to remotely execute simultaneous commands on multiple hosts. For<br />

example, you could use the Cluster Copy (ccp) command to copy files to all hosts, or copy and install an RPM<br />

on all hosts. For additional information, see ccp on page 192.<br />

Note<br />

Runner is not a true shell. Avoid using commands that require terminal input or interaction from the<br />

user—these commands may delay processing for your system. Commands to avoid include ping, reboot,<br />

top, halt, and those associated with editing files. If a command you enter does not process quickly, click<br />

Abort.<br />

Tip<br />

If you re-provision a host or execute a command that closes the host’s network connection (e.g., reboot),<br />

close the host’s subtab in Runner. This will prevent you from accidentally sending a command to that<br />

host. If a command is issued, the GUI will not respond until the host returns to normal status.<br />


Connect to a Host<br />

Note<br />

Before you can open a connection to a host, you must add the host to the host tree.<br />

To Connect to a Host<br />

1. Select the Runner tab.<br />

2. Click Connect. The Hosts dialog appears.<br />


3. Select the host to which you want to connect and click Add. To select multiple hosts, use the Shift or Ctrl<br />

keys. Click Cancel to abort this action.<br />

Runner displays the connected host(s) in the navigation tree.<br />


View Host Output<br />

To View Host Output<br />

1. Select the host for which to display output from the navigation tree. To select multiple hosts, use the Shift<br />

or Ctrl keys.<br />

2. Click Open. <strong>Clusterworx</strong> displays an output subtab for each host.<br />

3. The output tab displays the output from any commands executed on the host. See Execute Commands on<br />

Hosts on page 167.<br />

4. To close the output tab for a specific host, select the tab and click Close. To close the output tabs for all<br />

open hosts, click Close All.<br />


Execute Commands on Hosts<br />

When executing commands on hosts, there are two options available: executing commands on all hosts to<br />

which you are connected, or executing commands on a specific host.<br />

To Execute a Command on All Hosts<br />

Enter a CLI command in the upper pane.<br />

Note<br />

If a command fails, <strong>Clusterworx</strong> displays an error message in the Errors pane.<br />

Tip<br />

Running interactive commands in batch mode will not work; however, many of these commands have a<br />

batch mode that will run non-interactively and exit. The following table contains additional details<br />

regarding some of these commands:<br />

Command Description<br />

top $ top -b -n 1<br />

-b invokes batch mode; -n refers to the number of iterations before exiting.<br />

This example will print top once, then exit.<br />



ping $ ping -c 1 <host><br />

-c tells the ping command to run a set number of times, then exit. This example will ping the host once, then exit.<br />

To Execute a Command on a Specific Host<br />

1. Select the output tab for the host on which you want to execute the command.<br />

2. Enter the command in the host’s output pane. See tip on page 167.<br />

Note<br />

If a command fails, <strong>Clusterworx</strong> displays an error message in the Errors pane.<br />



Disconnect from a Host<br />

To Disconnect from a Host<br />

1. Select the host from which to disconnect from the navigation tree. To select multiple hosts, use the Shift<br />

or Ctrl keys.<br />

2. Click Disconnect. Runner closes the output tab(s) for the host(s) and removes them from the navigation<br />

tree.<br />





Chapter 10<br />

Instrumentation Service<br />

<strong>Clusterworx</strong> Monitoring and Event Subsystem<br />

<strong>Clusterworx</strong> uses a monitoring and event system (including an event log) to track system values. This system<br />

includes monitors, metrics, listeners, and loggers that collect values from the cluster, then display this<br />

information using the <strong>Clusterworx</strong> instrumentation GUI (see Instrumentation on page 40). You can extend<br />

the standard monitoring and event system to include custom values and set thresholds for user-defined<br />

events. For example:<br />

Monitoring custom values using scripts.<br />

Displaying custom values in the <strong>Clusterworx</strong> list view.<br />

Setting thresholds on values and taking an action if these thresholds are exceeded.<br />

Logging custom error conditions in the <strong>Clusterworx</strong> log.<br />

Running custom scripts as event actions.<br />

Monitors run at a set interval and collect information from each host. Listeners receive information about<br />

metrics from the instrumentation service, then determine if the values are reasonable. If a listener<br />

determines that a metric is above or below a set threshold, the listener triggers a logger to take a specific<br />

action.<br />

Typically, configuration files are host-specific and are located in the /opt/cwx/etc directory. If you modify the<br />

configuration files, copy these files into the payload to make them available on each host after you provision,<br />

then restart <strong>Clusterworx</strong>.<br />

Warning!<br />

Re-installing <strong>Clusterworx</strong> will overwrite the configuration files. Please create backups of these files if<br />

they are modified.<br />



Monitors<br />

Monitors run periodically on the hosts and provide metrics that are gathered, processed, and displayed using<br />

the <strong>Clusterworx</strong> instrumentation GUI. All standard <strong>Clusterworx</strong> monitors are configured in the<br />

InstrumentationMonitors.profile in the /opt/cwx/etc directory. The format of the monitor configuration in the<br />

file is generally as follows (where <interval> is in milliseconds):<br />

<name>: com.lnxi.instrumentation.server.<monitor><br />

<name>.interval: <interval><br />

When working with standard monitors, it is strongly recommended that you leave all monitors enabled;<br />

however, you may change how often these monitors run. Raising the interval reduces the CPU time and<br />

network use spent on monitoring. Because <strong>Clusterworx</strong> uses very little CPU processing time on the compute hosts,<br />

intervals as low as 1 second (1000 milliseconds) are nearly undetectable. By default, some monitors are set to<br />

run every 5 seconds (5000 milliseconds) or longer.<br />

Example<br />

#<br />

# A monitor for network statistics.<br />

# This information is polled every second.<br />

#<br />

network: com.lnxi.instrumentation.server.NetworkMonitor<br />

network.interval: 1000<br />

Note<br />

The comments were added for readability only.<br />

Custom Monitors<br />

Custom monitors can be added to <strong>Clusterworx</strong> by using a special monitor called the Command Monitor. The<br />

Command Monitor can use values from any user-defined program or script that returns the information in a<br />

format <strong>Clusterworx</strong> can process. To use the Command Monitor, add an entry to the<br />

InstrumentationMonitors.profile with the following information:<br />

<name>: com.lnxi.instrumentation.server.CommandMonitor<br />

<name>.command: /path/to/script<br />

<name>.interval: 5000<br />

Note<br />

The <name> must be unique for each monitor.<br />

Warning!<br />

Test scripts carefully! Running an invalid script may cause undesired results with <strong>Clusterworx</strong>.<br />



Because the Command Monitor typically invokes a script (e.g., bash, perl), an interval of less than 5 seconds<br />

is not recommended (but is supported). To use the Command Monitor, the program or script called must<br />

return values to STDOUT in key:value pairs that use the following format:<br />

hosts.<hostname>.<name>.<key>:<value>\n<br />

hosts.<hostname>.<name>.<key>:<value>\n<br />

The <hostname> refers to the name of the host from which you are running the script.<br />

The <name> is the same name used in the InstrumentationMonitors.profile.<br />

The <key> parameter refers to what is being monitored.<br />

The <value> is the return value for that key. The script can return one or more items as long as they all have a<br />

key and value. The value can be any string or number, but the script is responsible for the formatting. The \n<br />

at the end is a newline character (required).<br />

Custom Monitors Example<br />

The following example uses perl to monitor how many users are logged into the host. The script will return<br />

two values: how many people are logged in and who the people are. The script name is /opt/cwx/bin/who.pl<br />

and returns who.who and who.count.<br />

#!/usr/bin/perl -w<br />

# Basic modules are allowed<br />

use IO::File;<br />

use Sys::Hostname;<br />

$host = hostname;<br />

my @users;<br />

# This opens the program and runs it. Don't forget the '|' on the end<br />

my $fh = new IO::File('/usr/bin/who |');<br />

# If the program was started<br />

if (defined $fh) {<br />

# Then loop through its output until you get an eof.<br />

while (defined($line = <$fh>)) {<br />

if ($line =~ m/^\w+.*/) {<br />

$line =~ m/^(\w+).*$/;<br />

push(@users,$1);<br />

}<br />

}<br />

# Close the file.<br />

$fh->close();<br />

}<br />

# Remove duplicate entries of who.<br />

%seen = ();<br />

foreach $item (@users) {<br />

push(@uniq, $item) unless $seen{$item}++;<br />

}<br />

# Count how many items are in the array for our count<br />

$count = scalar(@uniq);<br />



# Rather than an array of values, just return a single text string;<br />

foreach $users(@uniq) {<br />

$who .= "$users,";<br />

}<br />

chop($who);<br />

print "hosts." . $host . ".who.count:" . $count . "\n";<br />

print "hosts." . $host . ".who.who:" . $who . "\n";<br />

When you run the script on host “n2” (assuming that perl and the perl modules above are installed correctly),<br />

the following prints to STDOUT:<br />

[root@n2 root]# ./who.pl<br />

hosts.n2.who.count:1<br />

hosts.n2.who.who:root<br />

To configure <strong>Clusterworx</strong> to run this script and collect the values, add the configuration to the<br />

InstrumentationMonitors.profile:<br />

who: com.lnxi.instrumentation.server.CommandMonitor<br />

who.command: /opt/cwx/bin/who.pl<br />

who.interval: 5000<br />

Note<br />

Before changes can take effect, you must restart <strong>Clusterworx</strong> on each host where the custom monitor is<br />

installed.<br />
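Before wiring a new script into the profile, it can help to sanity-check its output from a shell. The following sketch is an illustration (not part of <strong>Clusterworx</strong>): it checks sample output against the hosts.<hostname>.<name>.<key>:<value> format. In practice, pipe your own script’s output in place of the printf.<br />

```shell
#!/bin/sh
# Verify that every line matches hosts.<hostname>.<name>.<key>:<value>
# The printf below stands in for the output of your monitor script.
printf 'hosts.n2.who.count:1\nhosts.n2.who.who:root\n' |
grep -Evq '^hosts\.[^.]+\.[^.]+\.[^:]+:.+$' && echo "bad lines found" || echo "format OK"
```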



Metrics<br />

Metrics refer to data collected by monitors that is processed and displayed by the <strong>Clusterworx</strong><br />

instrumentation service. The types of metrics collected are tab-specific (unique to each tab) and <strong>Clusterworx</strong><br />

allows you to view metrics for an individual host or group of hosts. For a list of available metrics, see<br />

Pre-configured Metrics on page 237.<br />

Note<br />

Before you can display a custom metric, you must define a custom monitor to collect the data. See Custom<br />

Monitors on page 172.<br />

To Select Displayed Metrics<br />

1. Select the Hosts tab.<br />

2. Select the Instrumentation subtab.<br />

3. Select the host(s) for which you want to define the metrics.<br />

4. Select the Tab that identifies the metric type you will set.<br />

Note<br />

To define metrics for the General tab, select List from the View menu.<br />



5. Select Metrics from the Edit menu. The Metric Selector appears.<br />

6. Select the metrics you want to include, then click OK.<br />

Metric Selector<br />

The Metric Selector reads from Metrics.profile in the /opt/cwx/etc directory on each <strong>Clusterworx</strong> client. You<br />

may add custom metrics to this profile by making additions in the proper file format:<br />

hosts.<name>.<key>.label:<label><br />

hosts.<name>.<key>.description:<description><br />

hosts.<name>.<key>.type:java.lang.<type><br />

hosts.<name>.<key>.pattern:<pattern><br />

The <name> and <key> are the monitor name and the value’s key as returned by the monitor (e.g., who.count).<br />

The <label> is the title displayed in the <strong>Clusterworx</strong> list monitoring view and in the metric selector<br />

dialog.<br />

The <description> indicates what the monitor does and appears in the metric selector dialog.<br />

The <type> is either “Number” or “String.” Numbers are right-justified and Strings are left-justified in the<br />

<strong>Clusterworx</strong> list view.<br />

The <pattern> helps set the column width for the <strong>Clusterworx</strong> list monitoring view. The column width should<br />

reflect the number of characters typically returned by the value. If the returned value has 10-12 characters,<br />

the pattern would be 12 zeros (000000000000). For example, if the returned value is a percentage such as<br />

“100%,” the pattern should be 4 zeros (0000).<br />



Custom Metrics Example<br />

Continuing with the example introduced in Custom Monitors on page 172, add the following to the<br />

Metrics.profile on the <strong>Clusterworx</strong> client—then restart the <strong>Clusterworx</strong> client:<br />

hosts.who.count.label=Who Count<br />

hosts.who.count.description=Number of users logged in.<br />

hosts.who.count.type=java.lang.Number<br />

hosts.who.count.pattern=00<br />

hosts.who.who.label=Who's On<br />

hosts.who.who.description=Who's logged in.<br />

hosts.who.who.type=java.lang.String<br />

hosts.who.who.pattern=0000000<br />

The new metrics appear in the Metric Selector dialog.<br />



The “who” additions also appear in the Instrumentation List view:<br />



Listeners and Loggers<br />

Listeners constantly read gathered metrics and allow you to set the threshold for every available metric. If the<br />

system exceeds a threshold, <strong>Clusterworx</strong> executes a logger to address the issue. Standard loggers include<br />

sending messages to the centralized <strong>Clusterworx</strong> message log, logging to a file, logging to the serial console,<br />

and shutting down the host.<br />

Listeners<br />

Listeners are configured in the InstrumentationListeners.profile. <strong>Clusterworx</strong> includes several standard<br />

listeners (pre-configured):<br />

Listener Description<br />

Icebox temperature warning Logs a warning if temperature exceeds 55° C.<br />

Icebox temperature error Logs an error and safely shuts down the host if the temperature exceeds 60° C.<br />

<strong>System</strong> Load information Logs an informational message if the system load exceeds 2.1.<br />

Disk Usage information Logs an informational message when the disk is almost filled to capacity. Included<br />

in code, but commented out.<br />

Swap Usage information Logs an informational message if the host is using swap (needs configuration).<br />

Included in code, but commented out.<br />

The format of the InstrumentationListeners.profile is as follows:<br />

<name>: com.lnxi.instrumentation.server.ThresholdListener<br />

<name>.severity: <severity><br />

<name>.interval: <interval><br />

<name>.metric: hosts.{host.moniker}.<monitor>.<key><br />

<name>.maximum|minimum: <value><br />

<name>.message: Example {1} limit {0} exceeded on host {3} (current value of {2})<br />

<name>.channels: {log} {file}<br />

The <name> is the name of the listener—every listener name must be unique.<br />

The <severity> refers to the warning level for the <strong>Clusterworx</strong> log. Accepted values are “error,” “warning,”<br />

and “information.”<br />

The <interval> is the amount of time (in milliseconds) that the value must exceed the threshold before<br />

triggering a logger. Specifying an interval is recommended when monitoring metrics that typically return<br />

spikes in their values. The following example depicts the temperature fluctuation of a host. If a listener is<br />

configured with an interval of 60,000 milliseconds, the host’s temperature must exceed the threshold for 60<br />

seconds before a logger is triggered. If an interval is not specified, the listener triggers a logger every time the<br />

temperature exceeds the threshold. If the value falls below the threshold, the listener’s interval is reset.<br />

[Figure: a host’s temperature over time, with the threshold line marked; the temperature remains above the threshold for 60 seconds (x-axis: 1 Min, 2 Min, 3 Min) before a logger is triggered.]<br />



Note<br />

The ThresholdListener can be used to set a threshold on many of the standard metrics included with<br />

<strong>Clusterworx</strong>. For more information on available metrics and how to use them, see Pre-configured Metrics<br />

on page 237.<br />

The {host.moniker} represents the name of the host in <strong>Clusterworx</strong> and should not be changed.<br />

The <key> is the key of the item that will be monitored.<br />

The maximum: or minimum: option allows you to monitor a high or a low threshold.<br />

The <value> refers to the numeric threshold set for the listener. Users can monitor any of the standard,<br />

pre-configured metrics. See Pre-configured Metrics on page 237.<br />

The message is user-configurable and contains the content of the log message or email message. Several<br />

variables are available in the message:<br />

{0} = Set Threshold<br />

{1} = Metric Name<br />

{2} = Metric Value at the time the listener was triggered<br />

{3} = Hostname<br />

The channels are the pre-configured loggers. The standard loggers are log, file, console, email, script,<br />

powercycle, poweroff, poweron, reset, reboot, halt, and shutdown (explained in detail under Pre-defined<br />

Loggers on page 181). Each listener should use at least one of these loggers. Loggers run in order and<br />

should be listed in braces, like the other variables in this file. For example, {log} {file}<br />

{console}.<br />

Example Listener Configuration<br />

Continuing with the “who” example in this chapter, a listener can be created to see how many users are<br />

logged into the system by adding the following configuration to the InstrumentationListeners.profile on the<br />

host. After adding this configuration, you must restart <strong>Clusterworx</strong> services on the host.<br />

who: com.lnxi.instrumentation.server.ThresholdListener<br />

who.severity: information<br />

who.interval: 10000<br />

who.metric: hosts.{host.moniker}.who.count<br />

who.maximum: 1<br />

who.message: {1}: The current number of users logged in ({2}) exceeded the limit of {0} on host {3}.<br />

who.channels: {log} {console}<br />

This configuration sets up a threshold that triggers when more than one user logs into the system (with an<br />

interval of 10 seconds). The message displayed on the serial console and sent to the <strong>Clusterworx</strong> message log<br />

is “Who Count: The current number of users logged in (2) exceeded the limit of 1 on host n2.”<br />



Loggers<br />

Pre-defined Loggers<br />

Also called channels, pre-defined loggers are short aliases (e.g., {log} or {file}) for the full logger<br />

name (e.g., loggers.com.lnxi.instrumentation.<name>). A logger refers to the action taken when a metric<br />

exceeds its maximum or minimum threshold. <strong>Clusterworx</strong> uses the following pre-configured loggers:<br />

Logger Action<br />

beacon Turns on the beacon for the host.<br />

console Logs a message to the console. This typically includes the serial console on<br />

/dev/ttyS0. (You may use the Icebox serial console or conman to log these messages.)<br />

email Sends the message via email. Requires SMTP and a configured email address. By default, this logger is<br />

commented out.<br />

file Logs a message to a file on each host, typically /opt/cwx/log/event.log.<br />

halt Sends a “halt” command to the host (this is the same as a shutdown on most machines). If the hosts<br />

fail to power off after a halt, try shutdown.<br />

log Logs a message to the centralized <strong>Clusterworx</strong> message log.<br />

powercycle Uses the Icebox to perform a hard power cycle.<br />

poweroff Uses the Icebox to execute a hard power off.<br />

poweron Uses the Icebox to turn the power on.<br />

reboot Sends a “reboot” command to the host.<br />

reset Uses the Icebox to do a hard reset.<br />

script Runs a user-defined script on the host. Currently set to use /bin/logger to send a message using syslog,<br />

but other scripts could be added.<br />

shutdown Sends a “shutdown” command to the host (same as halt on most machines). If the hosts fail to power<br />

off after a shutdown, try halt.<br />

Tip<br />

To configure the types of messages displayed by loggers, see TemplateFormatter on page 183.<br />



Custom Loggers<br />

Although pre-configured loggers are typically sufficient, they may be extended or modified to include<br />

additional capabilities. In the following example, script loggers are added to run scripts on the hosts.<br />

Note<br />

For the purposes of this example, <name> refers to the name of the new logger you are creating and<br />

/path/to/script refers to the absolute location of the script to be run.<br />

1. Copy the ShellLogger configuration (shown below) from Logging.profile to create another logger.<br />

loggers.com.lnxi.instrumentation.<name>: \<br />

com.xeroone.logging.ShellLogger<br />

loggers.com.lnxi.instrumentation.<name>.command: /path/to/script<br />

loggers.com.lnxi.instrumentation.<name>.formatter: \<br />

com.lnxi.instrumentation.event<br />

recorders.com.lnxi.instrumentation.<name>.enabled: true<br />

recorders.com.lnxi.instrumentation.<name>.loggers: \<br />

com.lnxi.instrumentation.<name><br />

recorders.com.lnxi.instrumentation.<name>.severity: debug<br />

2. Set up a channel to allow the new logger to be called easily. At the bottom of Logging.profile, add the<br />

following:<br />

channels.com.lnxi.instrumentation.<name>: \<br />

com.lnxi.instrumentation.<name><br />

3. Add an alias (e.g., {script}) to the InstrumentationService.profile to allow the new logger to be aliased by<br />

the listener.<br />

<alias>: com.lnxi.instrumentation.<name><br />
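As an illustration, the script a ShellLogger invokes can be very simple. The sketch below appends a timestamped copy of each message to a local file; the log path and the assumption that the message arrives as command-line arguments are hypothetical, not documented product behavior.<br />

```shell
#!/bin/sh
# Hypothetical logger script: append a timestamped copy of the message
# (received as command-line arguments) to a local log file.
LOGFILE=/tmp/cwx-custom.log
echo "$(date '+%Y-%m-%d %H:%M:%S') $*" >> "$LOGFILE"
```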

Tip<br />

To configure the types of messages displayed by loggers, see TemplateFormatter on page 183.<br />



TemplateFormatter<br />

You may extend the abilities of pre-configured and custom loggers using the template field of the<br />

TemplateFormatter. The template field allows you to configure the types of messages displayed by loggers.<br />

For example, the message template type used in the following example is %m:<br />

formatters.com.lnxi.instrumentation.event: \<br />

com.xeroone.logging.TemplateFormatter<br />

formatters.com.lnxi.instrumentation.event.template: %m<br />

The following table contains a list of supported message templates:<br />

Template Description<br />

%N Sequential record number. This number resets each time the virtual machine restarts.<br />

%T Creation time.<br />

%C Channel.<br />

%S Severity.<br />

%M Message.<br />

%E Event.<br />

%EN Event name.<br />

%ET Event trace.<br />

%AN Application name.<br />

%AM Application moniker.<br />

%AST Application start time.<br />

%AV Application version.<br />

%HN Host name.<br />

%HM Host moniker.<br />

%MS Memory size.<br />

%MF Memory free.<br />

%OSN Operating system name.<br />

%OSV Operating system version.<br />

%% Literal % character.<br />

'' Literal ' (single quote) character.<br />

' Escape character for quoted text.<br />
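These fields can be combined in a single template. For example, a hypothetical template that prefixes each message with its creation time, severity, and host name might look like the following (the exact spacing and punctuation are up to you):<br />

```
formatters.com.lnxi.instrumentation.event.template: %T [%S] %HN: %M
```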



<strong>Clusterworx</strong> Message Log<br />

The <strong>Clusterworx</strong> message log is located on the instrumentation overview screen. If you select multiple hosts<br />

(or a container such as a cluster, partition, or region), the log shows messages for any host in the selection. If<br />

you select a single host, the message log shows messages for this host only. Messages have three severity<br />

levels: error, warning, and informational.<br />



Chapter 11<br />

Command-Line Interface<br />

Command-Line Syntax and Conventions<br />

CLI commands documented in this guide adhere to the following rules—commands entered incorrectly may<br />

produce the “Command not recognized” error message.<br />

Convention Description<br />

xyz Items in bold indicate mandatory parameters or keywords (e.g., all).<br />

<xyz> Angle brackets and italics indicate a user-defined variable (e.g., an IP address or host name).<br />

[x] [ ] Square brackets indicate optional items.<br />

[x|y|z] [ | ] Square brackets with a vertical bar indicate a choice of an optional value.<br />

{x|y|z} { | } Braces with a vertical bar indicate a choice of a required value.<br />

[x{y|z}] [ { | } ] A combination of square brackets and braces with vertical bars indicates a required choice<br />

of an optional parameter.<br />

Tip<br />

Help for all CLI commands is available through man pages.<br />

Note<br />

All CLI command arguments documented in this chapter are shown using colon notation only<br />

({--partition:|-p:}). You may also use a space or an equal sign (e.g., --description <description> or -M=<value>) with these<br />

arguments.<br />



CLI Commands<br />

ccp {<br />

[<source_host>:]<source_file><br />

[<destination_host>:]<destination_file>|<br />

[{-usage|-help|-?}]<br />

}<br />

conman {<br />

[[-b [ ...]]|<br />

[-d [:]]|<br />

[-e ]|<br />

[-f]|<br />

[-F ]|<br />

[-h]|<br />

[-j]|<br />

[-l ]|<br />

[-L]|<br />

[-m]|<br />

[-q]|<br />

[-Q]|<br />

[-r]|<br />

[-v]|<br />

[-V]]<br />

<br />

}<br />



cwhost {<br />

[partadd [{--description:|-d:} ] [--enable:] [--disable:]<br />

[{--regions:|-R} [,...]] [{--hosts:|-h} [,...]] |<br />

[partmod {[{--name:|-n:} ] [{--description:|-d:} ]<br />

[--enable:] [--disable:] [{--regions:|-R} [,...]]<br />

[{--hosts:|-h} [,...]]} ]|<br />

[partdel ]|<br />

[partshow [[ ...]]]|<br />

[regionadd [{--description:|-d:} ] [{--partition:|-p:} ]<br />

[--enable:] [--disable:] [{--hosts:|-h} [,...]]<br />

[{--groups:|-g} [,...]] ]|<br />

[regionmod {--name:|-n:} [{--description:|-d:} ]<br />

[{--partition:|-p:} ] [--enable:] [--disable:]<br />

[{--hosts:|-h} [,...]]<br />

[{--groups:|-g} [,...]] ]]|<br />

[regiondel ]|<br />

[regionshow [[ ...]]]|<br />

[hostadd [ ] [{--description:|-d:} ]<br />

[--enable:] [--disable:] [{--partition:|-p:} ]<br />

[{--regions:|-R:} [,,...]]<br />

[{--iceboxes:|-i:} :[,:,...:]]]|<br />

[hostmod [{--name:|-n:} ] [{--interfaces:|-I} |[,|]]<br />

[{--description:|-d:} ] [--enable:] [--disable:]<br />

[{--partition:|-p:} ]<br />

[{--regions:|-R:} [,,...]]<br />

[{--iceboxes:|-i:} :[,:,...:]]]|<br />

[hostdel ]|<br />

[hostshow [[ ...]]]|<br />

[ifaceadd [{--management:|-M:}]]|<br />

[ifacemod | [{--management:|-M:}] [--mac:|-m:} ] [{--ip:|-i:} ]<br />

[{--hostname:|-h:} ]]|<br />

[ifacedel |]|<br />

[ifaceshow [|[ | ...|]]]|<br />

[iceboxadd [{--description:|-d:} ]<br />

[{--password:|-p:} ] [{--hosts:|-h:} :[,:...]]]|<br />

[iceboxmod [{--name:|-n:} ] [{--mac:|-m:} ] [{--ip:|-i:} ]<br />

[{--description:|-d:} ] [{--password:|-p:} ]<br />

[{--hosts:|-h:} :[,:...]]]|<br />

[iceboxdel ]|<br />

[iceboxshow [[ ...]]]|<br />

[inflate [ ...]]|<br />

[deflate [ ...]]|<br />

[{--verbose|-v}]|<br />

[-signature]|<br />

[{-usage|-help|-?}]<br />

}<br />



cwpower {<br />

{<br />

[--on:|-1:]|<br />

[--off:|-0:]|<br />

[--cycle:|-C:]|<br />

[--reset:|-R:]|<br />

[--powerstatus:|-S:]|<br />

[--reboot:|-r:]|<br />

[--halt:|-h:]|<br />

[--down:|-d:]|<br />

[--hoststatus:|-s:]|<br />

[--flash|-f]|<br />

[--unflash|-u]|<br />

[--beacon|-b]|<br />

[{--duration|-F} [|force]]|<br />

[--severity|-e]|<br />

[{--verbose:|-v:} [--progressive:|-p:]]<br />

}<br />

[<host> ...]|<br />

[-signature]|<br />

[{-usage|-help|-?}]<br />

}<br />

cwprovision {<br />

[{--download-path:|-d:}<br />

{--image:|-i:}<br />

{--image.revision:|-I:}<br />

{--kernel:|-k:}[]<br />

[{--kernel-log-level:|-l:}[]]<br />

{--payload:|-p:}[]<br />

[{--payload-download:|-D:}yes|no|default]<br />

[{--repartition:|-R:}yes|no|default]<br />

[{--working-image:|-w:}]|<br />

[{--next-reboot:|-n:}]]|<br />

[{--query-last-image:|-q} [--uncompressed-hostnames:|-u]]<br />

[ ...]}|<br />

[-signature]|<br />

[{-usage|-help|-?}]<br />

}<br />



cwuser {<br />

[useradd [{--description:|-c:}] [{--home:|-d:}] [{--group:|-g:}]<br />

[{--groups:|-G:}[,,...]]<br />

[{--password:|-p:}] [{--shell:|-s:}] [{--uid:|-u:}]<br />

[{--enable:|-U}] [{--disable:|-L:}] [{--normal:|-n:}] ]|<br />

[usermod [{--description:|-c:}] [{--home:|-d:}] [{--group:|-g:}]<br />

[{--groups:|-G:}[,,...]]<br />

[{--password:|-p:}] [{--shell:|-s:}] [{--uid:|-u:}]<br />

[{--enable:|-U}] [{--disable:|-L:}] [{--name:|-l:}] ]|<br />

[userdel ]|<br />

[usershow [[ ...]]]|<br />

[passwd ]|<br />

[encryptpasswd]|<br />

[groupadd [{--description:|-d:}] [{--gid:|-g:}]<br />

[[{--roles:|-r:}] [,...]] [{--regions:|-R:}[,...]] ]|<br />

[groupmod [{--description:|-d:}] [{--gid:|-g:}]<br />

[[{--roles:|-r:}] [,,...]] [{--regions:|-R:}[,,...]]<br />

[{--name:|-n:}] ]|<br />

[groupdel ]|<br />

[groupshow [[ ...]]]|<br />

[roleadd [{--description:|-d:}] [{--privileges:|-p:}[,,...]] ]|<br />

[rolemod [{--description:|-d:}] [{--privileges:|-p:}[,,...]]<br />

[{--name:|-n:}] ]|<br />

[roledel ]|<br />

[roleshow [[ ...]]]|<br />

[privshow [[ ...]]]|<br />

[{--verbose|-v}]|<br />

[-signature]|<br />

[{-usage|-help|-?}]<br />

}<br />

dbix {<br />

[{-d|--delete} [ ...]]|<br />

[{-i|--import} ] |<br />

[{-x|--export} [ ...]]|<br />

[{-usage|-help|-?}]<br />

}<br />

dbx {<br />

[{--domain:|-d} ] [{--format:|-f:} ] [{-usage|-help|-?}] [-runtime[:verbose]]<br />

[-signature] [-splash]<br />

}<br />

imgr {<br />

{--image:|-i:} [{--kernel:|-k:}] [{--kernel-revision:|-K:}]<br />

[{--payload:|-p:}] [{--payload.revision:|-P:}] [{--force:|-f:}] [{--list:|-l:}]|<br />

[{-usage|-help|-?}]<br />

}<br />

kmgr {<br />

{--name:|-n:} [{--description:|-d:}]<br />

{--path:|-p:} [{--kernel:|-k:}]<br />

[{--architecture:|-a:}] [{--modules:|-m:}] [{--binary:|-b:}] [{--list:|-l:}]|<br />

[{-usage|-help|-?}]<br />

}<br />



pdcp {[<br />

[-w [,...,]]|<br />

[-x [,...,]]|<br />

[-a]|<br />

[-i]|<br />

[-r]|<br />

[-p]|<br />

[-q]|<br />

[-f ]|<br />

[-l ]|<br />

[-t ]|<br />

[-d]]<br />

[ ... ]<br />

<br />

}<br />

pdsh {<br />

[[-w [,...,]]|<br />

[-x [,...,]]|<br />

[-a]|<br />

[-i]|<br />

[-q]|<br />

[-f ]|<br />

[-s]|<br />

[-l ]|<br />

[-t ]|<br />

[-u ]|<br />

[-n ]|<br />

[-d]|<br />

[-S]|<br />

[,...,]]<br />

<br />

}<br />

pmgr {<br />

[[{--description:|-d:}] [{--include:|-i:}]<br />

[{--include-from:|-I:}] [{--location:|-l:}] [{--silent:|-s:}]<br />

[{--exclude:|-x:}]] [{--exclude-from:|-X:}] |<br />

[{-usage|-help|-?}]<br />

}<br />



powerman {
    [[{--on|-1}]|
    [{--off|-0}]|
    [{--cycle|-c}]|
    [{--reset|-r}]|
    [{--flash|-f}]|
    [{--unflash|-u}]|
    [{--list|-l}]|
    [{--query|-q}]|
    [{--node|-n}]|
    [{--beacon|-b}]|
    [{--temp|-t}]|
    [{--help|-h}]|
    [{--license|-L}]|
    [{--destination|-d} host[:port]]|
    [{--version|-V}]|
    [{--device|-D}]|
    [{--telemetry|-T}]|
    [{--exprange|-x}]]
    [<host> ...]
}

vcs {
    [{identify | id}]|
    [status]|
    [include <pattern>]|
    [exclude <pattern>]|
    [archive <filename>]|
    [import -R:<repository> -M:<module> [-n:<name>] [-d:<description>] [<directory>]]|
    [commit [-n:<name>] [-d:<description>] [<directory>]]|
    [branch [-n:<name>] [-d:<description>] [<directory>]]|
    [{checkout | co} -R:<repository> -M:<module> [-r:<revision>|<branch>|<name>]]|
    [{update | up} [-r:<revision>|<branch>|<name>] [<directory>]]|
    [name [-R:<repository>] [-M:<module>] [-r:<revision>|<branch>|<name>] <name>]|
    [describe [-R:<repository>] [-M:<module>] [-r:<revision>|<branch>|<name>] <description>]|
    [{narrate | log} [-R:<repository> -M:<module>] [-r:<revision>|<branch>|<name>]]|
    [iterate [-R:<repository> [-M:<module> [-r:<revision>|<branch>|<name>]]]]|
    [list]|
    [{-usage|-help|-?}]
}

xms


ccp

ccp {
    [<source host>:]<source file location>
    [<destination host>:]<destination file location>|
    [{-usage|-help|-?}]
}

Description

The Cluster Copy (ccp) command provides a file transfer service between two file systems—the client and the host running the file service.

Note

This command is effective only when used in the Runner. From a single host it works much like rcp.

Parameters

[<source host>:]<source file location>
    The name of the source host and the location of the source file.

[<destination host>:]<destination file location>
    The name of the destination host and the location to which you will copy the file.

[{-usage|-help|-?}]
    (Optional) Display help information for the command and exit. All other options are ignored.

Note

Only the source or the destination file location may be remote, not both. The local file location may be a relative path, but the path for the remote file location must be absolute.

Tip

You can install an RPM on the hosts by using ccp to put it in a temporary directory, then use the Runner feature to install it in parallel. Unless you install the RPM into the payload, it will not remain on the host when you re-provision it.

Example

The following example copies /etc/hosts from the remote n2 to the local directory specified. However, when used from the Runner, this command copies /etc/hosts from n2 to all other hosts that are selected. This is very similar to pdcp, except that ccp allows users to copy files from any host in the cluster to all other hosts—pdcp copies only from the Master Host to the other hosts.

ccp n2:/etc/hosts /etc
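The remote/local rules in the Note above can be checked mechanically before a copy is attempted. A minimal shell sketch; `check_ccp_args` is a hypothetical helper, not part of ccp itself:

```shell
# Hypothetical pre-flight check mirroring ccp's rules:
# only one side may be remote, and a remote path must be absolute.
check_ccp_args() {
  src=$1 dst=$2 remotes=0
  for arg in "$src" "$dst"; do
    case "$arg" in
      *:*)  remotes=$((remotes + 1))
            path=${arg#*:}
            case "$path" in
              /*) ;;                                 # absolute remote path: OK
              *)  echo "remote path must be absolute"; return 1 ;;
            esac ;;
    esac
  done
  [ "$remotes" -le 1 ] || { echo "only one side may be remote"; return 1; }
  echo ok
}

check_ccp_args n2:/etc/hosts /etc           # → ok
check_ccp_args n2:etc/hosts /etc || true    # rejected: relative remote path
```

A path containing a colon is treated as remote here, which matches the `host:path` form shown in the syntax above.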

conman

conman {
    [[-b [<host> ...]]|
    [-d [<host>][:<port>]]|
    [-e <character>]|
    [-f]|
    [-F <filename>]|
    [-h]|
    [-j]|
    [-l <filename>]|
    [-L]|
    [-m]|
    [-q]|
    [-Q]|
    [-r]|
    [-v]|
    [-V]]
    <host> [<host> ...]
}

Description

The ConMan client allows you to connect to remote consoles managed by ConManD. Console names are separated by spaces or commas and matched to the configuration via globbing. Regular expression matching can be enabled with the -r option.

ConMan supports three console access modes: monitor (read-only), interactive (read-write), and broadcast (write-only). Unless otherwise specified, ConMan opens the console session in interactive mode.

Parameters

[-b [<host> ...]]
    (Optional) Broadcast to multiple host consoles (write-only). You may enter a range of hosts or a space-delimited list of hosts (e.g., host[1-4 7 9]). Data sent by the client is copied to all specified consoles in parallel, but console output is not sent back to the client. You can use this option in conjunction with -f or -j.

[-d [<host>][:<port>]]
    (Optional) Specify the location of the ConManD daemon, overriding the default [127.0.0.1:7890]. This location may contain a host name or IP address and be followed by an optional colon and port number.

[-e <character>]
    (Optional) Specify the client escape character, overriding the default (&).

[-f]
    (Optional) Specify that write-access to the console should be forced, thereby stealing the console away from existing clients with write privileges. As connections are terminated, ConManD informs the original clients of who perpetrated the theft.

[-F <filename>]
    (Optional) Read console names or patterns from a file with the specified name. Only one console name may be specified per line. Leading and trailing white space, blank lines, and comments (i.e., lines beginning with a #) are ignored.

[-h]
    (Optional) Display a summary of the command-line options.

[-j]
    (Optional) Specify that write-access to the console should be joined, thereby sharing the console with existing clients that have write privileges. As privileges are granted, ConManD informs the original clients that privileges have been granted to new clients.

[-l <filename>]
    (Optional) Log console session output to a file with the specified name.

[-L]
    (Optional) Display license information.

[-m]
    (Optional) Monitor a console (read-only).

[-q]
    (Optional) Query ConManD for consoles matching the specified names or patterns. Output from this query can be saved to file for use with the -F option.

[-Q]
    (Optional) Enable quiet-mode, suppressing informational messages. This mode can be toggled on and off from within a console session via the &Q escape.

[-r]
    (Optional) Match console names via regular expressions instead of globbing.

[-v]
    (Optional) Enable verbose mode.

[-V]
    (Optional) Display version information.

<host>
    The name of the host to which to connect.
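The file format accepted by -F (one console name per line, with surrounding whitespace, blank lines, and # comment lines ignored) is simple enough to reproduce with standard tools. A sketch, assuming only the rules stated above; `parse_console_file` is a hypothetical helper, not part of ConMan:

```shell
# Hypothetical filter matching the -F file rules: trim leading and
# trailing whitespace, then drop comment lines and blank lines.
parse_console_file() {
  sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//' "$1" |
    grep -v -e '^#' -e '^$'
}
```

Piping a query result (`conman -q`) through a filter like this before feeding it back via -F would behave the same as letting ConMan read the raw file.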

ESCAPE CHARACTERS

ConMan supports the following escapes and assumes the default escape character (&):

&?    Display a list of all escapes currently available.
&.    Terminate the connection.
&&    Send a single escape character.
&B    Send a serial-break to the remote console.
&F    Switch from read-only to read-write via a force.
&I    Display information about the connection.
&J    Switch from read-only to read-write via a join.
&L    Replay the last 4KB of console output. This escape requires that logging is enabled for the console in the ConManD configuration.
&M    Switch from read-write to read-only.
&Q    Toggle quiet-mode to display or suppress informational messages.
&R    Reset the host associated with this console. This escape requires that resetcmd is specified in the ConManD configuration.
&Z    Suspend the client.

ENVIRONMENT

The following environment variables may be used to override default settings.

CONMAN_HOST
    Specifies the host name or IP address at which to contact ConManD, but may be overridden by the -d command-line option. Although a port number separated by a colon may follow the host name (i.e., host:port), the CONMAN_PORT environment variable takes precedence. If you do not specify a host, the default host IP address (127.0.0.1) is used.

CONMAN_PORT
    Specifies the port on which to contact ConManD, but may be overridden by the -d command-line option. If not set, the default port (7890) is used.

CONMAN_ESCAPE
    The first character of this variable specifies the escape character, but may be overridden by the -e command-line option. If not set, the default escape character (&) is used.
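The precedence described above (command line over environment over built-in defaults) can be sketched in shell. `resolve_dest` is a hypothetical helper written to illustrate the stated rules, not ConMan's actual resolution code:

```shell
# Hypothetical sketch of destination resolution: a -d value beats the
# environment; CONMAN_HOST/CONMAN_PORT beat the defaults 127.0.0.1:7890.
resolve_dest() {
  opt_d=$1                              # value given to -d, or empty
  host=${CONMAN_HOST:-127.0.0.1}
  port=${CONMAN_PORT:-7890}
  case "$opt_d" in
    "")   ;;                            # no -d: keep environment/defaults
    *:*)  host=${opt_d%%:*}; port=${opt_d#*:} ;;
    *)    host=$opt_d ;;                # -d host only: port still from env/default
  esac
  echo "$host:$port"
}

resolve_dest ""            # → 127.0.0.1:7890 when neither variable is set
resolve_dest mgmt1:7891    # → mgmt1:7891
```

Note one simplification: in real ConMan a bare `-d host` combined with CONMAN_PORT follows the precedence text above; this sketch applies the same rule.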

Warning!

Client and server communications are not yet encrypted.

Example 1

To connect to host console n1, enter:

conman n1

Note

Once in conman, enter &. to exit or &? to display a list of conman commands.

Example 2

To broadcast (write-only) to multiple hosts, enter:

conman -b n[1-10]

Tip

To view the output of broadcast commands on a group of hosts, use the conmen command before you begin entering commands from conman. Conmen opens a new window for each host and displays the host output.

For example, the following command opens new consoles for hosts n2-n4:

conmen n[2-4]

cwhost

cwhost {
    [partadd [{--description:|-d:} <description>] [--enable:] [--disable:]
        [{--regions:|-R} <region>[,<region>...]] [{--hosts:|-h} <host>[,<host>...]] <partition name>]|
    [partmod {[{--name:|-n:} <new name>] [{--description:|-d:} <description>]
        [--enable:] [--disable:] [{--regions:|-R} <region>[,<region>...]]
        [{--hosts:|-h} <host>[,<host>...]]} <partition name>]|
    [partdel <partition name>]|
    [partshow [<partition name> [<partition name> ...]]]|
    [regionadd [{--description:|-d:} <description>] [{--partition:|-p:} <partition>]
        [--enable:] [--disable:] [{--hosts:|-h} <host>[,<host>...]]
        [{--groups:|-g} <group>[,<group>...]] <region name>]|
    [regionmod [{--name:|-n:} <new name>] [{--description:|-d:} <description>]
        [{--partition:|-p:} <partition>] [--enable:] [--disable:]
        [{--hosts:|-h} <host>[,<host>...]]
        [{--groups:|-g} <group>[,<group>...]] <region name>]|
    [regiondel <region name>]|
    [regionshow [<region name> [<region name> ...]]]|
    [hostadd <host name> <mac> <ip> [<host name> <mac> <ip> ...] [{--description:|-d:} <description>]
        [--enable:] [--disable:] [{--partition:|-p:} <partition>]
        [{--regions:|-r:} <region>[,<region>,...]]
        [{--iceboxes:|-i:} <icebox>:<port>[,<icebox>:<port>,...]]]|
    [hostmod <host name> [{--name:|-n:} <new name>] [{--interfaces:|-I} <mac>|<ip>[,<mac>|<ip>]]
        [{--description:|-d:} <description>] [--enable:] [--disable:]
        [{--partition:|-p:} <partition>]
        [{--regions:|-r:} <region>[,<region>,...]]
        [{--iceboxes:|-i:} <icebox>:<port>[,<icebox>:<port>,...]]]|
    [hostdel <host name>]|
    [hostshow [<host name> [<host name> ...]]]|
    [ifaceadd <host name> <mac> <ip> [{--management:|-M:}]]|
    [ifacemod <mac>|<ip> [{--management:|-M:}] [{--mac:|-m:} <mac>] [{--ip:|-i:} <ip>]
        [{--hostname:|-h:} <host name>]]|
    [ifacedel <mac>|<ip>]|
    [ifaceshow [<mac>|<ip> [<mac>|<ip> ...]]]|
    [iceboxadd <icebox name> <mac> <ip> [{--description:|-d:} <description>]
        [{--password:|-p:} <password>] [{--hosts:|-h:} <host>:<port>[,<host>:<port>...]]]|
    [iceboxmod <icebox name> [{--name:|-n:} <new name>] [{--mac:|-m:} <mac>] [{--ip:|-i:} <ip>]
        [{--description:|-d:} <description>] [{--password:|-p:} <password>]
        [{--hosts:|-h:} <host>:<port>[,<host>:<port>...]]]|
    [iceboxdel <icebox name>]|
    [iceboxshow [<icebox name> [<icebox name> ...]]]|
    [inflate <host range> [<host range> ...]]|
    [deflate <host> [<host> ...]]|
    [{--verbose|-v}]|
    [-signature]|
    [{-usage|-help|-?}]
}

Description

The Host Administration (cwhost) utility allows you to add, modify, view the current state of, or delete any partition, region, host, interface, or Icebox in your cluster.

Subcommands

partadd

Add a partition to the cluster.

[{--description:|-d:} <description>]
    (Optional) A brief description of the partition. If you do not specify a description, this field remains blank.

[--enable:] [--disable:]
    (Optional) Indicates whether or not the partition is enabled. If you do not specify this option, Clusterworx will enable the partition.

[{--regions:|-R} <region>[,<region>...]]
    (Optional) The list of regions that are members of this partition. If you do not specify any regions, none are included in the partition.

[{--hosts:|-h} <host>[,<host>...]]
    (Optional) The list of hosts that are members of this partition. If you do not specify any hosts, none are included in the partition.

<partition name>
    The name of the partition to add.

partmod

Modify a partition on the cluster. Unchanged entries remain the same.

[{--name:|-n:} <new name>]
    (Optional) Change the partition name. If you do not specify a name, Clusterworx uses the current partition name.

[{--description:|-d:} <description>]
    (Optional) A brief description of the partition. If you do not specify a description, Clusterworx uses the current partition description.

[--enable:] [--disable:]
    (Optional) Indicates whether or not the partition is enabled. If you do not specify this option, the partition remains in its original state.

[{--regions:|-R} <region>[,<region>...]]
    (Optional) The list of regions that are members of this partition. If you do not specify any regions, the partition remains in its original state.

[{--hosts:|-h} <host>[,<host>...]]
    (Optional) The list of hosts that are members of this partition. If you do not specify any hosts, the partition remains in its original state.

<partition name>
    The name of the partition to modify.

partdel

Delete a partition from the cluster.

<partition name>
    The name of the partition to delete.

partshow

Display the current settings for one or more partitions.

[<partition name> [<partition name> ...]]
    (Optional) The name(s) of the partition(s) for which to display the current settings. Multiple entries are delimited by spaces. Leave this option blank to display all partitions.


regionadd

Add a region to a partition.

[{--description:|-d:} <description>]
    (Optional) A brief description of the region. If you do not specify a description, this field remains blank.

[{--partition:|-p:} <partition>]
    (Optional) The partition to which this region belongs. If you do not specify a partition, Clusterworx assigns the region to the default or unassigned partition.

[--enable:] [--disable:]
    (Optional) Indicates whether or not the region is enabled. If you do not specify this option, Clusterworx will enable the region.

[{--hosts:|-h} <host>[,<host>...]]
    (Optional) The list of hosts that are members of this region. If you do not specify this option, the region will not contain any member hosts.

[{--groups:|-g} <group>[,<group>...]]
    (Optional) The list of groups that may access this region. If you do not specify this option, the region will not be available to any groups.

<region name>
    The name of the new region.

regionmod

Modify a region on the cluster. Unchanged entries remain the same.

[{--name:|-n:} <new name>]
    (Optional) Change the region name. If you do not specify a name, Clusterworx uses the current region name.

[{--description:|-d:} <description>]
    (Optional) A brief description of the region. If you do not specify a description, Clusterworx uses the current region description.

[{--partition:|-p:} <partition>]
    (Optional) The partition to which this region belongs. If you do not specify a partition, the region remains in its original partition.

[--enable:] [--disable:]
    (Optional) Indicates whether or not the region is enabled. If you do not specify this option, the region remains in its original state.

[{--hosts:|-h} <host>[,<host>...]]
    (Optional) The list of hosts that are members of this region. If you do not specify any hosts, the region remains in its original state.

[{--groups:|-g} <group>[,<group>...]]
    (Optional) The list of groups that may access this region. If you do not specify any groups, the region remains in its original state.

<region name>
    The name of the region to modify.

regiondel

Delete a region from the cluster.

<region name>
    The name of the region to delete.

regionshow

Display the current settings for one or more regions.

[<region name> [<region name> ...]]
    (Optional) The name(s) of the region(s) for which to display the current settings. Multiple entries are delimited by spaces. Leave this option blank to display all regions.

hostadd

Add a host to the cluster.

<host name> <mac> <ip> [<host name> <mac> <ip> ...]
    The name of each new host, its MAC address, and its IP address. The first host specified is the management interface. Multiple entries are space-delimited.

[{--description:|-d:} <description>]
    (Optional) A brief description of the host. If you do not specify a description, this field remains blank.

[--enable:] [--disable:]
    (Optional) Indicates whether or not the host is enabled. If you do not specify this option, Clusterworx enables the host.

[{--partition:|-p:} <partition>]
    (Optional) The partition to which this host belongs. If you do not specify a partition, Clusterworx assigns the host to the default or unassigned partition.

[{--regions:|-r:} <region>[,<region>,...]]
    (Optional) The region(s) to which this host belongs. If you do not specify a region, Clusterworx does not assign the host to any region. Multiple entries are comma-delimited.

[{--iceboxes:|-i:} <icebox>:<port>[,<icebox>:<port>,...]]
    (Optional) The Icebox(es) and port(s) to which this host is connected. If you do not specify an Icebox and port, Clusterworx assumes that the host is not connected to an Icebox. Multiple entries are comma-delimited.

hostmod

Modify a host on the cluster—unchanged entries remain the same.

<host name>
    The name of the host to modify.

{--name:|-n:} <new name>
    The host’s new name.

[{--interfaces:|-I} <mac>|<ip>[,<mac>|<ip>]]
    (Optional) A list of interfaces with which this host is associated. If none of the specified interfaces are management interfaces, Clusterworx marks the first interface as the management interface.

[{--description:|-d:} <description>]
    (Optional) A brief description of the host. If you do not specify a description, Clusterworx uses the current host description.

[--enable: {yes|no}]
    (Optional) Indicates whether or not the host is enabled. If you do not specify this option, the host remains in its original state.

[{--partition:|-p:} <partition>]
    (Optional) The partition to which this host belongs. If you do not specify a partition, the host remains associated with the original partition specified.


[{--regions:|-r:} <region>[,<region>,...]]
    (Optional) The region(s) to which this host belongs. If you do not specify a region, the host will not belong to any region. Multiple entries are comma-delimited.

[{--iceboxes:|-i:} <icebox>:<port>[,<icebox>:<port>,...]]
    (Optional) The Iceboxes and ports to which this host is connected. If you do not specify an Icebox and port, Clusterworx assumes that the host is not connected to an Icebox. Multiple entries are comma-delimited.

hostdel

Delete a Clusterworx host.

<host name>
    The name of the host to delete.

hostshow

Display the current settings for one or more hosts.

[<host name> [<host name> ...]]
    (Optional) The name of the host(s) for which to display the current settings. Multiple entries are delimited by spaces. Leave this option blank to display all hosts.

ifaceadd

Add an interface to the cluster.

<host name>
    The name of the host on which you added the interface.

<mac>
    The MAC address of the interface.

<ip>
    The IP address of the interface.

[{--management:|-M:}]
    (Optional) Specify whether or not this interface is a management interface. If you do not specify this option, Clusterworx assumes that this interface is not a management interface.

ifacemod

Modify an interface on the cluster—unchanged entries remain the same.

<mac>
    The MAC address of the interface.

<ip>
    The IP address of the interface.

[{--management:|-M:}]
    (Optional) Specify whether or not this interface is a management interface. If you do not specify this option, the interface remains in its original state.

[{--mac:|-m:} <mac>]
    (Optional) Change the interface’s hardware or MAC address.

[{--ip:|-i:} <ip>]
    (Optional) Change the interface’s IP address.

[{--hostname:|-h:} <host name>]
    (Optional) Change the host to which this interface belongs.

ifacedel

Delete an interface from the cluster.

<mac>
    The MAC address of the interface to delete.

<ip>
    The IP address of the interface to delete.

ifaceshow

Display the current settings for one or more interfaces.

[<mac>|<ip> [<mac>|<ip> ...]]
    (Optional) The MAC or IP address(es) of the interface(s) for which to display the current settings. Multiple entries are delimited by spaces. Leave this option blank to display all interfaces.

iceboxadd

Add an Icebox to the cluster.

<icebox name>
    The name of the new Icebox.

<mac>
    The MAC address of the new Icebox.

<ip>
    The IP address of the new Icebox.

[{--description:|-d:} <description>]
    (Optional) A brief description of the Icebox. If you do not specify a description, this field remains blank.

[{--password:|-p:} <password>]
    (Optional) The Icebox’s administrative password. If you do not specify a password, Clusterworx uses the default password “icebox”.

[{--hosts:|-h:} <host>:<port>[,<host>:<port>...]]
    (Optional) A list of hosts connected to the Icebox and the ports to which they are connected. If you do not specify this option, Clusterworx assumes that the hosts are not connected to an Icebox.

iceboxmod

Modify an Icebox on the cluster—unchanged entries remain the same.

<icebox name>
    The name of the Icebox to modify.

[{--name:|-n:} <new name>]
    (Optional) The Icebox’s new name.

[{--mac:|-m:} <mac>]
    (Optional) Change the Icebox’s hardware or MAC address.

[{--ip:|-i:} <ip>]
    (Optional) Change the Icebox’s IP address.

[{--description:|-d:} <description>]
    (Optional) A brief description of the Icebox. If you do not specify a description, Clusterworx uses the current Icebox description.

[{--password:|-p:} <password>]
    (Optional) The Icebox’s administrative password. If you do not specify a password, Clusterworx uses the original password.

[{--hosts:|-h:} <host>:<port>[,<host>:<port>...]]
    (Optional) A list of hosts connected to the Icebox and the ports to which they are connected. If you do not specify this option, Clusterworx assumes that the hosts remain in their original state.

iceboxdel

Delete a Clusterworx Icebox.

<icebox name>
    The name of the Icebox to delete.


iceboxshow

Display the current settings for one or more Iceboxes.

[<icebox name> [<icebox name> ...]]
    (Optional) The Icebox(es) for which to display the current setting(s). Multiple entries are delimited by spaces. Leave this option blank to display all Iceboxes.

inflate <host range> [<host range> ...]
    (Optional) Allows you to change between full and compressed host list format. Inflate the specified host range(s) to display a full list of hosts.

deflate <host> [<host> ...]
    (Optional) Allows you to change between full and compressed host list format. Deflate the specified host range(s) to display a compressed host list.

[{--verbose|-v}]
    (Optional) Display verbose output when performing operations. This option is common to all subcommands.

[-signature]
    (Optional) Displays the application signature. The application signature contains the name, description, version, and build information of this application.

[{-usage|-help|-?}]
    (Optional) Display help information for the command and exit. All other options are ignored.

Examples

EXAMPLE 1

View the layout of the system:

cwhost hostshow

EXAMPLE 2

Get details of the system:

cwhost hostshow -v

EXAMPLE 3

Create a region called group1:

cwhost regionadd group1

EXAMPLE 4

Add a host to region group1 with the host name n1, the MAC address 0005b342afe1, and the IP address 10.0.0.1:

cwhost hostadd -r:group1 n1 0005b342afe1 10.0.0.1

EXAMPLE 5

Add host n2 to the group1 region:

cwhost hostmod -r:group1 n2

EXAMPLE 6

Add an Icebox with the name ice2, the MAC address 0003b349e8a3, and the IP address 10.0.0.102:

cwhost iceboxadd ice2 0003b349e8a3 10.0.0.102

EXAMPLE 7

Deflate the host list n1, n2, n3, and n4:

cwhost deflate n1 n2 n3 n4
n[1-4]

EXAMPLE 8

Inflate the host list n[1-4]:

cwhost inflate n[1-4]
n1
n2
n3
n4
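The inflate mapping in Example 8 is plain text manipulation, so it can be approximated in shell when cwhost is not at hand. A sketch; `inflate_range` is a hypothetical helper that handles only the simple `prefix[low-high]` form, not the richer range syntax cwhost accepts:

```shell
# Hypothetical stand-in for "cwhost inflate" covering only the
# simple prefix[low-high] form (e.g., n[1-4] -> n1 n2 n3 n4).
inflate_range() {
  range=$1
  prefix=${range%%\[*}                     # text before the bracket
  bounds=${range#*\[}; bounds=${bounds%]}  # text inside the brackets
  low=${bounds%-*}; high=${bounds#*-}
  i=$low
  while [ "$i" -le "$high" ]; do
    printf '%s%s\n' "$prefix" "$i"
    i=$((i + 1))
  done
}

inflate_range 'n[1-4]'
# n1
# n2
# n3
# n4
```

Deflating (the reverse direction shown in Example 7) requires grouping consecutive numbers and is left to cwhost itself.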

cwpower

cwpower {
    {
        [--on:|-1:]|
        [--off:|-0:]|
        [--cycle:|-C:]|
        [--reset:|-R:]|
        [--powerstatus:|-S:]|
        [--reboot:|-r:]|
        [--halt:|-h:]|
        [--down:|-d:]|
        [--hoststatus:|-s:]|
        [--flash|-f]|
        [--unflash|-u]|
        [--beacon|-b]|
        [{--duration|-F} [<seconds>|force]]|
        [--severity|-e]|
        [{--verbose:|-v:} [--progressive:|-p:]]
    }
    [<host> ...]|
    [-signature]|
    [{-usage|-help|-?}]
}

Description

The Power Administration (cwpower) utility allows you to perform power administration operations on one or more hosts in the cluster. Operations include power on, power off, power cycle, reset, reboot, halt, and power down (a soft power off). You may also query the current power status of particular hosts.

Note

You may specify only one power administration operation option each time you use the cwpower command.
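Because only one operation flag is accepted per invocation, a wrapper script may want to validate its arguments before calling cwpower. A minimal sketch; `count_power_ops` is a hypothetical helper, and the flag list is drawn from the parameter table below:

```shell
# Hypothetical pre-flight check: count how many cwpower operation
# flags appear in an argument list (cwpower accepts at most one).
count_power_ops() {
  count=0
  for arg in "$@"; do
    case "$arg" in
      -1|-0|-C|-R|-S|-r|-h|-d|-s|-f|-u|-b|-e) count=$((count + 1)) ;;
    esac
  done
  echo "$count"
}

count_power_ops -1 'n[1-10]'    # → 1   (valid: one operation)
count_power_ops -1 -0 n1        # → 2   (invalid: two operations)
```

Host ranges are quoted so the shell does not treat `n[1-10]` as a filename glob before cwpower sees it.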

Parameters

[--on|-1]
    (Optional) Turn on power to the specified host(s).

[--off|-0]
    (Optional) Turn off power to the specified host(s).

[--cycle|-C]
    (Optional) Cycle power to the specified host(s).

[--reset|-R]
    (Optional) Perform a hardware reset for the specified host(s).

[--powerstatus|-S]
    (Optional) Query the hard power status for the specified host(s).

[--reboot|-r]
    (Optional) Reboot the specified host(s).

[--halt|-h]
    (Optional) Halt the specified host(s).

[--down|-d]
    (Optional) Execute a soft power down on the specified host(s).

[--hoststatus|-s]
    (Optional) Query the host administration power status for the specified host(s).

[--flash|-f]
    (Optional) Turn the beacon on for the specified host(s).

[--unflash|-u]
    (Optional) Turn the beacon off for the specified host(s).

[--beacon|-b]
    (Optional) Report the beacon status for the specified host(s).

[{--duration|-F} [<seconds>|force]]
    (Optional) Used only with the flash option, specifies the duration (in seconds) for which to flash the beacon on the specified host(s). To turn the beacon on indefinitely, enter the force option. If you do not specify a duration, the beacon turns on for 15 seconds.

Note

This option is available only for hosts that support IPMI.

[--severity|-e]
    (Optional) Report the error status for the specified host(s).

[{--verbose|-v} [--progressive|-p]]
    (Optional) Change the standard output to verbose. Output displays the power status of each host, one per line. To display output as information becomes available, select the progressive option—progressive output is not guaranteed to be sorted and is not summarized.

<host> [<host> ...]
    The name of the host(s) for which to execute the specified operation. You may enter a range of hosts or a space-delimited list of hosts (e.g., host[1-4 7 9]).

[-signature]
    (Optional) Displays the application signature. The application signature contains the name, description, version, and build information of this application.

[{-usage|-help|-?}]
    (Optional) Display help information for the command and exit. All other options are ignored.

Examples

EXAMPLE 1

Power on hosts 1–10:

cwpower -1 n[1-10]

EXAMPLE 2

Power off host 1:

cwpower -0 n1

EXAMPLE 3

Power cycle hosts 2–5:

cwpower -C n[2-5]

EXAMPLE 4

Check the status (On, Off, Unknown, Provisioning) of hosts 1–10:

cwpower -s n[1-10]

EXAMPLE 5

Flash the beacon on an IPMI host for 60 seconds:

cwpower -f -F 60 n5

cwprovision

cwprovision {
    [{--download-path:|-d:} <path>
    {--image:|-i:} <image>
    {--image.revision:|-I:} <revision>
    {--kernel:|-k:}[<name>]
    [{--kernel-log-level:|-l:}[<level>]]
    {--payload:|-p:}[<name>]
    [{--payload-download:|-D:} yes|no|default]
    [{--repartition:|-R:} yes|no|default]
    [{--working-image:|-w:} <image name>]|
    [{--next-reboot:|-n:}]]|
    [{--query-last-image:|-q} [--uncompressed-hostnames:|-u]]
    [<host> ...]}|
    [-signature]|
    [{-usage|-help|-?}]
}

Description

The Provisioning (cwprovision) utility allows you to provision one or more hosts on the cluster and use working copies to override the kernel and payload associated with the image. See Provisioning on page 153 and Version Control System (VCS) on page 144.

Parameters

{--download-path:|-d:} <path>
    The path to which to download the image during the boot process (by default, /mnt).

{--image:|-i:} <image>
    The image to use to provision the host(s). Unless you specify the working image option, Clusterworx assumes that the image is a version-controlled image.

{--image.revision:|-I:} <revision>
    The revision of the image to use to provision the host(s). If you specify a branch revision, Clusterworx uses the tip revision of the branch. If you do not specify a revision or a working image, Clusterworx uses the tip revision of the image. Revisions may be specified either numerically or by alias.

Note

The image.revision option is not available in conjunction with the working-image option.

{--kernel:|-k:}[<name>]
    The working copy of the kernel associated with the image used to provision the host(s). The name is required only if two or more working copies of the kernel exist.

[{--kernel-log-level:|-l:}[<level>]]
    Select the kernel verbosity level used to control debug messages. This level may range from 1 (the least verbose) to 8 (the most verbose). By default, the verbosity level is 1.

{--payload:|-p:}[<name>]
    The working copy of the payload associated with the image used to provision the host(s). The name is required only if two or more working copies of the payload exist.

[{--payload-download:|-D:}yes|no|default]<br />

(Optional) Specify whether or not to force a download of the payload to the host<br />

during this provisioning operation. The default option automatically detects<br />

whether or not to download the payload. See Advanced Provisioning Options on<br />

page 156.<br />

[{--repartition:|-R:}yes|no|default]<br />

(Optional) Specify whether or not to force a repartition of the host during this<br />

provisioning operation. The default option automatically detects whether or not<br />

to repartition the host. See Advanced Provisioning Options on page 156.<br />

[{--working-image:|-w:}<image name>]
(Optional) Use the working copy of the specified image to provision the host(s).
Note
The working-image option is not available in conjunction with the image.revision option.
[{--next-reboot:|-n:}] (Optional) Provision the selected host(s) after the next reboot.
[{--query-last-image:|-q}] (Optional) Display the name and revision of the last image used to provision the host(s). By default, this option displays a list of compressed host names and their corresponding images. To change this format, use the uncompressed-hostnames option. The uncompressed format displays hosts and images in a colon-separated list that is easily parsed by command-line tools. Each line follows the format:

<host>:[VCS|Working] Image:<image name>:<revision>:<kernel>:<payload>
Tip
The kernel and payload fields specify zero (0) if you use the VCS version and one (1) if you use the working version to override the kernel or payload using the advanced provisioning options.
Note
The query-last-image option can display image and host information even if the host is down.
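Because the query output is colon-separated, it lends itself to shell parsing. A minimal sketch, using a made-up sample (the host names, image names, and exact field layout are assumptions; verify against your own cwprovision -q -u output):

```shell
# Hypothetical sample of `cwprovision -q -u` output; field layout assumed.
sample='n2:VCS Image:Compute_Host:3:0:0
n3:Working Image:Compute_Host:3:1:0
n4:VCS Image:Login_Host:1:0:0'
# Print the hosts whose last provisioned image was Compute_Host
# (field 1 = host, field 3 = image name).
printf '%s\n' "$sample" | awk -F: '$3 == "Compute_Host" { print $1 }'
```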

[{--uncompressed-hostnames:|-u}]
(Optional) Select this option to change the output format for query-last-image to list one host name and corresponding image per line. This option can be used only with query-last-image.
[<host> ...]
The name of the host(s) to provision. You may enter a range of hosts or a space-delimited list of hosts (e.g., host[1-4 7 9]).


[-signature] (Optional) Displays the application signature. The application signature contains the name, description, version, and build information of this application.
[{-usage|-help|-?}] (Optional) Display help information for the command and exit. All other options are ignored.

Examples
Tip
Use vcs iterate -R:images to see what images are available for provisioning. For a list of working images, use imgr --list.
EXAMPLE 1
To provision hosts 2–4 with image Compute_Host:
cwprovision -i:Compute_Host n[2-4]
EXAMPLE 2
To provision hosts 2–4 with an older version (version 3) of the image Compute_Host:
cwprovision -i:Compute_Host -I:3 n[2-4]
EXAMPLE 3
To set advanced options to force re-partitioning and download the payload for hosts 2–4:
cwprovision -i:Compute_Host -I:3 -R:yes -D:yes n[2-4]
EXAMPLE 4
To provision hosts 2–10 after the next reboot:
cwprovision -i:rhel4_img --next-reboot n[2-10]

cwuser

cwuser {
[useradd [{--description:|-c:}<description>] [{--home:|-d:}<home directory>] [{--group:|-g:}<group>]
[{--groups:|-G:}<group>[,<group>,...]]
[{--password:|-p:}<encrypted password>] [{--shell:|-s:}<shell>] [{--uid:|-u:}<uid>]
[{--enable:|-U}] [{--disable:|-L:}] [{--normal:|-n:}] <user>]|
[usermod [{--description:|-c:}<description>] [{--home:|-d:}<home directory>] [{--group:|-g:}<group>]
[{--groups:|-G:}<group>[,<group>,...]]
[{--password:|-p:}<encrypted password>] [{--shell:|-s:}<shell>] [{--uid:|-u:}<uid>]
[{--enable:|-U}] [{--disable:|-L:}] [{--name:|-l:}<new name>] <user>]|
[userdel <user>]|
[usershow [<user> [<user> ...]]]|
[passwd <user>]|
[encryptpasswd]|
[groupadd [{--description:|-d:}<description>] [{--gid:|-g:}<gid>]
[{--roles:|-r:}<role>[,<role>,...]] [{--regions:|-R:}<region>[,<region>,...]] <group>]|
[groupmod [{--description:|-d:}<description>] [{--gid:|-g:}<gid>]
[{--roles:|-r:}<role>[,<role>,...]] [{--regions:|-R:}<region>[,<region>,...]]
[{--name:|-n:}<new name>] <group>]|
[groupdel <group>]|
[groupshow [<group> [<group> ...]]]|
[roleadd [{--description:|-d:}<description>] [{--privileges:|-p:}<privilege>[,<privilege>,...]] <role>]|
[rolemod [{--description:|-d:}<description>] [{--privileges:|-p:}<privilege>[,<privilege>,...]]
[{--name:|-n:}<new name>] <role>]|
[roledel <role>]|
[roleshow [<role> [<role> ...]]]|
[privshow [<privilege> [<privilege> ...]]]|
[{--verbose|-v}]|
[-signature]|
[{-usage|-help|-?}]
}

Description
The User Administration (cwuser) utility allows you to perform user, group, and role administration operations on the cluster. Operations include adding, modifying, deleting, and displaying the current state of users, groups, and roles.
Subcommands
useradd
Add a Clusterworx user account.
[{--description:|-c:}<description>]
The user's description (e.g., the user's full name). If you do not specify a description, this field remains blank.
[{--home:|-d:}<home directory>]
The user's home directory (by default, /home/<user>).


[{--group:|-g:}<group>]
The user's primary group. You may enter the group name or its numerical gid. If you do not enter a primary group, Clusterworx will do one of the following:
Red Hat Linux
Create a group with the same name as the user and make it the user's primary group (unless you specify the [--normal:|-n:] option).
SuSE Linux
The primary group for the user is the default group specified for users, usually users.
[{--groups:|-G:}<group>[,<group>,...]]
The secondary group(s) to which the user belongs. If you do not specify this option, the user belongs to no secondary groups. Multiple entries are delimited by commas.
[{--password:|-p:}<encrypted password>]
The user's encrypted password. If you do not specify a password, Clusterworx disables the account.
[{--shell:|-s:}<shell>] The user's login shell. If you do not specify this option, Clusterworx assigns /bin/bash as the user's login shell.
[{--uid:|-u:}<uid>] The user's uid. If you do not specify a uid, Clusterworx assigns the first available uid greater than 499.
[{--enable:|-U}] [{--disable:|-L:}]
These options allow you to enable or disable the user's account. The -U (unlock) and -L (lock) options are provided for compatibility with the useradd utility and allow you to enable and disable the user's account, respectively. If you do not specify either of these options, the user's account is enabled by default (unless no password is supplied).
[{--normal:|-n:}] If you do not specify a group for the user on Red Hat Linux, Clusterworx will behave as it does with most other versions of Linux. The user's primary group uses the default user group, users.
<user> The user's login name.

usermod
Modify an existing Clusterworx user account.
[{--description:|-c:}<description>]
The user's description (e.g., the user's full name). If you do not specify a description, Clusterworx uses the current description.
[{--home:|-d:}<home directory>]
The user's home directory. If left blank, Clusterworx uses the current home directory.
[{--group:|-g:}<group>]
The user's primary group. You may enter the group name or its numerical gid. If you do not enter a primary group, Clusterworx uses the current group assignment.
[{--groups:|-G:}<group>[,<group>,...]]
The secondary group(s) to which the user belongs. If you do not specify this option, Clusterworx assigns the user to any secondary groups previously assigned. Multiple entries are delimited by commas.

[{--password:|-p:}<encrypted password>]
Change the user's encrypted password. If you do not specify a password, Clusterworx uses the current password.
[{--shell:|-s:}<shell>] The user's login shell. If you do not specify this option, Clusterworx uses the login shell previously assigned to the user.
[{--uid:|-u:}<uid>] The user's uid. If you do not specify a uid, Clusterworx uses the current uid.
[{--enable:|-U}] [{--disable:|-L:}]
These options allow you to enable or disable the user's account. The -U (unlock) and -L (lock) options are provided for compatibility with the useradd utility and allow you to enable and disable the user's account, respectively. If you do not specify either of these options, the user's account is enabled by default (unless no password is supplied).
[{--name:|-l:}<new name>] Change the login name for the user's account. If you do not specify this option, Clusterworx uses the previous login name.
<user> The user's login name.
userdel
Delete a Clusterworx user account.
<user> The user's login name.

usershow
Display the current settings for Clusterworx user(s).
[<user> [<user> ...]]
(Optional) The login name(s) of the user(s). Multiple entries are delimited by spaces. Leave this option blank to display all users.
passwd
Alter the password for a Clusterworx user. After making the change, Clusterworx prompts you to re-enter the password.
<user> The user's login name.
encryptpasswd
This option allows you to encrypt a clear-text password into the Clusterworx encrypted format and display it on screen. You may then copy and paste the encrypted password when creating a new user account. See the example on page 214.
Note
Encrypted password strings often contain characters with which the Linux shell has problems. To overcome this, encrypted text must be escaped using single quotes:
cwuser usermod '-p:$1$Jx^VLEZy$/7SmJmEbmbVMQW13kxaIg.' john

groupadd
Add a group to Clusterworx.
[{--description:|-d:}<description>]
The group's description. If you do not specify a description, this field remains blank.
[{--gid:|-g:}<gid>] The group's gid. If you do not specify a gid, Clusterworx assigns the first available gid greater than 499.


[{--roles:|-r:}<role>[,<role>,...]]
The roles associated with the group. If you do not specify a role(s), the group is not associated with any roles. Multiple entries are delimited by commas.
[{--regions:|-R:}<region>[,<region>,...]]
The region(s) associated with the group. If you do not specify a region(s), Clusterworx does not associate the group with any regions. Multiple entries are delimited by commas.
<group> Group name.

groupmod
Modify an existing Clusterworx group.
[{--description:|-d:}<description>]
The group's description. If you do not specify a description, Clusterworx uses the current group description.
[{--gid:|-g:}<gid>] The group's gid. If you do not specify a gid, Clusterworx uses the gid previously assigned.
[{--roles:|-r:}<role>[,<role>,...]]
The roles associated with the group. If you do not specify a role(s), the group maintains its previous role associations. Multiple entries are delimited by commas.
[{--regions:|-R:}<region>[,<region>,...]]
The regions associated with the group. If you do not specify a region(s), Clusterworx maintains the current region associations. Multiple entries are delimited by commas.
[{--name:|-n:}<new name>] Use this option to change the group name. If you do not specify a name, the group name remains unchanged.
<group> Current group name.
groupdel
Delete a Clusterworx group.
<group> Group name.
groupshow
Display the current settings for Clusterworx group(s).
[<group> [<group> ...]]
(Optional) Group name(s) for which to display the current settings. Multiple entries are delimited by spaces. Leave this option blank to display all groups.
roleadd
Add a role to the Clusterworx database.
[{--description:|-d:}<description>]
The role's description. If you do not specify a role description, this field remains blank.
[{--privileges:|-p:}<privilege>[,<privilege>,...]]
The privileges associated with the role. If you do not specify a privilege(s), Clusterworx does not assign any privileges to the role. Multiple entries are delimited by commas.
<role> The name of the role.

rolemod
Modify an existing Clusterworx role.

[{--description:|-d:}<description>]
The role's description. If you do not specify a description for the role, Clusterworx uses the current description.
[{--privileges:|-p:}<privilege>[,<privilege>,...]]
The privileges associated with the role. If you do not specify a privilege(s), Clusterworx uses the current privilege associations. Multiple entries are delimited by commas.
[{--name:|-n:}<new name>] Use this option to change the name of the role. If you do not specify a name, the role name remains unchanged.
<role> The name of the current role.

roledel
Delete a Clusterworx role.
<role> The name of the role to delete.
roleshow
Display the current settings for Clusterworx role(s).
[<role> [<role> ...]]
(Optional) The name of the role(s) for which to display the current settings. Multiple entries are delimited by spaces. Leave this option blank to display all roles.
privshow
Display the current settings for Clusterworx privilege(s).
[<privilege> [<privilege> ...]]
(Optional) The privilege(s) for which to display the current settings. Multiple entries are delimited by spaces. Leave this option blank to display all privileges.
[{--verbose|-v}] (Optional) Display verbose output when performing operations. This option is common to all subcommands.
[-signature] (Optional) Displays the application signature. The application signature contains the name, description, version, and build information of this application.
[{-usage|-help|-?}] (Optional) Display help information for the command and exit. All other options are ignored.
Examples
EXAMPLE 1
Display the current users in the system:
cwuser usershow -v


EXAMPLE 2
Add the user john to the users group:
cwuser useradd -g:users john
Note
John's account will be disabled until you add a password.
EXAMPLE 3
Add an encrypted password to a new user account:
cwuser encryptpasswd
<password>
The command outputs an encrypted string to use when creating the new account.
$1$Jx^VLEZy$/7SmJmEbmbVMQW13kxaIg.
Note
Because encrypted password strings often contain characters with which the Linux shell has problems, encrypted text and user names containing spaces (e.g., John Johnson) must be escaped using single quotes.
Create the new user account using the encrypted password.
cwuser useradd '-p:$1$Jx^VLEZy$/7SmJmEbmbVMQW13kxaIg.' -d:/home/john -s:/bin/bash -u:510 -g:users -c:'John Johnson' john
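The `$1$<salt>$<hash>` shape of the string above suggests that the Clusterworx encrypted format is standard MD5 crypt; assuming that holds, an equivalent string can also be generated with OpenSSL. A minimal sketch (the salt and password are made up; confirm compatibility against `cwuser encryptpasswd` output):

```shell
# Assumption: Clusterworx accepts standard MD5-crypt strings, as the
# $1$...$ prefix of the example output suggests.
# Salt "Jx8VLEZy" and password "secret123" are illustrative only.
hash=$(openssl passwd -1 -salt Jx8VLEZy secret123)
echo "$hash"
# Quote the result when passing it to cwuser, e.g.:
#   cwuser useradd "-p:$hash" -g:users john
```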

dbix

dbix {
[{-d|--delete} [<context> ...]]|
[{-i|--import}]|
[{-x|--export} [<context> ...]]|
[{-usage|-help|-?}]
}

Description
The dbix application provides support for importing, exporting, and deleting Clusterworx database entries. The application uses the standard input and output streams for reading and writing data, and the delete and export options accept an optional space-delimited list of contexts (a context refers to the path to the database attributes on which to perform the operation).
Parameters
[{-d|--delete} [<context> ...]]
Delete entries under the specified context(s).
[{-i|--import}]
Import entries from stdin.
[{-x|--export} [<context> ...]]
Export entries for the specified context(s) to stdout.
[{-usage|-help|-?}] (Optional) Display help information for the command and exit. All other options are ignored.
Examples
EXAMPLE 1
Export the entire database to a file:
dbix -x > cwx.3.2.4-May.20.2005.db
EXAMPLE 2
Export the hosts section of the database to a file:
dbix -x hosts > cwx.3.2.4-hosts.db
EXAMPLE 3
Delete the entire database:
dbix -d
(confirm action)
EXAMPLE 4
Import a new database (or additions):
dbix -i < cwx.3.2.4-new_hosts.db
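Because dbix writes to stdout, Examples 1 and 2 combine naturally with ordinary shell scripting into dated backups. A minimal sketch (the file-naming scheme is an arbitrary choice, and the dbix invocations are commented out so the fragment is safe to run off-cluster):

```shell
# Build a dated backup file name, e.g. cwx-2007.03.16.db (naming is arbitrary).
backup="cwx-$(date +%Y.%m.%d).db"
# On a Clusterworx master host you would then run:
#   dbix -x > "$backup"              # full database
#   dbix -x hosts > "hosts-$backup"  # or only the hosts section
echo "$backup"
```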

dbx

dbx {
[{--domain:|-d} <domain>] [{--format:|-f:} <format>] [{-usage|-help|-?}] [-runtime[:verbose]]
[-signature] [-splash]
}
Description
This utility exports specific file formats from the database. Supported formats include a simple host name list typically used for mpich, pdsh, etc., an IP address to host name map (/etc/hosts), and configuration files for powerman and conman.
Parameters
Note
Arguments and option values are case sensitive. Option names are not.
[{--domain:|-d} <domain>]
(Optional) Domain name.
[{--format:|-f:} <format>]
(Optional) Output file format. Supported formats are defined as follows:
names
Simple host name list.
hosts
IP address to host name map.
powerman
Powerman configuration file.
conman
Conman configuration file.
[{-usage|-help|-?}] (Optional) Display help information for the command and exit. All other options are ignored.
[-runtime[:verbose]] (Optional) Provides specific information about the current Java runtime environment.
[-signature] (Optional) Displays the application signature. The application signature contains the name, description, version, and build information of this application.
[-splash] (Optional) Enables the presentation of the application caption or splash screen. By default, on.
Examples
EXAMPLE 1
Use dbx to generate a conman.conf file:
dbx -f:conman > /etc/conman.conf
EXAMPLE 2
Use dbx to generate a hosts file:
dbx -f:hosts -d:lnxi.com > /etc/hosts

imgr

imgr {
{--image:|-i:}<image name> [{--kernel:|-k:}<kernel name>] [{--kernel-revision:|-K:}<revision>]
[{--payload:|-p:}<payload name>] [{--payload.revision:|-P:}<revision>] [{--force:|-f:}] [{--list:|-l:}]|
[{-usage|-help|-?}]
}
Description
The imgr command is used to modify the kernel or payload of an existing image. To create a new image, please refer to Image Management on page 120. The Imaging CLI allows you to perform the following operations:
Specify a kernel for an image
Specify a payload for an image
Note
If you change a kernel or payload, Clusterworx rebuilds the image but still requires that you commit the image to VCS. See vcs on page 228.
Parameters
{--image:|-i:}<image name> The name of the image to modify. By default, Clusterworx selects the version of the image that was most recently checked in.
[{--kernel:|-k:}<kernel name>]
(Optional) The name of the kernel to modify.
[{--kernel-revision:|-K:}<revision>]
(Optional) Specify which kernel revision to use. If you do not specify a revision, you will be asked whether or not to use the latest revision.
[{--payload:|-p:}<payload name>]
(Optional) The name of the payload.
[{--payload.revision:|-P:}<revision>]
(Optional) Specify which payload revision to use. If you do not specify a revision, you will be asked whether or not to use the latest revision.
[{--force:|-f:}] (Optional) Select the force option to automatically select the latest revision of a payload or kernel. Selecting this option suppresses the prompt that asks you whether or not to use the latest revision.
[{--list:|-l:}] (Optional) Display a list of working images.
[{-usage|-help|-?}] (Optional) Display help information for the command and exit. All other options are ignored.
Examples
Update image Compute to use revision 4 of kernel linux-2.4:
imgr -i:Compute -k:linux-2.4 -K:4
To use the latest revision of a payload in an image:
imgr -i:MyImage -p:MyPayload
You have not specified the payload revision (latest is 1)
Using latest revisions, continue (yes/no)?
yes

kmgr

kmgr {
{--name:|-n:}<kernel name> [{--description:|-d:}<description>]
{--path:|-p:}<path> [{--kernel:|-k:}<binary name>]
[{--architecture:|-a:}<architecture>] [{--modules:|-m:}<path>] [{--binary:|-b:}] [{--list:|-l:}]|
[{-usage|-help|-?}]
}
Description
The kmgr command is used to create a kernel package from a binary kernel or from a kernel source directory. The utility copies the binary kernel, .config, System.map, and modules to the kernel directory.
Parameters
{--name:|-n:}<kernel name> The kernel name.
[{--description:|-d:}<description>]
(Optional) A brief description of the kernel.
{--path:|-p:}<path>
The path to the kernel source.
[{--kernel:|-k:}<binary name>]
(Optional) The binary name of the kernel. By default, arch/<architecture>/boot/bzImage.
[{--architecture:|-a:}<architecture>]
(Optional) The kernel architecture: amd64 or ia32 (by default, ia32).
[{--modules:|-m:}<path>] (Optional) The absolute path to lib/modules/.
[{--binary:|-b:}] (Optional) Enable support for binary kernels.
[{--list:|-l:}] (Optional) Display a list of working kernels.
[{-usage|-help|-?}] (Optional) Display help information for the command and exit. All other options are ignored.
Example 1
Create a new kernel named linux-2.4:
kmgr -n:linux-2.4 -p:/usr/src/linux-2.4.20-8 -a:ia32
Example 2
Create a new kernel, linux-2.6, from a binary kernel:
kmgr -b -n:linux-2.6 -p:/boot/vmlinuz-2.6.16-smp -a:amd64

pdcp

pdcp {[
[-w <host>[,<host>,...]]|
[-x <host>[,<host>,...]]|
[-a]|
[-i]|
[-r]|
[-p]|
[-q]|
[-f <number>]|
[-l <user>]|
[-t <seconds>]|
[-d]]
<source> [<source> ...]
<destination>
}
Description
Pdcp is a parallel copy command used to copy files from a Master Host to all or selected hosts in the cluster. Unlike rcp, which copies files only to an individual host, pdcp can copy files to multiple remote hosts in parallel. When pdcp receives SIGINT (Ctrl+C), it lists the status of current threads. A second SIGINT within one second terminates the program.

Parameters
TARGET HOST LIST OPTIONS
Note
If you do not specify any of the following options, the WCOLL environment variable must point to a file that contains a list of hosts, one per line.
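Such a host list file can be generated and wired up in ordinary shell; a minimal sketch (the file path and host names foo01..foo05 are illustrative choices, not values from this guide):

```shell
# Write five illustrative host names (foo01..foo05) to a working file.
wcoll=/tmp/wcoll.hosts   # arbitrary location
for i in 1 2 3 4 5; do
  printf 'foo%02d\n' "$i"
done > "$wcoll"
export WCOLL="$wcoll"    # pdcp/pdsh will now read targets from this file
cat "$wcoll"
```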

[-w <host>[,<host>,...]]
Note
No spaces are allowed in comma-delimited lists.
(Optional) Execute this operation on the specified host(s). You may enter a range of hosts or a comma-delimited list of hosts (e.g., host[1-4,7,9]). Any list that consists of a single "-" character causes pdsh to read the target hosts from stdin, one per line.
[-x <host>[,<host>,...]]
(Optional) Exclude the specified hosts from this operation. You may enter a range of hosts or a comma-delimited list of hosts (e.g., host[1-4,7,9]). You may use this option in conjunction with other target host list options such as -a.
[-a] (Optional) Perform this operation on all hosts in the cluster.


[-i] (Optional) Use this option in conjunction with -a or -g to request canonical host names. By default, pdsh uses reliable host names.
Note
Gender or -g classifications are not currently supported in this version of pdsh.
[-r] (Optional) Copy recursively.
[-p] (Optional) Preserve modification time and modes.
[-q] (Optional) List option values and target hosts.
[-f <number>] (Optional) Set the maximum number of simultaneous remote copies (by default, 32).
[-l <user>] (Optional) This option allows you to copy files as another user, subject to authorization. For BSD rcmd, the invoking user and system must be listed in the user's *.rhosts file (even for root).
[-t <seconds>] (Optional) Set the connect time-out (by default, 10 seconds); this is concurrent with the normal socket-level time-out.
[-d] (Optional) Include more complete thread status when receiving SIGINT and, when finished, display connect and command time statistics on stderr.
[<source> ...]
List the source file(s) you want to copy from the Master Host. To copy multiple files, enter a space-delimited list of files (e.g., pdcp -a /source1 /source2 /source3 /destination).
Note
The destination is always the last file in the list.
<destination> The location to which to copy the file. The destination is set off from the source by a space.

Example 1
Copy /etc/hosts to foo01–foo05:
pdcp -w foo[01-05] /etc/hosts /etc
Example 2
Copy /etc/hosts to foo0 and foo2–foo5:
pdcp -w foo[0-5] -x foo1 /etc/hosts /etc
Example 3
To copy a file to all hosts in the cluster:
pdcp -a /etc/hosts /etc/
Example 4
To copy a directory recursively:
pdcp -a -r /scratch/dir /scratch
Example 5
To copy multiple files to a directory:
pdcp -a /etc/passwd /etc/shadow /etc/group /etc

pdsh

pdsh {
[[-w <host>[,<host>,...]]|
[-x <host>[,<host>,...]]|
[-a]|
[-i]|
[-q]|
[-f <number>]|
[-s]|
[-l <user>]|
[-t <seconds>]|
[-u <seconds>]|
[-n <tasks>]|
[-d]|
[-S]|
<host>[,<host>,...]]
<command>
}
Description
Pdsh is a variant of the rsh command. However, unlike rsh, which runs commands only on an individual host, pdsh allows you to issue parallel commands on groups of hosts. When pdsh receives SIGINT (Ctrl+C), it lists the status of current threads. A second SIGINT within one second terminates the program. If set, the DSHPATH environment variable is the PATH for the remote shell.
If a command is not specified on the command line, pdsh runs interactively, prompting for commands and executing them when terminated with a carriage return. In interactive mode, target hosts that time out on the first command are not contacted for subsequent commands. Commands prefaced with an exclamation point are executed on the local system.

Parameters
TARGET HOST LIST OPTIONS
[-w <host>[,<host>,...]]
Note
No spaces are allowed in comma-delimited lists.
(Optional) Execute this operation on the specified host(s). You may enter a range of hosts or a comma-delimited list of hosts (e.g., host[1-4,7,9]). Any list that consists of a single "-" character causes pdsh to read the target hosts from stdin, one per line.
[-x <host>[,<host>,...]]
(Optional) Exclude the specified hosts from this operation. You may enter a range of hosts or a comma-delimited list of hosts (e.g., host[1-4,7,9]). You may use this option in conjunction with other target host list options such as -a.
[-a] (Optional) Perform this operation on all hosts in the cluster. By default, a list of all hosts installed in the cluster is available under /etc/pdsh/machines.


[-i] (Optional) Use this option in conjunction with -a or -g to request canonical host names. By default, pdsh uses reliable host names.
Note
Gender or -g classifications are not currently supported in this version of pdsh.
[-q] (Optional) List option values and target hosts.
[-f <number>] (Optional) Set the maximum number of simultaneous remote commands (by default, 32).
[-s] (Optional) Combine the remote command stderr with stdout. Combining these streams saves one socket per connection but breaks remote cleanup when pdsh is interrupted with a Ctrl+C.
[-l <user>] (Optional) This option allows you to run remote commands as another user, subject to authorization. For BSD rcmd, the invoking user and system must be listed in the user's *.rhosts file (even for root).
[-t <seconds>] (Optional) Set the connect time-out (by default, 10 seconds); this is concurrent with the normal socket-level time-out.
[-u <seconds>] (Optional) Limit the amount of time a remote command is allowed to execute (by default, no limit is defined).
[-n <tasks>] (Optional) Set the number of tasks spawned per host. In order for this to be effective, the underlying remote shell service must support spawning multiple tasks.
[-d] (Optional) Include more complete thread status when receiving SIGINT and, when finished, display connect and command time statistics on stderr.
[-S] (Optional) Return the largest of the remote command return values.
<host>[,<host>,...]
The name of the host(s) on which to execute the specified operation. You may enter a range of hosts or a comma-delimited list of hosts (e.g., host[1-4,7,9]).
Note
No spaces are allowed in comma-delimited lists.
<command> The command you want to execute on the host(s).

Example 1
Run a command on foo7 and foo9–foo15:
pdsh -w foo[7,9-15] <command>
Example 2
Run a command on foo0 and foo2–foo5:
pdsh -w foo[0-5] -x foo1 <command>
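Ranges like foo[1-4,7,9] are expanded by pdsh and pdcp themselves; plain shell scripts that need the same host list must expand them explicitly. A minimal sketch of a hypothetical helper (not part of Clusterworx; handles only a single bracket group and no leading zeros):

```shell
# expand_hosts: expand a pdsh-style range such as "foo[1-4,7]" into one
# host name per line. Hypothetical helper for local scripting only.
expand_hosts() {
  prefix=${1%%\[*}            # text before the bracket, e.g. "foo"
  spec=${1#*\[}; spec=${spec%\]}   # contents of the brackets, e.g. "1-4,7"
  echo "$spec" | tr ',' '\n' | while IFS= read -r part; do
    case $part in
      *-*)                     # a-b range: emit prefix+a .. prefix+b
        lo=${part%-*}; hi=${part#*-}
        i=$lo
        while [ "$i" -le "$hi" ]; do echo "$prefix$i"; i=$((i+1)); done ;;
      *)  echo "$prefix$part" ;;   # single index
    esac
  done
}
expand_hosts 'foo[1-4,7]'
```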

Clusterworx System Administrator’s Guide<br />


Example 3<br />

In some instances, it is preferable to run pdsh commands using a pdsh shell. To open the shell for a specific<br />

group of hosts, enter the following:<br />

pdsh -w foo[0-5]<br />

From the shell, you may enter commands without specifying the host names:<br />

pdsh> date<br />

To exit the pdsh shell, type exit.<br />
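The way -w and -x interact when resolving targets can be sketched in plain shell. The helper below is hypothetical (expand_hosts is not part of pdsh or Clusterworx); it simply expands a prefix plus numeric range and drops anything on an exclude list, which is the net effect of combining the two options.<br />

```shell
# Hypothetical helper (not part of pdsh): expand prefix + numeric range
# into host names, skipping hosts named in a comma-separated exclude
# list -- roughly how "-w foo[0-5] -x foo1" resolves its targets.
expand_hosts() {
    prefix=$1 first=$2 last=$3 exclude=$4
    result=""
    i=$first
    while [ "$i" -le "$last" ]; do
        host="$prefix$i"
        case ",$exclude," in
            *",$host,"*) ;;                # excluded by -x
            *) result="$result $host" ;;   # kept as a target
        esac
        i=$((i + 1))
    done
    echo "${result# }"
}

# What "pdsh -w foo[0-5] -x foo1" would target:
expand_hosts foo 0 5 foo1   # prints: foo0 foo2 foo3 foo4 foo5
```

Note that, as in pdsh itself, the exclude list is comma-delimited with no spaces.<br />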



pmgr<br />

pmgr {<br />

[[{--description:|-d:}&lt;description&gt;] [{--include:|-i:}&lt;files&gt;]<br />

[{--include-from:|-I:}&lt;file&gt;] [{--location:|-l:}&lt;directory&gt;] [{--silent:|-s:}]<br />

[{--exclude:|-x:}&lt;files&gt;]] [{--exclude-from:|-X:}&lt;file&gt;] &lt;name&gt; |<br />

[{-usage|-help|-?}]<br />

}<br />

Description<br />

The pmgr utility generates a Clusterworx payload from an existing Linux installation to use on a specified<br />

host—however, <strong>Clusterworx</strong> services must be running on the remote host. An exclude list (or file) allows you<br />

to manage which files and directories you want to exclude from the payload (e.g., remote NFS mounted<br />

directories or /proc).<br />

Parameters<br />

[-d:&lt;description&gt;] (Optional) The description of the payload.<br />

[-i:&lt;files&gt;]<br />

(Optional) Enter the name of the file or directory to include in the payload.<br />

When you specify a directory, the payload will include all files and<br />

subdirectories contained in the directory.<br />

Tip<br />

To include a previously excluded item (i.e., a file or directory contained in an excluded directory), enter<br />

the name of the file or subdirectory.<br />

[{--include-from:|-I:}]<br />

(Optional) Enter the name of the file that contains a list of all files to include in<br />

the payload.<br />

[-l:&lt;directory&gt;] (Optional) The directory in which to create the payload. By default, this is the<br />

user's payload working directory with the payload name appended.<br />

[-s:] (Optional) Omit all output other than errors, including the payload creation<br />

progress meter and final summary. This is useful when scripting pmgr.<br />

[-x:&lt;files&gt;]<br />

(Optional) Exclude the named file or directory from the payload. Excluding a<br />

directory excludes all files and subdirectories.<br />

[{--exclude-from:|-X:}]<br />

(Optional) Enter the name of the file that contains a list of all files to exclude<br />

from the payload.<br />

&lt;name&gt; The name of the payload.<br />

[{-usage|-help|-?}] (Optional) Display help information for the command and exit. All other options<br />

are ignored.<br />

Example<br />

The following example demonstrates how to create a new payload from an existing host installation, n2, and<br />

exclude some unwanted directories from the payload:<br />

pmgr -x:/proc:/home:/var/log:/dev/pts:/mnt -h=n2 n2_payload<br />
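The exclude-list idea can be illustrated with tar as a stand-in archiver (pmgr's actual payload format is internal to Clusterworx, so this is only an analogy): excluding a directory drops it and everything beneath it, exactly what you want for /proc or remote NFS mounts.<br />

```shell
# Illustration only: tar stands in for pmgr's archiver. Build a tiny fake
# root, then archive it while excluding proc -- the same idea as
# "pmgr -x:/proc ... n2_payload".
workdir=$(mktemp -d)
mkdir -p "$workdir/root/etc" "$workdir/root/proc"
echo "n2" > "$workdir/root/etc/hostname"
echo "scratch" > "$workdir/root/proc/ignore-me"

# Exclude proc just as "-x:/proc" would:
tar -C "$workdir/root" --exclude='./proc' -cf "$workdir/payload.tar" .

# etc/hostname is in the archive; nothing under proc/ is:
listing=$(tar -tf "$workdir/payload.tar")
echo "$listing"
rm -rf "$workdir"
```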


powerman<br />

powerman {<br />

[[{--on|-1}]|<br />

[{--off|-0}]|<br />

[{--cycle|-c}]|<br />

[{--reset|-r}]|<br />

[{--flash|-f}]|<br />

[{--unflash|-u}]|<br />

[{--list|-l}]|<br />

[{--query|-q}]|<br />

[{--node|-n}]|<br />

[{--beacon|-b}]|<br />

[{--temp|-t}]|<br />

[{--help|-h}]|<br />

[{--license|-L}]|<br />

[{--destination|-d} host[:port]]|<br />

[{--version|-V}]|<br />

[{--device|-D}]|<br />

[{--telemetry|-T}]|<br />

[{--exprange|-x}]]<br />

[ ...]<br />

}<br />

Description<br />

PowerMan offers power management controls for hosts in clustered environments. Controls include power<br />

on, power off, and power cycle via remote power control (RPC) devices. Target host names are mapped to<br />

plugs on RPC devices in powerman.conf.<br />

Parameters<br />

[{--on|-1}] (Optional) Power hosts On.<br />

[{--off|-0}] (Optional) Power hosts Off.<br />

[{--cycle|-c}] (Optional) Cycle power to hosts.<br />

[{--reset|-r}] (Optional) Assert hardware reset for hosts (if implemented by RPC).<br />

[{--flash|-f}] (Optional) Turn beacon On for hosts (if implemented by RPC).<br />

[{--unflash|-u}] (Optional) Turn beacon Off for hosts (if implemented by RPC).<br />

[{--list|-l}] (Optional) List available hosts. If possible, output is compressed into host ranges.<br />

[{--query|-q}] (Optional) Query plug status of a host(s). If you do not specify a host(s),<br />

PowerMan queries the plug status of all hosts. Status is not cached—PowerManD<br />

queries the appropriate RPCs each time you use this option. Hosts connected to<br />

RPCs that cannot be contacted (e.g., due to network failure) are reported as<br />

status unknown. If possible, output is compressed into host ranges.<br />

[{--node|-n}] (Optional) Query host power status (if implemented by RPC). If you do not<br />

specify a host(s), PowerMan queries the power status of all hosts. Please note<br />

that this option returns the host’s power status only, not its operational status. A<br />

host in the Off state could be On at the plug and operating in standby power<br />

mode.<br />

[{--beacon|-b}] (Optional) Query beacon status (if implemented by RPC). If you do not specify a<br />

host(s), PowerMan queries the beacon status of all hosts.<br />



[{--temp|-t}] (Optional) Query host temperature (if implemented by RPC). If you do not<br />

specify a host(s), PowerMan queries the temperature of all hosts. Temperature<br />

information is not interpreted by PowerMan and is reported as received from<br />

the RPC on one line per host, prefixed by the host name.<br />

[{--help|-h}] (Optional) Display option summary.<br />

[{--license|-L}] (Optional) Show PowerMan license information.<br />

[{--destination|-d} host[:port]]<br />

(Optional) Connect to a PowerMan daemon on a non-default host and optional<br />

port.<br />

[{--version|-V}] (Optional) Display the PowerMan version number.<br />

[{--device|-D}] (Optional) Display RPC status information. If you specify a host(s), PowerMan<br />

displays only RPCs that match the host list.<br />

[{--telemetry|-T}] (Optional) Displays RPC telemetry information as commands are processed.<br />

This is useful for debugging device scripts.<br />

[{--exprange|-x}] (Optional) Expand host ranges in query responses.<br />

[ ...]<br />

The name of the host(s) on which to execute the specified operation. You may<br />

enter a range of hosts or a space- or comma-delimited list of hosts (e.g.,<br />

host[1-4] host7 host9 or host[1-4],host7,host9).<br />

FILES<br />

/usr/sbin/powermand<br />

/usr/bin/powerman<br />

/usr/bin/pm<br />

/etc/powerman/powerman.conf<br />

/etc/powerman/*.dev<br />

Example 1<br />

To power on hosts bar, baz, and n01–n05:<br />

powerman --on bar baz n[01-05]<br />

Example 2<br />

To turn off hosts n4 and n7–n9:<br />

powerman -0 n4,n[7-9]<br />
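The host-to-plug mapping that powerman.conf supplies can be sketched as a simple lookup table. Everything here is hypothetical (the rpc0:1-style plug names and the plug_for and power helpers are invented for illustration); the real daemon drives RPC hardware, whereas this sketch only prints what would be dispatched.<br />

```shell
# Hypothetical sketch: map hosts to RPC plugs (the role powerman.conf
# plays) and print the action that would be dispatched to each plug.
plug_for() {
    case $1 in
        bar) echo "rpc0:1" ;;
        baz) echo "rpc0:2" ;;
        n01) echo "rpc1:1" ;;
        *)   echo "unknown"; return 1 ;;
    esac
}

power() {   # usage: power <on|off|cycle> <host>...
    action=$1; shift
    for host in "$@"; do
        echo "$action $host -> plug $(plug_for "$host")"
    done
}

power on bar baz   # prints one "on <host> -> plug <plug>" line per host
```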


vcs<br />

vcs {<br />

[{identify|id}]|<br />

[status]|<br />

[include &lt;files&gt;]|<br />

[exclude &lt;files&gt;]|<br />

[archive &lt;filename&gt;]|<br />

[import -R:&lt;repository&gt; -M:&lt;module&gt; [-n:&lt;name&gt;] [-d:&lt;description&gt;] [&lt;files&gt;]]|<br />

[commit [-n:&lt;name&gt;] [-d:&lt;description&gt;] [&lt;files&gt;]]|<br />

[branch [-n:&lt;name&gt;] [-d:&lt;description&gt;] [&lt;files&gt;]]|<br />

[{checkout|co} -R:&lt;repository&gt; -M:&lt;module&gt; [-r:&lt;revision&gt;|&lt;branch&gt;|&lt;name&gt;]]|<br />

[{update|up} [-r:&lt;revision&gt;|&lt;branch&gt;|&lt;name&gt;] [&lt;files&gt;]]|<br />

[name [-R:&lt;repository&gt;] [-M:&lt;module&gt;] [-r:&lt;revision&gt;|&lt;branch&gt;|&lt;name&gt;] &lt;name&gt;]|<br />

[describe [-R:&lt;repository&gt;] [-M:&lt;module&gt;] [-r:&lt;revision&gt;|&lt;branch&gt;|&lt;name&gt;] &lt;description&gt;]|<br />

[{narrate|log} [-R:&lt;repository&gt; -M:&lt;module&gt;] [-r:&lt;revision&gt;|&lt;branch&gt;|&lt;name&gt;]]|<br />

[iterate [-R:&lt;repository&gt; [-M:&lt;module&gt; [-r:&lt;revision&gt;|&lt;branch&gt;|&lt;name&gt;]]]]|<br />

[list]|<br />

[{-usage|-help|-?}]<br />

}<br />

Description<br />

Manage version-controlled directories within Clusterworx.<br />

Parameters<br />

[{identify|id}] (Optional) Display information about the module contained in the current<br />

working directory.<br />

[status] (Optional) Display the status of the files within the current working directory<br />

including whether they have been added (A), modified (M) or deleted (D).<br />

[include &lt;files&gt;] (Optional) Add the provided list of files to the include list. You may also use this<br />

option to override a specific file exclusion.<br />

[exclude &lt;files&gt;] (Optional) Add the provided list of files to the exclude list. Excluding files allows you<br />

to remove files that may cause problems (e.g., when trying to archive files).<br />

[archive &lt;filename&gt;] (Optional) Create an archive of the current working directory in the given file.<br />

This option may be used to archive a host and include it in VCS as a payload.<br />

[import -R:&lt;repository&gt; -M:&lt;module&gt; [-n:&lt;name&gt;] [-d:&lt;description&gt;] [&lt;files&gt;]]<br />

(Optional) Create a new module with the provided list of files or all of the<br />

current working directory.<br />

[commit [-n:&lt;name&gt;] [-d:&lt;description&gt;] [&lt;files&gt;]]<br />

(Optional) Insert a new revision in the module using the provided list of files or<br />

any working copy modifications.<br />

[branch [-n:&lt;name&gt;] [-d:&lt;description&gt;] [&lt;files&gt;]]<br />

(Optional) Insert a new revision that is not on tip using the provided list of files<br />

or any working copy modifications.<br />

[{checkout|co} -R:&lt;repository&gt; -M:&lt;module&gt; [-r:&lt;revision&gt;|&lt;branch&gt;|&lt;name&gt;]]<br />

(Optional) Retrieve an existing revision from a module. The contents of the<br />

module will be stored in a new directory named after the module.<br />



[{update|up} [-r:&lt;revision&gt;|&lt;branch&gt;|&lt;name&gt;] [&lt;files&gt;]]<br />

(Optional) Update the current directory to use the latest tip revision of a<br />

branch (e.g., -r:3.4), the main trunk of a specific branch (e.g., -r:4), or a branch<br />

with a specific name (e.g., -r:Golden). The &lt;files&gt; option allows you to update a<br />

specific file contained in a payload.<br />

[name [-R:&lt;repository&gt;] [-M:&lt;module&gt;] [-r:&lt;revision&gt;|&lt;branch&gt;|&lt;name&gt;] &lt;name&gt;]<br />

(Optional) Add, modify or delete the optional name or alias of a revision. Names<br />

are unique revision identifiers for the entire module. A blank for the name will<br />

delete the previous value.<br />

[describe [-R:&lt;repository&gt;] [-M:&lt;module&gt;] [-r:&lt;revision&gt;|&lt;branch&gt;|&lt;name&gt;] &lt;description&gt;]<br />

(Optional) Add, modify or delete the optional description of a revision. A blank<br />

for the description will delete the previous value.<br />

[{narrate|log} [-R:&lt;repository&gt; -M:&lt;module&gt;] [-r:&lt;revision&gt;|&lt;branch&gt;|&lt;name&gt;]]<br />

(Optional) Display the history of a module revision.<br />

[iterate [-R:&lt;repository&gt; [-M:&lt;module&gt; [-r:&lt;revision&gt;|&lt;branch&gt;|&lt;name&gt;]]]]<br />

(Optional) Display the organizational information of the version service.<br />

[list] (Optional) Display a list of all category types (payloads, kernels, and images) that<br />

have been checked into VCS.<br />

[{-usage|-help|-?}] (Optional) Display help information for the command and exit. All other options<br />

are ignored.<br />

Examples<br />

EXAMPLE 1<br />

Display a list of images contained in the Version Control System:<br />

vcs iterate -R:images<br />

EXAMPLE 2<br />

Display a list of files that have changed since the last time the Compute payload was checked out:<br />

cd /opt/cwx/imaging/root/payloads/Compute<br />

vcs status<br />

EXAMPLE 3<br />

List current versions of all category types (payloads, kernels, and images) checked into VCS:<br />

vcs list<br />

Images<br />

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -<br />

MyImage (1) - Kernel: MyKernel (3) Payload: MyPayload (6.1.4)<br />

TestImage (1) - Kernel: Compute (2) Payload: SLES10 (23)<br />

Kernels<br />

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -<br />

MyKernel (5)<br />

Compute (2)<br />

Payloads<br />

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -<br />

MyPayload (6.1.7)<br />

SLES9 (34)<br />

SLES10 (23)<br />


EXAMPLE 4<br />

Check out a specific revision, 8, of a version controlled payload named Compute:<br />

vcs checkout -R:payloads -M:Compute -r:8<br />

EXAMPLE 5<br />

Use VCS to make sure you have the latest revision of what was originally checked out in the previous<br />

example:<br />

cd /opt/cwx/imaging/&lt;username&gt;/payloads/Compute<br />

vcs update<br />
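The revision-selection rule that update applies (tip of a branch, trunk, or a named revision) can be modeled in shell for illustration. latest_on_branch below is a hypothetical helper, not a Clusterworx command; it only demonstrates the "pick the highest revision under a branch prefix" part of the behavior.<br />

```shell
# Hypothetical model of "-r:<branch>" selection: among known revision
# numbers, pick the latest one on the given branch (e.g., 3.4 -> 3.4.2).
latest_on_branch() {
    branch=$1; shift
    best=""
    for rev in "$@"; do
        case $rev in
            "$branch" | "$branch".*)
                # keep the version-wise latest matching revision
                if [ -z "$best" ] || \
                   [ "$(printf '%s\n' "$best" "$rev" | sort -V | tail -n1)" = "$rev" ]; then
                    best=$rev
                fi ;;
        esac
    done
    echo "$best"
}

latest_on_branch 3.4 3.4.1 3.4.2 4 4.1   # prints: 3.4.2
```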



xms<br />

xms<br />

Description<br />

This command displays the name, build date, build time, version number, and build number for each module<br />

installed in Clusterworx. The ClusterworxRelease module displays the version information for the<br />

Clusterworx installation on the system.<br />

Examples<br />

EXAMPLE 1<br />

Display all build information for each module:<br />

xms<br />

2004-11-15 23:44:48.269-0700 1.1.3-5 Authentication<br />

2004-11-15 23:45:02.268-0700 1.1.4-5 AuthenticationServer<br />

2004-12-21 13:28:14.364-0700 3.2.2-8 ClusterworxRelease<br />

2004-11-15 23:45:12.265-0700 1.1.3-5 Command<br />

2004-11-15 23:45:29.268-0700 1.1.4-5 CommandDesktop<br />

2004-11-15 23:45:47.294-0700 1.1.4-4 CommandServer<br />

2004-11-15 23:46:31.279-0700 1.2.1-4 DHCP<br />

2004-11-15 15:51:54.322-0700 1.2.2-7 DHCPServer<br />

2004-11-15 23:45:57.265-0700 1.1.1-4 Database<br />

2004-12-01 13:40:30.318-0700 1.1.0-8 DatabaseConsole<br />

2004-11-15 23:46:22.264-0700 2.1.1-4 DatabaseServer<br />

2004-11-15 23:46:57.268-0700 1.1.2-5 DistributionServer<br />

2004-11-15 23:47:40.287-0700 1.2.1-4 File<br />

2004-11-15 23:47:50.263-0700 1.1.4-4 FileConsole<br />

2004-11-24 16:21:58.362-0700 1.2.1-8 FileDesktop<br />

2004-11-15 23:48:24.266-0700 1.1.4-4 FileServer<br />

2004-12-08 14:02:56.356-0700 1.3.2-9 Foundation<br />

2004-11-15 23:48:51.282-0700 2.0.1-4 Host<br />

2004-12-17 12:00:05.357-0700 1.0.2-8 HostConsole<br />

2004-12-17 12:01:19.363-0700 2.0.2-11 HostDesktop<br />

2004-11-15 23:49:47.265-0700 2.0.2-4 HostServer<br />

2004-11-15 23:49:59.261-0700 1.1.3-4 Icebox<br />

2004-12-17 12:02:46.325-0700 1.1.4-10 IceboxDesktop<br />

2004-11-15 23:50:35.282-0700 1.1.4-4 IceboxServer<br />

2004-11-15 23:50:46.262-0700 1.2.1-4 Image<br />

2004-11-15 23:51:01.264-0700 1.2.2-4 ImageConsole<br />

2004-12-17 12:03:51.363-0700 1.3.2-11 ImageDesktop<br />

2004-12-08 14:06:59.367-0700 1.2.2-9 ImageServer<br />

2004-11-15 23:52:29.287-0700 1.2.1-4 Instrumentation<br />

2004-09-22 15:32:02.549-0600 1.0.0-0 InstrumentationConsole<br />

2004-12-17 12:06:11.318-0700 1.2.2-10 InstrumentationServer<br />

2004-12-17 12:06:54.383-0700 1.1.4-8 Integration<br />

2004-11-15 23:53:15.265-0700 1.2.1-4 Kernel<br />

2004-11-15 23:53:30.265-0700 1.2.2-4 KernelConsole<br />

2004-11-15 23:53:49.283-0700 1.2.2-4 KernelServer<br />

2004-11-15 23:54:04.264-0700 2.0.2-4 License<br />

2004-11-15 23:54:19.266-0700 2.0.1-4 LicenseDesktop<br />

2004-11-15 23:54:36.272-0700 2.0.2-4 LicenseServer<br />

2004-11-15 23:54:50.266-0700 1.0.1-4 Log<br />

2004-11-15 23:55:05.315-0700 1.0.2-4 LogServer<br />

2004-12-17 12:07:43.329-0700 1.2.2-9 Migration<br />

2004-12-17 12:08:15.318-0700 1.3.1-9 Payload<br />

2004-11-15 23:55:52.262-0700 1.3.2-4 PayloadConsole<br />

2004-12-21 13:25:56.321-0700 1.3.2-11 PayloadServer<br />

2004-11-15 23:56:28.262-0700 1.3.0-4 Provisioning<br />

2004-11-24 16:31:23.369-0700 1.1.0-8 ProvisioningConsole<br />

2004-12-17 12:09:40.363-0700 1.3.0-10 ProvisioningDesktop<br />

2004-11-15 23:57:34.269-0700 1.3.0-4 ProvisioningServer<br />

- - : : . - . . - Testing<br />

2004-12-17 12:10:54.371-0700 1.0.2-8 Tree<br />

2004-11-24 16:33:14.369-0700 1.0.2-8 TreeDesktop<br />

2004-11-15 23:58:23.267-0700 1.0.2-4 TreeServer<br />


2004-11-15 23:58:40.265-0700 1.0.2-4 UserConsole<br />

2004-12-17 12:13:30.367-0700 1.2.2-11 UserDesktop<br />

2004-11-15 23:59:09.298-0700 1.2.2-4 Versioning<br />

2004-11-15 23:59:25.268-0700 1.2.3-4 VersioningConsole<br />

2004-12-17 12:14:23.318-0700 1.2.3-9 VersioningServer<br />

2004-11-15 23:59:57.286-0700 1.2.1-4 Workload<br />

2004-11-16 00:00:18.322-0700 1.2.2-4 WorkloadDesktop<br />

2004-11-15 09:17:48.250-0700 3.0.1-1 XeroOne<br />

- - : : . - . . - license<br />

EXAMPLE 2<br />

Display all modules sorted by build time:<br />

xms -o:t<br />

2004-09-22 15:32:02.549-0600 1.0.0-0 InstrumentationConsole<br />

2005-05-24 15:20:10.273-0600 1.1.6-2 AuthenticationServer<br />

2005-05-24 15:20:15.270-0600 1.2.0-2 Command<br />

2005-05-24 15:20:52.414-0600 1.2.3-2 DHCP<br />

2005-05-24 15:21:22.277-0600 1.1.6-2 FileServer<br />

2005-05-24 15:21:28.356-0600 2.1.0-2 Host<br />

2005-05-24 15:21:49.376-0600 2.1.0-1 HostServer<br />

2005-05-24 15:21:56.306-0600 1.1.5-1 Icebox<br />

2005-05-24 15:22:03.286-0600 1.2.0-1 IceboxServer<br />

2005-05-24 15:22:40.429-0600 1.2.3-1 Instrumentation<br />

2005-05-24 15:22:52.336-0600 1.1.6-1 Integration<br />

2005-05-24 15:23:18.274-0600 2.0.3-1 LicenseDesktop<br />

2005-05-24 15:23:34.467-0600 1.1.0-1 Log<br />

2005-05-24 15:23:44.351-0600 1.3.0-1 Migration<br />

2005-05-24 15:24:14.374-0600 1.5.0-2 Provisioning<br />

2005-05-24 15:24:27.303-0600 1.5.0-2 ProvisioningServer<br />

2005-05-24 15:24:33.402-0600 1.0.4-2 Tree<br />

2005-05-24 15:24:36.406-0600 1.1.0-2 TreeDesktop<br />

2005-05-24 15:24:39.390-0600 1.0.5-2 TreeServer<br />

2005-05-24 16:12:49.230-0600 1.1.5-7 Authentication<br />

2005-05-31 16:17:02.231-0600 1.4.0-3 PayloadConsole<br />

2005-06-01 12:54:36.230-0600 1.1.4-4 DistributionServer<br />

2005-06-01 13:26:49.229-0600 2.1.3-3 DatabaseServer<br />

2005-06-01 13:27:34.231-0600 1.3.0-4 File<br />

2005-06-01 13:27:51.230-0600 1.2.0-4 FileConsole<br />

2005-06-01 13:48:46.229-0600 1.4.0-2 Image<br />

2005-06-01 13:49:02.231-0600 1.4.0-2 ImageConsole<br />

2005-06-01 13:51:54.230-0600 2.0.4-3 LicenseServer<br />

2005-06-01 13:52:18.230-0600 1.1.0-3 LogServer<br />

2005-06-01 13:53:54.229-0600 1.2.6-3 VersioningServer<br />

2005-06-01 14:40:00.229-0600 1.2.4-4 Versioning<br />

2005-06-08 15:53:31.230-0600 1.2.0-4 Database<br />

2005-06-23 11:05:45.228-0600 1.2.0-3 CommandServer<br />

2005-06-23 11:06:02.227-0600 1.2.3-3 FileDesktop<br />

2005-06-23 11:07:09.228-0600 1.4.0-3 ImageServer<br />

2005-06-23 11:08:04.230-0600 1.2.3-2 Kernel<br />

2005-06-23 11:10:20.229-0600 1.3.0-3 KernelConsole<br />

2005-06-23 11:10:41.229-0600 1.2.4-3 KernelServer<br />

2005-06-23 11:11:40.228-0600 1.3.0-4 UserDesktop<br />

2005-06-23 18:28:33.228-0600 1.0.4-10 UserConsole<br />

2005-06-30 11:03:00.429-0600 4.0.0-11 XeroOne<br />

2005-06-30 14:11:48.239-0600 1.2.0-5 DatabaseConsole<br />

2005-06-30 14:12:15.232-0600 1.3.0-5 DHCPServer<br />

2005-06-30 14:14:12.231-0600 1.5.0-1 ProvisioningDesktop<br />

2005-06-30 14:15:25.233-0600 1.3.0-5 VersioningConsole<br />

2005-07-12 15:42:21.271-0600 1.4.0-7 Foundation<br />

2005-07-12 15:46:31.234-0600 1.1.0-13 HostConsole<br />

2005-07-12 15:49:28.239-0600 1.2.0-3 IceboxDesktop<br />

2005-07-12 16:05:13.235-0600 1.3.0-3 ProvisioningConsole<br />

2005-07-13 15:52:57.238-0600 2.0.4-3 License<br />

2005-07-13 17:06:07.237-0600 2.1.0-9 HostDesktop<br />

2005-07-15 10:02:12.233-0600 3.3.0-22 ClusterworxRelease<br />

2005-07-15 10:30:23.235-0600 1.2.5-9 InstrumentationServer<br />

2005-07-15 10:30:44.246-0600 1.2.0-6 CommandDesktop<br />

2005-07-15 10:31:36.236-0600 1.5.0-9 ImageDesktop<br />

2005-07-15 10:32:32.235-0600 1.4.0-3 Payload<br />

2005-07-15 10:33:06.234-0600 1.4.0-10 PayloadServer<br />



Glossary<br />

Anti-aliasing A technique used to smooth images and text to improve their<br />

appearance on screen.<br />

Architecture-independent Allows hardware or software to function regardless of<br />

hardware platform. Both Clusterworx and Icebox work together to deliver seamless cluster management<br />

functionality. Because Icebox physically monitors individual processor temperatures and has direct power<br />

control, administrators are not dependent on specific motherboards.<br />

Baud rate A unit of measure that describes data transmission rates (in bits per second).<br />

Block size The largest amount of data that the file system will allocate contiguously.<br />

boot.profile A file that contains instructions on how to boot a host.<br />

Boot utilities Utilities added to the RAM Disk that run during the boot process. Boot utilities allow you to<br />

create such things as custom, pre-finalized scripts using utilities that are not required for standard Linux<br />

versions.<br />

Cluster Clustering is a method of linking multiple computers or compute hosts together to form a unified<br />

and more powerful system. These systems can perform complex computations at the same level as a<br />

traditional supercomputer by dividing the computations among all of the processors in the cluster, then<br />

gathering the data once the computations are completed. A cluster refers to all of the physical elements of<br />

your Linux Networx solution, including the Clusterworx Master Host, compute hosts, Clusterworx, Icebox,<br />

UPS, high-speed network, storage, and the cabinet.<br />

Clusterworx Master Host The Clusterworx Master Host is the host that controls the remaining hosts in a<br />

cluster (for large systems, multiple masters may be required). This host is reserved exclusively for managing<br />

the cluster and is not typically available to perform tasks assigned to the remaining hosts.<br />

Command-line Interface (CLI) A user interface to the Icebox through which the administrator may enter<br />

commands to perform additional tasks and configurations on the system. The CLI is accessible via the Serial<br />

Console port, a Telnet session, and SSH.<br />

DHCP Dynamic Host Configuration Protocol. Assigns dynamic IP addresses to devices on a network.<br />

Diskless host A host whose operating system and file system are installed into physical memory. This<br />

method is generally referred to as RAMfs or TmpFS.<br />

EBI An ELF Binary Image that contains the kernel, kernel options, and a RAM Disk.<br />

Event engine Allows administrators to trigger events based on a change in system status (e.g., when<br />

processors rise above a certain temperature or experience a power interruption). Administrators may<br />

configure triggers to inform users of a specific event or to take a specific action.<br />

Ext The original extended file system for Linux systems. Provides 255-character filenames and supports file<br />

sizes up to 2 Gigabytes.<br />


Ext2 The second extended file system for Linux systems. Offers additional features that make the file<br />

system more compatible with other file systems and provides support for file system extensions, larger file<br />

sizes (up to 4 Terabytes), symbolic links, and special file types.<br />

Ext3 Provides a journaling extension to the standard ext2 file system on Linux. Journaling reduces time<br />

spent recovering a file system, critical in environments where high availability is important.<br />

GUI A Graphical User Interface employs the use of visual cues and indicators (not just text) to help you<br />

navigate through your system and perform system operations. Clusterworx uses a GUI to provide intuitive<br />

cluster navigation and configuration.<br />

Group A group refers to an organization with shared or similar needs. A cluster may contain multiple groups<br />

with unique or shared rights and privileges. A group may also refer to an administrator-defined collection of<br />

hosts within a cluster that perform tasks such as data serving, Web serving, and computational number<br />

crunching.<br />

Hardware flow control A Clusterworx control setting for Icebox host and auxiliary ports. Enabling<br />

hardware flow control allows a transaction recipient to tell the transmitter to stop sending data (e.g., if the<br />

recipient’s buffer is getting too full). This can eliminate data loss due to buffer overflow.<br />

Health monitoring An element of the Instrumentation Service used to track and display the state of all<br />

hosts in the system. Health status icons appear next to each host viewed with the instrumentation service or<br />

from the navigation tree to provide visual cues about system health. Similar icons appear next to clusters,<br />

partitions, and regions to indicate the status of hosts contained therein.<br />

Host An individual server or computer within the cluster that operates in parallel with other hosts in the<br />

cluster. Hosts may contain multiple processors.<br />

Icebox An important piece of the Linux Networx cluster management solution, the Icebox is an<br />

architecture-independent hardware device that provides remote monitoring and advanced power control for<br />

hosts installed in your cluster. The Icebox can monitor up to four processors per host and is accurate to ± 1<br />

degrees Celsius. The Icebox also contains advanced serial switching that allows administrators to maintain a<br />

redundant serial network.<br />

image.profile A file used to generate boot.profile. This file contains information about the image, including<br />

the payload, kernel, and partition layout.<br />

IP address A 32-bit number that identifies each sender or receiver of information. In order to transmit or<br />

receive information on the network, each Icebox must have its own unique IP address (which can be set by<br />

the administrator).<br />

Kerberos Kerberos is a network authentication protocol. It is designed to provide strong authentication for<br />

client/server applications by using secret-key cryptography.<br />

Kernel The binary kernel, a .config file, System.map, and modules (if any).<br />

LDAP Lightweight Directory Access Protocol is an Internet protocol that email programs use to look up<br />

contact information from a server.<br />

Listener A listener constantly reads and reviews system metrics. Configuring listener thresholds allows you<br />

to trigger loggers to address specific issues as they arise.<br />

Logger The action taken when a threshold exceeds its maximum or minimum value. Common logger events<br />

include sending messages to the centralized Clusterworx message log, logging to a file, logging to the serial<br />

console, and shutting down the host.<br />

MAC address A hardware address unique to each device installed in the system.<br />

Metrics Used to track logger events and report data to the instrumentation service (where it may be<br />

monitored).<br />



MIB Management Information Base. The MIB is a tree-shaped information structure that defines what sort<br />

of data can be manipulated via SNMP.<br />

Monitors Monitors run periodically on hosts and provide the metrics that are gathered, processed, and<br />

displayed using the Clusterworx instrumentation service.<br />

Multi-user Allows multiple administrators to simultaneously log into and administer the cluster.<br />

Netmask A string of 0s and 1s used to split an IP address into its network and host portions. The binary 1s<br />

at the beginning of the mask select the network ID portion of the IP address; the binary 0s that follow select<br />

the host ID portion. A commonly used netmask is 255.255.255.0 (255 is the decimal equivalent of a binary<br />

string of eight ones).<br />
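The masking operation itself is a bitwise AND of address and mask, octet by octet, which a short shell sketch can demonstrate (network_id is an illustrative helper, not a Clusterworx tool):<br />

```shell
# Illustration: AND each octet of an IP address with the matching
# netmask octet to recover the network ID.
network_id() {   # usage: network_id <ip> <netmask>
    # Split both dotted quads into eight positional parameters:
    # $1-$4 = address octets, $5-$8 = mask octets.
    set -- $(echo "$1" | tr '.' ' ') $(echo "$2" | tr '.' ' ')
    echo "$(( $1 & $5 )).$(( $2 & $6 )).$(( $3 & $7 )).$(( $4 & $8 ))"
}

network_id 192.168.4.27 255.255.255.0   # prints: 192.168.4.0
```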

NIS Network Information Service makes information available throughout the entire network.<br />

Node See Host.<br />

Partition Partitions are used to separate clusters into non-overlapping collections of hosts.<br />

Payload A compressed file system that is downloaded via multicast during the provisioning process.<br />

Plug-ins Programs or utilities added to the boot process that expand system capabilities.<br />

RAID Redundant Array of Independent Disks. Provides a method of accessing multiple, independent disks as<br />

if the array were one large disk. Spreading data over multiple disks improves access time and reduces the<br />

risk of losing all data if a drive fails.<br />

RAM Disk A small, virtual drive that is created and loaded with the utilities that are required when you<br />

provision the host. In order for host provisioning to succeed, the RAM Disk must contain specific boot<br />

utilities. Under typical circumstances, you will not need to add boot utilities unless you are creating<br />

something such as a custom, pre-finalized script that needs utilities not required by standard Linux versions<br />

(e.g., modprobe).<br />

RHEL Red Hat Enterprise Linux.<br />

Region A region is a subset of a partition and may share any hosts that belong to the same partition—even if<br />

the hosts are currently used by another region.<br />

Role Roles are associated with groups and privileges, and define the functionality assigned to each group.<br />

Secure remote access The ability to monitor and control the cluster from a distant location through an<br />

SSL-encrypted connection. Administrators have the benefit of secure remote access to their clusters through<br />

any Java-enhanced browser. Clusterworx can be used remotely, allowing administrators access to the cluster<br />

from anywhere in the world.<br />

Secure Shell (SSH) SSH is used to create a secure connection to the CLI. Connections made with SSH are<br />

encrypted and safe to use over insecure networks.<br />

SLES SuSE Linux Enterprise Server.<br />

Version branching The ability to modify an existing payload, kernel, or image under version control and<br />

check it back into VCS as a new, versioned branch of the original item.<br />

Version Control System (VCS) The Clusterworx Version Control System allows users with privileges to<br />

manage changes to payloads, kernels, or images (similar in nature to managing changes in source code with a<br />

version control system such as CVS). The Version Control System supports common Check-Out and<br />

Check-In operations.<br />

Versioned copy A versioned copy of a payload, kernel, or image is stored in VCS.<br />

Working copy A working copy of a payload, kernel, or image is currently present in the working area only<br />

(e.g., /opt/cwx/imaging/&lt;username&gt;/payloads). Working copies are not stored in VCS.<br />



Appendix<br />

Pre-configured Metrics<br />

The Clusterworx instrumentation service supports the following metrics. For<br />

information about defining and using metrics, see Metrics on page 175.<br />

Architecture<br />

Metric Name Format and Description<br />

Architecture hosts.{host.moniker}.os.architecture<br />

The host's hardware architecture.<br />

CPU<br />

Metric Name Format and Description<br />

CPU Bogomips hosts.{host.moniker}.cpus.[#].bogomips<br />

A relative measurement of how fast the processor runs.<br />

CPU Cache hosts.{host.moniker}.cpus.[#].cache<br />

The processor cache size.<br />

CPU Count hosts.{host.moniker}.cpus.[#]<br />

The number of processors on the host.<br />

CPU Family hosts.{host.moniker}.cpus.[#].family<br />

The processor family type.<br />

CPU FPU hosts.{host.moniker}.cpus.[#].fpu<br />

Whether the processor has a floating-point unit.<br />

CPU Hardware Interrupts hosts.{host.moniker}.cpus.[#].irq.hard<br />

The cycles used by a specific CPU for hardware interrupts.<br />

CPU Hardware Interrupts Aggregate hosts.{host.moniker}.cpu.irq.hard<br />

The total cycles used by all CPUs for hardware interrupts.<br />

CPU I/O Wait hosts.{host.moniker}.cpus.[#].iowait<br />

The cycles used by a specific CPU waiting for I/O.<br />


CPU I/O Wait Aggregate hosts.{host.moniker}.cpu.iowait<br />

The total cycles used by all CPUs waiting for I/O.<br />

CPU Level hosts.{host.moniker}.cpus.[#].level<br />

The identifying level of the processor.<br />

CPU Model hosts.{host.moniker}.cpus.[#].model<br />

The model of the processor.<br />

CPU Model Name hosts.{host.moniker}.cpus.[#].name<br />

The model name of the processor.<br />

CPU Nice hosts.{host.moniker}.cpus.[#].nice<br />

The cycles used by a specific CPU in user mode with low priority.<br />

CPU Nice Aggregate hosts.{host.moniker}.cpu.nice<br />

The total cycles used by all CPUs in user mode with low priority.<br />

CPU Software Interrupts hosts.{host.moniker}.cpus.[#].irq.soft<br />

The cycles used by a specific CPU for software interrupts.<br />

CPU Software Interrupts Aggregate hosts.{host.moniker}.cpu.irq.soft<br />

The total cycles used by all CPUs for software interrupts.<br />

CPU Speed hosts.{host.moniker}.cpus.[#].speed<br />

The maximum speed of the processor.<br />

CPU Stepping hosts.{host.moniker}.cpus.[#].stepping<br />


The revision level of the processor within the processor family.<br />

CPU <strong>System</strong> hosts.{host.moniker}.cpus.[#].system<br />

The cycles used by a specific CPU in kernel mode.<br />

CPU <strong>System</strong> Aggregate hosts.{host.moniker}.cpu.system<br />

The total cycles used by all CPUs in kernel mode.<br />

CPU Total hosts.{host.moniker}.cpus.[#].total<br />

The sum of User, Nice, and <strong>System</strong> for a specific CPU.<br />

CPU Total Aggregate hosts.{host.moniker}.cpu.total<br />

The sum of User, Nice, and <strong>System</strong> for all CPUs in the host.<br />

CPU User hosts.{host.moniker}.cpus.[#].user<br />

The cycles used by a specific processor in user mode.<br />

CPU User Aggregate hosts.{host.moniker}.cpu.user<br />

The total cycles used by all CPUs in user mode.<br />

CPU Vendor hosts.{host.moniker}.cpus.[#].vendor<br />

The name of the processor vendor.
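Because CPU Total is defined above as the sum of the User, Nice, and <strong>System</strong> counters, two successive samples of the counters are enough to derive a busy-time fraction for the interval between them. The sketch below is illustrative only — the dict keys and sample numbers are assumptions for the example, not <strong>Clusterworx</strong> identifiers.<br />

```python
# Illustrative sketch: derive a busy-time fraction from two samples of the
# cycle counters described above. CPU Total = User + Nice + System, so the
# growth in Total, divided by the growth in (Total + idle cycles), gives the
# share of the interval the CPU spent busy. All values here are made up.
def busy_fraction(prev, curr):
    busy = curr["total"] - prev["total"]            # delta of User+Nice+System
    elapsed = busy + (curr["idle"] - prev["idle"])  # busy plus idle cycles
    return busy / elapsed if elapsed else 0.0

prev = {"total": 150, "idle": 850}
curr = {"total": 240, "idle": 960}
print(busy_fraction(prev, curr))  # 90 busy cycles out of 200 elapsed -> 0.45
```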


Disk<br />

Metric Name Format and Description<br />

Disk Aggregate Block Reads hosts.{host.moniker}.disk.block.reads<br />

The number of blocks read from all disks.<br />

Disk Aggregate Block Writes hosts.{host.moniker}.disk.block.writes<br />

The number of blocks written to all disks.<br />

Disk Aggregate Capacity hosts.{host.moniker}.disk.capacity<br />

The disk capacity for all disks.<br />

Disk Aggregate Capacity Free hosts.{host.moniker}.disk.capacity.free<br />

The disk capacity free for all disks.<br />

Disk Aggregate Capacity Used hosts.{host.moniker}.disk.capacity.used<br />

The disk capacity used for all disks.<br />

Disk Aggregate I/O Read hosts.{host.moniker}.disk.io.reads<br />

The number of I/O reads from all disks.<br />

Disk Aggregate I/O Writes hosts.{host.moniker}.disk.io.writes<br />

The number of I/O writes to all disks.<br />

Disk Aggregate Percentage Used hosts.{host.moniker}.disk.percentage.used<br />

The disk percentage used for all disks.<br />

Disk Block Reads hosts.{host.moniker}.disks.block.reads<br />

The number of blocks read from a disk.<br />

Disk Block Writes hosts.{host.moniker}.disks.block.writes<br />

The number of blocks written to a disk.<br />

Disk Capacity hosts.{host.moniker}.disks.capacity<br />

The disk capacity for each disk.<br />

Disk Capacity Free hosts.{host.moniker}.disks.capacity.free<br />

The disk capacity free for each disk.<br />

Disk Capacity Used hosts.{host.moniker}.disks.capacity.used<br />

The disk capacity used for each disk.<br />

Disk I/O Read hosts.{host.moniker}.disks.io.reads<br />

The number of I/O reads from a disk.<br />

Disk I/O Writes hosts.{host.moniker}.disks.io.writes<br />

The number of I/O writes to a disk.<br />

Disk Mount Point hosts.{host.moniker}.disks.mountpoint<br />

The mount point for each partition.<br />

Disk Percentage Used hosts.{host.moniker}.disks.percentage.used<br />

The disk percentage used for each disk.<br />




EDAC (BlueSmoke)<br />

Metric Name Format and Description<br />

DIMM CE hosts.{host.moniker}.bluesmoke.CE<br />

The number of Correctable Errors identified on all of the DIMMs. This metric is typically<br />

used with listeners.<br />

DIMM CE hosts.bluesmoke.CE.warning<br />

A warning message displayed in the GUI to indicate a correctable error.<br />

DIMM UE hosts.{host.moniker}.bluesmoke.UE<br />

The number of Uncorrectable Errors identified on all of the DIMMs. This metric is<br />

typically used with listeners.<br />

DIMM UE hosts.bluesmoke.UE.warning<br />

A warning message displayed in the GUI to indicate an uncorrectable error.<br />

Icebox<br />

Metric Name Format and Description<br />

Icebox Average Temperatures iceboxes.{host.moniker}.ports.temperature<br />

The average temperature per port from the Icecard in degrees Celsius.<br />

Icebox Temperature iceboxes.{host.moniker}.ports.temperatures<br />

The temperatures from the Icecard in degrees Celsius for each port.<br />

Image<br />

Metric Name Format and Description<br />

Image Name hosts.{host.moniker}.image.name<br />

The image last used to provision.<br />

Image Revision hosts.{host.moniker}.image.revision<br />

The image revision last used to provision.<br />

Kernel<br />

Metric Name Format and Description<br />

Kernel Boot Time hosts.{host.moniker}.kernel.boottime<br />

Boot time, in seconds, since the epoch (January 1, 1970).<br />

Kernel Contexts hosts.{host.moniker}.kernel.contexts<br />

The number of context switches the system has undergone.<br />

Kernel Interrupts hosts.{host.moniker}.kernel.interrupts<br />

The number of interrupts received from the system since boot.



Kernel Name hosts.{host.moniker}.kernel.name<br />

The kernel last used to provision.<br />

Kernel Pages In hosts.{host.moniker}.kernel.pages.in<br />

The number of pages the system paged in from disk.<br />

Kernel Pages Out hosts.{host.moniker}.kernel.pages.out<br />

The number of pages the system paged out to disk.<br />

Kernel Processes hosts.{host.moniker}.kernel.processes<br />

The number of forks since boot.<br />

Kernel Revision hosts.{host.moniker}.kernel.revision<br />

The revision of kernel used to provision.<br />

Kernel Swaps In hosts.{host.moniker}.kernel.swaps.in<br />

The number of swap pages that have been brought in.<br />

Kernel Swaps Out hosts.{host.moniker}.kernel.swaps.out<br />

The number of swap pages that have been sent out.<br />

Kernel Version hosts.{host.moniker}.os.version<br />

The version of the currently running Linux kernel.<br />
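Kernel Boot Time is reported as seconds since the epoch, so a monitoring script can turn it into a readable boot timestamp and an uptime figure. A minimal sketch — the sample values below are invented for illustration:<br />

```python
from datetime import datetime, timezone

# Convert an epoch boot time (as reported by the Kernel Boot Time metric)
# into a UTC timestamp string and an uptime in seconds. The sample numbers
# in the call below are invented for the example.
def describe_boot(boottime: int, now: int):
    stamp = datetime.fromtimestamp(boottime, tz=timezone.utc)
    return stamp.strftime("%Y-%m-%d %H:%M:%S UTC"), now - boottime

ts, uptime = describe_boot(1173000000, 1173086400)
print(ts, uptime)
```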

Load<br />

Metric Name Format and Description<br />

Load 1 Minute hosts.{host.moniker}.load.1m<br />

The number of tasks in the run state averaged over 1 minute.<br />

Load 15 Minutes hosts.{host.moniker}.load.15m<br />

The number of tasks in the run state averaged over 15 minutes.<br />

Load 5 Minutes hosts.{host.moniker}.load.5m<br />

The number of tasks in the run state averaged over 5 minutes.<br />

Load Tasks hosts.{host.moniker}.load.jobs<br />

The total number of tasks.<br />

Load Running Tasks hosts.{host.moniker}.load.jobs.running<br />

The number of tasks currently running.<br />

LinuxBIOS<br />

Metric Name Format and Description<br />

LinuxBIOS Bootmode hosts.{host.moniker}.linuxbios.bootmode<br />

The current operational status of LinuxBIOS.<br />




LS-1 1950i and 1435a<br />

Metric Name Format and Description<br />

Note<br />

See also LS-1 1435a Only on page 243.<br />

IPMITool Fan 1A Speed hosts.{host.moniker}.ipmitool.IPMITool-FAN_1A_RPM.description<br />

IPMITool Fan 1A Speed<br />

IPMITool Fan 1B Speed hosts.{host.moniker}.ipmitool.IPMITool-FAN_1B_RPM.description<br />

IPMITool Fan 1B Speed<br />

IPMITool Fan 1C Speed hosts.{host.moniker}.ipmitool.IPMITool-FAN_1C_RPM.description<br />

IPMITool Fan 1C Speed<br />

IPMITool Fan 1D Speed hosts.{host.moniker}.ipmitool.IPMITool-FAN_1D_RPM.description<br />

IPMITool Fan 1D Speed<br />

IPMITool Fan 2A Speed hosts.{host.moniker}.ipmitool.IPMITool-FAN_2A_RPM.description<br />

IPMITool Fan 2A Speed<br />

IPMITool Fan 2B Speed hosts.{host.moniker}.ipmitool.IPMITool-FAN_2B_RPM.description<br />

IPMITool Fan 2B Speed<br />

IPMITool Fan 2C Speed hosts.{host.moniker}.ipmitool.IPMITool-FAN_2C_RPM.description<br />

IPMITool Fan 2C Speed<br />

IPMITool Fan 2D Speed hosts.{host.moniker}.ipmitool.IPMITool-FAN_2D_RPM.description<br />

IPMITool Fan 2D Speed<br />

IPMITool Fan 3A Speed hosts.{host.moniker}.ipmitool.IPMITool-FAN_3A_RPM.description<br />

IPMITool Fan 3A Speed<br />

IPMITool Fan 3B Speed hosts.{host.moniker}.ipmitool.IPMITool-FAN_3B_RPM.description<br />

IPMITool Fan 3B Speed<br />

IPMITool Fan 3C Speed hosts.{host.moniker}.ipmitool.IPMITool-FAN_3C_RPM.description<br />

IPMITool Fan 3C Speed<br />

IPMITool Fan 3D Speed hosts.{host.moniker}.ipmitool.IPMITool-FAN_3D_RPM.description<br />

IPMITool Fan 3D Speed<br />

IPMITool Fan 4A Speed hosts.{host.moniker}.ipmitool.IPMITool-FAN_4A_RPM.description<br />

IPMITool Fan 4A Speed<br />

IPMITool Fan 4B Speed hosts.{host.moniker}.ipmitool.IPMITool-FAN_4B_RPM.description<br />

IPMITool Fan 4B Speed<br />

IPMITool Fan 4C Speed hosts.{host.moniker}.ipmitool.IPMITool-FAN_4C_RPM.description<br />

IPMITool Fan 4C Speed<br />

IPMITool Fan 4D Speed hosts.{host.moniker}.ipmitool.IPMITool-FAN_4D_RPM.description<br />

IPMITool Fan 4D Speed


LS-1 2950i<br />

Metric Name Format and Description<br />

IPMITool Fan 1 Speed hosts.{host.moniker}.ipmitool.IPMITool-FAN_1_RPM.description<br />

IPMITool Fan 1 Speed<br />

IPMITool Fan 2 Speed hosts.{host.moniker}.ipmitool.IPMITool-FAN_2_RPM.description<br />

IPMITool Fan 2 Speed<br />

IPMITool Fan 3 Speed hosts.{host.moniker}.ipmitool.IPMITool-FAN_3_RPM.description<br />

IPMITool Fan 3 Speed<br />

IPMITool Fan 4 Speed hosts.{host.moniker}.ipmitool.IPMITool-FAN_4_RPM.description<br />

IPMITool Fan 4 Speed<br />

IPMITool Fan 5 Speed hosts.{host.moniker}.ipmitool.IPMITool-FAN_5_RPM.description<br />

IPMITool Fan 5 Speed<br />

IPMITool Fan 6 Speed hosts.{host.moniker}.ipmitool.IPMITool-FAN_6_RPM.description<br />

IPMITool Fan 6 Speed<br />

LS-1 1435a Only<br />

Note<br />

By default, these metrics are disabled. These metrics are useful only if no LS-1 1950i hosts are installed.<br />

Metric Name Format and Description<br />

IPMITool Fan 1 Speed hosts.{host.moniker}.ipmitool.IPMITool-FAN_1_RPM.description<br />

IPMITool Fan 1 Speed<br />

IPMITool Fan 2 Speed hosts.{host.moniker}.ipmitool.IPMITool-FAN_2_RPM.description<br />

IPMITool Fan 2 Speed<br />

IPMITool Fan 3 Speed hosts.{host.moniker}.ipmitool.IPMITool-FAN_3_RPM.description<br />

IPMITool Fan 3 Speed<br />

IPMITool Fan 4 Speed hosts.{host.moniker}.ipmitool.IPMITool-FAN_4_RPM.description<br />

IPMITool Fan 4 Speed<br />

IPMITool Fan 5 Speed hosts.{host.moniker}.ipmitool.IPMITool-FAN_5_RPM.description<br />

IPMITool Fan 5 Speed<br />

IPMITool Fan 6 Speed hosts.{host.moniker}.ipmitool.IPMITool-FAN_6_RPM.description<br />

IPMITool Fan 6 Speed<br />




Memory<br />

Metric Name Format and Description<br />

Memory Active hosts.{host.moniker}.memory.active<br />


The amount of active memory.<br />

Memory Active Anon hosts.{host.moniker}.memory.active.anon<br />

The amount of anonymous active memory.<br />

Memory Active Cached hosts.{host.moniker}.memory.active.cache<br />

The amount of active cached memory.<br />

Memory Big Free hosts.{host.moniker}.memory.big.free<br />

The amount of free big memory.<br />

Memory Buffered hosts.{host.moniker}.memory.buffers<br />

The amount of buffered memory.<br />

Memory Cached hosts.{host.moniker}.memory.cached<br />

The amount of cached memory.<br />

Memory Committed hosts.{host.moniker}.memory.committed<br />

The amount of committed memory.<br />

Memory Dirty hosts.{host.moniker}.memory.dirty<br />

The amount of dirty memory.<br />

Memory Free hosts.{host.moniker}.memory.free<br />

The total amount of free memory.<br />

Memory High hosts.{host.moniker}.memory.high<br />

The amount of used high memory.<br />

Memory High Free hosts.{host.moniker}.memory.high.free<br />

The amount of free high memory.<br />

Memory Huge Pages Free hosts.{host.moniker}.memory.hugepages.free<br />

The number of free huge pages.<br />

Memory Huge Pages Size hosts.{host.moniker}.memory.hugepages.size<br />

The size of a huge page.<br />

Memory Huge Pages Total hosts.{host.moniker}.memory.hugepages<br />

The total number of huge pages.<br />

Memory Inactive hosts.{host.moniker}.memory.inactive<br />

The amount of inactive memory.<br />

Memory Inactive Clean hosts.{host.moniker}.memory.inactive.clean<br />

The amount of clean inactive memory.<br />

Memory Inactive Dirty hosts.{host.moniker}.memory.inactive.dirty<br />

The amount of dirty inactive memory.<br />

Memory Inactive Laundry hosts.{host.moniker}.memory.inactive.laundry<br />

The amount of inactive laundry memory.


Metric Name Format and Description<br />

Memory Inactive Target hosts.{host.moniker}.memory.inactive.target<br />

The target amount of inactive memory.<br />

Memory Low hosts.{host.moniker}.memory.low<br />

The amount of used low memory.<br />

Memory Low Free hosts.{host.moniker}.memory.low.free<br />

The amount of free low memory.<br />

Memory Mapped hosts.{host.moniker}.memory.mapped<br />

The amount of mapped memory.<br />

Memory Page Tables hosts.{host.moniker}.memory.pagetables<br />

The number of page tables available.<br />

Memory Shared hosts.{host.moniker}.memory.shared<br />

The total amount of shared memory.<br />

Memory Slab hosts.{host.moniker}.memory.slab<br />

The size of the memory slab used for dynamic kernel data.<br />

Memory Swap hosts.{host.moniker}.memory.swap<br />

The amount of free swap space.<br />

Memory Swap Cached hosts.{host.moniker}.memory.swap.cached<br />

The amount of cached swap.<br />

Memory Swap Free hosts.{host.moniker}.memory.swap.free<br />

The amount of free swap space.<br />

Memory Total hosts.{host.moniker}.memory.total<br />

The total amount of memory.<br />

Memory VMalloc Chunk hosts.{host.moniker}.memory.vmalloc.chunk<br />

The size of a VMalloc chunk.<br />

Memory VMalloc Total hosts.{host.moniker}.memory.vmalloc<br />

The total amount of VMalloc.<br />

Memory VMalloc Used hosts.{host.moniker}.memory.vmalloc.used<br />

The amount of VMalloc used.<br />

Memory Writeback hosts.{host.moniker}.memory.writeback<br />

The amount of writeback memory.<br />
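A common way to read the counters above on Linux is to treat buffered and cached pages as reclaimable, so "effectively used" memory is total minus free, buffers, and cached. This is a general Linux convention rather than a <strong>Clusterworx</strong>-defined metric, and the figures in the sketch are invented:<br />

```python
# Sketch of the conventional "effectively used" memory calculation built
# from the Memory Total, Memory Free, Memory Buffered, and Memory Cached
# counters. Buffered and cached pages are reclaimable, so they are not
# counted as used. All values here are invented sample figures in MB.
def effectively_used(total, free, buffers, cached):
    return total - free - buffers - cached

print(effectively_used(total=2048, free=512, buffers=128, cached=384))
```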




Network<br />

Metric Name Format and Description<br />

Network Bytes Received hosts.{host.moniker}.network..rx.bytes<br />

The total number of bytes received on an interface.<br />

Network Bytes Received Aggregate hosts.{host.moniker}.network..rx.bytes<br />

The total number of bytes received on all interfaces.<br />

Network Compressed Bytes Received hosts.{host.moniker}.network..rx.compressed<br />

The amount of compressed traffic received on an interface.<br />

Network Compressed Bytes Received Aggregate hosts.{host.moniker}.network..rx.compressed<br />

The amount of compressed traffic received on all interfaces.<br />

Network Compressed Bytes Transmitted hosts.{host.moniker}.network..tx.compressed<br />

The amount of data compressed during transmission on an interface.<br />

Network Compressed Bytes Transmitted Aggregate hosts.{host.moniker}.network..tx.compressed<br />

The amount of data compressed during transmission on all interfaces.<br />

Network Dropped Packets Received hosts.{host.moniker}.network..rx.errors.dropped<br />

The total number of dropped packets when receiving on an interface.<br />

Network Dropped Packets Received Aggregate hosts.{host.moniker}.network..rx.errors.dropped<br />

The total number of dropped packets when receiving on all interfaces.<br />

Network Errors Received hosts.{host.moniker}.network..rx.errors<br />

The total number of errors when receiving on an interface.<br />

Network Errors Received Aggregate hosts.{host.moniker}.network..rx.errors<br />

The total number of errors when receiving on all interfaces.<br />

Network FIFO Errors Received hosts.{host.moniker}.network..rx.errors.fifo<br />

The number of FIFO errors received on an interface.<br />

Network FIFO Errors Received Aggregate hosts.{host.moniker}.network..rx.errors.fifo<br />

The number of FIFO errors received on all interfaces.<br />

Network Frame Errors Received hosts.{host.moniker}.network..rx.errors.frame<br />

The number of frame errors when receiving on an interface.<br />

Network Frame Errors Received Aggregate hosts.{host.moniker}.network..rx.errors.frame<br />

The number of frame errors when receiving on all interfaces.<br />

Network Multicast Bytes Received hosts.{host.moniker}.network..rx.multicast<br />

The number of bytes received via multicast on an interface.<br />

Network Multicast Bytes Received Aggregate hosts.{host.moniker}.network..rx.multicast<br />

The number of bytes received via multicast on all interfaces.<br />

Network Packets Received hosts.{host.moniker}.network..rx.packets<br />

The total number of received packets on an interface.<br />

Network Packets Received Aggregate hosts.{host.moniker}.network..rx.packets<br />

The total number of received packets on all interfaces.<br />


Network Packets Transmitted hosts.{host.moniker}.network..tx.packets<br />

The total number of packets transmitted on an interface.<br />

Network Packets Transmitted Aggregate hosts.{host.moniker}.network..tx.packets<br />

The total number of packets transmitted on all interfaces.<br />

Network Transmission Bytes hosts.{host.moniker}.network..tx.bytes<br />

The total number of transmitted bytes on an interface.<br />

Network Transmission Bytes Aggregate hosts.{host.moniker}.network..tx.bytes<br />

The total number of transmitted bytes on all interfaces.<br />

Network Transmission Carrier Errors hosts.{host.moniker}.network..tx.errors.carrier<br />

The number of carrier errors during transmission on an interface.<br />

Network Transmission Carrier Errors Aggregate hosts.{host.moniker}.network..tx.errors.carrier<br />

The number of carrier errors during transmission on all interfaces.<br />

Network Transmission Collisions hosts.{host.moniker}.network..tx.errors.collisions<br />

The number of packet collisions when transmitting on an interface.<br />

Network Transmission Collisions Aggregate hosts.{host.moniker}.network..tx.errors.collisions<br />

The number of packet collisions when transmitting on all interfaces.<br />

Network Transmission Dropped Packets hosts.{host.moniker}.network..tx.errors.dropped<br />

The total number of dropped packets when transmitting on an interface.<br />

Network Transmission Dropped Packets Aggregate hosts.{host.moniker}.network..tx.errors.dropped<br />

The total number of dropped packets when transmitting on all interfaces.<br />

Network Transmission Errors hosts.{host.moniker}.network..tx.errors<br />

The total number of errors when transmitting on an interface.<br />

Network Transmission Errors Aggregate hosts.{host.moniker}.network..tx.errors<br />

The total number of errors when transmitting on all interfaces.<br />

Network Transmission FIFO Errors hosts.{host.moniker}.network..tx.errors.fifo<br />

The number of FIFO errors during transmission on an interface.<br />

Network Transmission FIFO Errors Aggregate hosts.{host.moniker}.network..tx.errors.fifo<br />

The number of FIFO errors during transmission on all interfaces.<br />

OS<br />

Metric Name Format and Description<br />

OS Distribution hosts.{host.moniker}.os.distribution.description<br />

The name of the Linux distribution.<br />

OS Name hosts.{host.moniker}.os.name<br />

The name of the operating system.<br />

OS Version hosts.{host.moniker}.os.distribution.version<br />

The version of the Linux distribution.<br />




Payload<br />

Metric Name Format and Description<br />

Payload Name hosts.{host.moniker}.payload.name<br />


The payload last used to provision.<br />

Payload Revision hosts.{host.moniker}.payload.revision<br />

The revision of payload used to provision.


Index<br />

Numerics<br />

1435a<br />

metrics 242<br />

1435a only<br />

metrics 243<br />

1950i<br />

metrics 242<br />

2950i<br />

metrics 243<br />

A<br />

accounts<br />

disable user 63<br />

enable 63<br />

manage group 101<br />

manage local 101<br />

acl_roots 157<br />

add<br />

boot utilities 138<br />

directory to payload 108<br />

file to payload 108<br />

group 57<br />

user account to payload 104<br />

host 25<br />

Icebox 68<br />

kernel modules without loading 118<br />

local user account to payload 101<br />

package<br />

to existing payload 92<br />

to new payload 88<br />

partition 32<br />

plug-in 141<br />

RAID partition 127<br />

region 36<br />

role 52<br />

user 62<br />

to group 58<br />

administration levels 50<br />

annotations<br />

electric shock iii<br />

note iii<br />

tip iii<br />

warning iii<br />

anti-aliasing 40, 41<br />

appearance<br />

interface 22<br />

architecture<br />

metrics 237<br />

authentication management, payload 98<br />

authentication, port 70<br />

B<br />

baud rate 72<br />

beacon<br />

turn off 78, 80<br />

turn on 77, 80<br />

block size 117, 125<br />

BlueSmoke<br />

metrics 240<br />

boot process, plug-ins for 140<br />

boot utilities, add 138<br />

boot.profile 122, 139<br />

branch, version 144<br />

C<br />

ccp 192<br />

channels 181<br />

check into VCS<br />


image 146<br />

kernel 146<br />

payload 146<br />

check out of VCS<br />

image 147<br />

kernel 147<br />

payload 147<br />

CLI 233<br />

cluster 23<br />

copy command 192<br />

environment 49<br />

host administration 196<br />

power administration 204<br />

provisioning 206<br />

system monitoring 40<br />

user administration 209<br />

<strong>Clusterworx</strong><br />

install into payload 110<br />

install on client 11<br />

introduction 19<br />

launch 10<br />

Master Host<br />

DHCP settings 158<br />

rename 29<br />

set up 4<br />

services 14<br />

system requirements 2<br />

command-line interface 185, 233<br />

ccp 192<br />

conman 193<br />

cwhost 196<br />

cwpower 204<br />

cwprovision 206<br />

cwuser 209<br />

dbix 215<br />

dbx 216<br />

imgr 217<br />

kmgr 218<br />

pdcp 219<br />

pdsh 222<br />

pmgr 225<br />

powerman 226<br />

vcs 228<br />

compute host. See host<br />

configuration subtab 67<br />

configure<br />

baud rate 72<br />

NIS 98<br />

conman 193<br />

connect<br />


to host via Runner 164<br />

copy<br />

from VCS 148<br />

image 122<br />

kernel 114<br />

payload 90<br />

CPU<br />

metrics 237<br />

utilization 45<br />

CPU tab 45<br />

create<br />

group 57<br />

host 25<br />

image 120<br />

kernel 112<br />

kernel from binary 218<br />

multiple payloads from source 86<br />

partition 32, 124<br />

password<br />

Icebox 68, 201<br />

user 62, 210<br />

payload 85<br />

region 36<br />

role 52<br />

custom loggers 182<br />

customer education iv<br />

cwhost 196<br />

cwpower 204<br />

cwprovision 206<br />

cwuser 209<br />

D<br />

dbix 215<br />

dbx 216<br />

default user administration settings 51<br />

delete<br />

all payloads, kernels, and images 148<br />

file(s) from payload 109<br />

group 61<br />

account from payload 105<br />

host 31<br />

image partition 131<br />

local user account from payload 103<br />

package from payload 92<br />

partition 35<br />

payload 109<br />

region 39<br />

role 55<br />

user account 66


working copy of image 123<br />

working copy of kernel 119<br />

working copy of payload 109<br />

dependency checks, package 96<br />

DHCP 158<br />

dhcpd.conf 158<br />

dhcpd.conf.template 158<br />

disable<br />

anti-aliasing 40<br />

gradient fill 40<br />

host 30<br />

Kerberos 100<br />

LDAP 99<br />

NIS 98<br />

partition 35<br />

port authentication 70<br />

SNMP settings 74<br />

user account 63, 65<br />

disconnect<br />

from host via Runner 169<br />

disk<br />

aggregate usage 46<br />

fill to end of 136<br />

I/O 46<br />

metrics 239<br />

disk tab 46<br />

diskless hosts 135<br />

configure 135<br />

mount point 136<br />

distribution, upgrade 2<br />

dmesg.level 122<br />

DNS name resolution 92<br />

documentation, online ii<br />

E<br />

ebi files 155<br />

EDAC<br />

metrics 240<br />

edit<br />

group 60<br />

host 28<br />

Icebox password 201<br />

image partition 129<br />

kernel 117<br />

partition 34<br />

password 64, 211<br />

payload 94<br />

using text editor 107<br />

region 38<br />

role 54<br />

user account 64<br />

electric shock iii<br />

enable<br />

anti-aliasing 40<br />

concurrent Icebox ports 70<br />

gradient fill 40<br />

hardware flow control 72<br />

Kerberos 100<br />

LDAP 99<br />

NIS 98<br />

SNMP settings 74<br />

temperature shutdown 70<br />

user account 63<br />

error messages 184<br />

errors, RPM 89<br />

EULA 259<br />

event monitoring 171<br />

exclude<br />

files and directories from VCS 151<br />

exclude file(s) from payload 91<br />

F<br />

feedback, documentation ii<br />

file system, user-defined 132<br />

file(s), exclude from payload 91<br />

fill to end of disk 136<br />

filter 41<br />

filters subtab 75<br />

flow control, enable hardware 72<br />

format partition 125<br />

fstab 125, 136<br />

G<br />

general subtab 70<br />

general tab 42<br />

GID 50, 57<br />

gradient fill 40, 41<br />

group 49, 57<br />

add 57<br />

account to payload 104<br />

assign roles to 53, 58<br />

assign user to 63<br />

delete 61<br />

account from payload 105<br />

edit 60<br />

GID 50, 57<br />

grant access to region 59<br />




primary 63<br />

region, add to 37<br />

user membership 63<br />

GTK, customized interface 22<br />

H<br />

hardware<br />

enable flow control 72<br />

system requirements 1<br />

health monitoring 40<br />

message log 41<br />

system status icons 40<br />

host 23, 25<br />

add 25<br />

to partition 33<br />

administration 23<br />

grant privileges 56<br />

assign Icebox port 27<br />

beacon<br />

turn off 78<br />

turn on 77<br />

CLI administration 196<br />

<strong>Clusterworx</strong> Master 23<br />

DHCP settings 158<br />

rename 29<br />

set up 4<br />

configure 23<br />

diskless host 135<br />

connect to via Runner 164<br />

controls 77<br />

cycle power to 78<br />

delete 31<br />

disable 30<br />

disconnect from Runner 169<br />

diskless 135<br />

edit 28<br />

install RPM 192<br />

load monitoring 47<br />

master. See <strong>Clusterworx</strong> Master Host<br />

message log 41<br />

names 111<br />

port assignment 71<br />

power<br />

turn off 78<br />

turn on 78<br />

provision 153<br />

using CLI 206<br />

reboot 79<br />

region<br />


add host to 37<br />

assign host to 26<br />

reset 78<br />

remote reset 67<br />

shared 23<br />

shut down 78<br />

states 40<br />

view output via Runner 166<br />

hosts subtab 77<br />

I<br />

Icebox 67<br />

access, restore 75<br />

add 68<br />

administration 67<br />

administration privileges 56<br />

beacon<br />

turn off 80<br />

turn on 80<br />

connect to 69<br />

controls 80<br />

create password for 68, 201<br />

filter settings 75<br />

deny 75<br />

host port assignment 27<br />

IP address 68<br />

MAC address 69<br />

metrics 240<br />

modify password 201<br />

power<br />

cycle 81<br />

turn off 80<br />

turn on 80<br />

power management 67, 77<br />

primary 27<br />

reset 81<br />

remote reset 67<br />

temperature monitoring 67<br />

Iceboxes subtab 80<br />

icons, system status 40<br />

image 83, 122<br />

add modules without loading 118<br />

check into VCS 146<br />

check out from VCS 147<br />

CLI controls 217<br />

copy 122<br />

create 120<br />

delete all 148<br />

delete partition 131


delete working copy of image 123<br />

edit image partition 129<br />

management 120<br />

metrics 240<br />

partition 124<br />

privileges, enable imaging 56<br />

provision 153<br />

select image 154<br />

versioned 144<br />

working copy 144<br />

image.once 122<br />

image.path 122<br />

imgr 217<br />

import<br />

binary kernel 218<br />

informational messages 184<br />

install<br />

<strong>Clusterworx</strong> 4<br />

client 11<br />

into payload 110<br />

server 5<br />

set up a Master Host 4<br />

instrumentation 40<br />

CPU utilization 45<br />

custom loggers 182<br />

custom monitors 172<br />

disk<br />

aggregate usage 46<br />

I/O 46<br />

enhance performance 40<br />

host load 47<br />

kernel information 47<br />

list view 44<br />

listeners 179<br />

loggers, pre-defined 181<br />

memory utilization 45<br />

menu controls 41<br />

message log 41<br />

metrics, define 175<br />

metrics, pre-configured 237<br />

monitoring and event subsystem 171<br />

packet transmissions 46<br />

resource utilization 42<br />

system configuration 42<br />

system status 40<br />

overview 42<br />

temperature readings 48<br />

thumbnail view 43<br />

interface<br />

customized appearance 22<br />

map 21<br />

interface, management 27<br />

interval 41<br />

IP address 234<br />

host 26<br />

Icebox 68<br />

K<br />

Kerberos 100<br />

kernel 83<br />

build from source 112<br />

check into VCS 146<br />

check out from VCS 147<br />

CLI controls 218<br />

copy 114<br />

create 112<br />

create from binary 218<br />

delete all 148<br />

delete working copy of kernel 119<br />

edit 117<br />

install modules without loading 118<br />

loadable modules 118<br />

management 112<br />

metrics 240<br />

modular 118<br />

monolithic 118<br />

upgrade 2<br />

verbosity level 158, 206<br />

versioned 144<br />

working copy 144<br />

kernel tab 47<br />

kmgr 218<br />

L<br />

LDAP 99<br />

license 15<br />

administration 17<br />

authentication 17<br />

<strong>Clusterworx</strong> EULA 259<br />

installation 15<br />

new 15<br />

viewer 17<br />

links, dangling symbolic 91<br />

list view 41, 44<br />

listeners 171, 179<br />

load<br />

metrics 241<br />

load tab 47<br />

loadable kernel modules 118<br />

logger 171<br />

loggers<br />

custom 182<br />

pre-defined 181<br />

M<br />

MAC address, Icebox 69<br />

management<br />

host(s), system requirements 1<br />

management network 6<br />

VCS 148<br />

management interface 27<br />

Master Host<br />

definition 23<br />

rename 29<br />

set up 4<br />

memory<br />

estimate partition requirements 126, 133, 136<br />

metrics 244<br />

utilization 45<br />

memory tab 45<br />

message log 41, 184<br />

metrics 175, 237<br />

1435a 242<br />

1435a only 243<br />

1950i 242<br />

2950i 243<br />

alignment 176<br />

architecture 237<br />

BlueSmoke 240<br />

CPU 237<br />

custom 177<br />

disk 239<br />

EDAC 240<br />

Icebox 240<br />

image 240<br />

instrumentation service 41<br />

kernel 240<br />

load 241<br />

memory 244<br />

metric selector 176<br />

network 246<br />

OS 247<br />

payload 248<br />

MIB 74<br />

migration utility 13<br />

mkelfimage 4<br />

mkfs 125<br />

modules<br />

install without loading 118<br />

loadable kernel 118<br />

modules subtab 118<br />

monitoring<br />

event 171<br />

system health 40<br />

monitors 171, 172<br />

custom 172<br />

multicast<br />

route fix 162<br />

route issues 158<br />

throttle 162<br />

wastegate values 162<br />

N<br />

name resolution 111<br />

navigation tree 21<br />

netmask 235<br />

network metrics 246<br />

network subtab 73<br />

network tab 46<br />

NFS 50<br />

NIS 98<br />

note iii<br />

O<br />

online documentation ii<br />

operating system requirements 2<br />

OS<br />

metrics 247<br />

out-of-memory error 110<br />

output<br />

view host output via Runner 166<br />

overview 41<br />

overview, system status 42<br />

P<br />

package<br />

add to existing payload 92<br />

add to new payload 88<br />

dependency checks 96<br />

remove from payload 94<br />

packet transmissions 46<br />

partition 23, 32, 122<br />

add 32<br />


host to 33<br />

RAID 127<br />

region to 33<br />

create 124<br />

user-defined file system 132<br />

delete 35<br />

delete from image 131<br />

disable 35<br />

edit 34<br />

edit image partition 129<br />

estimate memory requirements 126, 133, 136<br />

format 125<br />

manage 124<br />

overwrite protection 125<br />

partition this time 157<br />

partitioning behavior 121<br />

save 125<br />

size<br />

fill to end of disk 126, 133, 136<br />

fixed 126, 133, 136<br />

partition.once 122<br />

password<br />

create Icebox 68, 201<br />

create new 62, 210<br />

encrypt 211<br />

modify 64, 211<br />

modify Icebox 201<br />

payload 83<br />

.payload files 155<br />

account management, local user 101<br />

add<br />

directory to 108<br />

file to 108<br />

group user account to 104<br />

local user account to 101<br />

package to existing 92<br />

package to new 88<br />

attributes, troubleshoot 87<br />

authentication management 98<br />

check into VCS 146<br />

check out from VCS 147<br />

CLI controls 225<br />

configure 106<br />

copy 90<br />

create 85<br />

multiple payloads from source 86<br />

dangling symbolic links 91<br />

delete 109<br />

.payload files 155<br />

file(s) from payload 109<br />

group account from payload 105<br />

local user account from payload 103<br />

working copy of payload 109<br />

delete all 148<br />

download 122<br />

download this time 157<br />

edit<br />

using CLI 108<br />

with text editor 107<br />

exclude file(s) 91<br />

file configuration 106<br />

group account management 101<br />

install <strong>Clusterworx</strong> into 110<br />

management 84<br />

metrics 248<br />

package dependency checks 96<br />

pmgr 225<br />

remove package from 94<br />

update directory 108<br />

update file 108<br />

versioned 144<br />

working copy 144<br />

PBS 157<br />

pdcp 219<br />

pdsh 222<br />

permissions 56<br />

See role; privileges<br />

physical memory utilization 45<br />

plug-ins<br />

add 141<br />

for boot process 140<br />

port<br />

authentication, disable 70<br />

enable concurrent Icebox ports 70<br />

host port assignment 71<br />

ports subtab 71<br />

power<br />

CLI administration 204<br />

cycle<br />

to host 78<br />

to Icebox 81<br />

management 67<br />

powerman 226<br />

turn off<br />

to host 78<br />

to Icebox 80<br />

turn on<br />

to host 78<br />

to Icebox 80<br />

power management subtab 77<br />

powerman 226<br />

pre-configured metrics 237<br />

pre-defined loggers 181<br />

primary<br />

group 63<br />

Icebox 27<br />

privileges 56<br />

change user 54, 55<br />

host administration 56<br />

Icebox administration 56<br />

imaging 56<br />

user administration 56<br />

provision 153<br />

CLI controls 206<br />

disable confirmation dialog 155<br />

enable confirmation dialog 155<br />

format partition 125<br />

schedule at next reboot 157<br />

select an image 154<br />

Q<br />

qmgr 157<br />

R<br />

RAID 127<br />

RAM Disk 138<br />

block size 117<br />

RAMfs 135<br />

reboot host 79<br />

region 23, 36, 49<br />

add 36<br />

group to 37<br />

host to 37<br />

to partition 33<br />

assign to host 26<br />

delete 39<br />

edit 38<br />

grant group access to 59<br />

remote reset 67<br />

remove<br />

file(s) from payload 109<br />

group 61<br />

group account from payload 105<br />

host 31<br />

local user account from payload 103<br />

package from payload 94<br />

partition 35<br />

region 39<br />

role 55<br />

user account 66<br />

rename<br />

<strong>Clusterworx</strong> Master Host 29<br />

host 28<br />

requirements<br />

software 2<br />

reset<br />

host 78<br />

Icebox 81<br />

remote reset 67<br />

resource<br />

utilization 42<br />

RHEL 235<br />

rights<br />

See role; privileges<br />

role 49, 52<br />

add 52<br />

assign to group 53, 58<br />

delete 55<br />

edit 54<br />

grant privileges and permissions 53<br />

RPM errors 89<br />

RPM, install on hosts 192<br />

Runner 163<br />

connect to host 164<br />

disconnect from host 169<br />

execute commands on hosts 167<br />

restrictions 163<br />

troubleshooting 50<br />

view host output 166<br />

S<br />

save<br />

partition 125<br />

schedule provision at next reboot 157<br />

secure remote access 11<br />

set temperature thresholds 72<br />

shut down a host 78<br />

size, thumbnail 41<br />

SLES 235<br />

SNMP<br />

settings 74<br />

traps 74<br />

SNMP subtab 74<br />

software<br />

requirements 2<br />

sort 41<br />

SSL 99<br />


states, system 40<br />

status, version 149<br />

support, technical iv<br />

symbolic links, dangling 91<br />

symlink 91<br />

system<br />

configuration 42<br />

health 40<br />

requirements<br />

hardware 1<br />

operating system 2<br />

status<br />

icons 40<br />

message log 41<br />

overview 42<br />

T<br />

task progress dialog 87<br />

technical support iv<br />

temperature<br />

enable shutdown 70<br />

monitoring, Icebox 67<br />

readings 48<br />

thresholds, set 72<br />

temperature tab 48<br />

third-party power controls 63<br />

thumbnail<br />

size 41<br />

thumbnail view 41, 43<br />

tip iii<br />

TmpFS 135<br />

training iv<br />

transmissions, packet 46<br />

troubleshooting<br />

out-of-memory error 110<br />

payload attributes 87<br />

RPM errors 89<br />

Runner 50<br />

U<br />

UID 50, 62<br />

upgrade 3<br />

<strong>Clusterworx</strong> 4<br />

distribution 2<br />

kernel 2<br />

See also migration utility<br />

user 49, 62<br />

add 62<br />

local user account to payload 101<br />

to group 58<br />

administration 49<br />

default settings 51<br />

privileges 56<br />

assign to group 63<br />

CLI administration 209<br />

delete<br />

local user account from payload 103<br />

delete account 66<br />

disable account 65<br />

edit account 64<br />

group membership 63<br />

multi-group 50<br />

UID 50, 62<br />

user-defined file system 132<br />

V<br />

VCS 144<br />

branch 146<br />

CLI controls 228<br />

command-line controls 228<br />

copy 148<br />

exclude files and directories 151<br />

VCS management console 148<br />

verbosity level, kernel 158, 206<br />

version<br />

branching 144<br />

control system 144<br />

check into 146<br />

check out 147<br />

vcs command 228<br />

status 149<br />

VersionControlService.profile 151<br />

versioned copy 144, 235<br />

virtual memory utilization 45<br />

VPN support 11<br />

W<br />

warning iii<br />

warning messages 184<br />

Windows, customized interface 22<br />

working copy 144, 235<br />

X<br />

xms 231<br />


<strong>Clusterworx</strong> End User<br />

License Agreement<br />

PLEASE READ THE TERMS OF THIS AGREEMENT AND ANY PROVIDED<br />

SUPPLEMENTAL LICENSE TERMS (COLLECTIVELY “Agreement”) CAREFULLY<br />

BEFORE SIGNING THE FINAL ACCEPTANCE DOCUMENTS OF THE SYSTEM. BY ACCEPTING THE<br />

SYSTEM OR SIGNING THIS AGREEMENT FROM LINUX NETWORX YOU AGREE TO ALL OF THE<br />

TERMS OF THIS AGREEMENT. IF YOU ARE ACCESSING THE SOFTWARE ELECTRONICALLY,<br />

INDICATE YOUR ACCEPTANCE OF THESE TERMS BY ANSWERING “YES” AT THE END OF THIS<br />

AGREEMENT. IF YOU DO NOT AGREE TO ALL THESE TERMS, PROMPTLY RETURN THE UNUSED<br />

SOFTWARE OR, IF THE SOFTWARE IS ACCESSED ELECTRONICALLY, ANSWER “NO” AND THE<br />

INSTALLATION PROCESS WILL NOT CONTINUE.<br />

Linux Networx Inc. (“LNXI”) agrees to grant CUSTOMER a license to the release of the <strong>Clusterworx</strong>® 3.4<br />

software and documentation in accordance with the following terms and conditions:<br />

1. Definitions.<br />

1.1. “Documentation” means the documentation specified in the Product Schedule(s), attached hereto and<br />

incorporated herein by this reference, together with all additions, changes and updates furnished by LNXI<br />

under this Agreement or the Software Maintenance Agreement. Any reference to the Documentation herein<br />

shall include each component and/or portion of the Documentation.<br />

1.2. “Product” means the computer programs specified in the Product Schedule(s) together with any and all<br />

corrections and updates furnished by LNXI to CUSTOMER under this Agreement or the Software<br />

Maintenance Agreement. Any reference to the Product herein shall include each component and/or portion<br />

of the Product.<br />

1.3. “Software Maintenance Agreement” means that certain Software Maintenance Agreement set forth<br />

on Exhibit B attached hereto and incorporated herein by this reference.<br />

1.4. “License File” means the binary electronic file distributed by LNXI containing the license key(s) for the<br />

Product.<br />

2. Product Delivery and License.<br />

2.1. Deliverables. Upon execution of this Agreement, LNXI shall deliver to CUSTOMER one reproducible<br />

master copy of the Product, in object code form and one copy of the Documentation, in electronic form.<br />

2.2. Grant. LNXI hereby grants CUSTOMER a personal, nonexclusive, nontransferable license to:<br />

2.2.1. Install and use the Product for internal processing requirements of CUSTOMER, on the number of<br />

CUSTOMER'S computers then authorized under this Agreement. The number of computers authorized<br />

initially is set forth in the Product Schedule(s). CUSTOMER may increase the number of authorized<br />

computers from time to time in the unit quantities and upon payment to LNXI of the applicable amount as set<br />

forth in the Product Schedule(s). Customer shall keep accurate records of the reproduction and location of<br />

each copy of software and, upon request, provide LNXI with complete access to such records and to<br />

CUSTOMER'S facilities, computers and the Product to audit and verify CUSTOMER'S compliance with this<br />

Agreement.<br />

2.2.2. Reproduce and make one copy of the Product and documentation for archival and backup purposes.<br />

2.3. Customer Responsibilities. CUSTOMER shall use the Product and the Documentation only for the<br />

purposes specified in Section 2.2 and in accordance with the following:<br />

2.3.1. CUSTOMER shall use the Product only on the then authorized number of computers which are<br />

owned or used by CUSTOMER and will use the Product and Documentation solely for CUSTOMER'S<br />

internal use.<br />

2.3.2. CUSTOMER shall not distribute modified derivative works from the Product or Documentation except<br />

as expressly permitted in Section 2.2 or with the express written permission of Linux Networx.<br />

2.3.3. CUSTOMER shall not tamper with or emulate the License File or reverse engineer, disassemble or decompile<br />

the Product.<br />

2.3.4. CUSTOMER shall not remove, obscure, or alter any notice of patent, copyright, trade secret, trademark,<br />

or other proprietary right present on any Product or Documentation.<br />

2.3.5. CUSTOMER shall not sublicense, sell, lend, rent, lease, or otherwise transfer all or any portion of the<br />

Product or the Documentation to any third party except as permitted in Section 8.3.<br />

2.4. Protection Against Unauthorized Use. CUSTOMER shall promptly notify LNXI of any unauthorized<br />

use of the Product or Documentation that comes to CUSTOMER'S attention. In the event of any<br />

unauthorized use by any of CUSTOMER'S employees, agents or representatives, CUSTOMER shall use<br />

reasonable efforts to terminate such unauthorized use and to retrieve any copy of the Product or<br />

Documentation in the possession or control of the person or entity engaging in such unauthorized use.<br />

CUSTOMER shall immediately notify LNXI of any legal proceeding initiated by CUSTOMER in connection<br />

with such unauthorized use. LNXI may, at its option and expense, participate in any such proceeding and, in<br />

such event, CUSTOMER shall provide such authority, information and assistance related to such proceeding<br />

as LNXI may reasonably request to protect LNXI'S interests.<br />

2.5. Reservation of Proprietary Rights. CUSTOMER and LNXI agree that the Product and the<br />

Documentation involve valuable copyright, trade secret, trademark and other proprietary rights of LNXI.<br />

Except for the license granted under Section 2.2, LNXI reserves all rights to the Product and the<br />

Documentation. No title to or ownership of any Product or proprietary rights related to the Products or<br />

Documentation is transferred to CUSTOMER under this Agreement. CUSTOMER agrees that modified or<br />

enhanced versions of the Product do not constitute a program different from the Product, and as such, fall<br />

under the other terms and conditions of this Agreement.<br />

3. Software Maintenance and Support.<br />

3.1. CUSTOMER and LNXI shall execute the Software Maintenance Agreement that shall govern all<br />

maintenance, support and update obligations of LNXI and CUSTOMER with respect to the Product and the<br />

Documentation.<br />

4. Termination.<br />

4.1. Term. The term of this Agreement and the license set forth in Section 2.2 shall commence on the date<br />

of this Agreement and shall end upon the earlier to occur of the termination of this Agreement pursuant to<br />

Section 4.2 or 4.3 or the date shown by the Product as specified by the License File.<br />

4.2. Termination By CUSTOMER. CUSTOMER may terminate this Agreement and the license by giving<br />

thirty (30) days' written notice to LNXI. Any and all outstanding fees due must be paid commensurate with<br />

such notice of termination.<br />

4.3. Termination By LNXI. If CUSTOMER fails to pay any amount due hereunder when due, LNXI may<br />

terminate this Agreement and the license, in addition to its other rights and remedies under law. If<br />

CUSTOMER defaults in the performance of or compliance with any of its obligations under this Agreement<br />

(other than a failure to pay any amount due hereunder), and such default has not been remedied or cured<br />

within thirty (30) days after LNXI gives CUSTOMER written notice specifying the default, LNXI may<br />

terminate this Agreement and the license, in addition to its other rights and remedies under law. An event of<br />

default shall include but not be limited to the following: (a) if CUSTOMER files a petition under any chapter<br />

of the Bankruptcy Code, as amended, or for the appointment of a receiver, (b) if an involuntary petition in<br />

bankruptcy is filed against CUSTOMER and said petition is not discharged within thirty (30) days, (c) if<br />

CUSTOMER shall become insolvent or make a general assignment for the benefit of its creditors, (d) if the<br />

business or property of CUSTOMER shall come into the possession of its creditors, a governmental agency or<br />

a receiver, (e) if any proceedings supplementary to judgment shall be commenced against CUSTOMER, or (f)<br />

if any judgment against CUSTOMER, not fully bonded, shall remain unpaid in whole or in part for at least<br />

five (5) days after entry thereof, then, in any case, the other party may at its option terminate this Agreement.<br />

4.4. Post Termination. Upon termination of this Agreement by LNXI, CUSTOMER shall promptly cease use<br />

of the Product and Documentation and return to LNXI all copies of the Product and Documentation then in<br />

CUSTOMER'S possession or control.<br />

4.5. Survival. Sections 1, 2.5, 4, 5, 6, and 7 and all other provisions of this Agreement which may<br />

reasonably be interpreted or construed as surviving the termination of this Agreement, shall survive the<br />

termination of this Agreement.<br />

5. Warranties.<br />

5.1. DISCLAIMER AND RELEASE. CUSTOMER ACKNOWLEDGES THAT EXCEPT WITH RESPECT TO<br />

THE EXPRESS WARRANTY IN SECTION 5.2 BELOW, CUSTOMER HEREBY WAIVES, RELEASES AND<br />

DISCLAIMS, ALL WARRANTIES, OBLIGATIONS AND LIABILITIES OF LNXI AND ALL OTHER<br />

REMEDIES, RIGHTS AND CLAIMS OF CUSTOMER, EXPRESS OR IMPLIED, ARISING BY LAW OR<br />

OTHERWISE, WITH RESPECT TO THE PRODUCTS, THE DOCUMENTATION, ANY SERVICES<br />

PROVIDED BY LNXI AND ANY OTHER ITEMS SUBJECT TO THIS AGREEMENT, INCLUDING, BUT NOT<br />

LIMITED TO: (A) ANY IMPLIED WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR<br />

PURPOSE; (B) ANY IMPLIED WARRANTY ARISING FROM COURSE OF PERFORMANCE, COURSE OF<br />

DEALING OR USAGE OF TRADE; (C) ANY OBLIGATION, LIABILITY, RIGHT, REMEDY, OR CLAIM IN<br />

TORT, NOTWITHSTANDING ANY FAULT, NEGLIGENCE, STRICT LIABILITY OR PRODUCT LIABILITY<br />

OF LNXI (WHETHER ACTIVE, PASSIVE OR IMPUTED); AND (D) ANY OBLIGATION, LIABILITY,<br />

REMEDY, RIGHT OR CLAIM FOR INFRINGEMENT. (EXCEPT AS PROVIDED IN SECTION 5.2). WITHOUT<br />

LIMITING THE FOREGOING LNXI DOES NOT WARRANT THAT THE PRODUCT IS FREE FROM ALL<br />

BUGS, ERRORS AND OMISSIONS.<br />

5.2. Proprietary Rights. LNXI warrants that the Product does not infringe any U.S. copyright. LNXI will<br />

defend CUSTOMER against any proceeding based upon any failure to satisfy the foregoing warranty,<br />

provided that: CUSTOMER notifies LNXI of the proceeding promptly after it is commenced; CUSTOMER<br />

tenders sole control of the defense of the proceeding to LNXI; CUSTOMER provides such assistance in<br />

defense of the proceeding as LNXI may reasonably request; and CUSTOMER complies with any court order<br />

or settlement made in connection with the proceeding (e.g., relating to the future use of the affected Product).<br />

Further, LNXI will: indemnify CUSTOMER against any and all damages, costs and attorneys' fees awarded<br />

against CUSTOMER in connection with such proceeding as a result of any such noncompliance; reimburse<br />

the expenses reasonably incurred by CUSTOMER to provide any assistance requested by LNXI in defense of<br />

the proceeding; and, if the action is settled, pay any amounts agreed to by LNXI in settlement of any claims<br />

based upon such noncompliance. If on account of such proceeding, CUSTOMER'S right to use the Product<br />

is materially diminished, LNXI may refund all or an equitable portion of the compensation paid by<br />

CUSTOMER to LNXI for the same in full satisfaction of CUSTOMER'S claims relating to such<br />

noncompliance.<br />

5.3. Warranty Limitations. The warranty set forth in Section 5.2 applies only to the latest release of the<br />

Product made available by LNXI to CUSTOMER. Such warranty does not apply to any noncompliance<br />

resulting from misuse, casualty loss, use or combination of the Product with any products, goods, services or<br />

other items furnished by anyone other than LNXI or any modification not made by or for LNXI.<br />

6. Limitations of Liability.<br />

6.1. Excused Performance. Neither party will be liable for, or be considered to be in breach of or default<br />

under this Agreement on account of, any delay or failure to perform as required by this Agreement (other<br />

than monetary obligations) as a result of any cause or condition beyond such party's reasonable control.<br />

6.2. DOLLAR LIMITATION. LNXI'S LIABILITY (WHETHER IN CONTRACT, WARRANTY, TORT OR<br />

OTHERWISE; AND NOTWITHSTANDING ANY FAULT, NEGLIGENCE, REPRESENTATION, STRICT<br />

LIABILITY OR PRODUCT LIABILITY OF LNXI) UNDER THIS AGREEMENT (EXCEPT FOR LNXI'S<br />

OBLIGATIONS UNDER SECTION 5.2) WITH REGARD TO ANY PRODUCT, DOCUMENTATION,<br />

SERVICES OR OTHER ITEMS SUBJECT TO THIS AGREEMENT SHALL IN NO EVENT EXCEED THE<br />

TOTAL COMPENSATION PAID BY CUSTOMER TO LNXI UNDER THIS AGREEMENT.<br />

6.3. DAMAGE LIMITATION. IN NO EVENT WILL LNXI HAVE ANY OBLIGATION OR LIABILITY<br />

(WHETHER IN CONTRACT, WARRANTY, TORT (INCLUDING NEGLIGENCE), PRODUCT LIABILITY OR<br />

OTHER CAUSE OF ACTION) FOR THE COST OF COVER OR FOR ANY INCIDENTAL, DIRECT, INDIRECT<br />

OR CONSEQUENTIAL DAMAGES OR LIABILITIES (INCLUDING, BUT NOT LIMITED TO, ANY LOSS OF<br />

REVENUE, PROFIT OR BUSINESS) EVEN IF LNXI OR ITS EMPLOYEES AND REPRESENTATIVES HAVE<br />

BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.<br />

7. Miscellaneous.<br />

7.1. Confidential Information. CUSTOMER shall not disclose the terms of this Agreement except as<br />

required by law or governmental regulation without LNXI'S prior written consent except that CUSTOMER<br />

may disclose this Agreement on a confidential basis to CUSTOMER'S accountants, attorneys, parent<br />

organizations and financial advisors and lenders.<br />

7.2. Notices. Any notice or other communication under this Agreement given by either party to the other<br />

will be deemed to be properly given if given in writing and delivered in person, by facsimile, if acknowledged<br />

as received by return facsimile, by nationally-recognized overnight courier, next business day delivery<br />

requested, or by registered or certified U.S. mail, properly addressed and stamped with the required postage<br />

with return receipt requested, to the intended recipient at its address specified in this Agreement. Such<br />

notice shall be deemed given, if by mail, five days after depositing such notice with the U.S. Post Office, if by<br />

overnight courier, the next business day following delivery of such notice to such courier, and, if in person or<br />

by facsimile, the same day as so given. Either party may from time to time change its address for notices<br />

under this Section by giving the other party notice of the change in accordance with this Section.<br />

7.3. Assignment. CUSTOMER will not assign (directly, by operation of law or otherwise) this Agreement or<br />

any of its rights under this Agreement without the prior written consent of LNXI. Any merger or other<br />

similar transaction resulting in the transfer of fifty percent or more of the capital stock of CUSTOMER shall<br />

be deemed to be a change in control. Subject to the foregoing, this Agreement is binding upon, inures to the<br />

benefit of and is enforceable by the parties and their respective successors and assigns.<br />

7.4. Nonwaiver. Any failure of either party to insist upon or enforce performance by the other party of any<br />

of the provisions of this Agreement or to exercise any rights or remedies under this Agreement will not be<br />

interpreted or construed as a waiver or relinquishment of such party's right to assert or rely upon such<br />

provision, right or remedy in that or any other instance; rather the same will be and remain in full force and<br />

effect.<br />

7.5. Entire Agreement. This Agreement consists of the Software License Agreement, the Software<br />

Maintenance Agreement and the Product Schedule(s), and supersedes any and all prior agreements, between<br />

LNXI and CUSTOMER relating to the Product, the Documentation and other items subject to this<br />

Agreement. No amendment of this Agreement will be valid unless set forth in a written instrument signed by<br />

both parties.<br />

7.6. Compliance With Laws. LNXI and CUSTOMER shall each comply with all applicable laws,<br />

regulations, rules, orders and other requirements, now or hereafter in effect, of any applicable governmental<br />

authority, in their performance of this Agreement. Without limiting the generality of the foregoing,<br />

CUSTOMER will comply with all export control laws and regulations of the United States in dealing with the<br />

Product including its export and use of the Product outside the United States.<br />

8. Governing Law. THIS AGREEMENT WILL BE INTERPRETED, CONSTRUED AND ENFORCED IN<br />

ALL RESPECTS IN ACCORDANCE WITH THE LAWS OF THE STATE OF UTAH WITHOUT REFERENCE<br />

TO ITS CHOICE OF LAW RULES AND NOT INCLUDING THE 1980 U.N. CONVENTION ON CONTRACTS<br />

FOR THE INTERNATIONAL SALE OF GOODS. CUSTOMER WILL NOT COMMENCE OR PROSECUTE<br />

ANY CLAIM, ACTION, SUIT OR PROCEEDING RELATING TO THIS AGREEMENT OR THE PRODUCT,<br />

DOCUMENTATION, SERVICES OR OTHER ITEMS SUBJECT TO THIS AGREEMENT OTHER THAN IN<br />

THE COURTS OF THE STATE OF UTAH, SALT LAKE COUNTY, OR THE UNITED STATES DISTRICT<br />

COURT LOCATED IN SALT LAKE COUNTY. CUSTOMER HEREBY IRREVOCABLY CONSENTS TO THE<br />

JURISDICTION AND VENUE OF THE COURTS IDENTIFIED IN THE PRECEDING SENTENCE IN<br />

CONNECTION WITH ANY CLAIM, ACTION, SUIT OR PROCEEDING RELATING TO THIS AGREEMENT<br />

OR ANY PRODUCT, DOCUMENTATION, SERVICES OR OTHER ITEMS SUBJECT TO THIS AGREEMENT.<br />

Each party's authorized representative for execution of this Agreement or any amendment thereto shall be a<br />

president, partner, or another duly authorized signatory of each respective party. The parties executing this<br />

Agreement warrant that they have the requisite authority to bind their companies to the terms and<br />

conditions of this Agreement.<br />
