Installing Cluster Infrastructure ("CI") on Red Hat 9
These instructions describe how to install a CI cluster
on Red Hat 9 ("RH9") with minimal hardware requirements. All you
need is two or more computers connected by an Ethernet network.
This network is called the "interconnect", and it should be
private for both security and performance reasons. Each
individual computer in the cluster is called a "node".
CI contains a subset of the functionality of OpenSSI
(http://OpenSSI.org). It provides an in-kernel Cluster Membership
Service ("CLMS") and an Internode Communication Subsystem ("ICS").
CLMS keeps track of nodes joining and leaving the cluster and
guarantees an identical view of cluster membership on every node.
An included library, libcluster, provides access to CLMS
information from user mode. The libcluster API is documented in
its man pages, and several programs provide command-line access
to it.
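As a rough illustration, a user-mode program can ask CLMS, through
libcluster, which node it is running on. The header name, the
clusternode_num() prototype, and the link flag below are
assumptions based on the NSC-style API, not verbatim from CI;
consult the libcluster man pages installed with CI for the real
names and signatures.

    /*
     * Minimal sketch only: header path, clusternode_num(), and
     * -lcluster are assumed.  Check the libcluster man pages for
     * the actual interface.
     */
    #include <stdio.h>
    #include <cluster.h>   /* assumed location of the libcluster header */

    int main(void)
    {
        /* Ask CLMS (via libcluster) for this node's number. */
        clusternode_t me = clusternode_num();

        printf("running on cluster node %lu\n", (unsigned long)me);
        return 0;
    }

    Compile (assumed): cc -o whoami whoami.c -lcluster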
CLMS depends on ICS, a protocol for passing Remote Procedure
Calls ("RPCs") between nodes, using node numbers as addresses.
ICS runs on top of TCP and is designed so that it can be ported
to other reliable transport protocols. There is currently no API
for using ICS from user mode, although docs/enhancing.txt
discusses how to use both ICS and CLMS from kernel mode.
These instructions assume you are doing a fresh install of
CI. They must be followed for each node you wish to add to
the cluster.
1. Install RH9.
2. When configuring your firewall, do one of the following:
(a) designate as "trusted" the interface for the cluster interconnect
(b) open the port used by tftp-server (UDP port 69); a sample
    iptables rule is shown after this list
(c) disable the firewall
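If you choose option (b), one way to open the port on RH9 is with
an iptables rule like the sketch below; adapt it to however your
firewall is actually managed.

    # Allow incoming tftp traffic on the interconnect (UDP port 69).
    iptables -A INPUT -p udp --dport 69 -j ACCEPT

    # Make the rule persistent across reboots if you use the
    # RH9 iptables service.
    service iptables save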
3. Extract the CI tarball and run the ./install script. It will
install the necessary packages, and prompt you to reboot.
Select the CI kernel during boot.
4. Create a /etc/cluster.conf file. This will contain the boot
parameters used by cluster_config to add the node to the cluster.
Each boot parameter should be on a line of its own. A complete
example follows the parameter descriptions below.
IFCONFIG=eth<x>:<ipaddr>:<netmask>
CLUSTER_MASTER=<y>:<clms_ipaddr1>[,<z>:<clms_ipaddr2>]
CLUSTER_NODENUM=<a>
[ICS_ROUTE="default"|<host ipaddr>:<gateway ipaddr>:<metric>]
The IFCONFIG variable describes the ICS interface, with its
fields separated by colons:
x = Ethernet device number for the ICS network card
ipaddr = IP address of this node's ICS interface
netmask = netmask of this node's ICS interface
The CLUSTER_MASTER variable contains a comma-separated list of
potential CLMS master nodes. The cluster cannot form without a
CLMS master node. Each master node is specified by its node
number and IP address, separated by a colon:
y = node number of the first potential CLMS master
clms_ipaddr1 = IP address of the first potential CLMS master
z, clms_ipaddr2 = node number and IP address of any additional
potential CLMS masters
The CLUSTER_NODENUM variable contains the node number for this particular
machine. It should be within the range of 1 to NSC_MAX_NODE_VALUE.
a = Node number of this cluster machine.
The ICS_ROUTE line is optional. Use it to set a default gateway
or a route to a specific host. All routes are static and run over
the ICS interface. Multiple ICS_ROUTE lines can be specified to
set up multiple routes (the maximum is 32). This feature allows
cluster nodes to span multiple subnets.
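As an illustration (all node numbers and addresses below are made
up), node 3 of a cluster whose potential CLMS masters are nodes 1
and 2 might use a /etc/cluster.conf like this:

    IFCONFIG=eth1:192.168.50.3:255.255.255.0
    CLUSTER_MASTER=1:192.168.50.1,2:192.168.50.2
    CLUSTER_NODENUM=3
    ICS_ROUTE=192.168.60.4:192.168.50.254:1

Here eth1 carries the interconnect, and the optional ICS_ROUTE
line adds a static route to a node at 192.168.60.4 on another
subnet, through the gateway 192.168.50.254 with a metric of 1.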
5. Run the cluster_start script to add this node to the cluster:
/usr/sbin/cluster_start
6. Repeat steps 1-5 for each node you want to add.
If you have questions or comments that are not addressed on
the website, do not hesitate to send a message to the CI
discussion forum:
ci-linux-devel@lists.sf.net
Maintained by Brian J. Watson <Brian.J.Watson@hp.com>