Recommended Components

Hardware selection for a ClusterWare system is typically driven by the price/performance ratio. Scyld recommends the components listed below:

Processors. 64-bit Intel® or AMD™ x86_64 architecture required, single-core or multi-core

Architecture. 1, 2, or 4 sockets per motherboard

Physical Memory. 4096 MBytes (4 GBytes) or more preferred, minimum 2048 MBytes (2 GBytes); see the verification sketch below.

Operating System. Red Hat Enterprise Linux 5 (RHEL5) or CentOS 5 required

The Release Notes state the specific version and update of Red Hat or CentOS required to support the ClusterWare release you are installing.
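Once a candidate master node is running, the memory and operating system requirements above can be confirmed quickly. The script below is a minimal sketch, assuming a Linux system that provides /proc/meminfo and the Red Hat-style /etc/redhat-release file; the thresholds mirror the figures listed above.

    #!/usr/bin/env python
    # Report installed memory and OS release on a candidate master node.
    # A sketch only: assumes /proc/meminfo and /etc/redhat-release exist.

    def mem_total_mb():
        # /proc/meminfo reports "MemTotal:  NNNN kB" near the top.
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemTotal:"):
                    return int(line.split()[1]) // 1024
        raise RuntimeError("MemTotal not found in /proc/meminfo")

    mb = mem_total_mb()
    print("Memory: %d MBytes" % mb)
    if mb < 2048:
        print("  below the 2048 MByte minimum")
    elif mb < 4096:
        print("  meets the minimum; 4096 MBytes or more is preferred")

    with open("/etc/redhat-release") as f:
        print("OS: " + f.read().strip())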

Network Interface Controllers (NICs). Gigabit Ethernet (Fast Ethernet at a minimum) PCI-X or PCI-Express adapters (with existing Linux driver support) in each node for the internal private IP network.

The master node typically employs an additional NIC for connecting the cluster to the external network. Select this NIC to match the external network infrastructure (e.g., Fast Ethernet if the external network you are connecting the cluster to is Fast Ethernet).

Network Switch. The master node's private network NIC and all compute nodes should be connected to a non-blocking Gigabit Ethernet switch for the internal private network. At a minimum, the network switch should match the speed of the network cards.

The switch is a critical component for correct operation and performance of the cluster. In particular, the switch must be able to handle all network traffic over the private interconnect, including cluster management traffic, process migration, library transfer, and storage traffic. It must also properly handle DHCP and PXE.

Tip

It is sometimes difficult to identify which NIC is connected to the private network. Take care to connect the master node to the private switch through a NIC whose speed is the same as or higher than the speed of the NICs in the compute nodes.
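One way to sort out the NICs is to inspect the link state and negotiated speed of each interface. The sketch below assumes a Linux kernel that publishes interface attributes under /sys/class/net; speed is reported in Mbit/s and is unreadable while a link is down, hence the fallback value.

    #!/usr/bin/env python
    # List each NIC with its link state and negotiated speed to help
    # identify which interface faces the private switch. A sketch only:
    # assumes the kernel exposes attributes under /sys/class/net.
    import os

    SYS_NET = "/sys/class/net"

    def read_attr(iface, attr):
        # 'speed' raises an error while the link is down, so report
        # a placeholder instead of crashing.
        try:
            with open(os.path.join(SYS_NET, iface, attr)) as f:
                return f.read().strip()
        except (IOError, OSError):
            return "n/a"

    for iface in sorted(os.listdir(SYS_NET)):
        if iface == "lo":    # skip the loopback device
            continue
        carrier = read_attr(iface, "carrier")   # "1" means link up
        speed = read_attr(iface, "speed")       # e.g. "1000" for GigE
        print("%-8s link=%s speed=%s Mb/s" % (iface, carrier, speed))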

Disk Drives. For the master node, we recommend using either Serial ATA (SATA) or SCSI disks in a RAID 1 (mirrored) configuration. The operating system on the master node requires approximately 3 GB of disk space. We recommend configuring the compute nodes without local disks (diskless).
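As a quick sanity check before installation, free space on the target file system can be compared against that 3 GB figure. The following is a minimal sketch using os.statvfs; the mount point checked is an assumption, so point it at the file system that will hold the operating system.

    #!/usr/bin/env python
    # Check free space against the ~3 GB the master node's OS needs.
    # A sketch only: TARGET is a hypothetical mount point.
    import os

    NEEDED_GB = 3.0
    TARGET = "/"    # assumption: adjust to the target file system

    st = os.statvfs(TARGET)
    free_gb = st.f_bavail * st.f_frsize / (1024.0 ** 3)
    print("%s: %.1f GBytes available" % (TARGET, free_gb))
    if free_gb < NEEDED_GB:
        print("  insufficient for the ~%.0f GByte OS footprint" % NEEDED_GB)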

If the compute nodes do not support PXE boot, a bootable CD-ROM drive is required. If local disks are required on the compute nodes, we recommend using them for storing data that can be easily re-created, such as scratch storage or local copies of globally-available data.

If you plan to create boot CDs for your compute nodes, your master node requires a CD-RW or writable DVD drive.

In the default configuration, /home on the master node is exported to the compute nodes; other file systems may be exported as well. If you expect heavy file system traffic, we recommend that you provide a second pair of disks in a RAID 1 (mirrored) configuration for these exported file systems. Otherwise, accesses to the exported file systems can interfere with the master node's access to its own system files, degrading its ability to launch new processes and manage the cluster.
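A coarse way to spot this risk is to check whether each exported directory is its own mount point. The sketch below compares device IDs with os.stat(): a path sharing st_dev with / lives on the root file system and will certainly contend with system I/O, while a distinct st_dev means a separate file system (though not necessarily a separate physical disk). The export list here is an assumption based on the default /home export.

    #!/usr/bin/env python
    # Flag exported directories that live on the root file system.
    # A sketch only: EXPORTED reflects the default /home export.
    import os

    EXPORTED = ["/home"]    # extend with any other exported paths

    root_dev = os.stat("/").st_dev
    for path in EXPORTED:
        if os.stat(path).st_dev == root_dev:
            print("%s is on the root file system; consider a"
                  " separate RAID 1 pair" % path)
        else:
            print("%s is a separate file system" % path)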

Optional Hardware Components. Gigabit Ethernet with a non-blocking switch serves most users. However, some applications benefit from a lower-latency interconnect.

InfiniBand is an industry-standard interconnect providing low-latency messaging, IP, and storage support. InfiniBand can be configured as a single universal fabric serving all of the cluster's interconnect needs.

More information about InfiniBand may be found at the InfiniBand Trade Association web site at http://www.infinibandta.org. Scyld supports InfiniBand as a supplemental messaging interconnect in addition to Ethernet for cluster control communications.