Heracles Architecture - Multi-Core Cluster

Server name: heracles.ucdenver.pvt


The Heracles multi-core cluster consists of the following primary components:

Master Node

The master node manages all computing resources and operations on the Heracles cluster; it corresponds to node 0. It is also the machine that users log into to create, edit, and compile programs, and from which they submit those programs for execution on the compute nodes.

Users do not run their programs on the master node.

Repeat: user programs MUST NOT be run on the master node. They must be submitted to the compute nodes for execution.

The master node on the Heracles cluster features:

Compute Nodes - node 2 to node 16

Compute nodes execute the jobs submitted by users. From the master node, users may submit programs to run on one or more compute nodes.
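The submission workflow above can be sketched as a batch script. Note this is a hypothetical example: the page does not name the scheduler running on Heracles, so Slurm (and its `sbatch`/`squeue` commands) is an assumption here; check the cluster's own documentation for the actual submission commands.

```shell
#!/bin/bash
# hello.sbatch -- hypothetical job script; assumes Heracles runs Slurm,
# which this page does not confirm.
#SBATCH --job-name=hello        # job name shown in the queue
#SBATCH --nodes=1               # run on a single compute node
#SBATCH --ntasks=1              # one task
#SBATCH --output=hello.%j.out   # %j expands to the Slurm job ID

# Program compiled on the master node, executed here on a compute node.
./hello
```

Under that assumption, the script would be submitted from the master node with `sbatch hello.sbatch` and monitored with `squeue`; the job itself then runs on whichever compute node the scheduler assigns.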

There are 15 compute nodes (nodes 2 to 16) on Heracles.

The fifteen compute nodes together have:

Node 1 with 2 x GPUs - Nvidia Ada Lovelace

Configuration of node 18 with 4 x GPUs - Nvidia Tesla P100:

30 MB L3 cache, DDR4-2400, 9.6 GT/s QPI, 105 W

Supports Hyper-Threading and Turbo Boost up to 2.9 GHz

For detailed information about the GPUs on node18, run the following commands:

       ssh node18 /usr/local/cuda/samples/Samples/1_Utilities/deviceQuery/deviceQuery

       ssh node18 nvidia-smi

       ssh node2 nvidia-smi
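To check GPU state on every compute node rather than one at a time, the nvidia-smi commands above can be wrapped in a loop. The node2..node16 names follow the compute-node range stated earlier on this page; the ssh calls assume passwordless (key-based) access between cluster nodes, which is common on clusters but not confirmed here.

```shell
# Query each compute node (node2..node16) in turn. Nodes without a GPU
# (or unreachable nodes) make nvidia-smi/ssh fail; we print a notice instead.
for n in $(seq 2 16); do
  echo "== node$n =="
  ssh "node$n" nvidia-smi 2>/dev/null || echo "node$n: no GPU or unreachable"
done
```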