
 
HPC Cluster @ UNITN

"For over a decade prophets have voiced the contention that the organization of a single computer has reached its limits and that the truly significant advances can be made only by interconnection of a multiplicity of computers."

Gene Amdahl, 1967

 

UNITN offers professors and researchers access to its HPC cluster facility.

 

System architecture

 

Today the HPC cluster is composed of 25 CPU computing nodes (500 cores in total), 2 GPU computing nodes (roughly 20,000 CUDA cores in total) and two frontend (head) nodes.

All nodes are interconnected over an InfiniBand network and have 10 Gb/s connectivity to the University MAN.

User home directories and software are installed on shared storage (Dell Compellent) and replicated to a similar storage system at the backup site.

The chosen operating system is Linux CentOS 7, while the cluster management software is Altair PBS.
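Since jobs on the cluster are submitted through Altair PBS, a batch script might look like the sketch below. The queue name, resource limits, job name and executable are illustrative assumptions, not values taken from this page; only the node size (20 cores per server) comes from the specifications that follow.

```shell
#!/bin/bash
# Hypothetical PBS job script. Queue name, walltime and program
# are assumptions for illustration, not site-specific values.
#PBS -N example_job
#PBS -l select=1:ncpus=20       # one full node (20 cores per server)
#PBS -l walltime=01:00:00
#PBS -q workq                   # assumed queue name

cd "$PBS_O_WORKDIR"             # run from the directory where qsub was called
./my_program                    # assumed user executable
```

A script like this would be submitted with `qsub job.pbs` and monitored with `qstat`.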

Memory per core: 12 GB
Cores per server: 20
Total cluster cores: 500
Total cluster CUDA cores: 20,000
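The totals above can be cross-checked from the node counts. The per-board CUDA core count is an assumption (an Nvidia K80 carries two GK210 GPUs, 2 × 2496 = 4992 CUDA cores), which is why the cluster total is quoted as approximately 20,000.

```shell
# Consistency check for the figures above.
cpu_cores=$((25 * 20))        # 25 CPU nodes x 20 cores each
cuda_cores=$((2 * 2 * 4992))  # 2 GPU nodes x 2 K80 boards x 4992 cores (assumed)
echo "$cpu_cores"             # 500
echo "$cuda_cores"            # 19968, i.e. ~20,000
```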

Head nodes:

2 × Dell R630, each with 2 Intel Xeon E5-2650 v3 CPUs @ 2.3 GHz, 64 GB of RAM, 2 hot-plug 300 GB 10K RPM SAS hard drives, 10 Gb SFP+, Mellanox ConnectX-3 single-port VPI FDR QSFP+ InfiniBand adapter

CPU nodes:

17 × Dell R630, each with 2 Intel Xeon E5-2650 v3 CPUs @ 2.3 GHz, 256 GB of RAM, 2 hot-plug 300 GB 10K RPM SAS hard drives, 10 Gb SFP+, Mellanox ConnectX-3 single-port VPI FDR QSFP+ InfiniBand adapter

8 × Dell R630, each with 2 Intel Xeon E5-2650 v3 CPUs @ 2.3 GHz, 256 GB of RAM, 1 hot-plug 200 GB SSD, 10 Gb SFP+, Mellanox ConnectX-3 single-port VPI FDR QSFP+ InfiniBand adapter

GPU nodes:

2 × Dell C4130, each with 2 Intel Xeon E5-2660 v3 CPUs @ 2.6 GHz, 256 GB of RAM, 2 hot-plug 300 GB 10K RPM SAS hard drives, 1 × 10 Gb SFP+, 1 Mellanox ConnectX-3 single-port VPI FDR QSFP+ InfiniBand adapter, 2 Nvidia K80 GPUs

Connectivity:

Mellanox InfiniBand switch, model MIS5030Q-1SFC, with 36 QDR ports (40 Gb/s)

 

Request Service Access

Requests must be submitted by structured staff of the University of Trento (professors, researchers) by sending an email to gestione.sistemi [at] unitn.it

Access can also be requested for non-structured personnel with a UNITN account (students, technical-administrative staff (PTA), postdocs, etc.)

 

Cluster statistics

 

CPU Utilization

 

[Per-node CPU utilization graphs for cnode01–cnode25 and gnode01–gnode02 (cluster.net)]

 

Other Statistics


 

 
