Xanadu, Mantis, and NMR High-Performance Compute Clusters
Over 12,000 CPU cores and 77 TB of RAM, capable of processing at 360 TFLOPS and comprising the following hardware:
- 4 Dell R620 nodes with two 10-core Intel Xeon E5-2660 v2 processors, 256 GB RAM, and 10 GbE interfaces
- 17 Dell R660 nodes with two 32-core Intel Xeon Platinum 8462Y+ processors, 512 GB RAM, and 10 GbE interfaces
- 21 Dell R730 nodes with two 18-core Intel Xeon E5-2697 v4 processors, 256 GB RAM, and 10 GbE interfaces
- 27 Dell R740 nodes with two 20-core Intel Xeon Gold 6138 processors, 192 or 384 GB RAM, and 10 GbE interfaces
- 3 Dell R815 nodes with four 12-core AMD Opteron 6172 processors, 396 GB RAM, and 10 GbE interfaces
- 1 Dell R905 node with four 6-core AMD Opteron 8435 processors, 256 GB RAM, and 10 GbE interfaces
- 9 Dell C6145 nodes with four 12-core AMD Opteron 6172 processors, 256 GB RAM, and 10 GbE interfaces
- 8 Dell R6525 nodes with two 64-core AMD EPYC 7662 processors, 256 GB RAM, and 10 GbE interfaces
- 21 Dell R7525 nodes with two 28-core AMD EPYC 7453 processors, 256 GB RAM, and 10 GbE interfaces
- 24 Penguin MH61-HD3-ZB nodes with Intel Xeon Gold 6230 processors, 192 GB RAM, and 10 GbE interfaces
- 15 Supermicro AS-2024US-TRT nodes with two 64-core AMD EPYC 7713 processors, 512 GB RAM, and 10 GbE interfaces
- 1 Penguin MZ92-FS0-00 node with two 24-core AMD EPYC 7352 processors, 2048 GB RAM, and 10 GbE interfaces
- 2 Dell XE8545 AI nodes with two 64-core AMD EPYC 7763 processors, 2048 GB RAM, and 10 GbE interfaces
- 1 Supermicro 6049GP-TRT node with four 28-core Intel Xeon Platinum 8176 processors, 1500 GB RAM, and 10 GbE interfaces
- 2 Quanta Cloud S76 nodes, each with an NVIDIA Grace Hopper Superchip (Grace CPU and Hopper GPU), 480 GB RAM, and 10 GbE interfaces
- 15 NVIDIA A100 80 GB PCIe GPUs, 6 NVIDIA A10 GPUs, and 2 NVIDIA M10 GPUs
- Job management provided by the Slurm Workload Manager, with node provisioning by Bright Cluster Manager and Confluent; a sample job script is sketched below
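To illustrate how work reaches nodes on a Slurm-managed cluster such as these, here is a minimal batch-script sketch. The partition name, resource values, and workload are hypothetical placeholders for illustration, not settings documented for Xanadu, Mantis, or NMR:

```bash
#!/bin/bash
#SBATCH --job-name=example      # name shown in the queue
#SBATCH --partition=general     # hypothetical partition name
#SBATCH --ntasks=1              # run a single task
#SBATCH --cpus-per-task=4       # four CPU cores for that task
#SBATCH --mem=16G               # 16 GB of RAM
#SBATCH --time=01:00:00         # one-hour wall-clock limit
#SBATCH --output=%x_%j.out      # write output to jobname_jobid.out

# Placeholder workload; replace with your own commands.
echo "Running on $(hostname) with $SLURM_CPUS_PER_TASK cores"
```

A script like this would be submitted with `sbatch example.sh` and monitored with `squeue -u $USER`; a GPU is typically requested by adding a directive such as `#SBATCH --gres=gpu:1`.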
Virtualization Infrastructure
- 16 Dell PowerEdge R640 hosts with 1280 CPU cores and 10.8 TB RAM running VMware 8, hosting 300+ Windows and Linux virtual machines with an SSD cache tier for high-IOPS performance
Datacenter Infrastructure
- UPS- and generator-backed power with redundant cooling
- 3×40 GbE dark fiber connection to an off-site disaster recovery (DR) location
Network (100+ GbE)
- Fully non-oversubscribed 10/40/100/400 GbE datacenter network core layer
- BioScienceCT Research Network – 100 GbE to CEN, Internet2, Storrs
- HPC Science DMZ – low-latency, 80 Gb/s-capable firewall
Storage
- Over 20 PB of storage, including 775 TB Quantum, 3.4 PB EMC Isilon, and 10.5 PB Qumulo scale-out clusters, along with 2.3 PB Amplidata and 3.1 PB Scality Geo-Spread cloud storage
Computational Biology Core
- The CBC provides computational power and technical support to UConn researchers and affiliates.
- Please visit the Computational Biology Core (CBC) website for more information.
- Join the official CBC Slack channel for up-to-date news and to request help: UCONN-CBC.slack.com.