Xanadu High-Performance Compute Clusters
2168 CPU cores with 11 TB of RAM, capable of processing at 50 TFLOPS, comprising the following hardware:
- 10 Dell R730 nodes with two 18-core Intel Xeon E5-2697v4 processors, 256 GB RAM, and 10-gigabit interfaces
- 19 Dell C6145 nodes with four 12-core AMD Opteron processors, 256 GB RAM, and 10-gigabit interfaces
- 15 Dell C6145 nodes with four 8-core AMD Opteron processors, 256 GB RAM, and 10-gigabit interfaces
- 2688 NVIDIA Tesla M2075 GPU cores
- Job management provided by the Slurm Workload Manager; node provisioning provided by Bright Cluster Manager
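Jobs on the cluster are scheduled through Slurm. As an illustration, a minimal batch script might look like the sketch below; the partition and module names are placeholders for this example, not actual Xanadu settings.

```shell
#!/bin/bash
# Hypothetical Slurm batch script: requests 4 CPU cores and 16 GB of RAM
# for one hour. Partition and module names are placeholders.
#SBATCH --job-name=example_job
#SBATCH --partition=general        # placeholder partition name
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G
#SBATCH --time=01:00:00
#SBATCH -o %x_%j.out               # stdout: jobname_jobid.out
#SBATCH -e %x_%j.err               # stderr: jobname_jobid.err

module load blast                  # placeholder module name
echo "Running on $(hostname) with $SLURM_CPUS_PER_TASK cores"
```

A script like this would be submitted with `sbatch example_job.sh` and monitored with `squeue -u $USER`.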
Virtualization Infrastructure
- 20 Dell PowerEdge hosts with 1368 CPU cores, 20,160 GPU cores, and 7 TB RAM, running VMware 6.7 and hosting 300+ Windows and Linux virtual machines plus Horizon desktop virtualization, backed by a high-IOPS SSD cache tier
Datacenter Infrastructure
- UPS- and generator-backed power with redundant cooling
- 3×40 GbE dark fiber connection to an off-site DR location
Network (100+ GbE)
- Full non-oversubscribed 10/40 GbE datacenter network core layer
- BioScienceCT Research Network – 100 GbE to CEN, Internet2, Storrs
- New HPC Science DMZ – low-latency, 80 Gbps-capable firewall
Storage
- 8.0+ PB of storage, including 2.3 PB EMC Isilon and 2.2 PB Qumulo QC24/QC208 scale-out clusters, along with 3.8 PB of Amplidata geo-spread cloud storage
Computational Biology Core
- The CBC provides computational power and technical support to UConn researchers and affiliates.
- Please visit the CBC website for more information: Computational Biology Core.
- Join the official CBC Slack channel for up-to-date news and to request help: UCONN-CBC.slack.com.