{"id":185,"date":"2015-12-08T14:59:40","date_gmt":"2015-12-08T19:59:40","guid":{"rendered":"http:\/\/wp.hpc.uchc.uconn.edu\/?page_id=185"},"modified":"2025-05-16T11:20:23","modified_gmt":"2025-05-16T15:20:23","slug":"resources","status":"publish","type":"page","link":"https:\/\/health.uconn.edu\/high-performance-computing\/resources\/","title":{"rendered":"Resources"},"content":{"rendered":"<div id=\"pl-185\" class=\"panel-layout\">\n<div id=\"pg-185-0\" class=\"panel-grid panel-no-style\">\n<div id=\"pgc-185-0-0\" class=\"panel-grid-cell\">\n<div id=\"panel-185-0-0-0\" class=\"so-panel widget widget_black-studio-tinymce widget_black_studio_tinymce panel-first-child panel-last-child\">\n<div class=\"textwidget\">\n<p>&nbsp;<\/p>\n<\/div>\n<h3>Xanadu, Mantis, and NMR High-Performance Compute Clusters<\/h3>\n<p>Over 12,000 CPU cores and 77 TB of RAM, capable of 360 TFLOPS, comprising the following hardware:<\/p>\n<ul>\n<li>4 Dell R620 nodes with two 10-core Intel Xeon E5-2660 v2 processors, 256 GB RAM, and 10 GbE interfaces<\/li>\n<li>17 Dell R660 nodes with two 64-core Intel Xeon Platinum 8462Y+ processors, 512 GB RAM, and 10 GbE interfaces<\/li>\n<li>21 Dell R730 nodes with two 18-core Intel Xeon E5-2697v4 processors, 256 GB RAM, and 10 GbE interfaces<\/li>\n<li>27 Dell R740 nodes with two 40-core Intel Xeon Gold 6138 processors, 192\/384 GB RAM, and 10 GbE interfaces<\/li>\n<li>3 Dell R815 nodes with four 12-core AMD Opteron 6172 processors, 396 GB RAM, and 10 GbE interfaces<\/li>\n<li>1 Dell R905 node with four 6-core AMD Opteron 8435 processors, 256 GB RAM, and 10 GbE interfaces<\/li>\n<li>9 Dell C6145 nodes with four 12-core AMD Opteron 6172 processors, 256 GB RAM, and 10 GbE interfaces<\/li>\n<li>8 Dell R6525 nodes with two 64-core AMD EPYC 7662 processors, 256 GB RAM, and 10 GbE interfaces<\/li>\n<li>21 Dell R7525 nodes with two 64-core AMD EPYC 7453 processors, 256 GB RAM, and 10 GbE interfaces<\/li>\n<li>24 Penguin MH61-HD3-ZB nodes with Intel Xeon Gold 6230 processors, 192 GB RAM, and 10 GbE interfaces<\/li>\n<li>15 Supermicro AS 2024US-TRT nodes with two 64-core AMD EPYC 7713 processors, 512 GB RAM, and 10 GbE interfaces<\/li>\n<li>1 Penguin MZ92-FS0-00 node with eight 6-core AMD EPYC 7352 processors, 2048 GB RAM, and 10 GbE interfaces<\/li>\n<li>2 Dell XE8545 AI nodes with two 64-core AMD EPYC 7763 processors, 2048 GB RAM, and 10 GbE interfaces<\/li>\n<li>1 Supermicro 6049GP-TRT node with four 28-core Intel Xeon 8176 processors, 1500 GB RAM, and 10 GbE interfaces<\/li>\n<li>2 Quanta Cloud S76 nodes with NVIDIA Grace Hopper Superchips (Grace CPU and Hopper GPU), 480 GB RAM, and 10 GbE interfaces<\/li>\n<li>15 NVIDIA A100 80 GB PCIe GPUs, 6 NVIDIA A10 GPUs, and 2 NVIDIA M10 GPUs<\/li>\n<li>Job management provided by the Slurm Workload Manager, with provisioning by Bright and Confluent<\/li>\n<\/ul>\n<\/div>\n<div id=\"panel-185-0-0-0\" class=\"so-panel widget widget_black-studio-tinymce widget_black_studio_tinymce panel-first-child panel-last-child\">\n<div class=\"textwidget\">\n<h3>Virtualization Infrastructure<\/h3>\n<ul>\n<li>16 Dell PowerEdge R640 hosts with 1280 CPU cores and 10.8 TB RAM running VMware 8, hosting 300+ Windows and Linux virtual machines with a high-IOPS SSD cache tier<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<h3>Datacenter Infrastructure<\/h3>\n<ul>\n<li>UPS- and generator-backed power with redundant cooling<\/li>\n<li>3&#215;40 GbE dark-fiber connection to an off-site DR location<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<h3>Network (100+ GbE)<\/h3>\n<ul>\n<li>Full non-oversubscribed 10\/40\/100\/400 GbE datacenter network core layer<\/li>\n<li>BioScienceCT Research Network \u2013 100 GbE to CEN, Internet2, and Storrs<\/li>\n<li>HPC Science DMZ \u2013 low-latency, 80 Gb-capable firewall<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<h3>Storage<\/h3>\n<ul>\n<li>Over 22 PB of storage, including 775 TB Quantum, 3.4 PB EMC Isilon, 10.5 PB Qumulo scale-out clusters, 2 PiB Spectra tape 
backup and archive, along with 2.3 PB Amplidata and 3.1 PB Scality Geo-Spread cloud storage<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<h3>Computational Biology Core<\/h3>\n<ul>\n<li>The CBC provides computational power and technical support to UConn researchers and affiliates.<\/li>\n<li>Please visit the CBC website for more information: <a href=\"https:\/\/bioinformatics.uconn.edu\/\" target=\"_blank\" rel=\"noopener noreferrer\">Computational Biology Core<\/a>.<\/li>\n<li>Join the official CBC Slack channel for up-to-date news and to request help: <a href=\"https:\/\/uconn-cbc.slack.com\/\" target=\"_blank\" rel=\"noopener noreferrer\">UCONN-CBC.slack.com<\/a>.<\/li>\n<\/ul>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>&nbsp; Xanadu, Mantis, and NMR High-Performance Compute Clusters Over 12,000 CPU cores and 77 TB of RAM, capable of 360 TFLOPS, comprising the following hardware: 4 Dell R620 nodes with two 10-core Intel Xeon E5-2660 v2 processors, 256 GB RAM, and 10 GbE interfaces 17 Dell R660 nodes with two [&hellip;]<\/p>\n","protected":false},"author":38,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_acf_changed":false,"footnotes":""},"acf":[],"publishpress_future_action":{"enabled":false,"date":"2026-04-25 
11:28:45","action":"change-status","newStatus":"draft","terms":[],"taxonomy":""},"_links":{"self":[{"href":"https:\/\/health.uconn.edu\/high-performance-computing\/wp-json\/wp\/v2\/pages\/185"}],"collection":[{"href":"https:\/\/health.uconn.edu\/high-performance-computing\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/health.uconn.edu\/high-performance-computing\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/health.uconn.edu\/high-performance-computing\/wp-json\/wp\/v2\/users\/38"}],"replies":[{"embeddable":true,"href":"https:\/\/health.uconn.edu\/high-performance-computing\/wp-json\/wp\/v2\/comments?post=185"}],"version-history":[{"count":22,"href":"https:\/\/health.uconn.edu\/high-performance-computing\/wp-json\/wp\/v2\/pages\/185\/revisions"}],"predecessor-version":[{"id":834,"href":"https:\/\/health.uconn.edu\/high-performance-computing\/wp-json\/wp\/v2\/pages\/185\/revisions\/834"}],"wp:attachment":[{"href":"https:\/\/health.uconn.edu\/high-performance-computing\/wp-json\/wp\/v2\/media?parent=185"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}