CMS-T2 Resources

Computing

The High Energy Physics group at the University of Wisconsin strives to maintain an information technology infrastructure that is trouble-free, secure, highly available, and well understood. The table below summarizes the expected available computing (and storage) resources. In practice, because of disk failures and the lack of backup disks, the usable storage may be somewhat less than the totals shown.

Gen    Cores  CPU Class (year purchased)          Slots*  HS06/slot**  Total HS06  Storage (TB)
g18     16    2.4GHz Opteron 6136 (2010 Spring)      480         9.30        4464           189
g19     24    2.2GHz Opteron 6174 (2010 Fall)        696         8.51        5926           184
g20     16    2.67GHz Xeon E5640 (2010 Fall)         112         8.35         935           161
g22     24    2.3GHz Opteron 6176 (2011 Summer)      384         8.71        3344           460
g23     24    2.3GHz Opteron 6176 (2011 Fall)        384         8.71        3344           300
g24     24    2.6GHz Opteron 6238 (2012 Spring)      240         8.30        1993           310
g25     24    2.6GHz Opteron 6344 (2012 Summer)       24         9.25         220            29
g26     32    2.2GHz Xeon E5-2660 (2013 Winter)      960         9.64        9256          1207
g27     40    2.2GHz Xeon E5-2660V2 (2013 Fall)     1200         9.64       11568           360
g28     40    2.3GHz Xeon E5-2660V2 (2014 Fall)     1400         9.64       13496           228
g29     40    2.3GHz Xeon E5-2650V3 (2015 Fall)     1160        10.88       12615           366
g30     40    2.3GHz Xeon E5-2650V3 (2016 Mar)      1200        10.88       13050           178
g31     48    2.2GHz Xeon E5-2650V4 (2016 Fall)     3264        10.21       33320          1101
s15      8    2.60 GHz Xeon                            -            -           -           200
s17      8    2.60 GHz Xeon                            -            -           -           388
s21      8    2.40 GHz Xeon                            -            -           -           300
Total    -                                         11504   9.87 (Wtd)      113530          5961

*: Slot counts are working batch slots as of Jan 1, 2017.

**: The HS06 benchmark was run with all services disabled other than sshd, afsd, and xinetd. The following software was used:

GCC: gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-52)
C++: g++ (GCC) 4.1.2 20080704 (Red Hat 4.1.2-52)
FC:  GNU Fortran (GCC) 4.1.2 20080704 (Red Hat 4.1.2-52)
SPEC2006 version: 1.2
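
As a cross-check, the Total HS06 and weighted HS06/slot figures follow directly from the per-generation slot counts and per-slot benchmark results. The short Python sketch below reproduces them; small rounding differences against the table's per-row totals are expected, since those were rounded independently.

    # Reproduce the "Total HS06" and weighted "HS06/slot" figures from the table.
    # Per-generation (batch slots, HS06 per slot); storage-only nodes are omitted.
    slots_hs06 = {
        "g18": (480, 9.30),  "g19": (696, 8.51),   "g20": (112, 8.35),
        "g22": (384, 8.71),  "g23": (384, 8.71),   "g24": (240, 8.30),
        "g25": (24, 9.25),   "g26": (960, 9.64),   "g27": (1200, 9.64),
        "g28": (1400, 9.64), "g29": (1160, 10.88), "g30": (1200, 10.88),
        "g31": (3264, 10.21),
    }

    total_slots = sum(s for s, _ in slots_hs06.values())      # 11504
    total_hs06 = sum(s * h for s, h in slots_hs06.values())   # ~113,500
    weighted_hs06_per_slot = total_hs06 / total_slots         # ~9.87, as tabulated

    print(total_slots, round(total_hs06), round(weighted_hs06_per_slot, 2))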

We use the Condor batch computing software to implement a high-throughput Linux computing environment. Opportunistic resources from the Grid Laboratory of Wisconsin (GLOW) and the Center for High Throughput Computing (CHTC) make over 10,000 additional Linux CPUs potentially available.
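
As an illustration of how work enters this environment, the minimal sketch below submits a single batch job using the htcondor Python bindings (version 9 or later API); the executable name, arguments, and resource requests are placeholders, not site-specific conventions.

    # Submit one job to the local Condor pool from a submit node.
    import htcondor

    job = htcondor.Submit({
        "executable": "run_analysis.sh",   # hypothetical user script
        "arguments": "input.root",         # hypothetical input file
        "output": "job.out",
        "error": "job.err",
        "log": "job.log",
        "request_cpus": "1",
        "request_memory": "2GB",
    })

    schedd = htcondor.Schedd()             # scheduler daemon on the submit node
    result = schedd.submit(job, count=1)   # queue a single instance
    print("Submitted cluster", result.cluster())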

Network

Compute nodes have 1G links to our LAN, which is composed of several layers. At the rack layer, there are Cisco Nexus 2200 Fabric Extenders (48G backplane). These are connected via 4x10G links to Cisco Nexus 5000 switches (~1T backplane per compute room). The two room switches are each connected via 8x10G links to the building Cisco Nexus 7000 switch. This is connected via 100G to the campus 100G backbone, which then connects at 100G to Chicago and the national research networks, including Internet2, National Lambda Rail (NLR), and ESNet, along with direct connections to Midwest CMS sites including FermiLab, Purdue, and Nebraska.
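
For a rough sense of the aggregate uplink capacity at each layer, the link counts quoted above translate as follows (illustrative arithmetic only):

    # Aggregate uplink bandwidth per LAN layer, in Gbit/s, from the link counts above.
    rack_to_room = 4 * 10         # FEX -> room Nexus 5000:        4 x 10G
    room_to_building = 8 * 10     # Nexus 5000 -> Nexus 7000:      8 x 10G
    building_to_campus = 1 * 100  # Nexus 7000 -> campus backbone: 100G

    print(f"{rack_to_room}G per rack, {room_to_building}G per room, "
          f"{building_to_campus}G to the campus backbone")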

See UW HEP Network activities: [CampusNet].

Storage

The storage system is based on HDFS with SRM and xrootd interfaces. A total storage space of more than 4 PB is distributed across many dedicated and dual-use commodity servers in the cluster. To avoid data loss when hard disks fail, two copies of each HDFS block are maintained, so the amount of unique data that can be stored is half the total disk space.
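
As a quick illustration of the replication overhead, the sketch below relates the raw disk total from the table above to the unique data that can be stored when every block is kept in two copies:

    # With HDFS block replication set to 2, only half the raw disk holds unique data.
    raw_disk_tb = 5961            # "Storage (TB)" total from the table above
    replication_factor = 2        # two copies of every HDFS block
    usable_tb = raw_disk_tb / replication_factor

    print(f"~{usable_tb:.0f} TB of unique data can be stored")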