CloudLab will be a distributed infrastructure, with clusters at three sites. (We hope to add more in the future.) Each site will be a variation on a "reference" architecture. The reference architecture comprises approximately 5,000 cores and 300–500 terabytes of storage in the latest virtualization-capable hardware. CloudLab will provide 2x 10 Gbps network interfaces to every node via software-defined networking (at least OpenFlow; we hope to provide other SDN technologies as well). A 100 Gbps full-mesh SDN interconnect lets researchers instantiate a wide range of in-cluster experimental topologies, such as fat trees, rings, and hypercubes. Each site will leverage CC-NIE infrastructure to provide at least one connection to AL2S, the SDN-based 100 Gbps network that is part of Internet2's Innovation Platform; this will enable high-speed, end-to-end SDN between all CloudLab sites. CloudLab will provide two major types of storage: per-server storage (a mix of high-performance flash and high-capacity magnetic disks, at a ratio of about one disk per four cores) and a centralized storage system. This storage mix enables a range of experiments with file systems, storage technologies, and big data, while providing convenient, reliable file systems to researchers who are not interested in storage experiments. Our reference infrastructure is sized to be large enough to enable valuable experiments (dozens of concurrent small experiments, or a few medium-large ones).
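To make "instantiate a wide range of in-cluster experimental topologies" concrete, here is a minimal sketch of a CloudLab profile written against the geni-lib portal API, which CloudLab profiles use. It requests four bare-metal nodes wired into a ring of point-to-point Layer-2 links; the node count, node and interface names are illustrative choices, not anything prescribed by CloudLab, and the same pattern extends to fat trees or other topologies.

```python
# A minimal sketch of a CloudLab profile using geni-lib's portal API.
# The ring size and all names below are illustrative assumptions.
import geni.portal as portal

pc = portal.Context()
request = pc.makeRequestRSpec()

N = 4  # illustrative ring size
nodes = [request.RawPC("node%d" % i) for i in range(N)]  # bare-metal nodes

# Connect node i to node (i+1) mod N with a dedicated Layer-2 link,
# using both of each node's experiment interfaces.
for i in range(N):
    link = request.Link("ring%d" % i)
    link.addInterface(nodes[i].addInterface("if%d_a" % i))
    link.addInterface(nodes[(i + 1) % N].addInterface("if%d_b" % i))

# Emit the request RSpec for the portal to instantiate.
pc.printRequestRSpec(request)
```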
We are building CloudLab with a different commercial partner at each site. This gives us diversity: all sites share a common reference architecture, but each has a slightly different focus and implements similar concepts in different ways. This means that researchers can evaluate whether the behavior of their systems is tightly bound to a particular realization of the architecture, or whether their findings are more universal. It also ensures that the needs of particular research communities (for example, storage or green computing) are specifically addressed by at least one cluster.
CloudLab will be federated with a wealth of existing research infrastructure, giving users access to a diverse set of hardware resources at dozens of locations. It will be a member of the GENI federation, meaning that GENI users can access CloudLab with their existing accounts, and CloudLab users have access to all of the hardware resources federated with GENI.
CloudLab sites interconnect with each other via IP and Layer-2 links to regional/national research networks, using techniques now being adopted by many campuses under the NSF CC-NIE/CC-IIE programs. Thus CloudLab experiments can also connect at Layer-2 to the core GENI Network, US Ignite cities, and advanced HPC clusters across the US. A single experiment can span all of these resources: in addition to CloudLab's own clusters, it might include GENI Racks (small clusters distributed across the United States), local fiber in a US Ignite city (a city with an advanced high-speed municipal network), and cyber-physical systems such as the UMass CASA distributed weather radar system.
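As a sketch of what a cross-site experiment might look like in a geni-lib profile, the fragment below places one node at each of two sites and connects them with a single link, which would be realized as a stitched Layer-2 path over the wide-area SDN connectivity described above. The component-manager URNs shown are assumptions included only for illustration; the actual site identifiers and the details of cross-site stitching should be taken from the CloudLab portal.

```python
# A hedged sketch of a two-site, Layer-2-stitched experiment (geni-lib).
# Site URNs below are assumed/illustrative, not authoritative.
import geni.portal as portal

SITE_A = "urn:publicid:IDN+utah.cloudlab.us+authority+cm"   # assumed URN
SITE_B = "urn:publicid:IDN+wisc.cloudlab.us+authority+cm"   # assumed URN

pc = portal.Context()
request = pc.makeRequestRSpec()

n1 = request.RawPC("site-a-node")
n1.component_manager_id = SITE_A   # pin this node to site A

n2 = request.RawPC("site-b-node")
n2.component_manager_id = SITE_B   # pin this node to site B

# A link whose endpoints sit at different sites is provisioned as a
# wide-area Layer-2 path between the clusters.
wan = request.Link("wan-link")
wan.addInterface(n1.addInterface("if0"))
wan.addInterface(n2.addInterface("if0"))

pc.printRequestRSpec(request)
```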
This represents the first half of the Utah cluster; the remainder will be built in about a year.
The University of Utah is partnering with HP to build a cluster with 64-bit ARM processors and OpenFlow 1.3 support throughout. This cluster will consist of 7 HP Moonshot chassis, each holding 45 8-core ARM servers (315 servers and 2,520 cores in total), with each server having 64 GB of RAM (20 TB total) and 120 GB of SATA flash storage (38 TB total). Each chassis has two "top of rack" (ToR) switches, and each server has two 10 Gb NICs, one connected to each ToR. Each ToR has 4x 40 Gbps of uplink capacity to a large core switch, for a total of 900 Gbps of connectivity within the chassis and 320 Gbps of connectivity to the core. One allocation option will be an entire chassis at a time; when allocated this way, the user will have complete administrative access to the ToR switches in addition to the nodes. Users allocating entire chassis will also be given administrative access to a "slice" of the core switch using MDC, which gives the user a complete virtual switch, including full control over Layer-2 and Layer-3 features and a dedicated OpenFlow datapath.
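The aggregate figures quoted above follow directly from the per-server and per-chassis numbers; the short back-of-the-envelope sketch below reproduces them (all inputs are taken from the text, nothing else is assumed).

```python
# Back-of-the-envelope check of the Utah cluster aggregates quoted above.
chassis = 7
servers_per_chassis = 45
cores_per_server = 8
ram_gb_per_server = 64
flash_gb_per_server = 120
nics_per_server, nic_gbps = 2, 10
tors_per_chassis, uplinks_per_tor, uplink_gbps = 2, 4, 40

servers = chassis * servers_per_chassis              # 315 servers
cores = servers * cores_per_server                   # 2,520 cores
ram_tb = servers * ram_gb_per_server / 1000.0        # ~20 TB of RAM
flash_tb = servers * flash_gb_per_server / 1000.0    # ~38 TB of flash

intra_chassis_gbps = servers_per_chassis * nics_per_server * nic_gbps      # 900 Gbps
chassis_to_core_gbps = tors_per_chassis * uplinks_per_tor * uplink_gbps    # 320 Gbps

print(servers, cores, ram_tb, flash_tb, intra_chassis_gbps, chassis_to_core_gbps)
```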
The specifics of the hardware are:
This represents the first half of the Wisconsin cluster; the remainder will be built in about a year.
The University of Wisconsin-Madison has partnered with Cisco Systems to build a powerful and diverse cluster that closely reflects the technology and architecture used in modern commercial data centers. The initial cluster will have 100 servers with a total of 1,600 cores connected in a Clos fat-tree topology. Future acquisitions in 2015 and 2016 will grow the system to at least 240 servers. The servers are divided into two categories, each offering different capabilities and enabling different types of cloud experiments. In the initial cluster all servers will have the same CPU (2x 8 cores @ 2.4 GHz), RAM (128 GB), and network (2x 10 Gbps to the ToR) configuration, but will differ in their storage configurations. Each of the ninety servers in the first category will have 2x 1.2 TB disks. We expect these to be used for experimenting with exciting new cloud architectures and paradigms, management frameworks, and applications. Each of the ten servers in the second category will have a larger number of slower disks (1x 1 TB plus 12x 3 TB donated by Seagate). This category is targeted toward supporting experiments that stress storage throughput. Each server will also have a 480 GB SSD to enable sophisticated experiments that explore storage hierarchies in the cloud.
The servers use Nexus switches from Cisco for top-of-rack (ToR) switching. Each ToR Nexus is connected to six spine switches via dedicated 40 Gbps links. Each spine will connect via a 40 Gbps link to a Nexus WAN switch for campus connectivity to Internet2 and the other two CloudLab facilities. We selected the Cisco Nexus series because it offers several unique features that enable broad and deep instrumentation, as well as a wide variety of cloud networking experiments. Examples of these features include OpenFlow 1.0; monitoring instantaneous queue lengths on individual ports; tracing control-plane actions at fine time scales; and support for a wide range of routing protocols.
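The server and uplink counts above also let one reason about how oversubscribed the fabric is. The sketch below uses only figures from the text plus one explicitly assumed parameter: the number of servers attached to each ToR, which is not stated above, so the resulting ratio is illustrative rather than a property of the actual cluster.

```python
# Capacity figures for the Wisconsin fabric. Link and server counts come from
# the text; servers_per_tor is an ASSUMPTION, so the ratio is illustrative.
servers = 100
cores_per_server = 16
server_gbps = 2 * 10            # 2x 10 Gbps from each server to its ToR
tor_uplink_gbps = 6 * 40        # six 40 Gbps links from each ToR to the spines

total_cores = servers * cores_per_server             # 1,600 cores

servers_per_tor = 20                                 # assumed, not from the text
tor_downlink_gbps = servers_per_tor * server_gbps    # 400 Gbps into one ToR
oversubscription = tor_downlink_gbps / tor_uplink_gbps   # ~1.67:1 under this assumption

print(total_cores, tor_downlink_gbps, tor_uplink_gbps, round(oversubscription, 2))
```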
The specific details of the hardware are as follows:
The Clemson system, developed in close cooperation with Dell, will have three major components: bulk block storage, low-density storage for MapReduce/Hadoop-like computing, and generic VM nodes used to provision virtual machines. All nodes have 16 cores (2 CPUs per node), one on-board 1 Gb Ethernet port, and a dual-port 56/40/10 Gbps card. Bulk storage nodes will provide block-level services to all nodes over a dedicated 10 Gbps Ethernet network. Each storage node will have 12x 4 TB disk drives plus 8x 1 TB disks, and is configured with 256 GB of memory. Hadoop nodes have 4x 1 TB disks and 256 GB of memory. VM nodes have 256 GB of memory each. The large memory configuration reflects the need for significant memory in today's VMs and allows us to increase performance by reducing paging/swapping in the VMs.
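For reference, the per-node raw capacities implied by the configuration above work out as follows (figures taken from the text; node counts per category are not given here, so only per-node totals are computed).

```python
# Per-node raw capacity arithmetic for the Clemson configuration above.
storage_node_tb = 12 * 4 + 8 * 1    # 56 TB raw per bulk-storage node
hadoop_node_tb = 4 * 1              # 4 TB raw per Hadoop-style node
ram_per_node_gb = 256               # all three node types
cores_per_node = 16                 # 2 CPUs per node

print(storage_node_tb, hadoop_node_tb, ram_per_node_gb, cores_per_node)
```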
The focus of this system will be to provision significantly sized environments that can be linked to national and international resources. It will also be able to connect directly to Clemson's Condo HPC system, which has nearly 2,000 nodes. The interaction between bare-metal systems in the Condo cluster and VMs in the cloud system will allow prototyping of next-generation HPC and CS environments in an SDN-enabled network.
CloudLab users will have access to all hardware that is federated with the GENI testbed. This comprises thousands of cores and hundreds of terabytes of storage, spread across dozens of sites around the country. This will help CloudLab users build highly distributed infrastructures, such as CDN-type services, and it will enable applications that require low latency to end users and devices. This federation also provides access to a variety of resources that go beyond simple clusters, such as campus-scale wireless networks.
Most of this equipment is interconnected (and connected to CloudLab) through a programmable Layer-2 network.