[Photo: A row of servers inside the SRCF being shown to a visitor by director Ruth Marinshaw. Photo by Linda Cicero, Stanford News Service.]

Hosting Facilities


To provide a home for your research compute servers and disk storage arrays, Stanford offers a modern data center, the Stanford Research Computing Facility (SRCF). A Stanford building located on SLAC land, the SRCF provides a highly efficient hub for the physical hosting of high-density compute and storage equipment, along with systems administration and support services. The SRCF opened for production use in November 2013, and a major expansion was completed in 2023. For more information about the facility, see the section below.

In addition, Research Computing currently hosts equipment and provides system administration services in a smaller, secure, centrally managed data center in Forsythe Hall (RCF). As equipment in the RCF is life-cycled, replacement servers will be housed at the SRCF, returning the Forsythe space to non-research-computing use.

Contact us at srcc-support@stanford.edu if you would like to explore hosting your new equipment at the SRCF or would like to know more about our services and offerings.

Many program announcements for grant proposals require you to provide a description of local compute capabilities and facilities. We can help you out! Until we get that information posted, drop us a note at srcc-support@stanford.edu and we will provide the needed text, tailored to your specific proposal.

The Stanford Research Computing Facility

Overview

The Stanford Research Computing Facility (SRCF) provides the campus research community with data center facilities designed specifically to host high-performance computing equipment. Supplementing the renovated area of the Forsythe data center, the SRCF is intended to meet Stanford's research computing needs for the coming years. A Stanford building located on the SLAC campus, the SRCF was completed in the fall of 2013, with production HPC services offered as of December 2013. In 2023, a new module (SRCF2) opened, significantly expanding the facility's hosting capabilities. The building and the services therein are managed by Stanford Research Computing.

Technical Information

Power — The SRCF has a resilient but not redundant power infrastructure. The transmission-grade power delivered to SLAC and the SRCF is UPS- and generator-protected, providing significant assurance should there be a regional power outage.

Cooling — The building’s design is non-traditional and especially energy efficient. The facility is cooled with ambient-air fan systems for 90% of the year. On hotter days, and for equipment that needs chilled water, high-efficiency air-cooled chillers are available.

Network Connectivity — The SRCF has multiple redundant 10-gigabit networks linking it to the campus backbone, the Internet, Internet2, and other national research networks. In the fall of 2014, 100-gigabit network connectivity was added between the SRCF and external networks. That bandwidth, coupled with the OpenFlow communications protocol (developed at Stanford), provides unprecedented flexibility and capability in meeting the network transport needs of the research communities using the facility.

Service Models

Three service models are supported at the SRCF.

  1. Hosting: a researcher purchases their own rack, PDUs, and equipment and works with Research Computing to coordinate installation timing and access. The researcher is responsible for the management and system administration of the equipment. Equipment must be replaced with new equipment, or removed from the facility, by the time it is five years old. Note that some schools, such as H&S, have purchased empty racks and PDUs on behalf of their faculty, recognizing that not all researchers will purchase entire racks of equipment at one time.
  2. Supported cluster: a researcher purchases their own rack, PDUs, and equipment and works with Research Computing to coordinate installation timing and access. The researcher pays Research Computing to provide system administration and support. Equipment must be replaced with new equipment, or removed from the facility, by the time it is five years old.
  3. Shared cluster: the Provost provided Research Computing with capital funding to purchase computing equipment to encourage faculty to use the SRCF and the shared Research Computing cluster model. This incentive provides access to HPC resources beyond those funded by grants and can greatly expand researchers' computing capacity. The cluster purchased with those funds, Sherlock, is available to any Stanford faculty member, and their associated research teams, for sponsored or departmental research. The base configuration of 125 servers is shared by all. Beyond using the base Sherlock platform, researchers can use their grant funds to add more servers and storage, choosing from a standard set of configurations. Purchased and managed by Research Computing, these PI-funded servers become part of the Sherlock cluster but are not available to the entire user base. PIs who follow this model are referred to as "owners". Owners have access to the servers they purchased, and they can also use other owners' servers when those servers are idle. At present, system administration and support of all components of the Sherlock cluster, whether base servers or owners' servers, is funded by the Dean of Research and the Provost. In the future, modest fees may be charged for system administration and support.

Note that the SRCF has been designed for hosting high-density racks. To that end, vendor pre-racked equipment is the preferred deployment method. To make the best use of the facility, hosting preference is given to researchers with full racks of high-density equipment.

SRCC Service and Facility Features

  • Assistance in specifying equipment, negotiating pricing, coordinating purchases and planning deployment into the data center
  • Technical specifications and boilerplate facility descriptions for inclusion in proposals
  • Secured 24x7 entry
  • Monitored temperature and environmental control systems
  • Fire detection and fire suppression