
Systems & Services Overview


The Stanford Research Computing Center (SRCC) provides a range of services: helping you specify computing and data needs for research proposals, scalable and affordable data storage platforms, consultation on your specific computing needs and problems, and computational platforms and resources. We want to help you find the best solution for your specific need, whether that solution has you using resources at Stanford or at a national HPC facility. Several of our services, though, relate directly to physical hardware, including hosting and system administration. Hosting largely takes place at the Stanford Research Computing Facility (SRCF), a data center designed specifically to host high-density, power-intensive computing equipment. A Stanford building located on the SLAC campus, the SRCF is intended to meet the research server hosting needs of the campus for years to come.

Below is a representative sample of the types of systems that we manage and support. Some are owned by individual PIs or groups for the use of their research teams; others are truly campus-wide offerings. If you are interested in having the SRCC manage or host your servers, or in investing in new equipment for the Sherlock cluster, please contact us.


Sherlock

Sherlock is a shared compute cluster available to all Stanford faculty and their research teams for sponsored or departmental faculty research. Using the Slurm resource manager, every research team has access to a base set of general compute nodes, GPU-based servers, and a multi-petabyte Lustre parallel file system for temporary files. Faculty can supplement these shared nodes by purchasing additional servers, thus becoming Sherlock owners. Owners not only have exclusive use of their purchased nodes but also gain access to the more than 1,300 owner compute nodes, with a total of more than 30,000 CPUs. The Dean's office of the School of Humanities and Sciences has also purchased a set of 89 nodes with over 3,000 CPUs for use by H&S departments. Thus far, owners from more than 160 research groups have added compute nodes to Sherlock for their teams' exclusive use. Sherlock currently supports more than 5,000 users from over 800 research groups at Stanford.
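Work on Sherlock is scheduled through Slurm batch scripts. As a minimal illustrative sketch (the partition, module, and script names below are placeholders, not a guaranteed Sherlock configuration), a job submission might look like:

```shell
#!/bin/bash
#SBATCH --job-name=example        # name shown in the queue
#SBATCH --partition=normal        # placeholder partition name
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4         # request 4 CPU cores
#SBATCH --mem=8G                  # request 8 GB of memory
#SBATCH --time=01:00:00           # wall-clock limit (HH:MM:SS)

# Load software via environment modules (module names are site-specific)
module load python

# Run the analysis; temporary files belong on the Lustre scratch file system
srun python my_analysis.py
```

The script would be submitted with `sbatch job.sh`; Sherlock owners typically direct jobs to their group's own nodes by selecting the corresponding partition.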


Oak

Oak provides the research community with inexpensive storage for research projects, storage that can grow to accommodate a project's increasing requirements. Oak is fast-I/O storage for HPC, priced at $50/TB per year and billed monthly. The Oak storage system is mounted on Sherlock and XStream, and SFTP and Globus access are also available. Oak is a capacity-oriented HPC storage system designed for long-term storage that does not rely on a single vendor's implementation: the SRCC team built it from COTS (commercial off-the-shelf) components and open-source software to provide up to billions of inodes and tens of petabytes of storage space for Stanford researchers' big-data needs. The software behind Oak is based on the Lustre file system and the Robinhood Policy Engine.


FarmShare

FarmShare is a compute environment available to students, faculty, and staff with fully sponsored SUNetIDs. It is a great resource for coursework-related and miscellaneous computational tasks. It also gives researchers a place to try out codes and explore technical solutions in support of their research goals before scaling up to Sherlock or another cluster.

The Stanford Genomics Cluster

The Stanford Genomics Cluster is available to members of the genomics research community at Stanford. With scientific support from the Genetics Bioinformatics Service Center (GBSC) and system administration from the Stanford Research Computing Center (SRCC), the SCG cluster uses a charge-back model to provide access to a wide array of bioinformatics tools, techniques, and scientific support expertise. It has approximately 4,600 compute cores and 9 PB of usable storage.

Stanford Research Computing Facility

The Stanford Research Computing Facility (SRCF) provides the campus research community with data center facilities designed specifically to host high-performance computing equipment. Supplementing the renovated area of the Forsythe data center, the SRCF is intended to meet Stanford's research computing needs for the coming years. Located on the SLAC campus, the SRCF was completed in the fall of 2013, with production HPC services offered as of December 2013. The facility and the services therein are managed by the Stanford Research Computing Center (SRCC).

Population Health Sciences

SRCC supports the research computing infrastructure of the Population Health Sciences project, whose more than 300 researchers can access 120 data sets containing 281 billion records. The PHS-Windows servers form a cluster of powerful computers where SAS, Stata, R, MATLAB, and other analysis tools are available to all users remotely. After connecting to a server in the cluster, users have access to licensed software tools and shared published data from other collaborators. The cluster serves as a central repository for all PHS-acquired research data, allowing for more efficient data management as well as increased data security.

The Institute for Computational & Mathematical Engineering (ICME)

The Institute for Computational & Mathematical Engineering (ICME) GPU cluster consists of servers with 140 GPU cores and 1.2 TB of RAM. ICME also has an MPI cluster with 184 CPUs and 172 GB of RAM, available to ICME students and faculty.

Carina Computing Platform

Carina leverages a modern private cloud environment (Carina On-Prem) to address the significant demand for a secure computational environment for working with PHI and other high-risk data at scale. Carina is integrated with University processes such as the Data Risk Assessment and is compliant with Stanford's minimum security requirements for hosting High Risk Data.

Carina is available to any team led by a researcher with Principal Investigator privileges at Stanford (e.g., faculty, or researchers with a PI waiver) working exclusively with high-risk data. Users can request Carina On-Prem as well as the Nero Cloud computing environment, depending on their needs.

HANA Immersive Visualization Environment

The HANA Immersive Visualization Environment (HIVE) is a visualization center located in Huang 050. Sponsored by ICME, the Army High Performance Computing Research Center, and the School of Engineering, it is available to the Stanford community. The HIVE enables collaborative visualization in teaching and research across the sciences, humanities, social sciences, and engineering.