
Systems Overview


Below is a representative sample of the types of systems that we manage and support. Some are owned by individual PIs or groups, for the use of their research teams; others are truly campus-wide offerings. If you are interested in having Research Computing manage or host your servers, or if you are interested in investing in new equipment for the Sherlock cluster, please contact us at srcc-support@stanford.edu.

Systems information is also available on the Compute Clusters and HPC Platforms page and the Getting Started on our HPC Systems page.

Sherlock

Sherlock is a shared compute cluster available for use by all Stanford faculty and their research teams for sponsored or departmental faculty research. Research teams have access to a base set of general compute nodes, GPU-based servers, and a multi-petabyte Lustre parallel file system for temporary files. Faculty can supplement these shared nodes by purchasing additional servers, thus becoming Sherlock owners. Sherlock supports more than 6,300 users from over 1,000 research groups at Stanford.

Oak

Oak is Stanford's premier long-term storage platform for research data. Oak is available in increments of 10TB or 250TB, priced to be competitive with cloud storage options. Start with as little as 10TB and scale up as needed. Tight integration with world-class HPC resources like the Sherlock cluster lets you consolidate your data storage pipeline while saving time and money. In addition, a variety of network file sharing protocols are available to meet your needs.

FarmShare

FarmShare is a compute environment available to students, faculty, and staff with fully sponsored SUNetIDs. FarmShare is a great resource for coursework-related and miscellaneous computational tasks. It also gives researchers a place to try out code and explore technical solutions before scaling up to Sherlock or another cluster.

Portrait of Nero Wolfe, the fictional detective for whom the Nero compute cluster is named.

Nero GCP

Nero GCP is a shared, secure computing platform designed specifically for high-risk data. It leverages the public cloud to address the significant demand for a secure computational environment for working with PHI and other high-risk data at scale.

Nero is available to any team led by a researcher with Principal Investigator privileges at Stanford (e.g., faculty or researchers with a PI waiver) working exclusively with high-risk data.


Carina Computing Platform

Carina leverages a modern private cloud environment (Carina On-Prem) to address the significant demand for a secure computational environment for working with PHI and other high-risk data at scale.

Carina is available to any team led by a researcher with Principal Investigator privileges at Stanford (e.g., faculty or researchers with a PI waiver) working exclusively with high-risk data.

Population Health Sciences

Research Computing supports the research computing infrastructure of the Population Health Sciences project, whose more than 300 researchers can access 120 data sets containing 281 billion records. The PHS-Windows servers form a cluster of powerful computers on which analysis tools such as SAS, Stata, R, and MATLAB are available to all users remotely. The cluster serves as a central repository for all PHS-acquired research data.

Hosting Facilities

The Stanford Research Computing Facility (SRCF) provides the campus research community with data center facilities designed specifically to host high-performance computing equipment. Supplementing the renovated area of the Forsythe data center, the SRCF is intended to meet Stanford's research computing needs for the coming years. Housed in a Stanford building on the SLAC campus, the original facility was completed in the fall of 2013 (SRCF) and expanded in 2023 (SRCF2).

The Stanford Genomics Cluster

The Stanford Genomics Cluster (SCG) is available to members of the genomics research community at Stanford. With scientific support from the Genetics Bioinformatics Service Center (GBSC) and system administration from Stanford Research Computing, the SCG cluster uses a charge-back model to provide access to a wide array of bioinformatics tools, techniques, and scientific support expertise. It has approximately 4,600 compute cores and 9PB of usable storage.


Quantum Learning Machine

The Atos Quantum Learning Machine is a complete appliance that offers a universal programming environment to avoid vendor lock-in. The Atos Quantum Assembly Language (AQASM), a hybrid language based on Python, allows programmers to develop their own algorithms for any existing or future quantum programming framework.
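For a sense of what AQASM programming looks like, the sketch below builds and simulates a two-qubit Bell state. It is a minimal illustration, assuming the open-source myQLM Python package that implements the AQASM programming interface; the module and function names (qat.lang.AQASM, get_default_qpu) follow myQLM's public API, and the full QLM appliance provides additional simulation backends.

    from qat.lang.AQASM import Program, H, CNOT
    from qat.qpus import get_default_qpu

    # Build a two-qubit Bell-state circuit with AQASM's Python interface
    prog = Program()
    qbits = prog.qalloc(2)                 # allocate a 2-qubit register
    prog.apply(H, qbits[0])                # Hadamard on qubit 0
    prog.apply(CNOT, qbits[0], qbits[1])   # entangle qubits 0 and 1
    circuit = prog.to_circ()

    # Simulate on the default local QPU and print the measurement
    # distribution (expected: |00> and |11>, each with probability ~0.5)
    result = get_default_qpu().submit(circuit.to_job())
    for sample in result:
        print(sample.state, sample.probability)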