Systems Overview
Below is a representative sample of the types of systems that we manage and support. Some are owned by individual PIs or groups, for the use of their research teams; others are truly campus-wide offerings. If you are interested in having the SRCC manage or host your servers, or if you are interested in investing in new equipment for the Sherlock cluster, please contact us at srcc-support@stanford.edu.
Systems information is also available on the Compute Clusters and HPC Platforms page.

Sherlock
Sherlock is a shared compute cluster available to all Stanford faculty and their research teams for sponsored or departmental faculty research. Research teams have access to a base set of general compute nodes, GPU-based servers, and a multi-petabyte Lustre parallel file system for temporary files. Faculty can supplement these shared nodes by purchasing additional servers, thus becoming Sherlock owners. Sherlock supports more than 6,300 users from over 1,000 research groups at Stanford.

Oak
Oak provides the research community with inexpensive storage for research projects that can grow to accommodate increasing requirements. It offers fast I/O for HPC at $50/TB per year, billed monthly, and is mounted on Sherlock and XStream; SFTP and Globus access are also supported. Oak is a capacity-oriented HPC storage system designed by the SRCC team for long-term storage that does not rely on a single vendor implementation; its software is based on the Lustre file system and the Robinhood Policy Engine.
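As a rough illustration of the pricing model only, here is a minimal Python sketch (with made-up allocation sizes) that converts the published $50/TB-per-year rate into an estimated monthly charge; actual invoicing details may differ.

    # Hypothetical cost estimate for an Oak allocation.
    # Assumes the published rate of $50 per TB per year, billed monthly;
    # the allocation sizes below are illustrative, not real quotas.
    OAK_RATE_PER_TB_YEAR = 50.00  # USD per terabyte per year

    def monthly_oak_cost(allocated_tb: float) -> float:
        """Estimated monthly charge (USD) for an Oak allocation."""
        return allocated_tb * OAK_RATE_PER_TB_YEAR / 12

    for tb in (10, 50, 200):  # example allocation sizes in TB
        print(f"{tb:>4} TB -> ${monthly_oak_cost(tb):,.2f} per month")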

FarmShare
FarmShare is a compute environment available to students, faculty, and staff with fully sponsored SUNetIDs. FarmShare is a great resource for coursework-related and miscellaneous computational tasks. It also gives researchers a place to try out code and explore technical solutions that support their research goals before scaling up to Sherlock or another cluster.

Nero GCP
Nero GCP is a shared secure computing platform designed specifically for high-risk data. The environment uses the public cloud to address the significant demand for a secure computational environment for working with PHI and other high-risk data at scale.
Nero is available to any team led by a researcher with Principal Investigator privileges at Stanford (e.g., faculty, or researchers with a PI waiver) working exclusively with high-risk data.

Carina Computing Platform
Currently in beta testing.
Carina uses a modern private cloud environment (Carina On-Prem) to address the significant demand for a secure computational environment for working with PHI and other high-risk data at scale.
Carina is available to any team led by a researcher with Principal Investigator privileges at Stanford (e.g., faculty, or researchers with a PI waiver) working exclusively with high-risk data.

Population Health Sciences
SRCC supports the research computing infrastructure of the Population Health Sciences project, which serves over 300 researchers with access to 120 data sets containing 281 billion records. The PHS-Windows servers form a cluster of powerful computers where SAS, Stata, R, MATLAB, and other analysis tools are available to all users remotely. The cluster also serves as a central repository for all PHS-acquired research data.

Stanford Research Computing Facility
The Stanford Research Computing Facility (SRCF) provides the campus research community with data center facilities designed specifically to host high-performance computing equipment. Supplementing the renovated area of the Forsythe data center, the SRCF is intended to meet Stanford’s research computing needs for the coming years. A Stanford building located on the SLAC campus, the SRCF was completed in the fall of 2013, and production HPC services have been offered there since December 2013. The facility and the services therein are managed by the Stanford Research Computing Center (SRCC).

The Stanford Genomics Cluster
The Stanford Genomics Cluster (SCG) is available to members of the genomics research community at Stanford. With scientific support from the Genetics Bioinformatics Service Center (GBSC) and system administration from the Stanford Research Computing Center (SRCC), the SCG cluster uses a charge-back model to provide access to a wide array of bioinformatics tools, techniques, and scientific support expertise. It has approximately 4,600 compute cores and 9 PB of usable storage.

The Institute for Computational & Mathematical Engineering (ICME)
The Institute for Computational & Mathematical Engineering (ICME) GPU cluster consists of servers with 140 GPU cores and 1.2 TB of RAM. ICME also operates an MPI cluster with 184 CPUs and 172 GB of RAM, available to ICME students and faculty.

Quantum Learning Machine
The Atos Quantum Learning Machine is a complete appliance that offers a universal programming environment designed to avoid vendor lock-in. The Atos Quantum Assembly Language (AQASM), a hybrid language based on Python, allows programmers to develop their own algorithms for any existing or future quantum programming framework.
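To give a sense of what programming against the QLM looks like, here is a minimal Bell-pair sketch written against the publicly documented myQLM Python interface (qat.lang.AQASM); the module names and the availability of the local PyLinalg simulator on the SRCC appliance are assumptions, not confirmed configuration.

    # Minimal Bell-pair example, assuming the myQLM Python interface.
    from qat.lang.AQASM import Program, H, CNOT
    from qat.qpus import PyLinalg  # local linear-algebra simulator

    prog = Program()
    qbits = prog.qalloc(2)                 # allocate two qubits
    prog.apply(H, qbits[0])                # put qubit 0 in superposition
    prog.apply(CNOT, qbits[0], qbits[1])   # entangle qubits 0 and 1

    job = prog.to_circ().to_job()          # build the circuit, wrap as a job
    result = PyLinalg().submit(job)
    for sample in result:
        print(sample.state, sample.probability)

On the appliance itself, the same circuit would typically be submitted to one of its QPU back ends rather than the local simulator.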