Systems Overview

Below is a representative sample of the types of systems that we manage and support. Some are owned by individual PIs or groups, for the use of their research teams; others are truly campus-wide offerings. If you are interested in having Research Computing manage or host your servers, or if you are interested in investing in new equipment for the Sherlock cluster, please contact us at srcc-support@stanford.edu.

Systems information is also available on the Compute Clusters and HPC Platforms and Getting Started on our HPC Systems pages.

Sherlock

Sherlock is a shared compute cluster available for use by all Stanford faculty and their research teams for sponsored or departmental faculty research. Research teams have access to a base set of general compute nodes, GPU-based servers, and a multi-petabyte Lustre parallel file system for temporary files. Faculty can supplement these shared nodes by purchasing additional servers, thus becoming Sherlock owners. Sherlock supports more than 6,300 users from over 1,000 research groups at Stanford.
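
Work on Sherlock is submitted through the Slurm batch scheduler. The sketch below shows the general shape of a job script; the partition name, resource limits, and module version are illustrative assumptions, not site defaults, so check the cluster documentation before using them.

```shell
#!/bin/bash
# Minimal Slurm batch script sketch for a shared cluster like Sherlock.
# Partition, time, CPU, and memory values below are illustrative
# assumptions -- consult the cluster's documentation for actual limits.
#SBATCH --job-name=example
#SBATCH --partition=normal
#SBATCH --time=01:00:00
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G

module load python/3.9      # software is provided via environment modules (version assumed)
srun python my_analysis.py  # my_analysis.py is a placeholder for your own code
```

A script like this would be submitted with `sbatch job.sh`, and `squeue -u $USER` shows its status in the queue.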

Oak Storage

Oak is Stanford's premier long-term data storage platform for research data. Oak is available in increments of 10TB or 250TB, priced to be competitive with cloud storage options. Start with as little as 10TB and scale up as needed. With tight integration with world-class HPC resources like the Sherlock cluster, you can consolidate your data storage pipeline while saving time and cost. In addition, a variety of network file sharing protocols are available to meet your needs.

FarmShare

FarmShare is a compute environment available to students, faculty, and staff with fully sponsored SUNetIDs. FarmShare is a great resource for coursework-related and miscellaneous computational tasks. It also gives researchers a place to test code and explore technical solutions for their research goals before scaling up to Sherlock or another cluster.

Portrait of Nero Wolfe, the fictional detective for whom the Nero compute cluster is named.

Nero GCP

Nero GCP is a shared, secure computing platform specifically designed for high-risk data. This computing environment leverages the public cloud to address the significant demand for a secured computational environment to work with PHI and other high-risk data at scale.

Nero is available to any team led by a researcher with Principal Investigator privileges at Stanford (e.g., faculty, or researchers with a PI waiver) working exclusively with high-risk data.

Carina Computing Platform

Carina leverages a modern private cloud environment (Carina On-Prem) to address the significant demand for a secured computational environment to work with PHI and other high-risk data at scale.

Carina is available to any team led by a researcher with Principal Investigator privileges at Stanford (e.g., faculty, or researchers with a PI waiver) working exclusively with high-risk data.

The Stanford Genomics Cluster

The Stanford Genomics Cluster (SCG) is available to members of the genomics research community at Stanford. With scientific support from the Genetics Bioinformatics Service Center (GBSC) and system administration from Stanford Research Computing, the SCG cluster uses a charge-back model to provide access to a wide array of bioinformatics tools, techniques, and scientific support expertise. It has approximately 4,600 compute cores and 9PB of usable storage.

Marlowe GPU-Based Computational Instrument

Marlowe is an NVIDIA DGX H100 SuperPOD, built using NVIDIA's reference architecture and designed to deliver cutting-edge computational performance. This newest Stanford HPC cluster is managed by Stanford Data Science, with hardware and software infrastructure administered by Stanford Research Computing.

NSF ACCESS: Discover nationwide NSF cyberinfrastructure

Need advanced computing and storage options for your research or classroom? The ACCESS program, established and funded by the U.S. National Science Foundation, helps the nation's researchers and educators use some of the country's most advanced computing systems and services at no cost.
With more than 30 resources from more than 15 resource providers, there's bound to be a resource for you, your lab, or your class.

Quantum Learning Machine

The Atos Quantum Learning Machine is a complete appliance that offers a universal programming environment to avoid vendor lock-in. The Atos Quantum Assembly Language (AQASM), a hybrid language based on Python, allows programmers to develop their own algorithms on any existing or future quantum programming framework.