
Compute Clusters and HPC Platforms



General Use

FarmShare

FarmShare is a shared computing environment available to students, faculty, and staff with fully sponsored SUNetIDs. FarmShare is an excellent resource for coursework-related and miscellaneous computational tasks. It also gives researchers a place to practice coding and learn technical solutions that can help them attain their research goals before scaling up to Sherlock or another cluster.
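
For example, a typical FarmShare workflow is to log in over SSH and submit small jobs through the Slurm scheduler. The sketch below is illustrative only; the login hostname and resource limits are assumptions, not official values.

    # Log in with your SUNetID (hostname shown is an assumption)
    ssh sunetid@rice.stanford.edu

    # hello.sbatch -- a minimal Slurm batch script
    #!/bin/bash
    #SBATCH --job-name=hello
    #SBATCH --time=00:05:00      # request five minutes of wall time
    #SBATCH --mem=1G             # request 1 GB of memory
    echo "Hello from $(hostname)"

    # Submit the script and check the queue
    sbatch hello.sbatch
    squeue -u $USER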

Sherlock

Sherlock is a shared compute cluster available to all Stanford faculty and their research teams for sponsored or departmental research. Research teams have access to a base set of general compute nodes, GPU-based servers, and a multi-petabyte Lustre parallel file system for temporary files. Faculty can supplement these shared nodes by purchasing additional servers, thus becoming Sherlock owners. Sherlock supports more than 6,300 users from over 1,000 research groups at Stanford.
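
As a hedged illustration of how those resources are typically used, a Slurm batch script might request a GPU node and stage temporary files on the Lustre scratch file system. The partition name and the $SCRATCH variable below are assumptions based on common Slurm setups, not a statement of Sherlock's exact configuration.

    #!/bin/bash
    #SBATCH --job-name=train
    #SBATCH --partition=gpu      # partition name is illustrative
    #SBATCH --gres=gpu:1         # request one GPU
    #SBATCH --time=02:00:00
    #SBATCH --mem=32G

    # Keep large temporary files on Lustre scratch ($SCRATCH is assumed)
    cd "$SCRATCH"
    python train.py --data "$SCRATCH/dataset"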

High-Risk Data

Carina Computing Platform

Currently in beta testing ahead of a planned mid-2023 launch, Carina is a shared “big data” computing platform that leverages a modern private cloud environment (Carina On-Prem), specifically designed to address the significant demand for a secure computational environment for working with PHI and other high-risk data. Carina is available to any team led by a researcher with Principal Investigator privileges at Stanford (e.g., faculty, or researchers with a PI waiver) working exclusively with high-risk data.

Nero GCP

Nero GCP (Google Cloud Platform) is a shared secure computing platform specifically designed for high-risk data. Nero leverages the public cloud, including HIPAA-compliant cloud-native products such as BigQuery, Dataflow, and Pub/Sub, to address the significant demand for a secure computational environment for working with PHI and other high-risk data at scale. Nero is available to any team led by a researcher with Principal Investigator privileges at Stanford (e.g., faculty or researchers with a PI waiver) working exclusively with high-risk data.
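
Because Nero exposes cloud-native services such as BigQuery, an analysis can often be expressed as standard SQL run through Google's bq command-line tool. A minimal sketch follows; the project, dataset, table, and column names are hypothetical.

    # Count records per year in a hypothetical de-identified table
    bq query --use_legacy_sql=false '
      SELECT EXTRACT(YEAR FROM admit_date) AS yr, COUNT(*) AS n
      FROM `my-nero-project.claims.encounters`
      GROUP BY yr
      ORDER BY yr'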

Other Restricted Use

GSB

SRCC administers the Yen Cluster, a collection of Ubuntu Linux servers dedicated to research computing at the Graduate School of Business (GSB). Each of these servers is equipped with 256 processing cores and about 1 TB of RAM, capable of handling memory- or CPU-intensive work that would overwhelm a laptop. Software such as Matlab and Stata is installed and licensed for use on Yen Cluster servers. All GSB faculty members, PhD students, post-docs, and research fellows are eligible to use the Yen Cluster.
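
As a sketch of a typical interactive session (assuming Lmod-style environment modules; the hostname and module name are illustrative), a user might log in and run a Stata job in batch mode:

    # Log in with your SUNetID (hostname shown is an assumption)
    ssh sunetid@yen.stanford.edu

    ml stata                     # load the Stata module (Lmod assumed)
    stata-mp -b do analysis.do   # run a do-file; output lands in analysis.log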

ICME

The ICME GPU cluster is used by ICME students and members of Stanford ICME workgroups, and has a restricted partition for certain courses. The cluster has a total of 32 nodes (20 CPU nodes and 12 GPU nodes), with 140 GPU cores and 1.2 TB of RAM. In addition, ICME has an MPI cluster with 184 CPUs and 172 GB of RAM, available to ICME students and faculty.

PHS (Population Health Sciences)

SRCC supports the research computing infrastructure of the Population Health Sciences project, whose more than 300 researchers can access 120 data sets containing 281 billion records. The PHS-Windows servers form a cluster of powerful computers on which programs such as SAS, Stata, R, Matlab, and other analysis tools are available to all users remotely. The cluster serves as a central repository for all PHS-acquired research data.

SCG (Genomics)

The Stanford Genomics Cluster is available to members of the genomics research community at Stanford. With scientific support from the Genetics Bioinformatics Service Center (GBSC) and system administration from the Stanford Research Computing Center (SRCC), the SCG cluster uses a charge-back model to provide access to a wide array of bioinformatics tools, techniques, and scientific support expertise. It has approximately 4,600 compute cores and 9 PB of usable storage.

Didn’t find what you’re looking for?

If you are interested in having the SRCC manage or host your own servers, want to invest in new equipment for the Sherlock cluster, or are looking for something else, please contact us at srcc-support@stanford.edu.