
Systems & Services Overview

The Stanford Research Computing Center (SRCC) provides a range of services: helping you specify computing and data needs for research proposals, scalable and affordable data storage platforms, consultation on your specific computing needs and problems, and computational platforms and resources. We want to help you find the best solution for your specific need, whether that solution has you using resources at Stanford, at a national HPC facility, or in the cloud. Several of our services, though, relate directly to physical hardware, including hosting and system administration. Hosting largely takes place at the Stanford Research Computing Facility (SRCF), a data center designed specifically to host high-density, power-intensive computing equipment. A Stanford building located on the SLAC campus, the SRCF is intended to meet the research server hosting needs of the campus for years to come.

Below is a representative sample of the types of systems that we manage and support.  Some are owned by individual PIs or groups, for the use of their research teams; others are truly campus-wide offerings.  If you are interested in having the SRCC manage or host your servers, or if you are interested in investing in new equipment for the Sherlock cluster, please contact us at srcc-support@stanford.edu.

Sherlock is a shared compute cluster available for use by all Stanford faculty and their research teams for sponsored or departmental faculty research. Jobs are scheduled with the Slurm resource manager, and all research teams have access to a base set of general compute nodes, GPU-based servers, and a multi-petabyte Lustre parallel file system for temporary files. Faculty can supplement these shared nodes by purchasing additional servers, thus becoming Sherlock owners. Owners not only have exclusive use of the nodes they purchase, but also gain access to the more than 1,300 owner compute nodes across the cluster. Thus far, owners from more than 100 research groups have added compute nodes to Sherlock for their teams' exclusive use.
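
As a rough sketch of how work reaches Sherlock through Slurm, the Python snippet below pipes a small batch script to the sbatch command; the partition name, module version, and resource requests are illustrative assumptions rather than Sherlock defaults, so substitute your group's actual settings.

    #!/usr/bin/env python3
    """Minimal sketch: submit a batch job to a Slurm scheduler such as Sherlock's."""
    import subprocess
    import textwrap

    # Hypothetical job script; the partition, module, and resource values
    # below are placeholders to adapt to your group's configuration.
    job_script = textwrap.dedent("""\
        #!/bin/bash
        #SBATCH --job-name=demo
        #SBATCH --partition=normal
        #SBATCH --ntasks=1
        #SBATCH --cpus-per-task=4
        #SBATCH --time=00:10:00
        #SBATCH --output=demo-%j.out

        module load python/3.9
        srun python my_analysis.py
        """)

    # sbatch reads the job script from stdin and prints the new job ID.
    result = subprocess.run(
        ["sbatch"], input=job_script, text=True,
        capture_output=True, check=True,
    )
    print(result.stdout.strip())  # e.g. "Submitted batch job 1234567"

The same script could just as well be saved to a file and handed to sbatch directly; the point is only that all work on Sherlock, on shared and owner nodes alike, is submitted through Slurm.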

Oak provides the research community with inexpensive storage for research projects, and it can grow to accommodate a project's increasing requirements. Oak offers fast I/O storage for HPC at $50/TB per year, billed monthly, sold in 10 TB increments on a 4-year term; buying 350 TB at once saves 65%. The Oak storage system is mounted on Sherlock and XStream, and SFTP and Globus support are available. Oak is a capacity-oriented HPC storage system designed for long-term storage that does not rely on a single vendor implementation. It was designed by the SRCC team using commercial off-the-shelf (COTS) components and open-source software to provide up to billions of inodes and tens of petabytes of storage space to meet Stanford researchers' big data storage needs. The software behind Oak is based on the Lustre filesystem and the Robinhood Policy Engine.
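
Since Oak is reachable over SFTP as well as being mounted on the clusters, transfers can be scripted. The sketch below uses the third-party paramiko library; the hostname, Oak directory, and username are placeholders rather than documented endpoints, so use the transfer host and group directory that SRCC assigns to you.

    #!/usr/bin/env python3
    """Minimal sketch: copy a results file to Oak over SFTP using paramiko."""
    import paramiko

    HOST = "transfer.example.stanford.edu"          # placeholder transfer host
    REMOTE_DIR = "/oak/stanford/groups/your_group"  # placeholder Oak directory

    # Assumes SSH keys (or an agent) are already set up for authentication.
    client = paramiko.SSHClient()
    client.load_system_host_keys()
    client.connect(HOST, username="your_sunetid")

    # Upload a local archive into the group's Oak directory, then clean up.
    sftp = client.open_sftp()
    sftp.put("results.tar.gz", REMOTE_DIR + "/results.tar.gz")
    sftp.close()
    client.close()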

Farmshare is a compute environment available to students, faculty, and staff with fully sponsored SUNet IDs. Farmshare is a great resource for coursework-related and miscellaneous computational tasks. It also gives researchers a place to try out code and learn about technical solutions that can help them reach their research goals before scaling up to Sherlock or another cluster.

The Stanford Genomics Cluster is available to members of the genomics research community at Stanford. With scientific support from the Genetics Bioinformatics Service Center (GBSC) and system administration from the Stanford Research Computing Center (SRCC), the SCG4 cluster uses a charge-back model to provide access to a wide array of bioinformatics tools, techniques, and scientific support expertise. It has about 2,000 compute cores and about 6 PB of usable storage.

The Stanford Research Computing Facility (SRCF) is the home of XStream, a 1,040-GPU, one-petaflop computational cluster funded by a National Science Foundation Major Research Instrumentation grant awarded to Stanford principal investigator (PI) Todd Martinez (Chemistry/PULSE Institute) and co-PIs Tom Abel (Physics/KIPAC), Margot Gerritsen (ICME/Earth, Energy & Environmental Sciences), and Vijay Pande (Chemistry). Additional faculty participating on the grant were from the Schools of Earth, Energy & Environmental Sciences; Engineering; Humanities & Sciences; and Medicine. Twenty percent of XStream's capacity is allocated to researchers across the country via the NSF XRAC allocation process.

SRCC supports the research computing infrastructure of the Population Health Sciences project, with over 300 researchers who can access 120 data sets containing 281 billion records.  The PHS-Windows servers form a cluster of powerful computers where programs like SAS, Stata, R, Matlab, and other analysis tools are available to all users remotely. After connecting to a server in the cluster, users have access to licensed software tools and shared published data from other collaborators. The cluster serves as a central repository for all PHS-acquired research data, and it allows for more efficient data management as well as increased data security.

Nero is a new highly secure, fully integrated research data platform that enables cross-disciplinary collaboration, with capabilities to easily develop and share data models. Nero's platform meets or exceeds Stanford's minimum security standards for High Risk and Protected Health Information (PHI) data, and it is managed by experienced IT staff to meet data security requirements. Researchers can focus on science instead of dealing with the hassle, expense, and risk of operating and maintaining their own servers. Standard analytical software such as Jupyter Notebook, Stata, and R is also updated and patched by the IT staff. Nero is designed to support Big Data team science: Big Data research benefits from the availability of High Risk and PHI-compliant environments, whether for analysis of social network data or health data, and Nero brings the analytical communities across different disciplines together to work in a collaborative and secure environment.


The HANA Immersive Visualization Environment (HIVE) is a visualization center located in Huang 050. Sponsored by ICME, the Army High Performance Computing Research Center, and the School of Engineering, it is available to the Stanford community. The HIVE enables collaborative visualization in teaching and research across the sciences, humanities, social sciences, and engineering.

The Institute for Computational & Mathematical Engineering (ICME) GPU cluster consists of servers with 140 GPU cores and 1.2 TB of RAM, and it can be used by anyone at Stanford. ICME also has an MPI cluster with 184 CPUs and 172 GB of RAM, available to ICME students and faculty.