
Sherlock High Performance Computing Cluster


Sherlock at SRCF

Need access to compute resources beyond your desktop to support your sponsored or departmental research? You may want to try out the Stanford Sherlock cluster. Purchased and supported with seed funding from the Provost, Sherlock comprises more than 1,300 compute servers and associated storage. More than 100 of those servers are available to any Stanford PI to run their computational codes and programs, with resources managed through a fair-share algorithm using SLURM as the resource manager/job scheduler.
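As a quick illustration, work on a SLURM-managed cluster such as Sherlock is typically submitted as a batch script rather than run interactively. The sketch below is a minimal example, not a Sherlock-specific recipe: the partition name, module name, script name, and resource limits are placeholders that would need to be adjusted for an actual account.

    #!/bin/bash
    #SBATCH --job-name=example        # name shown in the queue
    #SBATCH --partition=normal        # placeholder: the shared partition name may differ
    #SBATCH --ntasks=1                # a single task
    #SBATCH --cpus-per-task=4         # four CPU cores for that task
    #SBATCH --mem=8G                  # memory for the whole job
    #SBATCH --time=01:00:00           # wall-clock limit of one hour

    # Load an illustrative prebuilt tool and run an illustrative program.
    module load python/3.9
    python3 my_analysis.py

Such a script would be submitted with "sbatch job.sh" and its place in the queue checked with "squeue -u $USER"; the fair-share algorithm then determines when it starts relative to other groups' pending jobs.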

Faculty can also purchase additional dedicated resources to augment Sherlock by becoming Sherlock "owners". Owners choose from a standard set of server configurations supported by the SRCC staff, and their servers are "joined" to the base Sherlock cluster. Owners retain fair-share access to the base University-funded set of servers, but they also have priority access to the resources they purchased, whenever they want. When an owner's servers aren't in use, other owners can use them, but non-owners cannot. The base Sherlock configuration is 141 servers; since June 2014, an additional 1,107 servers have been added by owners. Whether an owner or not, researchers using Sherlock have access to more than 400 different computational tools and codes prebuilt by the Research Computing team.
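Prebuilt software on clusters of this kind is commonly exposed through an environment-module system, so browsing and loading tools looks roughly like the sketch below. The specific commands and tool names are assumptions for illustration, not a statement of exactly how Sherlock's software stack is organized.

    # List the software that has been prebuilt and made available on the cluster.
    module avail

    # Search for a specific tool, then load it into the current shell environment.
    module spider gromacs      # 'spider' is specific to the Lmod module system; assumed here
    module load gromacs        # loading without a version picks the default, if one is defined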

In July 2018, Sherlock comprised some 1,325 compute nodes, 24,096 CPU cores, 1,195 GPUs, and 1,590 TFlops of computing power, used by ~600 Principal Investigators and their roughly 3,500 research team members.

Sherlock node ordering information can be found here.

Information on Sherlock's GPU servers can be found here.

Real-time Sherlock System Status