Need access to compute resources beyond your desktop to support your sponsored or departmental research? You may want to try out the Stanford Sherlock cluster. Purchased and supported with seed funding from the Provost, Sherlock comprises 127 compute servers and associated storage. Those 127 servers are available to run researchers' computational codes and programs, with resources allocated through a fair-share algorithm and SLURM as the resource manager and job scheduler.
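Work on the cluster is submitted to SLURM as batch scripts. The sketch below is illustrative only: the resource values are placeholders to adapt to your job, and the final echo stands in for your actual research code.

```shell
#!/bin/bash
# Minimal SLURM batch script (illustrative values; adjust for your workload).
#SBATCH --job-name=demo        # name shown in the queue
#SBATCH --time=00:10:00        # wall-clock limit (HH:MM:SS)
#SBATCH --ntasks=1             # number of tasks
#SBATCH --cpus-per-task=4      # CPU cores for the task
#SBATCH --mem=8G               # memory for the job

# Replace this placeholder with your actual research code.
RESULT="job complete"
echo "$RESULT"
```

You would submit such a script with `sbatch job.sh` and check its place in the queue with `squeue -u $USER`; the fair-share algorithm then decides when it runs relative to other users' jobs.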
Faculty can also purchase additional dedicated resources to augment Sherlock by becoming Sherlock "owners". Choosing from a standard set of server configurations supported by the SRCC staff, owners have their servers joined to the base Sherlock cluster. Owners retain access to the base cluster through fair-share, but they also have priority access to the resources they purchased, whenever they want. When an owner's servers are not in use, other owners can use them; non-owners cannot. The base Sherlock configuration is 127 servers; since June 2014, an additional 720 servers have been added by owners.
As of August 2017, Sherlock consists of 854 compute nodes, 16,500 CPU cores, 628 GPUs, and 900 TFlops of computing power, used by 434 Principal Investigators and their 2,514 researchers.
The SRCC will be launching a new cluster, Sherlock 2.0, in 2017. Background and ordering information on the new cluster can be found here.
Sherlock user information, documentation, and wiki: http://sherlock.stanford.edu
Information on Sherlock's GPU servers can be found here.
Real-time Sherlock System Status