
Research Computing’s Consultants & Instructors


Our team can help you find solutions to any of your research computing needs and problems, from specifying your computing and data requirements for a research proposal, to assessing the applicability of high performance computing (HPC) resources to your project. These same experts teach courses about research software and how to use our systems. And every quarter they work together to fine-tune our training curriculum to address areas of greatest interest to our researcher-users.

Sara Cook

Sara is a co-developer of Research Computing’s Lunch and Learn noontime single-topic classes — including the “Slurm Resource Estimation” session — and she participates in the design and improvement of other classes in our training curriculum. With her impressive skills and experience in front-end web coding, Sara co-developed and maintains the newly redesigned Globus at Stanford website, which runs on GitHub Pages, an open source web publishing platform. And there’s more! Sara created the Slurm-O-Matic tool, which provides a friendly interface and automates some functions of Slurm (Simple Linux Utility for Resource Management).

Christina Gancayco

With a consultative approach that’s grounded in her history with — and understanding of — researchers’ needs and objectives, Christina is always prepared to help Stanford researchers troubleshoot problems, develop custom scripts for data analysis, and learn best practices on our HPC systems. As a lead training instructor for Research Computing, Christina coordinates and teaches “Lunch & Learn” (previously “Code & Coffee”) sessions. In addition, she is a regular Slack chat contributor on the channels dedicated to our respective compute clusters.

Mark Piercy

Mark is our primary liaison to the School of Humanities and Sciences, supporting researchers in their use of our HPC systems. He onboards new faculty and research groups to Stanford Research Computing (SRC) resources and conducts HPC outreach throughout Stanford. As a lead training instructor, Mark conducts our monthly Sherlock onboarding sessions, teaches Introduction to HPC and HPC resource estimation classes, participates in online office hours, and develops and improves classes and documentation. Mark has been working at Stanford in research and software development for over 15 years.

Brad Rittenhouse

Brad's primary focus is supporting humanities and social sciences researchers in the use of our HPC systems. Brad administers — and was instrumental in founding — the Stanford Research Computing Fellowship, modeled on a concept he began developing while working toward his PhD at the University of Miami. Research Computing now awards the Fellowship annually to researchers of all experience and degree levels — especially early career scholars, master’s students, and individuals from underrepresented backgrounds. Brad remains an active researcher and scholar himself, with recent and forthcoming publications with academic and trade presses that exemplify his integration of technical expertise and theoretical inquiry.

Robby Rollins

Robby is a high-risk research and data facilitator on our Nero and Carina team. He focuses on troubleshooting, writing documentation and training materials, and ensuring data security and compliance on the Nero and Carina systems. Before joining Stanford Research Computing, Robby worked on the cybersecurity of medical Bluetooth devices (insulin pumps, pacemakers, etc.), network emulation tools, and automated log analysis.

Mark Yoder

Mark is Stanford Research Computing’s liaison to the Doerr School of Sustainability’s computing groups. In his day-to-day work as a computational scientist making code work on Sherlock or in the cloud, Mark applies a combination of systems thinking and analytical/software expertise to ensure that our HPC systems are well-matched to researchers’ needs. With a career that has spanned data engineering, software development, and data science, a PhD in Physics and Natural Hazards Science from UC Davis — and with several peer-reviewed papers to his name — Mark is no stranger to academic publishing or the indispensable role that computing plays in modern research.

Zhiyong Zhang

Zhiyong is a computational and R&D scientist with decades of experience in the application of computational science to research. He works with researchers from virtually all disciplines to resolve computational and research challenges. As Stanford Research Computing’s resident expert in HPC code optimization, Zhiyong has developed and taught HPC workshops on a range of topics, from basic to advanced, including a series of Nvidia Deep Learning Institute workshops. Projects that Zhiyong currently supports and writes software for include the development and application of foundation models to sleep research (as described in The Human Sleep Project) and genomics analysis. Zhiyong is also helping to foster a cross-institutional community of developers and application researchers, with the aim of creating protein folding ML models that will revolutionize drug discovery and other applications.

To learn about other members of the Research Computing team, visit the People page.