Stanford Research Network (SRN)
The Stanford Research Network (SRN) is the current-generation multi-hundred-Gigabit network used within Stanford's two research data centers: the Stanford Research Computing Facility (SRCF) and the Research Computing Facility (RCF) in Forsythe Hall. The name previously referred to a now-obsolete physical network connecting the research data centers and multiple buildings on campus.
For more information, see the linked pages below:
- SRN Data Center Network Connectivity, for data center clients, and especially those providing IT support.
- SRN Client Network Architecture Guide, for data center client sysadmins and LNAs.
- Building Network Connectivity for Research, for lab and building managers.
The Original SRN
In the 2010s, the Stanford Research Network was developed as a separate campus network, which provided 10 Gigabit network connectivity to endpoints in covered buildings. The buildings included in the pilot were Allen, Clark Center, Huang, Building 01-420 (christened Jordan Hall, new name TBD), MERL, Physics & Astrophysics, Pine Hall, Polya Hall, Sequoia, SRCF, Terman, 1050 Arastradero (Building B), and 3165 Porter Drive.
The goal of the SRN was to enable the creation of Science DMZs for labs that needed them, and to provide high-bandwidth connectivity to labs at a time when a building might have had only a pair of 10-Gigabit connections for all of its network traffic. The buildings included in the pilot received dedicated research switches, paid for by the University. Labs paid only the one-time cost of the fiber connection (if needed) to the building's SRN switch.
The building SRN switches—along with a pair of core switches and routers located on campus, and most of the connectivity at the SRCF—formed their own dedicated network, separate from the normal Stanford campus network (SUNet). The SRN had direct connectivity to campus border routers, and provided a dedicated path to Internet2 via Stanford's 100-Gigabit connection to CENIC's HPR service. There was also a connection to SUNet routers; although traffic could move from the SRN to SUNet, most network firewalls did not consider SRN systems to be campus systems. The SRN itself was not firewalled, so system owners were responsible for following all of the MinSec for Servers requirements of the day.
Feedback from the initial SRN deployment revealed some unexpected lessons. Several groups asked for and used the higher-speed ports, but removing the network bottleneck exposed other limits in their own IT infrastructure. For example, most in-lab storage could not deliver data fast enough to fill a 10-Gigabit link while still serving the day-to-day needs of the lab, an impact many labs were not prepared for. Researchers also found that, while their own bandwidth had increased, their distant collaborators' networks were now saturated. In other cases, such as when a campus network sat between a collaborator's lab and the Internet, the remote campus network was the bottleneck.
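The storage bottleneck above comes down to simple arithmetic: a 10-Gigabit link can absorb data far faster than typical in-lab storage of the era could supply it. The sketch below illustrates this with hypothetical throughput figures (the dataset size, link efficiency, and disk speed are illustrative assumptions, not measurements from the SRN pilot).

```python
# Back-of-the-envelope illustration of the storage bottleneck described
# above. All figures here are illustrative assumptions, not measurements
# from the SRN pilot.

def transfer_hours(dataset_gb: float, rate_gbps: float,
                   efficiency: float = 0.8) -> float:
    """Hours to move dataset_gb gigabytes at rate_gbps gigabits/second,
    achieving the given fraction of line rate."""
    gigabits = dataset_gb * 8
    return gigabits / (rate_gbps * efficiency) / 3600

# A 10-Gigabit link at 80% efficiency sustains 1 GB/s, so a
# (hypothetical) 10 TB dataset moves in under 3 hours...
print(f"10 TB over 10 Gb/s link:  {transfer_hours(10_000, 10):.1f} h")

# ...but storage sustaining only ~1.6 Gb/s (about 200 MB/s, a plausible
# figure for a single spinning disk) cannot keep that link full; the
# same transfer becomes storage-bound, not network-bound.
print(f"10 TB from ~1.6 Gb/s disk: {transfer_hours(10_000, 1.6, 1.0):.1f} h")
```

The point is that the slower of the two stages sets the effective transfer rate, which is why faster network ports alone did not speed up many labs' workflows.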
Perhaps because of these issues, a number of researchers using the SRN found that moving their work to Sherlock solved several of them. First, the SRCF and Sherlock were already equipped with high-speed network connections to campus and the outside world. Second, Sherlock's file system was built for high throughput and could handle large flows easily. Finally, once the data were on Sherlock, processing could be done there directly, and the lab's normal 1-Gigabit connection was enough to transfer back the subset of data needed at the lab (for local visualization, for example, or for other specialized processing).
Since the time of the pilot, the University has declined to make central funding available for research endpoint connectivity, but changes were made to improve network bandwidth for all users. At SRCC and the SRCF, the Oak Storage service was launched, providing a cost-effective way to store large volumes of data and transfer them at high throughput to the SRCC clusters (Sherlock and SCG) and to the outside world. Back on campus, funding was extended, as part of the normal campus network refresh program, to increase campus core and border connectivity from 10 Gigabit to 100 Gigabit. The campus core network now uses multiple 100-Gigabit connections, with 100-Gigabit links between campus and our three ISPs. With these improvements, SUNet is able to handle research levels of traffic.
The results from the original SRN pilot did not disprove the need for higher-speed links on campus, but they did suggest that, instead of storing and sharing data from a local machine in a lab on campus, it is faster and more cost-effective to keep data on Oak and work in an Oak-connected environment (like Sherlock or SCG). Since then, as high-bandwidth instruments have become prevalent, connectivity and bandwidth from those instruments have become a core interest of the Office of the Dean of Research.
Today, the remaining original SRN switches are reaching the end of their useful life and are no longer available for new connections. If you are looking to connect to the original SRN, you should instead read through the present-day options for research networking in buildings.
The 2021/2022 SRN Migration
From the opening of the SRCF in November 2013, the Stanford University Network (SUNet) was provided at the SRCF by a pair of Cisco Nexus 7009 chassis. Over 2021/2022, the SRCF was migrated to the new Stanford Research Network.