Stanford Research Network (SRN)

The Stanford Research Network (SRN) is the current-generation multi-hundred-Gigabit network used within Stanford's two research data centers: the Stanford Research Computing Facility (SRCF) and the Research Computing Facility (RCF) in Forsythe Hall.  The name previously referred to a physical network, now obsolete, that connected the research data centers and multiple buildings on campus.

For more information, see the sections below:

The Original SRN

Thanks to Paul Murray, Phil Reese, and Johan van Reijendam for the information in this section.

In the 2010s, the Stanford Research Network was developed as a separate campus network, which provided 10-Gigabit network connectivity to endpoints in covered buildings.  The buildings included in the pilot were Allen, Clark Center, Huang, Building 420 (née Jordan Hall), MERL, Physics & Astrophysics, Pine Hall, Polya Hall, Sequoia, SRCF, Terman (since demolished), 1050 Arastradero (Building B), and 3165 Porter Drive.

The goal of the SRN was to enable the creation of Science DMZs for labs which needed them, and to provide high-bandwidth connectivity to labs at a time when buildings might have had only a pair of 10-Gigabit connections for all building network traffic.  The buildings included in the pilot received dedicated research switches, paid for by the University.  Labs would only pay the one-time cost for the fiber connection (if needed) to the building's SRN switch.

The building SRN switches—along with a pair of core switches and routers located on campus, and most of the connectivity at the SRCF—formed their own dedicated network, separate from the normal Stanford campus network (SUNet).  The SRN had direct connectivity to campus border routers, and provided a dedicated path to Internet2 via Stanford's 100-Gigabit connection to CENIC's HPR service.  There was also a connection to SUNet routers; although traffic could move from the SRN to SUNet, most network firewalls did not consider SRN systems to be campus systems.  The SRN itself was not firewalled, so system owners were responsible for following all of the MinSec for Servers requirements of the day.

Feedback from the initial SRN deployment proved to be very interesting.  While several groups asked for and made use of the higher-speed ports, removing the network-bandwidth bottleneck highlighted other issues with their own IT infrastructure.  For example, most in-lab storage was unable to provide data quickly enough to fill the 10-Gigabit network link while still serving the normal needs of the lab.  Many labs were not ready for this impact.  Researchers also found that, while their own network bandwidth had increased, their distant collaborators' networks were becoming saturated.  In other cases—for example, when a campus network sat between a collaborator's lab and the Internet—the remote campus network was the bottleneck.

Perhaps because of some of these issues, a number of researchers using the SRN found that moving to Sherlock solved several of them.  First, the SRCF and Sherlock are already equipped with high-speed network connections to campus and the outside world.  Second, Sherlock's file system is optimized to be extremely fast and can handle large flows easily.  Finally, once data are on Sherlock, processing can be done right on Sherlock, and the lab's normal 1-Gigabit connections are enough to transfer the subset of data needed back at the lab (for local visualization, for example, or for other specialized processing).
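
To make the bandwidth arithmetic in the two paragraphs above concrete, here is a minimal back-of-envelope sketch in Python.  Every figure in it (the dataset size, the assumed storage throughput, the subset size) is an illustrative assumption rather than a measurement from the SRN pilot; the point is simply that the slowest stage, not the network port, sets the transfer time, and that a 1-Gigabit lab link can be adequate once only a reduced subset of the data needs to come back to the lab.

    # Back-of-envelope estimate: can in-lab storage keep a 10-Gigabit link busy,
    # and how long does a reduced subset take over an ordinary 1-Gigabit link?
    # All figures below are illustrative assumptions, not SRN measurements.

    def transfer_hours(dataset_gb: float, rate_gbit_per_s: float) -> float:
        """Hours to move dataset_gb gigabytes at a sustained rate in Gbit/s."""
        seconds = (dataset_gb * 8) / rate_gbit_per_s
        return seconds / 3600

    DATASET_GB = 10_000        # hypothetical 10 TB instrument run
    LINK_GBIT = 10             # SRN building port speed
    LAB_STORAGE_GBIT = 4       # assumed sustained read rate of an in-lab array (~500 MB/s)
    SUBSET_GB = 200            # hypothetical processed subset pulled back to the lab
    LAB_LINK_GBIT = 1          # the lab's ordinary 1-Gigabit connection

    # A transfer can only run as fast as its slowest stage.
    effective_gbit = min(LINK_GBIT, LAB_STORAGE_GBIT)

    print(f"Full dataset over an uncongested 10G link: {transfer_hours(DATASET_GB, LINK_GBIT):.1f} h")
    print(f"Same transfer limited by lab storage:      {transfer_hours(DATASET_GB, effective_gbit):.1f} h")
    print(f"Subset back to the lab at 1 Gbit/s:        {transfer_hours(SUBSET_GB, LAB_LINK_GBIT):.1f} h")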

Since the time of the pilot, the University has declined to make central funding available for research endpoint connectivity.  However, changes were made to improve network bandwidth for all users.  At SRCC and the SRCF, the Oak Storage service was launched, providing a cost-effective way to store large volumes of data and transfer them at high throughput to SRCC clusters (Sherlock and SCG) and to the outside world.  Back on campus, funding was extended—as part of the normal campus network refresh program—to increase campus core and border connectivity from 10-Gigabit to 100-Gigabit.  The campus core network now uses multiple 100-Gigabit connections, with 100-Gigabit links between campus and our three ISPs.  With these improvements, SUNet is able to handle research levels of traffic.

The results from the original SRN pilot did not disprove the need for higher-speed links on campus, but they did suggest that, instead of storing and sharing data from a local machine in a lab on campus, it is faster and more cost-effective to keep data on Oak and work in an Oak-connected environment (like Sherlock or SCG).  Since then, as high-bandwidth instruments have become prevalent, connectivity and bandwidth from those instruments have become a core interest of the Office of the Vice Provost & Dean of Research.

Today, the remaining original SRN switches are reaching the end of their useful life and are no longer available for new connections.  If you are looking to connect to the original SRN, you should instead read through the present-day options for research networking in buildings.

The 2021/2022 SRN Migration

When the SRCF (now SRCF1) opened in November 2013, its network was provided by a pair of Cisco Nexus 7009-series switches, with a large number of 10-Gigabit ports and a few 100-Gigabit ports.  SRCF1 was integrated into the original SRN, and used the SRN's routes to get traffic to and from campus and the Internet.  It was separate from the "any VLAN, anywhere" Cisco FabricPath-based Stanford University Network (SUNet) of the time.  Most networks were not firewalled.  For networks that did need to be firewalled, routing was provided by existing firewalls on campus, and a pair of 10-Gigabit fiber links in SRCF1 was used to "stretch" those VLANs into the research data centers, via the School of Medicine's Operational Area (the "MOA").

After the original SRN was decommissioned, routers built into the SRCF1 and RCF switch management blades took over ownership of the non-stretched VLANs, which were then identified as proper campus networks.  The routers received their own connections to the border routers, so research traffic could bypass campus if needed.  The research data centers continued to exist within the University IP space, but with their own VLANs.  Switching was provided by the same Nexus 7009-series switches as before.  The SRCF1/MOA link remained.

During that time, campus networks were moving away from FabricPath, but a solution was still needed for networks that had to be stretched between the Campus and Research spaces, or between different campus Operational Areas.  The decision was made to use the research data centers as the launching point for a new, EVPN/VXLAN-based network architecture.
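
For readers unfamiliar with the terminology, the short Python sketch below illustrates the VXLAN half of that architecture: the original Ethernet frame is wrapped in a small header carrying a 24-bit VXLAN Network Identifier (VNI), and the result is then carried in UDP/IP across an ordinary routed network, which is what lets a Layer-2 segment be "stretched" between locations.  The VNI and frame contents are made-up example values, nothing here reflects Stanford's actual configuration, and the EVPN control plane (the BGP machinery that distributes MAC and VTEP reachability) is not shown.

    # Minimal sketch of VXLAN encapsulation (RFC 7348), standard library only.
    # It builds the 8-byte VXLAN header by hand and prepends it to an example
    # Ethernet frame; a real VTEP would then wrap the result in UDP (destination
    # port 4789) and an outer IP header addressed to the remote VTEP.
    import struct

    def vxlan_header(vni: int) -> bytes:
        """8-byte VXLAN header: flags byte (valid-VNI bit set), reserved bytes,
        24-bit VXLAN Network Identifier, one more reserved byte."""
        flags_word = 0x08 << 24        # 0x08 = "I" (valid VNI) flag in the first byte
        return struct.pack("!II", flags_word, vni << 8)

    def encapsulate(inner_frame: bytes, vni: int) -> bytes:
        """VXLAN payload = VXLAN header + the original Layer-2 frame."""
        return vxlan_header(vni) + inner_frame

    # A made-up inner frame: broadcast dst MAC, locally-administered src MAC,
    # IPv4 EtherType, and a placeholder payload.
    inner = (bytes.fromhex("ffffffffffff") + bytes.fromhex("020000000001")
             + bytes.fromhex("0800") + b"example payload")
    packet = encapsulate(inner, vni=10042)   # 10042 is an arbitrary example VNI
    print("VXLAN header:", packet[:8].hex(),
          "VNI:", int.from_bytes(packet[4:7], "big"))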

Over 2021/2022, top-of-rack switches were moved from the Cisco Nexus switches to the new SRN switches, and unfirewalled networks were moved to be routed by the SRN.  This completed the first major migration to the new Stanford Research Network, with the SRCF1/MOA link remaining for stretched VLANs.

The 2024 SRN/Campus Stretch Migration

After the 2021/2022 SRN migration completed, stretched VLANs continued to use the SRCF1/MOA link.  This was because EVPN/VXLAN was only being used in the research data centers.

In 2022/2023, the School of Engineering research networks were moved to the new EVPN/VXLAN-based Engineering Research Network (ERN).  Once that was complete, all remaining parts of campus were migrated to EVPN/VXLAN, and it finally became possible to remove the SRCF1/MOA link.

In late 2024, these VLANs were migrated to be proper SRN VLANs, "stretched" to campus using EVPN Interconnect Gateways (IGWs).  Read more about the 2024 SRN/Campus Stretch Migration.