Analyzing gravitational lenses 10 million times faster on Sherlock with neural networks

Sherlock user Yashar Hezaveh and his co-author Laurence Perreault Levasseur, both at the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC), a joint Stanford/SLAC institute, recently published groundbreaking research in the journal Nature. Their work uses Sherlock's computing resources and machine learning, specifically convolutional neural networks (CNNs), to quantify the image distortions caused by strong gravitational lensing.

For the first time, they have shown that this approach can accurately analyze these distortions in spacetime roughly 10 million times faster than traditional methods. They demonstrated this using both simulated images and real images from NASA's Hubble Space Telescope. The neural-network code is built on TensorFlow and ran on Sherlock's GPU resources. The same networks could be used to analyze the ultra-high-resolution, 3.2-gigapixel images from the Large Synoptic Survey Telescope (LSST).
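To make the approach concrete, here is a minimal sketch of a CNN that regresses lens-model parameters directly from an image. This is not the authors' Ensai code; the architecture, the 64x64 image size, and the choice of five output parameters (standing in for quantities like the Einstein radius or ellipticity) are illustrative assumptions, and the random arrays below are placeholders for simulated training images.

```python
# Minimal sketch: a CNN that maps a lens image to lens-model parameters.
# NOT the authors' Ensai code; sizes and parameter count are assumptions.
import numpy as np
import tensorflow as tf

def build_lens_cnn(image_size=64, n_params=5):
    """Small convolutional regressor: image -> lens-model parameters."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(image_size, image_size, 1)),
        tf.keras.layers.Conv2D(32, 5, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Conv2D(128, 3, activation="relu", padding="same"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(n_params),  # linear output for regression
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

if __name__ == "__main__":
    # Toy data: random arrays standing in for simulated lens images
    # and their known (simulated) lens parameters.
    x = np.random.rand(256, 64, 64, 1).astype("float32")
    y = np.random.rand(256, 5).astype("float32")
    model = build_lens_cnn()
    model.fit(x, y, epochs=1, batch_size=32)
    print(model.predict(x[:1]))
```

The key design point is that all the expensive work happens once, during training on simulated lenses; afterward, a single forward pass through the network replaces an iterative model fit, which is where the enormous speedup over traditional likelihood-based modeling comes from.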

Without these CNNs and computational resources like Sherlock's, analyzing a single gravitational lens would take weeks or months. The LSST, however, is poised to discover tens of thousands of lenses; with traditional modeling, analyzing them all would require many years of work and a tremendous amount of computing. With this new method, the same analyses can be done in a few minutes.

A gravitational lens is a distribution of matter, such as a cluster of galaxies, sitting between a distant light source and an observer, that bends the light from the source as it travels toward the observer. The amount of bending is proportional to the mass of the lensing object, which is how the distorted images encode the mass distribution being studied.
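For readers who want the underlying physics, the standard general-relativity result (a textbook formula, not taken from the article) for light passing a point mass M at impact parameter b is a deflection angle of

```latex
\alpha = \frac{4 G M}{c^{2} b}
```

where G is the gravitational constant and c is the speed of light. The deflection grows linearly with the lens mass, which is why the observed distortions can be inverted to weigh the lensing matter.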

Check out the write-up on this research on SLAC News.

Read the article in Nature.

Ensai, the "Lensing with Artificial Intelligence" software, is available on GitHub.