Finding Gravitational Lenses with Machine Learning
How astronomers are using convolutional neural networks to hunt for gravitational lenses in Euclid's sky survey.
R. Pearce-Casey, B. C. Nagam, J. Wilde, V. Busillo, L. Ulivi, I. T. Andika, A. Manjón-García, L. Leuzzi, P. Matavulj, S. Serjeant, M. Walmsley, J. A. Acevedo Barroso, C. M. O'Riordan, B. Clément, C. Tortora, T. E. Collett, F. Courbin, R. Gavazzi, R. B. Metcalf, R. Cabanac, H. M. Courtois, J. Crook-Mansour, L. Delchambre, G. Despali, L. R. Ecker, A. Franco, P. Holloway, K. Jahnke, G. Mahler, L. Marchetti, A. Melo, M. Meneghetti, O. Müller, A. A. Nucita, J. Pearson, K. Rojas, C. Scarlata, S. Schuldt, D. Sluse, S. H. Suyu, M. Vaccari, S. Vegetti, A. Verma, G. Vernardos, M. Bolzonella, M. Kluge, T. Saifollahi, M. Schirmer, C. Stone, A. Paulino-Afonso, L. Bazzanini, N. B. Hogg, L. V. E. Koopmans, S. Kruk, F. Mannucci, J. M. Bromley, A. Díaz-Sánchez, H. J. Dickinson, D. M. Powell, H. Bouy, R. Laureijs, B. Altieri, A. Amara, S. Andreon, C. Baccigalupi, M. Baldi, A. Balestra, S. Bardelli, P. Battaglia, D. Bonino, E. Branchini, M. Brescia, J. Brinchmann, A. Caillat, S. Camera, V. Capobianco, C. Carbone, J. Carretero, S. Casas, M. Castellano, G. Castignani, S. Cavuoti, A. Cimatti, C. Colodro-Conde, G. Congedo, C. J. Conselice, L. Conversi, Y. Copin, M. Cropper, A. Da Silva, H. Degaudenzi, G. De Lucia, A. M. Di Giorgio, J. Dinis, F. Dubath, X. Dupac, S. Dusini, M. Farina, S. Farrens, F. Faustini, S. Ferriol, M. Frailis, E. Franceschi, S. Galeotta, K. George, W. Gillard, B. Gillis, C. Giocoli, P. Gómez-Alvarez, A. Grazian, F. Grupp, S. V. H. Haugan, W. Holmes, I. Hook, F. Hormuth, A. Hornstrup, P. Hudelot, M. Jhabvala, B. Joachimi, E. Keihänen, S. Kermiche, A. Kiessling, M. Kilbinger, B. Kubik, M. Kümmel, M. Kunz, H. Kurki-Suonio, D. Le Mignant, S. Ligori, P. B. Lilje, V. Lindholm, I. Lloro, E. Maiorano, O. Mansutti, O. Marggraf, K. Markovic, M. Martinelli, N. Martinet, F. Marulli, R. Massey, E. Medinaceli, S. Mei, M. Melchior, Y. Mellier, E. Merlin, G. Meylan, M. Moresco, L. Moscardini, R. Nakajima, C. Neissner, R. C. Nichol, S. -M. Niemi, J. W. Nightingale, C. Padilla, S. 
Paltani, F. Pasian, K. Pedersen, W. J. Percival, V. Pettorino, S. Pires, G. Polenta, M. Poncet, L. A. Popa, L. Pozzetti, F. Raison, A. Renzi, J. Rhodes, G. Riccio, E. Romelli, M. Roncarelli, E. Rossetti, R. Saglia, Z. Sakr, A. G. Sánchez, D. Sapone, B. Sartoris, P. Schneider, T. Schrabback, A. Secroun, G. Seidel, S. Serrano, C. Sirignano, G. Sirri, J. Skottfelt, L. Stanco, J. Steinwagner, P. Tallada-Crespí, I. Tereno, R. Toledo-Moreo, F. Torradeflot, I. Tutusaus, E. A. Valentijn, L. Valenziano, T. Vassallo, G. Verdoes Kleijn, A. Veropalumbo, Y. Wang, J. Weller, G. Zamorani, E. Zucca, C. Burigana, M. Calabrese, A. Mora, M. Pöntinen, V. Scottez, M. Viel, B. Margalef-Bentabol
Have you ever tried to look at something through a wobbly glass? That’s kind of what happens when light from distant galaxies gets bent by massive objects like other galaxies. This bending creates a visual effect called gravitational lensing. Instead of seeing one galaxy, you might see multiple images, arcs, or rings of that galaxy. This phenomenon is not just a neat optical trick; it can help astronomers learn about dark matter and dark energy, the mysterious stuff that makes up most of our universe.
In this article, we’ll talk about how scientists are using advanced technology to find these gravitational lenses in the sky. Imagine trying to find a handful of marbles hidden in a giant field of grass. It’s tough, right? Now, imagine trying to find hundreds of thousands of marbles among billions of other objects, a feat that would make your head spin!
The Cosmic Landscape
In the grand cosmic scheme, the universe is a bit of a jigsaw puzzle. Each piece represents different celestial objects: stars, galaxies, and, of course, those tricky gravitational lenses. The European Space Agency (ESA) has cooked up a project called Euclid to help put together this puzzle. Euclid is a space telescope that is photographing a large portion of the sky, looking for these cosmic lenses.
But let's be real: finding gravitational lenses is like looking for a needle in a needle factory. There are just too many galaxies and not enough time for astronomers to look at each image closely. So, what's the solution? Enter Machine Learning and Convolutional Neural Networks (CNNs), which are essentially like super-smart robots that can help find these cosmic tricksters.
How Do We Find These Lenses?
- Light Bending Basics: As mentioned, gravitational lensing happens when light from a distant galaxy is bent by a massive foreground galaxy. Think of the massive galaxy as a huge lens sitting in front of a distant light bulb. As light travels from the bulb, it can get distorted, creating all sorts of fascinating visual phenomena.
- The Challenge: Astronomers predict that the Euclid mission could uncover about 170,000 galaxy-galaxy lenses. That’s a lot! The problem is, spotting them manually would take forever. Imagine a bunch of astronomers staring at pictures a bit too long, losing their minds over spiral shapes that look like they could be lenses, quite a sight!
- Enter the Robots: This is where CNNs come into play. These computer programs are designed to look at images and spot patterns. They learn from a training set of images to recognize what a lens looks like. Once trained, they can go through thousands of images in no time, pointing out which ones look suspiciously lens-like.
- The Process: Scientists apply these CNNs to different images taken from the Euclid mission. They start with simulated lens images, train their robots, and then let them loose on actual images. If the CNNs can pick out the lenses without a ton of false alarms, that’s a win!
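The "light bending basics" above can be made quantitative. For the textbook case of a point-mass lens, the angular radius of the Einstein ring is given by a standard result (general lensing theory, not a formula from this paper):

```latex
\theta_E = \sqrt{\frac{4GM}{c^2}\,\frac{D_{ls}}{D_l\,D_s}}
```

where $M$ is the mass of the lensing galaxy and $D_l$, $D_s$, and $D_{ls}$ are the angular diameter distances to the lens, to the source, and between lens and source. A more massive lens, or a favourable distance geometry, produces a wider ring that is easier for a network to spot.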
The Quest for Quality Data
In the pursuit of finding these lenses, scientists needed a starting point. They took a close look at images from the Euclid Early Release Observations of the Perseus field, a patch of sky containing the Perseus galaxy cluster. By inspecting these images, they created a reference set against which to test their models.
- The Training Ground: Scientists used a variety of images while training the CNNs: images where they knew lenses were present, plus images with other features that could fool the networks. This mixture is crucial, because a network that only ever sees lenses will flag everything as a lens when it meets real images.
- Team Effort: The process involved people too! Astronomers visually inspected a lot of images to create a “truth set” of what they believed were gravitational lenses. So, it wasn’t just the robots doing the heavy lifting; humans kept a close eye too.
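The training mixture described above can be sketched in a few lines. Everything here is illustrative: the random arrays stand in for real Euclid VIS cutouts, and the 50/50 class balance is an assumption for the sketch, not the paper's actual recipe.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative stand-ins for real cutouts: small grayscale image stamps.
n_per_class = 100
stamp = (20, 20)

lenses = rng.normal(loc=1.0, scale=0.1, size=(n_per_class, *stamp))      # label 1
non_lenses = rng.normal(loc=0.0, scale=0.1, size=(n_per_class, *stamp))  # label 0

# Combine into one shuffled training set so the network sees both classes
# mixed together rather than all lenses first.
images = np.concatenate([lenses, non_lenses])
labels = np.concatenate([np.ones(n_per_class), np.zeros(n_per_class)])

order = rng.permutation(len(images))
images, labels = images[order], labels[order]

print(images.shape)   # (200, 20, 20)
print(labels.mean())  # 0.5 -> balanced classes
```

The key design point is the shuffle: a balanced, mixed stream of positives and negatives is what lets a classifier learn to tell the two classes apart instead of simply saying "lens" to everything.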
The Power of CNNs
Now, let’s take a minute to understand what makes CNNs special when it comes to this cosmic treasure hunt.
- Learning from Mistakes: CNNs learn by looking at lots of images and figuring out what they should look for. They improve over time by adjusting themselves based on whether they guessed right or wrong. It’s like a toddler learning to recognize a cat after being shown several fuzzy pictures.
- Spotting Patterns: CNNs are particularly good at picking out visual features. They can detect edges, colors, and other traits in images that might be too subtle for the human eye. Imagine trying to find Waldo in a crowded picture: the CNNs are the super-sleuths that can zoom in and highlight him!
- Finding the Right Fit: Different CNN architectures have been tested to find which ones work best. Think of it like trying out different kinds of boots for hiking: some styles just work better on rocky paths than others. The same goes for networks; some can navigate complex data more effectively than others.
Training the Machines
The process of training CNNs isn't just plug-and-play. There's a lot of fine-tuning involved, which makes it quite an art. Here’s how the process unfolds:
- Simulated Data: To train these networks, scientists used simulated images that resembled what they would expect to find. This helped the networks learn from examples where the results were already known.
- Fine-Tuning: After training with simulated data, the networks were fine-tuned with real images to nail down their performance. This is akin to practicing a dance routine before the final performance.
- Evaluating Performance: Once trained, the networks were tested against a set of real images to gauge their performance. The goal was to identify as many lens candidates as possible with the fewest false alarms. A false positive in this case could be a regular galaxy mistakenly identified as a lens, yikes!
What About the Results?
After all the training and testing, the results were promising. The CNNs could effectively spot potential lenses; however, there were some hiccups along the way.
- False Positives: Despite their training, the CNNs still stumbled at times, misidentifying regular galaxies with odd shapes as lenses. It’s like mistaking a plain sponge for a delicious-looking cake: sometimes appearances are just misleading!
- Choosing the Best Model: Different CNN models were compared, and while some performed better than others, the quest for the best lens finder is ongoing. Some CNNs were particularly good at spotting lenses but flagged a lot of non-lenses too, a tricky balance to strike!
- Human Touch: Ultimately, human oversight is still essential. Even though CNNs can quickly analyze images, a final check by astronomers helps ensure that real lenses are correctly identified.
Conclusion: Cosmic Collaboration
Finding gravitational lenses is not just a job for robots; it requires teamwork between humans and machines. With advanced CNNs, astronomers can scour through vast amounts of sky data faster than ever before.
The mission of identifying 170,000 galaxy-galaxy lenses sounds daunting. Still, with the aid of technology and a sprinkle of human expertise, it could soon become a reality. The universe is full of mysteries, and gravitational lenses are just one of the captivating secrets waiting to be decoded in the great cosmic puzzle.
So the next time you look up at the night sky, think about all those clever scientists and their robotic helpers working tirelessly to decode the universe's secrets. Keep your eyes peeled; you never know when they might spot a cosmic trickster!
Title: Euclid: Searches for strong gravitational lenses using convolutional neural nets in Early Release Observations of the Perseus field
Abstract: The Euclid Wide Survey (EWS) is predicted to find approximately 170 000 galaxy-galaxy strong lenses from its lifetime observation of 14 000 deg^2 of the sky. Detecting this many lenses by visual inspection with professional astronomers and citizen scientists alone is infeasible. Machine learning algorithms, particularly convolutional neural networks (CNNs), have been used as an automated method of detecting strong lenses, and have proven fruitful in finding galaxy-galaxy strong lens candidates. We identify the major challenge to be the automatic detection of galaxy-galaxy strong lenses while simultaneously maintaining a low false positive rate. One aim of this research is to have a quantified starting point on the achieved purity and completeness with our current version of CNN-based detection pipelines for the VIS images of EWS. We select all sources with VIS IE < 23 mag from the Euclid Early Release Observation imaging of the Perseus field. We apply a range of CNN architectures to detect strong lenses in these cutouts. All our networks perform extremely well on simulated data sets and their respective validation sets. However, when applied to real Euclid imaging, the highest lens purity is just 11%. Among all our networks, the false positives are typically identifiable by human volunteers as, for example, spiral galaxies, multiple sources, and artefacts, implying that improvements are still possible, perhaps via a second, more interpretable lens selection filtering stage. There is currently no alternative to human classification of CNN-selected lens candidates. Given the expected 10^5 lensing systems in Euclid, this implies 10^6 objects for human classification, which while very large is not in principle intractable and not without precedent.
Last Update: 2024-11-25 00:00:00
Language: English
Source URL: https://arxiv.org/abs/2411.16808
Source PDF: https://arxiv.org/pdf/2411.16808
Licence: https://creativecommons.org/licenses/by/4.0/
Changes: This summary was created with assistance from AI and may have inaccuracies. For accurate information, please refer to the original source documents linked here.
Thank you to arxiv for use of its open access interoperability.