Estimating Distances to Galaxies: A New Approach
New methods improve distance estimates for roughly 180 million galaxies using photometric redshifts.
Xingchen Zhou, Nan Li, Hu Zou, Yan Gong, Furen Deng, Xuelei Chen, Qian Yu, Zizhao He, Boyi Ding
In the vast universe, galaxies are like stars at a cosmic dinner party, each trying to get the attention of astronomers. Knowing where they are and how far they are from us is essential for understanding the universe. This is where photometric redshift comes into play. It’s a fancy term for estimating how far away a galaxy is based on its light. Remember, it’s like trying to figure out how far away that giant pizza slice is from your friend — only a lot more complex and cosmic!
What are Photometric Redshifts?
Photometric redshifts are a handy tool that allows scientists to estimate the distance of galaxies without needing to look at their spectra. Think of it as a quick glance at a menu rather than reading the fine print. By capturing light in different colors, astronomers can gather clues about a galaxy's distance.
In this cosmic quest, we find ourselves staring at massive amounts of data from various surveys. Instead of reading every single spectrum like an over-caffeinated bookworm, scientists devised a method to estimate distances from telescope images taken in different color bands.
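To make the "different colors" idea concrete, here is a tiny Python sketch with made-up magnitudes (not survey measurements) that turns brightnesses in the five bands used in this work (g, r, z, W1 and W2) into colors, the coarse menu-glance information that photometric redshift methods start from.

```python
# Made-up magnitudes for one galaxy in the five bands used in this work;
# these numbers are illustrative, not survey measurements.
mags = {"g": 21.8, "r": 20.9, "z": 20.1, "W1": 19.6, "W2": 19.4}

# Adjacent-band colors (differences of magnitudes) are the kind of coarse
# information a photometric redshift method starts from.
bands = ["g", "r", "z", "W1", "W2"]
colors = {f"{a}-{b}": round(mags[a] - mags[b], 2) for a, b in zip(bands, bands[1:])}
print(colors)  # {'g-r': 0.9, 'r-z': 0.8, 'z-W1': 0.5, 'W1-W2': 0.2}
```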
The New Study
Scientists recently gathered a treasure trove of data, examining roughly 180 million galaxies. They used advanced techniques to estimate photometric redshifts from images captured in three optical bands (g, r and z) and two near-infrared bands (W1 and W2). Just imagine taking a picture of a crowded pizza joint with different cameras — some for close-ups and some for wide shots, to get the most details!
To help with this, they used a computer model known as a Bayesian Neural Network (BNN). This brainy model learns from data and can make predictions, much like how your buddy tries to guess which toppings you’ll choose the next time you order pizza, based on past experiences.
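For the curious, below is a minimal Python sketch of the kind of model involved. It is not the authors' architecture: it uses a small Keras convolutional network with Monte Carlo dropout as a stand-in for full Bayesian weight inference, taking five-band image cutouts and returning a redshift estimate with an uncertainty. The cutout size, layer widths and dropout rate are illustrative assumptions.

```python
# Minimal sketch, not the authors' exact model: a small Keras CNN that maps
# five-band galaxy cutouts (g, r, z, W1, W2) to a photometric redshift, with
# Monte Carlo dropout standing in for full Bayesian weight inference.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

N_BANDS = 5    # g, r, z + W1, W2
CUTOUT = 64    # assumed cutout size in pixels (illustrative)

def build_photoz_cnn():
    inputs = tf.keras.Input(shape=(CUTOUT, CUTOUT, N_BANDS))
    x = layers.Conv2D(32, 3, activation="relu")(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)
    x = layers.Dropout(0.2)(x)
    x = layers.Dense(128, activation="relu")(x)
    x = layers.Dropout(0.2)(x)
    z_out = layers.Dense(1)(x)   # predicted redshift
    return tf.keras.Model(inputs, z_out)

def predict_with_uncertainty(model, images, n_samples=50):
    """Repeated stochastic forward passes (training=True keeps dropout on),
    summarized as a mean photo-z and a standard deviation per galaxy."""
    draws = np.stack([model(images, training=True).numpy().ravel()
                      for _ in range(n_samples)])
    return draws.mean(axis=0), draws.std(axis=0)
```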
Grouping the Galaxies
The researchers didn’t just throw all this data into one big cosmic blender. They sorted galaxies into groups based on certain characteristics. It’s like organizing your DVD collection — action movies here, comedies there, and documentaries in a special corner.
The groups included:
- Bright Galaxy Sample (BGS): These are the well-known, nearby galaxies that are easy to spot.
- Luminous Red Galaxies (LRG): These are the heavyweights, massive older galaxies that formed most of their stars long ago.
- Emission Line Galaxies (ELG): These beauties shine brightly in certain colors, like a neon sign.
- Non-targets (NON): These are the other galaxies that don’t fit neatly into the first three categories.
By analyzing each group separately, researchers could get better estimates of how far away these galaxies are. It turns out that treating them like unique individuals rather than one chaotic crowd made a significant difference in their measurements.
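As a rough illustration of the bookkeeping (a sketch, not the paper's code), the grouping step can be pictured as splitting the catalogue on a label derived from DESI's target selection; the 'target_class' key and fallback logic below are hypothetical stand-ins for the actual selection criteria.

```python
# Illustrative bookkeeping only: split a catalogue into BGS / LRG / ELG / NON
# using a hypothetical 'target_class' key. The real grouping follows DESI's
# target-selection criteria, which are not reproduced here.
from collections import defaultdict

def split_by_group(catalogue):
    """catalogue: iterable of dicts, each describing one source."""
    groups = defaultdict(list)
    for source in catalogue:
        label = source.get("target_class", "NON")
        if label not in ("BGS", "LRG", "ELG"):
            label = "NON"   # anything else falls into the catch-all group
        groups[label].append(source)
    return groups

# Each group then gets its own training set and its own model.
demo = [{"target_class": "BGS"}, {"target_class": "ELG"}, {"target_class": "QSO"}]
print({name: len(members) for name, members in split_by_group(demo).items()})
# {'BGS': 1, 'ELG': 1, 'NON': 1}
```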
Training the Model
To train the BNN, scientists needed high-quality data. They gathered images and redshift measurements from a significant source — the DESI Early Data Release. Think of this as feeding your pet a gourmet meal to ensure they grow strong and healthy.
The training process involved teaching the BNN to recognize patterns in the images and relate them to known distances. It’s similar to how someone learns to differentiate between different types of pizzas based on their toppings. The better the model was trained, the more accurate its future predictions would be.
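The sketch below shows what such a training step might look like in Keras, reusing build_photoz_cnn from the earlier snippet. The random arrays stand in for real DESI EDR image cutouts and spectroscopic redshifts, and the loss, optimizer and epoch count are placeholders rather than the paper's settings.

```python
# Sketch of the training step, assuming build_photoz_cnn from the earlier
# snippet. Random arrays stand in for real DESI EDR image cutouts and their
# spectroscopic redshifts; optimizer, loss and epochs are placeholders.
import numpy as np
import tensorflow as tf

images = np.random.rand(1000, 64, 64, 5).astype("float32")         # fake cutouts
z_spec = np.random.uniform(0.0, 1.5, size=1000).astype("float32")  # fake labels

model = build_photoz_cnn()
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")
model.fit(images, z_spec, validation_split=0.2, epochs=10, batch_size=64)
```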
Results and Findings
After the training phase, researchers took a closer look at how well the BNN performed. The results were promising! For the BGS and LRG groups, the models made very accurate predictions, with outlier rates of only 0.14% and 0.45% respectively and correspondingly low uncertainties. However, the ELG group was far more challenging: over 15% of its predictions counted as outliers. It's a bit like trying to guess the age of a pizza from its smell alone; sometimes, it's really tough!
The study showed that using individual groups for estimating distances improved results significantly. It’s a bit like asking a foodie to guess the flavors of a dish rather than a random person who has no clue.
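For reference, the headline numbers quoted in the abstract are an outlier fraction, counting sources with |Δz| > 0.15(1 + z_true), and the normalized scatter σ_NMAD. The toy sketch below computes both; note that σ_NMAD conventions vary slightly between papers, so this is one common form rather than the paper's exact definition.

```python
# Toy computation of the two headline metrics: the outlier fraction
# (|dz| > 0.15 * (1 + z_true)) and sigma_NMAD. This uses one common
# sigma_NMAD convention; the paper's exact definition may differ slightly.
import numpy as np

def photoz_metrics(z_photo, z_true):
    z_photo, z_true = np.asarray(z_photo), np.asarray(z_true)
    dz = z_photo - z_true
    scaled = dz / (1.0 + z_true)
    outlier_frac = np.mean(np.abs(dz) > 0.15 * (1.0 + z_true))
    sigma_nmad = 1.4826 * np.median(np.abs(scaled - np.median(scaled)))
    return outlier_frac, sigma_nmad

# A tight toy sample: scatter of ~2% in (1 + z) should give few outliers.
rng = np.random.default_rng(0)
z_true = np.linspace(0.1, 1.0, 500)
z_photo = z_true + rng.normal(0.0, 0.02 * (1 + z_true))
print(photoz_metrics(z_photo, z_true))
```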
Unveiling the Mystery of ELGs
Now, let's talk about those elusive Emission Line Galaxies. These galaxies were the underperformers in the study. Despite their bright appearances, estimating their distances was like trying to find Waldo in a sea of red-and-white stripes. Researchers noticed that the ELGs didn't fit neatly into the established patterns due to their unique features.
Since these galaxies often lacked clear markers for their distance, the results were inconsistent. This finding was not a complete shock. It highlighted the need for different approaches when working with unique groups of objects.
The Importance of Morphology
The study also looked at the shapes of these galaxies, using what’s known as morphological classification. It’s like assessing the styles of different pizzas — thin crust, deep dish, or stuffed. The researchers noted that galaxies with more defined shapes tended to yield better results in redshift estimations.
In simpler terms, the easier it was to recognize the galaxy's structure, the more accurate the distance estimation became. This is because the convolutional neural networks could better interpret the details, just like how you can guess the pizza type just by looking at its outline.
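One way to picture that check (a sketch, not the paper's analysis) is to group the test galaxies by a morphological label and compute the same metrics per class, reusing photoz_metrics from the earlier snippet. The 'DEV', 'EXP', 'REX' and 'PSF' labels are examples of Tractor model types used in the Legacy Surveys, and the column name is hypothetical.

```python
# Sketch only: group test galaxies by a (hypothetical) morphological-type
# column and reuse photoz_metrics from the earlier snippet per class.
# 'DEV', 'EXP', 'REX' and 'PSF' are examples of Tractor model types from the
# Legacy Surveys, used here purely as illustrative labels.
import numpy as np

def metrics_by_morphology(z_photo, z_true, morph_type):
    z_photo, z_true = np.asarray(z_photo), np.asarray(z_true)
    morph_type = np.asarray(morph_type)
    results = {}
    for label in np.unique(morph_type):
        mask = morph_type == label
        results[label] = photoz_metrics(z_photo[mask], z_true[mask])
    return results
```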
Future Improvements
As is the case with any research, this study opened up new questions and opportunities for improvement. With more data from the ongoing surveys, the methods and results will likely get even better. Like putting more toppings on your pizza — more is definitely better!
Researchers plan to refine their methods, update their catalogues, and include more galaxies from upcoming data releases. The goal is to create a detailed treasure map of the cosmos, helping astronomers navigate the vast universe better.
Conclusion
This study contributes to our understanding of the universe by providing a robust catalogue of photometric redshifts for roughly 180 million galaxies. The researchers demonstrated that using advanced computer models and categorizing galaxies based on their characteristics significantly enhances the accuracy of distance estimations.
As we continue studying the cosmos, expect improvements in methodologies and results, making our understanding deeper, like adding more cheese to that perfect pizza. The next time you gaze up at the stars, remember the many galaxies out there, each with its own story, waiting to be uncovered.
In the vast cosmic pizza parlor, we still have much to explore. Bon appétit!
Original Source
Title: Estimating Photometric Redshifts for Galaxies from the DESI Legacy Imaging Surveys with Bayesian Neural Networks Trained by DESI EDR
Abstract: We present a catalogue of photometric redshifts for galaxies from DESI Legacy Imaging Surveys, which includes $\sim0.18$ billion sources covering 14,000 ${\rm deg}^2$. The photometric redshifts, along with their uncertainties, are estimated through galaxy images in three optical bands ($g$, $r$ and $z$) from DESI and two near-infrared bands ($W1$ and $W2$) from WISE using a Bayesian Neural Network (BNN). The training of BNN is performed by above images and their corresponding spectroscopic redshifts given in DESI Early Data Release (EDR). Our results show that categorizing galaxies into individual groups based on their inherent characteristics and estimating their photo-$z$s within their group separately can effectively improve the performance. Specifically, the galaxies are categorized into four distinct groups based on DESI's target selection criteria: Bright Galaxy Sample (BGS), Luminous Red Galaxies (LRG), Emission Line Galaxies (ELG) and a group comprising the remaining sources, referred to as NON. As measured by outliers of $|\Delta z| > 0.15 (1 + z_{\rm true})$, accuracy $\sigma_{\rm NMAD}$ and mean uncertainty $\overline{E}$ for BNN, we achieve low outlier percentage, high accuracy and low uncertainty: 0.14%, 0.018 and 0.0212 for BGS and 0.45%, 0.026 and 0.0293 for LRG respectively, surpassing results without categorization. However, the photo-$z$s for ELG cannot be reliably estimated, showing result of $>15\%$, $\sim0.1$ and $\sim0.1$ irrespective of training strategy. On the other hand, NON sources can reach 1.9%, 0.039 and 0.0445 when a magnitude cut of $z
Authors: Xingchen Zhou, Nan Li, Hu Zou, Yan Gong, Furen Deng, Xuelei Chen, Qian Yu, Zizhao He, Boyi Ding
Last Update: 2024-12-03
Language: English
Source URL: https://arxiv.org/abs/2412.02390
Source PDF: https://arxiv.org/pdf/2412.02390
Licence: https://creativecommons.org/publicdomain/zero/1.0/
Changes: This summary was created with assistance from AI and may have inaccuracies. For accurate information, please refer to the original source documents linked here.
Thank you to arxiv for use of its open access interoperability.