This article examines how different contexts affect fairness testing results in AI.
― 5 min read
Discover the latest developments in text-to-image models and their impact.
― 7 min read
Introducing BMFT: a method to enhance fairness in machine learning without access to the original training data.
― 4 min read
SAGE-RT creates synthetic data to improve language model safety assessments.
― 5 min read
A study on bias detection in NLP models and its implications.
― 6 min read
This study analyzes personality traits of a language model across nine languages.
― 5 min read
MIA-Tuner aims to address privacy issues in LLM training data.
― 5 min read
This study examines how biases affect language model responses and proposes solutions.
― 7 min read
Techniques to safeguard personal images from misuse by generative models.
― 6 min read
Exploring how external inputs shape responses of large language models.
― 6 min read
REFINE-LM uses reinforcement learning to mitigate bias in language models.
― 4 min read
A new method improves tracking of privacy leaks in large language models.
― 7 min read
A critical look at AI's impact on science and understanding.
― 5 min read
How labeling technology as "AI" affects user acceptance and perception in vehicles.
― 4 min read
Examining the impact of generative AI on knowledge and marginalized communities.
― 6 min read
This article reviews gender bias evaluation in text-to-image generation.
― 6 min read
A look at reducing bias in AI-generated images.
― 7 min read
An overview of advancements in speaker recognition through the VoxCeleb Challenge.
― 4 min read
Discover how bias in machine learning affects public health outcomes and fairness.
― 7 min read
Exploring new methods for fair decision-making in machine learning systems.
― 4 min read
Examining biases in outlier detection models to promote fairness.
― 6 min read
Combining AI and human annotations improves data accuracy and efficiency in research.
― 6 min read
ToxDet is a new method for identifying harmful outputs in language models.
― 5 min read
Examining how acceptable use policies (AUPs) shape the foundation model landscape.
― 10 min read
New methods are being developed to detect and combat deepfakes.
― 5 min read
Introducing fair best-arm identification: adding fairness constraints to bandit-based decision-making.
― 5 min read
Exploring the effectiveness of blendfake data in deepfake detection methods.
― 8 min read
How AI can improve rather than replace software engineering.
― 5 min read
This article examines how combining real and synthetic images boosts face recognition accuracy and fairness.
― 5 min read
This study assesses ChatGPT's ability to simulate demographic and attitudinal data.
― 4 min read
A resource for studying the impact and trends of political deepfakes.
― 5 min read
This article discusses methods to reduce bias in text safety classifiers using ensemble models.
― 5 min read
A new method aims to reduce bias in machine learning models and improve fairness.
― 5 min read
This study analyzes the fairness of diffusion-based recommendation methods compared to traditional models.
― 4 min read
This article discusses constrained diffusion models and their role in reducing bias.
― 6 min read
A new method for predicting personality traits from online posts using filtered data.
― 7 min read
A new dataset enhances multilingual speech technology in India.
― 5 min read
This study explores how synthetic data can tackle class and group imbalances.
― 7 min read
Exposing the manipulation risks of influence functions in machine learning.
― 5 min read
Discover a new way to design adaptive programming languages.
― 5 min read