Research reveals racial and gender bias in AI tools used for hiring.
― 6 min read
Investigating how LLMs interact with human thoughts and feelings.
― 7 min read
This article discusses methods to make language model outputs fairer.
― 6 min read
A method to prevent misuse of text-to-image models while maintaining their lawful applications.
― 6 min read
Research shows vulnerabilities in federated learning's approach to text privacy.
― 5 min read
Improving how models understand everyday human-object interactions.
― 5 min read
A look at the importance of safety in AI systems and user interactions.
― 9 min read
A novel method improves understanding of language model outputs.
― 4 min read
Exploring safety, reliability, and ethical issues in language models.
― 7 min read
Examining AI's struggle with honesty and its impact on user trust.
― 7 min read
This article examines how AI can impact marginalized groups and ways to improve outcomes.
― 7 min read
Explore the privacy challenges posed by inference attacks in machine learning models.
― 7 min read
Exploring federated learning and unlearning for user privacy and model integrity.
― 4 min read
Exploring the complexities of aligning AI systems with diverse human interests.
― 6 min read
This article examines how diffusion models improve image generation and manipulation tasks.
― 7 min read
Examining how fairness in machine learning can evolve across decisions and time.
― 5 min read
This article discusses the challenges of machine unlearning and a new approach to balance privacy and accuracy.
― 5 min read
This article explores techniques and challenges in detecting deepfake media.
― 5 min read
Study reveals biases in AI hiring recommendations based on candidate names.
― 6 min read
How user traits influence language models' responses and the safety of those responses.
― 6 min read
Examining how cultural bias affects AI image understanding.
― 8 min read
A method to maintain privacy while sharing urban traffic statistics.
― 5 min read
This article explores strategies for protecting individual privacy in machine learning.
― 7 min read
Learn how differential privacy protects individual data while allowing useful analysis.
― 5 min read
A fresh approach to AI combines language models with symbolic programs for better interpretability.
― 8 min read
Learn best practices for developing AI models responsibly and effectively.
― 5 min read
A look at AI risk categories and the need for unified policies.
― 6 min read
New methods reveal serious privacy threats from location data sharing.
― 6 min read
This article examines whether large language models possess beliefs and intentions.
― 5 min read
Discussing long-term fairness in technology and its social impact.
― 7 min read
AAggFF introduces adaptive strategies for equitable model performance in Federated Learning.
― 6 min read
Tools like OxonFair help ensure fairness in AI decision-making.
― 6 min read
Research shows how easily safety features can be removed from Llama 3 models.
― 5 min read
Research shows how prompt adjustments can enhance AI responses to diverse cultures.
― 5 min read
This article examines risks linked to LLMs and proposes ways to enhance safety.
― 4 min read
A new framework aims to detect and fix errors in LVLM outputs.
― 7 min read
This study examines privacy issues and protection methods for AI classifiers.
― 5 min read
This study evaluates how well AI models understand different cultures.
― 4 min read
A new defense method to enhance safety in text-to-image diffusion models.
― 5 min read
NFARD offers innovative methods to protect deep learning model copyrights.
― 6 min read