Research presents a framework to reduce bias in AI-generated text.
― 6 min read
Workshops enhance understanding of Responsible AI for industry professionals.
― 8 min read
Research reveals how friendly prompts can mislead AI systems.
― 5 min read
A robust dataset for training advanced chat-based AI systems.
― 5 min read
A framework to incorporate minority voices in annotation processes.
― 8 min read
Learn how Text Style Transfer changes text style while preserving meaning.
― 8 min read
A new method enhances AI model alignment without retraining.
― 7 min read
PUFFLE offers a solution for privacy, utility, and fairness challenges in machine learning.
― 6 min read
New model efficiently creates realistic 3D human head representations.
― 7 min read
This article discusses how LLM reasoning enhances recommendation systems, and introduces Rec-SAVER.
― 6 min read
A method to reduce bias in language models by making them forget harmful information.
― 6 min read
Combining machine learning with automated reasoning for clearer AI explanations.
― 6 min read
Understanding techniques used to bypass safety measures in language models.
― 5 min read
Exploring the use of watermarks to tackle copyright issues in language models.
― 6 min read
A new synthetic dataset enables accurate head detection and 3D modeling.
― 9 min read
A detailed study on how models memorize text and its implications.
― 6 min read
An analysis of how surveys impact AI research, values, and public engagement.
― 8 min read
This project aims to identify and reduce bias in language models across European languages.
― 4 min read
A deep dive into the importance of interpreting NLP models.
― 4 min read
Examining methods for preparing data for model training.
― 5 min read
A new approach to assess the reliability of methods explaining AI decision-making.
― 7 min read
Examining fairness issues in anomaly detection algorithms for facial images.
― 6 min read
Exploring machine unlearning and its role in enhancing generative AI safety and privacy.
― 7 min read
Exploring human biases and their impact on AI fairness.
― 7 min read
New methods detect and respond to memorization in AI-generated content.
― 8 min read
Exploring principles for ethical relationships between people and their data.
― 5 min read
New methods tackle issues of copying in image generation models.
― 5 min read
Examining biases and fairness in large language models.
― 6 min read
Exploring the role and challenges of LLMs in knowledge engineering.
― 7 min read
Study reveals gaps in representation for marginalized users of Stable Diffusion.
― 6 min read
A new model for realistic face swapping.
― 6 min read
An overview of risks and methods related to language model safety.
― 5 min read
A look at bias and fairness in computer vision technology.
― 8 min read
Larger language models show increased vulnerability to harmful behaviors learned from data.
― 6 min read
Innovative methods to enhance fairness in large language models.
― 7 min read
Examining risks of many-shot jailbreaking in Italian language models.
― 4 min read
Shuffling attacks reveal vulnerabilities in AI fairness assessments using methods like SHAP.
― 6 min read
This article examines how different contexts affect fairness testing results in AI.
― 5 min read
Discover the latest developments in text-to-image models and their impact.
― 7 min read
Introducing BMFT: a method to enhance fairness in machine learning without original training data.
― 4 min read