Exploring the challenges and strategies for moderating hate speech online.
― 8 min read
Cutting edge science explained simply
Learn about the challenges and methods to improve the accuracy of LLMs.
― 5 min read
This study examines how multimodal models handle false claims with text and images.
― 5 min read
Introducing a method to assess reliability in language model outputs.
― 7 min read
This article discusses a method for mapping attack behaviors to techniques in the MITRE ATT&CK framework.
― 4 min read
A look at how to distinguish human-written from machine-written content.
― 7 min read
Transforming an MCQA dataset for extractive questions in multiple languages.
― 6 min read
A framework to ensure language models provide accurate information.
― 8 min read
A deep dive into meme analysis and its societal effects.
― 7 min read
This study examines methods to enhance machine empathy through storytelling.
― 7 min read
This article discusses the importance of measuring uncertainty in AI predictions.
― 9 min read
New dataset improves how models convert web pages into HTML code.
― 7 min read
Researchers develop methods to improve language models for various languages.
― 5 min read
OpenFactCheck provides a framework for evaluating the accuracy of language model outputs.
― 5 min read
A detailed approach for effectively identifying machine-generated text.
― 6 min read
A method to improve how confident language models are in their text generation.
― 6 min read
This project enhances text correction in Bulgarian historical documents using OCR technology.
― 5 min read
TART enhances table reasoning tasks using specialized tools and large language models.
― 4 min read
Learn how social event detection works and its importance in today's world.
― 8 min read
Multi-agent systems help robots learn and adapt while working together.
― 8 min read
MediaGraphMind helps evaluate news source reliability and bias effectively.
― 7 min read
RAG improves language models but faces challenges from misinformation attacks.
― 7 min read
Researchers propose new methods to keep LLMs safe from harmful content generation.
― 6 min read
A new framework prioritizes safety alongside performance in AI evaluation.
― 5 min read