MLGuard offers a framework for safe and reliable machine learning applications.
― 5 min read
Adv3D introduces realistic 3D adversarial examples for self-driving car systems.
― 6 min read
A new method for safe robot navigation that accounts for physical risks in complex environments.
― 7 min read
Introducing a benchmark for verifying neural networks implemented in plain C.
― 5 min read
A new method enhances autonomous vehicle (AV) models' adaptability to camera viewpoint changes.
― 6 min read
Examining the role of testing tools in ensuring safe image generation.
― 9 min read
Discussing safety measures for machine learning systems in critical areas.
― 6 min read
Exploring the need for safety and trust in AI systems.
― 6 min read
Introducing a framework that prioritizes safety in reinforcement learning.
― 5 min read
A look into the importance of model verification for safety in AI systems.
― 7 min read
New methods improve predictions for spent nuclear fuel management.
― 4 min read
Explore how Data-Driven Safety Filters maintain safety in learning-based systems.
― 5 min read
A new approach to monitor neural networks during operation to ensure reliability.
― 7 min read
New methods improve safe navigation for autonomous systems through innovative control strategies.
― 5 min read
New methods improve the safety verification of Bayesian Neural Networks against attacks.
― 5 min read
New methods improve decision-making in AI while ensuring safety and efficiency.
― 5 min read
This study assesses the safety of DNNs in handling unfamiliar driving data.
― 10 min read
A look at control barrier functions for enforcing safety in control systems.
― 5 min read
A look into safe reinforcement learning techniques and their real-world applications.
― 6 min read
A new method enhances safety for robots navigating complex environments.
― 4 min read
A new method helps robots interpret user instructions safely.
― 7 min read
Exploring safety measures to tackle overconfidence in autonomous vehicle AI systems.
― 6 min read
A new framework for improving safety and performance in hybrid robotic systems.
― 6 min read
An overview of distributed systems, synchronization, and safety methods.
― 6 min read
A community-led initiative to identify harmful prompts in text-to-image (T2I) models.
― 6 min read
A new approach to ensure safety in offline reinforcement learning.
― 7 min read
A new method aims to enhance large language models' safety and usefulness.
― 6 min read
A new framework enhances the safety of unmanned ground vehicles (UGVs) in complex environments.
― 7 min read
A study on Shielded Deep Reinforcement Learning for safe spacecraft autonomy.
― 7 min read
A new method enhances safety features in multimodal AI systems without extensive training.
― 6 min read
A guide to combining AI with robotics for safer, more efficient operations.
― 5 min read
New methods enhance safety in reinforcement learning while optimizing performance in constrained environments.
― 6 min read
The DeepKnowledge method improves the reliability of DNNs in critical applications.
― 7 min read
A new method enhances out-of-distribution (OOD) detection by focusing on gradient information.
― 6 min read
Research focuses on improving neural network verification with minimal neural activation pattern (NAP) specifications.
― 7 min read
DEXTER improves AI safety by enhancing out-of-distribution detection.
― 6 min read
A study comparing the safety performance of popular language models.
― 5 min read
Combining OOD detection and Conformal Prediction enhances model reliability.
― 6 min read
Learn how breaking down complex tasks helps robots navigate effectively.
― 5 min read
A new dataset evaluates how language models handle harmful content across cultures.
― 5 min read