Developing machines that respond to human emotions for improved human-computer interaction.
― 6 min read
New approaches to developing embodied agents for mental health support using context-sensitive smiles.
― 5 min read
Learn how data-to-text generation makes complex information easier to understand.
― 7 min read
A new system reduces false activations in myoelectric control using simple gestures.
― 5 min read
Researching how AI can understand human thoughts and feelings.
― 7 min read
An analysis of the qualities and challenges of language model explanations.
― 5 min read
This article examines bias in language models and their emotional alignment with different social groups.
― 6 min read
Research reveals significant biases in human and LLM evaluations of responses.
― 6 min read
A new method enhances data gathering for better language model alignment.
― 6 min read
A study reveals gaps in LLMs' understanding of logic rules compared to humans.
― 8 min read
A look at the potential and challenges of neuromorphic sensors for face analysis.
― 8 min read
A new method improves alignment of LLMs with minimal human feedback.
― 6 min read
A new benchmark, EmoBench, tests emotional intelligence in language models.
― 11 min read
The AEA dataset provides insights into daily activities for improving AI and AR technologies.
― 7 min read
Introducing Video ReCap, a system for creating detailed captions for long videos.
― 6 min read
New dataset aims to enhance machine understanding of touch through vision and language.
― 4 min read
Discover how speech emotion recognition (SER) enhances human-machine interactions through emotion detection.
― 5 min read
A new framework aims to improve AI's empathetic responses in conversations.
― 5 min read
A method that helps AI models adapt while retaining past knowledge.
― 5 min read
Introducing BeTAIL, a new method for improving robot racing through imitation learning.
― 6 min read
A new method enhances how language models select and use tools effectively.
― 5 min read
BEE-NET enhances emotion recognition by considering body language and environmental context.
― 7 min read
Coco-Nut offers diverse Japanese voice samples for advanced text-to-speech applications.
― 10 min read
Improving precision in hand-object tracking with GeneOH Diffusion.
― 6 min read
New methods promise better AI model performance through simplified reinforcement learning.
― 5 min read
Research reveals how gestures can enhance safety for pedestrians and self-driving cars.
― 6 min read
An overview of ethical processes in the Cat Royale research project.
― 6 min read
Examining synchronization, communication, and sensory cues in human-robot collaboration.
― 5 min read
AffectToolbox simplifies emotion analysis for researchers and users alike.
― 6 min read
New methods enhance how agents learn to cooperate and communicate effectively.
― 6 min read
Analyzing the effects of reasoning methods on large language models' performance.
― 5 min read
An adaptive agent improves teamwork in Codenames using multiple language models.
― 5 min read
Combining RGB and Depth data improves action recognition in robotic systems.
― 6 min read
A system that enhances AI-generated navigation instructions by detecting errors and offering corrections.
― 6 min read
A study comparing GPT-4 and crowdsourcing in data labeling tasks.
― 6 min read
New benchmark assesses LLMs' skills in interacting with multiple agents.
― 12 min read
Research reveals broader ways to deliver directions using spatial knowledge.
― 7 min read
This survey reviews recent developments in multi-turn dialogue systems leveraging large language models.
― 8 min read
Study reveals challenges and progress in chatbot memory during lengthy dialogues.
― 6 min read
New dataset enhances computer agents' ability to perform various tasks.
― 6 min read