This article discusses the privacy concerns of using GPT models in cloud settings.
― 5 min read
Examination of jailbreak attacks shows weaknesses in language model safety.
― 5 min read
Custom LLMs raise safety concerns, particularly with instruction backdoor attacks.
― 5 min read
Diverse samples enhance the effectiveness of machine learning model theft.
― 6 min read
A new framework assesses the effectiveness of image safety classifiers against harmful content.
― 10 min read
Inductive GNNs face privacy threats from link stealing attacks.
― 6 min read
A new framework controls in-context learning to prevent misuse in AI models.
― 8 min read
Examining the threats posed by autonomous language model agents and their weaknesses.
― 6 min read
Examining the effects of updates on safety, bias, and authenticity in image generation.
― 6 min read
Examining why important data points attract greater security risks in machine learning.
― 5 min read
Examining how SSL models memorize data points and the implications of this memorization.
― 7 min read