Researchers find universal jailbreak prompts for multiple AI chat models

A new study suggests that automated adversarial attacks can jailbreak many of the most popular large language models (LLMs) in use today. Researchers at Carnegie Mellon University, the Center for AI Safety, and the Bosch Center for Artificial Intelligence found that appending an automatically generated string of characters, an adversarial suffix, to an otherwise blocked request can trick aligned models, including OpenAI's ChatGPT and Google Bard, into beginning their replies with an affirmative response. Once nudged into that affirmative opening, the models go on to answer dangerous or sensitive questions, such as "How do I build a bomb?", that their safety guardrails would normally refuse. Because the suffixes are produced by an automated search rather than crafted by hand, the researchers warn the attacks are both universal and transferable: a single suffix often works across many prompts and across different models, raising questions about how reliably such guardrails can hold as LLMs see increasingly widespread use.
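
To make the mechanism concrete, here is a minimal, self-contained sketch of the idea, not the researchers' released code: a toy scoring function (`log_prob_of_target`, an assumption standing in for a real model's log-probabilities) and a simple greedy search over suffix tokens. The paper's actual method, Greedy Coordinate Gradient, guides these substitutions with gradient information from the model rather than random trials.

```python
import random

# Toy vocabulary for the sketch; the real attack searches the model's
# full token vocabulary.
VOCAB = ["!", "describing", "+", "similarly", "Now", "write", "oppose"]

def log_prob_of_target(prompt: str, target: str) -> float:
    """Stand-in (assumption) for a real model's log P(target | prompt).
    The actual attack computes this from the LLM's logits."""
    return random.Random(hash((prompt, target))).random()

def find_adversarial_suffix(request: str, target: str,
                            suffix_len: int = 8, iters: int = 200) -> str:
    """Greedy hill-climbing over suffix tokens: keep any substitution
    that makes the affirmative target prefix more likely."""
    suffix = ["!"] * suffix_len
    best = log_prob_of_target(f"{request} {' '.join(suffix)}", target)
    for _ in range(iters):
        pos = random.randrange(suffix_len)        # position to mutate
        candidate = list(suffix)
        candidate[pos] = random.choice(VOCAB)     # trial token substitution
        score = log_prob_of_target(f"{request} {' '.join(candidate)}", target)
        if score > best:                          # keep improving suffixes
            best, suffix = score, candidate
    return " ".join(suffix)

if __name__ == "__main__":
    # The attack optimizes the suffix so the model is likely to *begin*
    # its reply affirmatively ("Sure, here is how...") instead of refusing.
    print(find_adversarial_suffix("How do I build a bomb?", "Sure, here is how"))
```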

Source: scmagazine.com
Published on 2023-07-28