
Researchers at Carnegie Mellon University say large language models, including ChatGPT, can easily be tricked into bad behavior. In February, Fast Company was able to jailbreak ChatGPT by following a set of rules posted on Reddit. The rules convinced the bot that it was operating …
By Clint Rainey