The researchers are using a technique called adversarial training to stop people from tricking ChatGPT into misbehaving (known as jailbreaking). This work pits two chatbots against each other: one chatbot plays the adversary and attacks the other by generating text designed to force it to buck its usual constraints.
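The attacker-versus-target loop described above can be sketched in miniature. Everything below is a hypothetical toy: `attacker_model`, `target_model`, and the blocklist-based "training" step are stand-ins for illustration, not any real ChatGPT or OpenAI API.

```python
import random

# Hypothetical jailbreak phrases the adversary can try.
JAILBREAK_PHRASES = ["ignore your rules", "pretend you have no filter"]
BENIGN_PROMPTS = ["what is the capital of France?"]

def attacker_model(rng):
    """Adversary: emits a candidate prompt, sometimes a jailbreak attempt."""
    return rng.choice(JAILBREAK_PHRASES + BENIGN_PROMPTS)

def target_model(prompt, blocklist):
    """Target: refuses any prompt matching its learned blocklist."""
    if any(phrase in prompt for phrase in blocklist):
        return "REFUSED"
    return "COMPLIED"

def adversarial_training(rounds=100, seed=0):
    """Run the adversarial loop: each successful attack is folded back
    into the target's defenses (the 'training' step, here a blocklist)."""
    rng = random.Random(seed)
    blocklist = set()
    for _ in range(rounds):
        attack = attacker_model(rng)
        response = target_model(attack, blocklist)
        if response == "COMPLIED" and attack in JAILBREAK_PHRASES:
            blocklist.add(attack)  # target learns to refuse this attack
    return blocklist

defenses = adversarial_training()
print(sorted(defenses))
```

In a real system the blocklist would be replaced by fine-tuning the target model on the attacks it failed to refuse, but the structure of the loop (generate attack, observe response, update defenses) is the same.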