By using brief, plain-sounding phrases in their prompts that nudge the app into routing requests to older models, a user can downgrade ChatGPT for malicious ends.
First seen on darkreading.com
Jump to article: www.darkreading.com/application-security/chatgpt-downgrade-attack-gpt-5-security

