Experts Say AI Model Makers Are Prioritizing Profit Over Security. Hackers don’t need the deep pockets of a nation-state to break GPT-5, OpenAI’s new flagship model. Analysis from artificial intelligence security researchers finds that a few well-placed hyphens are enough to trick the large language model into bypassing its safeguards against adversarial prompts.
First seen on govinfosecurity.com
Jump to article: www.govinfosecurity.com/inside-jailbreak-methods-beating-gpt-5-safety-guardrails-a-29245

