Large language models have a well-earned reputation for making things up. But for AI cybersecurity architect Erica Burgess, GPT hallucinations are less a bug than a threat-modeling feature. "I like to think of the hallucinations as just ideas that haven't been tested yet," she said.
First seen on govinfosecurity.com
Jump to article: www.govinfosecurity.com/interviews/red-team-brainstorming-gpts-accelerates-threat-modeling-i-5517

