This article explores whether an AI language model (Grok 3, built by xAI) could be induced to build a tool with potentially illegal applications despite its ethical guidelines, and how contextual shifts could expose contradictions in its responses.
First seen on securityboulevard.com
Jump to article: securityboulevard.com/2025/05/the-trojan-sysadmin-how-i-got-an-ai-to-build-a-wolf-in-sheeps-clothing/

