Knowledge graphs 101: A bit of background: LLMs use a technique called Retrieval-Augmented Generation (RAG) to search for information based on a user query and provide the results as additional reference for the AI system’s answer generation. In 2024, Microsoft introduced GraphRAG to help LLMs answer queries needing information beyond the data on which they have been trained. GraphRAG uses LLM-generated knowledge graphs to improve performance and lower the odds of hallucinations in answers when performing discovery on private datasets such as an enterprise’s proprietary research, business documents, or communications.

The proprietary knowledge graphs within GraphRAG systems make them “a prime target for IP theft,” just like any other proprietary data, says the research paper. “An attacker might steal the KG through external cyber intrusions or by leveraging malicious insiders.” Once an attacker has successfully stolen a KG, they can deploy it in a private GraphRAG system to replicate the originating system’s powerful capabilities while avoiding costly investments, the research paper notes.

Unfortunately, the low-latency requirements of interactive GraphRAG make strong cryptographic solutions, such as homomorphic encryption of a KG, impractical. “Fully encrypting the text and embeddings would require decrypting large portions of the graph for every query,” the researchers note. “This process introduces prohibitive computational overhead and latency, making it unsuitable for real-world use.” AURA, they say, addresses these issues, making stolen KGs useless to attackers.
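To make the idea concrete, here is a minimal, illustrative sketch of graph-based retrieval feeding an LLM prompt. It is not Microsoft’s GraphRAG implementation; the toy triples, the `retrieve_subgraph` function, and `build_prompt` are all assumptions made purely to show the pattern of retrieving a relevant subgraph and using it as grounding context.

```python
# Illustrative sketch only: a toy knowledge graph plus a naive keyword-seeded
# subgraph retrieval step. Entity names and helper functions are hypothetical.

from dataclasses import dataclass


@dataclass(frozen=True)
class Triple:
    subject: str
    relation: str
    obj: str


# Toy knowledge graph as (subject, relation, object) triples.
KG = [
    Triple("Project Falcon", "owned_by", "Acme Corp"),
    Triple("Project Falcon", "uses", "proprietary alloy X-7"),
    Triple("proprietary alloy X-7", "developed_in", "2023"),
    Triple("Acme Corp", "headquartered_in", "Berlin"),
]


def retrieve_subgraph(query: str, kg: list[Triple], hops: int = 1) -> list[Triple]:
    """Select triples whose subjects match query terms, expanded by `hops` neighbors."""
    terms = {w.lower() for w in query.split()}
    frontier = {t.subject for t in kg if any(w in t.subject.lower() for w in terms)}
    selected: list[Triple] = []
    for _ in range(hops + 1):
        next_frontier = set()
        for triple in kg:
            if triple.subject in frontier and triple not in selected:
                selected.append(triple)
                next_frontier.add(triple.obj)
        frontier = next_frontier
    return selected


def build_prompt(query: str, facts: list[Triple]) -> str:
    """Format the retrieved subgraph as grounding context for the LLM's answer."""
    context = "\n".join(f"- {t.subject} {t.relation} {t.obj}" for t in facts)
    return f"Answer using only these facts:\n{context}\n\nQuestion: {query}"


if __name__ == "__main__":
    question = "What does Project Falcon use?"
    print(build_prompt(question, retrieve_subgraph(question, KG)))
```

The sketch also hints at why the stolen-KG threat matters: the triples themselves carry the proprietary value, so anyone who exfiltrates them can plug the same graph into their own retrieval pipeline and reproduce the answers without recreating the underlying data.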
AI is moving faster than AI security: As the use of AI spreads, CSOs have to remember that artificial intelligence, and everything needed to make it work, also makes it much harder to recover from bad data being put into a system, Steinberg noted.

“AI is progressing far faster than the security for AI,” Steinberg warned. “For now, many AI systems are being protected in similar manners to the ways we protected non-AI systems. That doesn’t yield the same level of protection, because if something goes wrong, it’s much harder to know if something bad has happened, and it’s harder to get rid of the implications of an attack.”

The industry is trying to address these issues, as the researchers observe in their paper. One useful reference, they note, is the US National Institute of Standards and Technology (NIST) AI Risk Management Framework, which emphasizes the need for robust data security and resilience, including the importance of developing effective KG protection.
First seen on csoonline.com
Jump to article: www.csoonline.com/article/4113463/automated-data-poisoning-proposed-as-a-solution-for-ai-theft-threat.html

