Security controls and practices designed to protect against risks associated with generative artificial intelligence tools, including data leakage, prompt injection, and model abuse.
Generative AI security addresses insider risks unique to these tools, including inadvertent data disclosure through prompts, intentional data exfiltration via AI services, and intellectual property theft through code generation tools. Organizations should implement AI usage policies, prompt filtering, data classification awareness training, and monitoring of AI service interactions. The 2023 Samsung incident, in which employees reportedly leaked proprietary source code by pasting it into ChatGPT, highlights the need for proactive GenAI security measures.
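To illustrate the prompt-filtering control mentioned above, here is a minimal sketch of an outbound prompt screen that checks text against sensitive-data patterns before it reaches an external AI service. The pattern names, the `screen_prompt` and `submit_to_ai_service` functions, and the sample patterns are all hypothetical; a real deployment would draw on an organization-maintained ruleset tied to its data classification scheme rather than a few hard-coded regexes.

```python
import re

# Illustrative detection patterns (assumptions, not a production ruleset).
# A real filter would load organization-specific rules and classification labels.
SENSITIVE_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "classification_marker": re.compile(r"\b(?:CONFIDENTIAL|INTERNAL ONLY)\b",
                                        re.IGNORECASE),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def submit_to_ai_service(prompt: str) -> None:
    """Block and log flagged prompts; forward clean ones (hypothetical flow)."""
    findings = screen_prompt(prompt)
    if findings:
        # Blocking here prevents inadvertent disclosure; logging supports
        # the monitoring of AI service interactions described above.
        print(f"Prompt blocked; matched: {', '.join(findings)}")
        return
    print("Prompt forwarded to AI service.")  # placeholder for the real API call

if __name__ == "__main__":
    submit_to_ai_service("Summarize this INTERNAL ONLY design doc: ...")
    submit_to_ai_service("What is prompt injection?")
```

Pattern matching like this catches only known markers and secret formats; it complements, rather than replaces, usage policies and user awareness, since novel or unlabeled sensitive content will pass a regex filter.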