Insider Risk Management empowering risky AI usage visibility and security investigations

As businesses expand globally and employees access sensitive information from anywhere, insider risk grows. With a large share of data breaches caused by insiders, understanding and mitigating these risks is crucial. The rise of generative AI has further underscored the need for strong data security measures. Microsoft Purview Insider Risk Management (IRM) offers capabilities that help organizations address insider risks associated with generative AI (GenAI) applications.

IRM correlates a variety of signals to detect potentially malicious or inadvertent insider risks, such as IP theft and data leakage, and lets organizations create customized policies based on their internal governance requirements. Privacy is protected by pseudonymizing users by default and enforcing access controls that restrict user-level visibility.
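To make the pseudonymization idea concrete, here is a minimal sketch of how a stable, non-reversible user pseudonym might be derived. This is an illustration of the general technique only, not IRM's actual implementation; the key and the `ANON-` prefix are hypothetical.

```python
import hashlib
import hmac

# Hypothetical tenant-managed secret; in practice this would live in a key vault.
SECRET_KEY = b"org-managed-secret"

def pseudonymize(user_principal_name: str) -> str:
    """Derive a stable, non-reversible pseudonym for a user identifier.

    A keyed hash (HMAC) means the same user always maps to the same
    pseudonym, so analysts can correlate activity across alerts without
    seeing the real identity, and outsiders cannot recompute the mapping
    without the secret.
    """
    digest = hmac.new(
        SECRET_KEY,
        user_principal_name.strip().lower().encode("utf-8"),
        hashlib.sha256,
    ).hexdigest()
    return f"ANON-{digest[:12].upper()}"

alias = pseudonymize("alex@contoso.com")
```

Authorized re-identification (revealing the real user to investigators with the right role) would be handled by a separate, access-controlled lookup rather than by reversing the hash.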

With a growing need to protect against risky employee use of AI tools, IRM now offers detections for risky activity in GenAI apps, including sensitive information shared with these apps and responses generated from sensitive files. These new detections aim to surface misuse of GenAI technologies by insiders, whether accidental, negligent, or malicious.
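A toy sketch of the underlying idea, scanning text sent to a GenAI app for sensitive patterns, might look like the following. Real sensitive-information classifiers (such as Purview's built-in sensitive information types) are far richer than these two illustrative regexes.

```python
import re

# Illustrative patterns only; production classifiers cover many more types
# and use validation (checksums, context) rather than bare regexes.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-info types found in a GenAI prompt."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(prompt)]
```

A detection pipeline would run a check like this over prompts (and over generated responses traced back to sensitive files), then raise a policy alert when a match is found.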

For enhanced data security and tighter integration with security operations, IRM alerts are now integrated with Microsoft Defender XDR. This allows for better incident investigations by providing insights into risky user activities. SOC teams can access IRM alerts to gain context on incidents, aiding in distinguishing insider-driven incidents from external threats.
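A minimal sketch of the SOC-side triage this enables: separating insider-risk alerts from other Defender XDR signals in a stream of alert records. The field names and values here are illustrative, loosely modeled on security alert payloads, and are not a documented schema.

```python
# Hypothetical alert records, as a SOC automation script might receive them.
alerts = [
    {"id": "a1", "serviceSource": "defenderForEndpoint",
     "title": "Malware detected on endpoint"},
    {"id": "a2", "serviceSource": "insiderRiskManagement",
     "title": "Sensitive data shared with GenAI app"},
    {"id": "a3", "serviceSource": "defenderForOffice365",
     "title": "Phishing email delivered"},
]

def insider_risk_alerts(alerts: list[dict]) -> list[dict]:
    """Keep only the alerts raised by the insider-risk source, so an
    analyst can review them alongside the external-threat signals that
    make up the rest of the incident."""
    return [a for a in alerts if a["serviceSource"] == "insiderRiskManagement"]

triaged = insider_risk_alerts(alerts)
```

In practice this correlation happens inside Defender XDR's incident view; the point of the sketch is only that the alert's source lets investigators tell insider activity apart from external attack signals in the same incident.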

Source: https://techcommunity.microsoft.com/t5/security-compliance-and-identity/insider-risk-management-empowering-risky-ai-usage-visibility-and/ba-p/4298246
