GenAI is opening up an exciting world of possibilities, but it comes with its own risks. We are building a comprehensive GenAI Risk Posture Management Platform to help you make risk-informed decisions on GenAI, such as:
- What is your risk exposure to ‘Shadow GenAI’ usage?
- Should you host an open-source LLM or buy a third-party managed LLM?
- What is your risk while building foundational LLMs?
- How should you evaluate your third parties’ use of GenAI?
The Latest on Safe’s GenAI Research
GenAI Risk Scenario Library
What are the different ways in which you are exposed to GenAI risks? We have identified five vectors of new GenAI risk.
We have compiled a list of top risk scenarios into the GenAI Risk Library.
For example: If you are evaluating hosting an LLM vs. buying a managed LLM, the risk scenarios to consider are:
- Misinformed decisions / actions resulting from inaccurate content generation
- Business disruption or loss due to model availability issues
- Disclosure of private, confidential, or sensitive information (customer information, organization information, intellectual property, etc.)
- Regulatory or user harms due to the generation of undesirable content (legally restricted, sensitive or policy-violating, harmful, etc.)
Contact pankaj.g@safe.security for a more detailed library.
FAIR-CAM Based GenAI Control Library
For different GenAI risk scenarios, we are building control libraries based on the FAIR-CAM model.
Public disclosure of sensitive information by a privileged insider to a third-party managed GenAI application
| Attribute | Value |
| --- | --- |
| Attack Surface | Third-party vendor |
| Attack Outcome | Data compromise |
| Initial Attack Method | Human error |
| Business Resource | Sensitive PII |
| Threat Actor | Insider |
| Threat Intent | Accidental |
| Loss Effect | Loss of confidentiality |
Proprietary data stolen by a managed GenAI service through automated data collection
| Attribute | Value |
| --- | --- |
| Attack Surface | Publicly hosted applications |
| Attack Outcome | Data compromise |
| Initial Attack Method | External application exploitation |
| Business Resource | Proprietary data |
| Threat Actor | Outsider |
| Threat Intent | Intentional |
| Loss Effect | Loss of confidentiality |
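A scenario library like the one above lends itself to a structured representation, so scenarios can be filtered by attack surface, threat actor, and so on. The sketch below is a hypothetical illustration (the `RiskScenario` record type and field names are our own, derived from the attributes shown above), not the platform's actual data model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RiskScenario:
    """One entry in a GenAI risk scenario library.

    Hypothetical record type; fields mirror the attributes
    listed in the example scenarios above.
    """
    name: str
    attack_surface: str
    attack_outcome: str
    initial_attack_method: str
    business_resource: str
    threat_actor: str
    threat_intent: str
    loss_effect: str

library = [
    RiskScenario(
        name=("Public disclosure of sensitive information by a privileged "
              "insider to a third-party managed GenAI application"),
        attack_surface="Third-party vendor",
        attack_outcome="Data compromise",
        initial_attack_method="Human error",
        business_resource="Sensitive PII",
        threat_actor="Insider",
        threat_intent="Accidental",
        loss_effect="Loss of confidentiality",
    ),
]

# Filter the library, e.g. keep only insider-driven scenarios.
insider_scenarios = [s for s in library if s.threat_actor == "Insider"]
print(len(insider_scenarios))
```

With a full library loaded, the same one-line filters answer questions like "which scenarios lead to loss of confidentiality?" directly.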
GenAI Risk Index for SaaS Players
As more and more of your third parties use GenAI, what risks are you exposed to? We are starting with SaaS players and introducing the GenAI Risk Index for SaaS. Based on publicly available information, we have evaluated the following seven factors for each SaaS player and calculated an ‘index’ value. We will refine the approach further and add more players to this index over time.
| Factor | Description |
| --- | --- |
| Opt-Out of GenAI Features | Can customers disable or restrict access to the product’s generative AI capabilities? |
| Use of Customer Data for Training | Does the product use customer-provided data to train or improve its models? |
| Malicious Content Prevention | Are safeguards implemented to restrict the generation of malicious content? |
| Liability for Generated Content | Does the vendor accept liability for inappropriate generated content? |
| Data Deletion Policy | Is the right to erasure implemented for all content provided by the individual? |
| Disclosure of Trust and Safety Framework for Responsible AI | Has the vendor publicly declared the trust and safety practices it follows for secure AI development? |
| Red Teaming of the AI Solution | Are red-teaming exercises regularly conducted against the AI system architecture? |
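The exact scoring and weighting behind the index are not described here, so the following is only a minimal sketch under two stated assumptions: each factor is answered favorably (1) or unfavorably (0), and all seven factors are weighted equally. The factor names and the `genai_index` function are illustrative, not the platform's actual methodology.

```python
# Hypothetical: seven equally weighted binary factors, index scaled to 0-100.
FACTORS = [
    "Opt-out of GenAI features",
    "Use of customer data for training",
    "Malicious content prevention",
    "Liability for generated content",
    "Data deletion policy",
    "Disclosure of trust and safety framework",
    "Red teaming of the AI solution",
]

def genai_index(scores: dict) -> float:
    """Return the share of favorably answered factors as a 0-100 index.

    `scores` maps factor name -> 1 (favorable) or 0 (unfavorable);
    missing factors count as unfavorable.
    """
    favorable = sum(scores.get(factor, 0) for factor in FACTORS)
    return round(100 * favorable / len(FACTORS), 1)

# Example vendor with 4 of 7 factors answered favorably.
example_vendor = {
    "Opt-out of GenAI features": 1,
    "Malicious content prevention": 1,
    "Data deletion policy": 1,
    "Red teaming of the AI solution": 1,
}
print(genai_index(example_vendor))  # 4/7 of factors favorable -> 57.1
```

A production index would likely weight factors differently (e.g. data-training practices may matter more than opt-out ergonomics) and score each factor on a graded scale rather than a binary one.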