The Practical AI Security Checklist: How to Evaluate Any AI Vendor
- The SnapNote Team

- Jan 4
- 4 min read

Introduction: Vendor Risk Is Now AI Risk
AI vendor selection used to be simple:
features,
price,
integrations.
Now it includes:
data retention,
training use,
tool access,
prompt injection defenses,
and compliance controls.
The biggest mistake organizations make is evaluating AI tools like normal SaaS tools—without asking AI-specific questions.
This post gives you a practical checklist you can use in procurement, security review, or a quick vendor comparison.
Quick Definitions
Vendor due diligence: A process to verify a vendor’s security, privacy, and compliance posture before adoption.
Evidence: Documents or independent audits that back up claims (not just marketing statements).
RAG (retrieval-augmented generation): The AI pulls from documents/knowledge sources to answer questions.
Agent/tool use: The AI can call APIs, run actions, or access systems.
The AI Vendor Checklist (Use This as Your Scorecard)
Below are the core categories. You can treat each as pass/fail, or score them.
Category 1: Data Use (Training, Retention, Deletion)
Questions
Do you train on customer prompts/outputs by default? If not, confirm in writing.
Can we disable training at the tenant/account level?
What is the retention period for:
prompts,
outputs,
logs,
backups?
Can we delete:
a single conversation,
all tenant data,
and confirm deletion timelines?
Are prompts/outputs used for human review? Under what conditions?
Evidence to request
Data handling policy or DPA exhibit with retention schedule
Deletion process documentation
“No training” contractual language (if applicable)
Red flags
“We may retain data indefinitely for improving services.”
“We do not specify retention for logs/backups.”
No ability to disable training for business use.
Category 2: Access Control and Isolation
Questions
Do you support SSO (SAML/OIDC) and MFA?
Can we enforce role-based access control (RBAC) and least privilege?
Is our data isolated from other customers (tenant isolation)?
Who at the vendor can access our data, and how is that access logged and approved?
Evidence to request
RBAC documentation
Admin access procedures
Audit log samples
Red flags
No SSO for business plans
No clear story on internal access controls
Category 3: Encryption and Key Management
Questions
Is data encrypted in transit and at rest?
Do you support customer-managed keys (BYOK / KMS integration) where relevant?
Are backups encrypted?
Are secrets and tokens stored securely?
Evidence to request
Encryption overview
Key management documentation
Red flags
Vague answers like “industry standard encryption” with no detail.
Category 4: AI-Specific Threats (Prompt Injection, Data Exfiltration)
Questions
How do you defend against prompt injection (direct and indirect)?
If your system uses RAG, do you permission-filter retrieved documents per user?
If your system supports tools/agents, is tool use allowlisted and validated?
Do you have output filtering/redaction for sensitive patterns (secrets, PII)?
Do you have a security test suite for injection attempts?
Evidence to request
Threat model summary
Security testing approach for AI features
Tool/agent governance documentation
Red flags
“Prompt injection is not relevant to our product.”
Broad tool permissions with no allowlist and no logging.
Category 5: Logging, Monitoring, and Auditability
Questions
Do we get tenant-level audit logs (who used AI, when, what features)?
Do you log document access and tool calls (for AI agents)?
Can logs integrate with our SIEM?
How long are logs retained and can we control retention?
Evidence to request
Audit log documentation and sample exports
SIEM integration guide
Red flags
No audit logs, or logs only available “upon request.”
Category 6: Compliance and Legal Terms
Questions
Do you offer a DPA (data processing agreement) for personal data?
Can you support data residency requirements (region selection)?
Do you maintain and disclose a sub-processor list?
For healthcare scenarios: will you sign a BAA (if required for your use case)?
What is your breach notification timeline?
Evidence to request
DPA template
Sub-processor list
Breach notification clause
Residency commitments (if needed)
Red flags
No DPA for tools processing customer data
No sub-processor transparency
Long or vague breach notification timelines
Category 7: Security Program Maturity
Questions
Do you have recent third-party audits (SOC 2 Type II, ISO 27001, etc.)?
Do you run penetration tests? How often?
Do you have a vulnerability disclosure program?
Do you have an incident response plan?
Evidence to request
SOC 2 / ISO reports (or at least an executive summary)
Pen test attestation letter
IR policy summary
Red flags
“We’re too new for audits” with no compensating controls.
Minimum Viable Approval: What You Should Require Every Time
If you want a simple baseline, require these before approving any AI vendor:
Clear “training use” answer (yes/no) in writing
Retention schedule (prompts, logs, backups)
SSO/MFA support (or compensating controls)
Tenant-level audit logs
DPA (and BAA if applicable)
Security audit evidence (SOC2/ISO or equivalent)
If a vendor cannot meet these, do not feed it sensitive data.
Simple Scoring Model (Fast, Practical)
If you need a quick scorecard, use this weighting:
Data use + retention: 30%
Access control + audit logs: 25%
AI-specific defenses: 20%
Compliance/legal: 15%
Security maturity: 10%
Interpretation
80–100: reasonable for Tier B (internal)
60–79: only Tier A; Tier B only with strict limitations
< 60: do not use for company data
How to Evaluate a Vendor in 7 Days
Day 1: Define the use case and data tier (A/B/C).
Day 2: Send the checklist as a questionnaire.
Day 3: Collect evidence (DPA, SOC2/ISO, retention schedule).
Day 4: Run a small pilot with non-sensitive data.
Day 5: Test basic “abuse prompts” (injection attempts, access boundaries).
Day 6: Score the vendor and decide allowed data tier.
Day 7: Publish internal guidance:
approved use cases,
prohibited data,
retention settings,
and escalation path.
Common Vendor Claims (And How to Verify Them)
Claim: “We don’t train on your data.”Verify: contract language + plan type + documentation that applies to your tenant.
Claim: “Your data is encrypted.”Verify: encryption details + key management + backup encryption.
Claim: “We are compliant.”Verify: what standard, what scope, what date, what report.
Claim: “We are secure.”Verify: audits, pen tests, incident response, and audit logs.
Key Takeaways
AI vendor evaluation must include AI-specific risks (prompt injection, tool access, retrieval permissioning).
Most failures happen in data retention, logging, and unclear training use.
A short checklist + evidence request gets you most of the benefit fast.
Match the vendor’s posture to the data tier—not the marketing pitch.

