Executive summary
AI Security Risk Assessment
Match your system architecture to the right security engagement. One page, no fluff.
If your system has X, you need Y
Most teams don't need "everything" — they need the right level of testing for how their AI system is actually built.
This table maps system type to primary risk and recommended engagement.
| If your system includes… | Your primary risk is… | Recommended engagement |
|---|---|---|
| Single-tenant AI features: chatbots, content generation, summarization | Prompt & output abuse: jailbreaks, harmful content generation, data leakage through outputs | AI Security Readiness Check |
| RAG with uploads or search: document Q&A, knowledge bases, semantic search | Retrieval poisoning & leakage: malicious documents injecting instructions, cross-document data exposure | Production AI Pentest |
| Multi-tenant AI (SaaS): B2B platforms, white-label AI, shared infrastructure | Cross-tenant exposure: Tenant A accessing Tenant B's data via vector leakage or cache poisoning | Production AI Pentest |
| Agents or tool execution: MCP servers, function calling, autonomous agents | Privilege escalation: prompt injection triggering delete_user, execute_sql, or file operations (sketched below the table) | Production AI Pentest |
| Enterprise customers: SOC 2, vendor reviews, compliance requirements | Evidence & audit gaps: missing documentation for security questionnaires and compliance audits | Enterprise AI Assurance |
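To make the agent-tooling risk concrete, here is a minimal Python sketch of a per-tool authorization gate. The tool names, registry, and policy structure are hypothetical examples rather than any specific framework's API; the point is that tool calls are authorized against the human caller's role, and privileged tools require explicit confirmation, so a prompt-injected request cannot execute silently.

```python
# Illustrative sketch only: a per-tool authorization gate for an agent's
# function-calling loop. The tool names, registry, and policy below are
# assumptions for demonstration, not any specific framework's API.

PRIVILEGED_TOOLS = {"delete_user", "execute_sql"}

# Hypothetical tool registry mapping tool names to handlers.
TOOL_REGISTRY = {
    "search_docs": lambda args: f"results for {args.get('query', '')}",
    "delete_user": lambda args: f"deleted user {args.get('user_id')}",
}

# Which human roles may trigger each tool, regardless of what the model asks for.
TOOL_POLICY = {
    "search_docs": {"user", "admin"},
    "delete_user": {"admin"},
}


def dispatch_tool_call(tool_name: str, args: dict, caller_role: str) -> str:
    """Authorize on the human caller's role, not on the model's request text."""
    if caller_role not in TOOL_POLICY.get(tool_name, set()):
        raise PermissionError(f"'{tool_name}' is not permitted for role '{caller_role}'")
    if tool_name in PRIVILEGED_TOOLS:
        # Privileged operations should also require out-of-band confirmation,
        # so a prompt-injected tool call cannot run silently.
        raise PermissionError(f"'{tool_name}' requires explicit human confirmation")
    return TOOL_REGISTRY[tool_name](args)


# A prompt-injected "delete_user" call from a normal user session is refused:
# dispatch_tool_call("delete_user", {"user_id": 42}, caller_role="user")  # -> PermissionError
```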
Three engagement tiers
Each tier builds on the previous. Choose based on your system complexity and compliance requirements.
AI Security Readiness Check (1-2 weeks)
Best for: Pre-launch validation, early-stage AI features
Deliverables
- ✓ Architecture risk assessment
- ✓ OWASP Top 10 for LLMs gap analysis
- ✓ Prioritized remediation roadmap
- ✓ 30-minute findings walkthrough
Production AI Pentest (3-4 weeks)
Best for: Production systems, enterprise deals, SOC 2 prep
Deliverables
- ✓ Full 10-section checklist execution
- ✓ Proof-of-concept exploits with evidence
- ✓ Severity-scored findings report
- ✓ Remediation guidance + re-test
Enterprise AI Assurance (4-6 weeks)
Best for: Enterprise sales, regulated industries, board reporting
Deliverables
- ✓ Everything in Production Pentest
- ✓ Compliance evidence package
- ✓ Security questionnaire support
- ✓ Executive summary for stakeholders
What we test that others miss
Standard pentests check the OWASP Top 10. We add the AI-specific attack vectors that break production systems.
- ✓ RAG retrieval poisoning: documents with hidden instructions that execute when retrieved
- ✓ Multi-tenant vector leakage: tenant filters applied after similarity search, not before (see the first sketch below)
- ✓ Agent tool abuse: prompt injection triggering privileged operations
- ✓ Cost-based DoS: token exhaustion attacks that drain API budgets (see the second sketch below)
- ✓ MCP authorization gaps: tools registered without per-tool RBAC or argument validation
- ✓ Indirect injection: malicious payloads in PDFs, images, and CSVs that poison RAG context
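A minimal sketch of the multi-tenant vector leakage pattern, using a toy in-memory index and cosine helper as stand-ins for a real vector-database client: the tenant filter has to constrain the candidate set before similarity ranking, not be applied to the top-k results afterwards.

```python
# Illustrative sketch only: why the tenant filter must constrain the similarity
# search itself. The tiny in-memory index and cosine helper are assumptions for
# demonstration, not a real vector-database client.
import math

INDEX = [
    {"tenant": "tenant_a", "text": "Tenant A pricing sheet", "vec": [0.9, 0.1]},
    {"tenant": "tenant_b", "text": "Tenant B customer list", "vec": [0.8, 0.2]},
]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def search_unsafe(query_vec, tenant, k=1):
    # Anti-pattern: rank across *all* tenants, then filter the top-k afterwards.
    # If the late filter is missing or bypassed, another tenant's chunk is returned;
    # even when it is present, relevant results get silently dropped.
    top_k = sorted(INDEX, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)[:k]
    return [d for d in top_k if d["tenant"] == tenant]

def search_safe(query_vec, tenant, k=1):
    # Constrain candidates to the caller's tenant *before* ranking.
    candidates = [d for d in INDEX if d["tenant"] == tenant]
    return sorted(candidates, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)[:k]

# search_unsafe([1.0, 0.0], "tenant_b") returns nothing (the top-1 hit belongs to tenant_a),
# while search_safe([1.0, 0.0], "tenant_b") returns only tenant_b's document.
```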
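And a minimal sketch of one mitigation for cost-based DoS, assuming an in-memory counter, example budget values, and a rough characters-per-token estimate: cap the request size and a per-user rolling token spend before the model is ever called.

```python
# Illustrative sketch only: a per-user token budget as one mitigation for
# cost-based DoS. The budget numbers, the ~4-chars-per-token estimate, and the
# in-memory counter are assumptions for demonstration, not tuned recommendations.
from collections import defaultdict

MAX_INPUT_CHARS = 8_000        # reject oversized prompts outright
DAILY_TOKEN_BUDGET = 200_000   # per user per day (example value)

_spent_today: dict[str, int] = defaultdict(int)

def enforce_budget(user_id: str, prompt: str, max_output_tokens: int) -> None:
    """Raise before the model is called if this request would exceed the user's budget."""
    if len(prompt) > MAX_INPUT_CHARS:
        raise ValueError("prompt too large")
    estimated = len(prompt) // 4 + max_output_tokens  # rough ~4 chars/token estimate
    if _spent_today[user_id] + estimated > DAILY_TOKEN_BUDGET:
        raise RuntimeError("daily token budget exceeded")
    _spent_today[user_id] += estimated
```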
Ready to assess your AI system?
Start with a 30-minute call. We'll map your architecture to the right engagement tier and scope the work.
