Architecture Overview
Vormur operates as a stateless automation layer on top of your existing compliance infrastructure. Our service processes data transiently in memory — nothing is persisted to disk, no customer data is stored in any database, and all information is discarded after the investigation cycle completes.
This stateless architecture gives Vormur a substantially smaller attack surface than a traditional data custodian. There is no persistent data store to breach, no customer records to exfiltrate, and no long-lived storage to compromise.
Data Handling
Stateless Processing
All data processing occurs transiently in memory on US-based infrastructure. When an alert investigation completes, all associated data in Vormur's processing environment is discarded. We do not maintain databases of customer information, transaction histories, or investigation records outside of what is written back to your platform.
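The investigation cycle described above can be sketched as follows. This is an illustrative outline, not Vormur's implementation: `score_alert`, `PlatformClient`, and `run_investigation_cycle` are hypothetical names, and the scoring logic is a placeholder. The point it demonstrates is that all intermediate state lives in local variables, and the only durable output is the write-back to the customer's platform.

```python
class PlatformClient:
    """Stand-in for the customer's platform API (assumed interface)."""
    def __init__(self):
        self.records = {}

    def write_results(self, alert_id: str, findings: dict) -> None:
        self.records[alert_id] = findings

def score_alert(alert: dict) -> float:
    # Placeholder scoring rule, purely illustrative.
    return min(alert.get("amount", 0) / 10_000, 1.0)

def run_investigation_cycle(alert: dict, platform: PlatformClient) -> None:
    # All intermediate state is held in local variables (process memory only).
    findings = {"risk_score": score_alert(alert)}
    # The only durable output is the write-back to the customer's platform.
    platform.write_results(alert["id"], findings)
    # On return, the locals go out of scope and are garbage-collected;
    # nothing is written to disk or to any Vormur-side database.
```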
PII Tokenization
Before any data is sent for AI inference, all personally identifiable information is stripped and replaced with opaque tokens. Names, Social Security numbers, account numbers, and other direct identifiers are never transmitted to the AI model. The AI processes only transaction patterns, amounts, and behavioral signals. The original identifiers are restored from the token mapping when investigation results are written back to your platform.
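A minimal sketch of this tokenize-then-restore flow is below. The regex patterns and helper names are assumptions for illustration; a production detector would use a vetted PII-recognition library rather than two regexes. The token-to-identifier map exists only in memory for the duration of one investigation.

```python
import re
import secrets

# Hypothetical detection patterns, illustrative only.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
ACCT_RE = re.compile(r"\b\d{10,12}\b")

def tokenize_pii(text: str) -> tuple[str, dict]:
    """Replace direct identifiers with opaque tokens; return masked text
    plus an in-memory mapping used only for the current investigation."""
    mapping: dict[str, str] = {}

    def _sub(match: re.Match) -> str:
        token = f"<TOK_{secrets.token_hex(4)}>"
        mapping[token] = match.group(0)
        return token

    text = SSN_RE.sub(_sub, text)
    text = ACCT_RE.sub(_sub, text)
    return text, mapping

def detokenize(text: str, mapping: dict) -> str:
    """Restore original identifiers when writing results back to the platform."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text
```

Only the masked text reaches the inference provider; the mapping is discarded with the rest of the in-memory state once results are written back.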
Encryption
All data in transit is encrypted using TLS 1.3. This covers every API connection between Vormur and your platform, and between Vormur and our AI inference providers.
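For clients that want to verify this on their side, a connection can be pinned to TLS 1.3 and nothing older. The sketch below uses Python's standard library; the `fetch` helper and its URL parameter are illustrative, not part of Vormur's API.

```python
import ssl
import urllib.request

# Build a client context that refuses any protocol version below TLS 1.3.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

def fetch(url: str) -> bytes:
    # Any server that cannot negotiate TLS 1.3 will fail the handshake.
    with urllib.request.urlopen(url, context=ctx) as resp:
        return resp.read()
```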
Infrastructure Security
US-Based Processing
All inference and data processing occurs on US-based infrastructure, meeting data residency requirements for US financial institutions.
No Persistent Storage
Vormur does not operate databases or file systems containing customer data. Processing is in-memory only, discarded after each investigation cycle.
Encrypted Communications
TLS 1.3 on all connections. API keys and credentials managed with automatic rotation. No secrets in code or logs.
Audit Logging
All agent actions, routing decisions, and investigation events are logged to your platform's audit trail — not to ours.
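The write-path for these events can be sketched as an append-only call against the customer's own audit API. `CustomerAuditTrail` and `log_agent_action` are hypothetical names standing in for your platform's actual audit interface; the key property shown is that entries land in the customer's trail and Vormur keeps no copy.

```python
import time

class CustomerAuditTrail:
    """Stand-in for the customer platform's audit API (assumed interface)."""
    def __init__(self):
        self.entries = []

    def append(self, entry: dict) -> None:
        self.entries.append(entry)

def log_agent_action(trail: CustomerAuditTrail, actor: str,
                     action: str, detail: dict) -> None:
    # Every agent action and routing decision is appended to the CUSTOMER's
    # audit trail; no Vormur-side copy is retained.
    trail.append({
        "ts": time.time(),
        "actor": actor,
        "action": action,
        "detail": detail,
    })
```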
AI Model Security
Vormur's AI inference providers are contractually bound to the following commitments:
- API data is not used for model training. Your investigation patterns and transaction data will never appear in a model that serves other customers.
- Inference is stateless. Prompts and completions are not persisted beyond what is required for short-term abuse monitoring.
- US-only inference is enforced, ensuring data does not leave US-based infrastructure during processing.
- SOC 2 Type II certification is maintained by all inference providers.
Access Controls
- All API endpoints require bearer token authentication.
- Per-user and per-project rate limiting with sliding window enforcement.
- Tenant isolation ensures each customer's data and configuration are fully separated.
- Role-based access controls govern which users can view, edit, and approve investigations.
Incident Response
In the event of a security incident, Vormur will:
- Notify affected customers within 72 hours of confirmed discovery.
- Provide a detailed assessment of impact and scope.
- Implement containment and remediation measures.
- Deliver a post-incident report with root cause analysis and corrective actions.
Contact
For security questions, vulnerability reports, or to request our latest security documentation, contact us at security@vormur.com.