Trust Center
Security architecture
Context deploys autonomous AI agents that execute code, access enterprise data, and interact with live systems. The agent sandbox is treated as a hostile, untrusted environment. Every control on this page is designed with that assumption.
Compliance
Frameworks and certifications
SOC 2 Type II
Compliant. Annual audit of security, availability, and confidentiality controls by an independent third-party auditor.
SOC 2 Type I
Compliant. Point-in-time assessment of the design and implementation of security controls.
ISO 27001
CompliantInternational standard for information security management systems (ISMS).
GDPR
In Progress. European Union General Data Protection Regulation for personal data protection.
Architecture
How agent isolation works
The security architecture assumes that anything running inside a sandbox will attempt to escalate privileges, exfiltrate data, or reach external systems.
Sandbox Isolation
Hardware-level isolation per agent session
Each agent session gets its own microVM with a dedicated kernel, provisioned through Kata Containers and Firecracker. This is the same isolation primitive that powers AWS Lambda. The hypervisor enforces memory and CPU isolation at the hardware virtualization layer.
Sandboxes are ephemeral: provisioned when a session starts and destroyed when it ends. No persistent attack surface accumulates between sessions. The runtime runs as a non-root user, drops all Linux capabilities, disables privilege escalation, and applies a default seccomp profile.
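The hardening above maps closely onto standard Kubernetes pod security fields. As a hedged illustration (field names follow the Kubernetes Pod spec; the `kata` runtime class name and the image name are assumptions, not the platform's actual values), a per-session sandbox spec might look like:

```python
# Hypothetical sketch: the sandbox hardening controls expressed as a
# Kubernetes pod spec. Runtime class and image names are illustrative.
def sandbox_pod_spec(session_id: str) -> dict:
    """Build an ephemeral, hardened sandbox pod spec for one agent session."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": f"sandbox-{session_id}"},
        "spec": {
            "runtimeClassName": "kata",              # microVM isolation (Kata + Firecracker)
            "automountServiceAccountToken": False,   # no cluster API credentials in the sandbox
            "securityContext": {"runAsNonRoot": True, "runAsUser": 65534},
            "containers": [{
                "name": "agent",
                "image": "sandbox-runtime:latest",   # illustrative image name
                "securityContext": {
                    "allowPrivilegeEscalation": False,          # no setuid escalation
                    "capabilities": {"drop": ["ALL"]},          # drop all Linux capabilities
                    "seccompProfile": {"type": "RuntimeDefault"},
                    "readOnlyRootFilesystem": True,
                },
            }],
        },
    }
```

Because the pod is created per session and deleted with it, any compromise is confined to a single short-lived microVM.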
Network Egress
Deny-by-default at the infrastructure layer
Sandbox network egress is deny-by-default, enforced at the network infrastructure layer — not at the application level where a compromised process could bypass it. Direct internet access, the cloud instance metadata service, the cluster API server, peer sandboxes, and external DNS are all blocked.
A sandbox can reach exactly one destination: the internal mediated gateway. LLM inference, web search, and connector requests all follow the same path. The gateway authenticates each request, applies policy, makes the upstream call with its own credentials, and returns the result.
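The mediation pattern can be sketched in a few lines. This is a minimal illustration, not the gateway's implementation: token values, scope names, and the upstream stub are all hypothetical.

```python
# Minimal sketch of a mediated gateway: the sandbox presents a per-session
# token; the upstream provider credential lives only in the gateway.
SESSION_TOKENS = {"tok-123": {"org": "org-1", "scopes": {"llm", "search"}}}
UPSTREAM_API_KEY = "sk-upstream-secret"  # never leaves the gateway

def call_upstream(payload: dict, api_key: str) -> dict:
    # Stand-in for the real provider call (LLM inference, web search, ...).
    return {"ok": True, "echo": payload}

def handle(session_token: str, scope: str, payload: dict) -> dict:
    """Authenticate the per-session token, apply policy, call upstream."""
    session = SESSION_TOKENS.get(session_token)
    if session is None or scope not in session["scopes"]:
        raise PermissionError("denied by gateway policy")
    # The upstream call uses the gateway's own credential; the sandbox
    # never sees UPSTREAM_API_KEY.
    return call_upstream(payload, api_key=UPSTREAM_API_KEY)
```

The key property is structural: even a fully compromised sandbox process holds only a short-lived session token, which is useless against the upstream provider directly.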
The sandbox never receives upstream secrets. For LLM inference, the sandbox authenticates with a per-session token that the gateway swaps for real provider credentials server-side. For drive access, credentials are minted via STS with an inline session policy scoped to the organization's storage prefix and a specific read or write intent. TTL is 1–12 hours.
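An inline session policy of the kind described above might be constructed as follows. This is an illustrative sketch: the bucket name, prefix layout, and the exact S3 actions per intent are assumptions, not the platform's actual values.

```python
import json

# Illustrative sketch of an STS inline session policy scoped to one
# organization's storage prefix and a single read-or-write intent.
def drive_session_policy(org_id: str, intent: str) -> str:
    """Build the inline session policy for a scoped drive credential."""
    assert intent in ("read", "write")
    actions = ["s3:GetObject"] if intent == "read" else ["s3:PutObject"]
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": actions,
            # Hypothetical bucket and prefix layout:
            "Resource": f"arn:aws:s3:::drive-bucket/orgs/{org_id}/*",
        }],
    }
    return json.dumps(policy)
```

In a real flow, a document like this would be passed as the `Policy` parameter of an `sts:AssumeRole` call, with `DurationSeconds` set within the stated 1–12 hour window; STS then returns credentials that can never exceed the intersection of the role and this policy.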
Data Isolation
Organization is the hard tenancy boundary
Each user belongs to exactly one organization. Cross-organization data access is impossible by design, enforced at both the application and infrastructure layers: every database row is tied to an organization through foreign key constraints and every query is scoped by organization ID, while object storage uses STS session policies enforcing prefix-level boundaries.
Agent-to-agent isolation is total. Each agent has its own sandbox, session, credentials, and filesystem. There is no shared context, shared memory, or shared credential between agents. This stands in contrast to platforms where years of accumulated permission sprawl become the agent's access surface.
Identity & Access
Agents are first-class principals
Organization administrators define what each agent can do. Agents operate under the principle of least privilege: the platform grants only the access required for the task. Drive credentials carry explicit read-only or read-write intent, enforced at the IAM policy level.
Observability
Complete chain of custody
Every inference call is warehoused in-cluster with full context: organization, agent, session, model, tokens, input/output, cost, and latency. Identity is resolved server-side — agents cannot misrepresent themselves in traces.
An immutable, append-only audit log captures every permission change, API key lifecycle event, role grant, and data access operation. Given any agent, user, resource, or time range, the complete chain of custody can be reconstructed.
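One common way to make an audit log tamper-evident is hash chaining, where each entry commits to the hash of its predecessor. The sketch below illustrates the idea only; field names are hypothetical and the platform's actual log format is not specified here.

```python
import hashlib
import json
import time

# Illustrative hash-chained, append-only audit log: altering any earlier
# entry invalidates every hash that follows it.
class AuditLog:
    def __init__(self) -> None:
        self._entries: list[dict] = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, actor: str, action: str, resource: str) -> dict:
        entry = {
            "actor": actor, "action": action, "resource": resource,
            "ts": time.time(), "prev": self._last_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self._last_hash = entry["hash"]
        self._entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any tampering breaks a link."""
        prev = "0" * 64
        for e in self._entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

Reconstructing a chain of custody then amounts to filtering this ordered log by agent, user, resource, or time range.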
All observability integrates with existing enterprise stacks: OpenTelemetry for distributed tracing, structured JSON logs for any SIEM (Datadog, Splunk, ELK, CloudWatch), and Prometheus metrics for monitoring. An in-cluster Grafana stack is included.
Deployment
Entire stack runs in customer infrastructure
The entire platform runs in the customer's VPC across multiple availability zones. Context does not host any component of the deployment. All encryption uses customer-managed keys.
Updates are delivered as Helm releases through a gated promotion pipeline (Unstable → Beta → Stable). The customer deploys at their discretion through a self-service admin console with health monitoring, one-click upgrades with rollback, preflight validation, and configuration management. Context does not have access to the cluster.
Airgapped installation is fully supported, with offline image bundles and automatic registry rewriting to the customer's internal registry (ECR, Harbor, Artifactory). No outbound connectivity is required for telemetry, license verification, or any phone-home mechanism.
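The registry-rewriting step can be pictured as a pure string transform over image references. A hedged sketch (the host-detection heuristic is a common convention, and the hostnames are examples, not real endpoints):

```python
# Illustrative sketch of registry rewriting for airgapped installs: image
# references are repointed at the customer's internal registry.
def rewrite_image(image: str, internal_registry: str) -> str:
    """Repoint a container image reference at an internal registry."""
    parts = image.split("/")
    # The first component is a registry host only if it looks like one
    # (contains "." or ":", or is "localhost") and a path follows it.
    if len(parts) > 1 and ("." in parts[0] or ":" in parts[0] or parts[0] == "localhost"):
        parts = parts[1:]
    return f"{internal_registry}/{'/'.join(parts)}"
```

Applied across a Helm release's manifests, this leaves repository paths and tags intact while ensuring every pull resolves inside the airgap.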
Resources
Reports and policies
Audit reports and security documentation are available under NDA.
Controls
57 controls monitored
Continuously monitored across infrastructure, data protection, access management, and operations.
Subprocessors
Third-party services
Security inquiries
For audit report requests, vendor security questionnaires, or vulnerability disclosures.
trust@context.ai