Privacy & Security Standards
At DataGrail, trust is our most important asset. We understand that introducing AI capabilities requires a new level of transparency and a steadfast commitment to data security.
We designed Vera, our AI-powered privacy agent, with a security-first and privacy-by-design architecture. This page outlines the comprehensive measures we've taken to ensure your data remains secure, isolated, and under your control at all times.
Core Principles
Our entire architecture is built on these principles:
- Strict tenant isolation: Your data is never commingled with another customer's data. It is architecturally impossible for Vera to access, see, or report on any data outside of your own organization.
- Obeys your permissions: Vera inherits the exact same permissions as the user who is asking the question. If you don't have permission to see a piece of data in DataGrail, Vera won't be able to see it either.
- Separation of logic and data: Vera's "brain" (the AI model, Claude Haiku 4.5) is hosted in a completely separate environment (AWS Bedrock) from the core DataGrail database. The AI never has direct access to your data; instead, it accesses your data through a multi-layered security model (more below).
- No training on customer data: Your data is never used to train or fine-tune the model. It is used only as context for generating responses: relevant records are read from the database and supplied to the model fresh each time you ask a question.
- Full auditability: Every request Vera makes for data is fully logged in a detailed audit trail, providing complete transparency for compliance and review. We can provide logs of the conversations to you upon request.
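The first two principles can be sketched as a data-access layer that scopes every lookup to the caller's organization and role. This is an illustrative sketch only: the role names, permission sets, and function signatures below are hypothetical, not DataGrail's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical permission model for illustration only; DataGrail's
# actual roles and checks may differ.
ROLE_PERMISSIONS = {
    "admin": {"requests", "audit_logs", "integrations"},
    "request_agent": {"requests"},
}

@dataclass(frozen=True)
class Caller:
    user_id: str
    org_id: str
    role: str

def fetch_records(caller: Caller, resource: str, store: dict) -> list:
    """Return the records of `resource` that `caller` is allowed to see.

    Vera inherits the caller's permissions: if the caller's role cannot
    read a resource, neither can the agent acting on their behalf.
    """
    if resource not in ROLE_PERMISSIONS.get(caller.role, set()):
        raise PermissionError(f"role {caller.role!r} may not read {resource!r}")
    # Tenant isolation: every lookup is filtered by the caller's org,
    # so cross-tenant reads are impossible by construction.
    return [r for r in store.get(resource, []) if r["org_id"] == caller.org_id]
```

For example, a `request_agent` in one organization would receive only that organization's request records, and any attempt to read `audit_logs` would be rejected before a query runs.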
Secure Data Access
We use a multi-layered security model to protect every question you ask. Here is a simplified step-by-step flow of how Vera processes a request securely.
- You submit a question: You enter your question in the Vera chat interface within the DataGrail platform.
- A secure, temporary token is created: Our system generates a unique, short-lived (5-minute) security token (a JWT). This token acts as a temporary "pass" that is tied specifically to you, your organization, and that single conversation.
- Vera receives and processes the query: Your question is processed by industry-leading LLMs hosted on AWS Bedrock. Today, we use Claude Haiku 4.5, but we may employ other models in the future if they provide a better experience.
- Vera requests data from DataGrail: The AI agent cannot access your database directly. Instead, it must make a request back to a secure, dedicated DataGrail API endpoint that acts as a Model Context Protocol (MCP) gateway. It presents its temporary token as proof of identity.
- DataGrail verifies everything: This is the most critical step. Before any data is released, our MCP gateway runs a multi-layer security check:
- Token validation: Is the token authentic and unexpired?
- Tenant check: Does the customer ID in the token match your organization?
- User check: Does the user ID in the token match your user account?
- Permission check: Does your user role (e.g., Admin, Request Agent) have permission to view the data being requested?
- Secure, scoped data retrieval: Only after all checks pass does DataGrail retrieve the data. The database query itself is also automatically filtered to ensure it only runs against your organization's data.
- Vera synthesizes and displays the answer: The AI agent receives only the specific, authorized data it requested. It then uses this data to formulate a natural language answer and sends it back to you. The temporary token is not used again for this query.
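The token lifecycle in steps 2 and 5 can be illustrated with a minimal HS256 JWT built from the Python standard library. This is a teaching sketch, not DataGrail's code: a production gateway would use a vetted JWT library and a managed signing key, and the claim names (`sub`, `org`, `conv`) are assumptions.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-signing-key"  # illustrative only; real keys are managed secrets

def _b64(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def mint_token(user_id: str, org_id: str, conversation_id: str, ttl: int = 300) -> str:
    """Mint a short-lived (default 5-minute) token tied to one user,
    one organization, and one conversation."""
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64(json.dumps({
        "sub": user_id,
        "org": org_id,
        "conv": conversation_id,
        "exp": int(time.time()) + ttl,
    }).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = _b64(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_token(token: str, expected_user: str, expected_org: str) -> dict:
    """Run the gateway's multi-layer check: signature, expiry, tenant, user."""
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected_sig = _b64(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected_sig):        # token validation
        raise ValueError("invalid signature")
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    if claims["exp"] < time.time():                       # token validation
        raise ValueError("token expired")
    if claims["org"] != expected_org:                     # tenant check
        raise ValueError("tenant mismatch")
    if claims["sub"] != expected_user:                    # user check
        raise ValueError("user mismatch")
    return claims
```

A role-based permission check (step 5's final layer) would run after `verify_token` succeeds; data is released only when every layer passes.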
This entire process, from question to answer, happens in seconds, with multiple security checkpoints enforced along the way.
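The scoped retrieval in step 6 can be sketched as a query builder that appends a tenant filter to every statement, regardless of what the agent asked for. The table names and helper below are hypothetical; they show the shape of the safeguard, not DataGrail's schema.

```python
def scoped_query(table: str, columns: list, org_id: str, extra_where=None):
    """Build a parameterized SQL query that is always tenant-scoped.

    Illustrative sketch: the gateway appends an organization filter to
    every query, so it can only run against the requesting tenant's rows.
    A real implementation would also validate column names against an
    allow-list.
    """
    allowed_tables = {"privacy_requests", "data_subjects"}  # illustrative allow-list
    if table not in allowed_tables:
        raise ValueError(f"unknown table: {table}")
    where = ["org_id = ?"]
    params = [org_id]
    if extra_where:
        clause, value = extra_where
        where.append(clause)
        params.append(value)
    sql = f"SELECT {', '.join(columns)} FROM {table} WHERE {' AND '.join(where)}"
    return sql, params
```

Even a request with no other conditions still produces `WHERE org_id = ?`, so a cross-tenant read cannot be expressed in the first place.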
Disclaimer: The information contained in this message does not constitute legal advice. We advise seeking professional counsel before acting on or interpreting any material.