How it works

Six steps. No copies.

AIGIS sits between your enterprise systems and the LLM. It governs data on the way in, and it logs every decision on the way out.

1. Classify intent

Read, write, action, or coach? Multi-step? AIGIS uses a deterministic classifier first (zero LLM cost) and only escalates to the model when ambiguity demands it.
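The deterministic-first pattern described above can be sketched as a small rule table with an explicit "ambiguous" escape hatch. This is a minimal illustration, not AIGIS's real classifier; the keyword patterns and intent names are assumptions.

```python
import re

# Hypothetical rule table for a deterministic-first intent classifier.
# The real product's rules are not public; these patterns are illustrative.
INTENT_PATTERNS = {
    "read":   re.compile(r"\b(show|list|find|get|who|what)\b", re.I),
    "write":  re.compile(r"\b(update|create|set|change|add)\b", re.I),
    "action": re.compile(r"\b(send|approve|escalate|close)\b", re.I),
}

def classify_intent(utterance: str) -> str:
    """Return the intent if exactly one rule matches (zero LLM cost);
    otherwise return 'ambiguous', which would escalate to the model."""
    hits = [name for name, pat in INTENT_PATTERNS.items() if pat.search(utterance)]
    if len(hits) == 1:
        return hits[0]       # deterministic path, no model call
    return "ambiguous"       # escalate to the LLM only when rules disagree
```

Most traffic resolves on the cheap path; only utterances matching zero or multiple rules pay for a model call.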

2. Route to system(s)

An object-ownership registry maps natural language to the correct system. 'Show me opportunities and shipments' becomes a parallel query against Salesforce and SAP, with cross-system identity mapping per user.
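An object-ownership registry can be as simple as a map from object type to system of record, with the routing step grouping objects by owner for parallel fan-out. The object names and system keys below are assumptions for illustration only.

```python
# Illustrative object-ownership registry: each object type maps to the
# system that owns it. Entries are examples, not the product's real registry.
REGISTRY = {
    "opportunity": "salesforce",
    "account":     "salesforce",
    "shipment":    "sap",
    "incident":    "servicenow",
}

def route(objects: list[str]) -> dict[str, list[str]]:
    """Group requested objects by owning system, so each system can be
    queried in parallel."""
    plan: dict[str, list[str]] = {}
    for obj in objects:
        plan.setdefault(REGISTRY[obj], []).append(obj)
    return plan
```

"Show me opportunities and shipments" would yield a two-system plan, one branch per system of record.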

3. Resolve identity

Each system asks 'who is this user?' before AIGIS asks 'what can they see?' If we can't map the user in a system, that system is excluded from the query. Never a service-account fallback.
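The exclusion rule above can be sketched as a filter over the routing plan: systems with no identity mapping for this user are dropped, and there is deliberately no service-account branch. The data shapes here are hypothetical.

```python
def resolve_targets(user_id: str, plan: dict, identity_map: dict):
    """Keep only systems where the user has a mapped identity.
    Unmapped systems are excluded outright; no service-account fallback."""
    resolved, excluded = {}, []
    for system, objects in plan.items():
        mapped = identity_map.get(user_id, {}).get(system)
        if mapped:
            resolved[system] = (mapped, objects)   # query as this user
        else:
            excluded.append(system)                # drop, and disclose later
    return resolved, excluded
```

The `excluded` list feeds the provenance note in the response, so the user knows which systems were left out and why.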

4. Enforce permissions (three tiers)

Object access. Field permissions. Record-level visibility. Live record access is checked on every query and never cached. Object and field permissions are warmed via delta sync for speed.

5. Strip and merge

Fields the user can't see are stripped (not masked) before the model sees the prompt. Multi-system results are merged and per-system provenance is logged in the ledger.
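The merge step can be illustrated as tagging each row with its source system while building the ledger entries in the same pass. Field stripping is assumed to have already happened upstream; the row shapes are hypothetical.

```python
def merge_with_provenance(per_system_results: dict):
    """Merge multi-system rows into one result set, tagging each row with
    its source system and recording per-system provenance for the ledger."""
    merged, ledger = [], []
    for system, rows in per_system_results.items():
        for row in rows:
            merged.append({**row, "_source": system})     # row-level provenance
        ledger.append({"system": system, "rows": len(rows)})
    return merged, ledger
```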

6. Generate, return, audit

The LLM (Claude, GPT, Gemini, or your own) generates a response from governed data only. The response, the provenance receipt, and the permission decisions are all logged for compliance.
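One way to picture a ledger entry: a timestamp, a digest of the governed prompt (never the raw data), the permission decisions, and the per-system provenance. The field names here are assumptions, not AIGIS's actual ledger schema.

```python
import hashlib
from datetime import datetime, timezone

def audit_entry(prompt: str, decisions: list, provenance: list) -> dict:
    """Build one illustrative compliance-ledger entry. The prompt is stored
    as a SHA-256 digest so the ledger never holds governed data itself."""
    return {
        "ts": datetime.now(timezone.utc).isoformat(),
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "decisions": decisions,       # e.g. per-field allow/strip verdicts
        "provenance": provenance,     # which systems contributed what
    }
```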

Live everything. Cached nothing that matters.

AIGIS never copies your data. Permissions are cached for speed, record access is checked live on every query, and every decision is logged in the provenance ledger.

01 User · asks a question
02 System Widget · SF / SAP / SNOW chat
03 AIGIS MCP · the brain
04 Permissions · cached + live
05 Target System · live query
06 LLM · any model
07 Response · filtered + provenance

Read path

Live SOQL, live OData, live SQL. Permission cache for speed, live record access for safety.

Write path

Human-in-the-loop. Permissions re-verified at confirmation time, not just at proposal time.
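The write path's two-phase check can be sketched as a gate that refuses to commit without explicit human confirmation, then re-runs the permission check at commit time rather than trusting the proposal-time verdict. All names here are illustrative.

```python
def confirm_write(proposal: dict, check_permission) -> str:
    """Human-in-the-loop write gate. Permissions are verified again at
    confirmation time, because access may have changed since the proposal."""
    if not proposal.get("human_confirmed"):
        return "pending"     # never auto-commits; waits for the user
    if not check_permission(proposal["user"], proposal["object"]):
        return "denied"      # re-check at commit time, fail closed
    return "committed"
```

The point of the second check: a proposal drafted at 9:00 must not succeed at 9:05 on permissions the user lost at 9:02.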

LLM path

Claude, GPT, Gemini, or your own. Swap on Friday. Failover is automatic.
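Automatic failover can be pictured as walking an ordered list of registered models, logging each miss. The provider names and callables below are placeholders, not a real SDK integration.

```python
def generate_with_failover(prompt: str, providers: list, log: list):
    """Try each registered model in order; log every failover hop.
    Raises only if every registered model fails."""
    for name, call in providers:
        try:
            return name, call(prompt)            # first healthy model wins
        except Exception as err:
            log.append(f"failover: {name} failed ({err})")
    raise RuntimeError("all registered models failed")
```

Because the chain is just an ordered list, "swap on Friday" amounts to reordering or re-registering providers, with no change to the governed path.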

Self-healing where it matters

Fail-closed, not fail-open.

Every governance decision defaults to denial. If the cache is missing, we go live. If the live check is missing, we deny. If a system is unreachable, we exclude it from the response, and we tell you we did.
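The fail-closed ladder above can be condensed into one decision function: use the cache if it answered, go live if it didn't, and deny when nothing answered. A minimal sketch, assuming `None` means "this layer could not answer".

```python
def decide(cache_result, live_check) -> bool:
    """Fail-closed decision ladder. Cache miss falls through to the live
    check; if the live check is also unavailable, the answer is deny."""
    if cache_result is not None:
        return bool(cache_result)     # cache answered
    if live_check is not None:
        return bool(live_check())     # cache miss: go live
    return False                      # nothing answered: deny by default
```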

Scenario

Cache miss

Permission cache lookup fails. We fall back to a live system query. The user waits roughly 200 ms longer, and no policy is bypassed.

Scenario

Identity mismatch

A system can't resolve the user, so that system is excluded from this query. The response carries an honest provenance note.

Scenario

LLM outage

Primary LLM (Claude) fails. Automatic failover to GPT or any registered model. Failover is logged.

Scenario

Stale permission window

Delta permission sync runs continuously, and record-level access is always checked live. Cache staleness can never leak record data; at worst it exposes stale field-level metadata, and the sync closes even that window within minutes.