Human-in-the-loop AI writes need confirmation-time permission checks.

The safest AI write path treats model output as a proposal, not an action. A human confirms the change, and the system re-verifies permissions before execution.

Executive read

The short version, before the deep dive.

AI-generated mutations should be proposals until a human explicitly confirms them.

Permissions can change between proposal and approval.

AIGIS re-checks object, field, and record permissions at confirmation time.

The write audit record should include proposal, approval, permission check, and execution result.
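The four elements above can be pictured as one record. This is a hypothetical shape with illustrative field names, not AIGIS's actual schema:

```python
import json

# Hypothetical audit record; every field name here is an
# illustrative assumption, not AIGIS's real schema.
audit_record = {
    "proposal": {
        "system": "crm",
        "record_id": "acct-1042",
        "fields": {"status": "closed"},
        "reason": "Customer requested account closure",
        "proposed_at": "2024-05-01T12:00:00+00:00",
    },
    "approval": {
        "approved_by": "user-7",
        "approved_at": "2024-05-01T12:03:10+00:00",
    },
    "permission_check": {
        "object": "pass",
        "field": "pass",
        "record": "pass",
        "checked_at": "2024-05-01T12:03:11+00:00",
    },
    "execution_result": {
        "status": "success",
        "applied_at": "2024-05-01T12:03:12+00:00",
    },
}

# The record serializes cleanly for append-only audit storage.
serialized = json.dumps(audit_record, indent=2)
```

Keeping all four parts in one record means a single audit entry answers who proposed what, who approved it, what the checks said, and what actually happened.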

Analysis

What matters

The stale-permission problem

An AI can draft a write while the user has access, but that access may change before the user approves it. If the system only checks permissions at proposal time, it has a classic time-of-check/time-of-use (TOCTOU) gap.

AIGIS closes that gap by treating confirmation as a new governance event.
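A toy timeline makes the gap concrete. The permission store and names here are illustrative assumptions, but the structure of the bug is general:

```python
# Toy illustration of the time-of-check/time-of-use gap.
# A permission snapshot taken at proposal time goes stale
# if access is revoked before the user confirms.

permissions = {("user-7", "acct-1042"): True}  # live permission store

def can_write(user: str, record: str) -> bool:
    """Fresh lookup against the live permission store."""
    return permissions.get((user, record), False)

# Proposal time: the check passes, and a naive system caches the answer.
checked_at_proposal = can_write("user-7", "acct-1042")
assert checked_at_proposal is True

# Access is revoked while the proposal sits awaiting approval.
permissions[("user-7", "acct-1042")] = False

# Confirmation time: the cached answer is stale. Only a fresh
# check reflects reality.
assert checked_at_proposal is True                 # stale snapshot still says yes
assert can_write("user-7", "acct-1042") is False   # fresh check says no
```

The fix is to throw away the proposal-time answer and run `can_write` again at the moment of confirmation, which is exactly the re-check AIGIS performs.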

Proposal is not execution

The model can describe the intended change, the affected system, the record, the fields, and the reason. That proposal is displayed to the user for explicit approval.

Only after approval does AIGIS re-check write permissions and call the target system. If any check fails, the write is denied and logged.
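The approve-then-re-check-then-execute path can be sketched as follows. All names here (`Proposal`, `confirm_and_execute`, the three check functions) are assumptions for illustration, not AIGIS's real API:

```python
from dataclasses import dataclass

# Sketch of a proposal-then-confirm write path. The check functions
# are stand-ins for real object-, field-, and record-level checks.

@dataclass
class Proposal:
    system: str
    record_id: str
    fields: dict
    reason: str

audit_log: list[dict] = []

def check_object(user: str, p: Proposal) -> bool: return True
def check_field(user: str, p: Proposal) -> bool: return True
def check_record(user: str, p: Proposal) -> bool: return user == "user-7"

def execute_write(p: Proposal) -> str:
    return "success"  # stand-in for the call to the target system

def confirm_and_execute(user: str, p: Proposal) -> str:
    # Re-run all three permission layers at confirmation time,
    # regardless of what was checked when the proposal was drafted.
    checks = {
        "object": check_object(user, p),
        "field": check_field(user, p),
        "record": check_record(user, p),
    }
    result = execute_write(p) if all(checks.values()) else "denied"
    # Every outcome, allowed or denied, is appended to the audit log.
    audit_log.append({
        "proposal": p,
        "user": user,
        "permission_check": checks,
        "result": result,
    })
    return result

p = Proposal("crm", "acct-1042", {"status": "closed"}, "customer request")
first = confirm_and_execute("user-7", p)   # passes all checks
second = confirm_and_execute("user-9", p)  # fails the record check
```

Note that the denied path still writes an audit entry: a refused write is as much a governance event as an executed one.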

Why this matters for trust

Enterprise users will not trust AI that silently mutates business systems. They will trust AI that clearly proposes, asks, verifies, executes, and records what happened.

Human-in-the-loop is not a UX afterthought. It is part of the security boundary.