
Field-level security for LLMs means stripping, not masking.

Traditional applications can hide a value while still knowing the field exists. LLMs are different: if the model sees a masked field, it can reason about the field's existence.

Executive read

The short version, before the deep dive.

Masking hides values, but still reveals structure.

Stripping removes inaccessible fields before prompt construction.

The LLM should never receive field names the user cannot access.

Field-level security has to apply to reads, summaries, and write proposals.

Analysis

What matters

Why masking is not enough

A masked prompt can still leak structure. If the model sees Contact.SSN__c with a redacted value, it still learns that the field exists and can use that fact in its reasoning or output.

That is acceptable in some reporting interfaces, but it is too much disclosure for a language model that can infer relationships across the entire prompt.
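A minimal sketch makes the leak concrete. The record, permission set, and redaction token below are illustrative assumptions, not AIGIS internals: masking replaces the value, but the field name survives into the model's context.

```python
# Hypothetical record and permission set (assumptions for illustration).
record = {"Name": "Ada Lovelace", "SSN__c": "123-45-6789"}
readable = {"Name"}  # fields this user may read

# Masking: hide the value but keep the key.
masked = {
    field: (value if field in readable else "[REDACTED]")
    for field, value in record.items()
}

# The key "SSN__c" is still present, so a prompt built from this
# dict tells the model the field exists.
```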

The stripping approach

AIGIS removes inaccessible fields before the model context is assembled. The model receives Account.Name and Account.Owner, but it does not receive inaccessible fields, placeholder tokens, or hints that those fields exist.

This creates a cleaner trust boundary: the LLM can only reason about the data the user is allowed to know.
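The stripping step can be sketched as a filter that runs before prompt assembly. The record shape and permission set are assumptions for illustration; the point is that inaccessible field names never reach the serialized context, with or without placeholders.

```python
# Hypothetical record and permission set (assumptions for illustration).
record = {"Name": "Acme Corp", "Owner": "jdoe", "SSN__c": "123-45-6789"}
readable = {"Name", "Owner"}  # fields this user may read

# Stripping: drop inaccessible fields entirely before the prompt is built.
stripped = {
    field: value for field, value in record.items() if field in readable
}

# Serialize only the surviving fields into the model context.
prompt_context = "\n".join(f"{field}: {value}" for field, value in stripped.items())

# "SSN__c" appears nowhere in prompt_context: no name, no placeholder, no hint.
```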

Write proposals need the same protection

Field-level security is not only a read concern. If an AI proposes a write to a field the user cannot edit, that proposal itself is a governance failure.

AIGIS checks write permissions before proposal and again at confirmation time, so stale access cannot slip through between suggestion and execution.
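The two-checkpoint pattern can be sketched as follows. Function names and the permission store are assumptions, not the actual AIGIS API; the essential property is that the permission check runs once before a proposal is surfaced and again against current permissions at confirmation time.

```python
def can_edit(perms: set[str], field: str) -> bool:
    """Assumed permission check: user may edit `field` iff it is in `perms`."""
    return field in perms

def propose_write(perms: set[str], field: str, value: str) -> dict:
    # Checkpoint 1: never surface a proposal for a field the user cannot edit.
    if not can_edit(perms, field):
        raise PermissionError(f"cannot propose write to {field}")
    return {"field": field, "value": value}

def confirm_write(current_perms: set[str], proposal: dict) -> dict:
    # Checkpoint 2: re-verify against *current* permissions, so access
    # revoked between suggestion and execution is caught here.
    if not can_edit(current_perms, proposal["field"]):
        raise PermissionError(f"access revoked for {proposal['field']}")
    return proposal
```

If permissions change between the two calls, the confirmation check fails and the stale proposal never executes.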

Comparison

The decision table below compares the two approaches.

Control                       | Masking | Stripping
------------------------------|---------|----------
Value hidden                  | Yes     | Yes
Field existence hidden        | No      | Yes
Structural inference reduced  | Partial | Yes
Prompt context minimized      | No      | Yes
Best for LLM governance       | Limited | Preferred