Examples - SerhatSoruklu/chatpdm GitHub Wiki
These examples show actual system behavior under pressure, not just successful lookups.
Each example demonstrates the same rule:
- resolve exactly when the input matches the authored system
- refuse or narrow cleanly when it does not
The point is not that ChatPDM can return an answer.
The point is that it does not let incorrect meaning enter the system silently.
## Semantic Drift (Critical Example)
```
Input A: "The tenant must vacate within 30 days."
Input B: "The tenant must vacate within one month."
→ NOT equivalent
```
Why this matters
These statements appear interchangeable but are not.
Impact if wrong
"30 days" and "one calendar month" can produce different legal deadlines, especially across months of different lengths.
Without ChatPDM
A system may treat these as equivalent and enforce an incorrect deadline.
With ChatPDM
The difference is detected and not silently accepted.
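The deadline divergence can be sketched with standard-library date arithmetic. The helper names `deadline_30_days` and `deadline_one_month` are hypothetical, and the month arithmetic is simplified (it assumes the start day exists in the following month):

```python
from datetime import date, timedelta

def deadline_30_days(start: date) -> date:
    # "within 30 days": a fixed span of exactly 30 calendar days
    return start + timedelta(days=30)

def deadline_one_month(start: date) -> date:
    # "within one month": the same day number in the next calendar month
    # (simplified: assumes that day exists in the next month)
    year = start.year + start.month // 12
    month = start.month % 12 + 1
    return date(year, month, start.day)

notice = date(2024, 2, 1)  # notice served 1 Feb 2024 (a leap year)
print(deadline_30_days(notice))   # 2024-03-02
print(deadline_one_month(notice)) # 2024-03-01
```

For this start date the two readings disagree by one day; for other start dates they can disagree by more, which is exactly why the inputs must not be treated as equivalent.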
## Exact Match
```
GET /api/v1/concepts/resolve?q=authority
→ concept_match
→ Authority
```
Why this matters
A canonical concept was requested directly, so the system can resolve it without drift.
Impact if wrong
If a system loosely rewrites or reinterprets even exact concept requests, the canonical layer stops being trustworthy.
Without ChatPDM
A system may still return something that looks correct, but there is no guarantee it is anchored to the authored concept boundary.
With ChatPDM
The request resolves directly to the canonical concept and stays inside the defined domain.
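In client code, the exact-match path can be modeled as a strict lookup. The response shape here (`status` and `concept` keys) is an assumption based on the transcript above, not a documented schema:

```python
def resolve_exact(response: dict) -> str:
    # Accept only a direct canonical resolution; anything else is an error.
    if response.get("status") != "concept_match":
        raise LookupError("no exact canonical concept match")
    return response["concept"]

# Hypothetical parsed response for GET /api/v1/concepts/resolve?q=authority
response = {"status": "concept_match", "concept": "Authority"}
print(resolve_exact(response))  # Authority
```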
## Subtype (No Exact Match)
```
GET /api/v1/concepts/resolve?q=civic%20duty
→ no_exact_match
→ No exact canonical concept match
→ suggestion: duty
```
Why this matters
"Civic duty" sounds related, but it is not the same as a canonical authored concept in the system.
Impact if wrong
If the system silently upgrades related language into a canonical concept, it introduces semantic drift at the point of interpretation.
Without ChatPDM
A loose system may treat "civic duty" as if it were already defined and resolve it anyway.
With ChatPDM
The system refuses exact resolution, preserves boundary integrity, and suggests the nearest canonical concept without pretending equivalence.
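The refuse-and-suggest behavior can be sketched the same way; again the field names are assumptions. The key property is that the suggestion is surfaced to the caller but never silently substituted for the query:

```python
def resolve_strict(response: dict) -> str:
    if response.get("status") == "concept_match":
        return response["concept"]
    # no_exact_match: refuse, but carry the nearest canonical concept
    # in the error so a human (not the resolver) decides what to do.
    suggestion = response.get("suggestion")
    raise LookupError(
        f"no exact canonical match; nearest canonical concept: {suggestion!r}"
    )

response = {"status": "no_exact_match", "suggestion": "duty"}
try:
    resolve_strict(response)
except LookupError as err:
    print(err)  # no exact canonical match; nearest canonical concept: 'duty'
```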
## Out of Scope (Example)
Query:

```
"spiritual authority in nature"
→ out_of_scope OR refusal
```

Reason:
- outside governance domain
- no valid concept mapping
Why this matters
The input contains language that may be meaningful in a broader human or philosophical sense, but it does not belong to the governed domain ChatPDM is authorized to resolve.
Impact if wrong
If the system tries to interpret out-of-domain language anyway, it stops being a deterministic governance system and becomes a guessing engine.
Without ChatPDM
A general system may produce a fluent interpretation that sounds intelligent while crossing the domain boundary.
With ChatPDM
The system refuses or marks the input out of scope instead of manufacturing meaning.
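Taken together, the three outcomes above suggest one strict client-side dispatcher. The statuses and field names are assumptions from the transcripts in this page, and any unrecognized status is treated as an error rather than interpreted:

```python
def dispatch(response: dict) -> str:
    status = response.get("status")
    if status == "concept_match":
        return response["concept"]  # resolve exactly
    if status == "no_exact_match":
        # narrow safely: expose the nearest canonical concept, nothing more
        raise LookupError(
            f"nearest canonical concept: {response.get('suggestion')!r}"
        )
    if status in ("out_of_scope", "refusal"):
        raise ValueError("query is outside the governed domain")  # refuse clearly
    raise ValueError(f"unrecognized status: {status!r}")

for resp in [
    {"status": "concept_match", "concept": "Authority"},
    {"status": "out_of_scope"},
]:
    try:
        print(dispatch(resp))
    except (LookupError, ValueError) as err:
        print(f"refused: {err}")
```

Every branch either returns a canonical concept or raises; there is no path that fabricates a fallback answer.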
## Key Observation
The system does not:
- stretch meaning
- reinterpret loosely
- generate approximations
- promote related language into canonical meaning without authority
It either:
- resolves exactly
- narrows safely
- or refuses clearly
That is the point of the system.
It is not designed to sound plausible.
It is designed to prevent silent semantic drift.