Symbolic Cognition in Context: Functional Reasoning and Dynamic Activation in the Neurograph Engine - asynkline/neurograph GitHub Wiki
The Glyphic Cognitive Model began as an attempt to formalize cognition not from neurotypical norms, but from the expressive logic of non-verbal and visually-structured thought. This was rooted in three axioms, which remain central to Neurograph’s evolving architecture:
- That visual languages like ASL reveal core communicative mechanisms in non-verbal cognition—where meaning is spatial, compositional, and relational
- That neurodivergent thinking often involves simultaneous awareness of multiple semantic layers, requiring systems that can constrain, not just store, conceptual breadth
- That languages like Mvskoke and Chinese demonstrate that meaning does not need to emerge from sequential phonemic construction, but from hierarchical and visual-semantic compression
These principles guide not only the structure of Neurograph’s symbolic core, but its function: a system designed to activate only relevant meaning in bounded context, while preserving access to broader conceptual hierarchies when needed.
Since that post, Neurograph—the symbolic reasoning engine that extends from those principles—has evolved to handle not just structural composition, but functional reasoning. Where the first stage involved defining `%Glyph{}` and `%Scene{}` types to represent symbolic entities and the relationships between them, the second stage introduces copulas, articles, and quantifiers as composable functors.
## From Structure to Function
Rather than treating relationships like `:is_a` or `:holds` as static edges in a graph, Neurograph now interprets them as functional copulas—dynamic evaluators that resolve meaning based on the context of the scene, the referents involved, and the activation of quantifiers or articles. For example:

```elixir
Copula.evaluate(:is_a, Article.resolve(:the, :cup), Article.resolve(:a, :container), scene)
```
This reflects how cognition operates under constraint: activating only the relevant referent structures in a given context. Articles like `:the`, `:a`, `:some`, and `:every` act as functional modifiers, determining scope, cardinality, and semantic resolution.
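As a rough sketch of what article resolution could look like, consider the following. The `Article` module body, the map-based scene shape, and the glyph fields here are illustrative assumptions, not Neurograph's actual API:

```elixir
defmodule Article do
  # Hypothetical sketch of article resolution against a scene's glyphs:
  # :the demands a unique referent, :a accepts any one, :every takes all.
  def resolve(:the, concept, scene) do
    case Enum.filter(scene.glyphs, &(&1.type == concept)) do
      [only] -> {:ok, only}
      _ -> {:error, :not_unique}
    end
  end

  def resolve(:a, concept, scene) do
    case Enum.find(scene.glyphs, &(&1.type == concept)) do
      nil -> {:error, :none}
      glyph -> {:ok, glyph}
    end
  end

  def resolve(:every, concept, scene) do
    {:ok, Enum.filter(scene.glyphs, &(&1.type == concept))}
  end
end

# A toy scene with one cup and two containers (shape is illustrative).
scene = %{glyphs: [
  %{id: :cup1, type: :cup},
  %{id: :c1, type: :container},
  %{id: :c2, type: :container}
]}

Article.resolve(:the, :cup, scene)        # the unique cup resolves
Article.resolve(:the, :container, scene)  # :the fails: referent not unique
```

The key design point this sketch tries to capture is that an article is not decoration but a resolution strategy: `:the` can fail where `:a` succeeds.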
## Quantifiers and Curried Logic
Neurograph now supports quantifiers as first-class symbolic actors:
```elixir
Quant.resolve(:all, :container, context)
|> Enum.filter(&Copula.evaluate(:holds, &1, :fluid, scene))
```
This allows queries and inferences such as:
- "Do all containers hold fluid?"
- "Does some subset of glyphs classified as `:tool` have a `:cutting` function?"
Quantifiers are curried with articles and copulas, forming a symbolic pipeline that reflects real-world reasoning.
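A minimal sketch of how such a quantified query might be answered end to end. The `Quant` and `Copula` bodies and the `contents` field are invented for illustration; only the pipeline shape comes from the example above:

```elixir
defmodule Quant do
  # Hypothetical: resolve a quantifier to the candidate set it ranges over.
  def resolve(:all, concept, context), do: Enum.filter(context.glyphs, &(&1.type == concept))
  def resolve(:some, concept, context), do: resolve(:all, concept, context)
end

defmodule Copula do
  # Hypothetical: :holds is true when the subject's contents include the object.
  def evaluate(:holds, subject, object, _scene), do: object in Map.get(subject, :contents, [])
end

context = %{glyphs: [
  %{id: :jar, type: :container, contents: [:fluid]},
  %{id: :box, type: :container, contents: []}
]}

# "Do all containers hold fluid?" -> false, because :box holds nothing.
all_hold? =
  Quant.resolve(:all, :container, context)
  |> Enum.all?(&Copula.evaluate(:holds, &1, :fluid, context))

# "Does some container hold fluid?" -> yes, the :jar glyph survives the filter.
some_holding =
  Quant.resolve(:some, :container, context)
  |> Enum.filter(&Copula.evaluate(:holds, &1, :fluid, context))
```

The universal question reduces to `Enum.all?` over the resolved set, while the existential one reduces to a non-empty `Enum.filter` result.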
## Preventing Conceptual Explosion: Scene Scoping and Contextual Constraint
As the system grows more expressive, it becomes vulnerable to conceptual explosion—the overloading of a single concept with too many meanings. Take `:water`:
- A fluid (chemistry)
- A solvent (chemistry)
- A drink (dietary)
- A cleansing agent (domestic, medical)
- A geographic feature (geography)
If all of these definitions are active simultaneously, the reasoning engine becomes semantically incoherent. To prevent this, Neurograph now scopes scenes to explicit paradigms:
```elixir
%Scene{paradigm: :medical}
```
This context constrains which copulas, glyphs, and inferences are active. The concept of `:water` in a `:medical` scene may resolve as a drinkable, essential fluid, while in a `:geography` scene, it may be defined by flow, volume, or spatial containment.
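One way to picture this scoping is a per-paradigm sense table. The `Paradigm` module and its data below are purely illustrative; in the real engine the scoping presumably lives in the scene graph itself rather than a flat map:

```elixir
defmodule Paradigm do
  # Illustrative sense table: the same concept carries different active
  # senses depending on which paradigm the scene declares.
  @senses %{
    water: %{
      chemistry: [:fluid, :solvent],
      dietary: [:drink],
      medical: [:drinkable, :essential_fluid],
      geography: [:flow, :volume, :spatial_containment]
    }
  }

  def active_senses(concept, %{paradigm: paradigm}) do
    get_in(@senses, [concept, paradigm]) || []
  end
end

# Only the scoped senses are active; every other sense stays latent.
Paradigm.active_senses(:water, %{paradigm: :medical})
```

An unknown paradigm simply activates nothing, which is the safe failure mode: no senses are better than all senses at once.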
This mirrors the cognitive behavior of most individuals who maintain latent knowledge in long-term memory but activate only the relevant subset in a given task. It's not ambiguity—it's compression. It's survival.
## The Copula-Article-Quantifier Stack
What emerges is a fully composable symbolic grammar:
- `Article.resolve/3` handles referential scope
- `Quant.resolve/3` determines cardinality and generalization
- `Copula.evaluate/3` asserts or tests truth based on symbolic scene state
All of this operates atop the `%Glyph{}` and `%Scene{}` graph, preserving the original visual-linguistic and neurodivergent inspirations of the project.
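A toy end-to-end composition of the three layers might read as follows. All three module bodies are assumptions invented for this sketch; only the `/3` arities mirror the stack described above:

```elixir
defmodule Article do
  # Toy referential scope: pick a single referent of the given type.
  def resolve(:the, concept, scene), do: Enum.find(scene.glyphs, &(&1.type == concept))
end

defmodule Quant do
  # Toy cardinality: :every ranges over all glyphs of the given type.
  def resolve(:every, concept, scene), do: Enum.filter(scene.glyphs, &(&1.type == concept))
end

defmodule Copula do
  # Toy truth test: :is_a checks membership in a glyph's subtype list.
  def evaluate(:is_a, subject, object), do: object in Map.get(subject, :isa, [])
end

scene = %{glyphs: [
  %{id: :cup1, type: :cup, isa: [:container]},
  %{id: :cup2, type: :cup, isa: [:container]}
]}

# "Is the cup a container?"
the_cup_is_container =
  Copula.evaluate(:is_a, Article.resolve(:the, :cup, scene), :container)

# "Is every cup a container?"
every_cup_is_container =
  Quant.resolve(:every, :cup, scene)
  |> Enum.all?(&Copula.evaluate(:is_a, &1, :container))
```

The same copula serves both questions; only the layer that selects referents changes, which is what makes the stack composable.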
## The Next Stage
The next layer is coming soon: `Scene.activate/2` and `ContextLoader` modules that allow dynamic paradigm switching, scoped glyph loading, and pluggable ontologies.
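Since those modules are still forthcoming, the following is pure speculation about their shape: `Scene.activate/2` swaps the paradigm and asks a `ContextLoader` for only the glyphs that paradigm scopes in. Every name and data shape here is a placeholder:

```elixir
defmodule ContextLoader do
  # Stand-in for a pluggable ontology source, keyed by paradigm.
  @ontologies %{
    medical: [%{id: :water, senses: [:drinkable, :essential_fluid]}],
    geography: [%{id: :water, senses: [:flow, :volume, :spatial_containment]}]
  }

  def load(paradigm), do: Map.get(@ontologies, paradigm, [])
end

defmodule Scene do
  defstruct paradigm: nil, glyphs: []

  # Speculative: activating a paradigm reloads only the glyphs it scopes in,
  # so switching paradigms swaps the active senses in one step.
  def activate(%Scene{} = scene, paradigm) do
    %Scene{scene | paradigm: paradigm, glyphs: ContextLoader.load(paradigm)}
  end
end

scene = Scene.activate(%Scene{}, :medical)
```

Under this sketch, switching a scene from `:medical` to `:geography` is a single `activate/2` call that replaces the loaded glyph senses wholesale.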
This isn’t about emulating cognition from the outside. It’s about implementing the mechanisms we use internally when language fails, memory floods, or attention overloads.
The Glyphic Model was a proposal. Neurograph is taking the next steps.