Component Stamp In NARS - wpower12/opennarswikiv2 GitHub Wiki
The design and function of Stamp in NARS.
A Stamp (nars.entity.Stamp) serves several related functions in NARS. It is a component of a Sentence (nars.entity.Sentence), though conceptually some of its components are actually about the TruthValue or DesireValue of the sentence.
A Stamp contains an evidential base, a derivation chain, a creation time, and an optional occurrence time. Once created, all the information in a stamp remains unchanged.
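As a concrete picture of these four components, here is a minimal Python sketch. The field names are illustrative, not the actual fields of the Java class nars.entity.Stamp; the frozen dataclass mirrors the rule that a stamp never changes after creation:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)  # frozen: a stamp never changes once created
class Stamp:
    evidential_base: Tuple[int, ...]   # serial numbers of input sentences
    derivation_chain: Tuple[str, ...]  # premises/conclusions along the derivation
    creation_time: int                 # value of the inference-cycle clock
    occurrence_time: Optional[int] = None  # None stands for "eternal"

# an input sentence's stamp: its evidential base is just its own serial number
s = Stamp(evidential_base=(7,), derivation_chain=(), creation_time=42)
```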
NARS is designed according to an experience-grounded semantics, and therefore the truth-value of a statement indicates the relation between a statement and certain evidence collected from the system's experience.
The system's experience is the stream of input sentences, each of which is uniquely identified with a serial number, starting from 1. At the current stage, it is assumed that the system will never run out of serial numbers in its lifetime.
The evidential base of a sentence is a list of serial numbers defined above.
For an input sentence, its evidential base contains only the serial number assigned to it when it enters the system.
Since a Question has no truth-value, it has no evidential base either. A Goal has a desire-value, which is a variant of truth-value, so it does have an evidential base.
Before the revision rule is applied to a pair of candidate premises, the evidential bases of the two are compared, and revision happens only when the two lists are disjoint, i.e., have no common element. Without this restriction, the revision rule could use the same evidence repeatedly to increase the confidence of a conclusion. However, if the same sentence occurs multiple times in experience, each instance gets a different serial number, so the instances can still be used by the revision rule to derive more confident conclusions. What is ruled out is using the same evidential source more than once, via revision, to strengthen a statement that depends on it.
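The disjointness check can be sketched in Python as follows (the function name is hypothetical, not part of the OpenNARS API):

```python
def revision_allowed(base1, base2):
    # revision is permitted only when the two evidential bases share no
    # serial number, so no piece of evidence is counted twice
    return set(base1).isdisjoint(base2)
```

Two instances of the same input sentence carry different serial numbers, so their bases are disjoint and revision between them remains allowed.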
The evidential base of the conclusion drawn from two sentences (a belief and the task sentence) is conceptually the union (or concatenation) of their evidential bases, while the derivation chain additionally records the two sentences themselves. However, it is not directly implemented this way; otherwise the time and space needed for evidential bases and derivation chains would grow exponentially with the length of the inference chain. Under the assumption of insufficient resources, the maximum size of an evidential base and of a derivation chain must be a constant (given by the system parameters MAXIMUM_EVIDENTAL_BASE_LENGTH and MAXIMUM_DERIVATION_CHAIN_LENGTH). The evidential bases of the premises are interwoven, then cut at the maximum length.
In this way, the order of elements in the list matters, and the two premises are treated equally. It would not work as well if the list were turned into a set and the union of the two sets reduced to the maximum size, because then the serial numbers in the conclusion might come from only one parent.
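One plausible reading of this interweave-and-cut step, sketched in Python (the constant and function names are illustrative, and the actual OpenNARS code may differ in details such as the direction of interleaving):

```python
MAX_EVIDENTIAL_BASE_LENGTH = 8  # stands in for MAXIMUM_EVIDENTAL_BASE_LENGTH

def merge_evidential_bases(base1, base2, limit=MAX_EVIDENTIAL_BASE_LENGTH):
    # take elements alternately from each parent, so both parents are
    # represented, then cut the result at the maximum length
    merged = []
    for i in range(max(len(base1), len(base2))):
        if i < len(base1):
            merged.append(base1[i])
        if i < len(base2):
            merged.append(base2[i])
    return merged[:limit]
```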
Though this design works fine for most cases, it has the limitation that it cannot detect a common ancestor beyond what the maximum length allows.
The derivation chain keeps track of all premises and conclusions in the specific rule-application steps that led to the derivation of the new statement. Every derived sentence has a derivation chain, regardless of the type of the sentence (judgment, question, goal, etc.).
Before a non-revision rule is applied to a pair of candidate premises to derive a new conclusion C, the derivation chains of both premises are checked for C; the rule application is allowed only if neither contains it. This policy is established to prevent circular inference. Given the reversibility of the syllogistic rules in NAL, circular inference could happen if no restriction were made when premises are selected for inference. A derived conclusion that depends on itself for its own derivation would be faulty circular reasoning, which must be avoided.
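The check can be sketched as follows (hypothetical names; the statements are written in Narsese notation for illustration):

```python
def derivation_allowed(conclusion, chain1, chain2):
    # allow the rule application only if neither premise's derivation
    # chain already contains the would-be conclusion
    return conclusion not in chain1 and conclusion not in chain2
```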
The derivation chain also helps in debugging and tuning the system: it can be visualized to show where a conclusion comes from.
Similar to the evidential base, the derivation chain has a maximum length, to avoid unlimited resource demands, which could cause errors.
The creation time of a sentence is either when it enters the system (for input sentences) or when it is generated by the system (for derived sentences), indicated by the internal clock, which measures time in inference cycles.
When the system starts, the time is 0. Then it is increased by one in each working cycle. Therefore, it is possible for multiple sentences to have the same creation time, as long as they are accepted or generated within the same working cycle. However, they will still have different serial numbers in their evidential bases.
This information can be used for various purposes, by either the logic part or the control part of the system.
This component is introduced to implement the temporal inference specified in NAL-7. It indicates the moment at which the sentence has the specified truth-value.
Since the system cannot have information about every sentence at every moment, the truth-value of a belief can be projected from one moment t1 to another moment t2, with a "confidence loss" factor proportional to the distance of the projection relative to the current moment t: c = c0 * (1 - |t1 - t2| / (|t1 - t| + |t2 - t|)). A truth-value without tense is considered eternal, and can be used at any moment with the same confidence. A temporal truth-value can also be eternalized (via "temporal induction"), so that the conclusion has confidence c = c0 / (c0 + k). This means that when the confidence loss of a time projection is larger than that of eternalization, the latter is used.
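The two formulas above, and the choice between them, can be sketched in Python (the function names are illustrative; k stands for a constant system parameter):

```python
def project(c0, t1, t2, now):
    # confidence after projecting a belief from moment t1 to moment t2,
    # judged relative to the current moment `now`
    if t1 == t2:
        return c0  # no projection distance, no confidence loss
    return c0 * (1 - abs(t1 - t2) / (abs(t1 - now) + abs(t2 - now)))

def eternalize(c0, k=1.0):
    # confidence of the eternalized (tenseless) version of the belief
    return c0 / (c0 + k)

def usable_confidence(c0, t1, t2, now, k=1.0):
    # use eternalization when projection would lose more confidence
    return max(project(c0, t1, t2, now), eternalize(c0, k))
```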
For an input sentence, its occurrence time is determined by its tense: no tense means "eternal", present tense means the current time (as indicated by the internal clock), and past/future tense means the current time minus/plus a constant (a system parameter).
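This tense-to-time mapping can be sketched as follows (the offset value is a placeholder for the unnamed constant parameter, and None stands for "eternal"):

```python
TENSE_OFFSET = 3  # placeholder for the constant parameter mentioned above

def occurrence_time(tense, now):
    # map an input sentence's tense to its occurrence time
    if tense is None:
        return None  # no tense: the sentence is eternal
    if tense == "present":
        return now
    if tense == "past":
        return now - TENSE_OFFSET
    if tense == "future":
        return now + TENSE_OFFSET
    raise ValueError("unknown tense: " + tense)
```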
For a derived sentence, its occurrence time is that of its parent task, and its parent belief's truth-value is projected or eternalized, if necessary. Some inference rules are exceptions to this default policy. For example, the temporal detachment rule {a, a =/> b} |- b sets the occurrence time of b to after that of a.