JavaScript Model Tracer - CMUCTAT/CTAT GitHub Wiki
This page details the ways in which rule authors can organize production rule models for the JavaScript model tracer, as well as options for configuring the rule engine and interacting with it at run-time. For information on writing rules, see the nools documentation (note that only the nools DSL syntax is used for CTAT model tracer models).
Contents:

- Overview
- Required Types
- Model Structure
- Initializing Working Memory
- Backtracking
- Evaluating Student Input
- Correct Step Prediction
- Providing Feedback Messages
- Configuring Model Behavior
- Setting Tracing Flags
- Browser Console Interactivity
- Logging Custom Fields
- Defining Skills
- Hints and Techniques
## Overview

The JavaScript Model Tracer uses the nools forward-chaining production rule engine to run tutor models. There are, however, differences between rule files written for use directly with nools and tutor models written for use with the model tracer. One important difference is that the model tracer uses its own version of the `modify` function to alter facts in working memory. To modify facts in tutor models, use the signature `modify(<fact>, <property>, <value>)` rather than the signature described in the nools documentation.
## Required Types

The following fact types must be defined by a given model in order for certain tracer functionality to work.

- `StudentValues`: Required for all models. This is the type of fact that represents student input in the tracer's working memory.

```
define StudentValues {
  selection: null,
  action: null,
  input: null,
  constructor: function(s, a, i) {
    this.selection = s;
    this.action = a;
    this.input = i;
  }
}
```
- `TPA`: Required for any model which generates TPAs (tutor-performed actions). TPAs are sent by asserting `TPA` facts from inside rules' `then` blocks.

```
define TPA {
  selection: null,
  action: null,
  input: null,
  constructor: function(s, a, i) {
    this.selection = s;
    this.action = a;
    this.input = i;
  }
}
```
- `Hint`: Required for any model which generates hints. Hints are sent by asserting `Hint` facts from inside rules' `then` blocks. Hints will appear in order of precedence, from high to low; hints with equal precedence will appear in the order they were asserted.

```
define Hint {
  precedence: 0,
  msg: "",
  constructor: function(m, optPrecedence) {
    this.msg = m;
    this.precedence = optPrecedence || 0;
  }
}
```
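The hint ordering described above (higher precedence first, ties in assertion order) can be sketched in plain JavaScript. This is an illustration of the semantics only, not the tracer's implementation; `sortHints` is a hypothetical helper name.

```javascript
// Sketch of hint ordering: higher precedence first; hints with equal
// precedence keep their assertion order. Array.prototype.sort is stable
// in modern JavaScript engines, so sorting by descending precedence
// is sufficient to preserve the assertion order among ties.
function sortHints(hints) {
  return hints.slice().sort((a, b) => b.precedence - a.precedence);
}

const asserted = [
  { msg: "general hint", precedence: 0 },
  { msg: "specific hint", precedence: 2 },
  { msg: "another general hint", precedence: 0 },
];

const ordered = sortHints(asserted).map(h => h.msg);
console.log(ordered); // ["specific hint", "general hint", "another general hint"]
```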
- `IsHintMatch`: Required for any model which sets the "use_hint_fact" configuration parameter to `true`. See Configuring Model Behavior for more information.

```
define IsHintMatch {
  constructor: function() {
  }
}
```
## Model Structure

For any non-trivial model, it's a good idea to break up the model definition into multiple files. This makes models easier to understand and maintain, and allows authors to re-use the same sets of rules and types with different initial problem states. A given model can be broken up into any number of files, but there are four general kinds of file to consider when writing a model:
- Rule File: all of the rule and function definitions for the model
- Types File: type definitions for the model
- Problem File: any necessary problem-specific information
- Skills File: skill definitions and rule-to-skill mappings for the model
A typical problem might consist of these four files organized like so: `problem_file (imports skills_file and rule_file (imports types_file))`
## Initializing Working Memory

Authors will usually want the rule engine's working memory to be in a particular state when a problem begins. This can be accomplished using a "bootstrap" rule: a rule which fires immediately when the model is loaded and initializes the problem state in its `then` block. For example, the bootstrap rule of an addition problem might look like:
```
rule bootstrap {
  when {
    // assume addend1 and addend2 are global variables defined in a
    // problem-specific file which imports this model
    a1: Number from addend1;
    a2: Number from addend2;
  }
  then {
    assert(new Addend(a1));
    assert(new Addend(a2));
    // and so on...
    halt();
  }
}
```
For a model with no given values, the following `when` block can be used instead:

```
when {
  b: Boolean b === true from true;
}
```
If these `when` blocks look strange, see the nools documentation for an explanation of the `from` keyword.

**Important:** be sure to call `halt()` at the end of the `then` block in your bootstrap rule to prevent the rule engine from starting to execute immediately on load.
**Fact references to other facts:** If you want properties in your facts to refer to other facts (to create trees or other data structures, for example), it is usually best to use names, not direct object references, and then include patterns in your rules to find the facts themselves. Direct object references to facts in working memory don't always work. To use names instead, you will need a string-valued "name" property in each of your fact types, and you will typically want to give each fact a unique name. To see what this means, assume you have the following two types:
```
define InterfaceElement { // represents a user interface component in working memory
  name: null,  // component's name; the selection in a selection-action-input tuple
  value: null, // component's current value; the input in selection-action-input
  constructor: function(n) {
    this.name = n;
  }
}

define Problem {
  interfaceElement1: null, // name of the fact representing the component used on the 1st step
  constructor: function(ie1Name) {
    this.interfaceElement1 = ie1Name;
  }
}
```
Then we recommend you initialize working memory as follows:
```
rule bootstrap {
  when {
    s: Boolean s === false from false;
  }
  then {
    let ie1 = assert(new InterfaceElement("step1TextInput"));
    assert(new Problem(ie1.name)); // do not use ie1 -- store the unique name instead of the reference
    halt();
  }
}

rule Step1 {
  when {
    prob : Problem {interfaceElement1: sel};
    ie : InterfaceElement ie.name == sel && ie.value == null; // match on the name
  }
  then {
    assert(new Hint("Type 33 in text input " + sel + "."));
    if (checkSAI({selection: sel, action: "UpdateTextField", input: 33})) {
      modify(ie, "value", 33);
      halt();
    }
  }
}
```
## Backtracking

Backtracking allows the model tracer to explore all possible solution paths independently of one another. It accomplishes this by saving the state of the model any time it finds more than one new rule activation on the agenda. If at a later point there are no more activations on the agenda and `halt()` has not been called, the tracer "backtracks" by restoring the model to the most recently saved state, then fires the activation subsequent to the one fired the last time the model was at that point. This continues until the search is ended by a call to `halt()` or all possible activation chains have been fired.

Authors can force the tracer to backtrack by calling `backtrack()` from within a rule's `then` block. The tracer will backtrack after executing the rest of the `then` block, even if there are still activations on the agenda.

Backtracking is disabled by default in the model tracer. To enable it, add the following call to the `then` block of your model's bootstrap rule: `setProblemAttribute("use_backtracking", true)`.
## Evaluating Student Input

Any time a model predicts a potential student step, its prediction can be checked against the most recent student input by calling the function `checkSAI(predictedSAI, optionalComparator, isBuggyStep)`, where:

- **predictedSAI** is the step predicted by the model, in the form of an object with the properties `selection`, `action`, and `input`.
- **optionalComparator** is an optional comparison function for student and model SAIs. It should take two objects as arguments and return `true` for match or `false` for no-match. The first argument will be the student SAI, in the form of an object with properties `selection`, `action`, and `input`. The second argument will be the `predictedSAI` argument passed to `checkSAI()`. The function itself may be defined in the global scope, in the imported `.nools` files, or inline in the caller's argument. If this argument is supplied, the comparison result will be the function's return value. If this argument is omitted, the comparison result will be a match if each of the student SAI's selection, action, and input properties equals the corresponding property in the `predictedSAI` argument; if any property is not equal, the result will be no-match.
- **isBuggyStep** is a boolean representing whether the predicted step should be considered a correct action. If this parameter is `false` or omitted, and none of the fields of `predictedSAI` have the value `"not_specified"`, the step is considered correct.
`checkSAI` returns the result of the comparison as a boolean value: `true` if the student input matches the tutor prediction, `false` if not.
Authors can have the `checkSAI` function ignore any of the selection, action, and input properties by setting those they want ignored to the string `"not_specified"`. This is useful for pruning the search space the tutor explores while deferring actual evaluation until further down the chain. A predicted SAI with any field set to `"not_specified"` will not be considered a correct step. To match any value for a given field and still have the prediction considered correct (i.e., wildcard matching), set that field to `"don't_care"` instead.
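The default comparison and the two special field values can be summarized in plain JavaScript. This is an illustrative sketch of the semantics described above, not the tracer's actual code; `defaultCompare` and `isCorrectStep` are hypothetical names.

```javascript
// Sketch of the default SAI comparison: each field of the prediction
// must equal the student's field, except that "don't_care" and
// "not_specified" match anything. A prediction containing
// "not_specified" can still match, but the step is not considered
// correct (see isCorrectStep).
function defaultCompare(studentSAI, predictedSAI) {
  return ["selection", "action", "input"].every(f =>
    predictedSAI[f] === "don't_care" ||
    predictedSAI[f] === "not_specified" ||
    studentSAI[f] === predictedSAI[f]);
}

function isCorrectStep(predictedSAI, isBuggyStep) {
  return !isBuggyStep &&
    !["selection", "action", "input"].some(f => predictedSAI[f] === "not_specified");
}

const student = { selection: "sum", action: "UpdateTextField", input: "12" };
console.log(defaultCompare(student, { selection: "sum", action: "UpdateTextField", input: "12" }));         // true
console.log(defaultCompare(student, { selection: "sum", action: "UpdateTextField", input: "don't_care" })); // true
console.log(isCorrectStep({ selection: "sum", action: "not_specified", input: "12" }, false));              // false
```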
Here's an example of how `checkSAI()` might be used in a model for adding integers:
```
rule DetermineIntegerSum {
  when {
    a1: Addend a1.value !== null;
    a2: Addend a2.value !== null && a2 !== a1;
    sum: Sum sum.value === null;
  }
  then {
    var ans = a1.value + a2.value;
    var predictedSAI = {selection: sum.inputComponentName, action: "UpdateTextField", input: ans};
    if (checkSAI(predictedSAI)) {
      modify(sum, "value", ans);
      halt();
    }
  }
}
```
In the following example, an `optionalComparator` function compares floating-point numbers, rounding to 2 digits after the decimal point:
```
rule DetermineDecimalSum {
  when {
    a1: Addend a1.value !== null;
    a2: Addend a2.value !== null && a2 !== a1;
    sum: Sum sum.value === null;
  }
  then {
    var s_sum = null; // to capture student's input
    var predictedSAI = {selection: sum.inputComponentName, action: "UpdateTextField", input: a1.value + a2.value};
    if (checkSAI(predictedSAI, function(s, t) {
      s_sum = s.input; // save for call to modify(), below
      return s.selection == t.selection && parseFloat(s.input).toFixed(2) == parseFloat(t.input).toFixed(2);
    })) {
      modify(sum, "value", s_sum);
      halt();
    } else {
      backtrack();
    }
  }
}
```
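The comparator above tolerates small representation differences between the student's input and the model's prediction. A quick check of the rounding behavior, in plain JavaScript independent of the tracer (`roundedEqual` is a hypothetical name for the comparison it performs):

```javascript
// Two inputs are considered equal when they round to the same value
// at 2 decimal places, mirroring the comparator in the rule above.
function roundedEqual(a, b) {
  return parseFloat(a).toFixed(2) === parseFloat(b).toFixed(2);
}

console.log(roundedEqual("0.333", 1 / 3));  // true  (both yield "0.33")
console.log(roundedEqual("0.336", "0.34")); // true  (both yield "0.34")
console.log(roundedEqual("0.3", "0.34"));   // false ("0.30" !== "0.34")
```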
Below, `checkSAI()` validates only the student's selection and action; it accepts any input. This rule uses the `StudentValues` fact to capture the student's input.
```
rule enterAnything {
  when {
    p : Problem p.initialString == null {ieInitialString: sel};
    sv : StudentValues;
  }
  then {
    assert(new Hint("Type anything."));
    if (checkSAI({selection: sel, action: "UpdateTextField", input: "don't_care"})) {
      modify(p, "initialString", sv.input.trim()); // remove surrounding whitespace
      halt();
    } else {
      backtrack();
    }
  }
}
```
## Correct Step Prediction

As the model runs, the tracer keeps track of series, or "chains," of successive rule firings. Once the match cycle is complete, the full set of these chains constitutes the search space, or conflict tree, for that cycle, where a single chain is a path through the conflict tree from root to leaf. The tracer then selects one chain from this set to represent the model's step prediction, i.e., the proper step for the student to take at that point. If, during the match cycle, a correct (non-buggy) `checkSAI` call was successful, then the rule chain along which that call was made will be selected. Otherwise, the tracer selects a chain based on the following priorities (highest to lowest):
- the cycle was a hint cycle AND the chain generated hints
- the model-set priority was highest (set by calling `setChainPriority(n)`)
- the SAI predicted by the chain matches the student selection
- the chain was predicted first by the model (rule salience)
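The tie-breaking order above can be sketched as a comparator over candidate chains. This is only an illustration of the listed priorities, not the tracer's code; the field names (`generatedHints`, `priority`, `selectionMatched`, `predictionOrder`) are hypothetical.

```javascript
// Sketch of chain selection: compare two candidate chains by the
// priorities above, highest first. A negative return value means
// chain a is preferred; positive means chain b is preferred.
function compareChains(a, b, isHintCycle) {
  const byHints = isHintCycle ? (b.generatedHints - a.generatedHints) : 0;
  if (byHints !== 0) return byHints;                              // hint cycle AND generated hints
  if (b.priority !== a.priority) return b.priority - a.priority;  // setChainPriority(n)
  if (b.selectionMatched !== a.selectionMatched)
    return b.selectionMatched - a.selectionMatched;               // selection matched student input
  return a.predictionOrder - b.predictionOrder;                   // predicted first (salience)
}

const chains = [
  { generatedHints: 0, priority: 0, selectionMatched: 1, predictionOrder: 1 },
  { generatedHints: 0, priority: 5, selectionMatched: 0, predictionOrder: 0 },
];
const best = chains.slice().sort((a, b) => compareChains(a, b, false))[0];
console.log(best.priority); // 5 (model-set priority outranks selection match)
```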
## Providing Feedback Messages

To send a feedback message to the student, authors can call the function `setSuccessOrBugMsg(message)`, where:

- **message** is a string whose text is to be rendered in the CTATHintWindow or other feedback component. If the string begins with `<html>` and ends with `</html>`, then the CTATHintWindow will render the HTML markup.
In the example below, a buggy rule calls `setSuccessOrBugMsg()` to provide an error message for any input value. The rule is meant to work in tandem with the DetermineIntegerSum rule above; its salience is lower so that the correct rule fires first.
```
rule BuggyDetermineSum {
  salience: -2; // ensure that this rule fires after the corresponding correct rule, since any input will match
  when {
    a1: Addend a1.value !== null;
    a2: Addend a2.value !== null && a2 !== a1;
    sum: Sum sum.value === null;
  }
  then {
    var ans = a1.value + a2.value;
    var predictedSAI = {selection: sum.inputComponentName, action: "UpdateTextField", input: "don't_care"};
    if (checkSAI(predictedSAI, null, true)) { // 3rd argument true for buggy rules
      setSuccessOrBugMsg("<html>Your sum should be <b>" + ans + "</b> instead.</html>");
      backtrack();
    }
  }
}
```
## Configuring Model Behavior

Authors can use the `setProblemAttribute(<attribute>, <value>)` function from inside a bootstrap rule to control how the rule engine behaves. The valid attributes are listed in the table below:
| Name | Values | Description | Default |
| --- | --- | --- | --- |
| `"use_backtracking"` | true, false | Whether the model should use backtracking when searching for a match | false |
| `"prune_old_activations"` | true, false | Whether "old" activations should be allowed to fire. An activation is "old" if it was not generated by the last match cycle | false |
| `"use_hint_fact"` | true, false | If true, a fact of type `IsHintMatch` will be asserted in working memory at the start of every hint match cycle and retracted at the end of the cycle. The `IsHintMatch` type must be defined in any model that sets this parameter to true | false |
| `"hint_out_of_order"` | true, false | If true, the model tracer will provide feedback to the tutor interface when a step is taken out of order (meaning no steps predicted by the model during the last match cycle shared the "selection" property of the input) | false |
| `"search_all_permutations"` | true, false | If true, the model tracer will explore all permutations of a given set of activations on the agenda at a given point. If false, the tracer will only create branch points (backtracking checkpoints) at states where there is at least one new activation, and at least two total activations, on the agenda | true |
| `"substitute_input"` | true, false | If true, the tracer will replace matched student SAIs in the interface with the tutor-predicted SAI that the student SAI matched | false |
So, using the bootstrap rule from the previous section as an example, we could put the engine in backtracking mode by changing the `then` block like so:

```
then {
  assert(new Addend(a1));
  assert(new Addend(a2));
  setProblemAttribute("use_backtracking", true);
  halt();
}
```
## Setting Tracing Flags

As the rule engine executes student input against a model, it produces tracing information that can be useful for debugging the model or simply following along with changes to the model's state. Distinct types of tracing information are each associated with their own flag, and which flags are set determines what information is made visible at run-time. Users can set flags using the function `setTracerLogFlags([flag1], [flag2], ... [flagN])`. Information is printed to the browser console (F12 to open in Firefox and Chrome).

Users can unset flags using the function `unsetTracerLogFlags([flag1], [flag2], ... [flagN])`. To print which flags are set and which are not, use the function `getTracerLogFlags()`. Valid flags are listed in the table below:
| Flag | Prints | When |
| --- | --- | --- |
| `"state_save"` | IDs of all activations on the agenda | A branch point is reached (more than one new activation on the agenda) |
| `"state_restore"` | IDs of all activations on the agenda | A branch point is returned to as a result of backtracking |
| `"agenda_insert"` | The ID of the activation added to the agenda, whether it was new, and whether it was skipped | An activation is added to the agenda |
| `"agenda_retract"` | The ID of the activation removed from the agenda | An activation is removed from the agenda |
| `"fire"` | The ID of the activation about to fire | An activation is about to fire |
| `"assert"`/`"modify"`/`"retract"` | The type of fact, its ID number, and its values in JSON format | A fact has been asserted, modified, or retracted |
| `"backtrack"` | "backtracking," and whether it was triggered by the model (called from within a rule) or the engine (no more valid activations on the agenda) | The model backtracks |
| `"error"` | An error message | An error has occurred |
| `"debug"` | Various debugging messages, mostly to do with internal workings of the model tracer | N/A |
| `"sai_check"` | Both SAIs, whether or not they were found to match, and whether the model's prediction was a buggy step | Student input is compared to steps predicted by the model |
| `"agenda_pre"` | The IDs of all activations on the agenda | A match cycle is about to start |
| `"agenda_post"` | The IDs of all activations on the agenda | A match cycle has ended |
| `"tpa"` | The TPA fact asserted | A TPA fact has been asserted |
| `"conflict_tree"` | The conflict tree for the last match cycle | A match cycle finishes |
## Browser Console Interactivity

The following table lists a set of functions, callable at run-time, and their uses. These functions are globally defined, so they can be called through the browser console or by custom scripts.
| Function Call | Description |
| --- | --- |
| `printAgenda()` | Print the IDs of all activations on the agenda at the time the function is called |
| `printFact([factID])` | Print the type and property values (in JSON format) of the fact in working memory with ID [factID], or "No Such Fact" if no fact with that ID exists |
| `printFacts([factType])` | Print all facts of type [factType], or, if no type is provided, print all facts in working memory |
| `getFact([factID/type])` | Returns the fact with ID [factID]. If a type string is passed instead, returns the first fact of that type found. If neither is passed, returns the first fact found |
| `getFacts([type])` | Returns a list of all facts of type [type]. If no type is provided, returns a list of all facts |
| `printBreakpoints()` | Prints the names of all rules on which breakpoints are currently set |
| `printRules([substr])` | Prints the names of all rules whose name contains [substr]. If [substr] is not provided, prints the names of all rules |
| `printMatch([CTNodeID])` | If an SAI match-check (`checkSAI()`) was made as a result of firing the activation associated with [CTNodeID], prints the student/tutor SAIs and the result of the check; the argument should be the integer inside the brackets beside the node in the `printConflictTree()` output |
| `whyNot([ruleName])` | For each constraint of the given rule, print whether it is currently matched by facts in working memory. If it is matched, also print all possible fact bindings for that constraint's alias |
| `setStepperMode([true\|false])` | Enable or disable stepper mode, which allows you to fire an arbitrary number of activations at a time, rather than run a complete match cycle for every input. Disabling stepper mode causes normal execution to resume immediately |
| `takeSteps([numSteps])` | If in stepper mode, causes [numSteps] rule activations to fire. [numSteps] defaults to 1. Has no effect if stepper mode is not enabled |
| `setBreakpoint([ruleName], ["first"\|"every"\|"none"])` | Set or clear a break-point for a given rule. Rule execution will halt immediately before a rule with a break-point set fires. Passing "first" as the second argument sets a break-point only on the next activation of that rule; "every" sets a break-point for every activation of that rule; "none" clears any existing break-point on that rule |
| `resume()` | Resume normal execution after a break-point causes execution to halt |
| `printConflictTree([firedOnly])` | Print a formatted list of rule names that have appeared on the agenda during the last or current match cycle. Each node's children are those activations that were on the agenda at the time that node was fired. If [firedOnly] is true, only activations that were fired will be displayed (default false). Activations which called `checkSAI` are preceded by a three-character string representing which fields of the student SAI matched the SAI predicted by the tutor: each character represents one field of the SAI; a letter in a position signifies that that field was matched, and a '-' signifies that it was not. For example, 'SA-' means that the Selection and Action fields of the two SAIs matched, and the Input field did not. This string is followed by a letter signifying the ultimate result of the match |
## Logging Custom Fields

Custom fields are a way for models to generate arbitrary name-value pairs and return them to the tracer environment for logging. Authors can make use of custom fields by including the `CustomField` fact definition in their model and asserting a `CustomField` fact whenever they want a field to be logged. At the end of every match cycle, any `CustomField` facts that were asserted during that match will be propagated out to the tracer. Here's an example:
```
// CustomField fact definition
define CustomField {
  name: null,
  value: null,
  constructor: function(n, v) {
    this.name = n;
    this.value = v;
  }
}

rule SayHello {
  when {
    /* (some constraint here) */
  }
  then {
    /* ... */
    assert(new CustomField("msg", "hello world!"));
    /* ... */
  }
}
```
## Defining Skills

Authors can associate rules with skills in one of two ways. First, an author can declare a global variable called `skill_definitions` in the problem-specific `.nools` file. The variable must be initialized to an array of objects, each of which represents a skill. When the package is uploaded to Tutorshop, Tutorshop will look for this global definition in the problem file to determine the skills automatically. These objects should have the following properties:
- ruleName: the name of the rule associated with this skill; optional from CTAT release 4.5 onwards
- category: the category the skill belongs to
- opportunities: the number of opportunities to exercise this skill that exist in this problem
- skillName: the name of the skill (optional: defaults to ruleName)
- label: the display name for the skill (optional: defaults to skillName)
- description: a description of the skill, seen in Tutorshop (optional: defaults to skillName)
Following the match cycle, all skills in the `skill_definitions` array whose `ruleName` property matches the name of a rule fired along the successful chain will be added to the list of skills reported to the tracer. An example skill declaration might look like this:
```
// In the problem-specific .nools file
global skill_definitions = [
  {
    skillName: "given",
    category: "squaring",
    ruleName: "enterGiven",
    opportunities: 1,
    label: "Enter Given",
    description: "Choose a number the algorithm supports"
  },
  {
    skillName: "firstPart",
    category: "squaring",
    ruleName: "findFirstPart",
    opportunities: 2,
    label: "Find First Part",
    description: "Modify the given number in the 1st step of the algorithm"
  }
];
```
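The matching step described above (skills whose `ruleName` appears among the rules fired along the successful chain are reported) can be sketched in plain JavaScript. This is an illustration only; `reportedSkills` and `firedRuleNames` are hypothetical names, not tracer API.

```javascript
// Sketch: select the skills to report by intersecting skill_definitions
// with the names of rules fired along the successful chain.
const skill_definitions = [
  { skillName: "given", category: "squaring", ruleName: "enterGiven", opportunities: 1 },
  { skillName: "firstPart", category: "squaring", ruleName: "findFirstPart", opportunities: 2 },
];

function reportedSkills(definitions, firedRuleNames) {
  const fired = new Set(firedRuleNames);
  return definitions.filter(d => fired.has(d.ruleName));
}

const names = reportedSkills(skill_definitions, ["bootstrap", "enterGiven"]).map(s => s.skillName);
console.log(names); // ["given"]
```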
Second, rules can instead be associated with skills individually, under explicit program control, by asserting facts of the type `Skill` from a rule's `then` clause. This method requires a definition of the `Skill` fact type. With this method, Tutorshop will not be able to determine the skills automatically upon upload unless (from CTAT release 4.5 onwards) a `skill_definitions` array is defined as shown above, but with the optional ruleName property omitted. For example:
```
// skill_definitions without ruleName: CTAT release 4.5 onward
global skill_definitions = [
  {
    skillName: "given",
    category: "squaring",
    opportunities: 1,
    label: "Enter Given",
    description: "Choose a number the algorithm supports"
  },
  . . .
];

// Skill fact definition
define Skill {
  name: null,
  category: null,
  constructor: function(n, c) {
    this.name = n;
    this.category = c;
  }
}

// Skill fact assertion
rule rule1 {
  when {
    /* some constraint */
  }
  then {
    /* ... */
    assert(new Skill("given", "squaring"));
    if (checkSAI(...)) {
      /* ... */
    }
  }
}
```
When a skill is asserted by this method, the tracer associates that skill with the current chain of rule activations being fired. For more information on how the tracer chooses a chain to represent an interaction, see Correct Step Prediction above.
For more information about skills, see Skill Attributes and Behavior.
## Hints and Techniques

Here are some tips that might help.

We recommend the Apache HTTP Server, available free of charge at http://httpd.apache.org/. It is implemented for Windows and Linux, and comes pre-installed on recent releases of Mac OS X. If you set Apache's DocumentRoot directive to a parent of the CTAT directory available to the HTML Editor, then you can test your student interfaces in your browser with URLs like this:
```
http://localhost:80/CTAT/FractionAddition/HTML/fractionAddition.html?question_file=../CognitiveModel/1416.nools
```
where:

- 80 is the port number in your Apache server's Listen directive (the default is 80);
- CTAT is the file system path (which may descend through several folders) below Apache's DocumentRoot to your packages;
- FractionAddition/HTML/fractionAddition.html is the package path to your student interface file;
- ../CognitiveModel/1416.nools is the path to the top-level, problem-specific .nools file, relative to the student interface.
On the Chrome and Firefox browsers, the Developer Tools, invoked from the menu at the upper right corner, provide interactive debugging aids and access to `printAgenda()` and the other functions described above. The most useful tabs on the tools panel are these:

- **Network** shows the browser's requests to the network, including downloads of all the files retrieved by your HTML page, both those referred to by `<script>` and other tags and those loaded by JavaScript;
- **Console** provides interactive access to `printAgenda()` and the other functions described above.
- If the user interface fails to load and you see your rules printed to the console, scroll up to look for a syntax error description. The beginning of the error diagnostic shows where the rule parser failed, but the actual error might be earlier in your text. We recommend text editors that help you match parentheses, such as Sublime Text (https://www.sublimetext.com/) and many others.
- Pressing the up-arrow repeatedly at the console prompt retrieves prior entries. You may want to develop a macro, e.g., `printFacts(); printConflictTree(); printAgenda()`, to reenter by this means after every step.