Cognitive Tutors (rule-based) - CMUCTAT/CTAT GitHub Wiki

Cognitive Tutors (rule-based)

CTAT supports two different engines for tutors whose domain knowledge is encoded in production rules, an artificial-intelligence technology used in expert systems. For new tutors, we recommend the JavaScript-based Nools engine described in the section JavaScript Model Tracer. The rest of this section details the support for cognitive tutors using Jess, the Java Expert System Shell. Both sections describe the files that form a cognitive model and tutor, and the programming and debugging tools available in CTAT. For a full tutorial, see Creating a Cognitive Tutor for fraction addition.

Table of Contents

  1. Files and file types
    1. Production Rules file
    2. Jess Templates file (.clp)
    3. Jess Facts file (.wme)
    4. Behavior Graph file
  2. Cognitive Tutor (Jess) Tools
    1. Behavior Recorder
    2. Working Memory (WME) Editor
    3. Conflict Tree
    4. Why Not? Window
    5. Jess Console
    6. Breakpoints
  3. Writing Jess Production Rules
    1. Anatomy of a production rule
    2. Naming a production rule
    3. Predicting a student action
    4. Providing hints or feedback messages
    5. Jess Resources
  4. Writing Jess Functions
    1. Writing a matcher function
    2. Evaluating the result of a function in a production rule

Files and file types

Besides the student interface, a CTAT cognitive tutor problem consists of the following required files:

  • productionRules.pr: the production rules file; contains Jess production rules, each defined using the (defrule) construct.

  • wmeTypes.clp: the Jess templates file; contains the templates available in working memory, each defined using the (deftemplate) construct; can be generated by CTAT for the currently loaded student interface and behavior graph.

  • problem-name.wme: the Jess facts file for the named behavior graph (BRD); contains an initial representation of working memory for the problem; can be generated by CTAT for the currently loaded student interface and behavior graph.

  • problem-name.brd: the behavior graph for the problem; need only contain a start state node that describes the initial state of the problem: for tutoring, a full problem-solving graph is superfluous—the tutoring is provided by the model-tracing algorithm—but such a graph can be used for semi-automated testing and problem state navigation.

By default, CTAT looks for these files in a directory named "CognitiveModel", which should be a subfolder in your package folder. For example, if your CTAT workspace folder is "CTAT", you might have a package folder with the following layout:

CTAT\
  MyJessTutor\
    CognitiveModel\
    FinalBRDs\
    HTML\

Note

The "CognitiveModel" directory is not created automatically when you create a package (from CTAT or the CTAT HTML Editor). If you are creating a cognitive tutor, you will need to add the directory to your package manually.

Production Rules file

The production rules file, productionRules.pr, contains the Jess production rules that model student procedural knowledge and misconceptions.

Jess functions are typically defined at the top of the production rules file so that production rules can use them. Alternatively, functions can be defined in the Jess templates file, or in a separate file which is referenced via a (require*) function that appears at the bottom of the templates file. See the Jess manual for more on the require* function and its reciprocal function, provide.
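As an illustration of a function defined at the top of the production rules file, a sketch follows. The function name and the arithmetic are hypothetical, not part of CTAT; a real model would compute the least common denominator properly rather than taking the product.

```jess
;; Hypothetical helper function, defined before any rules that use it.
;; For illustration only: returns a common denominator of two
;; denominators by taking their product.
(deffunction common-denominator (?d1 ?d2)
  (return (* ?d1 ?d2)))
```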

CTAT loads the production rules file whenever a new behavior graph is loaded, or when the start state of a loaded behavior graph is clicked.

Jess Templates file (.clp)

The templates file, wmeTypes.clp, contains definitions of Jess templates. A template in Jess is a description of a fact type; every fact in working memory has a template. See the Jess reference documentation on templates for more on this construct.
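For example, a template describing a text field in the student interface might look like the following sketch (the template and slot names here are illustrative, not generated by CTAT):

```jess
;; Hypothetical template for a text-field interface element.
(deftemplate textField
  (slot name)    ; the component's name in the student interface
  (slot value))  ; the text currently entered in the field
```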

CTAT loads this file when a behavior graph is loaded; or when the start state of a behavior graph is created.

The templates file typically ends with the following line to notify CTAT that Jess templates have been parsed:

(provide wmeTypes)

CTAT creates an initial set of Jess templates for you to use. See Initial working memory contents for more information.

Jess Facts file (.wme)

The facts file contains the Jess facts that make up working memory for a given problem. CTAT loads this file (if it exists) when a behavior graph is loaded; otherwise, CTAT creates default facts for the problem. See Initial working memory contents for more information on this process.

One facts file (with file extension .WME) should exist for each behavior graph (with file extension .BRD).

Note

The facts file must have the same name as the behavior graph file (less the filename extension difference) for CTAT to load it.

The facts file typically begins with the following line to specify to CTAT the templates file to read:

(require* wmeTypes "wmeTypes.clp")
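Following that line, the file establishes the initial facts. A minimal sketch, assuming the default problem and hint templates that CTAT generates (described under Initial working memory contents); the problem name is hypothetical:

```jess
(require* wmeTypes "wmeTypes.clp")

;; Hypothetical initial facts; slot names follow the default
;; problem and hint templates generated by CTAT.
(assert (problem (name "fraction-addition-1")))
(assert (hint (now FALSE)))
```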

Behavior Graph file

The behavior graph is the file that represents the starting state of a problem, and, optionally, the correct and incorrect steps that comprise student problem-solving behavior for that problem.

In a cognitive tutor (Jess), the start state of the behavior graph is the only required node: the start state provides the initialization information necessary for CTAT to create a Jess representation of the problem. Tutoring is provided to the student based on the cognitive model and model-tracing algorithm; therefore, no other graph information is required. A complete behavior graph, however, can be used for semi-automated testing.

Cognitive Tutor (Jess) Tools

CTAT provides a number of tools for planning, testing, and debugging cognitive models authored in Jess. These tools are:

  • Behavior Recorder: supports planning and testing cognitive models.
  • Working Memory (WME) Editor: used for cognitive model development; allows an author to inspect and modify the contents of the cognitive model's working memory.
  • Conflict Tree: debugging tool that provides information on activation paths explored by the model-tracing algorithm, including partial activations; displays the rule-predicted and observed selection, action, input values.
  • Why Not?: launched from the Conflict Tree; debugging tool that provides detailed information on rule activations and partial activations by displaying the values of variables referenced in the rule; includes an embedded Working Memory (WME) Editor for examining the values of working memory slots and facts.
  • Jess Console: command line for interacting directly with the Jess interpreter; helpful for carrying out debugging strategies not directly supported by CTAT.

To see the Jess tools in CTAT:

  • Select "Cognitive Tutor (Jess)" in the Tutor Type drop-down box at the top-left of the CTAT window.

    The Working Memory (WME) Editor, Conflict Tree, and Jess Console windows should display along with a Graph window.


Behavior Recorder

In addition to helping you to construct an Example-tracing tutor, the Behavior Recorder can aid in cognitive model planning, development, and testing.

As a planning tool, the Behavior Recorder allows you to:

  • map the solution space, or the realm of student behavior for which the cognitive model should account; and
  • associate knowledge components with steps in the graph, which provides an idea of the quantity and quality of skills to be modeled as production rules.

As a testing tool, the Behavior Recorder allows you to:

  • perform semi-automated regression testing by checking a cognitive model against all states of the behavior graph; and
  • jump to states in the graph, moving both working memory and the student interface to the recorded state.

Planning with the Behavior Recorder

Before developing your cognitive model, consider creating a few representative Example-tracing problems with the student interface you're planning to use for the cognitive tutor. These graphs can be used for planning and testing as described below.

Annotate the steps of a behavior graph using knowledge component labels. By labeling the steps in the graph, you are performing a form of cognitive task analysis: you are determining how the overall problem-solving skill breaks down into smaller knowledge components. Each of these knowledge components is likely to be formalized as a production rule you will write. By annotating the graph with knowledge component labels, you've identified the set of skills for which your model must account. (See Skills for more on creating knowledge component labels and viewing a knowledge component matrix.)

Testing with the Behavior Recorder

Behavior graphs can also serve as test cases for a cognitive model. In this way, a behavior graph is a specification for how the model should behave on the steps of the problem.

To test a cognitive model against a behavior graph:

  1. Load the behavior graph into the Behavior Recorder (File > Open Graph).
  2. Check that the cognitive model has loaded by entering the command (rules) in the Jess Console. The console should print the names of your production rules and end with a count of the total number of rules.
  3. Select Cognitive Model > Test Cognitive Model on All Steps, or press CTRL+T.

Two indicators will appear notifying you of the results of the test. The first is the test report window (shown below).

Production Model Test Report Window

Figure: Production Model Test Report Window

This report describes the results of a comparison between the graph's specification of correctness for an ordered list of steps and the model-tracing algorithm's evaluation of that same list of steps. Here, the term step refers to a student action (technically represented by a selection-action-input triple).

The test operates by first determining the possible paths (or path) from the start state to the done state, the last state of the graph. For each of these paths, the model-tracing algorithm traces the path to each step and presents its evaluation (correct, incorrect) to the test. The test compares the link type in the graph to the result of the model-tracing algorithm's trace. A comparison is consistent if the link type defined in the graph matches the evaluation by the model tracer; it is inconsistent if the two do not match, or if a state in the graph is unreachable by the model-tracing algorithm. A state is unreachable if it appears beyond a buggy (incorrect-action) link in the graph, or if it appears in the graph but is not traced by the model-tracing algorithm.

The report also references good and bad changes. As the report indicates, a good change is from inconsistent to consistent; a bad change is from consistent to inconsistent. This comparison is presented if the Test Cognitive Model on All Steps command has been run previously during the authoring session. Typically, you run the test and upon finding inconsistencies, modify the production rules and run the test again.

A second indicator is the "last cognitive model check" label on graph links. These labels can be displayed by selecting View > Last Cog. Model Check Label. Each label shows the link number followed by a letter that indicates the result of the last check.

  • C: Correct
  • U: No model
  • B: Buggy
  • F: Fireable bug
  • N: Not applicable

Last Cognitive Model Check labels on links

Figure: Last Cognitive Model Check labels on links

Lastly, the Behavior Recorder allows you to jump to recorded states, moving both working memory and the student interface to the desired state. To jump to a recorded state, click the desired state in the behavior graph. Note the updates to the student interface, to the working memory window, and to the conflict tree as the cognitive model is traced against the steps outlined in the graph.

Working Memory (WME) Editor

The working memory (WME) editor allows you to inspect and modify a cognitive model's working memory at any time. At any given state of a problem, the templates and facts shown in the WME Editor reflect the contents of working memory after the step is model-traced.

Working Memory (WME) Editor

Figure: Working Memory (WME) Editor

Tip

Don't see the Working Memory (WME) Editor? Ensure that the Tutor Type is set to "Cognitive Tutor (Jess)". If you still do not see the WME Editor, show it by selecting Windows > Show Window > WME Editor.

The Working Memory Editor consists of two panels. The top panel shows a tree of all of the templates and facts in working memory and, above the tree, are two text fields. The bottom panel shows details of the template or fact selected in the tree.

In the top panel, the tree displays Jess templates as folder icons with facts being contained in those folders. You can click on a template or fact to select it. Right-click (Windows) or Control-click (Mac) on the background of the panel to show a menu of actions you can perform. In addition to actions that refresh, expand, and collapse the tree, the actions specific to working memory are:

  • Go To Problem Fact: Select the problem fact in the tree.
  • Back: Select the fact that was selected prior to the currently selected fact.
  • Forward: Reverse of Back.
  • New Template: Create a new, empty template.
  • New Fact: Create a new fact for the selected template (or the template of the selected fact).
  • New Slot: Create a new slot in the selected template (or the template of the selected fact).
  • Delete: Delete the selected template or fact. If the selected template has facts, those facts will also be deleted.

    Warning! Deleting cannot be undone. Make sure you have selected the template or fact that you really want to delete.

The text fields above the tree allow you to search for and filter facts shown in the tree:

  • Search by name: For any fact that has a "name" slot value, you can enter the name. As you enter each character, the tree updates to show only the facts that match.

    Tip: Remember to clear the Search by name field after you find the fact you searched for so that all facts are displayed again.

  • Search by fact ID: Type a fact number and press Enter. If a fact with that number exists in working memory, the fact will be selected in the tree.

In the bottom panel of the WME Editor, the currently selected template or fact is displayed.

  • Double-click to change the value of a Slot or Slot Value. Press Enter to submit your new value.
  • Single-click the value of a Type to choose between "slot" and "multislot".
  • Right-click (Windows) or Control-click (Mac) on a Slot value to add a new slot or delete the selected slot.
  • If a Slot Value contains a fact (e.g., <Fact-2>), Right-click (Windows) or Control-click (Mac) on the value to go to that fact.

Initial working memory contents

When you create or enter the start state of a problem, CTAT determines the contents of working memory and displays the results in the Working Memory (WME) Editor.

CTAT determines the initial contents of working memory in one of two ways: if a .WME file (with the same name as the problem) and/or a wmeTypes.clp file exist, CTAT will parse them and display working memory contents in the WME Editor. If either of these files is missing, CTAT uses templates and facts that it has generated for the given problem. This content is generated at the time you create the problem start state.

Working memory generated by CTAT contains:

  • a Jess template that represents the problem;
  • a single Jess fact of type problem, described below;
  • a Jess template that represents a hint request;
  • a single Jess fact of type hint, described below.

The problem fact contains the following:

  • (slot name): a name of the problem. The default is the name of the start state of the problem.
  • (multislot interface-elements): contains references to interface-element facts. Empty by default.
  • (multislot subgoals): can be used to maintain references to goal facts. Empty by default.
  • (slot done): can be used by the done rule to access a value that is stored there when the problem is completed. "nil" by default.
  • (slot description): a description of the problem. "nil" by default.

The hint fact contains the following:

  • (slot now): can be used to determine if a hint has been requested. The default is FALSE.
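Put together, the generated templates correspond to definitions along these lines, reconstructed from the slot descriptions above:

```jess
;; Sketch of the CTAT-generated templates, reconstructed from the
;; default slots described above.
(deftemplate problem
  (slot name)
  (multislot interface-elements)
  (multislot subgoals)
  (slot done)
  (slot description))

(deftemplate hint
  (slot now (default FALSE)))
```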

Tip

It's often useful to use the initial working memory representation created by CTAT as a starting point for your cognitive model. This initial structure is typically expanded to include interface components and to account for non-visual structures or more complex facts (e.g., facts composed of other facts). To do so, first save the facts and templates to files for editing. (The initial working memory contents are not saved to file, but loaded into working memory.)

Tip

In Jess, it is also possible to match interface and subgoal facts directly, without using the problem fact.

Modifying working memory with the WME Editor

You can modify working memory using the Working Memory (WME) Editor. We recommend, however, that you first save the initial working memory representation created by CTAT, and make any changes to working memory in the problem .WME file and wmeTypes.clp templates file.

To add a template to working memory:

  1. Right-click (Windows) or CTRL+click (Mac) anywhere in the Working Memory (WME) Editor's list of templates and facts.
  2. Select New Template.

A template named "New Template" is added containing one (slot name).

To rename a template in working memory:

  1. Click the template in the list of templates to select it.
  2. In the bottom half of the window, enter a new name in the text field to the right of the word 'Template:'.
  3. Press the Enter key. The template listed in working memory will update to reflect the new name.

To add a fact to working memory:

  1. Right-click (Windows) or CTRL+click (Mac) the template for which your new fact will be a member.
  2. Select New Fact.

Caution

Changing a fact's Slot or Type value may have unexpected effects as slot and type are attributes defined by the template, not the fact. We recommend that you edit and save the templates or even the initial set of facts, but leave CTAT to manage the facts of a tutor problem.

To edit a fact in working memory:

  1. Click the fact in working memory that you'd like to edit in the list of templates and facts.
  2. Enter a new value in the Slot Value column.
  3. Press the Enter key.

Saving working memory

Use the commands below to save working memory.

Caution

Executing Save Jess Templates will save a Jess templates file named wmeTypes.clp, overwriting (if it exists) the current wmeTypes.clp file in the cognitive model folder. If you wish to preserve the existing wmeTypes.clp file, create a copy in another directory or rename the file before executing this command.

To save the Jess templates:

  • Select File > Save Jess Templates.

A message is displayed showing the location where the wmeTypes.clp file was saved, which should be the "CognitiveModel" folder of your package.

Caution

Only save facts when you are in the start state of the problem. If you save facts while in a different state, the student interface may become out of sync with the working memory representation. This is because the saved facts file is automatically loaded when CTAT enters the start state. If the working memory contents describe a partially completed problem, they won't match the student interface's description of the start state of the problem, which is stored in the behavior graph.

To save the Jess facts:

  • Select File > Save Jess Facts.
  • Save the .wme file to the "CognitiveModel" folder of your package. Ensure that the default name given to the .wme file is the same as the name of your .brd file.

Conflict Tree

The Conflict Tree is a debugging tool that shows you which rules correctly predicted the student's selection/action/input (S/A/I) and which rules were only partially activated. Its purpose is to show the space of rules that were explored by the production system interpreter as it tried to find a "chain" or "path" of rules that correctly predicted the S/A/I. This space is always in the form of a tree.

The Conflict Tree displays rule activations in terms of chains of rules formed during the model trace. In Jess model tracing, a chain is a point in the model-tracing search where the effects of one rule's firing on working memory cause another rule to fire. The chaining point is represented by a folder icon in the tree.

Jess Conflict Tree

Figure: The Jess Conflict Tree

In addition, the Conflict Tree is the launching point for "Why Not?" inquiries (e.g., "I see that a rule did not fire at all, but why not?"). In the Rule column, you can click on a "Chain" in the tree to display a list of all the rules. (Clicking on a rule name opens the "Why Not?" window for that rule.)

For a rule that predicts student S/A/I (via the predict-observable-action function on the right-hand side of the rule), the results of that prediction are shown to the left of the rule name in the columns S, A, and I. A green check mark indicates that the selection, action or input was predicted correctly. A red "X" signifies that the selection, action or input was not predicted correctly. As soon as the production engine encounters a rule activation where the left-hand side (LHS) matches the asserted facts and the selection/action/input (S/A/I) were all correctly predicted, it fires that rule and stops in that state. Hence an entry in the conflict tree having three green check marks signifies the rule activation that fired and ended a successful model-tracing search for the student's action; the production engine's current state should reflect the actions of that rule's RHS.

In the Conflict Tree window, clicking on any of the S, A, or I columns that contain a green check mark or "X" allows you to see the predicted and observed selection, action, and input.

To compare the predicted and observed values:

  1. Click on any check mark or "X" in the row displaying the production rule name you're interested in.

    A window will appear similar to the one depicted below. The first row shows values predicted by the production rule; the second row shows actual values observed in the student interface and performed by either the student or author.

Conflict Tree: Rule's predicted SAI vs. student's actual SAI

Figure: Conflict Tree: Rule's predicted SAI vs. student's actual SAI

For a better understanding of the Conflict Tree, here is an explanation of the types of rows that can appear in the Conflict Tree. Most rows are named for rules and represent individual activations of those rules during the tracing of the last student step. A rule can be in any of the following states at a given point in the Conflict Tree:

Disabled: This applies only to buggy rules during the part of the search where the model tracer is trying only for a successful (correct) match; during that phase, the buggy rules are temporarily removed using the rule engine's undefrule command.

Not Activated: The LHS did not match.

Activated, But Not Fired: The LHS matched, but the search ended before the model-tracing algorithm reached this rule. This can be confusing at first because these rule activations do not show up in the Conflict Tree; you will see them only when you do a Why Not?, where the window reports "LHS matched successfully".

Fired, Chained: The LHS matched and CTAT fired the rule, but its RHS did not generate a complete prediction for the student's S/A/I. If a partial prediction was made, it was correct. After one of these rules fires, the search "chains" (a term inherited from TDK); that is, it descends one more level. Depending on the result of that deeper search, we distinguish between the following two types.

Fired, Chained, Kept: The search is successful and the results of the rule's firing (i.e., the changes it made to working memory) are kept.

Fired, Chained, Undone: The search fails at a lower level and the effects of the rule's firing are undone (i.e., working memory changes are reverted).

Fired but Incorrect S/A/I prediction: CTAT has to undo the effects of this rule's firing since the S/A/I prediction was incorrect.

Fired, Correct and Complete S/A/I Prediction: This ends the search. The effect of the rule's firing is kept.

The Conflict Tree distinguishes only Not Activated ("Failure to match LHS"), Activated But Not Fired ("Successful match of the LHS"), Fired But Incorrect S/A/I prediction ("Failed to Match SAI"), and Fired, Correct and Complete S/A/I Prediction.

Note

  • A rule can be in a different state at different levels in the tree. For example, in the addition tutor, the rule must-carry is Not Activated at the top but may be Fired, Correct and Complete S/A/I Prediction at the bottom of the tree.
  • There can be multiple activations for a rule at a single node. These correspond to multiple different fact combinations that matched the rule's LHS.
  • Just because a rule fires successfully does not mean the rule is correct. A rule may evaluate successfully and fire but have a logic error in it.
  • It is possible that rules at the beginning of a chain fire but rules at the end of the chain do not fire. In that case, you would get a window with rules shown in the Conflict Tree, but there would be no path of green rules from root to leaf node.

There are two ways the tutor can fail to match what the student did:

  • CTAT finds no applicable rules, or it finds rules you expect will match working memory but do not (see Why Not? Window below).
  • Your rules did match working memory, but the predictions you made (via predict-observable-action) did not correspond to the student's action. In other words, your rule matched working memory, meaning the action you predicted would have been appropriate at that moment, but the student didn't take that action, so the student's action doesn't match your production rule.

Why Not? Window

You may have a rule that was partially activated (i.e., an instance of a rule where a rule has some, but not all, of its conditions satisfied) or a rule that did not activate at all. In these cases, you may want to see more details concerning the match. Alternatively, you may want to explore the details of a matched rule (one depicted with three green checkmarks in the conflict tree). The Why Not? window provides further detail about each search node depicted in the Conflict Tree.

For a rule that partially or fully activated, click its name in the Conflict Tree. For a rule that did not activate at all, click the node labeled Chain, and select its name.

Interpreting the Why Not? Window

The top third of the Why Not? window (shown below) displays the production rule that you're examining. Here, all variables are given a background color, which corresponds to the color used in the table below the rule definition.

If you hover over a variable in the top window with your mouse cursor, a tooltip will appear displaying the variable's value (in the case of a simple variable) or table of fact information (in the case of a fact reference). Simultaneously, a black outline will appear around the corresponding variable row in the middle area.

Why Not? Window: rule definition

Figure: Why Not? Window: rule definition

Partial Activations shows the various activation attempts by Jess. Each activation is an attempt to match the LHS of the rule with the facts in working memory; and all variables must be bound (matched) to activate the rule. If all variables could not be bound for a given attempt, that attempt results in a partial activation. Green indicates that all variables were bound successfully; red indicates that some variables were not bound.

Click a partial activation in the list box on the left to see the alternate mappings; the variables table to the right will update to reflect the particular activation, as will the highlighting in the rule definition window above.

Of particular importance is the line to the right of Partial Activations (shown in the figure below). Below, this line reads 'LHS matched'. When you click on a partial activation, this line will update to reflect the first disparity in the comparison between the rule's LHS definition and working memory.

Why Not? Window: variable values

Figure: Why Not? Window: variable values

Note

You will often see more than one activation listed in Why Not? because the pattern-matching algorithm in the Jess rule engine almost always makes multiple attempts to bind variables, as it has to try different values for those variables.

In the bottom third of the Why Not? Window is an embedded read-only Working Memory Editor. Similar to the standard Working Memory Editor, it allows you to view the contents of working memory. In addition, it allows you to examine working memory before the rule fires or after the rule fires. You can switch between before and after states by clicking the radio buttons labeled Show Pre and Show Post.

Jess Console

The Jess console is a message window and command line for interacting directly with the Jess interpreter. Output from the model tracer, production rules, and commands (such as (agenda) and (facts)) are displayed in the message window.

Jess Console Window

Figure: Jess Console Window

See the Jess Function List of the Jess language reference.
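A few standard Jess commands that are often useful at the console:

```jess
(facts)         ; print all facts currently in working memory
(rules)         ; list the names of the loaded production rules
(agenda)        ; show the current rule activations
(watch rules)   ; echo a message each time a rule fires
```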

Breakpoints

You can define breakpoints for debugging a cognitive model. A breakpoint halts the pattern-matching algorithm after the specified rule fires, preventing further rule firings or changes to working memory.

To set breakpoints:

  • Select Cognitive Model > Set Breakpoints from the menu.
  • Select a rule name on the left side of the Breakpoints window and click the > button to add a breakpoint for that rule. To remove a breakpoint, select the rule name on the right side of the window and click the < button.
  • Click the Set button to set the breakpoints.

Defining Breakpoints

Figure: Defining Breakpoints

When a breakpoint is encountered, a message will appear in the Jess console:

Iteration: 1
  1. MAIN::determine-lcd
Breakpoint on rule MAIN::determine-lcd reached. 
Select "Resume" from the Cognitive Model menu to continue.

At this point, you can inspect working memory using the WME Editor.

To resume model tracing, select Cognitive Model > Resume.

To clear all breakpoints:

  • Click Cognitive Model > Clear Breakpoints.

Writing Jess Production Rules

In a Cognitive Tutor, production rules in Jess model student procedural knowledge and misconceptions. A single rule can model either correct knowledge or a misconception, but not both.

Anatomy of a production rule

A production rule begins with the Jess construct defrule. For example:

(defrule write-answer ...

In a single rule, the left-hand side (LHS) specifies a pattern to match against the contents of working memory. The right-hand side (RHS) specifies a function or functions to call when the LHS matches. This division is common to all Jess production rules, not just rules in CTAT. The "=>" symbol separates the LHS (above it) from the RHS (below it).

(defrule add-numerators
  ?problem <- (problem 
    (subgoals $? ?sub $?)
    (answer-fractions ?answer ?))
  ...
=>
  (bind ?sum (+ ?n1 ?n2))
  (predict-observable-action ?field-name UpdateTextField ?sum)
  (modify ?num-ans (value ?sum))
  ...
) 
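For contrast with the fragment above, here is a complete, minimal rule. It assumes the default problem template; the done/ButtonPressed/-1 triple is the conventional S/A/I for pressing the Done button in CTAT:

```jess
;; Minimal complete rule (hypothetical): when the problem fact's
;; done slot has been set, predict that the student presses Done.
(defrule done
  ?problem <- (problem (done ?d&~nil))
=>
  (predict-observable-action done ButtonPressed -1))
```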

Naming a production rule

Rules that model correct knowledge can be named anything you'd like. Ideally, they should reflect the production the student is to learn. Some example production rule names:

  • add-first-column-no-carry
  • determine-lcd
  • determine-reduction-factor

Rules that model a student misconception, however, must be named with the text bug+ (where "+" is any non-alphabetic character) or buggy at the beginning of the rule name. Some example incorrect, or buggy, production rule names:

  • BUG-reduce-numerator
  • buggy-add-next-column-did-not-write-carry
  • buggy-done

Predicting a student action

To predict an action that a student should take for a production rule to be fully activated, use the predict-observable-action function in the RHS of your production rule. The syntax of this function is:

(predict-observable-action <selection> <action> <input> <matcherType>?)

<matcherType> is optional. If omitted, an exact match will be used.

This function can also be invoked more concisely as (predict <selection> <action> <input> <matcherType>?).

predict is a special Jess function provided by CTAT that limits whether or not a production rule can make changes to working memory. If predict returns false—if the student does not take the predicted action—then the rule will not modify working memory, the effects of other function calls on the RHS of that rule will be undone, and the student's input will be marked as incorrect. If predict returns true—if the model correctly predicts the student's action—then the rule will be allowed to make changes to working memory, and the student's input will be marked as correct.

Suppose that for a given production to apply, the student must enter the number '5' in the field called 'firstNum'. You would call this function in the RHS of the rule as follows:

(predict-observable-action firstNum UpdateTextField 5)

In the above example, the <matcherType> argument is omitted.

A matcher is a type of comparison used when determining whether or not the student input matched the predicted input. When the matcherType argument is omitted, an "exact" match will be used (i.e., the predicted input must exactly match that entered by the student).

Other types of matching, defined in Input Matching, are specified by using one of the names below:

  • AnyMatcher
  • ExactMatcher (default when matcherType is omitted)
  • RangeMatcher
  • RegexMatcher
  • WildcardMatcher

As each matcher takes different input parameters, the syntax for each matcher in (predict) is slightly different:

  • (predict <selection> <action> <dummyInput> AnyMatcher)
  • (predict <selection> <action> <input> ExactMatcher)
  • (predict <selection> <action> <input minimum> <input maximum> RangeMatcher)
  • (predict <selection> <action> <regular expression string for input> RegexMatcher)
  • (predict <selection> <action> <input containing wildcards> WildcardMatcher)

An example of calling each matcher is shown below:

  • (predict ?cell-name "UpdateTable" 999 AnyMatcher)
  • (predict ?cell-name "UpdateTable" 2 ExactMatcher)
  • (predict ?cell-name "UpdateTable" 12 14 RangeMatcher)
  • (predict ?cell-name "UpdateTable" "2[x,y,z]" RegexMatcher)
  • (predict ?cell-name "UpdateTable" "The quick brown fox * the lazy dog." WildcardMatcher)

In addition, you can create your own matcher as a Jess function. For details and an example of defining a new matcher as a Jess function, see Writing a matcher function.

(predict) returns false, or throws an exception to halt Rete (the Jess pattern-matching algorithm), if the student's action does not match the function's three arguments.

Special arguments to (predict) are the values DONT-CARE (matches any student value for the corresponding selection, action, or input element) and NotSpecified (skips the check of that selection, action, or input element).
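For example, a minimal sketch (the field name notesField here is hypothetical) of a prediction that accepts whatever the student types, checking only the selection and action:

```
;; Hypothetical example: accept any text entered in a notes field.
;; DONT-CARE matches any input value, so only the selection
;; (notesField) and the action (UpdateTextField) are checked.
(predict notesField UpdateTextField DONT-CARE)
```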

Providing hints or feedback messages

To add one or more messages to the list of hint or feedback messages available to the student, use the (construct-message) function in the RHS of your production rule.

(construct-message) works by concatenating its arguments to form messages: square brackets ([ ]) delimit one message from another.

Below, we add a sequence of four hint messages for the student who is stuck on this rule:

(defrule done
   ?problem <- (problem (answer-fractions $? ?unreduced-answer $?))
   ?unreduced-answer <- (fraction (numerator ?num) (denominator ?denom))
   ?num <- (textField (value ?n&:(neq ?n nil)))
   ?denom <- (textField (value ?d&:(neq ?d nil)))
   (test (eq (gcd ?d ?n) 1))
=>
   (predict done ButtonPressed -1)
   (construct-message
      [ Is there anything else to do?]
      [ If the greatest common divisor of a numerator and a denominator is 1, 
        then the fraction cannot be further reduced.]
      "[The greatest common divisor of " ?d " and " ?n " is 1.]"
      [ You are done. Press the done button.]
   )
)

Note

To reference variables in the construct-message function (as in the example above), explicitly set the surrounding message text as a string by using double-quotes.

To create a feedback message for an incorrect student action, use the same construct-message syntax, but in a buggy rule.
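For instance, here is a minimal sketch of such a buggy rule; the rule name, fact slots, and field name are hypothetical, and the pattern is simplified for illustration:

```
;; Hypothetical buggy rule: the student adds the two denominators
;; instead of finding a common denominator. If predict matches the
;; student's (incorrect) action, the action is flagged as an error
;; and the feedback message below is shown.
(defrule BUG-add-denominators
   ?problem <- (problem (first-denominator ?d1) (second-denominator ?d2))
=>
   (bind ?wrong-sum (+ ?d1 ?d2))
   (predict answerDenom UpdateTextField ?wrong-sum)
   (construct-message
      [ You added the denominators, but fractions can only be added
        once they share a common denominator.]
   )
)
```

Because the rule name begins with BUG, a successful match marks the student's input as incorrect rather than correct.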

Note

Parentheses in construct-message are problematic for model tracing in CTAT. To use parentheses, you must surround the text containing the parentheses with double quotes. For example:

(construct-message 
    ["Start with the column on the right. (This 
    is the 'ones' column.)" ])

(construct-message 
    ["Move on to the" ?pos "column from the 
    right. (This is the" ?col-name "column.)"])

Jess Resources

For more information on writing production rules in Jess, see Chapter 6 "Making Your Own Rules" of the official Jess language reference.

For documentation of existing Jess functions, see the Jess Function List of the Jess language reference.

Writing Jess Functions

The process of writing Jess functions is explained in detail in Chapter 4, "Defining Functions in Jess" of the official Jess manual. In this section, we'll cover writing Jess functions that interface with the unique features of CTAT cognitive tutors. This section presumes you're familiar with Jess syntax and simple Jess functions.

Writing a matcher function

A matcher function is a function that compares observed student input to a predicted value in a unique way. As described in Input Matching, CTAT provides a number of basic matcher functions for use in Jess rules. If none of the existing functions perform matching in the desired way, you may want to write a new matcher function.

A matcher function in CTAT has two characteristics. First, it should return a boolean value (true or false), so that the (predict) function, which calls it, can itself return a boolean value. Second, the matcher function should consider both the predicted input (specified in the predict function) and the student's input, both of which are provided as arguments to the matcher function. The example function below illustrates both of these characteristics:

(deffunction equal-with-precision (?predictedInput ?studentInput)
    (< (abs (- ?predictedInput ?studentInput)) 0.02)
)
...
(predict newTextField DONT-CARE 2 equal-with-precision)

In this function, equal-with-precision, the student's input is subtracted from the predicted input. If the absolute value of the result is less than 0.02, the function returns true; otherwise, it returns false. The predict function then checks whether the rule's predicted value ("2") is equal, within 0.02, to the student's value.

Note

To reference a matcher function, it must be defined in either the wmeTypes.clp file or the production rules file before the matcher is called.

Evaluating the result of a function in a production rule

To evaluate the result of a function in a production rule, use the Jess function eval:

(deffunction sum (?a ?b)
    (+ ?a ?b)
)
...
(predict-observable-action ?cell-name "UpdateTable" (eval sum 2 3) )

In this example, the result of adding "2" and "3" is passed as an argument to the (predict-observable-action) function. When the rule containing this call to (predict) is activated, CTAT determines if the selection (the value of ?cell-name), action ("UpdateTable"), and input ("5", the sum of "2" and "3") match the student's selection, action, and input exactly.

Back to top

Next >> Dynamic Interfaces
