Publications - CMUCTAT/CTAT GitHub Wiki

CTAT

  • Aleven, V., McLaren, B. M., & Sewall, J. (2009). Scaling up programming by demonstration for intelligent tutoring systems development: An open-access website for middle-school mathematics learning. IEEE Transactions on Learning Technologies, 2(2), 64-78.
    http://www.computer.org/portal/web/csdl/doi/10.1109/TLT.2009.22

    Abstract. Intelligent tutoring systems (ITSs), which provide step-by-step guidance to students in complex problem-solving activities, have been shown to enhance student learning in a range of domains. However, they tend to be difficult to build. Our project investigates whether the process of authoring an ITS can be simplified, while at the same time maintaining the characteristics that make ITS effective, and also maintaining the ability to support large-scale tutor development. Specifically, our project tests whether authoring tools based on programming-by-demonstration techniques (developed in prior research) can support the development of a large-scale, real-world tutor. We are creating an open-access Web site, called Mathtutor (https://mathtutor.web.cmu.edu/), where middle school students can solve math problems with step-by-step guidance from ITS. The Mathtutor site fields example-tracing tutors, a novel type of ITS that are built "by demonstration," without programming, using the Cognitive Tutor Authoring Tools (CTATs). The project's main contribution will be that it represents a stringent test of large-scale tutor authoring through programming by demonstration. A secondary contribution will be that it tests whether an open-access site (i.e., a site that is widely and freely available) with software tutors for math learning can attract and sustain user interest and learning on a large scale.

  • Aleven, V., McLaren, B.M., Sewall, J., & Koedinger, K.R. (2009). A New Paradigm for Intelligent Tutoring Systems: Example-Tracing Tutors. International Journal of Artificial Intelligence in Education, 19(2), 105-154.
    Full-text available through IJAIED (citation updated 2010-02-16)

    Abstract. The Cognitive Tutor Authoring Tools (CTAT) support creation of a novel type of tutors called example-tracing tutors. Unlike other types of ITSs (e.g., model-tracing tutors, constraint-based tutors), example-tracing tutors evaluate student behavior by flexibly comparing it against generalized examples of problem-solving behavior. Example-tracing tutors are capable of sophisticated tutoring behaviors; they provide step-by-step guidance on complex problems while recognizing multiple student strategies and (where needed) maintaining multiple interpretations of student behavior. They therefore go well beyond VanLehn's (2006) minimum criterion for ITS status, namely, that the system has an inner loop (i.e., provides within-problem guidance, not just end-of-problem feedback).

    Using CTAT, example-tracing tutors can be created without programming. An author creates a tutor interface through drag-and-drop techniques, and then demonstrates the problem-solving behaviors to be tutored. These behaviors are recorded in a "behavior graph," which can be easily edited and generalized. Compared to other approaches to programming by demonstration for ITS development, CTAT implements a simpler method (no machine learning is used) that is currently more pragmatic and proven for widespread, real-world use by non-programmers.

    Development time estimates from a large number of real-world ITS projects that have used CTAT suggest that example-tracing tutors reduce development cost by a factor of 4 to 8, compared to "historical" estimates of ITS development time and cost. The main contributions of the work are a novel ITS technology, based on the use of generalized behavioral examples to guide students in problem-solving exercises, as well as a suite of mature and robust tools for efficiently building real-world ITSs without programming.

  • Aleven, V., Sewall, J., McLaren, B. M., & Koedinger, K. R. (2006). Rapid authoring of intelligent tutors for real-world and experimental use. In Kinshuk, R. Koper, P. Kommers, P. Kirschner, D. G. Sampson, & W. Didderen (Eds.), Proceedings of the 6th IEEE International Conference on Advanced Learning Technologies (ICALT 2006) (pp. 847-851). Los Alamitos, CA: IEEE Computer Society. PDF (2.5 MB)

    Abstract. Authoring tools for Intelligent Tutoring Systems are especially valuable if they not only provide a rich set of options for the efficient authoring of tutoring systems but also support controlled experiments in which the added educational value of new tutor features is evaluated. The Cognitive Tutor Authoring Tools (CTAT) provide both. Using CTAT, real-world ”Example-Tracing Tutors” can be created without programming. CTAT also provides various kinds of support for controlled experiments, such as administration of different experimental treatments, logging, and data analysis. We present two case studies in which Example-Tracing Tutors created with CTAT were used in classroom experiments. The case studies illustrate a number of new features in CTAT: Use of Macromedia Flash MX 2004 for creating tutor interfaces, extensions to the Example-Tracing Engine that allow for more flexible tutors, a Mass Production facility for more efficient template-based authoring, and support for controlled experiments.

  • Aleven, V., McLaren, B. M., Sewall, J., & Koedinger, K. (2006). The Cognitive Tutor Authoring Tools (CTAT): Preliminary evaluation of efficiency gains. In M. Ikeda, K. D. Ashley, & T. W. Chan (Eds.), Proceedings of the 8th International Conference on Intelligent Tutoring Systems (ITS 2006) (pp. 61-70). Berlin: Springer Verlag. PDF (392 KB)

    Abstract. Intelligent Tutoring Systems have been shown to be effective in a number of domains, but they remain hard to build, with estimates of 200-300 hours of development per hour of instruction. Two goals of the Cognitive Tutor Authoring Tools (CTAT) project are to (a) make tutor development more efficient for both programmers and non-programmers and (b) produce scientific evidence indicating which tool features lead to improved efficiency. CTAT supports development of two types of tutors, Cognitive Tutors and Example-Tracing Tutors, which represent different trade-offs in terms of ease of authoring and generality. In preliminary small-scale controlled experiments involving basic Cognitive Tutor development tasks, we found efficiency gains due to CTAT of 1.4 to 2 times faster. We expect that continued development of CTAT, informed by repeated evaluations involving increasingly complex authoring tasks, will lead to further efficiency gains.

  • Koedinger, K. R., Aleven, V., Heffernan, N. T., McLaren, B., & Hockenberry, M. (2004). Opening the Door to Non-Programmers: Authoring Intelligent Tutor Behavior by Demonstration. In Proceedings of the 7th International Conference on Intelligent Tutoring Systems (ITS 2004). Maceio, Brazil. PDF (404 KB)

    Abstract. Intelligent tutoring systems are quite difficult and time intensive to develop. In this paper, we describe a method and set of software tools that ease the process of cognitive task analysis and tutor development by allowing the author to demonstrate, instead of programming, the behavior of an intelligent tutor. We focus on the subset of our tools that allow authors to create “Pseudo Tutors” that exhibit the behavior of intelligent tutors without requiring AI programming. Authors build user interfaces by direct manipulation and then use a Behavior Recorder tool to demonstrate alternative correct and incorrect actions. The resulting behavior graph is annotated with instructional messages and knowledge labels. We present some preliminary evidence of the effectiveness of this approach, both in terms of reduced development time and learning outcome. Pseudo Tutors have now been built for economics, analytic logic, mathematics, and language learning. Our data supports an estimate of about 25:1 ratio of development time to instruction time for Pseudo Tutors, which compares favorably to the 200:1 estimate for Intelligent Tutors, though we acknowledge and discuss limitations of such estimates.

  • Heffernan, N. T., Koedinger, K. R., & Aleven, V. A. W. M. M. (2003). Tools Towards Reducing the Costs of Designing, Building, and Testing Cognitive Models. In Proceedings of the 2003 Conference on Behavior Representation in Modeling and Simulation (BRIMS 2003). DOC (291 KB)

    Abstract. We are developing a suite of Cognitive Tutor Authoring Tools (CTAT) intended to make tutor development both easier and faster for experienced cognitive modelers and possible for potential modelers who are not experts in cognitive psychology or artificial intelligence programming. Our concrete goal is to experimentally demonstrate a reduction in development time by a factor of three. We are employing Human-Computer Interaction (HCI) methods and Cognitive Science principles, as we have done before, to design development tools that reduce programmer time. Our preliminary analytic and empirical analyses compare use of CTAT with use of our current development environment and indicate a potential reduction in development time by a factor of about two. These early quantitative results are less important than the specific guidance that such analyses provide as we iteratively converge on demonstrably more cost-effective cognitive tutor development tools.

  • Koedinger, K. R., Aleven, V. A. W. M. M., & Heffernan, N. T. (2003). Toward a Rapid Development Environment for Cognitive Tutors. In U. Hoppe, F. Verdejo, & J. Kay (Eds.), Proceedings of the 11th International Conference on Artificial Intelligence in Education, AI-ED 2003 (pp. 455-457). Amsterdam: IOS Press. PDF (412 KB)

    Abstract. We are developing a suite of Cognitive Tutor Authoring Tools (CTAT) intended to make tutor development both easier and faster for experienced modelers and possible for potential modelers who are not experts in cognitive psychology or artificial intelligence programming. Our goal is to demonstrate a reduction in development time by a factor of three. We employ Human-Computer Interaction (HCI) methods and Cognitive Science principles to design development tools that are both useful and useable. Our preliminary analytic and empirical analyses compare use of CTAT with use of our current development environment and indicate a potential reduction in development time by a factor of about two.

Collaboration

  • Harrer, A., McLaren, B., Walker, E., Bollen, L., & Sewall, J. (2005). Collaboration and Cognitive Tutoring: Integration, Empirical Results, and Future Directions. In C.-K. Looi et al. (Eds.), Proceedings of the 12th International Conference on Artificial Intelligence in Education (pp. 266-273). Amsterdam: IOS Press. PDF (521 KB)

    Abstract. In this paper, we describe progress we have made toward providing cognitive tutoring to students within a collaborative software environment. First, we have integrated a collaborative software tool, Cool Modes, with software designed to develop Cognitive Tutors (the Cognitive Tutor Authoring Tool). Our initial integration provides a means to capture data that acts as the foundation of a tutor for collaboration but does not yet fully support actual tutoring. Second, we've performed two exploratory studies in which dyads of students used our software to collaborate in solving modelling tasks. These studies uncovered five dimensions of observed behavior that point to the need for abstraction of student actions to better recognize, analyze, and correct collaborative steps in problem solving. We discuss plans to incorporate such analyses into our approach and to extend our tools to eventually provide tutoring of collaboration.

  • McLaren, B., Bollen, L., Walker, E., Harrer, A., & Sewall, J. (2005). Cognitive Tutoring of Collaboration: Developmental and Empirical Steps Towards Realization. In Proceedings of the Conference on Computer Supported Collaborative Learning (CSCL-05). Taipei, Taiwan. PDF (295 KB)

    Abstract. In this paper, we describe developmental and empirical steps we have taken toward providing Cognitive Tutoring to students within a collaborative software environment. We have taken two important steps toward realizing this goal. First, we have integrated a collaborative software tool, Cool Modes, with software designed to develop Cognitive Tutors (the Cognitive Tutor Authoring Tool). Our initial integration does not provide tutoring per se but rather acts as a means to capture data that provides the beginnings of a tutor for collaboration. Second, we have performed an initial study in which dyads of students used our software to collaborate in solving a classification / composition problem. This study uncovered five dimensions of analysis that our approach must use to help us better understand student collaborative behavior and lead to the eventual development of a Cognitive Tutor for collaboration. We discuss our plans to incorporate such analysis into our approach and to run further studies.

  • McLaren, B. M., Koedinger, K. R., Schneider, M., Harrer, A., & Bollen, L. (2004). Towards Cognitive Tutoring in a Collaborative, Web-based Environment. In M. Matera & S. Comai (Eds.), Engineering Advanced Web Applications: Proceedings of Workshops in Connection with the 4th International Conference on Web Engineering (pp. 167-179). Princeton: Rinton Press. PDF (218 KB)

    Abstract. While intelligent tutoring has been applied to collaborative learning environments, it has met with little success so far because of the complexity involved in adding a tutoring component to a collaborative environment. We propose to tackle this problem by using Cognitive Tutors as the basis for our approach and by applying a technique we call Bootstrapping Novice Data (BND). The BND approach involves feeding student log files from a problem-solving tool into tutor development software to create the beginnings of a tutor for the tool. We describe an initial implementation of our approach in which Cool Modes, a collaborative software tool, is integrated with the Behavior Recorder, tutor-authoring software that supports development by demonstration. We show how our initial implementation provides a foundation for an intelligent tutor for collaboration but also discuss some of the challenges ahead.

Bootstrapping

  • McLaren, B. M., Koedinger, K. R., Schneider, M., Harrer, A., & Bollen, L. (2004). Bootstrapping Novice Data: Semi-Automated Tutor Authoring Using Student Log Files. In Proceedings of the Workshop on Analyzing Student-Tutor Interaction Logs to Improve Educational Outcomes, 7th International Conference on Intelligent Tutoring Systems (ITS-2004), August 2004. PDF (278 KB)

    Abstract. A potentially powerful way to aid in the authoring of intelligent tutoring systems is to directly leverage student interaction log data. While problem-solving data has been used in the past to guide the development of tutors, such data has not typically been used as a means to directly construct an initial tutoring system model. We propose an approach called bootstrapping novice data (BND) in which a problem-solving tool is integrated with tutor development software through log files and that integration is then used to create the beginnings of a tutor for the tool. We describe an initial implementation of the BND approach in which Cool Modes, a collaborative software tool, is integrated with the Behavior Recorder, tutor-authoring software that supports development by demonstration. A key to this implementation is a component-based approach in which complementary pieces of software are integrated with little or no change to either software component. We argue that more tutors could be built, and with substantial time savings, using this approach. We discuss some of the lessons learned from this initial effort and from applying the component-based approach, as well as some data analyses that could eventually be performed using the data collected during BND.

Simulated Student

  • Matsuda, N., Cohen, W. W., Sewall, J., Lacerda, G., & Koedinger, K. R. (2007). Evaluating a simulated student using real students data for training and testing. In Proceedings of the International Conference on User Modeling (UM 2007). PDF (227 KB)

    Abstract. SimStudent is a machine-learning agent that learns cognitive skills by demonstration. It was originally developed as a building block of the Cognitive Tutor Authoring Tools (CTAT), so that the authors do not have to build a cognitive model by hand, but instead simply demonstrate solutions for SimStudent to automatically generate a cognitive model. The SimStudent technology could then be used to model human students' performance as well. To evaluate the applicability of SimStudent as a tool for modeling real students, we applied SimStudent to a genuine learning log gathered from classroom experiments with the Algebra I Cognitive Tutor. Such data can be seen as the human students' "demonstrations" of how to solve problems. The results from an empirical study show that SimStudent can indeed model human students' performance. After training on 20 problems solved by a group of human students, a cognitive model generated by SimStudent explained 82% of the problem-solving steps performed correctly by another group of human students.

  • Matsuda, N., Cohen, W. W., Sewall, J., & Koedinger, K. R. (2006). What characterizes a better demonstration for cognitive modeling by demonstration? Technical report CMU-ML-06-106, School of Computer Science, Carnegie Mellon University. PDF (321 KB)

    Abstract. A simulated student is a machine learning agent that learns a set of cognitive skills by observing solutions demonstrated by human experts. The learned cognitive skills are converted into a cognitive model for a Cognitive Tutor, a computerized tutor that teaches human students the cognitive skills. In this paper, we analyze the characteristics of the effective demonstrations that lead to quicker and more accurate learning. Results from empirical studies show that expressive demonstrations (as opposed to abbreviated demonstrations that involve implicit mental operations) are better for both speed and accuracy of learning. We also found that providing multiple demonstrations of the same cognitive skill with differing surface features accelerates learning. These findings imply that the ordering of the training sequence as well as the level of detail in demonstration determines the efficiency with which a simulated student generates a cognitive model.

  • Matsuda, N., Cohen, W. W., & Koedinger, K. R. (2005). Applying Programming by Demonstration in an Intelligent Authoring Tool for Cognitive Tutors. In AAAI Workshop on Human Comprehensible Machine Learning (Technical Report WS-05-04) (pp. 1-8). Menlo Park, CA: AAAI Press. PDF (274 KB)

    Abstract. We are building an intelligent authoring tool for Cognitive Tutors, a highly successful form of computer-based tutoring. The primary target users (the authors) are educators who are not familiar with cognitive task analysis and AI programming, which are essential tasks in building Cognitive Tutors. Instead of asking authors to write a cognitive model by hand, a Simulated Student embedded in the authoring tool lets an author demonstrate how to perform the tasks in the subject domain, for instance, solving an algebra equation. The Simulated Student observes an author's demonstration and induces a set of production rules that replicate the demonstrated performances. Correct production rules, as well as production rules that are incorrect but similar to those a human student might produce, can be directly embedded in the Cognitive Tutor. We give a preliminary evaluation of an implemented Simulated Student based on inductive logic programming and path-finding.

  • Jarvis, M. P., Nuzzo-Jones, G., & Heffernan, N. T. (2004). Applying Machine Learning Techniques to Rule Generation in Intelligent Tutoring Systems. In J. C. Lester (Ed.), Proceedings of the International Conference on Intelligent Tutoring Systems (pp. 541-553). Heidelberg, Berlin: Springer. PDF (483 KB)

    Abstract. The purpose of this research was to apply machine learning techniques to automate rule generation in the construction of Intelligent Tutoring Systems. By using a pair of somewhat intelligent iterative-deepening, depth-first searches, we were able to generate production rules from a set of marked examples and domain background knowledge. Such production rules required independent searches for both the "if" and "then" portion of the rule. This automated rule generation allows generalized rules with a small number of sub-operations to be generated in a reasonable amount of time, and provides non-programmer domain experts with a tool for developing Intelligent Tutoring Systems.

Intelligent Tutoring

  • Koedinger, K. R., Anderson, J. R., Hadley, W. H., & Mark, M. A. (1997). Intelligent tutoring goes to school in the big city. International Journal of Artificial Intelligence in Education, 8, 30-43. PDF (633 KB)

    Abstract. This paper reports on a large-scale experiment introducing and evaluating intelligent tutoring in an urban High School setting. Critical to the success of this project has been a client-centered design approach that has matched our client's expertise in curricular objectives and classroom teaching with our expertise in artificial intelligence and cognitive psychology. The Pittsburgh Urban Mathematics Project (PUMP) has produced an algebra curriculum that is centrally focused on mathematical analysis of real world situations and the use of computational tools. We have built an intelligent tutor, called PAT, that supports this curriculum and has been made a regular part of 9th grade Algebra in 3 Pittsburgh schools. In the 1993-94 school year, we evaluated the effect of the PUMP curriculum and PAT tutor use. On average the 470 students in experimental classes outperformed students in comparison classes by 15% on standardized tests and 100% on tests targeting the PUMP objectives. This study provides further evidence that laboratory tutoring systems can be scaled up and made to work, both technically and pedagogically, in real and unforgiving settings like urban high schools.

    TDK Tutorial Slides: PPT (450 KB)

Stoich Studies with the CTAT-built Stoichiometry Tutor

  • McLaren, B. M., Lim, S., & Koedinger, K. R. (2008). When and How Often Should Worked Examples be Given to Students? New Results and a Summary of the Current State of Research. In B. C. Love, K. McRae, & V. M. Sloutsky (Eds.), Proceedings of the 30th Annual Conference of the Cognitive Science Society (pp. 2176-2181). Austin, TX: Cognitive Science Society. PDF (352 KB)

    Abstract. Our work explores the assistance dilemma: when should instruction provide or withhold assistance? In three separate but very similar studies, we have investigated whether worked examples, a high-assistance approach, studied in conjunction with tutored problems to be solved, a mid-level assistance approach, can lead to better learning. Contrary to prior results with untutored problem solving, a low-assistance approach, we found that worked examples alternating with isomorphic tutored problems did not produce more learning gains than tutored problems alone. On the other hand, the examples group across the three studies learned more efficiently than the tutored-alone group; the students spent 21% less time learning the same amount of material. Practically, if these results were to scale across a 20-week course, students could save 4 weeks of time – yet learn just as much. Scientifically, we provide an analysis of a key dimension of assistance: when and how often should problem solutions be given to students versus elicited from them? Our studies, in conjunction with past studies, suggest that on this example-problem dimension mid-level assistance may lead to better learning than either lower or higher level assistance. While representing a step toward resolving the assistance dilemma for this dimension, more studies are required to confirm that mid-level assistance is best and further analysis is needed to develop predictive theory for what combinations of assistance yield the most effective and efficient learning.

  • McLaren, B. M., Lim, S., & Koedinger, K. R. (2008). When is Assistance Helpful to Learning? Results in Combining Worked Examples and Intelligent Tutoring. In B. Woolf, E. Aimeur, R. Nkambou, & S. Lajoie (Eds.), Proceedings of the 9th International Conference on Intelligent Tutoring Systems (ITS-08), Lecture Notes in Computer Science, 5091 (pp. 677-680). Berlin: Springer. PDF (90 KB)

    Abstract. When should instruction provide or withhold assistance? In three empirical studies, we have investigated whether worked examples, a high-assistance approach, studied in conjunction with tutored problems to be solved, a mid-level assistance approach, can lead to better learning. Contrary to prior results with untutored problem solving, a low-assistance approach, we found that worked examples alternating with isomorphic tutored problems did not produce more learning gains than tutored problems alone. However, the examples group across the three studies learned more efficiently than the tutored-alone group. Our studies, in conjunction with past studies, suggest that mid-level assistance leads to better learning than either lower or higher level assistance. However, while our results are illuminating, more work is needed to develop predictive theory for what combinations of assistance yield the most effective and efficient learning.

  • McLaren, B. M., Lim, S., Yaron, D., & Koedinger, K. R. (2007). Can a Polite Intelligent Tutoring System Lead to Improved Learning Outside of the Lab? In R. Luckin, K. R. Koedinger, & J. Greer (Eds.), Proceedings of the 13th International Conference on Artificial Intelligence in Education (AIED-07): Building Technology Rich Learning Contexts That Work (pp. 433-440). Amsterdam: IOS Press. PDF (456 KB)

    Abstract. In this work we are investigating the learning benefits of e-Learning principles (a) within the context of a web-based intelligent tutor and (b) in the "wild," that is, in real classroom (or homework) usage, outside of a controlled laboratory. In the study described in this paper, we focus on the benefits of politeness, as originally formulated by Brown and Levinson and more recently studied by Mayer and colleagues. We test the learning benefits of a stoichiometry tutor that provides polite problem statements, hints, and error messages as compared to one that provides more direct feedback. Although we find a small, but not significant, trend toward the polite tutor leading to better learning gains, our findings do not replicate those of Wang et al., who found significant learning gains through polite tutor feedback. While we hypothesize that an e-Learning principle such as politeness may not be robust enough to survive the transition from the lab to the "wild," we will continue to experiment with the polite stoichiometry tutor.

  • McLaren, B. M., Lim, S., Gagnon, F., Yaron, D., & Koedinger, K. R. (2006). Studying the Effects of Personalized Language and Worked Examples in the Context of a Web-Based Intelligent Tutor. In M. Ikeda, K. D. Ashley, & T.-W. Chan (Eds.), Proceedings of the 8th International Conference on Intelligent Tutoring Systems (ITS-2006), Lecture Notes in Computer Science, 4053 (pp. 318-328). Berlin: Springer. (Finalist for the Best Paper Award) PDF (237 KB)

    Abstract. Previous studies have demonstrated the learning benefit of personalized language and worked examples. However, previous investigators have primarily been interested in how these interventions support students as they problem solve with no other cognitive support. We hypothesized that personalized language added to a web-based intelligent tutor and worked examples provided as complements to the tutor would improve student (e-)learning. However, in a 2 x 2 factorial study, we found that personalization and worked examples had no significant effects on learning. On the other hand, there was a significant difference between the pretest and posttest across all conditions, suggesting that the online intelligent tutor present in all conditions did make a difference in learning. We conjecture why personalization and, especially, the worked examples did not have the hypothesized effect in this preliminary experiment, and discuss a new study we have begun to further investigate these effects.