A Tutoring Architecture that Learns

R. Morelli and B. Dinkins
Department of Engineering and Computer Science
Trinity College
Hartford, CT 06106
203-297-2220
ralph.morelli@mail.trincoll.edu

G. Pelton
School of Computer Science
Carnegie Mellon University
412-268-3505
gap@cs.cmu.edu

Abstract

Soar/ITS is an architecture for Intelligent Tutoring Systems (ITSs) based on Soar. As an experiment in ITS design, the main question addressed by Soar/ITS is whether machine learning can be used effectively to develop a practical tutoring architecture. We want a tutoring architecture that is more flexible than current tutoring systems both in terms of how it presents lessons and in how it responds to what the student is doing. This flexibility, however, should not come at the expense of efficiency or tractability.

To meet these goals we believe that the architecture must achieve substantial transfer of what it learns across different tasks. We show that learning in Soar/ITS results in considerable transfer and speedup across routine tasks. This gives us reason to believe that the architecture will be practical. More importantly, we show that Soar/ITS is able to transfer learning from routine problem solving tasks into the tutoring task. In particular, Soar/ITS is able to transfer knowledge learned through solving domain problems -- e.g., solving a problem in electrostatics -- into the student monitoring task -- e.g., observing the student solve a similar problem. We believe this type of transfer will allow us to approach important ITS tasks, such as student modeling, with a set of domain independent tools.

After describing Soar/ITS's main architectural features, we provide three detailed examples of effective use of its learning mechanism and show how these can be developed in ways that significantly enhance the performance of student modeling and tutor control.

Keywords: Intelligent tutoring system, architecture, machine learning, student modeling.

Acknowledgement: This work was supported in part by a Research Opportunity Award from the National Science Foundation (Subgrant #513566-55455 of NSF grant MDR-9150008). G. Pelton's work is supported by a grant from Martin Marietta and in part by ARPA F-33615-93-1-1330.


1. Introduction

Can a tutoring architecture that learns be both flexible and practical? We address this question by presenting a tutoring architecture (Soar/ITS) built in Soar, a cognitive architecture that learns [Newell, 1990]. We want to create a tutoring system that is more flexible than current tutoring systems both in terms of how it presents lessons and in how it responds to what the student is doing. This flexibility, however, should not come at the expense of efficiency or tractability. To meet these goals we believe that the architecture must achieve substantial transfer of what it learns across different tasks. We show that learning in Soar/ITS results in considerable transfer and speedup across routine tasks. This gives us reason to believe that the architecture will be practical. More importantly, we show that Soar/ITS is able to transfer learning from routine problem solving tasks into the tutoring task. In particular, Soar/ITS is able to transfer knowledge learned through solving domain problems -- e.g., solving a problem in electrostatics -- into the student monitoring task -- e.g., observing the student solve a similar problem. We believe this type of transfer will allow us to approach important ITS tasks, such as student modeling, with a set of domain independent tools.

The idea of using machine learning in tutoring is not new. Langley, Ohlsson and Sage [1984] showed that learning could be applied in a practical and cognitively plausible way to the problem of student modeling. There have been many other efforts, which we won't cite here in order to conserve space. What makes Soar/ITS unique is the attempt to integrate learning into all aspects of the ITS architecture. In work that is closely related to ours, Hill and Johnson [1993] have developed a system that employs Soar's machine learning architecture to model a spectrum of problem solving behaviors. They suggest ways in which Soar's learning mechanism can be used successfully in ITS design, pointing out that Soar's chunking mechanism is based on the idea of "learning by doing." This maxim captures precisely the basic idea behind the type of knowledge transfer that makes Soar/ITS a credible ITS architecture. After describing the key features of the Soar/ITS architecture, we describe several varieties of the kind of learning it produces, and we discuss ways in which these mechanisms can be applied to the primary tutoring tasks: modeling the student and deciding what to do next.

2. Overview of Soar/ITS.

2.1 Soar Cognitive Architecture. Soar/ITS is based on Soar, a cognitive architecture that integrates problem-solving and learning [Newell, 1990]. Tasks in Soar are formulated in terms of goal oriented search through problem spaces. Each space is represented by a set of operators and states. Problem solving occurs when operators are repeatedly selected and applied to the current state until the goal state is achieved. All knowledge in Soar is represented by production rules stored in long-term memory. When Soar lacks sufficient knowledge (e.g., when it does not know which operator to propose), an impasse occurs. Soar responds to impasses by creating a subgoal and commencing a search through other problem spaces in an effort to resolve the impasse. This process of setting and achieving a hierarchy of goals is known as universal subgoaling.

Learning occurs in Soar by creating chunks (new productions) whenever an impasse is resolved. Chunks summarize the results of problem solving in such a way that they can be applied to similar problems in similar contexts producing similar results. Over time chunks accumulate in Soar's long term memory (recognition memory), which grows monotonically. Numerous studies have confirmed that Soar's chunking mechanism is quite general and capable of performing a wide range of learning tasks [Laird, Rosenbloom & Newell, 1985; Newell, 1990].
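The chunking mechanism just described behaves much like memoization: an impasse triggers an expensive subgoal search, and the result is compiled into a rule that fires directly in similar future contexts. The following toy sketch illustrates this pattern; the class and method names are ours, not Soar's, and the "search" is a stand-in for deliberate problem-space search:

```python
class ChunkingAgent:
    def __init__(self):
        self.chunks = {}        # recognition memory: grows monotonically
        self.search_calls = 0   # number of expensive subgoal searches

    def solve(self, state, goal):
        key = (state, goal)
        if key in self.chunks:          # a chunk fires: recognition, no search
            return self.chunks[key]
        result = self._subgoal_search(state, goal)  # impasse -> subgoal search
        self.chunks[key] = result       # chunking: summarize the result
        return result

    def _subgoal_search(self, state, goal):
        # stand-in for deliberate search through a subgoal problem space
        self.search_calls += 1
        return "plan(%s->%s)" % (state, goal)

agent = ChunkingAgent()
agent.solve("A", "B")    # first attempt: impasse, search, chunk built
agent.solve("A", "B")    # second attempt: the chunk fires instead
assert agent.search_calls == 1
```

The analogy is loose (real chunks generalize over the working-memory elements that were tested, rather than matching exact keys), but it captures the replacement of search by recognition discussed in Section 3.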

2.2 Task Domain. Two versions of Soar/ITS have been built, Soar5/ITS and Soar6/ITS. Both are based on a simple electrostatics domain.[1] Soar5/ITS is interfaced to a microworld that is accessed through a Unix socket connection. On the input side, Soar5/ITS is given a list of all the objects in the microworld and their locations. During tutoring it can "see" the display and can thus observe the results of either its own actions or the student's actions. The results are seen as changes to the objects in the display. Soar5/ITS can manipulate objects in the display by controlling its mouse cursor, and it can "talk" to the student via messages that are printed on the display. Objects in this task domain include things like vectors, electrical charges, buttons, dialogue boxes, the mouse cursor (an analog of the tutor's hand), and certain text messages. Soar6/ITS is based on the same task domain, except that it focuses more on making tutoring decisions -- i.e., what lesson should be taught next -- whereas Soar5/ITS focuses on solving electrostatics problems in the microworld. From now on we will use the term Soar/ITS to refer to both systems when naming the specific system is not important.

Soar/ITS has knowledge of the microworld interface and sufficient electrostatics knowledge to solve the kind of problem shown in Figure 1.

[Figure 1 shows the microworld display: two positive charges side by side, a GO button, and the instruction "Draw force vectors depicting the direction of the electrostatic force acting on the particles and then click on the GO button."]

Figure 1. A simple electrostatics problem used by Soar5/ITS.

To solve this problem, Soar/ITS attends to the different objects in the display. It determines that both charges are positive, which implies that their force vectors point in opposite directions. It then draws a force vector on the left-hand particle and a force vector on the right-hand particle, and clicks the mouse button. When the mouse button is clicked, the microworld program determines whether the force arrows have been drawn in the correct direction and reports the result of the problem.

The microworld is constructed so that either the tutor or the student can be given this problem. Soar/ITS gives itself this kind of problem as a means of demonstrating appropriate electrostatics knowledge to the student. It gives these kinds of problems to the student to determine whether the student has mastered a particular concept (e.g., repulsion of like charges). As we shall see below, chunking plays an important role in transferring the expert knowledge used to solve this problem into knowledge that's used to monitor the student's performance.

2.3 Knowledge Representation. All knowledge in Soar is represented in the form of productions in Soar's long term memory. Within that general framework, however, we utilize means-ends analysis (MEA) to represent the tutor's knowledge and control its decisions. This appears to have important advantages, the most important being that it provides a general, domain independent way of describing domain knowledge declaratively. For example, Table 1 gives an MEA summary of a portion of the domain knowledge in Soar5/ITS:

Each goal in Table 1 is associated with a difference and an operator that can reduce the difference. Applying the operator may require that certain preconditions be met, which would lead to the creation of subgoals. For example, if one had the goal of drawing a vector from XY to ZW and there didn't already exist a vector from XY to ZW then a difference would exist. To reduce this difference, Soar/ITS would propose the draw-vector operator. Before this operator can be applied, all of its preconditions must be met, which causes subgoals to be created. In this case, draw-vector will succeed only after the following primitive operations have been carried out: move-mouse-XY, click-mouse, move-mouse-ZW, unclick-mouse. The first two actions cause XY to be selected and the last two cause a vector to be drawn from XY to ZW.

Goal (Ends)         Difference           Operator (Means)   Preconditions
------------------  -------------------  -----------------  ------------------
draw-vector-XY-ZW   ~vector-at-XY-ZW     draw-vector        mouse-at-ZW,
                                                            XY-selected,
                                                            mouse-holding-nil
mouse-holding-nil   ~mouse-holding-nil   unclick-mouse      none
move-mouse-XY       ~mouse-at-XY         move-mouse         none
click-mouse-XY      mouse-button-up      click-mouse        none
unclick-mouse       ~mouse-button        unclick-mouse      none
select-XY           ~XY-selected         click-mouse        mouse-at-XY
drag-mouse-XY       ~mouse-at-XY         move-mouse         mouse-button-down

Table 1. MEA representation of the draw-a-vector task in Soar5/ITS.
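Under the assumption that Table 1 can be read as a goal-to-operator lookup, the subgoaling process described above can be sketched as a minimal means-ends planner. The dictionary encoding and the function are ours; the goal and operator names come from the table:

```python
# Goal -> (operator that reduces the difference, preconditions).
# A simplified slice of Table 1; state updates are naive (we never
# retract facts, e.g. mouse-at-XY survives moving to ZW).
TABLE = {
    "vector-at-XY-ZW":   ("draw-vector",
                          ["XY-selected", "mouse-at-ZW", "mouse-holding-nil"]),
    "XY-selected":       ("click-mouse", ["mouse-at-XY"]),
    "mouse-at-XY":       ("move-mouse-XY", []),
    "mouse-at-ZW":       ("move-mouse-ZW", []),
    "mouse-holding-nil": ("unclick-mouse", []),
}

def achieve(goal, state, plan):
    if goal in state:                 # no difference: nothing to do
        return
    operator, preconditions = TABLE[goal]
    for p in preconditions:           # unmet preconditions become subgoals
        achieve(p, state, plan)
    plan.append(operator)             # apply the operator...
    state.add(goal)                   # ...which reduces the difference

plan = []
achieve("vector-at-XY-ZW", set(), plan)
# The primitive sequence from the text emerges, followed by the
# composite operator itself:
assert plan == ["move-mouse-XY", "click-mouse", "move-mouse-ZW",
                "unclick-mouse", "draw-vector"]
```

In Soar/ITS itself each unmet precondition produces an impasse and a subgoal, so the same ordering is discovered by search rather than by a recursive call; the sketch only shows the declarative shape of the knowledge.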

We expect MEA knowledge to be useful not only for structuring and representing the domain and tutoring knowledge, but also, in the future, for monitoring the progress of the student in a flexible, domain independent way. This is one of the reasons we picked an MEA knowledge representation.

For example, as long as the student is resolving task differences while performing a task and is not looping -- both domain independent types of tests -- then the student is making progress toward completing the task. By monitoring progress in this way, we will not be forcing the student to proceed down any particular path, but instead must try to recognize what differences and goals the student's actions might be resolving. We believe that the efficient goal recognizers built up via the chunking process will help make this a tractable problem.[2]

2.4 Perception, Comprehension and Tasking in Soar/ITS. One of the main architectural features of Soar/ITS is its framework for representing and supporting highly interactive visual environments. This framework consists of independent perceptual, motor and cognitive processors that communicate via a shared working memory. Its input-process-output cycle is known as PEACTIDM, which is an acronym for the eight distinct types of operators involved in the cycle. PEACTIDM provides Soar/ITS with a mechanism for focusing attention, comprehending visual input, and performing intentional motor actions under the control of cognition [Newell, 1990, page 262]. This type of cycle comes from detailed studies of expert human behavior, and forms the basis for many of the cognitive plausibility arguments we could make about our tutor model. We believe that a major reason we get transfer from actions the tutor takes to the monitoring of the actions that the tutor sees the student make is that both use the same control mechanisms.

The perceive and encode operators sense the environment and encode Soar's percepts before inserting them into working memory. These operators are not influenced by cognition. They are implemented as C functions that map raw input data from, say, a microworld, into structured working memory elements that can be manipulated by productions stored in long term memory. On the output end of the cycle, the decode and move operators convert Soar/ITS's intentions into motor actions. Like their analogs on the input side, these operations are not cognitive and are implemented as mappings from structured working memory elements into lists that can be interpreted by the simulated external world environment.

The cognitive portion of the tutor's interaction with the world involves the use of four operators: attend, comprehend, task and intend. The attend operator is used to focus the tutor's attention (not its eyes) on certain things -- e.g., on a vector in its field of vision or on the topic of the current lesson.[3] The comprehend operator analyzes the attended objects for their significance. An example of comprehension would be the realization that one of the tutor's goals (intentions) had successfully been achieved. Once a goal is accomplished, it is removed from the tutor's goal stack. As we shall see, since a substantial portion of its comprehension takes place through chunk firings, Soar/ITS's comprehension of its own actions improves over time.[4]

The next step in the PEACTIDM cycle involves tasking, i.e., deciding what task to perform next. Depending on its current state, a task decision could occur as either the immediate result of a chunk firing or the end product of a protracted search process. Tasks in Soar/ITS include the whole range of tutoring activity, from primitive motor actions in a microworld, to high level decisions about what lesson to teach next. Functionally, the main result of the tasking operator is the creation of an intention, a modification to the internal state. The intend operator executes the intention either by making an immediate change to its internal state -- e.g., in the case of purely "mental" actions such as deciding what subject to teach -- or by formulating a motor command -- e.g., in the case of external actions such as moving the mouse. Once formulated, motor commands are automatically executed by the decode and move operators. As a side effect, the intend operator also creates an expectation that describes how to recognize successful completion of an intended action. This is done through a visualization process that is described below.

One of the main strengths of the PEACTIDM model is that it allows Soar/ITS to maintain a forest of independent goals that operate over different levels of abstraction and over different time frames. For example, in the vector-drawing example, the draw-vector goal remains active over an extended period of time while Soar uses MEA knowledge to figure out a sequence of primitive motor actions that draw a vector. Unlike in the vector-drawing example, the simultaneous goals needn't be hierarchical; they can be completely independent of each other. This ability to pursue a number of tasks simultaneously is important in a process as dynamic and unpredictable as tutoring. As we shall see in the next section, Soar/ITS's ability to learn goal recognition chunks plays a significant role within the PEACTIDM framework.
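The eight-step cycle might be caricatured as follows. This is a schematic sketch: the body of each step is a placeholder of our own invention, and only the Perceive-Encode-Attend-Comprehend-Task-Intend-Decode-Move control structure follows the text:

```python
def peactidm_cycle(world, memory):
    raw = world["display"]                       # Perceive: raw sensory input
    percepts = [obj.lower() for obj in raw]      # Encode: map to working-memory elements
    focus = percepts[0] if percepts else None    # Attend: focus on one element
    achieved = focus in memory["expectations"]   # Comprehend: match against expectations
    task = "next-step" if achieved else "wait"   # Task: decide what to do next
    intention = {"act": task, "focus": focus}    # Intend: commit to the decision
    command = "cmd:" + task                      # Decode: translate to a motor command
    world["actions"].append(command)             # Move: execute in the external world
    return intention

world = {"display": ["VECTOR-AT-XY"], "actions": []}
memory = {"expectations": {"vector-at-xy"}}
intention = peactidm_cycle(world, memory)
assert intention["act"] == "next-step"
assert world["actions"] == ["cmd:next-step"]
```

In the real architecture, perceive/encode and decode/move are non-cognitive C-level mappings, while attend, comprehend, task and intend are Soar operators; the sketch collapses all eight into one function purely to show the ordering.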

3. Varieties of Learning in Soar/ITS.

This section describes important varieties of learning that have been observed in Soar/ITS. We attempt to indicate ways in which the rather rudimentary levels of the tutor's current learning ability can be developed into powerful mechanisms that are flexible, practical and cognitively plausible.

3.1 Learning is ubiquitous. Learning in Soar/ITS is ubiquitous and integrated into all aspects of the tutoring process. Even the simplest tasks in Soar/ITS generate many chunks, whose main function seems to be speeding up execution of the tutor's tasks. For example, Table 2 summarizes the kinds and frequencies of chunks built during two traversals of the curriculum tree.

In this case no actual tutoring was being done; the tutor's actions here are more closely akin to an instructor reviewing the topics in his syllabus. During the first run (without chunks), 1291 decision cycles were required to step through the curriculum. A total of 301 chunks were built, most of which are involved in MEA, and a total of 343 chunk firings occurred. The second run (with chunks) required only 266 decision cycles, as the chunks built during the first run took over most of the decision process. The fact that no chunks were built on the second run means that this particular task has been completely chunked. Cognitively speaking, this is not as implausible as it might seem, since the tutor merely runs through the leaves of the curriculum tree in a linear order.

                      Decision cycles    Chunks built    Chunks fired
Type of chunk         First    Second    First   Second   First   Second
--------------------  ------   ------    -----   ------   -----   ------
MEA chunks               769        0      182        0     251      213
All other chunks         522      266      119        0      92        5
TOTAL                   1291      266      301        0     343      218

Table 2. Summary of chunks in Soar6/ITS. Decision cycles correspond roughly to operator firings. The 1291 decision cycles of the first run took less than a minute on a Sparc II.

The learning evidenced here is mostly speed-up learning. It is a staple of Soar systems, and it is attributable mostly to knowledge compilation. Knowledge compilation results when a primitive operation -- e.g., move-mouse-to-XY -- is chunked. This is tantamount to replacing search (problem solving) with match (recognition). Note that in the first run 251 chunk firings occurred, as chunks built earlier in the run were used in tasks that occurred later in the run. Note also that in the second run almost all of the chunk firings (213 out of 218) were MEA chunks; in other words, most of the problem solving knowledge is now compiled into chunks. The roughly fivefold speedup seen in this example is not unusual for Soar. If nothing else, it provides one sign of hope that learning in Soar/ITS will make the tutoring process tractable.

3.2 Learning New Tasks. One of the tutoring tasks performed by Soar/ITS is to demonstrate how to solve certain problems. For example, when given the problem of identifying the forces acting on two charged particles (Figure 1), it can correctly draw arrows that represent these forces. To draw a vector, Soar/ITS puts together a sequence of primitive motor actions -- move-the-mouse, etc. -- that result in the existence of a vector in the correct location in the display. During the first pass through the vector drawing task, Soar/ITS relies entirely on searching through problem space knowledge that has been encoded by hand. However, it builds chunks at every step of the process. So when confronted with the task of drawing a second vector at a different location, chunks fire and it immediately carries out the sequence of primitive actions. Once the task of drawing a vector is "fully chunked" Soar/ITS has, in effect, learned a new composite task -- viz., drawing a vector -- that can be used as a primitive operator in future problem solving. This ability to compile task knowledge into a series of chunk firings is easily extended and generalized in Soar.[5]

Chunks learned during the vector drawing task transfer not only to other vector drawing tasks but to other tasks in the domain. For example, during the vector drawing task, Soar/ITS builds chunks for the move-mouse-to-XY task. Since move-mouse-to-XY occurs as a subgoal of many different tasks -- e.g., selecting a button in the display, selecting a particle -- the move-mouse chunks built during vector drawing fire whenever a task calls for moving the mouse. This kind of transfer across tasks leads also to a speed-up effect. For the vector-drawing task, Soar5/ITS required 260 decision cycles to draw the first vector, but only 83 to draw the second vector. Similarly, the first time the move-mouse-to-X-Y task was attempted required 12 decisions, whereas on subsequent trials it required only 3 decisions.

Speedup due to transfer of this sort occurs routinely in Soar systems, as one should expect. The question we need to address is whether there are forms of knowledge transfer that are particularly beneficial in tutoring. The answer is that knowledge built up in performing tasks like the vector drawing problem does transfer to the task of monitoring a student trying to solve that and similar problems. This is cognitively plausible: knowing how to solve problems efficiently helps us build up expectations about what the student should do. If we can recognize what the student is doing as progress toward a solution, we can let the student continue. After a certain period of being unable to see progress, we have to intervene. As we show in the next section, learning goal recognition chunks facilitates this task.

3.3 Learning Goal Recognition. Soar/ITS uses chunking to build up goal recognition knowledge. This is accomplished through a process that is analogous to the process of building up an expectation of what's going to happen next. Each time a task (goal) is proposed Soar/ITS attempts to "visualize" what the world would be like if the task were successfully completed. This results in the creation of chunks that recognize (comprehend) when the goal has been achieved. Once the goal has been visualized in this way, Soar/ITS executes the intended action and continues on to its next task. Goal recognition chunks serve as operator proposals -- each proposes an instance of the comprehend operator. As they accumulate in long term memory, they constitute a form of episodic knowledge, in particular, a form of "I remember that."

The visualization mechanism in Soar/ITS is designed deliberately to craft recognition chunks general enough to fire in similar contexts, thereby allowing transfer of knowledge within a given task (e.g., drawing a second vector) and across tasks (e.g., selecting a button in the display). The mechanism works as follows: For a given task, if no visualization chunk yet exists for the task -- as in the first attempt to move-mouse-to-XY -- Soar/ITS generates an impasse. It sets itself the goal of trying to comprehend what a successful execution of the task would look like and drops into an imagery problem space. The imagery space is like a smaller version of the real world; it contains only those objects relevant to the particular task. Soar/ITS imagines how the world would look if the operator were applied in the imagery space. As a result, mouse-at-XY becomes true in the imagery space. The impasse is resolved and a comprehension chunk of the following form is built:

IF Soar intends [move-mouse TO <target>]

AND the mouse and <target> are in same location

THEN comprehend that [move-mouse TO <target>]

Because of its generality, this chunk will henceforth fire whenever a move-mouse-to-XY task is successfully completed.
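The way a visualization step can compile a target-general recognition test might be sketched as follows. This is a toy of our own design; only the move-mouse example and the generality over <target> come from the text:

```python
recognition_chunks = {}   # long-term memory: what success "looks like"

def visualize(operator):
    """Imagery-space step: imagine the operator applied, and compile a
    recognition test general enough to fire for any <target>."""
    if operator not in recognition_chunks and operator == "move-mouse":
        # the learned chunk: IF intending [move-mouse TO <target>]
        # AND the mouse and <target> are in the same location
        recognition_chunks[operator] = \
            lambda world, target: world.get("mouse") == target

def comprehend(world, operator, target):
    """THEN comprehend that [operator TO <target>] has been achieved."""
    chunk = recognition_chunks.get(operator)
    return bool(chunk and chunk(world, target))

visualize("move-mouse")                                  # builds the chunk once
assert comprehend({"mouse": "XY"}, "move-mouse", "XY")   # fires at XY...
assert comprehend({"mouse": "ZW"}, "move-mouse", "ZW")   # ...and transfers to ZW
assert not comprehend({"mouse": "AB"}, "move-mouse", "XY")
```

The point of the sketch is the second assertion: because the compiled test is parameterized over the target rather than tied to XY, the same chunk supports moving the mouse anywhere, which is the within-task and cross-task transfer described above.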

For composite tasks, such as draw-vector, the process just described is somewhat different. In that case, the attempt to visualize draw-vector will fail initially, since the preconditions for drawing a vector at XY are not true (e.g., see Table 1). Failure to visualize means only that Soar/ITS will test whether vector-at-XY is true during every PEACTIDM cycle as long as the draw-vector intention remains active. It will succeed, finally, when the vector appears drawn in the microworld. At that point it will create a chunk of the following form:

IF Soar intends [draw-arrow ON <target>]

AND there is an arrow in the location of <target>

THEN comprehend that [draw-arrow ON <target>]

Goal recognition chunks formed during problem solving are applicable during student monitoring. For example, when Soar/ITS gives the problem in Figure 1 to a student, it puts itself in a monitor mode. On each PEACTIDM cycle it attempts to comprehend what it observes and sets up expectations for itself. As long as the student makes progress toward the goal, his actions will match the tutor's expectations. However, as soon as a student action fails to match the expectations, an impasse results and the tutor considers whether to intervene. Currently, this capability in Soar5/ITS is limited. Soar/ITS uses chunks created during its own problem solving to recognize when a student has successfully solved the problem and when the student fails. However, its current level of knowledge is not capable of sophisticated intervention, and when the student makes a mistake it simply aborts the problem. Moreover, its ability to recognize failure is very inflexible: e.g., it currently knows only one sequence of operations to draw a vector. The important point, though, is that the ability to trace the student's performance is derived by reusing the goal recognition chunks that were created during the tutor's own attempt to solve the problem. This is a significant form of knowledge transfer.
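The monitor mode described here, with its single known operator sequence, can be approximated by a simple expectation-matching loop. This is a hypothetical sketch; the function, return values, and action names are ours, standing in for the tutor's goal recognition chunks:

```python
def monitor(expected_actions, student_actions):
    """Match observed student actions against the tutor's expectations.
    Returns a (status, step) pair: the student is in progress, has
    solved the problem, or has produced an impasse at some step."""
    for step, action in enumerate(student_actions):
        if step >= len(expected_actions) or action != expected_actions[step]:
            return ("impasse", step)   # expectation violated: consider intervening
    if len(student_actions) == len(expected_actions):
        return ("solved", len(student_actions))
    return ("in-progress", len(student_actions))

# Expectations derived from the tutor's own solution (cf. Table 1):
expected = ["move-mouse-XY", "click-mouse", "move-mouse-ZW", "unclick-mouse"]

assert monitor(expected, expected) == ("solved", 4)
assert monitor(expected, expected[:2]) == ("in-progress", 2)
assert monitor(expected, ["move-mouse-XY", "unclick-mouse"]) == ("impasse", 1)
```

The rigidity noted in the text is visible in the sketch: any deviation from the one memorized sequence produces an impasse, even if the student's alternative ordering would also solve the problem. Section 4 suggests how episodic learning of multiple solution paths could relax this.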

4. Discussion.

We have tried to point out ways in which a tutoring architecture that incorporates learning can be practical, adaptive and cognitively plausible. The preceding discussion focused on Soar/ITS's current capabilities. In this section we would like to discuss, in somewhat more speculative terms, what we see as the potential for this architecture.

As the preceding example illustrates, student modeling in Soar/ITS is based on the learning of episodic chunks, i.e., chunks that recognize when certain goals are achieved. How general is this mechanism of using episodic chunks to transfer knowledge? We have applied this same mechanism to the problem of remembering where a student left off during the last lesson.

In Soar6/ITS, the curriculum is represented as a hierarchy whose leaves are individual lessons and problems (Figure ). Currently, the curriculum is traversed in a linear fashion, whereby a student progresses left to right through the lessons represented by the leaf nodes of the curriculum tree. Lessons are either passed or failed, and failed lessons are repeated. Admittedly, this is not a very progressive way to tutor! When a tutoring session ends, Soar/ITS gives itself the goal of remembering the student's last lesson and creates episodic chunks that have the following form:

IF goal is recall-student

AND student-name is STUDENT-1

AND last-lesson is LESSON-1

THEN propose a task operator that makes current-lesson = LESSON-1.

The next time Soar/ITS begins a tutoring session for STUDENT-1, it again sets itself the goal of recalling the student's last lesson, and all of the episodic last-lesson chunks created during previous sessions fire. This creates an impasse -- i.e., an operator proposal tie -- which represents an opportunity to bring appropriate knowledge to bear on resolving the impasse. In the present prototype, the tie is settled in a trivial fashion -- i.e., by giving preference to the highest numbered lesson. So, functionally at least, there is little difference in the present prototype between this mechanism and simply recording the last lesson in a file and then reading that information at the beginning of the next tutoring episode. However, despite the limitations of the present Soar/ITS prototype, the potential advantages of a student model based on episodic memory are striking. For example, given the impasse over recall-last-lesson, Soar/ITS could base its decision on whether the student is likely to remember what he or she learned last time or on how strongly the student performed on the last lesson. These additional bits of knowledge about the student could themselves be built up through episodic memory about that particular student and, potentially, about other students in this same situation.
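The episodic recall-and-tie-break behavior described above can be sketched as follows. The storage format and function names are ours; the preference for the highest-numbered lesson follows the text:

```python
episodic_chunks = []   # grows monotonically, one chunk per session

def end_session(student, lesson):
    """At the end of a session, build an episodic last-lesson chunk."""
    episodic_chunks.append({"student": student, "last-lesson": lesson})

def recall_last_lesson(student):
    """All matching episodic chunks 'fire', producing a proposal tie;
    here the tie is resolved by preferring the highest-numbered lesson."""
    proposals = [c["last-lesson"] for c in episodic_chunks
                 if c["student"] == student]
    return max(proposals, default=None)

end_session("STUDENT-1", 1)
end_session("STUDENT-1", 3)
end_session("STUDENT-2", 2)
assert recall_last_lesson("STUDENT-1") == 3
assert recall_last_lesson("STUDENT-3") is None
```

A richer tie-resolution policy (e.g., weighting proposals by how strongly the student performed) would replace the `max` with a scoring function over the competing chunks, which is exactly the kind of knowledge the text suggests could itself be learned episodically.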

Similar points can be made about the student monitoring example in the previous section. As Soar/ITS learns new ways to solve a problem, its response to previously impenetrable student actions will become more flexible. For example, if it learns that, from state A in a given problem, either operator B or operator C will make progress toward the goal, then both expectations could be raised. Although this is quite speculative, it is difficult to see where the limits of episodic memory would lie. Perhaps the process would bog down as too many chunks accumulate. Or perhaps the transfer we see in our simple examples will not scale. But these are empirical issues worth further investigation. The point is that the use of episodic knowledge as a basis for student modeling appears to open up very flexible, cognitively plausible opportunities.

5. Conclusion.

Soar/ITS provides evidence that machine learning can be used effectively to create a tutoring architecture that is flexible, adaptive, and practical. It shows evidence of significant speed-up learning and transfer of knowledge both within and across tasks. Its use of episodic knowledge, particularly of goal recognition knowledge, is applicable to both the student modeling and tutor control problems. Although the examples provided of its capabilities in these two areas were limited in scope, they provide at least suggestive evidence of its potential. Since an ITS must work in a complex, dynamic environment, it must be able to maintain a forest of goals and to recognize changes in their status efficiently. As the ITS's knowledge grows and its approach to tutoring becomes more sophisticated, it becomes even more crucial that there be an effective way to create and maintain independent goal recognizers. The evidence suggests that Soar/ITS's visualization mechanism may be an efficient and practical way to address this problem, and that learning goal recognizers will be a pragmatic method of learning goal states. More extensive testing of its capabilities should lead shortly to a proof-of-concept prototype that can be tested with real students.

6. References.

Anderson, J.R. (1983). The Architecture of Cognition. Cambridge, MA: Harvard University Press.

Anderson, J.R. (1984). Cognitive psychology and intelligent tutoring. Proceedings of the Cognitive Science Society Conference, Boulder, Colorado, pp. 37-43.

Anderson, J.R. (1992). Intelligent tutoring and high school mathematics. In Proceedings of ITS-92, New York: Springer-Verlag, pp. 1-10.

Hill, R. and Johnson, L. (1993). Designing an intelligent tutoring system based on a reactive model of skill acquisition. In Proceedings of AI-ED 93, pp. 273-281.

Laird, J., Rosenbloom, P., Newell, A. (1985). Chunking in soar: the anatomy of a general learning mechanism. CMU-CS-85-154, Department of Computer Science, Carnegie Mellon University, Pittsburgh, PA.

Langley, P., Ohlsson, S. and Sage, S. (1984). A machine learning approach to student modeling. CMU-RI-TR-84-7, Robotics Institute, Carnegie Mellon University, Pittsburgh, PA.

Lewis, R. (1992). Recent developments in the NL-Soar garden path theory. CMU-CS-92-141, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA.

Newell, A. (1990). Unified Theories of Cognition. Cambridge, MA: Harvard University Press.

O'Shea, T. (1982). A self-improving quadratics tutor. In D. Sleeman & J.S. Brown (Eds.), Intelligent tutoring systems, London: Academic Press.

Rosenbloom, P., Laird, J. and Newell, A. (1987). Knowledge level learning in soar. In Proceedings of the Sixth National Conference on Artificial Intelligence, pp. 499-504.


[1] The difference between Soar5/ITS and Soar6/ITS is that they straddle two versions of Soar. Soar 5 is lisp based and Soar 6 is C based. Both prototypes employ the same architecture and task domain.

[2] Another advantage of using MEA to represent task knowledge is that once a declarative representation like Table 1 has been constructed, one can more easily manipulate the knowledge, by manipulating the table. The information in the table can also be used as input to a function that generates Soar productions. We have not yet made use of this capability in Soar/ITS.

[3] Currently no bounds are placed on the scope of Soar/ITS's attention in terms of the number of distinct elements that can be held in attention; this obviously limits its cognitive plausibility.

[4] An important limitation in Soar/ITS is that no effort has yet been made to extend the comprehend operator into the realm of, say, understanding the meaning of natural language input or understanding new objects inductively, although other Soar systems have made significant progress in such areas [Lewis, 1992; Rosenbloom, Laird & Newell, 1987].

[5] One current limitation of Soar/ITS is that it does not fully complete the abstraction process described here, which would involve giving the composite action a name, although this "data chunking" step has been solved by other Soar systems [].