About the Global Suggestion Mechanisms in ACTIVEMATH

Erica Melis and Eric Andres
DFKI Saarbrücken, Germany
melis|eandres@dfki.de

Abstract:

We investigate feedback in a learning environment for mathematics. In particular, we distinguish local and global feedback. In more detail, we describe the global feedback and suggestions in ACTIVEMATH, its blackboard architecture with reusable and easily modifiable components, and some of the user-adaptive global learning suggestions implemented so far.

keywords: learning suggestions, blackboard architecture, suggestion rules

Introduction

ACTIVEMATH is a user-adaptive web-based learning environment for mathematics. It generates the learning material for the individual learner according to her goals, preferences, and mastery of concepts as well as to certain learning scenarios [12].

ACTIVEMATH's user model consists of the components history, profile, and mastery-level.

The history contains information about the user's activities (reading time for items, exercise success rate, editing of the user model). The user profile contains her preferences, the scenario, and the goals submitted for the session. To represent the concept-mastery assessment, the user model contains values for a subset of the competences of Bloom's mastery taxonomy [5]:

  1. Knowledge-mastery is connected to reading (or transfer of) a text. Bloom defines the needed skills as recall of information and knowledge of major ideas.
  2. Comprehension-mastery can be achieved by relating several concepts and then answering questions or understanding examples. Bloom defines the needed skills as understanding information, translating knowledge into a new context, and predicting consequences.
  3. Application-mastery relates to solving problems by applying the concept. Bloom defines the needed skills as using information in new situations and solving problems with the required skills or knowledge.1

This is also similar to Merrill and Shute's taxonomies of outcome types [].

Finishing an exercise or going to another page triggers an update of the user model. Since different types of user actions reflect and uncover different competencies, they serve as the sources for primarily modifying2 the values of the corresponding competencies. In particular, reading concepts corresponds to 'Knowledge', following examples corresponds to 'Comprehension', and solving exercises corresponds to 'Application'.
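
To make this mapping concrete, the following minimal Java sketch routes action types to the competency value they primarily modify. All names (MasteryUpdater, ActionType, the 0.3 learning rate) are our own illustrations, not ACTIVEMATH's actual API.

    import java.util.EnumMap;
    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch: routing user-action types to the Bloom competency
    // whose mastery value they primarily modify (reading -> Knowledge,
    // examples -> Comprehension, exercises -> Application).
    public class MasteryUpdater {

        enum ActionType { READ_CONCEPT, FOLLOW_EXAMPLE, SOLVE_EXERCISE }
        enum Competency { KNOWLEDGE, COMPREHENSION, APPLICATION }

        // concept id -> competency -> mastery value in [0,1]
        private final Map<String, EnumMap<Competency, Double>> mastery = new HashMap<>();

        /** The competency primarily touched by an action type. */
        static Competency competencyFor(ActionType type) {
            switch (type) {
                case READ_CONCEPT:   return Competency.KNOWLEDGE;
                case FOLLOW_EXAMPLE: return Competency.COMPREHENSION;
                default:             return Competency.APPLICATION;
            }
        }

        /** Move the stored mastery value toward the observed evidence. */
        void update(String conceptId, ActionType type, double evidence) {
            EnumMap<Competency, Double> values =
                mastery.computeIfAbsent(conceptId, k -> new EnumMap<>(Competency.class));
            Competency c = competencyFor(type);
            double old = values.getOrDefault(c, 0.0);
            values.put(c, old + 0.3 * (evidence - old)); // invented learning rate
        }
    }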

We have now added diagnoses of several of the student's activities as well as the generation of learning suggestions (feedback). This article addresses the different types of feedback in ACTIVEMATH, the blackboard architecture for diagnoses and global feedback, and concrete suggestors.

Traditionally, user-adaptive feedback and help in intelligent tutoring systems (ITSs) has been designed as a direct response to students' problem-solving actions, and the feedback is designed to help students accomplish a solution, e.g., in the PACT tutors [4] or in CAPIT [11]. Frequently, the feedback is an explicitly authored text that reflects the author's experience with typical errors in a specific problem-solving context. Such specific feedback is questionable because authoring all of it is a very laborious task (see, e.g., [18]) and often requires an extreme authoring effort, e.g., for mal-rules and for explicitly authoring, for each exercise, what can go wrong and what the reason is for each erroneous action. Therefore, we try to avoid this kind of diagnosis (and the corresponding feedback) in ACTIVEMATH. Moreover, the usage and benefit of more and more detailed help may strongly depend on the individual user [1], and some users might even dislike frequent suggestions and intrusion [6].

Although most feedback in ITSs targets problem-solving activities, some systems provide feedback targeting the meta-cognitive skills of the student. For instance, [9,2] try to support self-explanation of worked-out examples; SciWise [20] provides feedback for planning and reflecting activities in experimental science and for collaboration. We think that there is much room for further development in this and similar directions, and we present an approach for delivering global feedback in the learning process.

This article suggests distinguishing local and global feedback. It then focuses on global feedback as implemented in ACTIVEMATH. In more detail, we describe the architecture and some of the user-adaptive global learning suggestions implemented so far.


Separation of Local and Global Feedback

An ITS can provide two general types of feedback and guidance: a local response to student activities, targeted at the recognition and correction of the learner's single problem-solving steps, and global feedback targeting the entire learning process. This differentiation somewhat resembles the distinction between task-level and high-level feedback described in the four-process model of [3].

These two types of feedback differ with respect to realm, content, and aim and, correspondingly, in whether or not they are arranged close to exercises/examples. That is, local also means that the feedback is provided immediately after each problem-solving action of the user and should be attached directly to the problem solving (maybe even to the individual step). The global feedback and suggestions, in contrast, can be provided independently and may be delayed, i.e., delivered when the user has finished reading a text, studying an example, or working on an exercise.

Many ITSs do not provide global feedback at all. And even those that do, such as SQL-Tutor [14] and CAPIT [11], do not separate local and global feedback, say, architecturally.

In ACTIVEMATH, local and global feedback are distinguished because of their different aims, different foci, different learning dimensions, and different mechanisms. In addition, the employment of service systems for checking the correctness of a problem-solving step and for generating local problem-solving feedback is a (practical) reason for separating local and global feedback. Local feedback such as 'syntax error', 'step not correct, because...', 'task not finished yet', or 'step not applicable' is computed by a service system and related to a problem-solving step in an exercise or to the final achievement in an exercise. The current implementation of local feedback is explained in [13] and more technically at http://www.ags.uni-sb.de/~adrianf/activemath/.

The global feedback scaffolds the student's navigation, her content sequencing (including examples and exercises), and her meta-cognition. The global feedback and suggestions may concern, e.g., planning what to learn, repeat, look at, or do next; navigating the content; reflecting; and monitoring (which also includes summarizing, comparing, finding similar problems, etc.).

In what follows, we deal with global feedback. Note that K/C/A-present() is an abbreviation for: present content contributing to the concept in a K-, C-, or A-oriented way, respectively.

K/C/A-present() are functions of ACTIVEMATH's pre-existing course generator. K-oriented means presenting just concepts and possibly explanations; C-oriented means presenting concepts and examples; A-oriented means presenting the full spectrum of content including concepts, examples, and exercises.
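
As an illustration of this selection, the following sketch filters item types by goal level. The class and enum names are our own, since the paper does not expose the course generator's interface.

    import java.util.EnumSet;
    import java.util.Set;

    // Hypothetical sketch of the item selection behind K/C/A-present(): the
    // goal level determines which categories of content the course generator
    // assembles for a concept. All names are illustrative.
    public class PresentationFilter {

        enum GoalLevel { K, C, A }
        enum ItemType { CONCEPT, EXPLANATION, EXAMPLE, EXERCISE }

        /** Item types included for a given goal level. */
        static Set<ItemType> itemsFor(GoalLevel level) {
            switch (level) {
                case K:  return EnumSet.of(ItemType.CONCEPT, ItemType.EXPLANATION);
                case C:  return EnumSet.of(ItemType.CONCEPT, ItemType.EXAMPLE);
                default: return EnumSet.allOf(ItemType.class); // A: full spectrum
            }
        }
    }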


Global Feedback in ACTIVEMATH

The computation of global feedback requires diagnostic information about several user activities. The information about the student's navigation, reading, understanding, and problem-solving actions, e.g., their duration and success rate, has to be used as a basis for user-adaptive suggestions. Moreover, information about the learner's history of actions and information from her user model is necessary to generate useful suggestions.


Blackboard Architecture for Global Feedback

The architecture for the global suggestion mechanism clearly separates diagnoses and suggestions, as shown in Figure 1. An advantage of the separation of evaluation and suggestions is that the same evaluation results can be used by different suggestion mechanisms, in different pedagogical strategies, and later also by a dialog system. For instance, if the diagnosis yields seen(example, insufficient)3, then the example is presented again in a strict-guidance strategy but not in a weak-guidance strategy.
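
A minimal sketch of this strategy-dependence, assuming hypothetical Diagnosis and Strategy types (ACTIVEMATH's actual interfaces are not shown in this paper):

    // Hypothetical sketch: one diagnosis, two pedagogical strategies.
    // The Diagnosis and Strategy types are our own illustration.
    public class StrategyExample {

        /** Stands for a diagnosis such as seen(example, insufficient). */
        record Diagnosis(String item, boolean sufficientReadingTime) {}

        interface Strategy {
            /** Decide whether to present the item again. */
            boolean representItem(Diagnosis d);
        }

        public static void main(String[] args) {
            Diagnosis d = new Diagnosis("example", false); // insufficient time

            Strategy strictGuidance = diag -> !diag.sufficientReadingTime();
            Strategy weakGuidance   = diag -> false; // leave it to the learner

            System.out.println("strict: " + strictGuidance.representItem(d)); // true
            System.out.println("weak:   " + weakGuidance.representItem(d));   // false
        }
    }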

Evaluators

Some evaluators provide a diagnosis immediately from a single user action, while other evaluators infer a diagnosis from the immediate diagnoses and additional information. In Figure 1, several immediate and one intermediate evaluator are displayed. New immediate and intermediate diagnosis agents can easily be added, e.g., an evaluator for the individual average reading time.

Figure 1: The architecture of evaluator and suggestion mechanisms

The immediate evaluators each watch one type of user activity: reading content, studying examples, or working on exercises.

The immediate evaluators pass their results to a diagnosis blackboard (DBB) and to the user model (to the user's mastery-level of concepts and to the activity history). The current updating mechanism for mastery values in the user model is described in [12]. Essentially, K-mastery values are triggered by reading, C-mastery values by dealing with examples, and A-mastery values by dealing with exercises. A Bayesian Net user model, including its updating mechanism, is future work.

Intermediate diagnoses are computed by other evaluators from the information on the DBB and in the user model. These diagnoses are written on the DBB too. The evaluators for intermediate diagnoses each watch the DBB entries and the user model and currently infer intermediate diagnoses such as seen(example, insufficient) and seenButUnknown().

As displayed in Figure 1, several suggestors compute global feedback from the diagnoses on the DBB and write it on the suggestion blackboard (SBB). If necessary, the results are sent to a metaReasoner that rates the different suggestions on the SBB. Then the best-rated suggestions are executed.
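
The control loop can be pictured as in the following Java sketch. The types and the single evaluator/suggestor pair are invented for illustration; they stand in for the JESS-based components described below, not for ACTIVEMATH's actual API.

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    // Hypothetical sketch of the two-blackboard control loop. Evaluators post
    // diagnoses on the DBB, suggestors read the DBB and post rated suggestions
    // on the SBB, and a metaReasoner executes the best-rated suggestion.
    public class SuggestionPipeline {

        record Diagnosis(String description) {}
        record Suggestion(String action, double rating) {}

        interface Evaluator { List<Diagnosis> evaluate(); }
        interface Suggestor { List<Suggestion> suggest(List<Diagnosis> dbb); }

        public static void main(String[] args) {
            List<Diagnosis> dbb = new ArrayList<>();  // diagnosis blackboard
            List<Suggestion> sbb = new ArrayList<>(); // suggestion blackboard

            // An immediate evaluator posting one diagnosis on the DBB.
            Evaluator readingTime = () -> List.of(new Diagnosis("seen(example, insufficient)"));
            dbb.addAll(readingTime.evaluate());

            // A suggestor reacting to that diagnosis with a rated suggestion.
            Suggestor rePresent = diagnoses -> diagnoses.stream()
                    .filter(d -> d.description().contains("insufficient"))
                    .map(d -> new Suggestion("present the example again", 0.8))
                    .toList();
            sbb.addAll(rePresent.suggest(dbb));

            // metaReasoner: rate and execute the best suggestion on the SBB.
            sbb.stream()
               .max(Comparator.comparingDouble(Suggestion::rating))
               .ifPresent(s -> System.out.println("execute: " + s.action()));
        }
    }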


Suggestors

The single suggestors evaluate JESS rules and write their results on the SBB. These results are then realized to deliver a smile/noSmile shortcut and the more detailed feedback, which consists of a verbal feedback ``xx'' (only abbreviated in this paper) and one or several presentation-actions. We specified the reassuring 'smile' feedback because we feel that reassurance and positive feedback are important for motivational reasons [10] and for avoiding situations in which the learner feels insecure.

These presentation-actions include

  1. navigation help
  2. content suggestions

For each learning-goal level (K, C, or A), a suggestion strategy can be designed as a set of suggestors. In what follows, we present the suggestors of an A-level-oriented suggestion strategy which (1) reacts to navigation problems and (2) suggests content. First, we explain their essence; then the actual rules follow.

Note that in the expressions, seenButUnknown() denotes the missing level of mastery of the focus concept. The verbal feedback is abbreviated by 'noSmile' or 'smile' in the table. The detailed verbal and personalized feedback is being designed at the moment.

Rules for Navigation Suggestions

are needed because ACTIVEMATH delivers a hypertext learning document, and it is known that navigation in hypertext needs special attention [15]: getting lost in hyperspace puts an additional load on the learner.
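
As an illustration, here is a plain-Java rendering of the kind of condition-action rule such a suggestor might evaluate. In ACTIVEMATH the rules are written in JESS, and the concrete trigger (many visited pages without goal-related content) and threshold are our own assumptions:

    // Hypothetical plain-Java rendering of a navigation-suggestion rule; in
    // ACTIVEMATH such rules are written in JESS. The trigger and threshold
    // below are invented for illustration.
    public class NavigationRule {

        record NavDiagnosis(int pagesVisited, int goalRelatedPages) {}
        record Suggestion(String shortcut, String presentationAction) {}

        /** Fires when the learner browses many pages without goal-related content. */
        static Suggestion apply(NavDiagnosis d) {
            if (d.pagesVisited() >= 10 && d.goalRelatedPages() == 0) {
                return new Suggestion("noSmile", "offer a link back to the current goal concept");
            }
            return null; // rule does not fire
        }

        public static void main(String[] args) {
            Suggestion s = apply(new NavDiagnosis(12, 0));
            if (s != null) System.out.println(s.shortcut() + ": " + s.presentationAction());
        }
    }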

Rules for Content Suggestions

are needed especially if the learner has not yet reached the goal level of mastery. As opposed to the local feedback that corrects single problem-solving steps, the global feedback described below prompts and supports the learner in activities such as repeating, self-explaining, comparing, varying, and information gathering, which are known to improve learning; see, e.g., [7,2,16,17,8].
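
Again as a hedged illustration in plain Java rather than JESS: a content rule that reacts to the seenButUnknown() diagnosis by re-presenting A-oriented content, with the 'smile' branch as reassurance. Only seenButUnknown and A-present follow the paper's terminology; everything else is invented.

    // Hypothetical plain-Java rendering of a content-suggestion rule for an
    // A-level goal; the actual rules are JESS rules. seenButUnknown and
    // A-present follow the paper's terminology, everything else is invented.
    public class ContentRule {

        record MasteryDiagnosis(String concept, boolean seenButUnknown) {}
        record Suggestion(String shortcut, String presentationAction) {}

        /** Seen but not mastered: re-present A-oriented content; otherwise reassure. */
        static Suggestion apply(MasteryDiagnosis d) {
            if (d.seenButUnknown()) {
                return new Suggestion("noSmile", "A-present(" + d.concept() + ")");
            }
            return new Suggestion("smile", "no presentation action");
        }

        public static void main(String[] args) {
            Suggestion s = apply(new MasteryDiagnosis("derivative", true));
            System.out.println(s.shortcut() + ": " + s.presentationAction());
        }
    }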

More rules are planned for improving motivation of the learner.

Conclusion

The web-based learning environment ACTIVEMATH presents content, worked-out examples, and exercises to the student rather than exercises only or examples only. This presentation may include incomplete examples, elaborations, examples with built-in questions, etc. We have now added two types of feedback: local feedback, which is described elsewhere, and global feedback. An on-line demo of ACTIVEMATH is available at http://www.activemath.org.

In this article, we described our research on feedback in ACTIVEMATH and beyond. It includes the distinction between local and global feedback as well as the separation of diagnosis and global suggestion mechanisms by an architecture with two blackboards.


Future Work

Most importantly, the diagnoses and suggestion mechanisms have to be evaluated empirically with actual users. Then the mechanisms will be improved according to the test results.

The next investigations focus on the design of a gender-specific and personalized verbalization of the feedback. In addition to the described way of delivering feedback to a user automatically, we will investigate and design feedback that can be actively requested. Moreover, we will devise several suggestion strategies for different learning goal-levels and for different learning scenarios.

Acknowledgement

We thank the ACTIVEMATH group, in particular Carsten Ullrich, Paul Libbrecht, Jochen Büdenbender, and Sabine Hauser, for the in-depth discussions, as well as Bernhard Jacobs for his supply of empirical knowledge.

Bibliography

1
V. Aleven and K.R. Koedinger.
Limitations of student control: Do students know when they need help?
In G. Gauthier, C. Frasson, and K. VanLehn, editors, International Conference on Intelligent Tutoring Systems, ITS 2000, pages 292-303. Springer-Verlag, 2000.

2
V. Aleven, K.R. Koedinger, and K. Cross.
Tutoring answer explanation fosters learning with understanding.
In S.P. Lajoie and M. Vivet, editors, Artificial Intelligence in Education, pages 199-206. IOS Press, 1999.

3
R.G. Almond, L.S. Steinberg, and R.J. Mislevy.
A sample assessment using the four process framework.
Educational Testing Service, 1999.
http://www.education.umd.edu/EDMS.

4
A.T. Corbett, K.R. Koedinger, and J.R. Anderson.
Intelligent tutoring systems.
In M.G. Helander, T.K. Landauer, and P.V. Prabhu, editors, Handbook of Human-Computer Interaction, pages 849-874. Elsevier Science, The Netherlands, 1997.

5
B.S. Bloom, editor.
Taxonomy of Educational Objectives: The Classification of Educational Goals. Handbook I: Cognitive Domain.
Longmans, Green, Toronto, 1956.

6
A. Bunt, C. Conati, M. Huggett, and Kasia Muldner.
On improving the effectiveness of open learning environments through tailored support for exploration.
In J.D. Moore, C.L. Redfield, and W.L. Johnson, editors, International Conference on Artificial Intelligence and Education, pages 365-376. IOS Press, 2001.

7
M.T.H. Chi, N. De Leeuw, M. Chiu, and C. Lavancher.
Eliciting self-explanations improves understanding.
Cognitive Science, 18:439-477, 1994.

8
M.T.H. Chi, S.A. Siler, H. Jeong, T. Yamauchi, and R.G. Hausmann.
Learning from tutoring.
Cognitive Science, 25:471-533, 2001.

9
C. Conati and K. VanLehn.
Teaching meta-cognitive skills: Implementation and evaluation of a tutoring system to guide self-explanation while learning from examples.
In S.P. Lajoie and M. Vivet, editors, Artificial Intelligence in Education, pages 297-304. IOS Press, 1999.

10
H. Mandl, H. Gruber, and A. Renkl.
Enzyklopädie der Psychologie, volume 4, chapter Lernen und Lehren mit dem Computer, pages 436-467.
Hogrefe, 1997.

11
M. Mayo, A. Mitrovic, and J. McKenzie.
An intelligent tutoring system for capitalisation and punctuation.
In Kinshuk, C. Jesshope, and T. Okamoto, editors, Advanced Learning Technology: Design and Development Issues, pages 151-154. IEEE Computer Society, 2000.

12
E. Melis, J. Buedenbender, E. Andres, A. Frischauf, G. Goguadze, P. Libbrecht, M. Pollet, and C. Ullrich.
ACTIVEMATH: A generic and adaptive web-based learning environment.
International Journal of Artificial Intelligence in Education, 12(4), Winter 2001.

13
E. Melis, J. Buedenbender, E. Andres, A. Frischauf, G. Goguadze, P. Libbrecht, and C. Ullrich.
Using computer algebra systems as cognitive tools.
In International Conference on Intelligent Tutoring Systems (ITS-2002), LNAI. Springer-Verlag, 2002.

14
A. Mitrovic and S. Ohlsson.
Evaluation of a constraint-based tutor for a database language.
International Journal of Artificial Intelligence in Education, 10(3-4):238-256, 1999.

15
T. Murray, J. Piemonte, S. Khan, T. Shen, and C. Condit.
Evaluating the need for intelligence in an adaptive hypermedia system.
In G. Gauthier, Claude Frasson, and K. VanLehn, editors, Intelligent Tutoring Systems, Proceedings of the 5th International Conference ITS-2000, volume 1839 of LNCS, pages 373-382. Springer-Verlag, 2000.

16
A. Renkl.
Worked-out examples: Instructional explanations support learning by self-explanations.
Learning and Instruction, 2001.

17
R. Stark.
Instruktionale Effekte beim Lernen mit unvollständigen Lösungen.
Forschungsberichte 117, Ludwigs-Maximilian-Universität, München, Lehrstuhl für Empirische Pädagogik und Pädagogische Psychologie, 2000.

18
K. VanLehn.
Bayesian student modeling, user interfaces and feedback: A sensitivity analysis.
International Journal of Artificial Intelligence in Education, 12:.., 2001.

19
L. Vygotsky.
The Development of Higher Psychological Processes.
Harvard University Press, Cambridge, 1978.

20
B.Y. White and T.A. Shimoda.
Enabling students to construct theories of collaborative inquiry and reflective learning: Computer support for metacognitive development.
International Journal of Artificial Intelligence in Education, 10:151-182, 1999.


Footnotes

... knowledge.1
Transfer-mastery could be the most advanced mastery, but currently we subsume it under application.
... modifying2
The implicit connections between competencies are also considered but will be better formalized in the Bayesian Net user model, which is in progress.
... insufficient)3
i.e., the time spent reading the example is insufficient