About the Global Suggestion Mechanisms in ACTIVEMATH
Erica Melis and Eric Andres
DFKI Saarbrücken, Germany
melis|eandres@dfki.de
Abstract:
We investigate feedback in a learning environment for mathematics. In
particular, we distinguish local and global feedback. In more detail, we
describe the global feedback and suggestions in ACTIVEMATH, its
blackboard architecture with its reusable and easily modifiable components, and
some of the user-adaptive global learning suggestions implemented so far.
keywords: learning suggestions, blackboard architecture, suggestion rules
ACTIVEMATH is a user-adaptive web-based learning environment for mathematics.
It generates the learning material for the individual learner according to her
goals, preferences, and mastery of concepts as well as to certain learning
scenarios [12].
ACTIVEMATH's user model consists of three components: history, profile, and
mastery-level. The history contains information about the user's activities
(reading time for items, exercise success rate, editing of the user model).
The user profile contains her preferences, the scenario, and the goals
submitted for the session.
To represent the concept mastery assessment, the user model
contains values for a subset of the competences of Bloom's mastery taxonomy
[5]:
- Knowledge
- Comprehension
- Application.
Knowledge-mastery is connected to reading (or the transfer of) a text; Bloom
defines the needed skills as recall of information and knowledge of major
ideas. Comprehension-mastery can be achieved by relating several concepts and
then answering questions or understanding examples; Bloom defines the needed
skills as understanding information, translating knowledge into a new context,
and predicting consequences. Application-mastery relates to solving problems by
applying the concept; Bloom defines the needed skills as using information (in
new situations) and solving problems with the required skills or
knowledge.1 This is also similar to Merrill's and Shute's taxonomies of outcome types [].
Finishing an exercise or going to another page triggers an updating of the user
model. Since different types of user actions reflect and uncover different
competencies, they serve as sources for primarily
modifying2 the values of the corresponding competencies. In particular, reading
concepts corresponds to 'Knowledge', following examples corresponds to
'Comprehension', and solving exercises corresponds to 'Application'.
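The following minimal Java sketch illustrates this mapping of activity types to
the competency values that they primarily update. All names (Competency,
ActionType, MasteryRecord) and the blending formula are hypothetical and only
illustrate the idea; they are not ACTIVEMATH's actual user-model code.

import java.util.EnumMap;
import java.util.Map;

enum Competency { KNOWLEDGE, COMPREHENSION, APPLICATION }
enum ActionType { READ_CONCEPT, STUDY_EXAMPLE, SOLVE_EXERCISE }

class MasteryRecord {
    private final Map<Competency, Double> values = new EnumMap<>(Competency.class);

    MasteryRecord() {
        for (Competency c : Competency.values()) values.put(c, 0.0);
    }

    // Each action type primarily updates the competency that it uncovers:
    // reading -> Knowledge, examples -> Comprehension, exercises -> Application.
    void update(ActionType action, double assessedSuccess) {
        Competency target = switch (action) {
            case READ_CONCEPT   -> Competency.KNOWLEDGE;
            case STUDY_EXAMPLE  -> Competency.COMPREHENSION;
            case SOLVE_EXERCISE -> Competency.APPLICATION;
        };
        // Placeholder update: blend the old value with the new evidence.
        values.put(target, 0.7 * values.get(target) + 0.3 * assessedSuccess);
    }

    double get(Competency c) { return values.get(c); }
}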
We have now added diagnoses of several of the student's activities and the
generation of learning suggestions (feedback).
This article addresses the different types of feedback
in ACTIVEMATH, the blackboard architecture of diagnoses and global feedback,
and concrete suggestors.
Traditionally, user-adaptive feedback and help in intelligent tutoring systems
(ITSs) have been designed as a direct response to students' problem solving
actions, and the feedback is designed to help students accomplish a solution,
e.g., in the PACT tutors [4] or in CAPIT [11].
Frequently, the feedback is an explicitly authored text that reflects the
author's experience with typical errors in a specific problem solving context.
In some sense, such specific feedback is questionable because authoring all of
it is a very laborious task (see, e.g., [18]) and often requires an extreme
authoring effort, e.g., for mal-rules and for explicitly authoring what can go
wrong and what the reason is for each erroneous action in each exercise.
Therefore, we try to avoid this kind of diagnosis (and the corresponding
feedback) in ACTIVEMATH.
Moreover, the usage and benefit of more and more detailed help may strongly
depend on the individual user [1], and some users might
even dislike frequent suggestions and intrusions [6].
Although most feedback in ITSs targets problem solving activities, some systems provide
feedback targeting meta-cognitive skills of the student. For instance,
[9,2] try to support self-explanation of
worked-out examples; SciWise [20] provides feedback for planning and
reflecting activities in experimental science and for collaboration.
We think that there is much room for further development in this and similar
directions and present an approach to deliver global feedback in the learning process.
This article suggests distinguishing between local and global feedback. It then focuses
on global feedback as implemented in ACTIVEMATH. In more detail, we describe
the architecture and some of the user-adaptive global learning suggestions
implemented so far.
Separation of Local and Global Feedback
Two general types of feedback and guidance can be provided by an ITS: a local
response to student activities, which is targeted at the recognition and
correction of single problem solving steps of the learner, and a global
feedback targeting the entire learning process. This differentiation somewhat
resembles the distinction between task-level and high-level feedback described
in the four-process model of [3].
These two types of feedback differ with respect to realm, content, and aim and,
correspondingly, they are or are not arranged closely to exercises/examples.
That is, 'local' also means that the feedback is provided immediately after
each problem solving action of the user and should be given directly attached
to the problem solving (maybe even locally attached). In contrast, global
feedback and suggestions can be provided independently and may be delayed,
i.e., delivered when the user has finished reading a text, studying an example,
or working on an exercise.
Many ITSs do not provide global feedback at all. And even if they do, such as
SQL-Tutor [14] and CAPIT [11],
they do not separate local and global feedback, say architecturally.
In ACTIVEMATH, local and global feedback are distinguished because of
their different aims, different foci, different learning dimensions, and
different mechanisms. In addition, the employment of service systems for the
check of the correctness of a problem solving step and for the generation of
local problem solving feedback is a (practical) reason for separating local and
global feedback. The local feedback such as 'syntax error', 'step not correct,
because...', 'task not finished yet', or 'step not applicable' is computed by a system and related to a
problem solving step in an exercise or to the final achievement in an exercise.
The current implementation of local feedback is explained in [13] and
more technically at http://www.ags.uni-sb.de/~adrianf/activemath/.
The global feedback scaffolds the student's navigation, her content sequencing
(including examples and exercises), and her meta-cognition.
The global feedback and suggestions may concern, e.g., planning
what to learn, to repeat, to look at, or to do next, navigating the content,
reflecting, monitoring (which also includes summarizing, comparing, finding
similar problems, etc).
In what follows we deal with global feedback. Note that K/C/A-present() is
an abbreviation for: present content contributing to the concept in a K-,
C-, or A-oriented way respectively.
K/C/A-present() are functions of ACTIVEMATH's pre-existing course
generator. K-oriented means present just concepts and possibly explanations;
C-oriented means present concepts and examples; A-oriented means present the
full spectrum of content including concepts, examples, and exercises.
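As a purely illustrative sketch (hypothetical names, not the actual
course-generator interface), the three presentation orientations can be thought
of as selecting which item types are included:

import java.util.EnumSet;
import java.util.Set;

enum ItemType { CONCEPT, EXPLANATION, EXAMPLE, EXERCISE }
enum PresentationLevel { K, C, A }

final class Presenter {
    // K: concepts and possibly explanations; C: additionally examples;
    // A: the full spectrum including exercises.
    static Set<ItemType> itemsFor(PresentationLevel level) {
        return switch (level) {
            case K -> EnumSet.of(ItemType.CONCEPT, ItemType.EXPLANATION);
            case C -> EnumSet.of(ItemType.CONCEPT, ItemType.EXPLANATION, ItemType.EXAMPLE);
            case A -> EnumSet.allOf(ItemType.class);
        };
    }
}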
Global Feedback in ACTIVEMATH
The computation of global feedback requires diagnostic information of several
user activities. The information about the student's navigation, reading,
understanding, and problem solving actions, e.g., their duration and success
rate, has to be used as a basis for user-adaptive suggestions. Moreover,
information about the learner's history of actions and information from her
user model is necessary to generate useful suggestions.
Blackboard Architecture for Global Feedback
The architecture for the global suggestion mechanism clearly separates diagnoses
and suggestions, as shown in Figure 1. An advantage of the separation of
evaluation and suggestions is that the same evaluation results can be used by
different suggestion mechanisms, in different pedagogical strategies, and later
also by a dialog system. For instance, if the diagnosis yields
seen(example, insufficient)3, then the example is presented again in a
strict-guidance strategy but not in a weak-guidance strategy.
Some evaluators provide a diagnosis immediately from a single user action, while
other evaluators infer a diagnosis from the immediate diagnoses and additional
information. In Figure 1, several immediate and one
intermediate evaluator are displayed. New immediate and intermediate diagnosis
agents can be easily added, e.g., an evaluator for the individual average
reading time.
Figure 1:
The architecture of evaluator and suggestion mechanisms
The immediate evaluators each watch one of the following types of activities
- navigation
- reading (time)
- problem solving (assessed performance)
- MCQ exercises
- exercises with a Computer Algebra System
- exercises with the Omega proof planner
The immediate evaluators pass their results to a diagnosis blackboard (DBB)
and to the user model (the user's mastery-level of concepts and the activity
history). The current updating mechanism for mastery-values in the user model is
described in [12]. Essentially, K-mastery values are
triggered by reading, C-mastery values by dealing with examples, and A-mastery
values by dealing with exercises. A Bayesian Net user model including its
updating mechanism is future work.
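A minimal sketch of an immediate evaluator follows; the class and method names,
the Diagnosis record, and the fixed reading-time threshold are assumptions for
illustration only, not ACTIVEMATH's implementation. It watches reading events
and posts a seen(item, okay|insufficient) entry to the DBB.

import java.util.ArrayList;
import java.util.List;

record Diagnosis(String predicate, String item, String value) { }

class DiagnosisBlackboard {
    private final List<Diagnosis> entries = new ArrayList<>();
    void post(Diagnosis d) { entries.add(d); }
    List<Diagnosis> entries() { return List.copyOf(entries); }
}

class ReadingTimeEvaluator {
    private final DiagnosisBlackboard dbb;
    private final long minReadingMillis;   // assumed per-item threshold

    ReadingTimeEvaluator(DiagnosisBlackboard dbb, long minReadingMillis) {
        this.dbb = dbb;
        this.minReadingMillis = minReadingMillis;
    }

    // Called when the user leaves an item; posts the immediate diagnosis.
    void onItemLeft(String itemId, long readingMillis) {
        String value = readingMillis >= minReadingMillis ? "okay" : "insufficient";
        dbb.post(new Diagnosis("seen", itemId, value));
        // In parallel the result would also update the user model (history, mastery).
    }
}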
Intermediate diagnoses
are computed by other evaluators from the information on the DBB and in the user
model. These diagnoses are written on the DBB too.
The evaluators for intermediate diagnoses each watch the DBB-entries and the
user model and currently infer the following intermediate diagnoses:
- missingPrerequisite(concept, level), where level can currently be K(nowledge), C(omprehension), or A(pplication). missingPrerequisite is inferred only in case of an insufficient
solution result of the user's actions on an exercise
related to the concept in focus (focus-concept). This feature means that
concept is a prerequisite concept of the badly mastered focus-concept and its level-mastery in the user model is insufficient. (The mastery of the
focus-concept itself is not an intermediate but an immediate diagnosis.)
- SeenAndKnown, notSeenButKnown, SeenButUnknown(level),
notSeenAndUnknown(level) are all stated for the focus-concept, and level is one of the values K, C, or A. ..Unknown(K) means that in
the user model the
K-value (for the focus-concept) is insufficient (say, less than 80%). ..Unknown(C) means that in the user model the C-value (for the
focus-concept) is insufficient but not the K-value. ..Unknown(A) means
that in the user model the A-value (for the focus-concept) is insufficient but
not the K- and C-values.
The intermediate features are inferred for the focus-concept, from the
Seen diagnoses of those items that contribute to the focus-concept
in the learning material, and from the K-, C-, and A-mastery values for the focus-concept.
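A small sketch of how such an intermediate evaluator could derive the
..Unknown(level) feature from the mastery values in the user model is given
below; the class name and the 0.8 threshold are assumptions that merely mirror
the "insufficient (say, less than 80%)" criterion above. A missingPrerequisite
evaluator would proceed analogously, checking the mastery values of the
prerequisite concepts of the focus-concept.

import java.util.Optional;

final class MasteryEvaluator {
    static final double THRESHOLD = 0.8;   // assumed sufficiency threshold

    // Returns the lowest insufficient level in the order K, C, A, if any:
    // ..Unknown(K) if K is insufficient; ..Unknown(C) if only C and A are; etc.
    static Optional<String> seenButUnknownLevel(double k, double c, double a) {
        if (k < THRESHOLD) return Optional.of("K");
        if (c < THRESHOLD) return Optional.of("C");
        if (a < THRESHOLD) return Optional.of("A");
        return Optional.empty();
    }
}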
As displayed in Figure 1, several suggestors compute global
feedback from the diagnoses on the DBB and write on the suggestion
blackboard (SBB). If necessary, the results are sent
to a metaReasoner that rates the different suggestions on the SBB. Then the best
rated suggestions are executed.
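The following sketch (hypothetical names, reusing the Diagnosis record from the
sketch above) outlines this flow: suggestors inspect the DBB entries, post
rated suggestions on the SBB, and the metaReasoner selects the best-rated one
for execution.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

record Suggestion(String verbal, List<String> presentationActions, double rating) { }

interface Suggestor {
    // Each suggestor inspects the diagnoses and may contribute one suggestion.
    Optional<Suggestion> suggest(List<Diagnosis> diagnoses);
}

class SuggestionBlackboard {
    private final List<Suggestion> entries = new ArrayList<>();
    void post(Suggestion s) { entries.add(s); }

    // The metaReasoner's role in this sketch: pick the best-rated suggestion.
    Optional<Suggestion> best() {
        return entries.stream().max(Comparator.comparingDouble(Suggestion::rating));
    }
}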
Suggestors
The single suggestors evaluate JESS-rules and write their results on the SBB;
these are then realized to deliver a smile/noSmile shortcut and more detailed
feedback that consists of a verbal feedback ``xx'' (only abbreviated in this
paper) and one or several presentation-actions.
We specified the reassuring 'smile' feedback because we feel that reassurance
and positive feedback is important for motivational reasons [10] and for
avoiding situations in which the learner feels insecure.
These presentation-actions include
- navigation help
- content suggestions
- present new or skipped example
- present similar example
- present counter example
- present new exercise
- present same exercise
- present the focus-concept again, possibly also examples and exercises
- present (missed) introduction, motivation, or elaboration
- present certain prerequisites again, possibly together with examples and exercises
For each learning-goal level (K-, C-, or A), a suggestion strategy can be
designed as a set of suggestors. In what
follows, we present the suggestors of an A-level oriented suggestion
strategy which (1) reacts to navigation problems and (2) suggests content.
First, we explain their essence and then the actual rules follow.
Note that level in the expression seenButUnknown(level) denotes the missing
level of mastery of the focus-concept.
The verbal feedback is abbreviated by 'noSmile' or 'smile' in the rules below.
The detailed verbal and personalized feedback is being designed at the moment.
Navigation suggestions are needed because ACTIVEMATH delivers a hypertext
learning document; it is known that navigation in hypertexts needs special
attention [15], and getting lost in hyperspace puts an additional load on the
learner.
- if the user navigated appropriately, then provide reassuring
feedback (abbreviated by 'smile').
IF   Navig(okay)
THEN smile
- if an irrational navigation that started at point ?start of the table of
contents (TOC) is diagnosed, then two pointers show the current position in the
TOC and the ?start position. In this case, the user can click the ?start
position to return to a 'useful' learning path.
IF   Navig(irrational, ?start)
THEN noSmile, ``did you get lost by chance?'',
     pointer(current) and pointer(?start)
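To illustrate how such a rule could be realized as a suggestor, the
irrational-navigation rule might look as follows. In ACTIVEMATH the actual
rules are JESS-rules; this Java form, the predicate strings, and the rating are
only illustrative assumptions, reusing the types from the sketches above.

import java.util.List;
import java.util.Optional;

class NavigationSuggestor implements Suggestor {
    @Override
    public Optional<Suggestion> suggest(List<Diagnosis> diagnoses) {
        for (Diagnosis d : diagnoses) {
            if (d.predicate().equals("Navig") && d.value().equals("irrational")) {
                // d.item() is assumed to hold the ?start position in the TOC.
                return Optional.of(new Suggestion(
                        "did you get lost by chance?",
                        List.of("pointer(current)", "pointer(" + d.item() + ")"),
                        0.5));   // rating used by the metaReasoner
            }
        }
        return Optional.empty();
    }
}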
Content suggestions are needed especially if the goal-level of mastery has not
yet been reached by the learner. As opposed to the local feedback that corrects
single problem solving steps, the global feedback described below prompts and
supports the learner in activities such as repeating, self-explaining,
comparing, varying, and information gathering, which are known to improve
learning, see, e.g., [7,2,16,17,8].
- If everything went fine, then provide reassurance. If there are exercises
that are more difficult than those solved already, then present these. No substantiation
needed for these rules.
IF   Known(A ?focus)
THEN smile

IF   Known(A ?focus)
AND  solution(?id correct)
AND  exerciseFor(?focus ?id)
THEN smile, ``see more'',
     exerciseFor(?focus moreDiff(?id ?exc1)),
     present(?exc1)
- If seenButUnknown(A) holds for the focus-concept, then present examples similar to the failed
exercise of the focus-concept, unseen simpler exercises, and then the not yet
solved exercise of the
focus-concept.
This suggestion is made because A-mastery is the learning goal but not yet
achieved, and therefore another exercise for the focus-concept should be
offered to be solved. This exercise should be a bit simpler in order to keep
the user's motivation up (in the zone of proximal development
[19]). Then an example similar to the exercise should be shown for
comparison. Finally, the originally failed exercise should be presented
again.
IF   SeenButUnknown(A ?focus)
AND  solution(?id incorrect)
AND  exerciseFor(?focus ?id)
THEN noSmile, ``see more'',
     exerciseFor(?focus lessDiff(?id ?exc1)),
     exampleFor(?focus simTo(?id ?exm1)),
     present(?exm1 ?exc1 ?id)
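As with the navigation rule, this rule can be pictured as a suggestor that
assembles the ordered presentation actions (similar example, simpler exercise,
failed exercise). The sketch below again reuses the hypothetical types from the
earlier sketches and is not ACTIVEMATH's actual JESS-rule.

import java.util.List;
import java.util.Optional;

class ApplicationGapSuggestor implements Suggestor {
    @Override
    public Optional<Suggestion> suggest(List<Diagnosis> diagnoses) {
        boolean aGap = diagnoses.stream().anyMatch(
                d -> d.predicate().equals("seenButUnknown") && d.value().equals("A"));
        Optional<Diagnosis> failed = diagnoses.stream()
                .filter(d -> d.predicate().equals("solution") && d.value().equals("incorrect"))
                .findFirst();
        if (!aGap || failed.isEmpty()) return Optional.empty();
        String exerciseId = failed.get().item();
        return Optional.of(new Suggestion(
                "see more",
                List.of("present(similarExampleTo(" + exerciseId + "))",
                        "present(simplerExerciseThan(" + exerciseId + "))",
                        "present(" + exerciseId + ")"),
                0.6));   // rating used by the metaReasoner
    }
}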
- If seenButUnknown(C) holds for the focus-concept, then show not yet
sufficiently seen examples and counter-examples (if available) and ask for
an explanation of the example.
This suggestion is made because C-mastery of the focus-concept is not yet
achieved and therefore, this comprehension has to be supported. This is tried
by showing more examples and prompting the learner to engage herself in
self-explanation.
IF   SeenButUnknown(C ?focus)
AND  exampleFor(?focus ?exm)
AND  NOT Seen(okay ?exm)
THEN noSmile, ``please self-explain'',
     present(?exm)
- If seenButUnknown(C) holds for the focus-concept and
an error/counter-property (cp) is diagnosed in the solution of the exercise (id) for which a
counter-example can be given, then present an example and a counter-example for
the focus-concept and ask the learner to compare them.
This suggestion is made because the C-mastery of the
focus-concept is not yet achieved and therefore, this comprehension has to be
supported. Since a misconception (cp) is diagnosed for which a counter-example is
available, the support is offered by showing an example and a counter-example in
parallel and asking the learner for a comparison that should yield deeper
comprehension and learning.
IF   SeenButUnknown(C ?focus)
AND  solution(?id ?cp)
AND  exerciseFor(?focus ?id)
THEN noSmile, ``please compare'',
     exampleFor(?focus simTo(?id ?exm)),
     counterExampleFor(?cp ?ce),
     present(?exm + ?ce)
- If seenButUnknown(C) holds for the focus-concept then prompt the learner
to explain and vary an example of the focus-concept.
This suggestion is
made because C-mastery of the focus-concept is not yet achieved and therefore,
this comprehension has to be supported. This is tried by
showing more examples and prompting the learner to engage
herself in self-explanation.
IF   SeenButUnknown(C ?focus)
AND  exampleFor(?focus ?exm)
AND  Seen(okay ?exm)
THEN noSmile, ``explain and vary the example'',
     present(?exm)
- if seenButUnknown(K) holds for the focus-concept, then show again
concepts, explanations, examples, and not yet solved exercises. This suggestion
is made because K-mastery (and also A- and C-mastery) is not yet achieved.
Since A-mastery is the learning goal in the overall strategy, everything
(reading, examples, and exercises) for the focus-concept, except any solved
exercises, needs to be repeated.
IF   seenButUnknown(K ?focus)
THEN noSmile, ``please explain concept'',
     K-present(focus)

and

IF   seenButUnknown(K ?focus)
AND  solution(?id correct)
THEN REJECT present(?id)
- if notSeenAndUnknown(K/C/A) holds for the focus-concept, then
K/C/A-present, respectively.
This suggestion is made because A-mastery is the learning goal and so every level
not mastered yet for the focus-concept is suggested again.
IF   notSeenAndUnknown(K/C/A)
THEN ``repeat'',
     K/C/A-present(focus)
- if there is a missing K/C/A-prerequisite (that is, the user
insufficiently masters the focus-concept at one of the levels respectively), then
K/C/A-present that prerequisite at the respective level.
No further substantiation needed because the failure is likely to be caused by
missing mastery of the prerequisites of the focus-concept.
IF   missingPre(?c, ?level)
THEN noSmile, ``missing prerequisite'',
     ?level-present(?c)
- if an item ID has been seen sufficiently and more than twice, then
do not present ID again. Otherwise the motivation might drop. This is not a
suggestion to the learner, though, but has to be considered in the conflict
resolution on the SBB.
IF   seen(?id, okay)
AND  history(seen(?id) > 2)
THEN REJECT present(?id)
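A simple sketch of how such REJECT entries could act during conflict resolution
on the SBB is given below (hypothetical names, reusing the Suggestion record
from the earlier sketch): rejected presentation actions are filtered out of the
competing suggestions before the best-rated one is executed.

import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

final class ConflictResolver {
    // Drop rejected presentation actions from a suggestion on the SBB.
    static Suggestion filter(Suggestion s, Set<String> rejectedActions) {
        List<String> kept = s.presentationActions().stream()
                .filter(a -> !rejectedActions.contains(a))
                .collect(Collectors.toList());
        return new Suggestion(s.verbal(), kept, s.rating());
    }
}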
More rules are planned for improving motivation of the learner.
Conclusion
The web-based learning environment ACTIVEMATH presents content, worked-out
examples, and exercises to the student rather than exercises only or examples
only. This presentation may include incomplete examples, elaborations, examples
with built-in questions, etc.
We have now added two types of feedback: local feedback, which is described
elsewhere, and global feedback.
An on-line demo of ACTIVEMATH is available at http://www.activemath.org.
In this article, we described our research on feedback in ACTIVEMATH and
beyond. It includes the distinction between local and global feedback as well as
the separation of diagnosis and global suggestion mechanisms by an architecture
with two blackboards.
Future Work
Most importantly, the diagnoses and suggestion mechanisms have to be evaluated
empirically with actual users. Then the mechanisms will be improved according
to the test
results.
The next investigations focus on the design of a gender-specific and
personalized verbalization of the feedback.
In addition to the described way of delivering feedback to a user automatically,
we will investigate and design feedback that can be actively requested.
Moreover, we will devise several suggestion strategies for different learning
goal-levels and for different learning scenarios.
Acknowledgements
We thank the ACTIVEMATH group, in particular Carsten Ullrich, Paul Libbrecht,
Jochen Büdenbender, and Sabine Hauser, for the involved discussions, as well as
Bernhard Jacobs for his supply of empirical knowledge.
References
[1] V. Aleven and K.R. Koedinger. Limitations of student control: Do students know when they need help? In G. Gauthier, C. Frasson, and K. VanLehn, editors, International Conference on Intelligent Tutoring Systems, ITS 2000, pages 292-303. Springer-Verlag, 2000.
[2] V. Aleven, K.R. Koedinger, and K. Cross. Tutoring answer explanation fosters learning with understanding. In S.P. Lajoie and M. Vivet, editors, Artificial Intelligence in Education, pages 199-206. IOS Press, 1999.
[3] R.G. Almond, L.S. Steinberg, and R.J. Mislevy. A sample assessment using the four process framework. Educational Testing Service, 1999. http://www.education.umd.edu/EDMS.
[4] A.T. Corbett, K.R. Koedinger, and J.R. Anderson. Intelligent tutoring systems. In M.G. Helander, T.K. Landauer, and P.V. Prabhu, editors, Handbook of Human-Computer Interaction, pages 849-874. Elsevier Science, The Netherlands, 1997.
[5] B.S. Bloom, editor. Taxonomy of Educational Objectives: The Classification of Educational Goals: Handbook I, Cognitive Domain. Longmans, Green, Toronto, 1956.
[6] A. Bunt, C. Conati, M. Huggett, and K. Muldner. On improving the effectiveness of open learning environments through tailored support for exploration. In J.D. Moore, C.L. Redfield, and W.L. Johnson, editors, International Conference on Artificial Intelligence and Education, pages 365-376. IOS Press, 2001.
[7] M.T.H. Chi, N. De Leeuw, M. Chiu, and C. Lavancher. Eliciting self-explanations improves understanding. Cognitive Science, 18:439-477, 1994.
[8] M.T.H. Chi, S.A. Siler, H. Jeong, T. Yamauchi, and R.G. Hausmann. Learning from tutoring. Cognitive Science, 25:471-533, 2001.
[9] C. Conati and K. VanLehn. Teaching meta-cognitive skills: Implementation and evaluation of a tutoring system to guide self-explanation while learning from examples. In S.P. Lajoie and M. Vivet, editors, Artificial Intelligence in Education, pages 297-304. IOS Press, 1999.
[10] H. Mandl, H. Gruber, and A. Renkl. Enzyklopädie der Psychologie, volume 4, chapter Lernen und Lehren mit dem Computer, pages 436-467. Hogrefe, 1997.
[11] M. Mayo, A. Mitrovic, and J. McKenzie. An intelligent tutoring system for capitalisation and punctuation. In M. Kinshuk, C. Jesshope, and T. Okamoto, editors, Advanced Learning Technology: Design and Development Issues, pages 151-154. IEEE Computer Society, 2000.
[12] E. Melis, J. Büdenbender, E. Andres, A. Frischauf, G. Goguadze, P. Libbrecht, M. Pollet, and C. Ullrich. ACTIVEMATH: A generic and adaptive web-based learning environment. International Journal of Artificial Intelligence in Education, 12(4), Winter 2001.
[13] E. Melis, J. Büdenbender, E. Andres, A. Frischauf, G. Goguadze, P. Libbrecht, and C. Ullrich. Using computer algebra systems as cognitive tools. In International Conference on Intelligent Tutoring Systems (ITS-2002), LNAI. Springer-Verlag, 2002.
[14] A. Mitrovic and S. Ohlsson. Evaluation of a constraint-based tutor for a database language. International Journal of Artificial Intelligence in Education, 10(3-4):238-256, 1999.
[15] T. Murray, J. Piemonte, S. Khan, T. Shen, and C. Condit. Evaluating the need for intelligence in an adaptive hypermedia system. In G. Gauthier, C. Frasson, and K. VanLehn, editors, Intelligent Tutoring Systems, Proceedings of the 5th International Conference ITS-2000, volume 1839 of LNCS, pages 373-382. Springer-Verlag, 2000.
[16] A. Renkl. Worked-out examples: Instructional explanations support learning by self-explanations. Learning and Instruction, 2001.
[17] R. Stark. Instruktionale Effekte beim Lernen mit unvollständigen Lösungen. Forschungsberichte 117, Ludwig-Maximilians-Universität München, Lehrstuhl für Empirische Pädagogik und Pädagogische Psychologie, 2000.
[18] K. VanLehn. Bayesian student modeling, user interfaces and feedback: A sensitivity analysis. International Journal of Artificial Intelligence in Education, 12, 2001.
[19] L. Vygotsky. The Development of Higher Psychological Processes. Harvard University Press, Cambridge, 1978.
[20] B.Y. White and T.A. Shimoda. Enabling students to construct theories of collaborative inquiry and reflective learning: Computer support for metacognitive development. International Journal of Artificial Intelligence in Education, 10:151-182, 1999.
Footnotes
1. Transfer-mastery could be the most advanced mastery but currently we subsume this under application.
2. The implicit connection between competencies is also considered but will be better formalized in the Bayesian Net user model, which is in progress.
3. I.e., the time for reading the example is insufficient.