Student Modeling for Language Tutors

Workshop at AIED 2005

12th International Conference on Artificial Intelligence in Education

18-22 July

Amsterdam, The Netherlands

 

WORKSHOP PRESENTATIONS

 

Sequencing Vocabulary Instruction: Artificial vs. Real Users

Samuel R.H. Joseph

Stephen H. Joseph

Michael H. Joseph

University of Hawai'i

University of Sheffield

University of Leicester

 

There are various widely researched strategies that appear to be helpful in some, but not necessarily all, vocabulary learning situations. However, an early report suggested that an extremely simple strategy, in which only the ordering of the material presented is varied, might have very substantial effects on learning and recall. These observations have been used as the basis of many subsequent developments, but have rarely been subjected to rigorous examination and replication. We have recently been examining both the theoretical foundation and the practical implementation of this latter approach. In this paper we present a comparison of data obtained using virtual users, operating in accordance with the underlying theory of memory, with the earlier experimental data obtained with real users.
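The abstract does not detail the simulation, but the following toy sketch shows the kind of virtual-user experiment it describes: a simulated learner whose recall decays over time is run through two different presentation orderings of the same vocabulary items. The VirtualLearner class, the exponential-decay memory model, the parameter values, and the blocked vs. interleaved orderings are all illustrative assumptions, not the authors' implementation.

    import math

    class VirtualLearner:
        """Toy simulated learner: recall probability decays with time since last exposure."""

        def __init__(self, decay=0.1, boost=1.0):
            self.decay = decay        # assumed forgetting rate per time step
            self.boost = boost        # assumed strength gained per presentation
            self.strength = {}        # item -> accumulated memory strength
            self.last_seen = {}       # item -> time of last presentation

        def present(self, item, t):
            """Show an item once and reinforce its memory strength."""
            self.strength[item] = self.strength.get(item, 0.0) + self.boost
            self.last_seen[item] = t

        def recall_prob(self, item, t):
            """Probability of recalling the item at time t under the assumed decay model."""
            if item not in self.strength:
                return 0.0
            elapsed = t - self.last_seen[item]
            return 1.0 - math.exp(-self.strength[item] * math.exp(-self.decay * elapsed))

    def run_schedule(order, n_items=10):
        """Run one presentation ordering and report mean recall after an assumed delay."""
        learner = VirtualLearner()
        for t, item in enumerate(order):
            learner.present(item, t)
        test_time = len(order) + 20
        return sum(learner.recall_prob(i, test_time) for i in range(n_items)) / n_items

    items = list(range(10))
    blocked = [i for i in items for _ in range(3)]        # 0 0 0 1 1 1 ...
    interleaved = [i for _ in range(3) for i in items]    # 0 1 ... 9, repeated three times
    print("blocked:", run_schedule(blocked), "interleaved:", run_schedule(interleaved))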

Extensions to a Histogram-Based Student Modeling Approach to Facilitate Reading in Morphologically Complex Languages

Violetta Cavalli-Sforza

Mohamed Maamouri

Carnegie Mellon University

University of Pennsylvania

In this paper we describe our intended approach to student modeling for language tutoring in the context of a project titled "Teaching and Learning Linguistically Complex Languages", recently funded by the United States Department of Education under the Title VI International Research and Studies Program.  The project aims to support foreign language learning and to enhance cross-cultural understanding by producing substantive textual and lexical learning materials and computer-based instructional tools that aid learners in reading authentic materials in languages that present special difficulties for reading.  The specific goals of the project are:
(1) Providing readers with tools to negotiate the complex morphology of target languages;
(2) Enabling learners to read authentic texts containing unfamiliar and difficult words;
(3) Enabling teachers to prepare texts for classroom use and to test students' reading ability; and
(4) Creating easy Internet access to all tools and materials for teachers and learners.
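The abstract names a histogram-based student modeling approach without describing it; purely as an illustration of what a histogram-style model of a reader's morphological knowledge might track, the hypothetical sketch below keeps per-feature exposure and success counts and derives a smoothed mastery estimate. All names, features, and numbers here are assumptions.

    from collections import defaultdict

    class MorphologyHistogram:
        """Hypothetical histogram-style student model: per-feature exposure and success counts."""

        def __init__(self):
            self.exposures = defaultdict(int)   # feature -> times encountered in texts
            self.successes = defaultdict(int)   # feature -> times handled correctly

        def record(self, features, correct):
            """Update counts for the morphological features of one word the student read."""
            for f in features:
                self.exposures[f] += 1
                if correct:
                    self.successes[f] += 1

        def mastery(self, feature, prior=0.5, prior_weight=2):
            """Smoothed success rate for a feature, usable for selecting texts or glosses."""
            n = self.exposures[feature]
            return (self.successes[feature] + prior * prior_weight) / (n + prior_weight)

    # Hypothetical usage: features attached to words the student just read.
    model = MorphologyHistogram()
    model.record({"prefix:al-", "pattern:CVCC", "suffix:-una"}, correct=True)
    model.record({"prefix:al-", "pattern:CVCCVC"}, correct=False)
    print(model.mastery("prefix:al-"))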

Using Speech Recognition to Construct a Student Model for a Reading Tutor

Kai-min Chang

Joseph Beck

Jack Mostow

Albert Corbett

Carnegie Mellon University

Intelligent Tutoring Systems derive much of their power from having a student model that describes the learner's competencies. However, constructing a student model is challenging for computer tutors that use automated speech recognition (ASR) as input, due to inherent inaccuracies in ASR. We describe two extremely simplified models of how word decoding skills develop and explore whether there is sufficient information in ASR output to determine which model fits student performance better, and under what circumstances one model is preferable to the other. The two models described are a lexical model, which assumes students learn words as whole-unit chunks, and a grapheme-to-phoneme (G->P) model, which assumes students learn the individual letter-to-sound mappings that compose the words. We use the data collected by the ASR to show that the G->P model describes student performance better than the lexical model. We then determine which model performs better under which conditions. On the one hand, the G->P model correlates better with student performance data when the student is older, more proficient at grapheme-to-phoneme mappings, or when the word is more difficult to read or spell. On the other hand, the lexical model correlates better with student performance data when the student has seen the word more times.
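As a rough illustration of the distinction drawn above (not the authors' actual models), a lexical model keeps one knowledge estimate per whole word, while a G->P model composes a word's probability of being read correctly from estimates of its individual letter-to-sound mappings. The segmentation of "phone" into mappings, the independence assumption, and all probability values below are invented for the example.

    # One estimate per whole word (lexical view); values are invented.
    lexical_skill = {"cat": 0.9, "phone": 0.4}

    # One estimate per letter-to-sound mapping (G->P view); values are invented.
    gp_skill = {
        ("c", "/k/"): 0.95, ("a", "/ae/"): 0.90, ("t", "/t/"): 0.97,
        ("ph", "/f/"): 0.50, ("o", "/ow/"): 0.80, ("n", "/n/"): 0.95, ("e", "silent"): 0.70,
    }

    def p_correct_lexical(word):
        """Lexical model: the word is a single whole-unit chunk."""
        return lexical_skill.get(word, 0.1)        # assumed small default for unseen words

    def p_correct_gp(mappings):
        """G->P model: the word is read correctly only if every mapping succeeds (assumed independent)."""
        p = 1.0
        for m in mappings:
            p *= gp_skill.get(m, 0.5)              # assumed neutral prior for unseen mappings
        return p

    phone_mappings = [("ph", "/f/"), ("o", "/ow/"), ("n", "/n/"), ("e", "silent")]
    print(p_correct_lexical("phone"), p_correct_gp(phone_mappings))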

Using speech recognition to model children's reading skill development

Joseph E. Beck

Kai-min Chang

Jack Mostow

Albert Corbett

Carnegie Mellon University

Intelligent computer tutors can derive much of their power from having a student model that describes the learner's competencies. However, constructing a student model is challenging for computer tutors that use automated speech recognition (ASR) as input. This paper reports using ASR output from a computer tutor for reading to compare two models of how students learn to read words: a model that assumes students learn words as whole-unit chunks, and a model that assumes students learn the individual letter->sound mappings that make up words. We use the data collected by the ASR to show that the model of letter->sound mappings describes student performance better. We then compare using the student model and the ASR, both alone and in combination, to predict which words the student will read correctly, as scored by a human transcriber. Surprisingly, the majority-class baseline has a higher classification accuracy than the ASR. However, we demonstrate that the ASR output still contains useful information, that classification accuracy is not a good metric for this task, and that the Area Under the Curve (AUC) of the ROC curve is a superior scoring method. The AUC of the student model is statistically reliably better (0.670 vs. 0.550) than that of the ASR, which in turn is reliably better than majority class. These results show that ASR can be used to compare theories of how students learn to read words, and that modeling individual learners' proficiencies may enable improved speech recognition.
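The point about metrics is easy to reproduce with toy numbers: when most words are read correctly, always predicting "correct" can score well on classification accuracy while giving a flat ranking and an AUC of 0.5, whereas graded ASR confidences can rank correct and incorrect words well even if their thresholded accuracy is mediocre. The labels and confidence scores below are fabricated solely to illustrate the metric difference, not taken from the paper's data.

    def auc(labels, scores):
        """Area under the ROC curve via the rank-sum (Mann-Whitney) formulation."""
        pos = [s for y, s in zip(labels, scores) if y == 1]
        neg = [s for y, s in zip(labels, scores) if y == 0]
        wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    def accuracy(labels, preds):
        return sum(int(y) == int(p) for y, p in zip(labels, preds)) / len(labels)

    # Toy data: 1 = word read correctly (the majority class), 0 = miscue.
    labels     = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]
    asr_scores = [0.45, 0.48, 0.60, 0.70, 0.44, 0.55, 0.65, 0.50, 0.46, 0.30]  # invented confidences
    majority   = [1.0] * len(labels)        # always predict "read correctly"

    print("accuracy:", accuracy(labels, [s > 0.5 for s in asr_scores]),
          "vs. majority:", accuracy(labels, majority))
    print("AUC:", auc(labels, asr_scores), "vs. majority:", auc(labels, majority))

With these invented numbers, the thresholded ASR predictions lose to the majority baseline on accuracy but win clearly on AUC, mirroring the qualitative pattern reported above.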

Modeling Student Knowledge in an Oral Reading Companion

Sherman R. Alpert

Peter G. Fairweather

Bill Adams

IBM T.J. Watson Research Center

Guided oral reading has been shown to have positive pedagogical value. Our Reading Companion provides a shared reading experience in which students read on-screen books aloud, guided by the Companion, which offers scaffolded modeling of expert skill and feedback based on speech recognition. A student model is maintained for each student; it tracks student performance and decoding knowledge in terms of decoding/pronunciation generalizations and surface word features. Reading generalizations, or heuristics, involve mapping sequences and patterns of letters and letter categories to particular sounds. An example letter-sequence heuristic makes "ph" sound like /f/; an example letter-category pattern informs us that the pattern VCe at the end of a syllable usually gives the V its long sound and makes the final "e" silent. An example word feature might be the fact that a word contains a specific consonant blend. We describe how the tutor maps spoken words to generalizations and word attributes, and how this information is maintained in the student model. The Reading Companion includes a sophisticated post hoc reporting facility that provides a view into the student model, allowing teachers to gain insight into students' strengths and weaknesses and facilitating targeted instruction in specific areas. The Reading Companion is Web-based and accessible via an ordinary browser.
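To make concrete what mapping spoken words to generalizations and word attributes can involve, the sketch below tags a word with the two heuristics mentioned above ("ph" sounds like /f/, and the VCe pattern) plus an initial consonant blend attribute, and credits them in per-student tallies. The rules, regular expressions, and data structures are assumptions for illustration, not the Reading Companion's implementation.

    import re
    from collections import Counter

    # Hypothetical decoding generalizations and surface word features, keyed by label.
    GENERALIZATIONS = {
        "ph_sounds_like_f": lambda w: "ph" in w,
        # VCe at the end of the word: vowel, single consonant, silent final "e".
        "vce_long_vowel":   lambda w: re.search(r"[aeiou][bcdfghjklmnpqrstvwz]e$", w) is not None,
    }
    WORD_FEATURES = {
        "initial_consonant_blend": lambda w: re.match(r"(bl|br|cl|cr|dr|fl|fr|gr|pl|pr|sl|st|tr)", w) is not None,
    }

    def tag_word(word):
        """Return the generalizations and surface features a word exercises."""
        word = word.lower()
        tags = [name for name, test in GENERALIZATIONS.items() if test(word)]
        tags += [name for name, test in WORD_FEATURES.items() if test(word)]
        return tags

    class StudentModel:
        """Per-student tallies of attempts and ASR-accepted readings for each tag."""

        def __init__(self):
            self.attempts = Counter()
            self.accepted = Counter()

        def update(self, word, accepted_by_asr):
            for tag in tag_word(word):
                self.attempts[tag] += 1
                if accepted_by_asr:
                    self.accepted[tag] += 1

    model = StudentModel()
    model.update("phone", accepted_by_asr=True)    # exercises ph -> /f/ and VCe
    model.update("plane", accepted_by_asr=False)   # exercises VCe and an initial blend
    print(dict(model.attempts), dict(model.accepted))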

MAC: Adaptive, perception-based speech remediation s/w for mobile devices

Maria Uther

Pushpendra Singh

Iraide Zipitria

James Uther

University of Portsmouth

University of Sydney

In this paper, we present mobile adaptive computer-assisted language learning (MAC) software aimed at helping Japanese learners of English perceptually distinguish the non-native /r/ vs. /l/ English phonemic contrast, with a view to improving their English pronunciation of it. The software is adaptive: more practice is given on the contrasts that are most difficult for the learner, and learners themselves choose their level of adaptation. MAC is implemented in Java (J2ME), allowing the software to be used on a wide range of mobile devices, including most recent mobile phones. This allows the application to be used anywhere and anytime, on a device that the learner probably already owns.
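The adaptation described, more practice on the contrasts a learner finds hardest with a learner-chosen degree of adaptation, could be realized with a simple weighting scheme like the one sketched below. The minimal-pair items, error counts, and the adaptation parameter are illustrative assumptions rather than details of the MAC software.

    import random

    # Minimal-pair contrasts with the learner's running attempt and error counts (invented).
    errors   = {"right/light": 7, "rock/lock": 2, "arrive/alive": 5, "fry/fly": 1}
    attempts = {"right/light": 10, "rock/lock": 10, "arrive/alive": 10, "fry/fly": 10}

    def next_item(adaptation=1.0):
        """Pick the next minimal pair to practise.

        adaptation = 0 gives uniform selection; larger values concentrate practice on the
        contrasts with the highest error rates. The learner chooses this value, mirroring
        the learner-selected level of adaptation described above.
        """
        weights = {pair: (errors[pair] / attempts[pair]) ** adaptation + 0.05 for pair in errors}
        total = sum(weights.values())
        return random.choices(list(weights), weights=[w / total for w in weights.values()])[0]

    print([next_item(adaptation=2.0) for _ in range(5)])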