Facilitated by Dr Sharon Oviatt, Director of Incaa Design
Multimodal learning analytics is an emerging area that analyzes students’ natural communication patterns (speech, writing, images) to predict learning-oriented behaviors and the consolidation of expertise during educational activities. It offers more robust techniques for evaluating students’ learning progress than click-stream analysis. In addition, it is compatible with the shift to multimodal-multisensor interfaces on cell phones, tablets, and other devices that now dominate educational technologies. In this seminar, Sharon will describe data resources available for working in this area, as well as promising initial research findings.
As one example, new research indicates that signal-level features of dynamic writing, which were extracted as students wrote with digital pens and paper, can reliably identify their domain expertise in mathematics. Analyses were conducted using the Math Data Corpus, in which collaborating student groups jointly solved mathematics problems varying in difficulty. Linear regressions confirmed that lower total energy expended during writing is a significant predictor of higher domain expertise, with models of energy accounting for 35-43% of the variance in students’ expertise level. Further convergent analyses have demonstrated that different empirical and machine learning techniques can yield 92% correct classification of students by expertise level.
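To make the analysis concrete, the sketch below fits an ordinary least-squares regression of expertise on total writing energy and reports the proportion of variance explained (R²), the statistic quoted above. The data are synthetic and the negative energy-expertise relationship is assumed for illustration only; this is not the study's actual pipeline or the Math Data Corpus.

```python
# Illustrative sketch only: regress a synthetic expertise score on a
# synthetic "total writing energy" feature and compute R^2.
# The variable names and the assumed negative slope are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 60
# Synthetic total energy expended during writing (arbitrary units).
energy = rng.uniform(10.0, 100.0, size=n)
# Assume expertise decreases with energy, plus noise (hypothetical relationship).
expertise = 90.0 - 0.5 * energy + rng.normal(0.0, 8.0, size=n)

# Ordinary least squares: expertise ~ b0 + b1 * energy.
X = np.column_stack([np.ones(n), energy])
beta, *_ = np.linalg.lstsq(X, expertise, rcond=None)

# Proportion of variance explained (R^2).
pred = X @ beta
ss_res = np.sum((expertise - pred) ** 2)
ss_tot = np.sum((expertise - expertise.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"slope={beta[1]:.3f}, R^2={r2:.3f}")
```

A negative fitted slope here mirrors the reported finding that lower writing energy predicts higher expertise; the real models would of course use signal-level pen features rather than a single synthetic variable.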
These results demonstrate that signal-level writing features, and models of total energy expenditure based on them, can predict domain expertise with surprisingly high reliability. To interpret these findings, she'll present a new limited-resource theory that describes the role of adaptive energy expenditure during the acquisition of domain expertise. From a more pragmatic perspective, corporate interest is building because the signal-level writing features outlined in this work can be collected automatically, in real time, as students use existing pen-based technologies. The race is now on to collect strategic new datasets, in partnership with corporations and school districts, so that innovative, high-quality educational applications can be developed.
Sharon Oviatt is internationally known for her multidisciplinary work on multimodal and mobile interfaces, human-centered interfaces, educational interfaces, and learning analytics. She has been a recipient of the inaugural ACM-ICMI Sustained Accomplishment Award, the National Science Foundation Special Creativity Award, and the ACM-SIGCHI CHI Academy award. She has published over 160 scientific articles in a wide range of venues, and is an Associate Editor of the main journals and edited book collections in the field of human-centered interfaces. Her recent books include The Design of Future Educational Interfaces (2013, Routledge) and The Paradigm Shift to Multimodality in Contemporary Computer Interfaces (2015, Morgan & Claypool). She is currently editing The Handbook of Multimodal-Multisensor Interfaces (forthcoming in 2017, ACM Books). Related to today's talk, Sharon was a founder of the ACM international conference series on Multimodal Interfaces (ICMI), as well as its satellite series of Data-Driven Grand Challenge Workshops on Multimodal Learning Analytics.
If you would like to join this research group or would like more information, please contact Lorenzo Vigentini at email@example.com.
Room 1025, Level 10, Library Tower