Computers have become pervasive in our lives, including the way we learn. Advances in computing devices and network infrastructures have made it possible to access educational content anytime, anywhere. When the physical environment is relevant to the ubiquitous learning content, I believe that augmented reality (AR) – the seamless integration of virtual information into real environments – is the most suitable interface. Although many researchers agree that using AR in ubiquitous learning is beneficial, there are few studies on the design, evaluation, and learning effects of AR.
My thesis studies both external and internal AR annotations for displaying content in ubiquitous learning. Among several AR implementations, handheld AR (HAR) is the easiest to distribute because tablet computers and smartphones are widely available both as school equipment and as personal devices. To assess HAR systems, I developed the HAR Usability Scale (HARUS), which is useful for iterative prototyping.
Based on multimedia learning theory, annotations in the real world are advantageous for memorization because they leverage the student's familiarity with places and with the objects found in those places. I tested this hypothesis using FlipPin, a HAR system that supports situated vocabulary learning. Results of my user studies show that HAR led to better retention of words and to improved student attention and satisfaction.
Unlike external annotations, internal annotations using AR x-ray systems are rather unnatural, superhuman visualizations. In response, I conducted an evaluation of the legibility of existing AR x-ray methods. Results show that users vary in their preferred amount of occlusion. As such, methods for adjusting occlusion should be developed, especially when AR x-ray is used under varied lighting conditions and with varied textures of occluding objects. Overall, saliency-based AR x-ray led to more legible visualizations than edge-based AR x-ray.
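To make the comparison above concrete, the following is a minimal sketch of edge-based AR x-ray compositing in Python. It assumes OpenCV (cv2); the file names, Canny thresholds, and blend weight alpha are illustrative placeholders and not the implementation evaluated in my studies.

    # A minimal sketch of edge-based AR x-ray compositing, assuming OpenCV
    # (cv2). File names, Canny thresholds, and the blend weight are
    # illustrative placeholders, not values from my studies.
    import cv2

    def edge_based_xray(occluder_bgr, hidden_bgr, alpha=0.6):
        """Reveal hidden content through an occluder, then re-draw the
        occluder's edges on top so the depth ordering stays readable."""
        # Detect strong edges of the occluding surface.
        gray = cv2.cvtColor(occluder_bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)

        # Blend the hidden layer into the occluder view at a fixed opacity.
        composite = cv2.addWeighted(hidden_bgr, alpha,
                                    occluder_bgr, 1.0 - alpha, 0.0)

        # Paint the occluder's edges back on top as white strokes, a cue
        # that the revealed content lies behind the surface.
        composite[edges > 0] = (255, 255, 255)
        return composite

    occluder = cv2.imread("wall_view.png")   # hypothetical camera frame
    hidden = cv2.imread("behind_wall.png")   # hypothetical hidden scene
    hidden = cv2.resize(hidden, (occluder.shape[1], occluder.shape[0]))
    cv2.imwrite("xray_composite.png", edge_based_xray(occluder, hidden))

The alpha parameter here is the kind of occlusion control that the study result motivates: because users' preferred amount of occlusion varies, exposing such an adjustment, or adapting it to scene lighting and occluder texture, helps keep the visualization legible.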