User Experience & Testing
User experience testing was constrained by the project timeline while needing to serve many objectives, as it was deemed necessary to test both user interaction and educational benefit. These small-scale studies give valuable insight into how users are likely to interact with the system, but lack the iteration and sample size needed to generalize.
The studies focused on non-university students with experience in video games and specifically VR, the goal being to recruit participants possessing comparable media knowledge and experience with commercial products, in order to more effectively contrast Terra's novel approaches. No subject participated in more than one study.
Phase I: Object Interaction Preferences
Focus on user interaction preferences.
Phase II: UX Information Display Modes
Focus on ease of use.
Phase III: Enhanced Immersive Environment
Focus on knowledge retention and educational benefits.
While the research methods varied, all focused on actionable solutions and product improvement, with the findings of one study informing the foundation of the next. The resulting virtual environment incorporates and iteratively improves on the product's goals throughout this research cycle.
Phase I:
Object Interaction Preferences Testing
The goal of this study was to identify which available interaction modes are best suited to displaying cultural heritage in fully virtual environments. Interactive methods for cultural heritage studied by Bekele and others usually deal with novel interaction models rather than those already afforded by specific engines. This study instead looks at multiple input methods natively available within the Unreal Engine.
Number of participants: 27
Age of Participants: 18-28
All participants self-reported as virtual reality literate.
Total number of virtual interactions: 3
Training: 5 Minutes / each interaction type
Virtual interaction length: 10 Minutes / each interaction type
Interaction Type: Virtual Reality Environment - In Person
Data Gathering: Mixed approach: Observation / Individual Interview / Focus Group
Task:
Assess users' ability to access information about multiple artworks within a single world environment.
Users were assigned a random ordering of the three interaction models across three sessions set one day apart. Each participant was trained on the interaction model for five minutes and interviewed after each experience. Finally, users were split into focus groups based on their stated interaction preferences.
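The per-participant randomization of session order could be implemented with a small counterbalancing helper. This is a minimal sketch, not taken from the study materials; the mode names and participant IDs are illustrative.

```python
import random
from itertools import permutations

# Illustrative labels for the three interaction models under test.
MODES = ("invisible_click", "laser_pointer", "thrown_ball")

def assign_orderings(participant_ids, seed=0):
    """Give each participant a randomized ordering of the three modes,
    cycling through all six permutations so orderings stay roughly
    balanced across the sample."""
    rng = random.Random(seed)
    orders = list(permutations(MODES))  # 6 possible session orders
    rng.shuffle(orders)
    return {pid: orders[i % len(orders)]
            for i, pid in enumerate(participant_ids)}

schedule = assign_orderings(range(27))
```

Cycling through the shuffled permutations guards against order effects dominating any one mode, which matters when every participant experiences all three conditions.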
Interaction approaches tested:
Invisible Click: The user engages with whatever the HUD reticle is pointed at.
Laser Pointer: The user interacts via a laser pointer indicator.
Thrown Ball: Physics-driven interaction that relies on interacting with a spawned object.
Observation:
Participants were observed for the length of the short training session and the full length of the experiment. Special attention was paid to participants asking for further clarification on tasks, control functions, or virtual environment features. Participants were also observed for visual signs of discomfort, stress, and attentiveness.
Interview:
Participants were asked a sequential series of questions directly after each virtual experience. Responses were recorded, digitally transcribed, and analyzed via a word bubble and treemap to create comparative visual representations of user feedback.
- How do you feel about the virtual interaction that you just did?
- What is most dissimilar with this interaction from the types of virtual interactions that you are used to?
- What did you find most confusing/unfamiliar about this interaction?
- Which features of the interface did you find most useful?
- Which features of the interface did you find least useful?
- What is your level of confidence in being able to navigate a much larger world environment with this interface?
- Why do you think that is?
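The keyword tally underlying word-bubble and treemap visualizations of transcribed responses can be sketched as follows. This is an assumed implementation, not the study's actual pipeline; the stopword list and sample responses are illustrative.

```python
import re
from collections import Counter

# Minimal illustrative stopword list; a real analysis would use a fuller one.
STOPWORDS = {"the", "a", "an", "i", "it", "was", "and", "to", "of", "that", "but"}

def keyword_counts(transcripts):
    """Tally non-stopword terms across transcribed interview responses;
    the resulting counts feed a word-bubble or treemap plot."""
    counts = Counter()
    for text in transcripts:
        for word in re.findall(r"[a-z']+", text.lower()):
            if word not in STOPWORDS:
                counts[word] += 1
    return counts

# Hypothetical transcribed responses.
responses = [
    "It was fun and exciting to throw the ball.",
    "The laser felt accurate and professional.",
    "Fun, but the ball was hard to aim.",
]
print(keyword_counts(responses).most_common(3))
```

Term sizes in the word bubble are then scaled by these counts, making dominant themes (e.g. 'fun' for the ball mode) visible at a glance.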
Focus Group:
Participants were invited to a follow-up focus group upon completion of the interview, grouped by their interaction preferences. Focus group participants viewed footage of their virtual interactions while being asked to discuss the following questions:
- Why did you choose this interaction method as the most successful experience?
- What interaction type or feedback mechanism were you surprised was missing from this method?
- Which part of your personal experience was most jarring or alarming?
- How did you feel physically when moving around the virtual environment?
- What are some ways you feel this method could be improved?
Thematic analysis revealed that the invisible click interaction type was the most confusing and least useful overall. Over half of participants reported low confidence in their ability to navigate a large virtual world environment with this interface, compared to only 5% and 7% of participants reporting low confidence with the laser and ball interaction approaches respectively.
There was no large discrepancy in satisfaction between the laser and ball interaction modes. Analysis found that words like 'fun', 'exciting', and 'joy' were associated with the ball interaction model, whereas 'positive', 'professional', and 'accurate' were associated with the pointer model.
Based on these results, the ball interaction mode was chosen as the software's primary input method, with a setting allowing users to switch to pointer mode. This decision reflected the greater user satisfaction with the ball interaction mode, which drew fewer serious negative themes than the other two modes.
Invisible Click Mode: This mode allows participants to select objects by hovering over an object with a reticle. No other interaction feedback is given.
Ball Throw Mode: This mode allows participants to select objects and works with a physics object that spawns from the motion controller.
Laser Pointer Mode: This mode allows participants to select objects and works with a laser pointer that spawns from the motion controller.
Phase II:
UX Information Display Modes Testing
This study discerns preferences between five pre-programmed information display types within a fully virtual environment. Research in this realm typically focuses on real-world information displays, with little evidence that information display models translate equivalently to the virtual world (Grobelny, 2011; Grobelny, 2015), or on studies that are otherwise generalized (Kolbeinsson, 2023; Popovici, 2019). This study instead focuses on virtual information display types that have been used in commercial virtual environments (video games, AR headset UX) and applies them to educational delivery displays.
Number of participants: 40
Age of Participants: 19-27
All participants self-reported as virtual reality literate.
Total number of interactions: 1
Training: 5 Minutes
Interaction length: 10 Minutes
Interaction Type: Virtual Reality Environment - In Person
Data Gathering: Mixed approach: Survey and Interview
Task:
Assess users' level of comfort with comparative virtual data representations of traditional learning information (text and video).
Participants were presented with one of five virtual information systems for displaying traditional educational materials such as video and text. Each participant was randomly assigned to one approach, asked to answer 10 general information questions pertaining to information accessible within the virtual environment, and given a survey about their experience after the interaction. Interactions lasted for a total of 30 minutes.
The virtual environment contained one of the following works of art and corresponding informational displays:
Zenobia in Chains, Harriet Hosmer
Lion Attacking a Horse, Unknown
American Gothic House, Architectural Model
These artworks were picked in particular because of their sculptural nature and relative obscurity to general audiences.
Interaction approaches tested:
Billboard Display: An interaction mode where, on interaction, information is overlaid on the environment.
Environmentally Fixed Display: An interaction mode that displays tablets next to each artwork, allowing the user to experience the information as part of the environment.
Immersive Display: An interaction mode in which, on interaction, the object and player are isolated from the rest of the overworld and fixed displays spawn around the object. This allows the user to avoid distractions.
Hand Tablet Display: An interaction mode that displays networked tablets next to artworks, allowing the user to physically move the information screen into view.
HUD Information Mode: The tablet interface is attached to the heads-up display, allowing the user to see both the information screen and the world in their view.
Interviews:
Users were instructed to spend a few minutes familiarizing themselves with the interface and artwork before being asked a series of questions meant to gauge their understanding of the artwork and their ability to access the relevant information within the environment. Specifically, participants were asked:
- What is the name of the object?
- Who is the artist of the object?
- Where was this object created?
- When was this object created?
- Which major art movement does this piece fall under?
- Which characteristics of the object make you say that?
- Could you physically describe the object? How large is it physically?
- Why is this artwork significant or important?
Survey:
At the end of the virtual interaction, participants were asked to rate their experience and comfort via a written survey. The survey consisted of 5 quantitative questions.
- How would you rate your ability to use this interface to learn about a different artwork?
- How often do you feel that it was difficult to find the information you were looking for?
- How would you rate your level of comfort within the virtual environment?
- How often did you feel that something was unintuitive, or didn't work how you expected?
- How well do you feel you understand the artwork that you just saw?
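Quantitative responses like these are typically aggregated per display mode as a mean score plus the share of responses at or above a cutoff. A minimal sketch, assuming a 1-7 rating scale (the scale itself is not specified here) and hypothetical ratings:

```python
from statistics import mean

def summarize_ratings(ratings, threshold=6):
    """Summarize one survey question's ratings for a display mode:
    mean score and the share of responses at or above a cutoff."""
    at_or_above = sum(1 for r in ratings if r >= threshold) / len(ratings)
    return {"mean": mean(ratings), "share_at_or_above": at_or_above}

# Hypothetical ratings for one display mode on a 1-7 scale.
immersive = [6, 7, 5, 6, 7, 6, 4, 7]
print(summarize_ratings(immersive))
```

Reporting both statistics matters here: two modes can share a mean while differing sharply in how many users cross the "confident" threshold.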
Participants reported feeling most comfortable with the immersive mode. The hand-display and fixed display modes made information too difficult to find for most participants, but felt the most intuitive. Billboard display was the lowest ranked overall, with few users reporting understanding beyond a score of 4, while immersive mode and hand-held mode were likely (80%) to score at or beyond 6.
These results showed that a varied approach may be necessary to fit a variety of learning styles and individual preferences. The immersive mode and tablet display mode were enhanced with principles found to be successful in the alternative display modes.
Standing Tablet Mode: Objects were displayed with an accompanying panel displaying information, as is traditional in museums and galleries.
Context Display Mode: Interacting with displayed objects would pop-up information panel displays next to each piece.
Hand-Held Mode: Allows the user to use a VR tracked tablet display to view information on a selected subject.
Player Screen Mode: Allows the user to view information on a selected subject via a display attached to the player's view.
Phase III:
Enhanced Immersive Mode
The goal of this study was to discern the Enhanced Immersion system's ability to deliver educational information. Like other studies focused on educational technology, the methodology consisted of observation and knowledge-check methods meant to shed light on a user's educational journey and objectively test specific knowledge. The knowledge and testing within the environment are based on the Iowa State curriculum for Art History, but have been applied to less familiar artworks to increase the accuracy of knowledge testing.
Number of participants: 20
Age of Participants: 18-40
All participants self-reported as virtual reality literate.
Total number of interactions: 1
Training: 5 Minutes
Interaction length: 30 Minutes
Interaction Type: Virtual Reality Environment - In Person
Data Gathering: Mixed approach: Observation and Knowledge Testing
Task:
Assess the ability of participants to learn about objects within the virtual environment.
Participants were placed into a virtual environment that displayed 20 interactive works. After a brief training session covering movement and interaction controls, users were allowed to freely explore the environment. Participants were told they would take part in a knowledge-testing exercise at the end of their interaction, were asked to focus on any three interactive objects within the environment, and were informed that testing would be based exclusively on these objects. Students were told to study demographic and historical information as if they were preparing for an exam.
Observation:
Virtual observation consisted of watching each user's experience from the user's perspective via a recording of the VR environment. Observations focused on evidence of difficulty with systems and interactions. Specifically:
- How often does the user have to repeat an action?
- How long does the user spend interacting with the 3D model?
- How much time does the user spend interacting with the 2D information?
- How does the user react when the video/audio component plays unexpectedly?
- How much time does the user spend with any given work?
- Which buttons in the interface does the user hit? How often?
- Does the user use the live-web screen in HUD or screen mode?
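Time-on-task metrics like those above can be derived from a timestamped event log of the recorded session. This is a sketch under assumed conventions; the event format and component names are hypothetical, not taken from the study's recording setup.

```python
from collections import defaultdict

def time_per_component(events):
    """Accumulate seconds spent in each interface component from a
    timestamped event log. `events` is an ordered list of
    (timestamp_seconds, component) entries; each interval is
    attributed to the component active when it starts."""
    totals = defaultdict(float)
    for (t0, component), (t1, _) in zip(events, events[1:]):
        totals[component] += t1 - t0
    return dict(totals)

# Hypothetical log: model viewing, then 2D info, then back to the model.
log = [(0, "3d_model"), (45, "2d_info"), (130, "3d_model"), (160, "end")]
print(time_per_component(log))
```

Summing intervals this way yields per-work and per-component dwell times, which the knowledge-testing phase can then be correlated against.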
Knowledge Testing:
Knowledge testing occurred in two phases, one directly after the virtual environment experience and another one week later via email. Both tests were digitally delivered to each user's personal device and customized to include only the three works identified through observation.
The test consisted of 6 demographic multiple-choice questions and two deeper-learning short-answer questions. Three of the multiple-choice questions and one of the short-answer questions stayed identical between the two testing phases, while the others shifted focus to alternative questions about the same artworks.
Participants scored well on the phase-one knowledge check, generally obtaining a score of 75% or above. Phase-two testing showed a decrease in correct answers to an average of 62%. Users who showed more confidence in their navigational abilities tended to score higher, while users who struggled with the interface had considerably lower scores. Users who spent more time viewing the written content (screen-based information displays) did markedly better than those who spent more time with the video/audio components or the model itself. Regardless of length of interaction, participants were able to accurately describe an object physically and approximate its height based on a single interaction.
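The week-later drop can also be expressed as a retention fraction, which is easier to compare across display modes than raw averages. A trivial sketch using the reported averages:

```python
def retained_fraction(avg_initial, avg_followup):
    """Fraction of initial knowledge-check performance retained
    at the follow-up test."""
    return avg_followup / avg_initial

# Reported averages: 75% at phase one, 62% at phase two,
# i.e. roughly 83% of initial performance retained after one week.
print(round(retained_fraction(0.75, 0.62), 3))
```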
Final design of the interaction system: VR panels able to be controlled with VR controllers, mouse and keyboard, or gamepad.
Final design of the lighting configuration, which allows users to select the color, intensity, and shadow casting of lights on an object.