In exploring how I can move beyond my initial biases as an action researcher, I researched the main types of bias and considered how each may have affected my project:
- Recall bias
- Asking participants to rate how they felt in the morning AFTER the study had been conducted could have affected their recall of events that occurred before the study, thus skewing the results.
- Selection bias
- Although the survey was anonymous, so participants would not feel pressure to answer in a certain way, the majority of my sample was aged between 18-30, introducing a degree of selection bias into the intervention. Most participants were also proficient in English, had a good understanding of technology, and worked or studied in a creative field, which somewhat skewed the results of the study.
- Observation bias
- As the participants were aware that I was in the room with them, they may have felt inclined to go through the experience in a certain order, or to comply with the instructions I had given them beforehand, rather than proceed at their own pace. They may also have felt pressured to complete the experience too quickly, or felt they did not have enough time to digest it.
- Confirmation bias
- In the analysis of my results I was more inclined to focus on the positive feedback than the critical feedback, and thus weighted some responses more heavily than others. I was also more inclined to follow the feedback of my professional stakeholders, which biased which critical aspects I gave priority to.
- Publishing bias
- In the analysis of my data I mainly summarised the positive feedback, rather than taking a more holistic view of the experience and summarising the criticisms as well. Although my intervention was a success, I need to pay more attention to negative feedback in order to improve further iterations of both my research question and my intervention.
INTERVENTION CHECKLIST:
A short checklist to verify my research question as well as my intervention:
1. Are your interventions informed by your secondary research?
Yes, as well as by primary research. Each aspect of my intervention, from the visuals to the sounds to the exercises, is backed by research and has been critically evaluated.
2. In testing your interventions with stakeholders and experts, how/why have you chosen these people?
(Copied from Research Report) To date, the intervention has been tested with creative professionals aged 18-50, due to several factors. In studies conducted by Limina Immersive, the demographic demonstrating the biggest interest in purchasing a VR headset in the UK was 18-24 year olds, and the age group found most likely to own a headset was 35-44 (Allen 2021). Despite an even age distribution across downloads of popular meditation applications (Curry 2021), studies show that young adults in the UK (18-30) are the group most aware of daily mindfulness practices and have the highest rates of regular practice (Simonsson et al. 2020). My audience consisted of coursemates, creative professionals in my network, and external experts within the VR, Film and Television industries. Initially, I wanted to test the intervention with experts within the mindfulness industry; however, after a few attempts it was clear that VR had not been adopted as a tool within the mindfulness industry in the same way that mindfulness was an area of exploration within VR. With that acknowledgement, my stakeholder pool consisted of VR creatives and producers who had worked on mindfulness projects, both independently and for clients such as Headspace and Calm.
3. How have your methods of gathering evidence changed?
Initially, I relied heavily on secondary research for the UX/UI part of my experience, but after working with VR designers I realised that the best way forward was to test my project with as many people as possible and collect primary feedback on the experience and the interactions within – I’m definitely a lot more confident now in approaching stakeholders externally and getting their (academic and professional) opinions on the project.
4. Have you tested your intervention with same stakeholders/experts or did this change? And why?
Yes. I thought the fairest way to evaluate the intervention would be an even mix of stakeholders who had already been part of the research and development, and those who knew nothing about the research question and simply wanted to try the experience. Combined with anonymous feedback, this provided organic results and an objective overview of what the experience was like.
5. To what extent have contingent circumstances had an effect on the way you have investigated your question?
My main contingent circumstance was access to resources within the university: I was only allowed to rent equipment and rooms for a small window of time, which affected my research question (in my research proposal I had initially wanted to use biometrics and Arduino systems). However, I definitely feel that my current intervention is far more reflective of the realistic, streamlined expectations I had for the course, and has therefore helped me consolidate my research question much better.
