In data collection, there is ample room for assumption-making on both the researchers' side and the participants' side, and those assumptions carry forward into the interpretation. The wording of a question is automatically clear to the person(s) who wrote it. Those researcher-writers take for granted that any given participant in a study will therefore fully grasp what is being asked of her. The danger is greatest when inquiry revolves around a broad word, "an SAT word," as we used to say in high school. Participants may have no personal investment in the study's outcome, and may therefore not take it seriously.
For example, I worked in a research group with beloved colleagues where our study hinged on the word "immersion." We never defined that word for the participants as part of the survey instructions. Nearly all participants rated our provided playlist at the highest level of immersion. As student-researchers, we were happy; we felt successful. But as those results have aged a few months, it seems more likely that the participants were trying to be nice, giving us researchers the results we wanted. After all, we had made the playlist, and that fact alone implied those five song cuts must be immersive in whatever sense the survey intended, because the question writers had chosen them.
Another kind of (possible) bias I've observed is when someone's self-perception does not align with how others perceive them based on their statements and behaviors. I think that where there is hypocrisy, you will always find justification. In my brief time as a Ph.D. student, I have observed this issue far more among faculty than among fellow students. Expertise can blind, like medical doctors who smoke cigarettes. Researcher bias can read causality into patterns that are mere correlation, without considering confounding factors. I heard a professor say "international students are harder working," a group to which this professor would have belonged, minutes after evangelizing the need for cultural awareness in research analysis. We pick and choose who deserves to see our better angels, and who doesn't.
Collaboration and peer review are good tools for minimizing bias, if you can find outspoken colleagues with sharp minds (I favor Megan Barnes as the Cohort 27 bar-setter in this regard). A thorough literature review could also aid a researcher's honest self-reflection on personal vulnerability to biased views, removing some of the potential for feeling attacked when comparable studies do not produce comparable outcomes. When appropriate, statistical analyses might be useful. Replication studies can illuminate issues of bias in earlier studies, though of course they are only useful in retrospect.