Friday 30 September 2016

On Research Ethics and Risk

On Wednesday I stole something from Coles.

(Would now be a good time to mention that this blog and the views expressed within are in no way affiliated with my university?)

I didn’t mean to do it. I didn’t even realise I had done it until I got to my car and unloaded my bags, and noticed a tiny jar of pesto sitting at the bottom of the trolley that I had failed to put through the self-serve checkout.

And because I’m taking an intensive unit in social research ethics, of course I started analysing the situation from different ethical perspectives.

For example, from a consequentialist perspective, which is concerned with the relative good and bad of the outcomes of our actions, I might argue it was okay for me to accidentally take a little jar of pesto from Coles without paying for it, because the relative good of the outcome - I, a poor-arse student, saved $4 that I could spend on a cup of coffee for the advancement of my research - is greater than the relative bad - Coles, a multi-million dollar company, loses out on $4.

Whereas from a virtue perspective, which is concerned primarily with the character of the people doing the actions, I was maybe a little at fault in not noticing the jar of pesto (is ‘being observant’ a virtue?), and was much more at fault in being too lazy to do something about it afterwards. 

I decided to take the consequentialist perspective.

(By the way, it occurs to me that someone in Coles HQ must have done a crap tonne of calculations to work out how much they save by installing self-serve checkouts, compared to the loss incurred by every slightly dishonest customer checking out their entire trolley contents as ‘potatoes - $1.99/kg.’)

But that’s not a very interesting ethical question. A much more interesting question, one that I’ve been thinking a lot about, has to do with who assumes the risk of human research, and just what the hell informed consent is.

Let’s use a hypothetical example of New York crack dealers because: social science. I want to interview a bunch of NY crack dealers about their crack habits and social networks. I approach them and they agree in principle. We negotiate some form of consent to conduct the interview, on a number of conditions:

1) the participants can withdraw at any time; that’s standard ethical practice.
2) the participants can vet the interview transcript afterwards; that’s also not an uncommon practice.

Under both conditions, I as the researcher assume the risk that my participants might choose to stop the interview, and that they might censor all the good stuff. I think that’s fair.

But there is another condition that can be negotiated as part of consent:

3) the participants can veto any analysis and subsequent publication.

If we include this condition, I assume the risk that my participants may not like how I interpret what they said, and will veto my entire project.

If we don’t include this condition, my participants assume the risk that I may come to conclusions about their personal lives that are actually upsetting. Which is entirely possible, even if they’ve approved the contents of the interview transcript.

Does the risk of researching humans rest on the researcher, or the humans?

And is it even possible to give fully informed consent to participate in a project if you’re not able to foresee all possible final analyses?

Is it okay for the possible benefits of your research to override the right of people to protect their own stories and present themselves in the way they want? Does it make a difference if you’re a research hack publishing only on your own blog, or if you have the backing of an entire institution? What about the political and social factors of your participant community and your relationship with them? What if your participants are far-right neo-Nazis?

The practical answer for most social scientists is that we’re going to do our best to analyse and report on our findings in a way that gels with our own academic background and that is probably concordant with the views of our research participants. Unless we think they’re wrong (whatever that means) and deserve to be publicly outed. But even if we’re sympathetic to our participants, I don’t think we tend, as a rule, to make our analyses and publications conditional on the approval of our participants - not least for logistical reasons.

Is this right? Is this ethical? Do we just keep asking these questions?
