My PI recently pointed me (although not me specifically) to an interesting lecture by Srikant Sarangi. In this RECLAS Lecture, Sarangi discusses various methodological issues that qualitative researchers face when interpreting and analyzing data. One of his main points is that in choosing an analytic lens, whether that is Conversation Analysis or some other qualitative method, we cannot escape interpreting data through that lens. There is no such thing as merely seeing a phenomenon in its pure form; anything you see, you interpret. This is obvious in everyday life, where depending on our knowledge we may see a flower, a rose, or Rosa hulthemia. Or from another perspective, once we've learned to read, we cannot just see lines on a screen or a piece of paper: we see and immediately read text. There is no escaping it.
Analytic Goals
Sarangi then raises the specific issue of what Harvey Sacks calls "unmotivated looking," the ideal that we don't bring any analytic ideas or problems to the table, but start by just going through the data to see what comes up. Schegloff framed it as follows in his seminal paper on Confirming Allusions:
"An examination not prompted by prespecified analytic goals (not even that it be the characterization of an action), but by "noticings" of initially unremarkable features of the talk or of other conduct." (Schegloff, 1996: 172)
According to Sarangi, this is inherently impossible. Looking in some motivated way is unavoidable, because we always direct our attention to some channels, and not others. We cannot look at everything, and so we look at one thing or a few things at the exclusion of others. We may of course look at the same phenomenon repeatedly—which is what we do in data sessions—but we do this precisely because we will be looking at different aspects each time. Either that, or we're looking at the same aspect in more detail, but then we are clearly doing motivated looking.
Interestingly, this exact issue came up recently when I had revised and resubmitted a paper. One of the reviewers had argued that my analysis was not data-driven, but that I had come to the data with preconceptions about what to look for and what I would find, and that my paper was merely a confirmation of those preconceptions. So what I was doing was even worse than motivated looking.
I argued in response that while the presented analysis was indeed not gained through unmotivated looking as described by Schegloff, this did not mean it was not data-driven. I had merely relied on earlier findings by other scholars and myself. While the topic of unmotivated looking was not discussed as such, it was tacitly addressed when the reviewer responded that according to my description no CA study could ever be data-driven, since we always bring other theories with us when we analyze. We cannot escape what we know about conversation. Obviously the reviewer specifically did not want to make the point that there is no such thing as unmotivated looking, quite the opposite really. But it raises the issue of what unmotivated looking truly is, what it means for Conversation Analysis, and whether it is indeed still feasible.
Methodological Lens
One way to understand Sacks and Schegloff, and this seems to be how Sarangi understands them, is that you start an analysis from a blank slate. You assume absolutely nothing and try to see the data for what it is—whatever that may be. And in that case he is right: that is an absurd notion. There is simply no way to pick up a recording of a conversation and start looking at it completely unmotivated by any analytic goal whatsoever. The fact that you're going to do Conversation Analysis inherently means you have a limited set of possible analytic goals, and you're aware of these goals. In fact, you use the method to further constrain those goals, as with any other qualitative method.
But this is not the way Sacks, Schegloff, or my reviewer for that matter, understand unmotivated looking or data-driven research. It means that instead of formulating a research question before you study the data, you determine your research question based on what you find interesting in the data. What you find interesting is obviously inherently determined in part by your methodological lens and toolbox, there is no escaping that, but within the confines of that lens you can still do unmotivated looking.
Schegloff's own example was about what people do when they confirm a yes/no-type question by providing a repeat of that question. That is a very specific practice, but unmotivated looking can be much broader. When I started in my current research position at the Nuffield Department of Primary Care, I was tasked with studying remote consultations, or video-mediated consultations. The research protocol specified a few research questions, but these were so broad as to be anything but constraining. My goal was to analyze the communicative practices that make up a successful remote consultation. But not only is that a question one obviously cannot answer in a year, there are so many practices that I was basically free to study anything, as long as it dealt with communication. So anything.
My first couple of months were spent looking at the data, trying to figure out what would be interesting questions to answer. The beauty and the challenge of a field as understudied as video-mediated consultations is that you can choose whatever you want, because chances are nobody will have done research on it, let alone published on it. In the end, or rather the middle, since the research is ongoing, I decided to focus on the greetings and the physical examinations. Of course these choices and my analyses were guided by the fact that they are about video-mediated consultations, and they are guided by the need to come up with questions and answers that benefit clinicians and patients, but at the same time they are still analyses that were developed after "noticings of initially unremarkable features," and in that sense they are the result of unmotivated looking.
Seeing things differently
The point is this, I think. When we as Conversation Analysts say we are doing unmotivated looking, that does not mean we pretend to leave behind all knowledge and experience and the assumptions and biases that go with them. We are very much aware of what CA can and cannot do, and what types of questions a CA lens will and can generate. What unmotivated looking means for us is that we can just pick up a piece of data or a dataset, and transcriptions of those data, and using the toolbox of CA go through it for noticeable features. We subsequently try to refine what those features are, so as to come to an analysis. Alternatively, we may do a practice-driven analysis, where we start with an active interest in a specific phenomenon that we then collect—as Mick Smith and I have been doing for the past year in our study of Oh I thought X and Oh ik dacht dat X.
Unmotivated looking is thus very much an option for data analysis, but it does not mean what Sarangi takes it to mean. That's not to say his point, and the lecture in which he makes it, are without merit. Quite the opposite really. It forces us to rethink what it means to do analysis, and to think about what prior experiences we bring into our research projects. It is a way of reminding us that once we make the decision to do Conversation Analysis, we will notice certain phenomena, but not others. And within CA, somebody like me with a theoretical linguistics background will notice different features than somebody who has spent their career studying embodied interaction in medical care.
We take unmotivated looking for granted, because Sacks and Schegloff developed the toolbox from that perspective, but as Sarangi points out, the more assumptions we make, the more problematic our analyses become. We need to be critical of what unmotivated looking means for us, and not assume that because as individual scholars we have an understanding of it, that understanding extends to our peers or colleagues across disciplines.