One thing that never stops amazing me when I study people talking is how incredibly organised conversation is, even at the smallest level. We don't just pay attention to words and sentences: the smallest gesture, facial expression, or sound can have a significant impact on our conversations. Research in the cognitive sciences, where they do beautiful brain scans, suggests that most of these tiny phenomena should not be possible: our brains should not be able to react so fast. But then you look at a conversation and they happen.
Talking at the same time
In the past year I've been studying how technology affects the way we talk, and in particular how we organise turn-taking. We generally don't talk at the same time and we abhor silence. So we need to be really good at deciding who gets to talk. But when we use, for example, Skype, there is always a bit of lag, some extra silence. And I want to understand in detail what kind of problems that causes and how we solve them.
To do this research I started looking at recordings of conversations, focusing on points where both people talk at the same time, or are about to talk at the same time. And it's those cases that truly fascinate me. One person somehow recognizes that the other is also about to talk, and just before they would have produced a sound, they retract whatever they were going to say. And we can see this happening. That person will direct their gaze, open their mouth, but then freeze, and swallow whatever they were going to say and do. It happens in a matter of milliseconds, and you would think we should not be able to do this, but I find case after case.
Predicting machines
Our ability to adjust our behaviour on the fly seems to be part of a more general capacity we have for predictions. We know when it's our turn to talk, because we can predict when someone else is likely going to be done speaking. And that's possible because we share the same language, culture, and we know a lot about the other person. But this means that we are continuously making predictions and refining those predictions based on new information. And that is an impressive skill.
While listening to a podcast, I heard this brief section of talk in an interview between Elizabeth Day and Dolly Alderton:
Day: that isn’t a quote from me, that is a quote from
a- it’s called Desiderata, and it’s prose
by Max Ehrmann
Alderton: I- the d:’s made me thought you were gonna say
a Desiree song.
Day: ((laughing)) or Dizzee Rascal,
Alderton: ((laughing)) a Dizzee Rascal song.
Day: Ha ha ha ha
Alderton's remark absolutely fascinates me. She claims that she just heard the 'd' of Desiderata and her brain predicted that Day was going to say Desiree (song). It's already remarkable that our brains can make such quick predictions, but that we are aware of them is even crazier. She suggests that she is not just listening to Day word-by-word, or syllable-by-syllable, but sound-by-sound. And it gets even more bizarre when you realize that unlike letters in writing, we don't speak in clear separate units: our words just flow from our mouths in a continuous stream of sound.
Of course, when we see behaviour like this, caveats apply. We cannot know what Alderton actually predicted. It might be that she is merely making a joke after the fact. And it's not clear how much access we really have to our predictive abilities. But it's an example that still rings true for me. I think everybody has had this experience, where they thought someone was going to say one thing, and then they said something else entirely. And so what exactly happened in Alderton's brain is not really the point: what she thinks she heard and expected, and her response to that, show that humans are incredible predicting machines.
Half a century ago, the founding father of conversation analysis, Harvey Sacks, was lecturing at various campuses of the University of California. In these lectures he argued that we should not care about whether something is possible. The fact is that people do things all the time that are seemingly impossible. If we want to understand how conversation works, we should just let the data speak for itself. Whatever the brain does to make it possible, that's a secondary issue. And to me, it is these tiny impossible possibilities that make conversation such a fascinating research subject.