
Measuring academic scholarship and leadership

8/11/2021

The Dutch Research Council, NWO, recently changed the way it asks reviewers to assess grant applications. In the past, it took quantitative measures like Journal Impact Factor into consideration. The assessment process is now purely qualitative. It takes into account that the modern university is not a publication factory, and that you cannot evaluate the ability of a researcher and the quality of their work merely by the journals in which they publish and the frequency with which they do so. NWO wants to acknowledge and appreciate the modern academic.

Funding is arbitrary

This new policy created a lot of anger, especially among senior researchers. Over 170 of them, including one Nobel Laureate, wrote a letter stating that the new policy would adversely affect the standing of Dutch researchers in the global scientific community. They argue that by ignoring established quantitative measures like Journal Impact Factor, junior researchers will no longer be able to compete. Furthermore, they claim that it’s harder to assess a “narrative CV” as there are no clear methods to measure skills like “leadership”, that they struggle to fill in the plusses and minuses of an application, which overrule their “gut feeling” about it, and that in the end they just google the researcher to look at their publication list anyway. The whole process, they argue, would make it arbitrary who receives funding.

These arguments are weak, to say the least. For starters, who receives funding is already arbitrary. The quality of the funding proposal and the researcher’s abilities are less relevant than who happens to sit on the committee. Many bids meet the standards for excellent research, which means that committees lack clear grounds to distinguish between them. They rely on arbitrary factors such as their renowned gut feeling. Whether an applicant receives funding is more or less a lottery – just with slightly better odds.

Narrative CVs or Impact Factors

The complaint about how one should review narrative CVs is also strange coming from senior researchers. Let’s ask a slightly different question: how do you evaluate good scholarship? The answer would seem obvious to these people: number of publications, journal impact factor, etc. But that answer takes something for granted that isn’t at all obvious: that these are in fact valid metrics for scholarship, and not to some extent arbitrary, a historical accident. Over the course of the 20th century, researchers came to value publications in certain journals, but it did not have to be that way. We could have focused on public outreach, on impact on technological development, healthcare, public approval, and an infinite number of other social values. There are no natural laws for good scholarship, only social conventions. That’s not to say all metrics would make sense, but it does mean that you should always re-assess existing metrics when they seem out of date.

These researchers are correct that assessing a narrative CV is harder than assessing a classic CV, but only because we already have metrics for the latter. It is no more difficult to measure leadership than it is to measure scholarship: as any second-year undergraduate learns, if you want to measure something, you need to operationalize it. And while not every operationalization is equally valid for the concept you’re interested in – there can be discussions about what makes a good metric – there is no more reason to claim we cannot measure leadership than there is to claim we cannot measure scholarship. Again, our metrics are just social conventions – they are good because we agree that they are good.

Noise

My final beef with the objections of these senior researchers is that a series of plusses and minuses does not allow them to evaluate an application according to their own sense of what makes it good. But this isn’t a bug: it’s a feature. By having standardized ways of assessing what is good or bad about an application, we get rid of the noise that exists between the assessments of different reviewers. Different reviewers look at proposals in very different ways, and you do not want the funding decision to depend on who happens to be the reviewer. Multiple opinions help mitigate noise, but a committee of 3-5 members is still a small sample, and thus very noisy. Moreover, the areas where they do agree are biased towards certain types of scholars. Decisions are made and arguments are then fabricated to justify them: that’s simply how human decision-making works if left unchecked. We need to make sure that when a researcher submits a grant application, any committee would come to the same assessment, and that means not relying on the gut feeling of a reviewer.

It is probably no surprise that these objections were made mostly by senior researchers. After all, they thrived in the system that is now being redesigned – the metrics that made them successful will no longer be valued. In fact, shortly after they wrote their letter, over a hundred junior researchers wrote a letter of their own to counter these objections. Senior researchers do not share the lived reality of junior researchers who buckle under the pressure of modern academia, pressure these senior scholars never had to deal with – at least not in the same way. Relying on nine-month contracts for years after finishing a PhD, moving from city to city, spending hours of one’s spare time writing grant applications that are almost always rejected, no matter their quality, all for a salary that forces some to live in shared housing well into their 30s: it is insane. If these senior scholars truly care about us junior researchers, they will not cling to an antiquated system just because it’s the way things have always been. They are in a position to make it better: the place to start would be to listen to and work with the scholars they want to be helping.

Vegan meat, dairy and butchers

10/28/2020

Last week the European Parliament voted down a proposal to ban the use of words like burger, sausage, and steak for products that do not contain meat. The EP did not think these terms would be confusing when applied to plant-based products. As long as products are clearly labelled as plant-based, vegan, or meat-free, there is no reason consumers would accidentally buy them when they were planning to buy meat. At the same time, the EP did come down on plant-based dairy products, banning terms like cheese-like and yoghurt-style. Interestingly, The Guardian did not see this as an inconsistency, since cheese and milk are specific products. But what counts as a product and what counts as a label is not so easily fixed.

Language is use

Let's begin by pointing out that the European Parliament made the right choice on meat substitutes. I'm not just saying this as a vegan of 4.5 years: there is every reason that faux meats should be allowed to have similar names to actual meats. The words we use, like sausage or steak, are associated with meat, of course, but these are just conventions that arose through use. However, not all labels are equal. We can distinguish between labels that are associated with the nature of the product, such as steak, which involves the muscles of an animal, and labels that are primarily about form and use, such as burger, which tells you nothing about the type of meat, just that it generally comes as a disc-like patty that you put on a bun. In fact, a burger could just as well be made of fish.

Based on that observation, we should immediately reject the idea that terms like burger or sausage should not be allowed to be applied to plant-based products. The only reason we associate them with meat is that in the past they have always been made primarily or exclusively from meat. But that's a historical artifact. Their main identifiers, the way we classify something as a burger or sausage, are their shape and how we eat them.

The argument becomes a bit more complicated when we start talking about words we associate with specific parts of an animal, such as steak. Indeed, this is likely the same reason why terms like almond milk are banned: they categorise a specific product. Milk is not just any white liquid; it is produced by lactating animals. Of course almond milk is a dairy replacement, but that's not the same as saying it's plant-based dairy. So what other reasons are there to ban or allow these terms?

Faux meats

Faux meats are not just made to be eaten in the same way as meat: they also have to look, taste, and smell like meat. Companies like Impossible Foods strive to make products that give the exact same sensory experience as meat, but without the animal agriculture. They are developed to offer consumers a product they can use to replace meat without losing out on anything they enjoy about meat. (Weirdly, non-vegans can get very hung up on this: they don't seem to understand, or are not willing to understand, that being against animal agriculture does not mean you dislike the sensory experience of meat.) In its ideal form, we have two products that are almost exactly the same, but made from different sources: plants or animals.

At this point, from a usability perspective, it makes sense to use the same label for both. Outside of a courtroom, words never have precise definitions that we take from dictionaries: the meaning of words is shaped by how we use them. One important factor is what best helps us communicate, to make sense of the world and of what other people say. If one product is, for all practical purposes, the same as the other, then we will generally use the same word, for no other reason than convenience. In fact, it goes so far that we use specific brand names to refer to generic products. We often don't talk about plastic containers, but about Tupperware. Okay, no other company can label their product as Tupperware, but that's an IP issue, not a language issue. Nobody is confused when you go to a store to buy Tupperware and come home with a different brand.

Faux meats are meant to replace actual meats. To make it as easy as possible for people, using the same label conveys better what the product is than coining some new word. And sure, vegan steaks are highly processed foods, but the issue is not health. You can label a bag of crisps as vegan, even if going through that bag in one sitting is not particularly healthy. For people who want to lower their meat intake, having a product labelled steak is convenient, because they immediately know what to replace their conventional meals with.

Vegan cheese and cheese-like

Now what about the plant-based dairy labels? There are at least two reasons why almond milk should not be called almond milk – although whether you enforce that by law is another matter. The first, as I mentioned, is that milk is produced by lactating mammals. While that is not a meaning engraved in natural law, it is a meaning we have always associated with the word. For the same reason, talking about vegan meat is slightly weirder than faux meat, because meat is a more specific category, used to describe a specific substance. The second is that almond milk – and the same applies to soy, hemp, pea, coconut, etc. – is not designed to give the same sensory experience as dairy milk. Some companies try hard to give it a similar look and texture, but no company tries to make almond milk taste or smell like dairy milk. In fact, the flavour of plant-based milks is part of their unique selling point.

The ban on dairy terms for plant-based milks thus seems justified from that perspective. I do, however, want to offer a counterpoint to all this: everybody calls it almond milk. I have literally never met anyone who asked for almond drink or something of the like. The meaning of language lies in its use, and it's very clear that almond milk is the term people use. Forcing companies to label their products differently is not going to stop that. The language of the law is just not the same as the language of human society. Legal language is a means of exercising power and control, and that is a very different ball game.

So why then do I disagree with The Guardian on terms like cheese-like and yoghurt-style? Simple: these labels clearly show that the product is not actually cheese or yoghurt, but something made to give a similar sensory experience. Even if we agree that vegan cheese is wrong, because cheese has to be made from dairy, cheese-like inherently means it's not cheese, and so there is no reason why it should not be allowed. For the same reason, IKEA can sell me artificial plants (which they have successfully done; I cannot keep real plants alive), and I have yet to meet a lobby group that wants to ban this term. And of course, we have been using terms like peanut butter, and more generally nut butter, for so long that the argument that these products need to contain dairy is plainly disingenuous.

Convenience

In the end, it all comes down to the convenience of human communication. We need to categorise the world around us in a way that allows us to communicate about it efficiently and effectively. Making up new words for products that are almost identical to other products is the very opposite of convenient. That is why terms like almond milk are ingrained in our culture, despite being legally banned. The law may take a different perspective, but the law cannot prescribe how people should talk. As much as the meat and dairy lobby may try to use the law to exercise power over our dietary habits and preferences, in the end, when you go to a supermarket and ask where you can find the almond milk or vegan steak, the employees will have no problem showing you the way. In fact, it is only a matter of time before you actually need to make clear that you want the plant-based products, because asking for milk or steak will just not be enough information anymore.

What is Zoom fatigue and do we have it?

4/27/2020

Since the outbreak of the COVID-19 pandemic, we have been forced to distance ourselves physically from other people. As a result, we have started using videoconferencing tools a lot more. In fact, we are talking so much online that some people say we have Zoom fatigue, or more generally, videoconferencing fatigue. Somehow, these video calls seem to cost far more energy than face-to-face talks, whether we're teaching, having meetings, or just catching up. I've seen a range of articles on news websites where experts provide plausible explanations. However, there is little to no scientific evidence to back up any of their claims.

Zoom fatigue is vague

Zoom fatigue has entered our vocabulary at an incredibly high rate. Google it and you will get over 200,000 results. That is a lot for a term that we likely never used until a few weeks ago. Strangely though, nobody really knows what Zoom fatigue is. It essentially boils down to a feeling of exhaustion we associate with a lot of videoconferencing meetings. While that is a perfectly natural starting point, it does not give us much in terms of definitions. And if we want to give a scientific explanation, we need to know what we are talking about first, as well as what we are NOT talking about.

If you use the academic version of Google, Google Scholar, you will get only six results, and some are accidental: Google gives you something like "... Zoom. Fatigue ...". That's no surprise. Research takes time to be published, and Zoom fatigue is a recent phenomenon. There simply has not been time to do adequate research on it. While understandable, that also means alarm bells should start ringing when you read explanations for Zoom fatigue: how can scientists possibly know what it is and how it works if they have not done the research?

Just to emphasize: the fact that we haven't studied it does not mean Zoom fatigue is not real. It does mean that any explanation of how it works is tentative at best.

Do we know anything?

For now, let us go with our commonsense understanding of Zoom fatigue: we feel mentally exhausted, and that is a result of too many video calls. The question we want to answer is: why would video calls cause us to feel mentally exhausted?

Clearly, video calls are not always a problem. Businesses, particularly those that engage extensively in what is called distributed work, have been using videoconferencing for decades. Despite that, there have been no previous large-scale complaints about video calls causing mental exhaustion. In fact, if we read the scientific literature on distributed work, we find that video calls are seen as an important tool to mitigate the psychological effects of social isolation. Working alone is hard for humans, and video calls allow us to connect with people. Furthermore, companies like Google have had video portals for a while now. These are essentially "always on" video calls. But the way they are used has not caused mental exhaustion.

The cause would seem to be the sheer number of video calls we are having right now. However, nobody knows what that number is. Not only that, there is a range of issues left unaddressed. How many video calls is too many? At what point do we start noticing it? When does it become so taxing that we just no longer have the energy to continue? How much does the type of call (work, teaching, informal) matter? I could keep asking these questions, but the point should be clear: we are trying to explain a phenomenon without having any real information.

Multitasking

If we look at the explanations given, we can put them in two categories: psychological and interactional. The first category focuses on how video calls supposedly affect our brain and mental state, and why having a lot of them would therefore be taxing. The second category focuses on how video calls differ from face-to-face conversations and similarly tries to explain the mental exhaustion from those differences. Neither type of explanation, however, goes beyond basic speculation.

Let's look at some psychology first. One explanation is that video calls require us to multitask more than we can handle. We are looking at a lot of people at the same time (four in Microsoft Teams, but a lot more in Zoom), when we should only focus on one. Normally all those other people would be in our peripheral vision, where they would be less distracting. However, if only the speaker is visible, that's also no good, because then you cannot see what everybody else is doing, which again you would normally notice in your peripheral vision. And all this overwhelms the brain.

Now, fun fact: it is entirely unclear whether this is indeed what overwhelms the brain. While I don't doubt that multitasking is hard, there are no neurological studies that have measured how human brains react to these kinds of conversations. The anecdotal evidence actually suggests the opposite. All those family calls with large groups of people go off well, and people enjoy them, even people who sit through a lot of Zoom meetings. Now, I'm not saying that video calls therefore do not overtax our brain, but the evidence points both ways. We cannot just extrapolate from what psychological theory tells us.

Non-verbal cues

The second type of explanation is interactional, which is actually my area of expertise: I've been investigating video calls in medicine for nearly two years now. You'd think interactional explanations would be great, because we actually already know quite a bit about how video calls work, and we know a lot about how face-to-face conversations work. Alas, they are just as speculative as the psychological explanations.

Let's unpack. The interactional explanations focus on a few aspects. One, the continuous delay (lag) in video calls makes it hard to have a smooth conversation, which can be annoying. Two, we cannot make eye contact with other people, since the camera is above or below the screen. Three, we cannot use all our non-verbal behaviour, but because we can see each other, our brains still try to use it.

All these points are definitely true. Research on silence in conversation goes back decades, and there is good evidence that we have more trouble establishing a smooth conversation by video. Similarly, it is obvious that we struggle with where to look in a video call, and our gestures cannot always be seen. However, how much this actually exhausts us is a separate question. Nobody has ever investigated this.

The main problem here is that people are making claims about how our brain works, without having any way of connecting our behaviour to any responses in our brains. The explanations sound nice and plausible, but they are not supported by any evidence whatsoever. It makes for a nice story, but we should stick to what we know.

We need touch

To close off, let me propose an entirely different explanation. We were not built to live on video calls alone. There is some evidence that when we talk remotely, we lack what could be called "social presence", that feeling of being together. This feeling, whatever it may be precisely, is important to us. Since COVID-19 and the lockdowns in most countries, we have lost the ability to connect with our friends, family, and colleagues face-to-face. We are not exhausted because Zoom is exhausting, but because using ONLY Zoom is exhausting.

We are living in a time of massive stress and anxiety. Some of us (yours truly among them) live almost completely isolated from other people. I go out less than once a week for my shopping, and the face-to-face interaction with the cashier is all I have to sustain myself. Others live with partners and children, and are now forced to see each other all the time. This can create tensions even in good relationships, let alone when things are less than ideal, either in the relationship or in society at large. Anybody would feel exhausted at some point.

Of course, I am speculating about the cause of our exhaustion just as much as anyone else. In fact, that is the point. Without evidence, any explanation is as good as another – okay, any explanation that is not supernatural. Is Zoom fatigue real? Possibly. But just because we feel exhausted now and are having a lot of video calls, the two do not have to be causally related: Zoom does not have to be the cause of our fatigue. Ice cream consumption and murder rates both increase by about the same rate during the summer, but eating ice cream generally does not cause you to go into a murderous rage. Moral of the story: just because a scientist can provide a nice story and the BBC or National Geographic publishes it, that does not mean it's true.

The inadequacy of language

4/20/2020

Language seems an amazing way to express ourselves. Through language, we can communicate on a far more refined level than would be possible if we were to rely on just symbols. Indeed, it may be that language made it possible for us as a species to learn from each other on an unprecedented scale, and that language in that way drove human cultural evolution from smashing stones to building skyscrapers.

The reality is that language is not that great a communication system. In fact, we seem to need language in order to learn language. We need to understand each other before we can start using language to refine that shared understanding. That seems paradoxical, because it would seem that language is what makes it possible for us to understand each other in the first place. Chicken and egg. What came first in human evolution: shared understanding or language?

Misunderstanding

I am not going to answer that question, mostly because I’m not a linguistic anthropologist and it’s way beyond my comprehension. But I do know this: we need a shared understanding to talk. Language only works because we share so much knowledge with our conversational partners. It’s not just knowledge about the language we speak, but knowledge about the culture, country, and world we grew up and live in. We make sense of the language that people use based on this vast array of “common ground”. Language is efficient only because we can leave unspoken most of what we need to communicate. If we had to make everything explicit, we would never stop speaking.

The problem is that things can go wrong. When we mishear or misunderstand something our conversational partners say, and we don’t realise it, that misapprehension will shape the rest of the conversation. Fortunately, humans are very good at noticing when things go wrong. When a speaker asks a question, the answer is often enough to make them realise whether or not they were understood. This can be obvious, such as when the answer does not address the question, but it can also be more subtle. Sometimes the same sentence can mean two things: one meaning is an answer to the question, the other is not.

When misunderstandings pass us by, the consequences can be dire. In an article from 1992, Emanuel Schegloff, an American sociologist, discusses a conversation in which a radio host and a caller have a misunderstanding. One thinks they’re talking about the Korean war, the other thinks they’re talking about Vietnam. This misunderstanding continues to pass them by, and they get into a heated argument about whether or not the war was supported by the UN. It’s not until the caller hangs up, looks up the information, and calls back in, that they come to realize the misunderstanding. The problem is not their language: the host heard the same words as the caller and vice versa. It is in the assumptions they use to make sense of that language. Language only works when both speakers share the same assumptions, when there is common ground. When that common ground breaks down, language fails too.

Failure to listen

Now to the point of this blog. I recently had an experience similar to that of the radio host and caller, but one without a happy conclusion. I got into a heated discussion about whether science should be accessible. The initial misunderstanding was completely my fault: I misread and thought the question was whether the language of science should be accessible to everyone, whereas the point was accessibility in terms of free access for everyone. We continued discussing, clearly not getting anywhere, and I just could not understand why.

When I finally realised my failure, it was too late. I tried to show we agreed, that I do have to make my research accessible, but my co-participant made sense of my explanation based on what he perceived as my unwillingness to listen. My words did not mean to him what they meant to me, and so the discussion ended with him thinking me an arrogant academic. Of course, had I been taking the position he thought I was taking – that science should not be accessible to non-scientists – that would have been totally justified. The assumptions he had built about me shaped his understanding of my language. What happened between us is precisely what Schegloff describes: when a misunderstanding leads to an argument, we do not always get a chance to rectify it, and we go our separate ways angry when we really should not be.

We have likely all been in this position multiple times in our lives without even realising it. We make sense of the world based on what we think we know. In other words, our assumptions shape how we see and understand the world. If there is a discrepancy between our assumptions and the world, our first instinct is not to update our assumptions. That is not a bad thing: we should not change our minds on a whim. There are times when we have to fix a discrepancy, or just have to accept that others see the world differently. However, had I been less busy trying to argue my own position and instead tried to understand my co-participant, we would likely not have had an argument. Here, as is so often the case, the argument started with a failure to listen. And no amount of language was enough to put that right.

Revisiting knowledge translation tools: academic pay-per-view

12/27/2019

In a recent blog I took issue with some knowledge translation tools that journals now offer, such as TrendMD. I took these to be nothing more than paying for visibility and citations, a new way for publishers to make money and generate shareholder value while researchers did the actual work using taxpayer funding. They are likely to increase the inequality in academia between the Haves and the Have Nots, those who can afford to pay for visibility and those who cannot. However, since then the people behind some of these tools have responded to these complaints, and it is only fair to take their points into account.

Getting noticed

The main problem for any academic is getting their work noticed by other academics. We can do great work, but if nobody reads it or uses it, then it just represents wasted time, effort, and money. And getting noticed is not always easy. Many of us lack access to the right academic networks, which means fellow researchers may ignore our work: there is only so much they can read, and they prioritise work that is at least somewhat familiar. Journal editors may even go so far as to reject manuscripts from people they do not know. They are not bad or lazy editors, but journals often have limited space and editors have to be selective.

From this perspective, knowledge translation tools make a lot of sense. If you can pay a small amount to guarantee, or at least increase the chance, that your work will be read by your peers, then that's money worth spending. A trial by TrendMD of 3,200 studies shows that cross-promotion of research can generate 50% more citations in a year, and these effects were particularly strong for the Health and Medical Sciences. If we take citations as a valid measure of impact, then that's a good result, particularly if your work would normally not even be read, let alone cited. And once your work is out there, so is your name.

The flip side is that promoting your work is not necessarily cheap. A TrendMD campaign can cost around US$200, which for an established academic at a prestigious university need not be a whole lot, but which for a researcher struggling at a small university, or in a country where salaries and funding are a fraction of what they are at universities like mine, may represent a significant sum of money. Even a small campaign, like promotional tweets, can cost a lot of money. So from this perspective, it would seem that these tools are likely to exacerbate the problem of inequality, the Matthew Effect.

Nothing is free

But visibility in any shape or form generally costs money. The most familiar way academics share work is at conferences, but these are notoriously expensive. Registration fees for larger conferences are often hundreds of dollars/pounds/euros. They are also held in major Western cities, like San Francisco or Zurich, which means significant travel and accommodation expenses. Consequently, these conferences are beyond the reach of many scholars. Even I often cannot attend, and the last time I went to a large conference I had to pay half the costs myself, skipping my summer vacation.

Instead of seeing knowledge translation tools as an additional challenge to equality, TrendMD argues that they are in fact an opportunity: US$200 is a lot of money, but it is significantly less than the US$1,000 or more that you would have to spend to present at a major conference. And getting noticed at a conference takes work as well. If there are presentations parallel to yours by famous speakers, you might just be speaking to a nearly empty room. I've experienced this too: in the second year of my PhD I talked to a room of four people, while a colleague was talking to well over forty.

Conferences

I concede this is a good point. In fact, it is an issue I've been raising in my field for a few years. Conferences in my field, like the International Conference for Conversation Analysis, are held only once every few years and require a massive investment: traveling to Los Angeles or Brisbane, paying the fees, etc. Moreover, much of what goes on at these conferences may be completely uninteresting: interdisciplinarity is great, but a massive conference like the International Pragmatics Conference, where only 5-10% of the presentations may be of interest, seems a bad investment. Yet attending these conferences is highly recommended if you want to be part of the community, and many cannot afford it.

One solution, I think, is to move away from these massive conferences and focus on regular, small-scale conferences. The linguistics community in The Netherlands has a range of small conferences that are cheap and highly relevant to all. We are working to expand this to an international level with the first European Conference on Conversation Analysis, a small conference aimed at being accessible to PhD students and early- and mid-career researchers. Similar conferences could be organised throughout the world. There would then still be a use for larger conferences where the global community can get together, but their import would be significantly less. Your success as an academic would depend less on the funding you have available – or so I hope.

Knowledge translation tools may be part of the solution. They cannot replace conferences, since conferences are about more than sharing work. Being part of a community means more than just having your work read and cited. It means meeting and talking to people: most of my impact I've generated by building my network in this way. But that does not mean these tools can't be useful.

Socialism

All these fixes are no true solutions. They are band-aids for an inherently unfair system. Academia is not a meritocracy, as much as we may want it to be. Money will keep playing a significant part in success. Open Access is a great move forward in making research available, but it is expensive and it has facilitated the rise of predatory journals.

And there are publishers that are determined to prevent the rise of open access, because it threatens their highly successful business model. But that does not, of course, mean all publishers are greedy. As JMIR Publications rightly pointed out to me on Twitter: publishing is not free, and some publishers do make an effort to make science open. Not all publishers are getting into fights with major universities the way Elsevier is.

If these publishers want to help, then we need to make sure that it's not only the Haves that can use these tools. I can pay to fast-track an article because my line managers bring in a lot of funding, which in turn increases my chances in future grant applications, but many others cannot. The price for people like me may thus have to go up so the price for others can come down, just as with Article Processing Charges. In the end we may need a socialist system for academia, in which the Haves pay not just for themselves but also for the Have Nots. Although whether socialism can work within capitalism...