Lucas Seuren

Revisiting knowledge translation tools: academic pay-per-view

12/27/2019

In a recent blog I took issue with some of the knowledge translation tools that journals now offer, such as TrendMD. I dismissed them as nothing more than paying for visibility and citations: a new way for publishers to make money and generate shareholder value while researchers do the actual work with taxpayer funding. Such tools are likely to increase the inequality in academia between the Haves and the Have-Nots, those who can afford to pay for visibility and those who cannot. However, since then the people behind some of these tools have responded to these complaints, and it is only fair to take their points into account.

Getting noticed

The main challenge for any academic is getting their work noticed by other academics. We can do great work, but if nobody reads or uses it, then it just represents wasted time, effort, and money. But getting noticed is not always easy. Many of us lack access to the right academic networks, which means fellow researchers may ignore our work: there is only so much they can read, and they prioritise work that is at least somewhat familiar. Journal editors may even go so far as to reject manuscripts from people they do not know. They are not bad or lazy editors, but journals often have limited space and editors have to be selective.

From this perspective, knowledge translation tools make a lot of sense. If you can pay a small amount to guarantee, or at least increase, the chance that your work will be read by your peers, then that's money worth spending. A trial by TrendMD of 3200 studies shows that cross-promotion of research can generate 50% more citations in a year, and these effects were particularly strong for the Health and Medical Sciences. If we take citations as a valid measure of impact, then that's a good result, particularly if your work would normally not even be read, let alone cited. And once your work is out there, so is your name.

The flip side is that promoting your work is not necessarily cheap. A TrendMD campaign can cost around US$200, which for an established academic at a prestigious university need not be a whole lot. But for a researcher struggling at a small university, or in a country where salaries and funding are a fraction of what they are at universities like mine, it may represent a significant sum of money. Even a small campaign, like promotional tweets, can represent a lot of money. From this perspective, these tools would seem likely to exacerbate the problem of inequality, the Matthew Effect.

Nothing is free

But visibility in any shape or form generally costs money. The most familiar way academics share work is at conferences, but these are notoriously expensive. Registration fees for larger conferences are often hundreds of dollars/pounds/euros. They are also held in major Western cities, like San Francisco or Zurich, which means significant travel and accommodation expenses. Consequently these conferences are beyond the reach of many scholars. Even I often cannot attend, and the last time I went to a large conference I had to pay half the costs myself, skipping my summer vacation.

Instead of seeing knowledge translation tools as an additional challenge to equality, TrendMD argues that they are in fact an opportunity: US$200 is a lot of money, but it is significantly less than the US$1000 or more that you would have to spend to present at a major conference. And getting noticed at a conference takes work as well. If there are presentations parallel to yours by famous speakers, you might just be speaking to a nearly empty room. I've experienced this as well: in the second year of my PhD I talked to a room of four people, while a colleague was talking to well over forty.

Conferences

I concede this is a good point. In fact, it is an issue I have been raising in my field for a few years. Conferences in my field, like the International Conference on Conversation Analysis, are held only once every few years and require a massive investment: travelling to Los Angeles or Brisbane, paying the fees, etc. Moreover, much of what goes on at these conferences may be completely uninteresting: interdisciplinarity is great, but a massive conference like the International Pragmatics Conference, where only 5-10% of the presentations may be of interest, seems a bad investment. Yet attending these conferences is highly recommended if you want to be part of the community, and many cannot afford it.

One solution, I think, is to move away from these massive conferences and focus on regular, small-scale conferences. The linguistics community in the Netherlands has a range of small conferences that are cheap and highly relevant to all attendees. We are working to expand this on an international level with the first European Conference on Conversation Analysis, a small conference that aims to be accessible to PhD students and early- and mid-career researchers. Similar conferences could be organised throughout the world. There would then still be a use for larger conferences where the global community can get together, but their importance would be significantly diminished. Your success as an academic would depend less on the funding you have available, or so I hope.

Knowledge translation tools may be part of the solution. They cannot replace conferences, since conferences are about more than sharing work. Being part of a community means more than just having your work read and cited. It means meeting and talking to people: I have generated most of my impact by building my network in this way. But that does not mean these tools can't be useful.

Socialism

None of these fixes are true solutions. They are band-aids for an inherently unfair system. Academia is not a meritocracy, as much as we may want it to be. Money will keep playing a significant part in success. Open access is a great move forward in making research available, but it is expensive and it has facilitated the rise of predatory journals.

And there are publishers that are determined to prevent the rise of open access, because it threatens their highly successful business model. But that does not, of course, mean all publishers are greedy. As JMIR Publications rightly pointed out to me on Twitter: publishing is not free and some publishers do make an effort to make science open. Not all publishers are getting into fights with major universities the way Elsevier is.

If they want to help, then we need to make sure that it is not only the Haves who can use these tools. I can pay to fast-track an article, because my line managers bring in a lot of funding, which increases my chances for future grant applications; many others cannot. The price for people like me may thus have to go up so the price for others can come down, the same as with Article Processing Charges. In the end we may need a socialist system for academia in which the Haves pay not just for themselves, but also for the Have-Nots. Although whether Socialism can work with Capitalism...

2019: A year in three failures

12/20/2019

The year is nearly at an end, and so it is time for lists. As much for my own enjoyment as for others to read, I decided to talk a bit about the things that did not go right this year. Although overall my year has been great, I feel it is important to talk about failure. Failures are, after all, part of the road to success, if you let them be. Everybody I know has had to deal with failure. In fact, the most successful academics I know are those who, I think, have failed the most. In line with Elizabeth Day's brilliant podcast How To Fail, I discuss my three main failures of the year.

Unsubmitted fellowship application

My first failure of the year came early. I was planning to submit an application for a Rubicon Fellowship in February. This is a prestigious scheme from the Netherlands Organisation for Scientific Research (NWO), which allows recent PhD graduates to spend up to two years at an international institute of excellence. My dream was to spend two years at the University of California Santa Barbara, doing research on how truth is constructed in and through courtroom interaction, and working with Geoff Raymond and Kevin Whitehead. I spent a good two years working on the proposal, and I got a ton of help writing it from colleagues in the Netherlands, the US, and Oxford. But after all that hard work, I never submitted.

The reason has nothing to do with me or the proposal: I simply (?) could not find a university to support the application. NWO requires that either the host university provides you with employment, or a Dutch institute employs you and then sends you abroad. It was clear early on that UCSB would not offer me a contract, because the Rubicon does not cover overhead costs (seriously NWO, fix this!). Sadly for me, no Dutch institute was willing to support me either, because they were worried about possible additional costs like unemployment: if I did not have a job after the Rubicon, the university would need to pay part of my benefits.

I am still baffled by this, because I have colleagues in the same field who have received a Rubicon, which means that universities do not refuse this on principle. The only reason I can think of is that I was already working at an institute abroad, and so I was not asking for support from my own university, but as an external candidate. It was an education, though. I learned a lot about how to write a grant application, but more importantly, I learned that if you plan to apply for a grant, you have to tick off the finances and bureaucracy well in advance. They are an unseen but essential part of the workings of academia.

Rejected fellowship application

My second failure also concerned a fellowship application. In May and June I spent probably about 150 hours writing a proposal for the National Institute for Health Research (NIHR). It was often a frustrating experience. I discussed a draft with a research support group. The statistician in this group clearly did not like (my?) qualitative research. She said that I had not been trained as a scientist (I completed my first year of undergrad in astrophysics, but who’s keeping score), and that the scheme was about science. My line manager repeatedly gave me in-depth comments, which were amazing, but which also made me feel wholly inadequate. And in the end I had to work well over 60 hours a week to get the proposal in before the deadline and complete my other duties.

Despite all that effort, the review committee rejected it at the first stage. One reviewer thought it was good, although not excellent, and should go to the interview stage, but the other two thought it was not competitive. Partly that was because they felt I did not have a good enough CV yet, as I did not have grants or papers published in medical journals. But mainly they just did not think the proposal was any good.

The rejection came as a disappointment, but not as a shock. I had never written for this audience before, and for all the support I had from my line manager, I was still trying to figure out how a wheel works while inventing one at the same time. Because two reviewers gave detailed comments, I now have a far better understanding of what it takes to write an application that is right for this audience, one that addresses the questions they want answered in a way that is clear to them. So where I had to start from scratch this time, I will have a good idea where to start next time. And that will save me a lot of work and make the experience a lot less frustrating.

Rejected after Revise & Resubmit

My third failure was a collaborative one. A few years ago, I started working on a small project with a colleague. His advisor had pointed out that we had been working on a similar phenomenon and put us in touch. During a conference, we discussed the data and saw that there was potential for a nice piece of collaborative and comparative work. After two years, we submitted the paper, and following peer review the editor told us to revise and resubmit, a positive outcome for a first submission. We set about doing the revisions and after a few months, we felt ready and resubmitted. However, we clearly had not done a good job, because all three reviewers thought the paper still required major work, and so the editor rejected it.

Rejections are a natural part of academia. They happen to all of us. But we rarely see them coming. And to have a paper rejected after making significant revisions feels even worse. Essentially, the reviewers are telling you that you did not listen or do your job properly. Or so it feels. I quickly realised the reviewers were correct. Sometimes, despite your best efforts, you do not get it right. There is no shame in that. Rejection is still frustrating, of course, but the reviewers engaged extensively with our work, which means that we get to profit from their insights and ideas.

Live and learn

All three of these failures are common in academia. As I said at the start, the most successful people I know are also the ones who have dealt with the most rejections. To them, failure is an opportunity. They take the lessons they need from failure, throw away the stuff they cannot use (like reviewers being rude), and try again. “Failure teaches us how to succeed better,” to quote Elizabeth Day.

That is easier said than done, but it is a lesson I am trying to learn. I reworked the failed NIHR application and submitted it to the Wellcome Trust, and I am working with my line manager to submit a similar proposal to a different NIHR funding scheme. I am working with my co-author to rework our paper, and we are more enthusiastic than before. All of these might fail again, but if that happens, at least I will have a few anecdotes for when I write this blog again next year.

Market forces in academia: Paying for visibility and citations

12/17/2019

Academic publishing has been undergoing a significant transition in recent years. From the darkness of paywalls and massive profit margins for publishers, we are moving into the era of open access for all. It seems so logical: research and peer review are conducted by scientists and are funded with public funds, so the fruits of that labour should be available to all. But the transition is coming at an ever-increasing cost. The new low: “knowledge translation tools” that “help you (…) draw citations”. In other words, paying to increase your status.

Higher numbers

To be fair, this is not all due to greedy publishers – although they certainly play their role, but I will come to that in a second. Another problem is the way we evaluate scholarly work. With the sheer number of papers and researchers in the world, we need simple metrics to determine quality – whether for the purposes of job applications or to decide what to read. And citations have become the shorthand for quality. That is justifiable: if your research is cited a lot, it would seem to have significant impact in the field. However, there are (at least) two major issues.

First, that rational argument skips over the practicalities, or more specifically, the privilege underlying citations. If you or your advisors have a good scholarly network, you will have a significantly better chance of getting your work cited. It is the Matthew Effect in its most basic form. It used to be – and actually largely still is – that if you are a man, you are much more likely to be recognised. In fact, you might even get recognition when others (i.e. non-men) did the work. In addition, if you start out your career at the right institute with the right advisor, you will find it far easier to get your work known and thus cited. (I definitely profited from my advisors’ excellent connections.) If you do not have that privilege, you will have a hard time getting cited, and your impact will be perceived to be low.

The second problem is that Journal Impact Factors are used as a shorthand for research quality. The impact factor is based on the number of times the articles in a journal are cited, on average, within the first few years after publication. Now, it should be obvious that the average number of citations says next to nothing about the quality of individual papers. The data are often massively skewed, with a few papers getting most of the citations. If I publish in a journal with an impact factor of 5, that does not mean my work will be of higher quality, or will be cited more frequently, than if I had published it in a journal with an impact factor of 1. Impact factors are slowly losing their status in academia, but definitely not quickly enough.

More money

Clearly both of these problems drive researchers to get more citations. I want to stress this: the goal is not to increase the quality of their research; it is to get others to cite their work more frequently. This is nothing new: it has been going on in academia for decades. But with increased funding constraints and stricter evaluation criteria from governments and funders, the problem has become more urgent. We should therefore not be surprised that publishers are looking to profit. We need citations to keep our jobs and get funding. Publishers are happy to charge us for that, just as they have always been happy to sell our own work back to us. In fact, I am slightly surprised there are no advertising agencies that you can pay to get your work noticed.

So in that sense the main problem is greedy publishers. The transition to open access means they are paid only once for each piece of work. The costs are not actually that high, which is why some of the more ethical journals charge only a few hundred pounds/dollars/euros. Others, however, are happy to charge thousands for the simple business of editing a paper. We are basically paying for the prestige of getting work into a journal with a certain impact factor, and the publishers know this.

You would think it would be in the publishers’ interest to increase their exposure. And it is: the more the papers in their journals are cited, the bigger their impact factor, the more people will want to publish with them, and the bigger their profits. So if anyone should be making an effort to get work noticed, it would be the publishers themselves. This is how non-scholarly books, papers, and journals work: the publisher makes an investment in order to sell more work. Academic publishers think differently. They just see another way to get money from researchers. And so now we get to pay for promotional tweets, for advertisements on other publishers’ websites, or to have our work “indexed” within 24 hours so it will be easier to find.

Positive outlook

Fortunately, not all is doom and gloom. There are great plans to evaluate research and researchers on a more qualitative level instead of with the current quantitative measures, which would definitely be a big help. When impact factors no longer carry prestige, we can be less concerned with citations. That is not to say we should not try to have impact on the field or society, but we can use more honest ways. It would also partly undermine the incentive for journals to charge researchers to increase the exposure of their work. Unfortunately, it is unlikely that citations will become entirely irrelevant.

What this means for the future, I do not know. I recently paid a fee to a journal to speed up peer review; the journal in return pays the reviewers for their work. This seems positive, and in some ways I am very happy about the opportunity. I needed to get an article out there to use as a foundational study for further grant applications: it shows the feasibility of my work. But at 450 pounds, it is a steep fee, and one that is beyond anyone who does not already have funding. So it exacerbates the Matthew Effect: once you are successful, you will have the money to easily further your career. If you are not from a privileged background, or if you do not manage to break through, you will have a harder time climbing the ladder.

The only solution I see is for academics to take back publishing. We need to make the costs as low as possible and prioritise the work of scholars with fewer financial means. Private publishing companies do not work in the interest of academia, but in the interest of their shareholders. I am not saying they should not make money, but profits should flow back into academia through things like scholarships, fellowships, and grants. It is the only way to have a true meritocracy, one in which we assess ideas and not the wallets of the people who present them.
