Laurance et al.'s "Predicting Publication Success" study is fundamentally flawed (maybe).

Diego (above) has not published any papers yet, so he is obviously doomed professionally. 

 

NB: There is an update on this post here.

 

Bill Laurance and colleagues just published a really interesting paper in the October issue of BioScience in which they analyzed the pre- and post-PhD publication records of 182 “academics” (faculty members?) from around the world. There is a very nice write-up of the results and a link to the paper by co-author Corey Bradshaw, of ConservationBytes fame, here. Their conclusion?

The most important determinant of your ‘long’-term (10-year) publication success is how many papers you’ve written by the time you’ve completed your PhD. This effect increases markedly if we take the number of papers you’ve published three years after PhD completion as a predictor.

I read this paper with great enthusiasm and am inclined to believe the result, even if I disagree with some of their take-home messages for academic hiring practices (see below). I also agree with them that 1) publications are the coin of the realm and 2) we should be encouraging students to publish early (and training them to do so). However, I have a few nagging questions about the data and analyses I haven’t yet gotten decent answers to (I wrote Bill and posted at ConservationBytes, but no answer yet). In light of the commentary I’ve seen on Twitter, I’d like to put these questions out there in the hope that they’ll be answered (or at the very least will cause others to pause before fully embracing the conclusions). I sense that some of these issues could be readily addressed with some quick analyses, while others might require more data collection.

Why their analyses might be fundamentally flawed (or not…) or at least not as broadly applicable as they suggest (unless they are).

1) They didn’t sample properly. Well, maybe they did, but we can’t tell because they don’t tell us anything about how the “academics” were selected. Were they selected randomly from national CV databases? By selecting universities and then faculty within a department? The directories of academic societies? How did they choose the countries to include? Were they pooling faculty members, postdocs, and research scientists – all of whom have different career demands and time to devote to publication? (NB: on first read I assumed they meant faculty members, since they “incorporated the effects of postdoctoral productivity” and focused on universities to find focal subjects). Scientometric research is no different from ecological work in that how you collect the data will influence your results. Without knowing how the study subjects were chosen, or even what career tracks were included, we can’t evaluate how general their results truly are.

2) They failed to control for the type of institution where their focal researchers (=faculty) are based. Faculty at different kinds of institutions – major research universities, small liberal arts colleges, government or private research institutes, regional state universities – would not be expected to have the same research productivity (indeed, I’d be surprised if they did). If they collected data from faculty at a diversity of school categories, does the trend hold up irrespective of institution type? I doubt it would.

3) They failed to control for Full-Time Equivalent (FTE). Even at a primarily research institution like mine (an “RU/VH”, according to the Carnegie Classification System), the proportion of FTE devoted to research can vary from 10% to 100%. The correct dependent variable they should have used is the number of publications produced by a faculty member per % of FTE devoted to research. Does the relationship fall apart when correcting for research FTE?

4) They failed to include “country” as a factor in their model. The study includes faculty from around the world, and they suggest the results are broadly applicable, but they fail to back this up with any stats. Most of my academic experience is in two countries – the US and Brazil. These countries have vastly different academic cultures, educational programs, resources, expectations, and motivational structures. It may very well be that pre- and post-PhD publication trends are similar in these vastly different systems, but I’m going to hold off on accepting that the effect is global until they rerun the models with “country” as a factor (a sketch of what such a reanalysis might look like follows this list). As an aside, how many faculty came from each country? Without seeing some sample sizes, it’s tough to evaluate whether the effect might have been driven by just a few countries.
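
To be concrete about what I’m asking for, here is a minimal sketch (in Python, using the statsmodels package) of the kind of reanalysis points 2–4 describe: publication output normalized by research FTE, modeled with institution type and country as categorical factors alongside pre-PhD output. This is not the authors’ analysis – since the actual dataset wasn’t archived, the column names and the toy data below are hypothetical placeholders.

```python
# A sketch of the reanalysis suggested in points 2-4 above.
# NOT the authors' analysis; all column names and the toy data are
# hypothetical placeholders standing in for the unarchived dataset.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical stand-in data: one row per focal academic.
df = pd.DataFrame({
    "pubs_10yr":    [42, 18, 55,  9, 30, 22, 61, 12, 27, 35],   # papers in the 10 yr post-PhD
    "prephd_pubs":  [ 6,  2,  8,  1,  4,  3,  9,  1,  3,  5],   # papers at PhD completion
    "research_fte": [0.8, 0.3, 0.9, 0.2, 0.5, 0.4, 1.0, 0.25, 0.6, 0.7],  # share of FTE for research
    "inst_type":    ["R1", "liberal arts", "R1", "regional", "regional",
                     "liberal arts", "R1", "regional", "R1", "regional"],
    "country":      ["US", "US", "Brazil", "Brazil", "US",
                     "Australia", "Australia", "Brazil", "US", "Brazil"],
})

# Point 3: normalize output by the proportion of FTE devoted to research.
df["pubs_per_fte"] = df["pubs_10yr"] / df["research_fte"]

# Points 2 and 4: does pre-PhD output still predict long-term productivity
# once institution type and country enter the model as categorical effects?
model = smf.ols("pubs_per_fte ~ prephd_pubs + C(inst_type) + C(country)",
                data=df).fit()
print(model.summary())
```

If the coefficient on pre-PhD publications stays large after those terms are in the model, their claim holds up; if institution type, country, or research FTE soak up most of the variance, it doesn’t.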

What do I think this means for their results? My sense is that institutional category and % of FTE devoted to research are probably at least as important as pre-hire publication record, and maybe even more so. I think the effect may well be consistent for faculty in different countries, but the magnitude of the effect will vary geographically. If so, all of these would take a fair amount of the air out of their results. But it may very well be that they can (or even did) test for some of these things, in which case their results stand. Honestly, I don’t care either way, because I think the results would be interesting regardless. I just wish they were as careful in collecting and reporting their data and as thorough in their scientometric analyses as they are in their exceptional ecological ones.

 

PS. Thinking about these issues made me wish they had archived their data and code so I could just go and look up the answers myself. Like this group, who analyzed publication patterns by ecologists with a dataset of over 2000 papers that is freely available at Dryad.

 

PPS. From p. 821:

 If one is comparing different candidates for academic or research jobs, simply tallying the number of early-career publications (at some standardized point in one’s career, such as 3 years post-PhD) appears to be an effective way to identify prospective rising stars.

Oh boy. Let’s leave this one for another day.

 

UPDATE (12/20): Neither Corey nor Bill responded to my emails or posts on Corey’s blog asking about these issues, so I had to go old school and submit a letter to BioScience. It and their reply are in press.
