24 September 2017

Paying to publish and Poynder

Richard Poynder and Michael Eisen got into it on Twitter this weekend over open access publishing. Poynder wrote:


My view is that PLOS legitimised a deeply-flawed business model: pay-to-publish.

Hm. The problem is that many journals used “pay to publish” before PLOS journals came along. They were called “page charges.” You can still find many journals with page charges that are not open access. Cofactor has a list here.

These seem to indicate that asking scientists to bear some of the cost of publication is not inherently problematic. At least, I certainly don’t recall any serious discussion about them as deeply flawed. There probably should have been. But people accepted page charges as a normal, routine part of some corners of academic publishing. Saying PLOS legitimized that model is questionable.

PLOS ONE revolutionized academic publishing. But what was revolutionary was its editorial policy of not screening for “importance.” That led to it publishing a lot of papers and generating a lot of money. It was through that combination that PLOS ONE paved the way for many imitators, including bad journals (documented in Stinging the Predators).

To me, the bigger problem is that “pay to publish” is very often equated – wrongly – with “open access.” The business model used to support publishing is not closely related to whether people can freely read the paper.

External links

Journals that charge authors (and not for open access publication)

22 September 2017

When two lines of research collide

It’s so nice to have two new papers drop in short succession! I had one come out in Journal of Coastal Research last week, and another paper drops today in PeerJ!

A couple of years ago, I posted this picture to try to explain how I ended up with papers strewn across multiple research fields.


Little did I know then that a couple of those lines of research were destined to collide:


This paper started, as several of my papers have, with a chance observation. I was working with a summer undergraduate student on a project related to my ongoing effort to understand the basic biology of the sand crab Lepidopa benedicti (reviewed in yesterday’s post).

I looked under the microscope at a sand crab brain we were staining, and thought, “Hey, I recognize that!” It was a larval tapeworm. I’d coauthored two papers about how they infect the nervous system of local white shrimp (Carreon et al. 2011, Carreon and Faulkes 2014).

I had already co-authored a paper on parasitic nematodes in sand crabs (Joseph and Faulkes 2014). But when we did the initial screen for whether there were any parasites in this species, we missed the tapeworm larvae entirely! Even though we had spent a lot of time looking at them in shrimp, we did not notice them.

Once I recognized that there was this familiar parasite in sand crabs, it was off to the races. I knew how to visualize the parasite from the “tapeworm in shrimp” papers. I knew behaviour tests we could do from the “nematodes in sand crabs” paper. This project was, to me, very low-hanging fruit that I was confident could yield a paper quite quickly.

But it became so much cooler than I ever expected as the data started rolling in. I had both sand crabs and mole crabs available, so I checked both for tapeworms. It became obvious quickly that the infection patterns in sand crabs and mole crabs were very different. I tweeted out a few graphs while I was collecting the data:


You don’t get differences that obvious that early that often. And it held up! And it was consistent with something else in my archive...

I had some unpublished data from the nematode project. My former student, Meera, had searched for those nematodes in mole crabs. We couldn’t find any. That result was okay for a conference poster at the 2014 parasitology meeting, but on its own was just an observation and probably not publishable.

But having two parasites show the same infection pattern in two species – one species heavily infected, the other one practically uninfected – now that was much more interesting.

The paper came together, as expected, pretty quickly. I submitted it to PeerJ. I’ve published with them before, and I was recently reminded how much I like their editorial process. They truly did build the better mousetrap. They are prompt but thorough. I still think PeerJ’s submission process for figures is far more fiddly than it needs to be, even though I realize why it is that way.

I also wanted to milk my PeerJ lifetime membership more. I got it when it was $99 per author for life. With two papers, buying that membership when I did has probably saved me thousands of dollars in article processing fees.

One thing that makes me happy about this pair of papers that has just come out (this and the phenology one) is that I genuinely feel that I have made progress in understanding the basic biology of these sand crabs. Yes, albuneid sand crabs are obscure little critters that few other people care about.

But a lot of papers feel like you’re mostly filling in details, or are variations on an established theme. It’s very satisfying to have a project where you genuinely feel you are shedding new light on a topic. That’s why I kept doing the sand crab papers.

And I did have a student email me with a question about sand crabs not too long ago, so maybe these papers aren’t just to make me happy. Maybe some other people will find them cool and useful, too.


Related posts

Connections in my scientific career
Staying active in the lab and/or field when you’re the boss
823 days: A tale of parasite publication
Where’s the site for the parasite?
Tracking tiny worms

References

Carreon N, Faulkes Z. 2014. Position of larval tapeworms, Polypocephalus sp., in the ganglia of shrimp, Litopenaeus setiferus. Integrative and Comparative Biology 54(2): 143-148. https://doi.org/10.1093/icb/icu043

Carreon N, Faulkes Z, Fredensborg BL. 2011. Polypocephalus sp. infects the nervous system and increases activity of commercially harvested white shrimp (Litopenaeus setiferus). Journal of Parasitology 97(5): 755-759. https://doi.org/10.1645/GE-2749.1

Faulkes Z. 2017. Filtering out parasites: sand crabs (Lepidopa benedicti) are infected by more parasites than sympatric mole crabs (Emerita benedicti). PeerJ 5: e3852. https://doi.org/10.7717/peerj.3852

Joseph M, Faulkes Z. 2014. Nematodes infect, but do not manipulate digging by, sand crabs, Lepidopa benedicti. Integrative and Comparative Biology 54(2): 101-107. https://doi.org/10.1093/icb/icu064 

21 September 2017

Fiddly bits and increments


You have to be honest about your papers. I am happy with my latest paper, for several reasons.

  • It probably has the longest period of data collection of any of my papers (five years). 
  • Part of it was crowdfunded. 
  • Part of it was first published in a tweet. 
  • It’s open access.

But I admit this paper is a little scrappy.

My newest sand crab paper continues a project that started because an REU student, Jessica Murph, wanted to do a field project. Jessica collected about a year’s worth of data from the field. I continued for a second year because I didn’t think it would be publishable with only one year of data. It took a long time (don’t get me started), but we got that paper published (Murph and Faulkes 2013).

But even after two years of data gave us a paper, I just kept going out to the field every month. I didn’t have any super strong reason to do so. I needed sand crabs for other projects (like Joseph and Faulkes 2014), but I didn’t need to keep records of size and sex and number per transect of animals I was bringing back to the lab. But I did anyway.

One cool thing that happened while I did so was that I found a new species for the area – Lepidopa websteri – in 2012. That turned into a little paper of its own (Faulkes 2014). But a couple of years later, I found another specimen of this species. And then a third. While range extensions are an accepted thing in describing the distribution of a species, confirmations saying, “Yes, it’s still here” are not enough to publish a paper. Even when they are notoriously hard beasties to find.


Later, I found an orange sand crab. I’d co-authored a paper (Nasir and Faulkes 2011) saying that they were all grey or white, so that was a neat little wrinkle on the colour story. I found a second orange one when I was curating Real Scientists, and tweeted that out. Thus, a tweet was the first official “publication” of a new colour morph for Lepidopa benedicti! But I only had a couple of individuals, which was, again, not enough to publish a paper.

I did have a few ideas percolating in the back of my mind. I was interested in comparing the local sand crab population with the Atlantic population, and ran a successful crowdfunding campaign in 2012 to do so. (If you weren’t around for my crowdfunding campaigns, those were a lot of fun.)

I collected sand crabs in Florida, but the number of animals I found (three) was – again – not enough to hold up a paper on its own.

Are you seeing a pattern here yet?

Meanwhile, the basic data was slowly piling up, and I was getting a sharper and sharper picture of what this local population of sand crabs was doing month in, month out. Things that I thought were bad luck when I started (like not finding any animals for months at a time) turned out to be part of a pretty predictable pattern. But that wasn’t a new finding; it was just a refinement of a pattern I’d published in the first paper (Murph and Faulkes 2013). An incremental improvement in understanding seasonal abundance was probably not enough for a paper.

The one finding that was genuinely new, and that made me think another paper was viable, was figuring out the reproductive cycle of the sand crabs. In the first two years of data (Murph and Faulkes 2013), we had no evidence of these animals reproducing at my field site at all. Now I know that while reproductive females are hard to find, they are there. I know when they appear (summer), and I know when the little ones appear (September / October).

That’s why I say this paper is a little scrappy. It includes a lot of fiddly bits and bobs that would not be enough to stand as independent papers. But I wanted to get them in the scientific record somehow. So I used one finding, the annual reproductive cycle, as a sort of tentpole to hold up a few others.

After experimenting with posting a preprint that contained a lot of these data, I settled down to the job of trying to find a real home for all of it. I like to try to get papers in different journals, and I had been eyeing the Journal of Coastal Research. Some senior biology faculty at UTPA (Frank Judd and Bob Lonard) had published there multiple times. It was even more on my radar after attending the 2013 national conference of the ASBPA on South Padre Island.

The submission date on the paper says received 8 July 2016, but I hit “submit” in March. It was only through a haphazard “Hey, I wonder what’s the deal with my paper?” moment that I thought to log in to the journal’s manuscript review system, where I learned what was going on. The editor wanted me to fix a few things in the manuscript to bring it in line with the journal’s formatting rules before it went out for review. But the submission system never generated an email to me from the editor saying, “Fix these.” Great. There went a few months, wasted.

But I do want to give the journal credit for things they did well. First, they did very intense copyediting, for which I am always grateful. There are always typos and errors and things that need fixing, and I never find them all on my own. And the ones that slip through drive me mad afterwards.

Second, Journal of Coastal Research is not known as an open access journal. There is no mention of open access publishing options in their (extensive) instructions to authors. But I asked about it during the copyediting and production stage, and was delighted to find that they did have an open access option. And the article processing fee was quite competitive.

I am glad to tell you the story of this sand crab paper, for I have another one to tell you about when it drops... tomorrow!

References

Faulkes Z. 2014. A new southern record for a sand crab, Lepidopa websteri Benedict, 1903 (Decapoda, Albuneidae). Crustaceana 87(7): 881-885. https://doi.org/10.1163/15685403-00003326

Faulkes Z. 2017. The phenology of sand crabs, Lepidopa benedicti (Decapoda: Albuneidae). Journal of Coastal Research 33(5): 1095-1101. https://doi.org/10.2112/JCOASTRES-D-16-00125.1

Joseph M, Faulkes Z. 2014. Nematodes infect, but do not manipulate digging by, sand crabs, Lepidopa benedicti. Integrative and Comparative Biology 54(2): 101-107. https://doi.org/10.1093/icb/icu064

Murph JH, Faulkes Z. 2013. Abundance and size of sand crabs, Lepidopa benedicti (Decapoda: Albuneidae), in South Texas. The Southwestern Naturalist 58(4): 431-434. https://doi.org/10.1894/0038-4909-58.4.431

Nasir U, Faulkes Z. 2011. Color polymorphism of sand crabs, Lepidopa benedicti (Decapoda, Albuneidae). Journal of Crustacean Biology 31(2): 240-245. https://doi.org/10.1651/10-3356.1  

Related posts

Back to the start
1,017 days: when publishing the paper takes longer than the project
Way down south: stumbling across a sand crab (Lepidopa websteri)
Amazons and Goliaths, my #SciFund expedition blog, now available!
A pre-print experiment: will anyone notice?
 
External links

Are two years’ data better than one?

Metrics do not mean academia has been “hacked”

Portia Roelofs and Max Gallien argue that the high altmetric score for a dire article defending colonialism is evidence that academia has been “hacked.”

Academic articles are now evaluated according to essentially the same metrics as Buzzfeed posts and Instagram selfies.

The problem with their thesis is that article metrics are not new. They even discuss this:

Indexing platforms like Scopus, Web of Science and Google Scholar record how many other articles or books cite your article. The idea is that if a paper is good, it is worth talking about. The only thing is, citation rankings count positive and negative references equally.

I’m pretty sure that it has been literally decades since I first read articles about using citation data as metrics for article impact. And one of the concerns raised then was about mindless counting of citation data. “But,” people would object, “what if an article got a lot of citations because people were saying how bad it was?”

This is not a hypothetical scenario. Go into Google Scholar, and look at the number of citations for the retracted paper that tried to link vaccination to autism. 2,624 citations. Or the “arsenic life” paper, which has been discredited, though not retracted. 437 citations. By comparison, my most cited papers are in the low tens of citations.

The defence for using citations as metrics was that negative citations rarely happen. (I seem to recall seeing some empirical data backing that, but it’s been a long time.) But it was certainly much harder for people to dig down and find whether citations were positive or negative before so much of scientific publishing moved online. (Yes, I remember those days when journals were only on paper. ‘Cause I be old.)

Indeed, one of the advantages of the Altmetric applet is that it is trivial to go in and see what people are saying. Click on the recorded tweets, and you can see comments like, “So hard to read it without being outraged,” “Apparently not a parody: paper making 'the case for colonialism'. How does a respected journal allow this? Shocked.” and simply, “Seriously?!” Hard to find any tweets saying something like, “This is a thoughtful piece.”

It wouldn’t surprise me if the Altmetric folks are working on code that will pick out the valence of words in tweets about the paper; “excellent” versus “outraged,” say. Some papers are analyzing “mood” in tweets already (e.g., Sugawara and colleagues 2017).
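To make the idea concrete, here is a minimal sketch of that kind of word-valence scoring in Python. The word lists and the scoring rule are invented for illustration; whatever Altmetric may actually build would surely be more sophisticated than this.

```python
# A toy word-valence scorer for tweets about a paper. The word lists
# and scoring rule are invented for illustration; this is not
# Altmetric's method, just the general idea.

POSITIVE = {"excellent", "thoughtful", "important", "elegant", "useful"}
NEGATIVE = {"outraged", "shocked", "dire", "parody", "bad", "wrong"}

def valence(tweet: str) -> int:
    """Crude valence: +1 per positive word, -1 per negative word."""
    words = {w.strip(".,:;!?\"'").lower() for w in tweet.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

tweets = [
    "So hard to read it without being outraged",
    "Apparently not a parody. Shocked.",
    "This is a thoughtful piece.",
]

for t in tweets:
    print(f"{valence(t):+d}  {t}")  # scores: -1, -2, +1
```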

So the issue that Roelofs and Gallien are discussing is not a new kind of issue, although it could be new to the degree it is happening. But Roelofs and Gallien fail to show even a single example of the thing they claim to be such a danger: someone, anyone arguing that this is a good paper, a worthy piece of scholarship, because of its altmetric score.

It is fair to point out that any metric needs to be used thoughtfully. But I am not seeing the chilling effects they are so worried about, or even much potential for them.

A partisan news site can play to outrage and be rewarded for it because of its business model: it generates revenue from ads and clickthroughs. Academic journals are not like such news sites. They are not ad supported. They do not generate revenue from clicks and eyeballs. Academic journals exist in a reputation economy. They rely on reputation for article submissions, peer reviews, and editors.

For me, a bigger problem than journals being rewarded for criticism with high altmetric scores is that journals can be so effectively insulated from criticism by publisher bundling (a.k.a. “big deals”). It’s almost impossible for a library to cancel a subscription to a single journal. (And yes, I know there are arguments for cancelling all subscriptions.)

18 September 2017

A pre-print experiment, continued


Over a year ago, I uploaded a preprint to bioRxiv. When people upload preprints, bioRxiv sensibly adds a disclaimer: “This article is a preprint and has not been peer-reviewed.”

A little over a week ago, the final, paginated version of the paper that arose from the preprint was published. Now, bioRxiv is supposed to update its notice automatically to say, “Now published in (journal name and DOI).”

Perhaps because the final paper was substantially different from the preprint – in particular, the title changed – bioRxiv didn’t catch it. I had to email bioRxiv’s moderators through the contact form to ask them to make the update.

The preprint was making more work for me. Again. It wasn’t a lot of work, I admit, but people advocating preprints often talk about them as though they take effectively zero time. They don’t. You have to pay attention to them to ensure things are being done properly. I want people to cite the final paper when it’s available, not the preprint.

Some journals are talking about using bioRxiv as their submission platform. This would be a good step, because it would remove work duplication.

I’m glad I’ve been through the preprint experience. But I am still not sold on its benefits to me as a routine part of my workflow. It seems all the advantages I might gain from preprints can be had by other means, notably publishing in open access journals with a history of good peer review and quick production times.

Related posts

A pre-print experiment: will anyone notice?

13 September 2017

A look back at the crystal ball

I wrote the bulk of this post five years ago, back in 2012. That week, a paper came out in Nature that claimed to predict... the future! At least, it claimed to predict one part of my academic future, namely, my h-index:


At the time the paper came out, there was an online calculator. It hasn’t succumbed to link rot: it’s still there! I entered in the following values then:

  • Current h-index: It was 8 in 2012 (according to Google Scholar).
  • Number of articles: 24 (I only counted my original technical articles).
  • Years since first article: 20 (then; my first paper was in 1992).
  • Number of distinct journals: 20.
  • Number in “top” journals (a.k.a. the glamour mags): 0.

The program predicted my h-index now, five years later, would be 13. Since I used my Google Scholar data, I went back and checked my Google Scholar profile.
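For anyone who wants to check their own starting number, the h-index itself is simple to compute: it is the largest h such that you have h papers with at least h citations each. Here is a minimal sketch in Python, with citation counts invented for illustration (they happen to give the h-index of 8 I had in 2012):

```python
# Compute an h-index from per-paper citation counts.
# The example counts below are invented for illustration only.

def h_index(citations: list[int]) -> int:
    """Largest h such that h papers each have at least h citations."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break     # every later paper has fewer citations
    return h

print(h_index([30, 22, 15, 12, 11, 10, 9, 8, 6, 3, 2, 1]))  # prints 8
```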


How did the prediction fare? Zooming in...


Holy cow!


Perfect. The prediction was perfect.

It’s a bit spooky.

Now I’m having one of those existential crises of whether my fate is set and whether there is anything I can do about it. As Ahab said in Moby Dick:

Is it I, God, or who, that lifts this arm? But if the great sun move not of himself; but is as an errand-boy in heaven; nor one single star can revolve, but by some invisible power; how then can this one small heart beat; this one small brain think thoughts; unless God does that beating, does that thinking, does that living, and not I.

The 2012 prediction reaches ten years forward, predicting an h-index of 21 in 2022. Of course, my publishing profile has changed in five years. I entered my updated data, and experienced my second existential crisis of the day:


My predicted h-index for 2022 has gone down from what it was five years ago! The new prediction drops my 2022 h-index by 3 points! Argh! It does kind of make you feel like you’re failing at science.

Next, to schedule a post with this graph for 2022. We’ll see how close it is.

Related posts

Gazing into the crystal ball of h-index
Academic astrology

11 September 2017

Chasing pidgeys


In the game Pokémon Go, pidgeys are pokémon that you see everywhere. They’re super common, super small. They are not very powerful. You’d be hard pressed to win any gym battle with them.

When I started playing the game, I quickly stopped collecting them because, well, I had them already. And they seemed useless.

But I was wrong. And now I chase after them all the time.

There are a lot of different resources in Pokémon Go, and one of them is experience. You “level up” as a player once you accumulate a certain number of experience points. One of the ways to get experience points is to evolve pokémon, and you get quite a lot of experience for doing so. It turns out that pidgeys are cheap to evolve. A few other pokémon are just as cheap, but they are much less common, and harder to catch.
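To put rough numbers on “cheap” (these are the 2017 values as I remember them, so treat them as assumptions rather than documented game data): a pidgey took 12 candies to evolve, any evolution was worth 500 experience points, and a Lucky Egg doubled experience for half an hour. A quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope XP from pidgey grinding. All constants are the
# 2017 values as I remember them, and may well be out of date.

CANDY_PER_EVOLVE = 12     # candies needed to evolve one pidgey
XP_PER_EVOLVE = 500       # base XP for any evolution
LUCKY_EGG_MULTIPLIER = 2  # a Lucky Egg doubles XP for 30 minutes

def xp_from_candies(candies: int, lucky_egg: bool = True) -> int:
    """XP earned by spending a stock of pidgey candies on evolutions."""
    evolutions = candies // CANDY_PER_EVOLVE
    multiplier = LUCKY_EGG_MULTIPLIER if lucky_egg else 1
    return evolutions * XP_PER_EVOLVE * multiplier

print(xp_from_candies(60))  # 5 evolutions -> 5,000 XP under a Lucky Egg
```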


Thus, what looks like something trivial and boring turns out to be one of the most reliable ways to advance in the game.

It occurred to me that this is a good metaphor for careers, including academic careers. Much of your success comes from chasing pidgeys: the boring, mundane tasks that you have to do a lot of, and that earn little recognition individually. Grading assignments, getting reviews back to editors, going to meetings, consistently working on papers.

(This post inspired by a student in General Biology who asked me what level I was at in Pokémon Go and whether I’d caught Raikou yet.)

Picture from here.

08 September 2017

The Voynich manuscript and academic writing


The Voynich manuscript is a potentially obsession-creating item. It’s routinely described with phrases like “the world’s most mysterious book.” For more than a century, nobody could read it or make heads or tails of what it was about. Debate raged about whether it was written in code or was just an unreadable hoax.

Until recently.

The book has, apparently, finally yielded to insightful scholarship and has been decoded.

(I)t was more or less clear what the Voynich manuscript is: a reference book of selected remedies lifted from the standard treatises of the medieval period, an instruction manual for the health and wellbeing of the more well to do women in society, which was quite possibly tailored to a single individual.

But what I want to talk about is not the solution, but about writing style and communication.

Here we have a century-old mystery, solved. Here’s how I learned about it: a tweet from Benito Cereno that read:

Holy shit holy shit holy shit holy shit

The Voynich manuscript, decoded

You can feel Benito’s excitement in that tweet! This is so exciting, there’s no time for punctuation marks!

Now read Nicholas Gibbs’s first-hand account of solving this mystery. Here’s the opening paragraph, which does use a good narrative structure, the ABT (and, but, therefore) template (championed by Randy Olson):

For medievalists or anyone with more than a passing interest, the most unusual element of the Voynich manuscript – Beinecke Ms. 408, known to many as “the most mysterious manuscript in the world” – is its handwritten text. (And) Although several of its symbols (especially the ligatures) are recognizable, adopted for the sake of economy by the medieval scribes, the words formed by its neatly grouped characters do not appear to correspond to any known language. (And) It was long believed that the text was a form of code – one which repeated attempts by crypt­o­graphers and linguists failed to penetrate. (And) As someone with long experience of interpreting the Latin inscriptions on classical monuments and the tombs and brasses in English parish churches, I recognized in the Voynich script tell-tale signs of an abbreviated Latin format. But interpretation of such abbreviations depends largely on the context in which they are used. (Therefore) I needed to understand the copious illustrations that accompany the text.

But even with that good narrative structure in place, the opening paragraph shows so many of the problems of this article. Like many academics, Gibbs overloads on facts, with “and, and, and...” before we get to the “but.”

It’s about as devoid of excitement as you can imagine. This is a very careful walk-through of the process. To use another of Randy Olson’s ideas, the “four organs of communication” (pictured; more in Don’t Be Such a Scientist), this description is all head (intellect). There’s nothing from the heart (emotion) or gut (intuition, humour). No emotion, nothing personal.

It’s disappointing.

Gibbs completely bypasses the intensity of interest in the strange book, and how many people have tried to crack it. “Repeated attempts” is such a weak way to describe a century-long series of efforts to crack this thing. It is the typically cautious, couched language that is used in academic writing all the time.

And having solved a problem that so many people have brought so much talent and effort to bear upon, you might expect Gibbs to describe opening a bottle of champagne in celebration. Or maybe a beer. Or a description of the satisfaction he had from his insights – the “Aha!” moments, as it were.
Instead, Gibbs treats it with about as much enthusiasm as a walk from the living room couch to the bathroom.

You want to hear about the feeling of triumph of solving the puzzle, not just the step by step solution to it.

If you want to connect with people, you need the passion. You need the guts. You need the emotions.

Update, 9 September 2017: I’m seeing tweets from people grumbling that the Voynich manuscript probably hasn’t been solved. Nobody that I’ve seen has said why they doubt that the problem is solved. (Update, 10 September 2017: Ah, see here.) Regardless, that doesn’t change the points made here.

Related posts

Connection: Hollywood Storytelling Meets Critical Thinking review
Review: Don’t Be Such a Scientist

External links

Voynich manuscript: the solution
So much for that Voynich manuscript “solution”

Picture from here.

02 September 2017

Thank you, New Hampshire


It’s been a week since Harvey changed everything for Houston, Texas.

And since then, I’ve been waiting. After Katrina hit New Orleans, my university (then The University of Texas Pan American) offered enrollment to students affected by the hurricane. Since Harvey was hitting Texas, I expected that and more.

I emailed our president’s office, reminding them of what happened back in 2005. I got an email back from our Office of Emergency Preparedness, saying:

(UTRGV) has been in communication with University of Texas System... since last week. There are system-wide plans in place in the event student relocation becomes necessary.

I waited to hear what those system-wide plans were. I waited all week. All that happened at my institution was that the Athletics department teamed up with a Texas grocery store to fundraise. Hardly an institution-wide response or plan.

Finally, University of Texas System Chancellor William McRaven wrote this, titled “Texans stop for no storm.”

This annoys me to no end. It feels like McRaven is taking this moment to say, “Look how tough we are,” posturing instead of actually offering concrete plans for help.

On Twitter, the UT System account tweeted a Storify about how institutions were helping people affected by Harvey. And this is nice, but it’s things like student organizations doing fundraising and universities offering counseling services, not institutions offering anything like what a New Hampshire university has done.

Franklin Pierce University will provide free tuition, room and board to up to 20 students for the fall semester.

That’s what I was expecting UTRGV and other UT System universities to do. But no.

Thank you, Franklin Pierce University, for doing for Texas students what Texas universities didn’t.

Related posts

Credit where it’s due

External links

New Hampshire university to take in students after Harvey
Texans stop for no storm

Picture from here.