## Sunday, November 15, 2015

### Why I chose to stay on as a subject editor for the RIO Journal

Two months ago I wrote about my decision to tentatively agree to be a subject editor for the RIO Journal.

The Price (APC)
The APC for a traditionally peer-reviewed research article in the RIO Journal is €750 (~\$850). This is significantly more than what I usually pay at PeerJ (~\$300), but only a little higher than the \$695 per-article price PeerJ recently announced. The APC is significantly lower than PLoS ONE's (which recently increased to \$1450, and the no-questions-asked waiver appears to be gone), and lower than SpringerPlus's (\$1085) and F1000Research's (\$1000). RIO Journal is thus the second-cheapest OA journal I know of.  As such, I would recommend RIO Journal to people who publish non-bio or review papers. (Well, right now I would recommend Royal Society Open Science, since they are waiving their APCs for a while.)
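For concreteness, here is a quick sketch in Python ranking the APCs quoted above (the numbers are the approximate USD figures from this post, late 2015; the RIO figure is the ~\$850 conversion of €750):

```python
# Approximate APCs in USD, as quoted in this post (late 2015).
apcs = {
    "PeerJ (typical cost to me)": 300,
    "RIO Journal (research article, ~EUR 750)": 850,
    "F1000Research": 1000,
    "SpringerPlus": 1085,
    "PLoS ONE": 1450,
}

# Rank the journals from cheapest to most expensive.
ranked = sorted(apcs.items(), key=lambda kv: kv[1])
for journal, price in ranked:
    print(f"${price:>5}  {journal}")

# RIO Journal comes out second-cheapest among these options.
```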

Getting "publishing-credit" for my research proposals and other research output
It turns out RIO Journal does not offer traditional peer review for research proposals, so posting a proposal there is not very different from posting on a pre-print server with a comment section, such as PeerJ PrePrints.  The main difference is that it would be typeset at the RIO Journal, but at a substantial cost (€190/€650 depending on length).  No thanks, I'll go with PeerJ PrePrints or arXiv or figshare, or just post the PDF on Google Docs.

An interesting alternative
Shortly after I wrote the first post on the RIO Journal, Tim Gowers announced an arXiv overlay journal called Discrete Analysis.  The main difference from a traditional OA journal is that the papers are not typeset (and hosted on arXiv).  The APC is \$10 (which is waived for the foreseeable future). I actually think this is the way forward for scientific publishing long term. Conclusion If I write a paper that is outside the scope of PeerJ, and if Royal Society Open Science starts charging, then I might submit to RIO Journal based on the current APCs. This would also be my advice to my colleagues should they ask about OA publishing options. Thus, I'll stay on for now as subject editor. This work is licensed under a Creative Commons Attribution 4.0 ### OpenCon2015 shows how to make a conference open I am currently following OpenCon2015 online and really enjoying the experience. There are two main components: A YouTube live feed and a Twitter feed (#opencon). The Twitter feed and the high number of live Tweets is really what makes this work! You not only watch the talk about you also "hear" the thoughts of the audience (both the "live" and "feed" audience) and you can interact with these people live (and follow up on discussions afterwards). This makes you really feel like you're there and not sitting on the couch in your living room. To make this work in practice it appears that you need 1. A camera and someone operating it (zooming/panning) 2. A sound system linked to the camera, a microphone for the speaker and 1-2 rowing mikes for questions. 3. A camera/computer interface for the live feed and reasonably fast internet 4. Some basic html skills to make the live feed page, i.e. embed the YouTube feed and Twitter feed side-by-side. (I found it quite important to be able to view the two simultaneously). 5. Lots of people live Tweeting I really hope that these talks also will be available for later viewing! 
Anyway, if more scientific meetings and conferences would adopt this model it would greatly improve their impact and further science as a whole. Also, for smaller meetings the streaming could probably be done with a smartphone and Periscope.

## Saturday, October 17, 2015

### Copying multiple files using scp

Perhaps the shortest blog post ever, but it took me a while to find the command. I know I'll need it again, so here it is:

`scp -v -r user@host.name:/path/folder folder`

This command creates a folder named "folder" on the local machine and copies the content of "/path/folder" on host.name into it.

## Saturday, October 3, 2015

### PeerJ vs F1000Research

Update 2015.10.05: Correction based on the comment by +Eva Amsen: F1000Research editors do chase down reviewers to help ensure reviews. So the main real difference between F1000Research and PeerJ appears to be the price - assuming PeerJ authors post a pre-print.

This post is a comment I left on Michael Eisen's post on the Mission Bay Manifesto on Science Publishing.

A purely practical comment about point 5 in general and the F1000Research price in particular. My main point is that PeerJ offers better service at lower cost (and I am not affiliated with PeerJ in any way). Let's take my latest paper, which just got accepted in PeerJ, and contrast it to how it would have worked at F1000Research:

1. I submitted my draft to PeerJ PrePrints, who made it available online within a day for free. It showed up on Google Scholar about a week later. F1000Research would take about a week and cost \$1000, as it was >2500 words.  On the other hand, at this point it is typeset.

2. I solicit reviews on social media and by emailing select experts.  There is a commenting section on PeerJ PrePrints where these reviews can be added.  I got some suggestions by email but no one added comments for this particular paper.

From what I can tell, the idea is much the same at F1000Research.

3. I revise my manuscript and put a new version on PeerJ PrePrints with another plea for comments/reviews.  Then I submit to PeerJ.  PeerJ finds 2 reviewers for me, typesets the manuscript (after minor corrections in this case), publishes the reviews, provides a comment section for further review, and gets it indexed, for \$298 (in this case). Again, there is a comment section where people can continue to review the manuscript, as well as the reviewers' comments, which I chose to make public.

So, from where I stand, I pay F1000Research \$1000 extra for guaranteed and immediate typesetting of a manuscript which may not get reviewed, while I pay PeerJ \$300 for guaranteed reviews of a manuscript which may not get typeset (if it is rejected). I couldn't care less about the typesetting. When I deposit my preprint I consider my work published - and I can do that for free. The remaining steps are taken mainly to be able to add it to my CV under "Peer Reviewed Publications", with additional indexing as a nice bonus. As Gowers has shown, if you remove the typesetting this can be done for \$10/paper.

## Sunday, September 20, 2015

### Surface tension and the non-polar solvation entropy

I made a stupid sign mistake in one of my video lectures and have spent part of the weekend sorting things out in my mind.  So here is a note to self while it is still fresh in my mind.

The non-polar free energy of solvation can be written as
$$\Delta G_{\text{np-solv}} = \gamma_{\text{np}} SASA$$
where $SASA$ is the solvent accessible surface area.  The argument is that one of the main contributions to $\Delta G_{\text{np-solv}}$ is the energy required to create the molecular cavity in the solvent, which, for macroscopic objects, is a function of the surface tension of the liquid $\gamma$ and the surface area of the cavity.
$$\Delta G_{\text{np-solv}} \propto \gamma SASA$$
$\gamma$ is positive for water so $\Delta G_{\text{np-solv}}$ is positive in water.
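As a quick numerical illustration of the linear SASA model (the numbers here are hypothetical and not from this post; $\gamma_{\text{np}} \approx 5$ cal mol$^{-1}$ Å$^{-2}$ is a coefficient commonly used in implicit-solvent models):

```python
# Nonpolar solvation free energy from the linear SASA model:
#   dG_np = gamma_np * SASA
gamma_np = 0.005   # kcal / (mol * A^2), a commonly used coefficient
sasa = 1500.0      # A^2, hypothetical cavity surface area

dG_np = gamma_np * sasa  # kcal/mol
print(f"dG_np = {dG_np:.1f} kcal/mol")  # prints: dG_np = 7.5 kcal/mol

# gamma_np > 0, so dG_np > 0: cavity formation in water costs free energy.
```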

So far, so good.  But this simple picture fails when considering the solvation entropy
$$\Delta S_{\text{np-solv}} = - \left( \frac{\partial \gamma_{\text{np}}}{\partial T} \right) SASA$$
For water the bulk surface tension decreases with increasing temperature, as you would expect, which suggests that $\Delta S_{\text{np-solv}}$ is positive, when in fact it is observed to be negative.
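To make the sign bookkeeping explicit (the numerical slope is the approximate room-temperature value for bulk water, not taken from this post):
$$\frac{\partial \gamma}{\partial T} \approx -0.15 \text{ mN m}^{-1}\text{K}^{-1} < 0 \quad \Rightarrow \quad \Delta S_{\text{np-solv}} = - \left( \frac{\partial \gamma_{\text{np}}}{\partial T} \right) SASA > 0 \quad \text{if } \frac{\partial \gamma_{\text{np}}}{\partial T} \approx \frac{\partial \gamma}{\partial T}$$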

So if $\gamma_{\text{np}}$ has anything to do with $\gamma$, this would imply that the surface tension associated with molecular-sized cavities increases with temperature.  It is not clear why that would be so, and this, in part, has led Graziano to argue that $\gamma_{\text{np}}$ effectively has nothing to do with $\gamma$ but is a strictly empirical parameter.

A little more detail that ultimately doesn't shed any more light
$\gamma_{\text{np}}$ is also positive but not equal to $\gamma$. One reason is that $\Delta G_{\text{np-solv}}$ also contains contributions from repulsion and dispersion interactions with the solute.  However, if one computes $\Delta G_{\text{np-solv}}$ from hard-sphere simulations, the corresponding $\gamma_{\text{np}}$ value still does not match the $\gamma$ value for bulk water.

Tolman has argued that the surface tension depends on the curvature of the surface and suggested the following approximation
$$\gamma (R) = \gamma \left( 1 - \frac{2\delta}{R} \right)$$
where $R$ is the cavity radius and $\delta$ is a parameter called the Tolman length.  When $R < 2\delta$
$$\frac{\partial \gamma (R)}{\partial T} = \left( \frac{\partial \gamma}{\partial T} \right) \left( 1 - \frac{2\delta}{R} \right)$$
will indeed be positive, but only when $\Delta G_{\text{np-solv}}$ is negative (since $\gamma (R) < 0$ for $R < 2\delta$).  What is observed is a positive $\Delta G_{\text{np-solv}}$ and a negative $\Delta S_{\text{np-solv}}$.
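Spelling out the sign analysis for $R < 2\delta$ (assuming a temperature-independent $\delta$, as in the simple Tolman picture):
$$R < 2\delta \quad \Rightarrow \quad \left( 1 - \frac{2\delta}{R} \right) < 0$$
$$\frac{\partial \gamma (R)}{\partial T} = \underbrace{\left( \frac{\partial \gamma}{\partial T} \right)}_{<0} \underbrace{\left( 1 - \frac{2\delta}{R} \right)}_{<0} > 0 \quad \Rightarrow \quad \Delta S_{\text{np-solv}} = - \left( \frac{\partial \gamma (R)}{\partial T} \right) SASA < 0$$
$$\gamma (R) = \gamma \underbrace{\left( 1 - \frac{2\delta}{R} \right)}_{<0} < 0 \quad \Rightarrow \quad \Delta G_{\text{np-solv}} = \gamma (R) \, SASA < 0$$
So the Tolman correction with $R < 2\delta$ fixes the entropy sign but simultaneously forces the free energy negative, which is the contradiction with experiment.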

Ashbaugh has pointed out that a temperature-dependent $\delta$ solves this problem, but Graziano fired back that since there is no analytical form for $\delta$, $\frac{\partial \delta}{\partial T}$ is just another temperature-dependent parameter, and you might as well use $\frac{\partial \gamma_{\text{np}}}{\partial T}$ as the parameter (I am paraphrasing here).

## Saturday, September 19, 2015

### ProCS15 paper: reviews are in

2015.10.2 update: Our rebuttal can be found here.  The paper is now accepted.

The reviews of the ProCS15 paper we submitted on August 25 arrived last evening: 25 days to first decision. The verdict was "minor revisions". The editor was Freddie Salsbury, Jr (who also handled our very first PeerJ paper), and both reviewers chose to sign their reviews.  Another very pleasant publishing experience with PeerJ.

Both reviewers have some minor corrections, and the second reviewer raises a point of skepticism about QM-based vs empirical predictors. A discussion addressing this would likely be of benefit to the field.

Reviewer 1 (Xiao He)
This manuscript is of great importance and I totally support its publication in PeerJ. The authors present an excellent and accurate chemical shift prediction program (ProCS15) based on millions of DFT calculations on simplified models. ProCS15 has extended the capability of previous ProCS program, which predicts the backbone amide proton chemical shift, to fast estimation of chemical shifts of backbone and C beta atoms in large proteins. The accuracies of chemical shifts on two proteins (namely, Ubiquitin and GB3) predicted by ProCS15 are very close to the results from fragment-based DFT calculations by Zhu et al., and Exner and co-workers. Nevertheless, the computational cost of ProCS15 is within a second. This program will be widely used in the NMR community. I only have a few minor points.

1) In the Introduction section, “RMSD observed for QM-based chemical shift predictions may, at least in part, be due to relatively small errors in the protein structures used for the predictions, and not a deficiency in the underlying method.” I agree with the first half of the statement, however, the limitation of current density functionals also contributes to the discrepancy between experiment and DFT calculations, especially for the 15N chemical shift prediction.

2) The first AF-QM/MM work is highly recommended to be cited in the paper,
He X., Wang B. and Merz K.M., Protein NMR Chemical Shift Calculations Based on the Automated Fragmentation QM/MM Approach. J. Phys. Chem. B 113, 10380 (2009)
Reviewer 2 (Dawei Li)
This work is a direct extension of the authors' previous work on quantum-based protein chemical shift calculation. The performance is comparable to other quantum-based predictors but is worse than current empirical predictors. Because of this, I am still skeptical about all quantum-based predictors. Without solid cross-validation, it is very hard to argue that quantum predictors can capture subtle effects better than empirical predictors. It is true they respond more sensitively to minor structural changes, but not necessarily in a correct way. On the other hand, it is very useful for the whole community to have more options that are different from previous ones. (Note that predictions from most empirical predictors are highly correlated, i.e., it won't provide more information to switch from one empirical predictor to another.) In this context, this work should be published.

It is nice that the prediction performance can be improved a lot if applied to more realistic NMR-derived ensembles. This is expected because the experimental chemical shift of a given nucleus reflects the Boltzmann-weighted average of the 'instantaneous' chemical shifts of a large number of conformational substates that interconvert on the millisecond timescale or faster. This behavior has been discussed many times in the literature. All Ubiquitin NMR structures cited in this work are generated specifically to be a more realistic representation of the protein ensemble in solution, except 1D3Z. 1D3Z is a traditional NMR structure model, whose NMR conformer "bundle" should not be confused with a dynamic ensemble representation of the protein. In these types of NMR models, the spread of atomic positions merely provides information about the uncertainties of the atomic positions with respect to the average structure and has no direct physical meaning. The authors may need to provide more comments on this in their last section titled "Comparison to experimental chemical shifts using NMR-derived ensembles".

## Saturday, September 5, 2015

### Why I chose to become a subject editor for the RIO Journal

I agreed to become a subject editor at a new journal called the Research Ideas and Outcomes (RIO) Journal. When Scientific Reports asked me, I declined. Here are some of the reasons why I said yes to RIO Journal, in rapidly descending order of importance.

1. The world needs a low-cost alternative to PLoS ONE*
Many people say things like "I couldn't afford to publish all my papers OA at \$1350/paper", and so they publish none as OA. While PLoS ONE offers a no-questions-asked full or partial fee waiver, most people feel funny about asking for it (not me, though). PeerJ and PeerJ Computer Science offer very cost-effective alternatives to PLoS ONE for bio- and computer science-related papers. For example, on average a PeerJ paper costs me about \$200-300. But what about other areas? I was assured that the cost of publishing in RIO Journal would be comparable to PeerJ.  Should this prove not to be the case (the pricing is still a bit up in the air), then I'll resign as subject editor.

(*note that this implies PLoS ONE-like review criteria and use of the CC-BY license)

2. I like the idea of getting "publishing-credit" for my research proposals and other research output
Roughly speaking, for every proposal I write, I write one paper less. With the current ~10% success rate I now write more proposals and, hence, fewer papers. I would like to change that, because my productivity is judged in large part by my production of peer-reviewed papers, and RIO Journal looks like a way to do this.

There are plenty of places where you can share your proposals (I have used figshare, which even gives you a DOI), but if I can get them peer reviewed (what RIO Journal calls "validated") at RIO Journal, then I can list them on my publication list and get "credit".  If RIO Journal can deliver this for \$200-300, count me in.

3. All the other stuff
A. The manuscript is visible upon submission, i.e. you "automatically post your pre-print".
B. The reviews are made public and are assigned DOIs.
C. Commenting is possible.
D. The people behind the journal are doing this to improve science rather than to make money.

All these things are very nice, but I am not willing to pay extra for them.