ABSTRACT We describe a scalable, automated method for measuring the pharyngeal pumping of _Caenorhabditis elegans_ in controlled environments. Our approach enables unbiased measurements over prolonged periods, at high throughput, and in controlled yet dynamically changing feeding environments. The automated analysis compares well with scoring pumping by visual inspection, a common practice in the field. In addition, we observed overall low rates of pharyngeal pumping and long correlation times when food availability was oscillated.
This collaborative document has been created for the panel discussion on “Rotation in massive stars” (FOE 2015), held on Thursday 6/4/2015 in Raleigh. All conference participants have been added to the document and can edit / comment / add figures (just drag & drop) / references and even LaTeX equations if needed (check the help page for more info on how to edit the document). Hopefully this will capture the essential ideas and interactions that emerge during and after the discussion. The document can be forked at any time, so that particular discussions can be taken further and potentially lead to active collaborations.
This post is part of a series called _Is Academia Broken?_ It relates the experiences of Jeff, Authorea’s Community Coordinator, as he weighs the options of pursuing a PhD. Be sure to check out Alberto’s first blog post, on the perils of early career interdisciplinary research, and his second, on the overabundance of PhDs and dearth of academic positions.
In recent coverage of a massive meta-analysis of the Google Scholar archives, the top-ten “elite” journals are compared to “the rest” in several broad disciplines. For papers published from 1995 to 2013, there was a 64% average increase in top-1000-cited papers coming out of non-elite journals (here, “elite” = the ten most-cited journals for a given category; “non-elite” = the rest). Lest you worry these represent the _only_ cited articles in non-elite journals: the total share of citations going to non-elite articles rose from 27% to 47% over the same period. Part of the reason for this sudden shift is digitization. In the paper’s conclusion, the team responsible for Google Scholar (which marked its tenth anniversary in November 2014) state: Now that finding and reading relevant articles in non-elite journals is about as easy as finding and reading articles in elite journals, researchers are increasingly building on and citing work published everywhere. With the introduction of exactingly searchable databases, the playing field is indeed leveling for access and awareness of all tiers of journals, splashy-high-impact or otherwise. This naturally leads to faster and more efficient scientific endeavors. (Imagine getting even closer, accessing new developments and discoveries in near-real-time. If you think the rate of progress in science is dizzying _now_...) Not mentioned, however, is the fact that fields have grown more specialized, and publishers have responded by producing more specialty-specific journals. This may in part account for the increased share of non-elite citations: a groundbreaking article published in a lower-impact specialty journal becomes a necessary citation in many subsequent papers in that and related fields. Another interesting question for future studies is how open access journals measure up in citation rate. It has also been documented that high-impact, elite journals have higher rates of retraction.
Do the high-impact works from non-elite journals show comparable rates of retraction? Given their high impact, many of the same explanations elite journals give for their higher retraction rates should still apply (i.e. increased exposure and thus increased scrutiny). Regardless, it is clear that new considerations must be made, and changes are underway, with respect to academic publications. Hopefully scientists return to their roots of open discourse and dissemination of their data SO WE CAN GET FURTHER, FASTER, TOGETHER.
SO YOU ACTUALLY WANT YOUR RESEARCH READ... Every year in science, tech, and medicine, on the order of 2 MILLION PAPERS are published. That’s a lot of papers. To remain current in their field, physicians must read about 20 PAPERS A DAY. Given the growing “scourge” of cross-disciplinary science and the interconnectivity of life, our world, and everything, 20 papers honestly seems low. How, then, is the average journal article read by only 10 people, and why are only 20% of _cited_ papers actually read? Maybe it has to do with the overextension of researchers (see Alberto’s post above for massive discipline-spanning course lists). Or maybe it has to do with the way papers are presented: they’re long, in archaic formats, and only accessible with a background in the given discipline (and, critically, freedom from paywalls). Why can’t we - scientists, communicators of knowledge, sharers of discoveries - agree to write clearly, concisely, and for broad impact and appeal? Many universities and other research institutions have press offices that interface with the public for just this reason. This is critical, as institutions’ research and resources help attract more funding and, nobly, should be shared with the world. The problem? You, as the person who did the research, probably know it better! And you (hopefully) won’t oversell it!
A recent article in Nature Communications is extremely informative. Like many good studies, it takes assumed fixtures or mainstays of a field (in this case isolated culturing in microbiology), flips them in some way, and arrives at novel observations and conclusions. Bacteria have usually been studied in single culture in rich media or in specific starvation conditions. These studies have contributed to understanding and characterizing their metabolism. However, they coexist in nature with other microorganisms and form consortia in which they interact to build an advanced society that drives key biogeochemical cycles. Briefly, the authors showed co-cultured bacteria (i.e. two different species from the same environment grown together) formed physical connections with each other, allowing ONE SPECIES TO HARNESS THE OTHER’S UNIQUE METABOLIC CHEMISTRY WHEN THE FORMER COULD NOT SURVIVE UNDER THE GIVEN STARVATION CONDITIONS. In turn, the donor species’ growth was elevated compared to growth in isolation, thanks to access to its partner’s own metabolites. The researchers got some great pictures.
_BEYOND MARIE CURIE_ Marie Curie. Maybe Rosalind Franklin. These are two of the main names that come to mind when one thinks “women in science.” The reasons more female contributors to science aren’t a larger part of our collective consciousness are many and unjust and unfounded. Better coverage of these issues abounds, and the tides are _very_ slowly turning, but MANY MAJOR SCIENTIFIC ADVANCES, OFTEN BY WOMEN, ARE STILL NOT WELL-KNOWN. That’s why I wanted to give Dr. Esther Lederberg a mention. She was a microbiologist at the forefront of 20th century discoveries (lambda virus, gene transfer, fertility factor F, etc.) in bacterial genetics that are now ushering in 21st century revolutions in biotechnology. What’s unfortunately not revolutionary, however, was the overshadowing of her career by that of her (ex-)husband, Nobel Prize winner Joshua Lederberg. Besides making major contributions to his Nobel-winning work, she developed innovative tools and methods that allowed better study of the incredibly small. It goes without saying, but lacking the edge these techniques provided, her husband’s laureateship may have been at risk. One of these tools was remarkably simple, but nevertheless incredibly powerful: REPLICA PLATING. A piece of velvet is held taut in the shape of a petri dish; pressing a dish with isolated bacterial colonies onto it transfers an identical pattern of the colonies to the velvet. This creates a “stamp” of the colonies, allowing the re-creation of the same colonies in the same pattern on any type of plate a researcher wants (e.g. with or without a critical nutrient, to see the effect on the bacteria). Then, researchers can test differentially affected colonies and probe what makes them distinct. Later in her career, Lederberg headed the Plasmid Research Center, a now-defunct institute at Stanford.
Here, she oversaw the study, cataloging, and distribution of countless newly discovered bacterial plasmids (circular pieces of DNA) that contained resistance-contributing genes and many others that are now hallmarks of microbiology labs across the world. Beyond the gender bias in science, Esther Lederberg serves as another example of bias: that of a researcher who makes ENORMOUS AND IMPACTFUL CONTRIBUTIONS that don’t get big splashy headlines. That don’t get Nobels (fun fact: she and her husband were the first team to share a microbiology prize, a mere two years before he received the Nobel). That don’t necessarily get you a place in popular memory. Why is this? Tens of thousands of researchers every day must use plasmids and genes she first systematically studied. How can we better ensure tool makers, information sharers, disseminators, and distributors get fair credit? BY SHARING THEIR STORIES BIT BY BIT AND BASE BY BASE.
The peer review process is a pillar of modern research, verifying and validating the ever-increasing output of academia. While the academic community agrees that some process of review is necessary to ensure the quality of published research, not everybody agrees on the best approach. In particular, doubts have been cast on the current peer review process: most journals select and assign one anonymous referee (few journals assign two or more) who is in charge of reviewing the manuscript and recommending it for publication or rejection. The argument is that the current peer review system is becoming inadequate. Here’s an incomplete list of issues:

- Research is increasingly collaborative, complex, and specialized. Thus, it is less likely that one or a few referees will have the necessary expertise (and time) to properly handle many modern articles. Simply put, THE AVERAGE NUMBER OF AUTHORS PER PAPER HAS BEEN STEADILY INCREASING OVER THE LAST FEW DECADES, WHILE THE NUMBER OF REFEREES PER PAPER HAS NOT.
- “Publication pressure” means there is a growing number of papers to referee. This need cannot easily be met, since scholars, who must constantly publish and engage in the “funding race”, HAVE LESS TIME TO DEDICATE TO COMMUNITY SERVICE (in a “single referee” system the review process is very time consuming).
- Given the anonymous nature of peer review, RESEARCHERS WHO VOLUNTEER THEIR VALUABLE TIME AND KNOWLEDGE DON’T GET RECOGNITION for contributing.
- Cases of peer-review scams, mostly from predatory open access publishers, have grown in number over recent years. A number of journals, exploiting the publication-pressure climate, accept and publish articles with LITTLE OR NO PEER REVIEW.
- Similarly, there are reports of fraud in which authors review their own or close friends’ manuscripts to give favorable reviews.
Still writing your own documents? That’s so 3200 BCE! At Authorea, we take digital publishing so seriously, we want to WRITE YOUR PAPER FOR YOU! Partnering with SCIGEN (which we briefly profiled last week), writing, submitting, and disseminating your research will never again require a keyboard. Scholarly research and writing have never been easier. By stitching together a home-brewed soup of technical lingo, a cursory glance at your paper will yield resounding “Oh, hum, yes?”s — we guarantee! Concerned your manuscript won’t get accepted for publication? SCIgen has a proven record: _many_ of its randomly generated manuscripts have made it into “peer reviewed” journals. And don’t worry, this is only horrifying if you aren’t in on it! Best of all, you can use a private article on Authorea to keep unwanted questions and comments out of the equation. Open and honest peer review is, after all, too dangerous an experiment when we have venues ready and capable of accepting your SCIgen-derived piece of utter genius. Happy Writing! ;)
On the left, you’ll see a little clock icon - this opens the article’s HISTORY: a Git-based log of updates and edits authors have applied to the article. This post should hopefully only have one entry as it’s short and typed in one sitting (_edit: this is never the case_), but we all make mistakes. Two interesting ideas to meditate on, w.r.t. science and scholarly communication: 1.) What would a Git history look like for an entire piece of research, or even just the many iterations of a single experimental procedure? GitHub does this for software development of course (we can integrate your articles with your GitHub repos, by the way), but there’s a whole untapped academic ecosystem - how do thoughts mature and develop in other fields? 2.) If you had a _Git History of Science_, there would be so many re-additions and re-deletions and entire huge sections removed (phlogiston, anyone?) that Compare views would be a wash of green and red. How many “mistakes” have been made and re-made over time? What could we learn from the trends and developments of knowledge? SCIENCE IS REALLY A PROCESS AND A WAY OF THINKING. Why aren’t we keeping better track of the thought process and showing errors made along the way? It would help us better build on or fork each other’s work, for one thing. Fewer redundancies and unnecessary pitfalls as well. Plus “mistakes” are a helpful and fateful force in the scientific process itself. Think about any great thinker, writer, artist, maker. I bet any of their rough drafts would seem pretty valuable now. In what other ways might we benefit from having detailed histories of inventive, creative, and thoughtful processes?
Perhaps you have heard of the peer review fraud scandals rocking several big journals. Rings of researchers trading quid pro quo favorable reviews; PIs reviewing their own work unbeknownst to editors; probably other bad things that we haven’t found out about yet. Or perhaps you remember prank paper generator SCIgen: it has produced many nonsensical manuscripts that were “peer reviewed”, accepted, and later, embarrassingly, retracted. To combat the systemic problem these jokes expose, Springer designed SciDetect to do the job a “peer” should be able to do in the first place – spot blatantly obvious bullshit. Maybe you even know of “soft fraud” – knowing that editors have sympathies or vested interests in a sub-discipline at _Journal X_; reaching out to an old colleague likely to review your manuscript; frequently collaborating with big name PIs whose brand has more clout than carefully done and clearly communicated science likely ever could. WHAT CAN WE DO?! That is the question. Certainly _Nature_ charging authors for faster peer review is not an intended answer. At Authorea, we think all levels of the scientific process would benefit from some openness and transparency. While different researchers might draw different lines, experimenting with open peer review seems like a good place to start (it’s kind of astounding that post-publication open review isn’t widely practiced yet). Open up your work to the light of day and get some honest open feedback that makes it better – what if adding more eyes brought about changes that got your manuscript accepted to a higher tier journal than you hoped? If that’s a solidly achievable best case, what’s the worst case? “BUT WHAT IF I GET SCOOPED?” This is always invoked as the inevitable and terrible outcome of open access. To ensure speed, maybe you specify a time frame. To ensure security, maybe you specify no anonymous viewing or commenting. But really, that won’t change much.
Without any data (open or paywalled), I’m pretty confident the majority of “scooping” incidents are the result of many players shooting for the same goals, smart people working hard, and good old-fashioned word of mouth. Maybe if we shared more we’d all get so much further! That’s the thing: as scientists we are proud of our work. We publish to show the world, so why not show it off sooner? Get credit faster? Get more feedback and make more useful connections? These represent some major features of the Internet that researchers are still chronically under-utilizing, and it was invented for us! THIS IS THE 21ST CENTURY. WE SHOULD SCIENCE LIKE IT.
Philippus Aureolus Theophrastus Bombastus von Hohenheim, self-styled as PARACELSUS, was a Swiss-German polymath and occultist active in the early 1500s. Notable among his many contributions (including the designation “father of toxicology”) was his emphasis on observation at a time when knowledge from the past was held in the highest regard. This belief, admittedly revolutionary at the time, was further reflected in his personal motto: _alterius non sit qui suus esse potest_ (“let no man belong to another who can belong to himself”). He refused to follow centuries-old schools of thought, relying on his own wits to understand the world around him. Paracelsus’s defiant independence naturally clashed with authorities, which only served to stoke his ego (see quote below). His challenges to traditional medicine, advocacy for observation as the path to knowledge, and use of common language for scholarly communication (learned individuals only lectured in Latin) all reflect changes society still struggles with today. WHAT CAN WE LEARN ABOUT SCIENCE FROM A 16TH CENTURY MYSTIC? Science, compared to other fields like math or art or finance, is formally a recent development - the first text to resemble a modern journal article was Galileo’s _Starry Messenger_. Like that work, Paracelsus and his philosophy were prophetic of open science and data. Paracelsus believed knowledge and the information behind it should be widespread (e.g. even physicians of his time were comparably educated with barbers and butchers) as well as rigorously examined and questioned. He also thought he was incredibly smart:
A recent article titled The spin rate of pre-collapse stellar cores: wave driven angular momentum transport in massive stars was written on Authorea, submitted to the Astrophysical Journal (ApJ), and posted to the arXiv as a pre-print. While waiting on peer review from the ApJ, the authors want to test Authorea as a platform for OPEN PEER-REVIEW. By going to the document’s page, you can comment on a section, figure, observation, sentence, or the whole piece. The authors and other commenters can respond and further the discussion. And it’s all out in the open, just how science was meant to be. But it doesn’t stop there. You can also view full-size, high-resolution versions of the paper’s figures, as well as easily follow links in the References at the bottom of the page. In the paper, the authors show for the first time how internal gravity waves, excited in the turbulent layers of stars at least ten times more massive than the Sun, can radically change their internal rotation rate. In particular, these waves – somewhat analogous to ocean waves – can determine how rapidly the stellar core spins around its axis when the star is about to die and become a supernova. The spin of a pre-supernova core is important because it deeply affects the stellar explosion and determines the rotation rate of the stellar remnant (neutron star or black hole).
THE SYSTEM AS IT STANDS A study published in July 2014 used the Freedom of Information Act to request access to contracts between academic publishers and 55 universities and 12 library consortia. 360 contracts were received, documenting the prices and bundling of deals from 9 major publishers (including Elsevier, Springer, Wiley, ACS, and Oxford University Press). The contracts show the result of opaque sales practices, manipulation, and varying degrees of negotiation skill: publishers can charge vastly different prices for the same products and services. Keep in mind they are selling to nonprofit institutions whose members

- conduct groundbreaking and lifesaving research (often taxpayer-funded)
- volunteer their time and talent to the publishers’ peer review process
- pay for the submission of articles published in journals
- and are now buying it all back.

Also keep in mind that top publishers have profit margins on the order of 30% or more. In the mid-1990s, with the shift from print-only to digital distribution, economic formulations changed. No longer would a research university _need_ to subscribe to multiple copies of in-demand journals. No longer would storage space play a significant role in decisions (e.g. storage and maintenance costs for a 2500-page journal volume range from $300-1000). No longer would impact be a limiting factor for purchased titles - nor, as it’s now emerging, should it be. And publishers could now offer their whole catalog of journals at one discounted “Big Deal” price. In the words of Derk Haank, then Elsevier and current Springer CEO: But what it [electronic publishing] does do is to _DRAMATICALLY LOWER THE MARGINAL COSTS OF ALLOWING ACCESS_.... [The cost for each new user] is virtually nil and that means that we should be more creative in the business model....
where we make a deal with the university, the consortia or the whole country, where we say for this amount we will allow all your people to use our material, unlimited, 24 hours per day. And, basically the price then depends on a rough estimate of how useful is that product for you; and we can adjust it over time. [emphasis added] Here, “adjust it over time” means mandating an average 5-6% price increase annually. Bergstrom et al. calculate: “A bundle whose price increased by 5.5% per year would DOUBLE ITS PRICE BETWEEN 1999 AND 2012, whereas over the same period the US consumer price index rose by 38%.” [emphasis added] What’s more, such “creative” business models force library administrators to try to quantify abstractions like the value of information. Information, however, is context dependent: opinions of a paper’s importance could range from “meaningless” to a critical insight for unraveling a disease pathway. At the end of the day, an all-inclusive “Big Deal” bundle may be easiest – if funds are available. When cost limits access, however, researchers may rely on e-mailed PDFs from helpful colleagues at better-equipped campuses. Another solution, when access is out of reach or publication slow (e.g. a year from initial acceptance to publication is common for some statistics journals), is pre-print repositories like arXiv. Unfortunately, those articles aren’t peer-reviewed - a key reason big publishers can charge so much. This is also a reason we think researchers (and journals!) might want to try their own pilot study of Authorea as an interactive repository or submission platform. This is the 21st century; scientists should be writing and disseminating like it! Have thoughts about this? Let us know in the Comments or follow us to get updates!
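Bergstrom et al.’s doubling figure is easy to verify yourself: compounding a 5.5% annual increase over the 13 years from 1999 to 2012 multiplies the starting price by roughly two. A quick one-liner (using awk for the floating-point arithmetic):

```shell
# Compound a 5.5% annual price increase over 13 years (1999-2012)
awk 'BEGIN { printf "price multiplier: %.2f\n", 1.055 ^ 13 }'
# → price multiplier: 2.01
```

By comparison, the 38% rise in the US consumer price index over the same stretch corresponds to only about a 2.5% annual rate.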
Today we are proud to announce the winners of our travel grant for European student attendees of the APS March Meeting in San Antonio, TX. WHY DID AUTHOREA SPONSOR THESE TRAVEL GRANTS? At Authorea, we want to build bridges between scholars, disciplines, and cultures in order to form a collaborative scholarly community at a global scale. Sometimes, _face-to-face meetings are the best catalysts for sharing and creating new connections_. Two of us at Authorea - Alberto and Matteo - are from Italy. They have benefited from academic careers abroad (postdocs at Harvard and University of California, Santa Barbara, respectively), thanks in part to important connections made at international conferences in the early stages of their academic careers. Our winners for the March Meeting are ALBERTO DE LA TORRE and JUAN TRASTOY QUINTELA, both from Spain. We hope that the connections they made at the March Meeting will bring fruitful collaborations.
INTRODUCTION TO VERSION CONTROL Many scientists write code as part of their research. Just as experiments are logged in laboratory notebooks, it is important to document the code you use for analysis. However, iteratively developing code raises a few key problems that make it difficult to document and track which code version was used to create each result. First, you often need to experiment with new ideas, such as adding new features to a script or increasing the speed of a slow step, but you do not want to risk breaking the currently working code. One commonly used solution is to make a copy of the script before making new edits. However, this can quickly become a problem because it clutters your filesystem with uninformative filenames, e.g. analysis.sh, analysis_02.sh, analysis_03.sh, etc. It is difficult to remember the differences between the versions of the files, and more importantly which version you used to produce specific results, especially if you return to the code months later. Second, you will likely share your code with multiple lab mates or collaborators, and they may have suggestions on how to improve it. If you email the code to multiple people, you will have to manually incorporate all the changes each of them sends. Fortunately, software engineers have already developed software to manage these issues: version control. A version control system (VCS) allows you to track the iterative changes you make to your code. Thus you can experiment with new ideas but always have the option to revert to a specific past version of the code you used to generate particular results. Furthermore, you can record messages as you save each successive version, so that you (or anyone else) reviewing the development history of the code can understand the rationale for the given edits. Also, it facilitates collaboration.
Using a VCS, your collaborators can make and save changes to the code, and you can automatically incorporate these changes into the main code base. The collaborative aspect is enhanced by the emergence of websites that host version controlled code. In this quick guide, we introduce you to one VCS, Git (git-scm.com), and one online hosting site, GitHub (github.com), both of which are currently popular among scientists and programmers in general. More importantly, we hope to convince you that although mastering a given VCS takes time, you can already achieve great benefits by getting started using a few simple commands. Furthermore, not only does using a VCS solve many common problems when writing code, it can also improve the scientific process. By tracking your code development with a VCS and hosting it online, you are performing science that is more transparent, reproducible, and open to collaboration. There is no reason this framework needs to be limited only to code; a VCS is well-suited for tracking any plain-text files: manuscripts, electronic lab notebooks, protocols, etc.
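To make “a few simple commands” concrete, here is a minimal sketch of a first Git session (the script name and commit messages are illustrative, and the scratch directory is just to keep the demo self-contained):

```shell
# Work in a scratch directory so the demo doesn't touch real files
cd "$(mktemp -d)"
git init -q

# One-time setup (skip if you have configured Git before)
git config user.email "you@example.com"
git config user.name "Your Name"

# Record the first version of an analysis script
echo 'echo "analysis, version 1"' > analysis.sh
git add analysis.sh
git commit -q -m "Add initial analysis script"

# Edit the script, then save the new version with a descriptive message
echo 'echo "analysis, version 2"' > analysis.sh
git add analysis.sh
git commit -q -m "Speed up the filtering step"

git log --oneline                    # list saved versions with their messages
git checkout HEAD~1 -- analysis.sh   # restore the file as it was one commit ago
```

Instead of analysis_02.sh and analysis_03.sh cluttering your directory, every version lives in the repository’s history, each tagged with a message explaining why it changed and recoverable at any time.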