PREAMBLE

This article is an experiment. I was writing up a brief post on the crowdfunding effort (see DONATE below). Then the news broke about the second Dallas health care worker. Then the cancellation of the President’s campaigning made the NYT’s push notifications. And then I started getting texts and emails from friends who know about my obsessive outbreak-following ways.

Of course, searching “recent ebola news” delivered updates from local, national, and international media outlets, mere minutes apart, all telling slight variations on the same story: stories that are always developing, but at different rates and depths, and each lacking certain small facts or context (leading to frequent searches for references, guidelines, previous reports, etc.). It occurred to me that maybe, just maybe, there could be a better way to:

1. Aggregate and synthesize real-time events.
2. Easily reference new reports against previous ones.
3. Cross-reference and cite accumulated facts and data.
4. Keep everything regularly updated in one convenient place.
5. Integrate decades of Ebola research into those reports and the ensuing debates, and ideally bring it to the public at large.

So. This is meant to be an experiment in the collaborative review of a pressing global issue. There is so much good information available to everyone (see LEARN below), but it doesn’t make sense for every individual to slog through it all and find every article alone. If we can centralize and synthesize enough of the background, the biology, the history of the current outbreak, and the hot-button issues, it will be that much easier for all those contributing to stay up to date. More importantly, it will also help those just beginning to look for answers to necessarily scary questions.

If you have suggestions, quality news updates, or want to be added as a co-author to actively contribute, leave a comment. It would be a pleasure to work together.
In the spirit of experimentation, we won’t start out with too many rules beyond:

A) BE KIND.
B) BACK UP WHAT YOU SAY.
C) IF SOMEONE IS ADHERING TO A AND B, DON’T TAKE IT PERSONALLY.
With research, it’s not always clear what we should do or where we should go next. Dr. Uri Alon expertly discusses this phenomenon in his 2013 TED Talk. Having an open mind is critical, because most researchers are at least tangentially up against the unknown. THE UNKNOWN IS WHERE DISCOVERY HAPPENS BECAUSE, SIMPLY PUT, ALMOST ANYTHING CAN HAPPEN THERE.

With IPython integrated into your articles, you can bring your readers along the same path you took through the unknown. They can certainly judge its validity (after all, scrutiny and skepticism are givens in science). But more importantly, by opening up your findings to a wider community, someone will come up with a slightly different (or _very different_) way through the unknown.
Friday, an op-ed piece _actually_ titled “Academic Science Isn’t Sexist” went up on the _New York Times_ blog (a version appeared in the Sunday Review). It was about academic research and the purported lack of sexism therein. The two editorialists are co-authors of a recently released analysis on the subject (it _is_ beautifully open access, and much of the raw data is available). The piece and the paper claim that sexism has largely waned in academic research, the result of shifts away from a previously sexist, male-dominated academy. They further claim that any remaining incongruities between male and female enrollment, advancement, and achievement are artifactual or anecdotal. In their telling, academic research is completely gender-blind now, and any differences are largely the product of society at large and earlier life decisions (like the choice to play with dolls/cute animals versus trucks/destructive robots). Huh.

The response from the science blogging community and Twittersphere was immediate and is still ongoing. Jonathan Eisen responded Halloween night, soon after the piece was posted. His immediate critique: the op-ed acknowledges reports of “physical aggression” without ever addressing them in the authors’ data or analysis (even the 60+ page research paper is short on coverage). The assumption: they are also anecdotal? So everything is actually fine? Probably not (<- this article details accounts of _sexual misconduct in field work_ involving biology, anthropology, and other social sciences, disciplines the authors above highlight as _largely welcoming and open to women_).

Emily Willingham provides excellent analysis of the data presented in the paper and of the broader debate at hand. It turns out there are numerous discrepancies and avoided topics of analysis (e.g., salary figures often showed statistically significant differences by gender; women more often reported a lack of inclusion; more details in her impeccable post).
Likewise, Matthew Francis covered the story, emphasizing the need to actively address these still-existent problems rather than ignore them: even a little explicit encouragement of female students in the face of implicit discouragement (as he sees in his own field of physics) is often all that’s needed. The ever-emphatic PZ Myers rounds out the debate by breaking down the major reasoning and assumptions of the original paper, with characteristic gusto.

So what exactly were the original authors thinking? A handful of distributed scientists were able to challenge the key arguments of their paper, using their own data and citations, in free time over the weekend. Talk about peer review. Seriously though, what were they thinking? I would _like to think_ that this was actually a brilliantly orchestrated publicity stunt to get more attention on this critical issue. AFTER ALL, WHO IS GOING TO BLOG/TWEET/COUNTER-OP-ED “ACADEMIC SCIENCE IS SLIGHTLY LESS SEXIST THAN WHEN MALE ACADEMICS COULD STILL SMOKE IN THEIR OFFICES”?

Because when you look at the data, the background on this issue, and the immediate response from the community, it’s obvious academic research isn’t now some utopian meritocracy brimming with equality. There are still institutional and systemic biases. Whether they’re related to gender, race, sexual orientation, or need, or tied up in the archaic publishing system that is all too easily gamed, we have a long way to go before things can be considered “fair”. What might a fair system even look like?
Humans are naturally curious. And curiosity is the foundation of science: we ask questions, search for answers, and share what we learn. But as budgets shrink and piles of paperwork and email grow, the passion for research and sharing gets lost.

400 years ago, it was different. Galileo Galilei could simply turn his telescope to Jupiter and chart some moving points of light. He discovered the Galilean moons orbiting Jupiter, and with this knowledge he argued that Earth orbited the Sun. He shared his results - all his data, drawings, calculations, and conclusions - by publishing a paper. It took him just 3 months from observation to publication.

Why can’t we be like Galileo? Today, research articles often have many authors and can take years of data collection. Final papers take even longer to write, with constant emails and revisions to meet the standards of big publishing companies. We’re in the future, people! Shouldn’t we write and share like it?

Meet Authorea, the truly collaborative online platform that lets you write and share science together. It keeps track of all your changes in one place; handles mathematical notation and visualization; organizes citations; and lets others review your work. Authorea can also be the home where your data and code live, letting others easily access, reproduce, and extend your work. All of this so you can spend less time writing and more time on your passion. Curious? Start writing for free at authorea.com.
WHAT’S ON YOUR MIND?

This is an experimental “article”. I’ll periodically drop citations to papers and news I’m reading, I’ve read, or I’m still thinking about. Dropped citations might have comments, links to related articles/writings, or be effectively untouched. At any point, anyone can comment on anything (please do; it’s encouraged). Maybe you have some insights on a paper, a particular concept, or a technique. Maybe you disagree with something. Maybe you have more recommended reading. Maybe you just want to say something is “cool”. Any kind of comment is good, because it adds information to the thought process or discussion.

As this is very much a work in progress, I’m sure organizational changes will be made, and if anyone has suggestions or wants to build off anything here, go for it. The goal, while not yet entirely clear, is some kind of radical or innovative collaboration.
R, YOUR DATA, AND YOU

This is a walk-through for using the rmagic extension in IPython Notebook, which lets you access the statistical programming language R from within IPython. You can Fork this article to have a working R console in your browser, right now, for storing, analyzing, and visualizing your data. It’s a great option for quickly implementing or introducing statistical programming: with Authorea you can even write your own use-specific walk-through (e.g., a lab instrument protocol) in the main article and have a Notebook built in for the resulting data. Need analysis now? FORK, ATTACH THE DATA, MODIFY THE R CODE IN IPYTHON, AND THEN SIMPLY RUN TO GENERATE THE ANSWERS AND GRAPHS YOU NEED. All the pieces of your research live in one place so you can easily record, index, and even compile reports from your browser - anywhere, anytime. Get on with your research, your teaching, and what matters most.

WHY R?

Because sometimes you don’t want to use, or need to learn, Python; R is a hammer when you have a nail. The people behind IPython know this and have evolved their efforts into Project Jupyter, the name inspired by Julia, Python, and R (the open languages of scientific computing) - and a certain giant planet. On their website, you can see a talk where Fernando Perez even points to Alberto’s “Science was always meant to be open” on the foundations of open science that are obvious in Galileo’s work.
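As a minimal sketch of the workflow above - assuming the rpy2 package (which now provides the rmagic extension) is installed alongside IPython, and with `measurements` as a purely illustrative variable name - a notebook session might look like this:

```
# --- Notebook cell 1 (Python): load the R magic ---
# The rmagic extension ships with the rpy2 package (pip install rpy2).
%load_ext rpy2.ipython

# --- Notebook cell 2 (Python): prepare some data to hand off to R ---
# In practice this might be parsed from a data file attached to the article.
measurements = [4.2, 5.1, 4.8, 5.5, 4.9, 5.0]

# --- Notebook cell 3 (R, via the %%R cell magic) ---
%%R -i measurements
# Everything in this cell runs in R; -i imported the Python variable.
summary(measurements)                     # basic descriptive statistics
hist(measurements,                        # quick histogram of the data
     main = "Example data", xlab = "value")
```

The `-i` flag imports a Python variable into the R session, and `-o` exports an R object back to Python, so you can move freely between the two languages as your analysis demands. Note that `%%R` is a cell magic and must sit on the first line of its own cell.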