AUTHOREA

Preprints

Explore 39,076 preprints on the Authorea Preprint Repository

A preprint on Authorea can be a complete scientific manuscript submitted to a journal, an essay, a whitepaper, or a blog post. Preprints on Authorea can contain datasets, code, figures, interactive visualizations and computational notebooks.
Read more about preprints.

Moonlight Shadow
Matteo Cantiello

November 30, 2021
"Like every great river and every great sea, the moon belongs to none and belongs to all. It still holds the key to madness, still controls the tides that lap on shores everywhere, still guards the lovers who kiss in every land under no banner but the sky." (E.B. White, The New Yorker, July 26, 1969)

Where does the Moon come from? Scientists believe that our Moon formed out of a 'giant impact' that occurred between a Mars-sized planet and the early Earth, some 4.5 billion years ago. The Moon was then formed from the coalescence of the orbiting debris scattered during the impact. Recent results seem to confirm this scenario.
Experiments testing Bell’s inequality with local real source
Peifeng Wang

May 15, 2019
Aside from Bell's inequality, entanglement and the local real model have other aspects that are expected to appear in experiments. Analysis of (a) the physics concept of entanglement and (b) the precise interpretation of experiments shows that: 1) in a reported loophole-free violation of Bell's inequality, the transition of the wave function from odd parity to even parity reveals that the experiment is performed on the spins of a pair of local real nitrogen-vacancy (NV) centres; 2) the equivalence between rotating a spin by θ and rotating the measurement basis by −θ does not hold in the entanglement case, so in long-range entanglement setups designed to close the locality loophole, the operation of rotating a spin followed by measurement puts the entanglement in question; 3) the fair-sampling assumption arises whenever a finite sample is used to represent the entire population space; it is thus a basic requirement of any statistical experiment, and the fair-sampling loophole cannot be closed.
Augmented Reality with Hololens: Experiential Architectures Embedded in the Real Wo...
Paul Hockett
Tim Ingleby

February 27, 2017
_Additional notes:_

Authors:
- Paul Hockett, National Research Council of Canada, 100 Sussex Drive, Ottawa, K1A 0R6, Canada
- Tim Ingleby, Department of Architecture and Built Environment, Northumbria University, Ellison Place, Newcastle upon Tyne, NE1 8ST, UK

Links:
- Online version: Authorea, DOI: 10.22541/au.148821660.05483993
- Repository for videos and files: Figshare, DOI: 10.6084/m9.figshare.c.3470907
- arXiv version (1610.04281)
- Ongoing work: femtolab.ca
BillCorrectly: A software tool to help psychiatrists bill E&M codes appropriately
Kevin J. Black

February 23, 2017
© 2016, Kevin J. Black. This work is licensed under a Creative Commons Attribution 4.0 International License.
Time-resolved multi-mass ion imaging: femtosecond UV-VUV pump-probe spectroscopy wi...
Paul Hockett
Ruaridh Forbes

and 7 more

March 23, 2017
_Publication history_
- Original document (Authorea), DOI: 10.22541/au.149030711.19068540
- arXiv 1702.00744 (Feb. 2017)
- J. Chem. Phys. special issue "Developments and Applications of Velocity Mapped Imaging Techniques" (March 2017), DOI: 10.1063/1.4978923
- Data and analysis scripts (OSF), DOI: 10.17605/OSF.IO/RRFK3

_See also_
- AIP press release: _The Inner Lives of Molecules_ (April 2017)
- PImMS camera website
- Vallance group website
- Femtolab website
PolyLog_2 of Inverse Elliptic Nome Exponential Generating Function
Benedict Irwin

November 02, 2020
MAIN

Let G(q) = Li₂(m(q)) be an exponential generating function, where Li₂ is the polylogarithm of order 2,

\mathrm{Li}_2(z)=\sum_{k=1}^{\infty}\frac{z^k}{k^2}

and m(q) is the inverse elliptic nome, which can be expressed through the Dedekind eta function as

m(q)=\frac{16\,\eta\!\left(\tfrac{\tau}{2}\right)^{8}\eta(2\tau)^{16}}{\eta(\tau)^{24}}

where q = e^{iπτ}, or by Jacobi theta functions as

m(q)=\left(\frac{\theta_2(0,q)}{\theta_3(0,q)}\right)^{4}

where

\theta_2(0,q)=2\sum_{n=0}^{\infty}q^{(n+1/2)^2},\qquad \theta_3(0,q)=1+2\sum_{n=1}^{\infty}q^{n^2}

giving explicitly

G(x)=\sum_{k=1}^{\infty}\frac{1}{k^2}\left(\frac{2\sum_{n=0}^{\infty}x^{(n+1/2)^2}}{1+2\sum_{n=1}^{\infty}x^{n^2}}\right)^{4k}=\sum_{k=0}^{\infty}\frac{a_k x^k}{k!}

If we consider the sequence of coefficients a_k associated with G(x) modulo 1, that is the fractional parts frac(a_k), we gain the following sequence

0, 0, 0, \tfrac{2}{3}, 0, \tfrac{4}{5}, 0, \tfrac{5}{7}, 0, 0, 0, \tfrac{6}{11}, 0, \tfrac{10}{13}, 0, 0, 0, \tfrac{1}{17}, 0, \tfrac{3}{19}, 0, 0, 0, \tfrac{7}{23}, 0, 0, 0, 0, 0, \tfrac{13}{29}, 0, \tfrac{15}{31}, 0, 0, 0, 0, 0, \tfrac{21}{37}, 0, 0, 0, \tfrac{25}{41}, \cdots

We see the primes in the denominator in positions where the power of x is a prime. We also note that, so far, the numerators are always less than the denominator (obviously), but count successively upwards, producing monotonically increasing subsequences. The prime-only part continues

\tfrac{2}{3}, \tfrac{4}{5}, \tfrac{5}{7}, \tfrac{6}{11}, \tfrac{10}{13}, \tfrac{1}{17}, \tfrac{3}{19}, \tfrac{7}{23}, \tfrac{13}{29}, \tfrac{15}{31}, \tfrac{21}{37}, \tfrac{25}{41}, \tfrac{27}{43}, \tfrac{31}{47}, \tfrac{37}{53}, \tfrac{43}{59}, \tfrac{45}{61}, \tfrac{51}{67}, \tfrac{55}{71}, \tfrac{57}{73}, \tfrac{63}{79}, \tfrac{67}{83}, \tfrac{73}{89}, \tfrac{81}{97}, \ldots

After closer inspection, we see the numerators from the point 1, 3, 7, 13, 15, 21, 25, 27, 31, 37, 43, 45, 51, 55, 57, ... take the form prime(k) − 16; the numerators before this take the form 2·prime(k) − 16 (for 6 and 10), 3·prime(k) − 16 (for 5), 4·prime(k) − 16 (for 4), and 6·prime(k) − 16 (for the first numerator, 2). In every case the numerator is the residue of −16 modulo prime(k), and it is likely that this pattern continues for the rest of the numbers. This then gives, for the coefficient a_k of G(x) with k > 6,

\operatorname{frac}(a_k)=\frac{(-16)\bmod k}{k},\quad k\in\mathbb{P}

and 0 otherwise. We find that if we take the original coefficients a_k and subtract this fractional part,

\delta_k=a_k-\frac{(-16)\bmod k}{k}

then for numbers m which cannot be written as a sum of at least three consecutive positive integers, δ_m is an integer (empirical).
A111774: "Numbers that can be written as a sum of at least three consecutive positive integers." Apart from odd primes, the numbers which cannot be so written are the powers of two.

OTHER

We find a similar relationship with

G_2(x)=\mathrm{Li}_2\!\left(\frac{\cdots}{(1-x)^2\left(1-\frac{\cdots}{x-1}\right)^2}\right)=\sum_{k=0}^{\infty}\frac{b_k x^k}{k!}

where the b_k seem to follow, for k > 2,

\operatorname{frac}(b_k)=\frac{\cdots}{k},\quad k\in\mathbb{P}

GENERATING FUNCTION FOR FRACTIONAL PART

We see the generating function for n/2 is

\sum_{n=0}^{\infty}\frac{n}{2}x^n=\frac{x}{2(x-1)^2}

but the generating function for the fractional part of n/2, which is (n mod 2)/2, is given by

\sum_{n=0}^{\infty}\frac{n\bmod 2}{2}x^n=\frac{x}{2(1-x^2)}

The property described is associated with the polylog, and we see that the fractional parts of the coefficients of

\mathrm{Li}_2(2x)=\sum_{k=0}^{\infty}\frac{c_k x^k}{k!}

give

\operatorname{frac}(c_k)=\frac{k-2}{k},\; k\in\mathbb{P};\qquad 0\;\text{otherwise}

this means

\operatorname{frac}\!\left(\frac{2^k\,k!}{k^2}\right)=\frac{k-2}{k},\; k\in\mathbb{P};\qquad 0\;\text{otherwise}

or

\operatorname{frac}\!\left(\frac{2^k\,(k-1)!}{k}\right)=\frac{k-2}{k},\; k\in\mathbb{P};\qquad 0\;\text{otherwise}

we also see that

\operatorname{frac}\!\left(\frac{(k-1)!}{k}\right)=\frac{k-1}{k},\; k\in\mathbb{P};\qquad \frac{1}{2},\; k=4;\qquad 0\;\text{otherwise}
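The closing factorial identities follow from Wilson's and Fermat's little theorems and are cheap to check numerically. A minimal sketch in Python (the helper names `frac_part` and `is_prime` are mine, not the article's):

```python
from fractions import Fraction
from math import factorial

def frac_part(num, den):
    """Fractional part of the integer ratio num/den, in lowest terms."""
    return Fraction(num % den, den)

def is_prime(k):
    return k > 1 and all(k % d for d in range(2, int(k**0.5) + 1))

# frac((k-1)!/k) = (k-1)/k at primes (Wilson), 1/2 at k = 4, 0 otherwise
for k in range(2, 200):
    f = frac_part(factorial(k - 1), k)
    if is_prime(k):
        expected = Fraction(k - 1, k)
    elif k == 4:
        expected = Fraction(1, 2)
    else:
        expected = Fraction(0)
    assert f == expected

# frac(2^k (k-1)!/k) = (k-2)/k at primes (Fermat + Wilson), 0 otherwise
for k in range(2, 200):
    f = frac_part(2**k * factorial(k - 1), k)
    expected = Fraction(k - 2, k) if is_prime(k) else Fraction(0)
    assert f == expected
```

For prime k, (k−1)! ≡ −1 (mod k) and 2^k ≡ 2 (mod k), so the product is ≡ −2 ≡ k−2 (mod k), which is exactly the fractional part observed.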
ePSproc: Post-processing suite for ePolyScat electron-molecule scattering calculati...
Paul Hockett

September 03, 2019
_Article details_ Software metapaper, structured for the Journal of Open Research Software (JORS). Online version (Authorea): https://www.authorea.com/users/71114/articles/122402/_show_article GitHub (software repository): github.com/phockett/ePSproc Figshare repository (manuscript & source files): DOI: 10.6084/m9.figshare.3545639 10/08/16: this fork for review. 12/11/16: arXiv version uploaded, 1611.04043. 03/09/19: finally working on a Python version; see the GitHub pages for updates. Full documentation is now on Read the Docs.
Suggestions for new NIH grant applicants
Kevin J. Black

February 23, 2017
INTRODUCTION I've learned a few things from my experience with NIH grants, and in talking recently with a former research trainee, I realized those lessons might be helpful to others. What experience? I've successfully competed for NIH grants including R01s, R21s, a K08, a K24, an R13, and an ARRA supplement. I've also contributed as investigator or key personnel to other PIs' R01s. I've served on several NIH review panels and was a standing member of the Clinical Neuroscience and Neurodegeneration (CNN) study section for 4 years. A recent NIH biosketch is available here with all the details. DISCLAIMER: this is free advice ... and guaranteed to be worth every penny. ☺ As they say, YMMV.
Several Proofs of Security for a Tokenization Algorithm
Riccardo Longo
Riccardo Aragona

and 1 more

March 28, 2017
In this paper we propose a tokenization algorithm of Reversible Hybrid type, as defined in PCI DSS guidelines for designing a tokenization solution, based on a block cipher with a secret key and (possibly public) additional input. We provide some formal proofs of security for it, which imply our algorithm satisfies the most significant security requirements described in PCI DSS tokenization guidelines. Finally, we give an instantiation with concrete cryptographic primitives and fixed length of the PAN, and we analyze its efficiency and security.
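A Reversible Hybrid tokenizer pairs a keyed primitive with a lookup table for de-tokenization. The paper's construction uses a block cipher with a secret key and additional input; purely as a rough illustration of the vault idea, here is a toy sketch in Python using stdlib HMAC as the keyed function (class and parameter names are invented, and this toy has none of the paper's security guarantees):

```python
import hmac
import hashlib
import secrets

class ToyTokenizer:
    """Toy vault-style tokenizer: a keyed pseudorandom function generates
    tokens, and a table stores the token -> PAN mapping so the process is
    reversible. NOT the paper's block-cipher algorithm; illustration only."""

    def __init__(self, key: bytes):
        self.key = key
        self.vault = {}  # token -> PAN, used for de-tokenization

    def tokenize(self, pan: str) -> str:
        # Keyed PRF over the PAN plus a fresh random salt
        # (standing in for the scheme's "additional input").
        salt = secrets.token_bytes(8)
        digest = hmac.new(self.key, salt + pan.encode(), hashlib.sha256).digest()
        # Fixed-length numeric token, same length as the PAN.
        token = str(int.from_bytes(digest[:8], "big")).zfill(len(pan))[:len(pan)]
        self.vault[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        return self.vault[token]

t = ToyTokenizer(key=b"secret-key")
tok = t.tokenize("4111111111111111")
assert t.detokenize(tok) == "4111111111111111"
assert len(tok) == 16 and tok.isdigit()
```

Keeping the token the same length and character class as the PAN is what makes such schemes drop-in replacements in payment systems, which is the format-preservation requirement the PCI DSS guidelines emphasize.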
How To Write Mathematical Equations, Expressions, and Symbols with LaTeX: A cheatshee...
Authorea Help
Matteo Cantiello

and 2 more

May 15, 2019
WHAT IS LATEX? LaTeX is a programming language that can be used for writing and typesetting documents. It is especially useful for writing mathematical notation such as equations and formulae. HOW TO USE LATEX TO WRITE MATHEMATICAL NOTATION There are three ways to enter "math mode" and present a mathematical expression in LaTeX: 1. _inline_ (in the middle of a text line) 2. as an _equation_, on a separate dedicated line 3. as a full-sized inline expression (_displaystyle_) _inline_ Inline expressions occur in the middle of a sentence. To produce an inline expression, place the math expression between dollar signs ($). For example, typing $E=mc^2$ yields E = mc². _equation_ Equations are mathematical expressions that are given their own line and are centered on the page. These are usually used for important equations that deserve to be showcased on their own line or for large equations that cannot fit inline. To produce an equation, place the mathematical expression between \[ and \]. Typing \[x=\frac{-b \pm \sqrt{b^2-4ac}}{2a}\] yields the quadratic formula displayed on its own line. _displaystyle_ To get full-sized inline mathematical expressions use \displaystyle. Typing I want this $\displaystyle \sum_{n=1}^{\infty} \frac{1}{n}$, not this $\sum_{n=1}^{\infty} \frac{1}{n}$. yields the first sum at full size and the second compressed to fit the text line. SYMBOLS (IN _MATH_ MODE) The basics As discussed above, math mode in LaTeX happens inside the dollar signs ($...$), inside the square brackets \[...\], and inside equation and displaystyle environments.
Here's a cheatsheet showing what is possible in a math environment:

  -------------------------- -------------------- ---------------
  _description_              _command_            _output_
  addition                   +                    +
  subtraction                -                    −
  plus or minus              \pm                  ±
  multiplication (times)     \times               ×
  multiplication (dot)       \cdot                ⋅
  division symbol            \div                 ÷
  division (slash)           /                    /
  simple text                \text{text}          text
  infinity                   \infty               ∞
  dots                       1,2,3,\ldots         1, 2, 3, …
  dots                       1+2+3+\cdots         1 + 2 + 3 + ⋯
  fraction                   \frac{a}{b}          a/b
  square root                \sqrt{x}             √x
  nth root                   \sqrt[n]{x}          ⁿ√x
  exponentiation             a^b                  aᵇ
  subscript                  a_b                  a_b
  absolute value             |x|                  |x|
  natural log                \ln(x)               ln(x)
  logarithms                 \log_a b             logₐ b
  exponential function       e^x=\exp(x)          eˣ = exp(x)
  deg                        \deg(f)              deg(f)
  degree                     \degree              °
  arcmin                     ^\prime              ′
  arcsec                     ^{\prime\prime}      ″
  circle plus                \oplus               ⊕
  circle times               \otimes              ⊗
  equal                      =                    =
  not equal                  \ne                  ≠
  less than                  <                    <
  less than or equal to      \le                  ≤
  greater than or equal to   \ge                  ≥
  approximately equal to     \approx              ≈
  -------------------------- -------------------- ---------------
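Putting the three modes together: a minimal complete document, using only commands from the cheatsheet above, might look like this:

```latex
\documentclass{article}
\begin{document}

% inline math: dollar signs keep the expression in the text line
The relation $E=mc^2$ sits inline, as does the fraction $\frac{a}{b}$.

% displayed equation: \[ ... \] centers it on its own line
\[
  x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
\]

% \displaystyle forces full-sized math inside an inline expression
I want this $\displaystyle \sum_{n=1}^{\infty} \frac{1}{n}$,
not this $\sum_{n=1}^{\infty} \frac{1}{n}$.

\end{document}
```

This compiles as-is with any standard LaTeX engine; no extra packages are needed for these commands.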
T-SNE visualization of large-scale neural recordings
George Dimitriadis
Adam Kampff

and 1 more

April 25, 2016
Electrophysiology is entering the era of 'Big Data'. Multiple probes, each with hundreds to thousands of individual electrodes, are now capable of simultaneously recording from many brain regions. The major challenge confronting these new technologies is transforming the raw data into physiologically meaningful signals, i.e. single unit spikes. Sorting the spike events of individual neurons from a spatiotemporally dense sampling of the extracellular electric field is a problem that has attracted much attention, but is still far from solved. Current methods still rely on human input and thus become unfeasible as the size of the data sets grows exponentially. Here we introduce the t-distributed stochastic neighbor embedding (t-SNE) dimensionality reduction method as a visualization tool in the spike sorting process. t-SNE embeds the n-dimensional extracellular spikes (n = number of features by which each spike is decomposed) into a low (usually two) dimensional space. We show that such embeddings, even starting from different feature spaces, form obvious clusters of spikes that can be easily visualized and manually delineated with a high degree of precision. We propose that these clusters represent single units and test this assertion by applying our algorithm on labeled data sets both from hybrid and paired juxtacellular/extracellular recordings. We have released a graphical user interface (GUI) written in Python as a tool for the manual clustering of the t-SNE embedded spikes and as a tool for an informed overview and fast manual curation of results from other clustering algorithms. Furthermore, the generated visualizations offer evidence in favor of the use of probes with higher density and smaller electrodes. They also graphically demonstrate the diverse nature of the sorting problem when spikes are recorded with different methods and arise from regions with different background spiking statistics.
Predicting Peptide-MHC Binding Affinities With Imputed Training Data
Alex Rubinsteyn
Timothy O'Donnell

and 2 more

April 19, 2016
Predicting the binding affinity between MHC proteins and their peptide ligands is a key problem in computational immunology. State of the art performance is currently achieved by the allele-specific predictor NetMHC and the pan-allele predictor NetMHCpan, both of which are ensembles of shallow neural networks. We explore an intermediate between allele-specific and pan-allele prediction: training allele-specific predictors with synthetic samples generated by imputation of the peptide-MHC affinity matrix. We find that the imputation strategy is useful on alleles with very little training data. We have implemented our predictor as an open-source software package called MHCflurry and show that MHCflurry achieves competitive performance to NetMHC and NetMHCpan.
How many scholarly articles are written in LaTeX?      
Alberto Pepe

February 21, 2017
How many people use the typesetting language LaTeX? This is obviously a hard question. However, another way to look at it is to calculate the percentage of published scholarly articles written in LaTeX.
Agnostic cosmology in the CAMEL framework
plaszczy

March 07, 2016
Cosmological parameter estimation is traditionally performed in the Bayesian context. By adopting an "agnostic" statistical point of view, we show the interest of confronting the Bayesian results with a frequentist approach based on profile likelihoods. To this purpose, we have developed the _Cosmological Analysis with a Minuit Exploration of the Likelihood_ (CAMEL) software. Written from scratch in pure C++, emphasis was put on building a clean and carefully designed project where new data and/or cosmological computations can be easily included. CAMEL incorporates the latest cosmological likelihoods and gives access _from the very same input file_ to several estimation methods: - a high-quality Maximum Likelihood Estimate (a.k.a. "best fit") using MINUIT, - profile likelihoods, - a new implementation of an Adaptive Metropolis MCMC algorithm that relieves the burden of reconstructing the proposal distribution. We present here those various statistical techniques and roll out a full use case that can then be used as a tutorial. We revisit the ΛCDM parameters determination with the latest Planck data and give results with both methodologies. Furthermore, by comparing the Bayesian and frequentist approaches, we discuss a "likelihood volume effect" that affects the optical reionization depth when analyzing the high multipoles part of the Planck data. The software, used in several Planck data analyses, is available from http://camel.in2p3.fr. Using it does not require advanced C++ skills.
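The profile-likelihood idea the abstract contrasts with the Bayesian approach is simple to state: for each value of the parameter of interest, maximize the likelihood over all nuisance parameters. A toy sketch in Python (not CAMEL code; the Gaussian model, data values, and function names are all invented for illustration):

```python
import math

# Toy model: Gaussian with mean mu (parameter of interest) and width
# sigma (nuisance parameter). Synthetic data, for illustration only.
data = [4.9, 5.1, 5.3, 4.7, 5.0, 5.2, 4.8, 5.0]

def neg_log_likelihood(mu, sigma):
    n = len(data)
    return n * math.log(sigma) + sum((x - mu) ** 2 for x in data) / (2 * sigma ** 2)

def profile_nll(mu):
    # Profile out sigma: for a Gaussian the maximizing sigma at fixed mu
    # is available in closed form, sigma_hat(mu)^2 = mean((x - mu)^2).
    sigma_hat = math.sqrt(sum((x - mu) ** 2 for x in data) / len(data))
    return neg_log_likelihood(mu, sigma_hat)

# Scan mu: the minimum of the profile is the maximum-likelihood estimate;
# the interval where the profile rises by 1/2 gives a 68% CL interval.
mus = [4.5 + 0.001 * i for i in range(1001)]
best_mu = min(mus, key=profile_nll)
print(round(best_mu, 2))  # 5.0 (the sample mean)
```

In CAMEL the same scan is done numerically with MINUIT over the full set of cosmological and nuisance parameters, which is why a robust minimizer matters.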
Cataloguing Molecular Cloud Populations in Galaxy M100
Natalie Hervieux
Erik Rosolowsky

March 01, 2016
We compare the properties of giant molecular associations in the galaxy Messier 100 (M100) with those of the less massive giant molecular clouds in the Milky Way and Local Group, while also observing how those properties change within M100 itself. From this analysis of cloud mass, radius, and velocity dispersion, we determine that the clouds are in or near virial equilibrium and that their properties are consistent with the underlying trends for the Milky Way. We find differences between nuclear, arm, and inter-arm M100 populations, such as the nuclear clouds being the most massive and turbulent, and the arm and inter-arm populations having differently shaped mass distributions from one another. Through the analysis of velocity gradients, cloud motion can be attributed to turbulence rather than large-scale shearing motion. This is supported by our comparison with turbulence-regulated star formation models. Finally, we calculate ISM depletion times to see how quickly clouds turn gas into stars and find that clouds form stars more efficiently if they are turbulent or dense.
El Niño Composites
Tristan Hauser

May 21, 2020
A lot of attention has been given to the consequences of the latest strong El Niño event. People often talk about meteorological phenomena such as El Niño (or La Niña) conditions, but what are these, and how do we come about our notions of what is a 'typical' El Niño event? How consistent do we expect the effects of this phenomenon to be, especially when these 'signature effects' occur thousands of kilometers away from the Pacific Ocean? Often, understanding of the typical effects of large-scale climate variations is derived from _composites_. This is a common statistical method where elements are classified into groups based on some external consideration, and then the properties of each group are expressed by the average of all the elements it contains. This can be a very efficient way to visualize large data sets, but it can also imply more consistency within groups than is actually the case. This post goes over some of the mechanics of creating composites, and ways to explore to what degree they can be taken at 'face value'.
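The compositing recipe described here (classify by an external index, then average each group) fits in a few lines. A minimal sketch in Python, where the rainfall values, the event years, and the variable names are all synthetic and chosen only for illustration:

```python
# Composite analysis: classify years by an external index, then express
# each group by the average of its members. All data below are synthetic.

# Hypothetical December rainfall anomalies (mm) for years 2000-2009
rainfall = {2000: -1.2, 2001: 0.8, 2002: 3.1, 2003: -0.5, 2004: 0.2,
            2005: -2.0, 2006: 2.7, 2007: -1.8, 2008: 0.1, 2009: 2.9}

# Hypothetical classification from an external index (e.g. an SST index)
el_nino_years = {2002, 2006, 2009}
la_nina_years = {2000, 2005, 2007}

def composite(data, years):
    """Average the data over the selected group of years."""
    vals = [data[y] for y in years]
    return sum(vals) / len(vals)

el_nino_mean = composite(rainfall, el_nino_years)   # mean of  3.1, 2.7, 2.9
la_nina_mean = composite(rainfall, la_nina_years)   # mean of -1.2, -2.0, -1.8

print(round(el_nino_mean, 2))  # 2.9
print(round(la_nina_mean, 2))  # -1.67
```

As the post warns, the group mean can hide large within-group spread, so it is worth also inspecting the range (or full distribution) of each group before taking a composite at face value.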
The Surfer's Guide to Gravitational Waves
Matteo Cantiello

February 20, 2017
IN A NUTSHELL: Gravitational waves are ripples in the fabric of space-time produced by violent events, like the merger of two black holes or the explosion of a massive star. Unlike light (electromagnetic waves), gravitational waves are not absorbed or altered by intervening material, so they are very clean proxies of the physical process that produced them. They are expected to travel at the speed of light and, if detected, they could give precious information about the cataclysmic processes that originated them and about the very nature of gravity. That's why the direct detection of gravitational waves is such an important endeavor. Definitely worthy of a Nobel prize in physics.
Tourette syndrome research highlights from 2016
Kevin J. Black

August 02, 2017
This article presents highlights chosen from research that appeared during 2016 on Tourette syndrome and other tic disorders. Selected articles felt to represent meaningful advances in the field are briefly summarized.
Generation of Shear Waves by Laser in Soft Media in the Ablative and Thermoelastic Re...
Pol Grasland-Mongrain
Yuankang Lu

January 07, 2016
This article describes the generation of elastic shear waves in a soft medium using a laser beam. Our experiments show two different regimes depending on laser energy. Physical modeling of the underlying phenomena reveals a thermoelastic regime caused by a local dilatation resulting from temperature increase, and an ablative regime caused by a partial vaporization of the medium by the laser. Computed theoretical displacements are close to experimental measurements. A numerical study based on the physical modeling gives propagation patterns comparable to those generated experimentally. These results provide a physical basis for the feasibility of a shear wave elastography technique (a technique which measures a soft solid stiffness from shear wave propagation) by using a laser beam.
Pharmit: Interactive Exploration of Chemical Space
Jocelyn Sunseri
David Koes

January 07, 2016
Pharmit (http://pharmit.csb.pitt.edu) provides an online, interactive environment for the virtual screening of large compound databases using pharmacophores, molecular shape, and energy minimization. Users can import, create, and edit virtual screening queries in an interactive browser-based interface. Queries are specified in terms of a pharmacophore, a spatial arrangement of the essential features of an interaction, and molecular shape. Search results can be further ranked and filtered using energy minimization. In addition to a number of pre-built databases of popular compound libraries, users may submit their own compound libraries for screening. Pharmit uses state-of-the-art sub-linear algorithms to provide interactive screening of millions of compounds. Queries typically take a few seconds to a few minutes depending on their complexity. This allows users to iteratively refine their search during a single session. The easy access to large chemical datasets provided by Pharmit simplifies and accelerates structure-based drug design. Pharmit is available under a dual BSD/GPL open-source license.
Thermodynamics of the magnetocaloric effect in the swept field and stepped field meas...
Yasu Takano
Nathanael A. Fortune

January 29, 2021
ENERGY CONSERVATION IN SWEPT FIELD LIMIT For a calorimeter sample (plus addenda) weakly thermally linked to a temperature-controlled reservoir, energy conservation implies -T\,dS = \kappa\,\Delta T\,dt + C_{\mathrm{addenda}}\,dT where κ is the sample-to-reservoir thermal conductance and C_addenda is the heat capacity of the actual addenda (such as the thermometer, heater, and glue or grease binding the sample to the sensors) plus the heat capacity of the sample lattice (due to phonons). The left-hand-side term is the heat released by the system (in the case of a spin system, for example, the heat released by the spins) when the field is changed by dH. The minus sign indicates that the system entropy decreases as heat is released. Most of the released heat flows to the reservoir, but some fraction heats up the addenda (to the same temperature as the system). The first term on the right-hand side describes heat flow to the reservoir. The second term describes the temperature rise of the addenda. In a non-adiabatic relaxation-time or AC calorimeter like that used in our swept-field measurements, the first term dominates. In contrast, in an adiabatic measurement, the first term is negligible.
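The two limits described above can be written out explicitly; a short sketch of the algebra implied by the energy balance:

```latex
% Energy balance for the weakly linked calorimeter:
%   -T\,dS = \kappa\,\Delta T\,dt + C_{\mathrm{addenda}}\,dT
%
% Swept-field (non-adiabatic) limit: the reservoir term dominates, so the
% entropy released over a field sweep follows from the measured
% temperature difference \Delta T(t):
\Delta S \simeq -\int \frac{\kappa\,\Delta T}{T}\,dt
% Adiabatic limit: the reservoir term is negligible, and the released
% heat goes entirely into warming the addenda:
dS \simeq -\frac{C_{\mathrm{addenda}}}{T}\,dT
```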
The Rainfall Annual Cycle Bias over East Africa in CMIP5 Coupled Climate Models
Wenchang Yang

November 05, 2015
East Africa has two rainy seasons: the long rains (March–May, MAM) and the short rains (October–December, OND). Most CMIP3/5 coupled models overestimate the short rains while underestimating the long rains. In this study, the East African rainfall bias is investigated by comparing the coupled historical simulations from CMIP5 to the corresponding SST-forced AMIP simulations. Much of the investigation is focused on the MRI-CGCM3 model, which successfully reproduces the observed rainfall annual cycle in East Africa in the AMIP experiment, but whose coupled historical simulation has a bias similar to, yet stronger than, that of the coupled multimodel mean. The historical−AMIP monthly climatology rainfall bias in East Africa can be explained by the bias in convective instability (CI), which is dominated by the near-surface moist static energy (MSE) and ultimately by the MSE's moisture component. The near-surface MSE bias is modulated by the sea surface temperature (SST) over the western Indian Ocean. The warm SST bias in OND can be explained by both insufficient ocean dynamical cooling and insufficient latent heat flux, while insufficient shortwave radiation and excess latent heat flux mainly contribute to the cool SST bias in MAM.
White matter connectivity differences converge with gene expression in a neurodevelop...
Joe Bathelt

October 29, 2015
ABSTRACT Knowledge of genetic cause in neurodevelopmental disorders can highlight molecular and cellular processes critical for typical development. Furthermore, the relative homogeneity of neurodevelopmental disorders of known genetic origin allows the researcher to establish the subsequent neurobiological processes that mediate cognitive and behavioural outcomes. The current study investigated white matter structural connectivity in a group of individuals with intellectual disability due to mutations in _ZDHHC9_. In addition to a shared cause of cognitive impairment, these individuals have a shared cognitive profile, involving oro-motor control difficulties and expressive language impairment. Analysis of structural network properties using graph theory measures showed global reductions in mean clustering coefficient and efficiency in the _ZDHHC9_ group, with maximal differences in frontal and parietal areas. Regional variation in clustering coefficient across cortical regions in _ZDHHC9_ mutation cases was significantly associated with the known pattern of expression of _ZDHHC9_ in the normal adult human brain. The results demonstrate that a mutation in a single gene impacts upon white matter organisation across the whole brain, but also shows regionally specific effects, according to variation in gene expression. Furthermore, these regionally specific patterns may link to specific developmental mechanisms, and correspond to specific cognitive deficits.
DNA barcoding and taxonomy: dark taxa and dark texts
Roderic Page

October 20, 2015
Summary Both classical taxonomy and DNA barcoding are engaged in the task of digitising the living world. Much of the taxonomic literature remains undigitised. The rise of open access publishing this century, and the freeing of older literature from the shackles of copyright, have greatly increased the online availability of taxonomic descriptions, but much of the literature of the mid- to late twentieth century remains offline ("dark texts"). DNA barcoding is generating a wealth of computable data that in many ways is much easier to work with than classical taxonomic descriptions, but many of the sequences are not identified to species level. These "dark taxa" hamper the classical method of integrating biodiversity data using shared taxonomic names. Voucher specimens are a potential common currency of both the taxonomic literature and sequence databases, and could be used to help link names, literature, and sequences. An obstacle to this approach is the lack of stable, resolvable specimen identifiers. The paper concludes with an appeal for a global "digital dashboard" to assess the extent to which biodiversity data is available online. Keywords: DNA barcoding, taxonomy, dark taxa, dark texts, digitisation