Public Articles
Location and Routing Algorithms. The Kalman Algorithm (Basimov)
and 1 collaborator
The Kalman filter is a data-processing algorithm that removes noise and extraneous information. Its input is a series of measurements. These measurements are assumed to always carry some error, owing to the imprecision of the measuring instruments. In the simplest case, the measurements (the signal) obtained from an instrument can be described as the sum of a useful signal and an error. Since every instrument has a measurement error, the error arrives together with the signal, and the task is to recover the original signal by removing that error. This is the purpose of the Kalman filter: to extract from the received signal only its true value, discarding the distorting noise (the measurement errors) \cite{49885e}.
The modelled experiment is as follows: in a quiet room, a microphone records a humming sound whose loudness steadily increases. The amplitude of the sound wave is taken as the input signal for the Kalman filter. The amplitude of this signal grows over time (rising oscillations), see Fig. [fig1]. The experiment uses a microphone of rather poor quality, so some interference is superimposed on the recorded signal.
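To make the idea concrete, here is a minimal scalar Kalman filter sketch in Python; the signal model, noise levels, and filter parameters are illustrative assumptions for an experiment like the one described, not values taken from it.

import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)
true_signal = t * np.sin(2 * np.pi * 5 * t)                # growing oscillation
measurements = true_signal + rng.normal(0.0, 0.2, t.size)  # noisy microphone

# Scalar Kalman filter for z_k = x_k + v_k, modelling x_k as a random walk.
q, r = 1e-3, 0.2 ** 2   # process and measurement noise variances (assumed)
x_hat, p = 0.0, 1.0     # initial state estimate and its variance
filtered = np.empty_like(measurements)
for k, z in enumerate(measurements):
    p = p + q                           # predict: variance grows by process noise
    gain = p / (p + r)                  # Kalman gain
    x_hat = x_hat + gain * (z - x_hat)  # update with the innovation
    p = (1.0 - gain) * p                # variance after the update
    filtered[k] = x_hat                 # de-noised estimate of the true signal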
Beaver Activity Assessment on Alkali Creek, Beaverhead County, Montana
and 11 collaborators
Carotid intima-media thickness and related vascular measures: Population epidemiology and concordance in 11-12 year old Australians and their parents
and 1 collaborator
Topological Data Analysis for Hackers and Data Scientists
and 1 collaborator
«Topology belongs to the stratosphere of human thought! It might conceivably turn out to be of some use in the 24th century.»,
Solzhenitsyn, The First Circle.
As great a writer as Solzhenitsyn was, he was off by three centuries with his prediction. Recent advances in computational topology have made this abstract field of mathematics relevant to society by defining new ways of finding structure in complex datasets.
Business Economics Notes
Raspberry Pi
Primary adult-onset tics
Adult-onset tics are rare, and usually secondary (i.e. due to some other neurological insult). However, there must be exceptions, and they are of interest both to patients and for understanding what factors cause tics and modify their course. I am aware of at least one study with relevant information. A multicenter German family study of Tourette syndrome and OCD found that 5 relatives were convinced that tics started between ages 21 and 25 \cite{9368194}. Their symptoms and course were in other ways typical of TS. Of course some could have had childhood tics that were not noticed at the time, but alternatively, there is no strong reason to expect that a tic disorder starting at age 22 is so very different from a tic disorder starting at age 17.
Today the journal Tremor and Other Hyperkinetic Movements shows on its main page, under "in press," this new article: "How much do we know about adult-onset primary tics? Prevalence, epidemiology, and clinical features," by Yale's Daphne Robakis. Their group previously reported the more common occurrence of tic exacerbation during adulthood \cite{28289551}. The new article is not yet live, but I am eager to read it when it is published. Tremor claims an average time from acceptance to publication of about 3 weeks, so hopefully the wait will be short.
The relationship between Static and Thermodynamic in the biological equilibrium of the universe
and 1 collaborator
How to Bring Science Publishing into the 21st Century
and 1 collaborator
The paradox of 21st-century science is that increasingly complex and collaborative cutting-edge research is still being written and published using 20th-century tools. The essential question—How come the internet age has yet to deliver a collaborative writing and publishing tool for research?—is what two of my physicist friends and I were thinking about several years ago while working at CERN, before we started Authorea. It didn’t occur to us then, but in retrospect it seems like CERN—the birthplace of the World Wide Web—was the perfect place to hatch our new idea.
Before we get into this particular story, take a look at Galileo Galilei’s seminal paper Starry Messenger (Sidereus Nuncius) below. This 400-year-old piece of observational astronomy chronicles, among other things, details of the lunar terminator, the Medicean stars (which later of course became the Galilean moons), and the array of dimmer stars present in the Ptolemaic nebulae.
MEASURING THE SEVERITY OF CLOSE ENCOUNTERS BETWEEN RINGED SMALL BODIES AND PLANETS
and 3 collaborators
The field of ringed Centaurs is only a few years old. Since Centaurs are known to regularly encounter the giant planets, it is of interest to explore the effect of a close encounter between a ringed Centaur and a giant planet on the ring structure. The severity of such an encounter depends on quantities such as the small body mass; velocity at infinity, vinf; ring orbital radius, r; and encounter distance. In this work, we derive a formula for a critical distance at which the radial force is zero on a collinear ring particle in the four-body, circular restricted, planar problem. Numerical simulations of close encounters with Jupiter or Uranus in the three-body planar problem are made to experimentally determine the largest encounter distance, R, at which the effect on the ring is “noticeable” using different values of small body mass, vinf, and r. R values are compared to the critical distance. We find that R lies inside the critical distance for Centaurs with masses ≪ the mass of Pluto but can lie beyond it for Centaurs with the mass of Pluto and ring structure analogous to Chariklo’s. Changing the mass by a factor of almost 4 changed R by ≤0.2 tidal disruption distance, Rtd. Effects on R due to changes in vinf, or r are found to be ≤ 1.5Rtd. R values found using a four-body problem suggest that the critical distance might be useful as a first approximation of the constraint on R.
Geographical and Structural Analysis of Banks in Russia and the World
and 1 collaborator
We study the properties of banks using the knowledge base of the international Wikidata project. Using SPARQL queries evaluated over items of type “bank” in Wikidata, the following tasks are solved: a list of all banks in the world is produced, a ranking of countries by number of banks is obtained, and a graph of banks and their parent companies or owners is built. In addition, the completeness of Wikidata on this topic is assessed.
The article is distributed under the Creative Commons Attribution-ShareAlike license. Its materials are used in a chapter of the Wikiversity course “Programming Wikidata” \cite{WDBanks}. The illustrations have been uploaded to Wikimedia Commons. The article was written in 2017 by A. A. Krizhanovsky and O. S. Panfilova.
Tailored motivational support to maintain physical activity in healthy individuals
Studying Earthquakes and Other Natural Disasters with SPARQL Queries
and 1 collaborator
Note
The article is distributed under the Creative Commons Attribution-ShareAlike license. Its materials are used in a chapter of the Wikiversity course “Programming Wikidata” \cite{WDNaturalDisaster}. The illustrations have been uploaded to Wikimedia Commons. The article was written in 2017 by A. A. Krizhanovsky and E. M. Azarenkova.
Geographical and Structural Analysis of Banks in Russia and the World
We study the properties of banks using the knowledge base of the international Wikidata project. Using SPARQL queries evaluated over items of type “bank” in Wikidata, the following tasks are solved: a list of all banks in the world is produced, a ranking of countries by number of banks is obtained, and a graph of banks and their parent companies or owners is built. In addition, the completeness of Wikidata on this topic is assessed.
The article is distributed under the Creative Commons Attribution-ShareAlike license. Its materials are used in a chapter of the Wikiversity course “Programming Wikidata” \cite{WDBanks}. The illustrations have been uploaded to Wikimedia Commons. The article was written in 2017 by A. A. Krizhanovsky and O. S. Panfilova.
Let us build the list of all banks.
We use:
the item “bank (Q22687)”,
the property “instance of (P31)”.
#List of `instances of` "bank"
SELECT ?bank ?bankLabel
WHERE
{
?bank wdt:P31 wd:Q22687.
SERVICE wikibase:label { bd:serviceParam wikibase:language "ru" }
}
SPARQL query, 783 records as of 16.02.2017.
A shortcoming of the resulting list is that a number of the items turned out to be nameless on Wikidata (“No label defined”). Let us try to obtain a list of banks whose “label” field is non-empty.
#added 2017-02
#List of `instances of` "bank" only with a label.
SELECT ?item ?item_label
WHERE
{
?item wdt:P31 wd:Q22687
; rdfs:label ?item_label .
FILTER (LANG(?item_label) = "en") .
}
SPARQL query, 404 records as of 16.02.2017.
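As an aside (not part of the original article), either query can be run programmatically against the public Wikidata SPARQL endpoint; a minimal Python sketch using the requests library:

import requests

# The second query from the article: banks that have an English label.
query = """
SELECT ?item ?item_label
WHERE
{
  ?item wdt:P31 wd:Q22687 ;
        rdfs:label ?item_label .
  FILTER (LANG(?item_label) = "en") .
}
"""

resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": query, "format": "json"},
    headers={"User-Agent": "wikidata-banks-demo/0.1 (example)"},  # WDQS asks for a UA
)
resp.raise_for_status()
for row in resp.json()["results"]["bindings"]:
    print(row["item"]["value"], row["item_label"]["value"])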
Poster SHS
and 4 collaborators
Consistency and quality control of daily precipitation data in southern Brazil (preprint)
and 1 collaborator
Rainfall data from a meteorological network in southern Brazil are used to evaluate the performance of two algorithms for detecting spurious data. Both methods use a statistical approach and spatial consistency based on the distances and altitude differences between two rain-gauge measurements. A variation of the multiple-Gamma method of \citet{You_2007} is considered in this study. The distribution of mean precipitation at neighboring stations is partitioned, and each interval is assumed to be modelled by a Gamma distribution. The second method assumes no a priori distribution, using spatial point information and temporally accumulated measurements from neighboring rain gauges to check daily rainfall data. To assess the reliability and accuracy of the algorithms in detecting spurious data, seeded errors are introduced into the rainfall time series. A two-dimensional probabilistic model of introduced/detected errors (yes-no) is employed to compute metrics for the probabilities of correct detection and of false alarms by the algorithm. The proposed new algorithm is found to outperform the multiple-Gamma-distribution algorithm.
Keywords: rainfall data, rain gauge, spurious data detection.
Engineering a Table Tennis Smart Racket for Game Analysis and Coaching
Geoscience Papers of the Future: Lessons Learned from Practicing Reproducible Research, Open Science, and Digital Scholarship
and 15 collaborators
The Geosciences Paper of the Future Initiative was created by the EarthCube OntoSoft project and its Early Career Advisory Committee formed by 30 geoscientists in different disciplines in order to disseminate best practices for reproducible publications, open science, and digital scholarship. The Initiative consists of three major efforts:
the compilation of best practices from a variety of community organizations (e.g., ESIP, RDA), scientific societies (e.g., AGU, AAAS, CODATA), curators (e.g., IEDA, NSIDC), and publishers (e.g., Nature, Science);
the dissemination of best practices through training sessions at major scientific conferences (e.g., AGU, GSA, ASLO, CEDAR) and research institutions (e.g., WHOI, USGS). The training materials are openly available, including a summary checklist for authors, and show authors how to manage their scholarly identity, reputation, and impact throughout their careers; and
the publication of a special issue of the AGU Earth and Space Science journal on Geoscience Papers of the Future containing articles that illustrate how to apply these best practices in different geosciences areas, with another special issue of the journal Geophysics under way.
A Geosciences Paper of the Future follows best practices to document all the associated digital products that result from the research reported in the paper. This means that a paper would include:
Data available in a public repository, including documented metadata, a clear license specifying conditions of use, and citable using a unique and persistent identifier
Software available in a public repository, with documentation, a license for reuse, and citable using a unique and persistent identifier
Provenance of the results by explicitly describing the series of computations and their outcome in a workflow sketch, a formal workflow, or a provenance record, possibly in a shared repository and with a unique and persistent identifier
These best practices are described in detail in \cite{Gil-etal-ess16}.
The Geoscience Papers of the Future published to date not only serve as exemplars of how to implement best practices, but also expose limitations of existing cyberinfrastructure capabilities to support scientists in their work.
In this paper, we give a synthesis of perspectives by GPF authors contrasting the approaches used to implement GPF best practices in their own disciplines, the lessons learned, the challenges encountered, and the benefits found. We should summarize here the main findings.
The paper starts with an overview of the articles that illustrates the breadth of disciplines, motivations, and approaches covered by all the GPFs. We then compare the different papers along common dimensions. We discuss the benefits and the challenges found. We conclude with prospects for the future.
NOTE from 5/15/17 meeting: Add a comment about the different levels of reproducibility.
Comparison of the Differential Cross Section of High Energy Gamma Rays using Classical and Quantum Electromagnetism
In an experiment conducted in 1923, Arthur Holly Compton observed inelastic scattering of photons by a charged particle at high energy and correctly predicted the result through a derivation made possible by attributing particle-like momentum to light quanta. This result provided evidence for the particle-like behavior of light and continued the discussion of the wave-particle duality of light. This was significant because the behavior of light was widely accepted as purely wave-like in the study of classical electromagnetism. The modern interpretation of light considers it both wave and particle.
At the time of Compton’s discovery, light was largely understood as a wave. One example of the wave-like behavior of light was observed in the slit experiments, in which light diffracted like water waves rather than like a stream of particles. In 1905, this view was challenged by the discovery of the photoelectric effect \cite{Einstein_1905}, which tied the likelihood of electron emission from a metal illuminated by light to the frequency of the light rather than its intensity. This interpretation suggested that light may behave more like a stream of particles than like a wave phenomenon. The observation gave rise to the expression of the photon energy in terms of frequency, \begin{equation} E = h f \end{equation} where \(h\) is Planck’s constant and \(f\) is the frequency of the light. This result was inconsistent with classical electromagnetism, in which the scattering response is determined by the intensity of the light.
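For context (a textbook result, not taken from this article): treating light as quanta with energy \(E = hf\) and momentum \(p = h/\lambda\), conservation of energy and momentum in the scattering event yields the Compton shift of the photon wavelength, \begin{equation} \lambda' - \lambda = \frac{h}{m_e c}\left(1 - \cos\theta\right) \end{equation} where \(m_e\) is the electron mass and \(\theta\) is the scattering angle. Classical (Thomson) scattering, by contrast, predicts no wavelength shift at all.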
Nature Communications Template
Location and Routing Algorithms. The Kalman Algorithm (Kuzmin)
and 1 collaborator
The Kalman filter is an effective recursive filter that estimates the state vector of a dynamic system from a series of incomplete and noisy measurements. It is named after Rudolf Kalman and was first described in 1960 \cite{Selcuk2002}.
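In its standard textbook form (added here for reference; the notation is the usual linear-Gaussian state-space model, not anything specific to this article), the filter alternates a prediction step and a measurement-update step: \begin{equation} \hat{x}_{k|k-1} = F_k\,\hat{x}_{k-1|k-1}, \qquad P_{k|k-1} = F_k P_{k-1|k-1} F_k^{T} + Q_k \end{equation} \begin{equation} K_k = P_{k|k-1} H_k^{T}\left(H_k P_{k|k-1} H_k^{T} + R_k\right)^{-1}, \qquad \hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k\left(z_k - H_k\,\hat{x}_{k|k-1}\right), \qquad P_{k|k} = \left(I - K_k H_k\right)P_{k|k-1} \end{equation} where \(F_k\) is the state-transition model, \(H_k\) the observation model, \(Q_k\) and \(R_k\) the process- and measurement-noise covariances, and \(z_k\) the measurement at step \(k\).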
Parallel implementations of a TV-\(L^{1}\) image-denoising algorithm
Image acquisition involves many hardware and software stages, each of which introduces error sources. These errors appear as visual artifacts in the image, typically recognized as noise. This can be especially noticeable in images acquired at low levels of illumination, such as night photography.
Two common manifestations of noise in digital images are Gaussian noise and salt-and-pepper noise. Gaussian noise is typically associated with errors in detection. It produces pixel values that vary within a quasi-normally distributed range about the “true” value at that point in the image. Salt-and-pepper noise typically arises from transmission errors, and the pixel value is recorded as either fully on or fully off (in grayscale, white or black). Removing these artifacts can be desirable from an aesthetic perspective or in order to pre-process images for other workflows. Common techniques to address this include Gaussian blurring, which takes the convolution of an image with a Gaussian kernel window, and median filtering, which replaces pixels with the median value of a sliding window. These techniques can reduce noise; however, they are also susceptible to blurring edges of features.
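A minimal sketch of the two classical filters on synthetic noise (the array sizes and noise levels are illustrative, not from the original text):

import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

img = np.random.rand(128, 128)  # stand-in for a grayscale image in [0, 1]

# Gaussian noise: quasi-normal perturbation around the true pixel values.
gaussian_noisy = img + np.random.normal(0.0, 0.05, img.shape)

# Salt-and-pepper noise: 5% of pixels forced fully off or fully on.
sp_noisy = img.copy()
mask = np.random.rand(*img.shape)
sp_noisy[mask < 0.025] = 0.0   # "pepper"
sp_noisy[mask > 0.975] = 1.0   # "salt"

# Gaussian blurring: convolution with a Gaussian kernel window.
blurred = gaussian_filter(gaussian_noisy, sigma=1.0)

# Median filtering: each pixel replaced by the median of a sliding window.
medianed = median_filter(sp_noisy, size=3)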
The total variation technique was introduced in 1992 by Rudin, Osher and Fatemi (ROF) \cite{Rudin_1992} as an alternative denoising method. The method works by iteratively constructing a function u on a domain Ω that minimizes its energy with an input function f: \begin{equation} \min_{u}\int_{\Omega}\left\Vert{\nabla{u}}\right\Vert + \lambda\int_{\Omega}(u-f)^2 \end{equation} with ‖ ⋅ ‖ being the L2 norm. The first term is the total variation of the image. It is a regularizer for the minimization function. The second term also includes an L2 norm, which means that the problem is convex and has a unique solution \cite{chambolle:hal-00437581}.
Replacing the second term above with an L1 norm leads to the TV-L1 model: \begin{equation} \min_{u}\int_{\Omega}\left\Vert{\nabla{u}}\right\Vert + \lambda\int_{\Omega}\left\vert{u-f}\right\vert \end{equation} The TV-L1 model is still convex, though no longer strictly convex, so its minimizer need not be unique; when applied to discrete images, it tends to offer better performance at removing salt-and-pepper noise than the ROF model and is thus an important image processing technique. It is a standard method implemented in many computer vision packages, including OpenCV \cite{noauthor_opencv:_2017}.
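As a sketch of the OpenCV route mentioned above: cv::denoise_TVL1 lives in OpenCV's photo module, but the exact Python binding (whether the result array is passed in or returned) has varied across versions, so treat the call below as an assumption and check help(cv2.denoise_TVL1) for your installation.

import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # 8-bit grayscale input

# Corrupt 5% of the pixels with salt-and-pepper noise for the demonstration.
noisy = img.copy()
mask = np.random.rand(*img.shape)
noisy[mask < 0.025] = 0     # "pepper"
noisy[mask > 0.975] = 255   # "salt"

# denoise_TVL1 takes a list of observations (a single noisy frame here) and
# computes the minimizer of the TV-L1 energy; lambda=1.0, 30 iterations.
result = np.empty_like(noisy)
cv2.denoise_TVL1([noisy], result, 1.0, 30)

cv2.imwrite("denoised.png", result)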