Public Articles
Building
Improved surface temperature estimates with MASTER / AVIRIS sensor fusion
and 4 collaborators
Kinetic temperature exerts a measurable effect on most physical processes, and is explicitly used as an input to model ecological processes such as photosynthesis \cite{Townsend_1992}, leaf litter decomposition \cite{Fierer_2005}, and evapotranspiration \cite{Courault_2005}. Evapotranspiration is of particular interest to farmers and water managers in arid, drought-prone regions such as California, where the 2012 nut and fruit crop receipts alone totaled $18.7 billion (CDFA, 2013-2014). The economic impact of the 2014 California drought is not yet known, but caps on total water allotment to the San Joaquin Valley (30% of 2013 levels) \cite{howitt2014preliminary} force difficult decisions on farmers: inefficient watering may bring a particular crop to harvest, at the cost of water available to other fields; watering a crop too little causes cavitation and wilting, ruining the harvest. Severe cavitation due to underwatering requires replanting, with the loss of productivity stretching over years for perennial crops such as vineyards. Accurate modeling of temperature and evapotranspiration can provide farmers with robust estimates of water demand and enable more conservative agricultural water use to salvage a harvest or keep orchards alive, reducing the human impact of drought.
While in situ measurements are valuable tools for farmers, the size and scope of agriculture (25.4 million acres and 80,500 farms in 2012 for California alone; CDFA, 2013-2014) underscore the necessity for accurate, regional-scale remotely sensed temperature estimates. The accuracy of temperature estimates is particularly important for physical processes like evapotranspiration, which is driven by the temperature gradient between the air and the leaves, a gradient that can be less than 1 K \cite{Jarvis_1986}. Current remotely sensed temperature estimates typically have errors on the order of 1 K when averaged over all surface types \cite{Hulley_2012}; however, errors up to 4 K are typical for spectral greybodies such as vegetation \cite{gustafson2006revisions}, due both to uncertainty in emissivity and to the moister atmospheric profiles present over large contiguous vegetation patches. Errors as large as 3-8 K can occur over vegetation in humid conditions \cite{tonooka2005accurate}, and even in less humid Mediterranean climates robust atmospheric correction of thermal data is essential to provide operational data to farmers and resource managers.
AASTex (v6.1) example article for Authorea
This example manuscript is intended to serve as a tutorial and template for authors to use when writing their own AAS Journal articles with Authorea. The manuscript documents the history of and new features in the previous version, 6.0, as well as the new features in version 6.1, and includes figure and table examples to illustrate them. Authorea features a rich text editor so that you can write in LaTeX, Markdown, or rich text and render directly on the web. Authorea supports and renders the vast majority of LaTeX notation needed for AAS Journal articles. A few features provided by AASTeX v6.1 that are not available in Authorea are listed in this document. Authors are welcome to replace the text, tables, figures, and bibliography with their own and submit the resulting manuscript to the AAS Journals peer review system. The first lesson in the tutorial is to remind authors that the AAS Journals, the Astrophysical Journal (ApJ), the Astrophysical Journal Letters (ApJL), and the Astronomical Journal (AJ), all have a 250-word limit for the abstract. If you exceed this length the Editorial office will ask you to shorten it.
Microelectronics Reliability Template
Title
CudaHashedNet Midterm Report
and 1 collaborator
As available datasets increase in size, machine learning models can successfully use more and more parameters. In applications such as computer vision, models with up to 144 million parameters \cite{simonyan2014very} are not uncommon and reach state-of-the-art performance. Experts can train and deploy such models on large machines, but effective use of lower-resource hardware such as commodity laptops or even mobile phones remains a challenge.
One way to address the challenge of large models is through model compression using hashing \cite{hashnets}. In general, this amounts to reducing a parameter set S = {s1, s2, ..., sD} to a greatly reduced set R = {r1, r2, ..., rd} with d ≪ D by randomly tying parameters to hash buckets (si = rh(i)). This approach turns out to perform very well for neural networks, leading to the so-called HashedNets.
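A minimal sketch of this parameter-tying scheme (in Python rather than the project's C#/CUDA setting; the hash function, names, and sizes are illustrative, not those used by HashedNets):

```python
import numpy as np

def bucket(i, d, seed=0):
    """Map virtual parameter index i to one of d real buckets.
    Python's tuple hash stands in for the fast hash used in practice."""
    return hash((seed, i)) % d

# Virtual parameter set S of size D backed by d << D real values R.
D, d = 10, 3
R = np.array([0.5, -1.0, 2.0])
S = np.array([R[bucket(i, d)] for i in range(D)])  # s_i = r_h(i)
```

Every si is an alias of one of only d stored values, so the memory cost is O(d) while the model still behaves as if it had D parameters.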
Many machine learning models involve several linear projections representable by matrix-vector products W ⋅ x, where x is input data and W consists of model parameters. In most such models, this linear algebra operation is the performance bottleneck; neural networks, in particular, chain a large number of matrix-vector products, intertwined with non-linearities. In terms of dimensionality, modern systems deal with millions of training samples xi lying in possibly high-dimensional spaces. The shape of W, (dout, din), depends on how deep a layer is in a network: at the first layer, din depends on the data being processed, while dout at the final layer depends on the desired system output (i.e., dout = 1 for binary classification, and dout = p if the output can fall into p classes). In middle layers, dimensionality is up to the model designer, and increasing it can make the model more powerful but bigger and slower. Notably, middle layers often have square Wh. When W is stored in a reduced hashed format Wh, many common trade-offs may change.
The goal of our project is to explore the performance bottlenecks of the Wh ⋅ x operation, where Wh is a hashed representation of an array that stays constant across many inputs xi. Since neural networks are typically trained with batches of input vectors x concatenated into an input matrix X, we will look at the general case of matrix-matrix multiplication Wh ⋅ X, where the left matrix is in a reduced hashed format.
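The two strategies in question can be sketched in NumPy (dimensions and names are invented for illustration; a real system would use an optimized GEMM or a CUDA kernel): the product Wh ⋅ X can be computed either by first materializing the full matrix from the reduced parameter set or by accumulating per-column bucket contributions without ever storing W.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, n_batch, d = 4, 6, 5, 8        # illustrative sizes; d real params
Rw = rng.standard_normal(d)                 # reduced parameter set
H = rng.integers(0, d, size=(d_out, d_in))  # hash bucket of each virtual weight
X = rng.standard_normal((d_in, n_batch))    # batch of input vectors

# Option 1: materialize W, then call a fast matrix multiply.
W = Rw[H]                                   # full (d_out, d_in) matrix
Y1 = W @ X

# Option 2: never store W; gather one column of buckets at a time.
Y2 = np.zeros((d_out, n_batch))
for j in range(d_in):
    Y2 += np.outer(Rw[H[:, j]], X[j])       # contribution of input row j

assert np.allclose(Y1, Y2)
```

Option 1 trades memory for speed; Option 2 keeps the O(d) footprint at the cost of many small gathers, which is exactly the trade-off our project measures.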
Taking advantage of massively parallel GPU architecture can be important even when dealing with smaller models. In March 2015, Nvidia announced a SoC for mobile devices with a GPU performance of 1 teraflop, the Tegra X1 \cite{tegra}; we expect future mobile devices to have increasingly powerful GPUs.
The objectives of our project are to:
Investigate fast applications of Wh ⋅ X when Wh is small enough to be fully loaded into memory. In this case, is it faster to first materialize the hashed array and use existing fast linear algebra routines? Can the product be computed faster on a GPU with minimal memory overhead? This can lead to highly efficient deployment of powerful models on commodity hardware or phones.
Analyze performance when Wh is too big even after hashing. In the seminal work that popularized large-scale deep convolutional neural networks and GPU training \cite{krizhevsky2012imagenet}, Krizhevsky predicts that GPUs with more memory can lead to bigger networks with better performance. Hashing-based compression can help practitioners prototype very large models on their laptops before deciding which configuration to spend cloud computing resources on. Can we make HashedNets training on GPUs efficient? This may involve forcing a predictable split on the hash function to allow for independent division of work.
GeneCompressor: A gene based summary of variant calls
and 3 collaborators
Python
Wireless Transport Network Emulator for SDN Applications Development
and 2 collaborators
Software-Defined Networks, Wireless Transport, Open Networking Foundation
Computer networks have nowadays become complex and increasingly challenging to configure and set up. Therefore, the need for key architectural changes to the networking paradigm has risen. Software-Defined Networking (SDN) emerged around 2009, from work done at Stanford University in the context of the OpenFlow project. It is a revolutionary approach to networking that focuses on mitigating the limitations of traditional networks. The concepts proposed by this paradigm are not new, some being even 25 years old, but the timing was not right then, so the industry could not adopt them at the time.
SDN proposes a novel network architecture, where the forwarding state of the data plane is managed by a distant control plane, decoupled from the data plane \cite{stancu2015}. In this way, network devices become simple packet forwarding devices, while the control logic, or control plane, is implemented in what is called the controller. This has numerous advantages, from being able to introduce new policies into the network through software much more easily, to being able to centrally configure all network devices instead of configuring each one individually. In this way SDN can provide enhanced mechanisms for network management and configuration.
SDN can be used for optimizing the radio (e.g. remote radio units - RRUs and baseband units - BBUs) and transport (e.g. optical cross connects, microwave links) resources in future 5G systems. These resources can be managed by centralized controllers, on top of which an orchestrator may be placed. Therefore the SDN orchestrator has to be exposed to an adequately detailed abstraction of these resources.
The Wireless Transport Group is part of the Open Networking Foundation (ONF) and focuses on the development of a microwave information model that abstracts the characteristics of any wireless transport device. Several Proofs of Concept (PoCs) were conducted by the group (\cite{wireless_1st_poc}, \cite{onf_2nd_poc} and \cite{onf_3rd_poc}), in which the model was tested and several use cases proving its utility were implemented successfully. This led to the first version of the Microwave Information Model, an ONF technical recommendation called TR-532 \cite{onf_tr_532}. The main author contributed to both TR-532 and the PoCs.
The main contribution of this paper is the design of a Wireless Transport Emulator (WTE). WTE uses different technologies to emulate a wireless transport network consisting of emulated Network Elements that implement the Microwave Information Model, TR-532. This tool is extremely useful for SDN application developers who want to create applications using the aforementioned information model, because it eliminates the need to own real, expensive wireless transport devices in order to test the functionality they are developing.
This paper is organized as follows: Section [relatedWork] gives an overview of tools related to WTE that are used in other types of networks, Section [architecture] defines the architecture of the emulator, Section [implementation] provides high-level details about the implementation and the technologies used, and, finally, Section [conclusion] concludes the paper.
Molecular Diversity Template
Title
For the implementation we decided to use Unity, specifically version 5.5.2f1, as it is stable and provides good compatibility with the HTC Vive and SteamVR. The implementation is written entirely in C#; as a side note, we did use a few shaders written in Cg, a high-level shading language developed by Nvidia. The shaders for the GUI and the virtual environment were all standard Unity shaders, using the deferred lighting rendering path. The Unity engine offers many additional features that make it useful, such as the component-based programming architecture and the hierarchical organization of game objects in the scene, which makes transforming children straightforward.
Before implementing the WIP method we wanted to record data from the HTC Vive that could be played back on demand. This was done to accelerate development, as it would allow us to work without the constant need for an HTC Vive setup. We therefore implemented a tool specifically for this purpose, capable of collecting data (location and rotation in world space) from the HMD and the two controllers strapped to the legs. To ensure that the data is played back at the correct speed after collection (since frame rates tend to fluctuate), we collected the data at a fixed frame rate using the FixedUpdate()-method. A custom SaveLoadManager-class was then responsible for serializing the data and saving it to a specified path. Since Unity does not mark the Vector3 and Quaternion types that we need as serializable, we created our own structs that convert them into serializable formats. The code below shows how this is accomplished for Quaternion; the exact same approach is used for Vector3:
[Serializable]
public struct SerializableQuaternionArray
{
    public float[] x;
    public float[] y;
    public float[] z;
    public float[] w;

    public SerializableQuaternionArray (float[] rX, float[] rY, float[] rZ, float[] rW)
    {
        x = rX;
        y = rY;
        z = rZ;
        w = rW;
    }

    // Returns a string representation of the object
    public override string ToString ()
    {
        return String.Format ("[{0}, {1}, {2}, {3}]", x, y, z, w);
    }

    // Automatic conversion from SerializableQuaternionArray to Quaternion[]
    public static implicit operator Quaternion[] (SerializableQuaternionArray rValue)
    {
        Quaternion[] output = new Quaternion[rValue.x.Length];
        for (int i = 0; i < rValue.x.Length; i++) {
            output [i] = new Quaternion (rValue.x [i], rValue.y [i], rValue.z [i], rValue.w [i]);
        }
        return output;
    }

    // Automatic conversion from Quaternion[] to SerializableQuaternionArray
    public static implicit operator SerializableQuaternionArray (Quaternion[] rValue)
    {
        float[] tempX = new float[rValue.Length];
        float[] tempY = new float[rValue.Length];
        float[] tempZ = new float[rValue.Length];
        float[] tempW = new float[rValue.Length];
        for (int i = 0; i < rValue.Length; i++) {
            tempX [i] = rValue [i].x;
            tempY [i] = rValue [i].y;
            tempZ [i] = rValue [i].z;
            tempW [i] = rValue [i].w;
        }
        return new SerializableQuaternionArray (tempX, tempY, tempZ, tempW);
    }
}
Methods for saving and loading are implemented in the same class. We serialize the whole inner class, SaveManager, as an object. Through object serialization we take an object’s state and convert it to a stream of data that we can later de-serialize. After loading, the data is assigned to the MotionAnimator-class, which stores and replays the data held in arrays at a fixed frame rate. Below is an excerpt of the MotionAnimator-class responsible for playback. To animate, we simply increment the index i of the arrays and assign the values to the position and rotation of the game object matching the device type (HMD, left controller, right controller), which we define with an enumerator inside the DeviceManager-class.
void FixedUpdate ()
{
    if (SaveLoadManager.loaded) {
        if (playback && simulated && i < pos_left_controller.Length) {
            switch (gameObject.GetComponent<DeviceManager> ().deviceType) {
            case StringID.Left_Controller:
                transform.localPosition = pos_left_controller [i];
                transform.localRotation = rot_left_controller [i] * Quaternion.Euler (new Vector3 (90, 0, 0));
                break;
            case StringID.Right_Controller:
                transform.localPosition = pos_right_controller [i];
                transform.localRotation = rot_right_controller [i] * Quaternion.Euler (new Vector3 (90, 0, 0));
                break;
            case StringID.HMD:
                transform.localPosition = pos_hmd [i];
                transform.localRotation = rot_hmd [i];
                break;
            default:
                print ("Unknown device type");
                break;
            }
            i++;
        } else if (i >= pos_left_controller.Length) {
            // Restart playback from the beginning of the recording.
            i = 0;
        }
    }
}
Note that if we wish to use the Vive controllers instead of playing back recorded movements, we can do so simply by switching the boolean 'simulated' to false. Doing so assigns the movements of the motion controllers inside the Update()-method, meaning that the delta time between frames is not fixed. This ensures that game objects update at the highest possible rate, so the tracking stays as close to 1:1 as possible. An excerpt of the code is shown below.
if (!simulated) {
    if (TrackerTransforms.trackingIsValid ()) {
        switch (gameObject.GetComponent<DeviceManager> ().deviceType) {
        case StringID.Left_Controller:
            transform.position = tracker.GetTransform ("left_Controller").position;
            transform.rotation = tracker.GetTransform ("left_Controller").rotation;
            break;
        case StringID.Right_Controller:
            transform.position = tracker.GetTransform ("right_Controller").position;
            transform.rotation = tracker.GetTransform ("right_Controller").rotation;
            break;
        case StringID.HMD:
            transform.position = tracker.GetTransform ("hmd").position;
            transform.rotation = tracker.GetTransform ("hmd").rotation;
            break;
        default:
            print ("Unknown device type");
            break;
        }
    }
}
The calibration step is necessary before we can calculate step length and perform WIP, since we need the y-coordinates of the controllers while the user is standing as reference points. Calibration is simple: the user stands in a neutral pose with his legs together. When triggered, the positions of the controllers are saved as references, or 'control anchors', inside the UserProfile-class. We can then reference these values later on.
A step can be performed by the user under certain conditions. First and foremost, the system checks whether the user's legs are grounded by comparing the y-coordinates of the controllers to those of the control anchors. If the y-position of a controller is below the control anchor's y-position plus a threshold value (set to 0.01), or if the angle between the down direction and the controller is below 15 degrees, then the foot is grounded. The code below demonstrates this.
// Check if grounded for each leg (deviceType 0 is the left controller).
if (CheckGroundedHeight (0.01f) || CheckGroundedRotation (15)) {
    if (!isStepping) {
        if ((int)GetComponent<DeviceManager> ().deviceType == 0) {
            leftLegIsGrounded = true;
        } else {
            rightLegIsGrounded = true;
        }
    } else {
        if ((int)GetComponent<DeviceManager> ().deviceType == 0) {
            leftLegIsGrounded = true;
            prevStepLeg = false;
        } else {
            rightLegIsGrounded = true;
            prevStepLeg = true;
        }
    }
} else {
    if ((int)GetComponent<DeviceManager> ().deviceType == 0) {
        leftLegIsGrounded = false;
    } else {
        rightLegIsGrounded = false;
    }
}
When the user lifts his leg and the grounded conditions return false, the system initiates an aiming phase. The aiming leg is set as the active leg by switching the static boolean activeLeg (see the code example below). This means only one leg can be active at a time; we use this constraint because one can only step with one leg at a time, and the system should reflect this. We also need to ensure that the other leg cannot become active before the user either cancels or executes a step, as this would be an error; the system handles this by using static variables to determine when the user is stepping. Note that when the aiming phase is initiated for the active leg, the indicator (a white dot) representing the location of the step becomes visible; this is simply a matter of checking when the boolean isAiming is true and enabling the renderer for the billboard with the step-indicator texture.
// Change to the aiming phase when the leg is above the thresholds.
if (!CheckGroundedHeight (0.01f) && !CheckGroundedRotation (15) && !isStepping) {
    rayColorIfGrounded = Color.red;
    activeLeg = Convert.ToBoolean ((int)GetComponent<DeviceManager> ().deviceType);
    isAiming = true;
}
The step length is determined by taking the difference between the y-positions of the controllers strapped to the legs and the control anchors (fixed 3-dimensional vectors from the calibration step). We ensure that the value is always non-negative by using a max function to clamp values below zero (see the code below). The step length is applied to the step indicator's position to give the user feedback about how long the step is.
// Calculate the step length: vertical distance above the control anchor.
stepLength = Mathf.Max (0, transform.position.y - Administrator.admin.userProfile.controlAnchor [(int)GetComponent<DeviceManager> ().deviceType].y);
A 'dead zone' is defined to avoid false positives: the user has to exit the dead zone to trigger a step; in other words, within the dead zone the system cannot change state. The dead zone is an area of 10 cm extending above the control anchor plus its threshold value. To trigger a step the user has to move his leg back down toward a standing pose; when the system measures that the local angular velocity and local angular acceleration around the x-axis are both negative, it triggers a step. In the real-walking implementation this part of the code is removed entirely, as steps are performed by physically moving around. When the step is triggered, the SetMovementDistance()-method sets the distance of the movement, and currentPosition is set to the last known position of the parent transform before the root position is changed. We use the root transform to transform all its children, including the player. The reason we do not use only the HMD to estimate the player's position in world coordinates is that it is non-static and therefore not reliable; instead, we use an empty game object as a static reference point for the player position. We then use the HMD to calibrate the user's position inside the local space of the parent game object only when he is standing with both legs on the ground; when he enters the aiming phase, the position is frozen until the step is either executed or cancelled. When the user triggers a step, the indicator changes colour to red and locks to its position at the time of the trigger.
// Trigger the step when the active leg moves back down.
if (!isStepping && !CheckGroundedHeight (0.2f) && !CheckGroundedRotation (15) && canStep) {
    if (angularVelocity.x < 0 && angularAcceleration.x < 0) {
        isStepping = true;
        canStep = false;
        SetMovementDistance ();
        // Update the current position for later use
        currentPosition = transform.root.position;
    }
}
When the player moves, a vector is applied to the root position. We decided to interpolate toward the targeted step location by mapping stepLength from the interval [0, movementDistance] inversely to the interval [0, Vector3.Magnitude(targetPosition)]. As such, when the user moves his leg down to a standing pose, he gradually moves towards the targeted step position, resulting in a 1:1 correspondence between the user's gesture and the resulting motion. The code below illustrates this. When the stepInterpolation value is greater than or equal to the magnitude of the targetPosition vector, the step has been successfully executed and the system is reset to its initial state.
// Decide what happens while isStepping is true/false
if ((int)GetComponent<DeviceManager> ().deviceType == Convert.ToInt32 (activeLeg)) {
    if (isStepping) {
        isAiming = false;
        stepInterpolation = MathExtension.map (stepLength, movementDistance, 0, 0, Vector3.Magnitude (targetPosition));
        if (stepInterpolation >= oldStepInterpolation) {
            transform.root.position = currentPosition + new Vector3 (stepInterpolation, 0, 0);
            oldStepInterpolation = stepInterpolation;
        }
        if (stepInterpolation >= Vector3.Magnitude (targetPosition)) {
            stepInterpolation = Vector3.Magnitude (targetPosition);
            isStepping = false;
        }
    } else {
        // Reset to the initial state between steps.
        targetPosition = new Vector3 (strideLength, 0, 0);
        stepInterpolation = 0;
        oldStepInterpolation = 0;
        movementDistance = 0;
    }
}
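The MathExtension.map call above is a standard linear range remap; the following Python sketch (the name map_range is ours, and the helper is a guess at the C# extension's behavior) shows why passing a reversed input interval yields the inverse mapping described:

```python
def map_range(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly remap value from [in_lo, in_hi] to [out_lo, out_hi]."""
    t = (value - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

# Reversed input interval, as in map(stepLength, movementDistance, 0, 0, target):
# stepLength = movementDistance (leg fully raised) -> interpolation 0
# stepLength = 0 (leg back on the ground)          -> interpolation target
target, movement = 2.0, 0.5
assert map_range(movement, movement, 0.0, 0.0, target) == 0.0
assert map_range(0.0, movement, 0.0, 0.0, target) == target
```

Lowering the leg thus sweeps the interpolation from 0 to the full target magnitude, which is what moves the player root toward the step indicator.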
Location and Routing Algorithms: the Kalman-Basimov Algorithm
and 1 collaborator
The Kalman filter is a data-processing algorithm that removes noise and extraneous information. It takes a set of measurements as input. These measurements are assumed to always carry some error, caused by the imprecision of the measuring instruments. In the simplest case, the measurements (signal) obtained from an instrument can be described as the sum of the useful signal and an error. Since every instrument has a measurement error, that error arrives together with the signal, and our task is to recover the original signal by removing the error. This is the purpose of the Kalman filter: to extract from the received signal only its true value, discarding the distorting noise (measurement errors) \cite{49885e}.
An experiment is modeled in which a humming sound of steadily increasing volume is recorded with a microphone in a quiet room. The amplitude of the sound wave is taken as the input signal for the Kalman filter. The amplitude of this signal grows over time (growing oscillations), as shown in Fig. [fig1]. The experiment uses a microphone of rather poor quality, so some interference is superimposed on the recorded signal.
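An illustrative one-dimensional version of this setup can be sketched in Python (the filter parameters and the growing-amplitude signal below are invented for demonstration and are not taken from the article's experiment):

```python
import math
import random

def kalman_1d(measurements, q=2.5e-3, r=0.25):
    """Scalar Kalman filter with a constant-state model:
    q is the process-noise variance, r the measurement-noise variance."""
    x, p = measurements[0], 1.0
    estimates = []
    for z in measurements:
        p += q                 # predict: uncertainty grows
        k = p / (p + r)        # Kalman gain
        x += k * (z - x)       # correct with the new measurement
        p *= (1.0 - k)
        estimates.append(x)
    return estimates

random.seed(1)
# Oscillation with growing amplitude plus simulated microphone noise.
true = [0.002 * t * math.sin(0.01 * t) for t in range(1000)]
noisy = [s + random.gauss(0.0, 0.5) for s in true]
est = kalman_1d(noisy)
```

On this synthetic signal the filtered estimate follows the slow, growing oscillation while suppressing most of the measurement noise, which is the behavior described above.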
Beaver Activity Assessment on Alkali Creek, Beaverhead County, Montana
and 11 collaborators
Carotid intima-media thickness and related vascular measures: Population epidemiology and concordance in 11-12 year old Australians and their parents
and 1 collaborator
Topological Data Analysis for Hackers and Data Scientists
and 1 collaborator
«Topology belongs to the stratosphere of human thought! It might conceivably turn out to be of some use in the 24th century.»
Solzhenitsyn, The First Circle.
As great a writer as Solzhenitsyn was, he was off by three centuries with his prediction. Recent advances in computational topology have made this abstract field of mathematics relevant to society by defining new ways of finding structure in complex datasets.
Apuntes economía de la empresa
Raspberry Pi
Primary adult-onset tics
Adult-onset tics are rare, and usually secondary (i.e. due to some other neurological insult). However, there must be exceptions, and they are of interest both to patients and for understanding what factors cause tics and modify their course. I am aware of at least one study with relevant information. A multicenter German family study of Tourette syndrome and OCD found that 5 relatives were convinced that tics started between ages 21 and 25 \cite{9368194}. Their symptoms and course were in other ways typical of TS. Of course some could have had childhood tics that were not noticed at the time, but alternatively, there is no strong reason to expect that a tic disorder starting at age 22 is so very different from a tic disorder starting at age 17.
Today the journal Tremor and Other Hyperkinetic Movements shows on its main page, under "in press," this new article: "How much do we know about adult-onset primary tics? Prevalence, epidemiology, and clinical features," by Yale's Daphne Robakis. Their group previously reported the more common occurrence of tic exacerbation during adulthood \cite{28289551}. The new article is not yet live, but I am eager to read it when it is published. Tremor claims an average time from acceptance to publication of about 3 weeks, so hopefully the wait will be short.
The relationship between Static and Thermodynamic in the biological equilibrium of the universe
and 1 collaborator
How to Bring Science Publishing into the 21st Century
and 1 collaborator
The paradox of 21st-century science is that increasingly complex and collaborative cutting-edge research is still being written and published using 20th-century tools. The essential question (how come the internet age has yet to deliver a collaborative writing and publishing tool for research?) is what two of my physicist friends and I were thinking about several years ago while working at CERN, before we started Authorea. It didn't occur to us then, but in retrospect CERN, the birthplace of the World Wide Web, seems like the perfect place to have hatched our new idea.
Before we get into this particular story, take a look at Galileo Galilei’s seminal paper Starry Messenger (Sidereus Nuncius) below. This 400-year-old piece of observational astronomy chronicles, among other things, details of the lunar terminator, the Medicean stars (which later of course became the Galilean moons), and the array of dimmer stars present in the Ptolemaic nebulae.
MEASURING THE SEVERITY OF CLOSE ENCOUNTERS BETWEEN RINGED SMALL BODIES AND PLANETS
and 3 collaborators
The field of ringed Centaurs is only a few years old. Since Centaurs are known to regularly encounter the giant planets, it is of interest to explore the effect of a close encounter between a ringed Centaur and a giant planet on the ring structure. The severity of such an encounter depends on quantities such as the small body mass; velocity at infinity, vinf; ring orbital radius, r; and encounter distance. In this work, we derive a formula for a critical distance at which the radial force is zero on a collinear ring particle in the four-body, circular restricted, planar problem. Numerical simulations of close encounters with Jupiter or Uranus in the three-body planar problem are made to experimentally determine the largest encounter distance, R, at which the effect on the ring is “noticeable” using different values of small body mass, vinf, and r. R values are compared to the critical distance. We find that R lies inside the critical distance for Centaurs with masses ≪ the mass of Pluto but can lie beyond it for Centaurs with the mass of Pluto and ring structure analogous to Chariklo’s. Changing the mass by a factor of almost 4 changed R by ≤0.2 tidal disruption distance, Rtd. Effects on R due to changes in vinf, or r are found to be ≤ 1.5Rtd. R values found using a four-body problem suggest that the critical distance might be useful as a first approximation of the constraint on R.
Geographic and Structural Analysis of Banks in Russia and the World
and 1 collaborator
We study the properties of banks using the knowledge base of the international Wikidata project. Using SPARQL queries evaluated over objects of type "bank" in Wikidata, we solve the following tasks: listing all banks of the world, producing a list of countries ordered by the number of banks, and building a graph of banks and their parent companies or owners. In addition, we assess the completeness of Wikidata on this topic.
The article is distributed under the Creative Commons Attribution-ShareAlike license. Its materials are used in a chapter of the Wikiversity course "Wikidata Programming" \cite{WDBanks}. The illustrations have been uploaded to Wikimedia Commons. The article was written in 2017 by A. A. Krizhanovsky and O. S. Panfilova.