Public Articles

Iris: Build and Analyze Broadband Spectral Energy Distributions

and 2 collaborators

The abstract goes here

The Effect of Range on Paintball Shooting Accuracy and Shooting Speed (Paintball is awesome you should all do it!)

and 2 collaborators

We examined the relationship between distance, accuracy, and time in the firing of a paintball marker. Participants were asked to fire 5 paintballs at targets at 3 different distances, and the time between the first and last shot was recorded. Accuracy was measured as the distance between the mark and the centre of the target. It was hypothesized that accuracy would increase as distance decreased and that shooting time would decrease as distance decreased. The difference between mean accuracy at 10 and 30 metres was found to be 67.77 cm. The difference between mean shooting time at 10 and 30 metres was found to be 2.493 seconds. These were the most pronounced differences in the means, which allowed us to confirm our alternative hypothesis. The study also briefly examined the effects of experience on speed and accuracy.

"Are We there yet?" - Android powered tangible educational geography game

and 2 collaborators

With the vast availability of Android devices around the world today, people have developed a number of different ways to communicate with and use their devices. These methods, however, tend to follow traditional patterns of interaction. To demonstrate the flexibility of the Android platform for developing a tangible user interface, we have developed a game that uses many Android sensors to give a real sense of feedback to the user. The game allows users to travel to a country by means of directional and tilt sensors and explore the world.

Automatic Detection and Classification of Ca2+ Release Events in Confocal Line- and Framescan Images

abstract text

Algo HW #1

and 2 collaborators

$f(n)=\sqrt{2^{7n}},\ g(n)=\lg(7^{2n})$

$f(n)=2^{7n/2},\ g(n)=2n\lg 7$. Taking $\lg$ of both sides: $f(n)=\lg(2^{7n/2})=7n/2$ and $g(n)=\lg(2n\lg 7)=\lg(2n)+\lg(\lg 7)$. Thus $f=\Omega(g)$.

$f(n)=2^{n\ln n},\ g(n)=n!$

Taking $\ln$ of both sides: $f(n)=n\ln n$ (up to a constant factor) and $g(n)=\ln(n!)=\Theta(n\lg n)$ (via a previously proved identity). Thus $f=\Theta(g)$.

$f(n)=\lg(\lg^{*} n),\ g(n)=\lg^{*}(\lg n)$. $f=O(g)$.

$f(n)=\frac{\lg n^2}{n},\ g(n)=\lg^{*} n$

$f=O(g)$ via limits: $f$ approaches 0.

$f(n)=2^{n},\ g(n)=n^{\lg n}$

Taking $\lg$ of both sides: $f(n)=n,\ g(n)=(\lg n)^{2}$. Thus $f=\Omega(g)$.

$f(n)=2^{\sqrt{\lg n}},\ g(n)=n(\lg n)^3$

Taking $\lg$ of both sides: $f(n)=(\lg n)^{1/2},\ g(n)=\lg n+\lg((\lg n)^{3})$. Thus $f=O(g)$.

$f(n)=e^{\cos n},\ g(n)=\lg n$

Taking $\ln$ of both sides: $f(n)=\cos n,\ g(n)=\ln(\lg n)$. Thus $f=O(g)$.

$f(n)=\lg n^{2},\ g(n)=(\lg n)^{2}$

$f(n)=2\lg n,\ g(n)=(\lg n)^{2}$. Thus $f=O(g)$.

$f(n)=\sqrt{4n^2-12n+9},\ g(n)=n^{3/2}$

$f(n)=2n-3$, which grows like $2n$, while $g(n)=n^{3/2}$. Thus $f=O(g)$.

$f(n)=\sum_{k=1}^{n} k,\ g(n)=(n+2)^2$

$f(n)=\frac{n(n+1)}{2}$ via the summation formula, and $g(n)=(n+2)^{2}$. Thus $f=\Theta(g)$.
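As a quick numerical sanity check of the last comparison (a sketch, not a proof; the helper functions below are my own), the ratio $f(n)/g(n)$ should settle near a positive constant if $f=\Theta(g)$:

```python
# Numeric sanity check for f(n) = sum_{k=1}^{n} k versus g(n) = (n+2)^2.
# If f = Theta(g), the ratio f(n)/g(n) approaches a positive constant.

def f(n):
    return n * (n + 1) // 2  # closed form of the summation

def g(n):
    return (n + 2) ** 2

for n in (10, 100, 1000, 10_000):
    print(n, f(n) / g(n))  # the ratio tends toward 1/2
```

The limit of the ratio is $1/2$, consistent with $f=\Theta(g)$.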

Report

This is a summary of the paper *The Milky Way Has No Distinct Thick Disk* by \cite{Bovy_Rix_Hogg_2012}.

Traditionally the stars within the disks of spiral galaxies are considered to form two distinct populations. One population, termed the “thin disk”, is generally comprised of young and metal-rich stars, while the other population of older and more metal-poor stars makes up the “thick disk”. The paper from \citet{Bovy_Rix_Hogg_2012} challenges this assumption of bi-modality within the Galactic disk and argues for a “continuous and monotonic scale-height distribution”.

Exact Solutions in the 3+1 Split

This began as some documentation Erik Schnetter wrote for the Penn State Maya code. I wanted to enter some simple sample (3+1 splits of) exact space-times. If you see an obvious error, or have something to suggest, let me know.

There is, of course, a definite bias towards black hole space-times. I may add cosmological ones when/if I get the chance.

A few words about notation: In what follows, Greek indices such as *α*, *β*, *μ*, *ν* are four-vector indices and run from 0 to 3. Latin indices such as *i*, *j*, *k*, *l*, *m*, *n* are three-vector indices and run from 1 to 3.

When using spherical polar coordinates, in general *R* will denote the standard “areal” radial coordinate; when dealing with a conformally flat solution, I’ll use *r* to denote the radial coordinate, as then it will *not* be areal. Additionally, I often use the letter *q* to denote the cylindrical polar quantity $\sqrt{x^2 + y^2} = r \, \sin\theta$; most references I know use the Greek letter *ρ* for this purpose, but I’ve found *ρ* to be used for too many other purposes.

Laboration 1.3, Group 28

and 1 collaborator

This is a hand-in written by **Mazdak Farrokhzad** and **Niclas Alexandersson** in **group 28** for the 3rd assignment on Lab1.

The assignment is about analysing an algorithm written in 3 different ways, all of which are functionally equivalent. The methods are available in the appendix.

a. Description of what the algorithm does

b. Complexity analysis

c. Testing and numerical analysis

Look & Listen 2014

Overview of the material that will be covered at the “Look & Listen” School 2014

We will first cover some basic principles of stellar evolution, with a particular emphasis on the physics of massive stars. Then we will focus on the phenomena that are currently believed to prominently influence the life of massive stars and determine the conditions at the pre-SN stage. Informed by the most recent observational results, we will also focus on some unsolved problems in stellar physics and how they could impact the death of massive stars and their stellar remnants.

Tuesday 14 Jan

Principles of stellar evolution (20+10)

Massive Stars evolution: Main Sequence and He-Burning (20+10)

Massive Stars evolution: Late evolutionary stages (20+10)

Thursday 16 Jan

Stellar Rotation (20+10)

Stellar Rotation and Magnetic Fields (20+10)

Mass loss (20+10)

Friday 17 Jan

Massive Binary Stars (20+10)

Very Massive Stars (20+10)

Progenitors of NS/BH, SNe, PISNe, GRBs (20+10)

Strong Lens Time Delay Challenge: I. Experimental Design

and 7 collaborators

**Abstract**: The time delays between point-like images in gravitational lens systems can be used to measure cosmological parameters as well as probe the dark matter (sub-)structure within the lens galaxy. The number of lenses with measured time delays is growing rapidly due to dedicated efforts. In the near future, the upcoming *Large Synoptic Survey Telescope* (LSST) will monitor ∼10^{3} lens systems consisting of a foreground elliptical galaxy producing multiple images of a background quasar. In an effort to assess the present capabilities of the community to accurately measure the time delays in strong gravitational lens systems, and to provide input to dedicated monitoring campaigns and future LSST cosmology feasibility studies, we pose a “Time Delay Challenge” (TDC). The challenge is organized as a set of “ladders,” each containing a group of simulated datasets to be analyzed blindly by participating independent analysis teams. Each rung on a ladder consists of a set of realistic mock observed lensed quasar light curves, with the rungs’ datasets increasing in complexity and realism to incorporate a variety of anticipated physical and experimental effects. The initial challenge described here has two ladders, TDC0 and TDC1. TDC0 has a small number of datasets, and is designed to be used as a practice set by the participating teams as they set up their analysis pipelines. The non-mandatory deadline for completion of TDC0 will be December 1, 2013. The teams that perform sufficiently well on TDC0 will then be able to participate in the much more demanding TDC1. TDC1 will consist of 10^{3} light curves, a sample designed to provide the statistical power to make meaningful statements about the sub-percent accuracy that will be required to provide competitive Dark Energy constraints in the LSST era.
In this paper we describe the simulated datasets in general terms, lay out the structure of the challenge and define a minimal set of metrics that will be used to quantify the goodness-of-fit, efficiency, precision, and accuracy of the algorithms. The results for TDC1 from the participating teams will be presented in a companion paper to be submitted after the closing of TDC1, with all TDC1 participants as co-authors.

Ordering My Thoughts (Summer 2013)

In this document I outline my ideas and goals for a handful of projects to work on over the summer (2013). The purpose is to maintain ordered thoughts and to be constructive/directed about tackling problems.

**I have abandoned these questions (at least for the meantime) to pursue some more “useful” work.**

Transforming Scholarly Communication

and 1 collaborator

The Transforming Scholarly Communication workshop was largely the brainchild of Lee Dirks, of the Microsoft Research "Connections" group. Lee passed away in August 2012, and we hope that all of the good outcomes of this workshop serve forever as a tribute to Lee.

What to Keep and How to Analyze It: Data Curation and Data Analysis with Multiple Phases

and 12 collaborators

This open document is being used to describe and record the events at the Radcliffe Exploratory Seminar on Data Curation and Analysis, to be held at the Radcliffe Institute for Advanced Study, May 9–10, 2013.

This Google Drive Directory should be used to deposit all files contributed by participants before and during the meeting. (Click "Open in Drive" on your browser to make a new folder, e.g. with your name as its name.)

This Google Doc is used for collaborative real-time note-taking.

**ABSTRACT:** Rapid advances in technology have allowed us to collect vast amounts of data in myriad fields and forms, but our ability to manage and analyze these data has not kept pace. As a result, the amount of data collected far exceeds what can be analyzed and, often, what can be archived. These issues only become more pressing as data collection accelerates. Astronomers and astrophysicists, for example, collect terabytes of data per night; the phrase “drowning in a data tsunami” is increasingly used to describe this situation. The issues of what to keep and what to distribute are surprisingly complex, even when we put aside technological issues such as long-term storage and retrieval. A central challenge is the fundamental conflict between reducing the size of data and preserving information for future scientific inquiries and statistical analyses. Complicating matters further, the parties/teams involved in the entire data collection, curation, and analysis process often have only limited communication with each other owing to the sequential nature of this process. This seminar brings together a core group of leading experts and emerging scholars in information and natural sciences to discuss, debate, and design principles and strategies to address this grand challenge, which increasingly affects almost every aspect of science and society.

**GOAL:** By gathering experts from information and natural sciences, we aim to start building a set of principles and methods that will allow us to understand such problems and to provide better preprocessing, analyses, and data preservation, especially in the context of the natural sciences. The ultimate goals of this research include providing methods for assessing the validity of such collaborative analyses, guidance on statistically-principled preprocessing, and a rich new theory of statistical learning and inference with multiple parties. We believe that this collaboration will simultaneously sow the seeds for innovative mathematical theory and shed light on directly usable guidelines for the construction and curation of scientific databases.

Ideal Gas Law Simulation Report

The Ideal Gas Law describes the characteristics of an ideal gas in a container. Often written as $PV = nRT$, this law displays the relationship between pressure, volume, temperature, the number of moles of gas, and the universal gas constant in a system. The Ideal Gas Law can be derived by combining three other gas laws: Boyle’s Law, Charles’s Law, and Avogadro’s Law.

Boyle’s Law postulates that in a system with uniform temperature, the pressure of an ideal gas is inversely proportional to the volume of the gas. Thus, the pressure times the volume is equal to a constant value in the system, often shown as $PV = k$ (where $k$ is the constant). Since the constant is the same no matter the circumstances in the system, the law can be used to relate changes in pressure or volume as $P_{1}V_{1} = P_{2}V_{2}$ (where 1 indicates the initial state and 2 the final state).

Charles’s Law states that in a system with uniform pressure, the volume of the container holding the ideal gas is directly proportional to the temperature ($V \propto T$). Since this law applies to any variation in volume or temperature, it can be written as $\frac{V_{1}}{T_{1}} = \frac{V_{2}}{T_{2}}$.

Avogadro’s Law declares that at the same temperature and pressure, equal volumes of any ideal gas contain an equal number of particles. Mathematically, the relationship can be shown as $\frac{V}{n} = k$ (where $k$ is the constant in the system).

These three laws can be combined mathematically to create $\frac{PV}{Tn} = R$ (where $R$ is a constant in the system). When rearranged, this creates $PV = nRT$, or the Ideal Gas Law.
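As a minimal illustration of the rearranged law $P = nRT/V$ (my own sketch in Python; the function name and example values are assumptions, not part of the lab):

```python
R = 8.314  # universal gas constant, J/(mol*K)

def ideal_gas_pressure(n_moles, temperature_k, volume_m3):
    """Pressure in pascals from the Ideal Gas Law, P = nRT / V."""
    return n_moles * R * temperature_k / volume_m3

# 1 mol at 273.15 K in 22.4 L (0.0224 m^3) gives about 1 atm (~101 kPa).
print(ideal_gas_pressure(1.0, 273.15, 0.0224))
```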

In this lab, I set out to create a 3D simulation of ideal gas particles in a cubic container in order to experimentally determine the pressure of the gas based on given circumstances. From there, I planned to explore the relationship between pressure and volume as well as pressure and number of particles. To produce a simulation (a replication of a real-world circumstance using programming) of a gas particle, it is first necessary to understand exactly how particles affect the pressure of a system. Pressure is the amount of force over a specific area, also written as $P = F/A$. Force can also be described as change in momentum over change in time: $F = \frac{\Delta p}{\Delta t}$. The change in momentum of a single particle equals its mass multiplied by its change in velocity: $\Delta p = m \Delta v$. Since there is more than one particle in a system, the total change in momentum is the combined change in velocities of each particle that hits the specified area. Thus, the following formula can be used to determine total force:

$F = \frac{2m \displaystyle\sum\limits_{i=1}^{n} v_i}{\Delta t}$, where $n$ is the number of collisions and $v_i$ is the velocity of the $i$-th particle hitting the wall. Since an elastic collision reverses the particle's velocity, the change in velocity is double the initial velocity, so the 2 can be placed outside the summation along with the mass.

Once the force has been computed from the momentum of the particles, the pressure can then be determined with the initial formula $P = F/A$.
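The bookkeeping described above can be sketched as follows (this is my own illustrative toy in Python, not the lab's actual 3D code, and all parameter values are assumed). Tracking motion along one axis is enough to show how summed momentum transfers at a wall turn into a pressure:

```python
import random

random.seed(0)

L = 1.0       # side length of the cubic box (m)
A = L * L     # area of one wall (m^2)
m = 4.65e-26  # particle mass (kg), roughly one N2 molecule
N = 500       # number of particles
dt = 1e-5     # time step (s)
steps = 2000

# Positions and velocities along x; each bounce off the right wall
# transfers a momentum of 2*m*|v| to that wall.
xs = [random.uniform(0, L) for _ in range(N)]
vs = [random.gauss(0, 400.0) for _ in range(N)]  # thermal-ish speeds (m/s)

impulse = 0.0  # running sum of momentum delivered to the wall at x = L
for _ in range(steps):
    for i in range(N):
        xs[i] += vs[i] * dt
        if xs[i] >= L:                     # elastic bounce off the right wall
            impulse += 2 * m * abs(vs[i])  # delta p = 2 m v per collision
            xs[i] = 2 * L - xs[i]
            vs[i] = -vs[i]
        elif xs[i] <= 0:                   # left wall: reflect without counting
            xs[i] = -xs[i]
            vs[i] = -vs[i]

force = impulse / (steps * dt)  # F = (total momentum transfer) / (elapsed time)
pressure = force / A            # P = F / A
print(pressure)
```

With realistic particle counts this toy pressure is tiny; a real gas has ~10^{23} particles, which is why macroscopic pressures emerge from the same per-collision accounting.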