Public Articles
Global TB Report 2015: Technical appendix on methods used to estimate the global burden of disease caused by TB
and 4 collaborators
Estimates of the burden of disease caused by TB and measured in terms of incidence, prevalence and mortality are produced annually by WHO using information gathered through surveillance systems (case notifications and death registrations), special studies (including surveys of the prevalence of disease), mortality surveys, surveys of under-reporting of detected TB and in-depth analysis of surveillance data, expert opinion and consultations with countries. This document provides case definitions and describes the methods used in Global TB Report 2015 to derive TB incidence, prevalence and mortality.
Incidence is defined as the number of new and recurrent (relapse) episodes of TB (all forms) occurring in a given year. Recurrent episodes are defined as a new episode of TB in people who have had TB in the past and for whom there was bacteriological confirmation of cure and/or documentation that treatment was completed. In the remainder of this technical document, relapse cases are referred to as recurrent cases because the term is more useful when explaining the estimation of TB incidence. Recurrent cases may be true relapses or a new episode of TB caused by reinfection. In current case definitions, both relapse cases and patients who require a change in treatment are called retreatment cases. However, people with a continuing episode of TB that requires a treatment change are prevalent cases, not incident cases.
Prevalence is defined as the number of TB cases (all forms) at a given point in time.
Mortality from TB is defined as the number of deaths caused by TB in HIV-negative people occurring in a given year, according to the latest revision of the International classification of diseases (ICD-10). TB deaths among HIV-positive people are classified as HIV deaths in ICD-10. For this reason, estimates of deaths from TB in HIV-positive people are presented separately from those in HIV-negative people.
The case fatality rate is the risk of death from TB among people with active TB disease.
The case notification rate refers to the number of new and recurrent episodes of TB notified to WHO for a given year. The case notification rate for new and recurrent TB is important in the estimation of TB incidence. In some countries, however, information on treatment history may be missing for some cases. Patients reported in the unknown-history category are considered incident TB episodes (new or recurrent).
Regional analyses are generally undertaken for the six WHO regions (that is, the African Region, the Region of the Americas, the Eastern Mediterranean Region, the European Region, the South-East Asia Region and the Western Pacific Region). For analyses related to MDR-TB, nine epidemiological regions were defined (Figure [fig:epiregions]). These were African countries with high HIV prevalence, African countries with low HIV prevalence, Central Europe, Eastern Europe, high-income countries, Latin America, the Eastern Mediterranean Region (excluding high-income countries), the South-East Asia Region (excluding high-income countries) and the Western Pacific Region (excluding high-income countries).
Risk of Bias Assessments in Ophthalmology Systematic Reviews and Meta-Analyses
and 2 collaborators
Introduction
In order for systematic reviews to make accurate inferences concerning clinical therapy, the primary studies that constitute the review must provide valid results. The Cochrane Handbook for Systematic Reviews states that assessment of validity is an “essential component” of a review that “should influence the analysis, interpretation, and conclusions of the review”(p. 188) \cite{higgins2008cochrane}. The internal validity of a review’s primary studies must be considered to ensure that bias has not compromised the results, leading to inaccurate estimates of summary effect sizes.
In ophthalmology, there is a need for closer examination of the validity of the primary studies comprising a review. As an illustrative example, Chakrabarti et al. (2012) discussed emerging ophthalmic treatments for proliferative (PDR) and nonproliferative diabetic retinopathy (NPDR), noting that anti-vascular endothelial growth factor (VEGF) agents consistently received recognition as a possible alternative treatment for diabetic retinopathy. Treatment guidelines from the Scottish Intercollegiate Guidelines Network and the American Academy of Ophthalmology consider anti-VEGF treatment merely useful as an adjunct to laser for treatment of PDR; however, the Malaysian guidelines indicate that these same agents are to be considered in combination with intraocular steroids and vitrectomy. Most extensively, the National Health and Medical Research Council guidelines recommend the addition of anti-VEGF to laser therapy prior to vitrectomy \cite{Chakrabarti_2012}. The evidence base informing these guidelines comprises trials of questionable quality. Martinez-Zapata et al. (2014) conducted a systematic review of anti-VEGF treatment for diabetic retinopathy, which included 18 randomized controlled trials (RCTs). Of these trials, seven were at high risk of bias, while the rest were unclear in one or more domains. The authors concluded, “there is very low or low quality evidence from RCTs for the efficacy and safety of anti-VEGF agents when used to treat PDR over and above current standard treatments” \cite{martinez2014anti}. Thus, low-quality evidence provides less confidence regarding the efficacy of treatment, casts doubt on guidelines advocating its use, and impairs clinicians’ ability to make sound judgements regarding treatment.
Over the years, researchers have conceived many methods in an attempt to evaluate the validity or methodological quality of primary studies. Initially, checklists and scales were developed to evaluate whether particular aspects of experimental design, such as randomization, blinding, or allocation concealment, were incorporated into the study. These approaches have been criticized for falsely elevating quality scores: many of these scales and checklists include items that have no bearing on the validity of study findings, such as whether investigators used informed consent or whether ethical approval was obtained \cite{7743790}. Furthermore, with the proliferation of quality appraisal scales, it was found that the choice of scale could alter the results of systematic reviews due to weighting differences of scale components \cite{10493204}. Two such scales, the Jadad scale (also called the Oxford Scoring System) \cite{8721797} and the Downs and Black checklist \cite{9764259}, were among the popular alternatives. Quality of Reporting of Meta-analyses (QUOROM) \cite{Moher_1999}, the dominant reporting guideline at that time, called for the evaluation of the methodological quality of the primary studies in systematic reviews. This recommendation was short-lived, as the Cochrane Collaboration began to advocate a new approach to assessing the validity of primary studies. This new method assesses the risk of bias in six particular design features of primary studies, with each domain receiving a rating of low, unclear, or high risk of bias \cite{higgins2008cochrane}. Following suit, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), the updated reporting guideline, now calls for the evaluation of bias in all systematic reviews \cite{19622511}.
A previous review examining primary studies from multiple fields of medicine revealed that the failure to incorporate an assessment of methodological quality can result in the implementation of interventions founded on misleading evidence \cite{588948720011204}. Yet, questions remain regarding the assessment of quality and risk of bias in clinical specialties. Therefore, we examined ophthalmology systematic reviews to determine the degree to which methodological quality and risk of bias assessments were conducted. We also evaluated the particular method used in the evaluation, the quality components comprising these assessments, and how systematic reviewers integrated primary studies with low quality or high risk of bias into their results.
Common and phylogenetically widespread coding for peptides by bacterial small RNAs
and 5 collaborators
Background:
While eukaryotic noncoding RNAs have recently received intense scrutiny, it is becoming clear that bacterial transcription is at least as pervasive. Bacterial small RNAs and antisense RNAs (sRNAs) are often assumed to be noncoding, due to their lack of long open reading frames (ORFs). However, there are numerous examples of sRNAs encoding small proteins, whether or not they also have a regulatory role at the RNA level.
Results:
Here, we apply flexible machine learning techniques based on sequence features and comparative genomics to quantify the prevalence of sRNA ORFs under natural selection to maintain protein-coding function in phylogenetically diverse bacteria. A majority of annotated sRNAs have at least one ORF between 10 and 50 amino acids long, and we conservatively predict that 188 ± 25.5 unannotated sRNA ORFs are under selection to maintain coding, an average of 13 per species considered here. This implies that overall at least 7.5 ± 0.3% of sRNAs have a coding ORF, and in some species at least 20% do. 84 ± 9.8 of these novel coding ORFs have some antisense overlap to annotated ORFs. As experimental validation, many of our predictions are translated in ribosome profiling data and are identified via mass spectrometry shotgun proteomics. B. subtilis sRNAs with coding ORFs are enriched for high expression in biofilms and confluent growth, and S. pneumoniae sRNAs with coding ORFs are involved in virulence. sRNA coding ORFs are enriched for transmembrane domains and many are novel components of type I toxin/antitoxin systems.
Conclusions:
We predict over a dozen new protein-coding genes per bacterial species, but crucially also quantified the uncertainty in this estimate. Our predictions for sRNA coding ORFs, along with novel type I toxins and tools for sorting and visualizing genomic context, are freely available in a user-friendly format at http://disco-bac.web.pasteur.fr. We expect these easily-accessible predictions to be a valuable tool for the study not only of bacterial sRNAs and type I toxin-antitoxin systems, but also of bacterial genetics and genomics.
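As a minimal illustration of the first step such a pipeline must take (not the authors' actual machine-learning method), the sketch below enumerates candidate ORFs of 10–50 codons on the forward strand of an sRNA sequence. The start/stop codon sets and the demo sequence are illustrative assumptions.

```python
STARTS = {"ATG", "GTG", "TTG"}   # common bacterial start codons (assumed set)
STOPS  = {"TAA", "TAG", "TGA"}

def candidate_orfs(seq, min_aa=10, max_aa=50):
    """Return (start, end, n_aa) for forward-strand ORFs whose translated
    length (including the start Met, excluding the stop) is within range."""
    seq = seq.upper()
    found = []
    for frame in range(3):
        i = frame
        while i + 3 <= len(seq):
            if seq[i:i + 3] in STARTS:
                # Scan downstream, in frame, for the first stop codon.
                for j in range(i + 3, len(seq) - 2, 3):
                    if seq[j:j + 3] in STOPS:
                        n_aa = (j - i) // 3
                        if min_aa <= n_aa <= max_aa:
                            found.append((i, j + 3, n_aa))
                        break
            i += 3
    return found

# A toy sRNA: one 10-codon ORF spanning the whole sequence.
demo = "ATG" + "GCA" * 9 + "TAA"
print(candidate_orfs(demo))  # [(0, 33, 10)]
```

A real analysis would additionally scan the reverse complement and, as in the paper, score each candidate with sequence features and comparative genomics rather than length alone.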
Final Draft Lab 3 (LL and NZ): Determination of Carrier Density through Hall Measurements and Determination of Transition Temperature (\(T_c\)) in a High-\(T_c\) Superconductor
and 1 collaborator
An electromagnet was used to provide a magnetic field to three different conducting samples: n-type germanium (n-Ge), p-type germanium (p-Ge), and silver (Ag). A calibrated Hall probe was used to obtain the current ($\vec{I}_{mag}$) to magnetic field ($\vec{B}$) calibration of the iron-core electromagnet. The Hall voltages ($V_H$) produced by each of the three samples were plotted against $B$, and a linear relationship was observed, as expected. The slope ($\frac{\Delta V_H}{\Delta B}$) of each graph was used to calculate the Hall coefficient for each sample, which we found to be $-4.99\cdot 10^{-3}\pm 0.0998 \cdot 10^{-3}\frac{\textrm{Vm}}{\textrm{AT}}$, $5.64 \cdot 10^{-3}\pm 0.11 \cdot 10^{-3} \frac{\textrm{Vm}}{\textrm{AT}}$, and $-2.24 \cdot 10^{-10}\pm 0.04 \cdot 10^{-10} \frac{\textrm{Vm}}{\textrm{AT}}$, respectively. These do not agree within uncertainty with the values of $-5.6\cdot 10^{-3}\frac{\textrm{Vm}}{\textrm{AT}}$ for n-Ge, $6.6\cdot 10^{-3}\frac{\textrm{Vm}}{\textrm{AT}}$ for p-Ge, and $-8.9\cdot 10^{-11}\frac{\textrm{Vm}}{\textrm{AT}}$ for silver given by the manufacturer. Using the Hall coefficients, we found the carrier densities to be $-1.25 \cdot 10^{21}\pm 0.025 \cdot 10^{21}\,\textrm{m}^{-3}$ for n-Ge, $1.11 \cdot 10^{21}\pm 0.02 \cdot 10^{21}\,\textrm{m}^{-3}$ for p-Ge, and $-2.79 \cdot 10^{28}\pm 0.06 \cdot 10^{28}\,\textrm{m}^{-3}$ for silver, all of which are of the same order of magnitude as the given absolute values of $1.2 \cdot 10^{21}\,\textrm{m}^{-3}$, $1.1 \cdot 10^{21}\,\textrm{m}^{-3}$, and $6.6 \cdot 10^{28}\,\textrm{m}^{-3}$.
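The relations used above can be sketched as follows. The thickness and current passed to `hall_coefficient` in the usage line are hypothetical placeholders, not the lab's actual values; only the n-Ge Hall coefficient quoted in the abstract is reused.

```python
# Sketch of the Hall analysis: V_H = R_H * I * B / t, so the slope of the
# V_H-vs-B graph gives R_H = (dV_H/dB) * t / I, and the signed carrier
# density follows from n = 1 / (R_H * e).
E_CHARGE = 1.602176634e-19  # C, elementary charge (CODATA exact value)

def hall_coefficient(slope_v_per_tesla, thickness_m, current_a):
    """Hall coefficient from the fitted slope of V_H vs B."""
    return slope_v_per_tesla * thickness_m / current_a

def carrier_density(r_hall):
    """Signed carrier density n = 1/(R_H e); the sign gives the carrier type."""
    return 1.0 / (r_hall * E_CHARGE)

# Hypothetical sample geometry (t = 1 mm, I = 0.2 A), for illustration only:
r_h = hall_coefficient(1.0, 1e-3, 0.2)  # 5e-3 Vm/(AT)

# Reproduce the n-Ge carrier density quoted in the abstract:
n_ge = carrier_density(-4.99e-3)
print(f"{n_ge:.3e} m^-3")  # about -1.25e21 m^-3, matching the reported value
```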
A current was applied to the superconductor Bi$_2$Sr$_2$Ca$_2$Cu$_3$O$_{10}$, which was cooled in liquid nitrogen until it became superconducting and then allowed to warm slowly. Its voltage and temperature were monitored during warming, and these data were used to produce a graph of voltage against temperature. The graph showed a transition temperature of about $118 \pm 2$ K, somewhat above the provided critical temperature of 108 K.
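A minimal sketch of how a transition temperature can be read off such a warming curve, using the half-plateau crossing with linear interpolation. The data here are idealized for illustration, not the lab's measurements.

```python
def transition_temperature(temps_k, volts):
    """Estimate T_c as the temperature where V rises through half of its
    normal-state plateau, using linear interpolation between samples."""
    v_half = max(volts) / 2.0
    pairs = list(zip(temps_k, volts))
    for (t0, v0), (t1, v1) in zip(pairs, pairs[1:]):
        if v0 < v_half <= v1:
            return t0 + (v_half - v0) * (t1 - t0) / (v1 - v0)
    return None  # no transition found in the data

# Idealized V(T) in mV: zero in the superconducting state, ramping to a
# 1.0 mV normal-state plateau as the sample warms through the transition.
temps = [100, 110, 114, 116, 118, 120, 125]
volts = [0.0, 0.0, 0.0, 0.2, 0.8, 1.0, 1.0]
print(transition_temperature(temps, volts))  # 117.0 K, midpoint of the ramp
```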
Final Lab Report 3 (AV and EK): Earth’s Field NMR
and 1 collaborator
We examined the relationship between magnetization, polarizing field time, magnetic field and precession frequency using a 125 mL sample of water and the TeachSpin Earth’s Field NMR instrument. Through varying these different parameters, we could determine the Larmor precession frequency of protons within Earth’s field, spin-lattice relaxation time, and the gyromagnetic ratio for protons. We found the Larmor precession frequency to be 1852 ± 18 Hz corresponding to a local magnetic field of 43.3 ± 0.3μT due to Earth’s magnetic field, the spin-lattice relaxation time to be 2.15 ± 0.05 s, and the gyromagnetic ratio to be $(2.65\pm0.04) \cdot 10^8~\frac{1}{s\cdot T}$, agreeing with the known value of $2.68\cdot 10^8~\frac{1}{s\cdot T}$.
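The field value quoted above follows from inverting the Larmor relation \(f = \gamma B / 2\pi\). A small sketch, using the accepted proton gyromagnetic ratio cited in the abstract:

```python
import math

# Larmor precession: f = gamma * B / (2*pi), inverted to recover the local
# magnetic field from the measured precession frequency.
GAMMA_P = 2.68e8  # rad s^-1 T^-1, accepted proton value cited in the abstract

def field_from_larmor(freq_hz, gamma=GAMMA_P):
    """Magnetic field B = 2*pi*f / gamma, in tesla."""
    return 2 * math.pi * freq_hz / gamma

b_earth = field_from_larmor(1852)
print(f"{b_earth * 1e6:.1f} uT")  # ~43.4 uT, consistent with the reported 43.3 +/- 0.3 uT
```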
Mapping stellar content to dark matter halos. II. Halo mass is the main driver of galaxy quenching
and 1 collaborator
\label{sec:intro}
The quenching of galaxies, namely, the relatively abrupt shutdown of star formation activities, gives rise to two distinctive populations of quiescent and active galaxies, most notably manifested in the strong bimodality of galaxy colours \citep{strateva2001, baldry2006}. The underlying driver of quenching, whether it be stellar mass, halo mass, or environment, should produce an equally distinct split in the spatial clustering and weak gravitational lensing between the red and blue galaxies. Recently, \citet[][hereafter Paper I]{zm15} developed a powerful statistical framework, called the model, to interpret the spatial clustering (i.e., the projected galaxy autocorrelation function $w_p$) and the galaxy-galaxy (g-g) lensing (i.e., the projected surface density contrast $\ds$) of the overall galaxy population in the Sloan Digital Sky Survey \citep[SDSS;][]{york2000}, while establishing a robust mapping between the observed distribution of stellar mass and that of the underlying dark matter halos. In this paper, by introducing two empirically-motivated and physically-meaningful quenching models within this framework, we hope to robustly identify the dominant driver of galaxy quenching, while providing a self-consistent framework to explain the bimodality in the spatial distribution of galaxies.
Galaxies cease to form new stars and become quenched when there is no cold gas. Any physical process responsible for quenching has to operate in one of the three following modes: 1) it heats the gas to high temperatures and stops hot gas from cooling efficiently; 2) it depletes the cold gas reservoir via secular stellar mass growth or sudden removal by external forces \citep[e.g., tidal and ram pressure;][]{gunn1972}; or 3) it turns off the gas supply by slowly shutting down accretion \citep[e.g., strangulation;][]{balogh2000}. However, due to the enormous complexity in the formation history of individual galaxies, multiple quenching modes may play a role in the history of quiescent galaxies. Therefore, it is more promising to focus on the underlying physical driver of the average quenching process, which is ultimately tied to either the dark matter mass of the host halos, the galaxy stellar mass, or the small/large-scale environmental density in which the galaxies reside, hence the so-called “halo”, “stellar mass”, and “environment” quenching mechanisms, respectively.
Halo quenching has provided one of the most coherent quenching scenarios from the theoretical perspective. In halos above some critical mass ($M_{\mathrm{shock}}{\sim}10^{12}\hmsol$), virial shocks heat gas inflows from the intergalactic medium, preventing the accreted gas from directly fueling star formation. Additional heating from, e.g., the active galactic nuclei (AGNs) then maintains the gas coronae at high temperature \citep{croton2006}. For halos with $\mh < M_{\mathrm{shock}}$, the incoming gas is never heated to the virial temperature due to rapid post-shock cooling, therefore penetrating the virial boundary into the inner halo as cold flows. This picture, featuring a sharp switch from efficient stellar mass buildup via filamentary cold flows into low-mass halos to the halt of star formation due to quasi-spherical hot-mode accretion in halos above $M_{\mathrm{shock}}$, naturally explains the colour bimodality, particularly the paucity of galaxies transitioning from the blue, star-forming population to the red sequence of quiescent galaxies \citep{cattaneo2006, dekel2006}. To first order, halo quenching does not discriminate between centrals and satellites, as both are immersed in the same hot gas coronae that inhibit star formation. However, since the satellites generally lived in lower mass halos before their accretion and may have retained some cold gas after accretion, the dependence of satellite quenching on halo mass should have a softer transition across $M_{\mathrm{shock}}$, unless the quenching by hot halos is instantaneous.
Observationally, by studying the dependence of the red galaxy fraction $f\red$ on stellar mass $\ms$ and galaxy environment $\delta_{\mathrm{5NN}}$ (i.e., measured using the distance to the 5th nearest neighbour) in both the Sloan Digital Sky Survey (SDSS) and zCOSMOS, \citet[][hereafter P10]{peng2010} found that $f\red$ can be empirically described by the product of two independent trends with $\ms$ and $\delta_{\mathrm{5NN}}$, suggesting that both stellar mass and environment quenching are at play. By using a group catalogue constructed from the SDSS spectroscopic sample, \citet{peng2012} further argued that, while stellar mass quenching is ubiquitous in both centrals and satellites, environment quenching mainly applies to the satellite galaxies.
However, despite the empirically robust trends revealed in P10, the interpretations of both the stellar mass and environment trends are obscured by the complex relation between the two observables and other physical quantities. In particular, since the observed $\ms$ of central galaxies is tightly correlated with halo mass $\mh$ (with a scatter of ∼0.22 dex; see Paper I), a stellar mass trend of $f\red$ is almost indistinguishable from an underlying trend with halo mass. By examining the inter-relation among $\ms$, $\mh$, and $\delta_{\mathrm{5NN}}$, \citet{woo2013} found that the quenched fraction is more strongly correlated with $\mh$ at fixed $\ms$ than with $\ms$ at fixed $\mh$, and that the satellite quenching by $\delta_{\mathrm{5NN}}$ can be re-interpreted as halo quenching by taking into account the dependence of the quenched fraction on the distances to the halo centres. The halo quenching interpretation of the stellar mass and environment quenching trends is further supported by \citet{gabor2015}, who implemented halo quenching in cosmological hydrodynamic simulations by triggering quenching in regions dominated by hot ($10^{5.4}\,$K) gas. They reproduced a broad range of the empirical trends detected in P10 and \citet{woo2013}, suggesting that halo mass remains the determining factor in the quenching of low-redshift galaxies.
Another alternative quenching model is the so-called “age-matching” prescription of \citet{hearin2013} and its recently updated version in \citet{hearin2014}. Age-matching is an extension of the “subhalo abundance matching” \citep[SHAM;][]{conroy2006} technique, which assigns stellar masses to individual halos (both main halos and subhalos) in N-body simulations based on halo properties such as the peak circular velocity \citep{reddick2013}. In practice, after assigning $\ms$ using SHAM, the age-matching method further matches the colours of galaxies at fixed $\ms$ to the ages of their matched halos, so that older halos host redder galaxies. In essence, the age-matching prescription effectively assumes a stellar mass quenching, as the colour assignment is done at fixed $\ms$ regardless of halo mass or environment, with a secondary quenching via halo formation time. Therefore, the age-matching quenching is very similar to the $\ms$-dominated quenching of P10, except that the second variable is halo formation time rather than galaxy environment.
The key difference between the $\mh$- and $\ms$-dominated quenching scenarios lies in the way central galaxies become quiescent. One relies on the stellar mass while the other on the mass of the host halos, producing two very different sets of colour-segregated stellar-to-halo relations (SHMRs). At fixed halo mass, if stellar mass quenching dominates, the red centrals should have a higher average stellar mass than the blue centrals; in the halo quenching scenario the two coloured populations at fixed halo mass would have similar average stellar masses, but there is still a trend for massive galaxies to be red because higher mass halos host more massive galaxies. This difference in SHMRs directly translates to two distinctive ways the red and blue galaxies populate the underlying dark matter halos according to their $\ms$ and $\mh$, hence two different spatial distributions of galaxy colours.
Therefore, by comparing the $w_p$ and $\ds$ predicted from each quenching model to the measurements from SDSS, we expect to robustly distinguish the two quenching scenarios. The framework we developed in Paper I is ideally suited for this task. It is a global “halo occupation distribution” (HOD) model defined on a 2D grid of $\ms$ and $\mh$, which is crucial to modelling the segregation of red and blue galaxies in their $\ms$ distributions at fixed $\mh$. The quenching constraint is fundamentally different from, and ultimately more meaningful than, approaches in which colour-segregated populations are treated independently \citep[e.g.,][]{tinker2013, puebla2015}. Our quenching model automatically fulfills the consistency relation requiring that the sum of the red and blue SHMRs be mathematically identical to the overall SHMR. More importantly, the quenching model employs only four additional parameters that are directly related to average galaxy quenching, while most traditional approaches require ∼20 additional parameters, rendering the interpretation of constraints difficult. Furthermore, the framework allows us to include ∼80% more galaxies than traditional HODs and to take into account the incompleteness of stellar mass samples in a self-consistent manner.
This paper is organized as follows. We describe the selection of the red and blue samples in Section [sec:data]. In Section [sec:model] we introduce the parameterisations of the two quenching models and derive the HODs for each colour. We also briefly describe the signal measurements and model predictions in Sections [sec:data] and [sec:model], respectively, but refer readers to Paper I for more details. The constraints from both quenching model analyses are presented in Section [sec:constraint]. We perform a thorough model comparison using two independent criteria in Section [sec:result] and find that the halo quenching model is strongly favored by the data. In Section [sec:physics] we discuss the physical implications of the halo quenching model and compare it to other works in Section [sec:compare]. We conclude by summarising our key findings in Section [sec:conclusion].
Throughout this paper and Paper I, we assume a $\lcdm$ cosmology with $(\Omega_m, \Omega_\Lambda, \sigma_8, h) = (0.26, 0.74, 0.77, 0.72)$. All the length and mass units in this paper are scaled as if the Hubble constant were $100\,\kms\mpc^{-1}$. In particular, all the separations are co-moving distances in units of either $\hkpc$ or $\hmpc$, and the stellar mass and halo mass are in units of $\hhmsol$ and $\hmsol$, respectively. Unless otherwise noted, the halo mass is defined by $\mh\,{\equiv}\,M_{200m}\,{=}\,200\bar{\rho}_m(4\pi/3)r_{200m}^3$, where $r_{200m}$ is the corresponding halo radius within which the average density of the enclosed mass is 200 times the mean matter density of the Universe, $\bar{\rho}_m$. For the sake of simplicity, $\ln x = \log_e x$ is used for the natural logarithm, and $\lg x = \log_{10} x$ is used for the base-10 logarithm.
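The halo mass definition above can be inverted to give the halo radius. A short sketch in the paper's h-scaled units, assuming the standard critical density $2.775\times10^{11}\,h^2 M_\odot\,\mathrm{Mpc}^{-3}$ and the paper's $\Omega_m = 0.26$:

```python
import math

# M_200m = 200 * rho_m_bar * (4*pi/3) * r_200m^3, inverted for r_200m.
# Units follow the paper's convention: masses in Msun/h, lengths in Mpc/h,
# so rho_crit = 2.775e11 (Msun/h) per (Mpc/h)^3 independent of h.
RHO_CRIT = 2.775e11  # (Msun/h) / (Mpc/h)^3
OMEGA_M = 0.26       # value adopted in the paper

def r200m(m_halo, omega_m=OMEGA_M):
    """Comoving radius enclosing 200x the mean matter density, in Mpc/h."""
    rho_m_bar = omega_m * RHO_CRIT
    return (3.0 * m_halo / (800.0 * math.pi * rho_m_bar)) ** (1.0 / 3.0)

print(f"{r200m(1e12):.3f} Mpc/h")  # ~0.255 Mpc/h for a 1e12 Msun/h halo
```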
Peeragogy Pattern Catalog
and 5 collaborators
\label{sec:Introduction}
This paper outlines an approach to the organization of learning that draws on the principles of free/libre/open source software (FLOSS), free culture, and peer production. Mako Hill suggests that one recipe for success in peer production is to take a familiar idea – for example, an encyclopedia – and make it easy for people to participate in building it \cite[Chapter 1]{mako-thesis}. We will take hold of “learning in institutions” as a map (Figure [madison-map]), although it does not fully conform to our chosen, tacitly familiar territory of peeragogy. To be clear, peeragogy is for any group of people who want to learn anything.
Despite thinking about learning and adaptation that may take place far outside of formal institutions, the historical conception of a university helps give shape to our inquiry. The model university is not separate from the life of the state or its citizenry, but aims to “assume leadership in the application of knowledge for the direct improvement of the life of the people in every sphere”. Research that adds to the store of knowledge is another fundamental obligation of the university. The university provides a familiar model for collaborative knowledge work, but it is not the only model available. Considering the role of collaboration in building Wikipedia, StackExchange, and free/libre/open source software development, we may be led to ask: What might an accredited free/libre/open university look like? How would it compare or contrast with the typical or stereotypical image of a university from Figure [madison-map]? Would it have similar structural features, like a Library, Dormitory, Science Hall and so on? Would participants take on familiar roles? How would it compare with historical efforts like the Tuskegee Institute that involved students directly in the production of physical infrastructure \cite{washington1986up,building-peeragogy-accelerator}?
We use the word peeragogy to talk about collaboration in relatively non-hierarchical settings. Examples are found in education, but also in business, government, volunteer, and NGO settings. Peeragogy involves both problem solving and problem definition. Indeed, in many cases it is preferable to focus on solutions, since people know the “problems” all too well \cite{ariyaratneXorganizationX1977}. Participants in a peeragogical endeavor collaboratively build emergent structures that are responsive to their changing context, and that in turn, change that context. In the Peeragogy project, we are developing the theory and practice of peeragogy.
Design patterns offer a methodological framework that we have used to clarify our focus and organize our work. A design pattern expresses a commonly-occurring problem, a solution to that problem, and rationale for choosing this solution \cite{meszaros1998pattern}. This skeleton is typically fleshed out with a pattern template that includes additional supporting material; individual patterns are connected with each other in a pattern language. What we present here is rather different from previous pattern languages that touch on similar topics – like Liberating Voices \cite{schuler2008liberating}, Pedagogical Patterns \cite{bergin2012pedagogical}, and Learning Patterns \cite{iba2014learning}. At the level of the pattern template, our innovation is simply to add a “What’s next” annotation, which anticipates the way the pattern will continue to “resolve”.
This addition mirrors the central considerations of our approach, which is all about human interaction, and the challenges, fluidity and unpredictability that come with it. Something that works for one person may not work for another, or may not even work for the same person in a slightly different situation. We need to be ready to clarify and adjust what we do as we go. Even so, it is hard to argue with a sensible-sounding formula like “If W applies, do X to get Y.” In our view, other pattern languages often achieve this sort of common-sense rationality, and then stop. Failure in the prescriptive model only begins when people try to define things more carefully and make context-specific changes – when they actually try to put ideas into practice. The problem lies in the inevitable distance between do as I say, do as I do, and do with me. If people are involved, things get messy. They may think that they are on the same page, only to find out that their understandings are wildly different. For example, everyone may agree that the group needs to go “that way.” But how far? How fast? It is rare for a project to be able to set or even define all of the parameters accurately and concisely at the beginning. And yet design becomes a “living language” just insofar as it is linked to action. Many things have changed since Alexander suggested that “you will get the most ‘power’ over the language, and make it your own most effectively, if you write the changes in, at the appropriate places in the book”. We see more clearly what it means to inscribe the changing form of design not just in the margins of a book, or even a shared wiki, but in the lifeworld itself. Other recent authors on patterns share similar views \cite{reiners2012approach, plast-project, schummer2014beyond}.
Learning and collaboration are of interest to both organizational studies and computer science, where researchers are increasingly making use of social approaches to software design and development, as well as agent-based models of computation \cite{minsky1967programming,poetry-workshop}. The design pattern community in particular is very familiar with practices that we think of as peeragogical, including shepherding, writers workshops, and design patterns themselves \cite{harrison1999language,coplien1997pattern,meszaros1998pattern}. We hope to help design pattern authors and researchers expand on these strengths.
Motivation for using this pattern.
Context of application.
Forces that operate within the context of application, each with a mnemonic glyph.
Problem the pattern addresses.
Solution to the problem.
Rationale for this solution.
Resolution of the forces, named in bold.
Example 1: How the pattern manifests in current Wikimedia projects.
Example 2: How the pattern could inform the design of a future university.
What’s Next in the Peeragogy Project: How the pattern relates to our collective intention in the Peeragogy project.
Table [tab:pattern-template] shows the pattern template that we use throughout the paper. Along with the traditional design patterns components \cite{meszaros1998pattern}, each of our patterns is fleshed out with two illustrative examples. The first is descriptive, and looks at how the pattern applies in current Wikimedia projects. We selected Wikimedia as a source of examples because the project is familiar, a demonstrated success, and readily accessible. The second example is prospective, and shows how the pattern could be applied in the design of a future university. Each pattern concludes with a boxed annotation: “What’s Next in the Peeragogy Project”.
Section [sec:Peeragogy] defines the concept of peeragogy more explicitly, in the form of a design pattern. Sections [sec:Roadmap]–[sec:Scrapbook] present the other patterns in our pattern language. Figure [fig:connections] illustrates their interconnections. Table [tab:core] summarizes the “nuts and bolts” of the pattern language. Section [sec:Distributed_Roadmap] collects our “What’s Next” steps and summarizes the outlook of the Peeragogy project. Section [sec:Conclusion] reviews the contributions of the work as a whole.
When one newcomer was still in the onboarding process in the Peeragogy project, she hit a wall in understanding the “patterns” section of the Peeragogy Handbook v1. A more seasoned peer invited her to a series of separate discussions to flesh out the patterns and make them more accessible. At that time the list of patterns was simply a list of paragraphs describing recurrent trends. During those sessions, the impact and meaning of patterns captured her imagination. She went on to become the champion for the pattern language and its application in the Peeragogy project. During a “hive editing” session, she proposed the template we initially used to give structure to the patterns. She helped further revise the pattern language for the Peeragogy Handbook v3, and attended PLoP 2015. While a new domain can easily be overwhelming, this newcomer found a place to start, and scaffolded her knowledge and contributions from that foundation.
Table [tab:core]: The “nuts and bolts” of the pattern language.

[sec:Peeragogy]. How can we find solutions together? Get concrete about what the real problems are.
[sec:Roadmap]. How can we get everyone on the same page? Build a plan that we keep updating as we go along.
[sec:Reduce, reuse, recycle]. How can we avoid undue isolation? Use what’s there and share what we make.
[sec:Carrying capacity]. How can we avoid becoming overwhelmed? Clearly express when we’re frustrated.
[sec:A specific project]. How can we avoid becoming perplexed? Focus on concrete, doable tasks.
[sec:Wrapper]. How can people stay in touch with the project? Maintain a summary of activities and any adjustments to the plan.
[sec:Heartbeat]. How can we make the project “real” for participants? Keep up a regular, sustaining rhythm.
[sec:Newcomer]. How can we make the project accessible to new people? Let’s learn together with newcomers.
[sec:Scrapbook]. How can we maintain focus as time goes by? Move things that are not of immediate use out of focus.
Determination of the Boltzmann Constant through Measurement of Johnson Noise and Determination of the Elementary Electron Charge through Measurement of Shot Noise
and 1 collaborator
Two sources of noise, Johnson noise and shot noise, are investigated in this experiment. Johnson noise, the voltage fluctuation across a resistor arising from the random thermal motion of electrons, is measured using the Noise Fundamentals box. The noise was measured across different resistances and at different bandwidths at room temperature, yielding Boltzmann constant values of (1.4600 ± 0.0054) ⋅ 10−23 m2 kg s−2 K−1 and (1.4600 ± 0.0052) ⋅ 10−23 m2 kg s−2 K−1. Shot noise arises from the quantization of charge and was measured by varying the current in the system, from which we calculated an electron charge of (1.649 ± 0.007) ⋅ 10−19 Coulombs. These values agree reasonably well with the accepted values of 1.38064852 ⋅ 10−23 m2 kg s−2 K−1 for the Boltzmann constant and 1.602 ⋅ 10−19 C for the electron charge. Errors are discussed.
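The Boltzmann-constant extraction rests on the Johnson noise (Nyquist) relation ⟨V²⟩ = 4k_B T R Δf. The following is a minimal illustrative Python sketch, not the authors' analysis code; the temperature, bandwidth, and data points are synthetic assumptions, showing only how k_B falls out of the slope of ⟨V²⟩ versus R:

```python
# Johnson noise (Nyquist): <V^2> = 4 * kB * T * R * df.
# Synthetic data for illustration; T and df are assumptions.
kB_true = 1.380649e-23   # J/K, used only to generate the fake data
T = 293.0                # K, room temperature (assumed)
df = 1.0e4               # Hz, measurement bandwidth (assumed)

resistances = [1e3, 1e4, 1e5, 1e6]                           # ohms
v_squared = [4 * kB_true * T * R * df for R in resistances]  # V^2

# Least-squares slope through the origin: slope = sum(x*y) / sum(x*x)
slope = sum(R * v for R, v in zip(resistances, v_squared)) \
        / sum(R * R for R in resistances)
kB_est = slope / (4 * T * df)   # recovers kB from the slope
```

In the real experiment the fit is done on measured noise voltages; the sketch only makes the slope-to-constant arithmetic explicit.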
Cosmic Ray Decay
and 1 collaborator
Cosmic particles are found everywhere in the Universe, produced by various high- and low-energy interactions. Muon decay and gamma decay are two of the most frequently studied decays, because muons are among the most common particles and gamma rays are produced throughout the Universe by many different types of radioactive decay. We used scintillators to detect the two different decays. For the gamma decay, the goal was to identify an unknown radioactive sample and to determine the ages of two Cesium-137 samples, using Cesium-137 and Cobalt-60 to calibrate the energies. After analysis, we found that the unknown sample given to us was Sodium-22. We also found that one of the Cesium-137 samples was 17.583 years old and the other was 37.80 years old. The goal for the muon decay was to analyze long-term data to see if we could calculate the time dilation effect commonly measured using muons.
Earth’s Field NMR: first draft
and 1 collaborator
We examined the relationship between magnetization, polarizing field time, magnetic field and precession frequency using a 125 mL sample of water and the TeachSpin Earth’s Field NMR instrument. Through varying these different parameters, we could determine the Larmor precession frequency of protons within Earth’s field, spin-lattice relaxation time, and the gyromagnetic ratio for protons. We found the Larmor precession frequency to be 1852 ± 18 Hz corresponding to a local magnetic field of 43.3 ± 0.3μT due to Earth’s magnetic field, the spin-lattice relaxation time to be 2.15 ± 0.05 s, and the gyromagnetic ratio to be $(2.65\pm0.04) \cdot 10^8~\frac{1}{s\cdot T}$, agreeing with the known value of $2.68\cdot 10^8~\frac{1}{s\cdot T}$.
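As a quick consistency check on the reported numbers (our arithmetic, not the authors' fitting procedure), the Larmor relation f = γB/(2π) lets one recover the gyromagnetic ratio from the quoted frequency and local field:

```python
import math

# Larmor relation: f = gamma * B / (2 * pi). Recovering gamma from the
# reported precession frequency and local field.
f = 1852.0      # Hz, measured Larmor precession frequency
B = 43.3e-6     # T, reported local magnetic field
gamma = 2 * math.pi * f / B   # about 2.69e8 1/(s*T)
```

The result is close to both the fitted value of 2.65 ⋅ 10⁸ and the known value of 2.68 ⋅ 10⁸ 1/(s·T), as expected.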
Focusing on Interest: Do High School Students Like the Idea of Helping Astronomers Revive Data in “oldAstronomy”
and 1 collaborator
Internet technologies make it ever easier to share data globally, enabling a dramatic proliferation of online “citizen science” projects. One new project, called “oldAstronomy,” is in development by the Zooniverse team, based at Chicago’s Adler Planetarium, in collaboration with the WorldWide Telescope Ambassadors program at Harvard. The goal of the project is to restore hidden metadata to images in published astronomical articles, some more than 100 years old, making the images useful to researchers. In this paper, I investigate a possible role for high school students in the oldAstronomy project. Using two focus groups, one at Milton School and one at Cambridge Rindge and Latin School, I investigate which aspects of participating in oldAstronomy would be of most interest: connections to real data? To real scientists? Connecting to other students worldwide? Viewing interesting images? Researching a topic related to images encountered? Before they were surveyed, it was explained to the focus group students that requirements for their participation in oldAstronomy would include digesting a scientific paper, summarizing its results, and either writing a summary that is understandable to the general public or completing a more creative final project. Results show that students are very interested in working with real data and in the beauty and meaning of images. However, the results also show that students are, perhaps surprisingly, not interested in collaborating and communicating with other students, either in person (as group work) or online. In response to students’ negative reactions to group work, a reproduction of the peer-review process could offer similar benefits in place of a group final paper. Student feedback also indicated interest in alternative forms of final assessment.
The results of our study suggest that instead of a standard write up, students can create: a 3D model of their object; a website about it; or a WorldWide Telescope tour.
Johnson and Shot Noise. First Draft
and 1 collaborator
Two sources of noise, Johnson noise and shot noise, are investigated in this experiment. Johnson noise is the voltage fluctuation across a resistor arising from the random thermal motion of electrons. It was measured across different resistances and at different bandwidths at room temperature, yielding Boltzmann constant values of 1.46 ⋅ 10−23 m2 kg s−2 K−1 ± 2.5 ⋅ 10−21 m2 kg s−2 K−1 and 1.46 ⋅ 10−23 m2 kg s−2 K−1 ± 2.6 ⋅ 10−21 m2 kg s−2 K−1. Shot noise arises from the quantization of charge and was measured by varying the current in the system, from which we calculated an electron charge of 1.64 ⋅ 10−19 ± 7.0 ⋅ 10−22 C. These values agree reasonably well with the accepted values of 1.38064852 ⋅ 10−23 m2 kg s−2 K−1 for the Boltzmann constant and 1.602 ⋅ 10−19 C for the electron charge. Errors are discussed.
Determination of Carrier Density through Hall Measurements and Determination of Transition Temperature (\(T_c\)) in a High-\(T_c\) Superconductor
and 1 collaborator
Common Mistakes in Writing Proofs: A Cardinality Quiz as a Case Study
This short article uses problems from a quiz to point out mistakes that are commonly made when writing proofs. Since everyone writes differently, what is discussed here is only a general guide; please judge for yourself whether the issues pointed out are mistakes you have made.
True-or-false questions are simple to answer: if the statement is true, give a proof; if it is false, give a counterexample. But they are also the hardest type to write up, because deciding whether a statement is true is itself not easy. Setting that aside, the most common problem in true-or-false answers is, strictly speaking, not an error but a lack of focus in the write-up. Consider the following example.
If x and y are integers of the same parity, then xy and (x + y)2 are of the same parity. (Two integers are of the same parity if they are both odd or both even.)
This proposition is false, so we need only give a counterexample.
Solution.
Let x = y = 1. Then x and y are of the same parity. However, xy = 1 and (x + y)2 = 4 are of distinct parities. ▫
Some students treat the cases where x and y are both odd and where they are both even separately, and then note that the first case contradicts the conclusion, so the proposition is false. This is not wrong, but it adds a pile of unnecessary discussion.
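The counterexample can also be found mechanically. A short illustrative Python search over small same-parity pairs:

```python
# Brute-force search for counterexamples: same-parity pairs (x, y)
# where x*y and (x + y)**2 have different parities.
def same_parity(m, n):
    return m % 2 == n % 2

counterexamples = [(x, y)
                   for x in range(1, 5)
                   for y in range(1, 5)
                   if same_parity(x, y)
                   and not same_parity(x * y, (x + y) ** 2)]
# Every odd-odd pair works, e.g. (1, 1): x*y = 1 is odd, (x+y)**2 = 4 is even.
```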
The second common mistake is confusing operators, for example conflating set difference with subtraction of real numbers.
Let A and B be two sets. If A \ B = B \ A, then A \ B = ⌀.
This proposition is true, so we must give a proof. A common incorrect approach is to use the argument $$ A \setminus C = B \setminus C \quad \Rightarrow \quad A = B $$
(Incorrect attempt)
Since A \ B = A \ (A ∩ B) and B \ A = B \ (A ∩ B), we have \begin{align*}
A \setminus (A \cap B) = B \setminus (A \cap B) \quad \Rightarrow \quad A = B \quad \Rightarrow \quad A \setminus B = \varnothing
\end{align*}
The English term for the set operation is “difference”, while subtraction of real numbers is “minus”; the names alone suggest that these are two different operations, so the inference $$ a - c = b - c \quad \Rightarrow \quad a = b $$ does not apply to sets. Indeed, if we let A = {1, 2, 3}, B = {1, 2} and C = {3}, then A \ C = B \ C but A ≠ B. The correct approach is as follows.
Proof.
Suppose that A \ B ≠ ⌀. Let x ∈ A \ B. Then x ∈ A but x ∉ B. Since A \ B = B \ A, it is seen that x ∈ B but x ∉ A. This shows that x ∈ A and x ∉ A, which is a contradiction. Hence A \ B = ⌀. ▫
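Both the failed cancellation and the proposition itself can be checked on small sets; an illustrative Python check:

```python
from itertools import chain, combinations

# The failed cancellation: A \ C = B \ C does not imply A = B.
A, B, C = {1, 2, 3}, {1, 2}, {3}
assert A - C == B - C and A != B

# Exhaustive check of the proposition over all subsets of {1, 2, 3}:
# whenever X \ Y = Y \ X, the difference X \ Y is empty.
universe = [1, 2, 3]
subsets = [set(s) for s in chain.from_iterable(
    combinations(universe, r) for r in range(len(universe) + 1))]
violations = [(X, Y) for X in subsets for Y in subsets
              if X - Y == Y - X and X - Y != set()]
# violations is empty, as the proof guarantees.
```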
This is probably the most common issue, and also the hardest to calibrate. How much explanation counts as rigorous varies from person to person, but in any case, writing in more detail at least cannot hurt. From the standpoint of an exam, the general principle is: any proposition that has not appeared in the textbook or in class must be proved.
For every positive irrational number b, there is an irrational number a such that 0 < a < b.
This problem is easy: most students directly take $a = \frac{b}{2}$ and consider the proof complete. There is indeed little doubt about 0 < a < b, but whether a is irrational still needs to be verified; please do not omit this part.
Another common mistake is to take the conclusion to be proved as given and then derive a statement that is always true. Here is an example.
n3 + 1 > n2 + n for every integer n ≥ 2.
(Incorrect attempt)
Suppose that n3 + 1 > n2 + n for every integer n ≥ 2. Then \begin{align*}
n^3 + 1 > n^2 + n \quad &\Rightarrow \quad n^3 + 1 - n^2 - n > 0 \\
&\Rightarrow \quad n^2(n-1) - (n-1) > 0 \\
&\Rightarrow \quad (n-1)(n^2-1) > 0 \\
&\Rightarrow \quad (n-1)^2 (n+1) > 0
\end{align*} The last inequality is always true for every integer n ≥ 2. ▫
Although the approach above is wrong, it is not entirely useless, because it points the way to a correct proof. Written from the last line backwards, it becomes correct. In other words, this mistaken write-up is really the thought process behind the correct proof; it just needs to be used in the right direction.
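Reversing the chain of implications above gives a valid forward proof:

```latex
\begin{proof}
Let $n \ge 2$ be an integer. Then $(n-1)^2 > 0$ and $n + 1 > 0$, so
\begin{align*}
(n-1)^2 (n+1) > 0 \quad
  &\Rightarrow \quad (n-1)(n^2-1) > 0 \\
  &\Rightarrow \quad n^2(n-1) - (n-1) > 0 \\
  &\Rightarrow \quad n^3 + 1 - n^2 - n > 0 \\
  &\Rightarrow \quad n^3 + 1 > n^2 + n. \qedhere
\end{align*}
\end{proof}
```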
Franck-Hertz Experiment for Neon and Argon: final draft
and 1 collaborator
The goal of the Franck-Hertz experiment is to demonstrate that neon and argon atoms have only discrete, quantized energy states.
Earth’s Field NMR: first draft
We performed an experiment to examine the relationship between magnetization, polarizing field time, magnetic field and precession frequency using a 125 mL sample of water and the TeachSpin Earth’s Field NMR instrument. By varying these different parameters, we could determine the Larmor precession frequency of protons within Earth’s field, the spin-lattice relaxation time, and the gyromagnetic ratio for protons. We found the Larmor precession frequency to be 1852 ± 18 Hz, the spin-lattice relaxation time to be 2.15 ± 0.05 s, and the gyromagnetic ratio to be $(2.65\pm0.04) \cdot 10^8~\frac{1}{s\cdot T}$, agreeing with the known value of $2.68\cdot 10^8~\frac{1}{s\cdot T}$.
Observing Convectively Excited Gravity Modes in Main Sequence Stars
Abstract
This paper primarily focuses on a study by \cite{Shiode_2013} of how gravity modes can be excited by convection in massive main sequence stars. The first portion of this paper explains the more commonly understood mechanism by which gravity modes are driven by adiabatic expansion at the core, and why gravity modes produced this way are so difficult to observe. The second part briefly covers the methods for detecting gravity modes and the observational challenges involved. The third portion examines models that \cite{Shiode_2013} constructed using the MESA stellar evolution code to estimate mode frequencies, excitation amplitudes, and where in the stellar interiors of main sequence stars of various sizes gravity modes would propagate. The final portion looks at future advances in detecting gravity modes and promising observations from the Kepler space telescope.
Measurement of Faraday Rotation in SF57 Glass at 670 nm: Third and Final Draft
and 1 collaborator
We performed an experiment to measure the Faraday rotation of polarized light passing through a magnetic field, and to measure the Verdet constant of an SF57 glass tube 0.1 m in length. Our results are consistent with the basic picture of Faraday rotation, in which linearly polarized light is rotated when a magnetic field is applied. We used three different methods to find the Verdet constant: a direct fit, a slope fit, and a lock-in method. The values we found are $21\pm 5 \frac{radians}{T \cdot m}$, $21.095\pm0.003 \frac{radians}{T \cdot m}$ and $20.43\pm0.06 \frac{radians}{T \cdot m}$ respectively, and these values are in general agreement with one another.
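The underlying relation is θ = V·B·L. A minimal Python sketch (synthetic data, not our lab analysis) of recovering a Verdet constant from the slope of rotation angle versus field:

```python
# Faraday rotation: theta = V * B * L. Recover the Verdet constant V
# from the slope of angle versus field. Synthetic data for
# illustration, not the measured points.
V_true = 21.0    # rad/(T*m), near the reported values
L = 0.1          # m, length of the SF57 sample

fields = [0.01, 0.02, 0.03, 0.04]            # T
angles = [V_true * B * L for B in fields]    # rad

# Least-squares slope through the origin, then divide out the length.
slope = sum(B * th for B, th in zip(fields, angles)) \
        / sum(B * B for B in fields)
V_est = slope / L
```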
Critique of Occupancy Schedules Learning Process Through a Data Mining Framework
CS emission near MIR-bubbles
and 3 collaborators
Prior to post-main-sequence evolution, ionizing radiation is one of the most important mechanisms by which massive stars influence their surrounding environments. For example, ionizing radiation potentially triggers subsequent star-formation. The influences of massive stars are observed in the form of bubble-shaped emission in the 8 μm band of the Spitzer-GLIMPSE survey of the Galactic Plane \cite{Benjamin_2003}. \citet{Churchwell_2006,Churchwell_2007} observed bubble-shaped 8 μm emission to be common throughout the Galactic plane. \citet{Watson_2008, Watson_2009} found 24 μm and 20 cm emission centered within the 8 μm emission. They interpreted the bubbles seen in the GLIMPSE data as caused by hot stars which ionize their surroundings, creating 20 cm free-free emission, and at larger distances excite PAHs, creating 8 μm emission. \citet{Deharveng_2010} also interpreted the bubbles as classical HII regions.
\citet{Watson_2010} used 2MASS and GLIMPSE photometry and SED-fitting to analyze the young stellar object (YSO) population around 46 bubbles and found that about a quarter showed an overabundance of YSOs near the boundary between the ionized interior and the molecular exterior. Bubbles with an overabundance of YSOs along the bubble-ISM boundary are a potentially excellent set of sources for studying the mechanisms of triggered star formation. Star formation along the bubble rims may be triggered by the expanding ionization and shock fronts created by the hot star. Star formation triggered by previous generations of stars is known to occur, but the specific physical mechanism is still undetermined. The collect-and-collapse model \cite{Elmegreen_1977} describes ambient material swept up by the shock fronts that eventually becomes gravitationally unstable, resulting in collapse. Other mechanisms, however, have been proposed. Radiatively-driven implosion \cite{1994A&A...289..559L}, for example, describes clumps already present in the ambient material whose contraction is aided by the external radiation from the hot star.
The method of identifying YSOs through SED-fitting used in \citet{Watson_2010}, however, is limited. Robitaille & Whitney (2006) showed that YSO age is degenerate with the observer’s inclination angle. Briefly, an early-stage YSO and a late-stage YSO seen edge on, so the accretion or debris disk is observed as thick and blocking the inner regions, can appear similar, even in the infrared. Thus, we require other diagnostics of the YSOs along the bubble edge to determine the youngest, and most likely to have been triggered, YSOs.
We selected a subset of the YSOs identified as triggered star formation candidates by \citet{Watson_2010} for follow-up observations in the CS (1-0) transition near 49 GHz with the Green Bank Telescope (GBT). We sought evidence of infall, outflows or hot cores associated with these YSOs. CS is a probe of young star formation: it has been detected in outflows from protostars, in infall, in disks and in hot cores \cite{1997A&A...317L..55D,1996A&AS..115...81B,Morata_2012}. The chemistry is, naturally, complex, and it appears that CS can play several roles \cite{Beuther_2002}, such as tracing outflows \cite{Wolf_Chase_1998} or hot cores \cite{1997MNRAS.287..445C}. Our aim is to use CS as a broad identifier of young star formation and to use any non-Gaussian line shapes to infer molecular gas behavior.
After describing the CS survey and CS mapping observations (sec 2) and numerical results (sec 3), we analyze the Herschel-HiGAL emission toward all our sources to determine, along with our CS detections, the CS abundances (sec 4.1). We also analyze three sources for evidence of rapid infall (sec 4.2) and three mapped regions (sec 4.3). We end with a summary of the conclusions (sec 5).
The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
String Graphs
Let S be a finite set of curves in the plane. We say that two curves a, b ∈ S intersect if they have at least one point in common, and we write a ∩ b ≠ ∅. The graph G = (V, E) with V = S and ab ∈ E if and only if the curves a and b intersect is the intersection graph of S. A graph G is a string graph if there is a set S of curves in the plane such that G is the intersection graph of S; we then say that S is a curve representation of G. Since each curve corresponds to a vertex, we use the terms vertex and curve interchangeably.
Formalizing the definitions, a curve is a function $\gamma: I \rightarrow \R^2$ that is homeomorphic to its image, where I = [0, 1]. Two curves γ and δ intersect if there exist s, t ∈ I such that γ(s) = δ(t).
TODO: include examples
\label{lemma:prop} Every string graph has a string representation satisfying:
any two curves intersect a finite number of times,
at most two curves pass through any given point, and
if deg(u) ≤ 2, then the curve of u has exactly one intersection point with each of its neighbors.
TODO: trivial?!
We can always modify the curves locally around a point where three or more curves intersect so that only pairwise intersections remain.
There are two cases. First, if deg(u) = 1, it suffices to shorten the curve of u so that it has a single intersection with its only neighbor. If deg(u) = 2 and γ is the curve representing u, then there exist s < t ∈ I such that γ(s) is an intersection with one neighbor of u, γ(t) is an intersection with the other neighbor of u, and γ has no intersection within the interval (s, t). We can then replace the curve γ with γ′ = γ|[s − ϵ0, t + ϵ1] for some ϵ0, ϵ1 > 0.
We now show that the class of string graphs is a proper subclass of the class of all graphs, via the following result.
Given a graph G, let G′ be the graph obtained by subdividing every edge of G. Then G′ is a string graph if and only if G is planar.
If G is planar, then G′ is also planar, so G′ is a string graph (see [prop:planar]). Conversely, assume that G′ is a string graph, and let R be a curve representation of G′. The vertices of G within G′ are represented by curves that intersect only the curves representing the subdivision vertices. The latter, in turn, have exactly two intersections: one for each adjacent vertex of G. Thus, without loss of generality, we may assume that the intersections of these curves occur exactly at their endpoints. Contracting the curves that represent the vertices of G to single points yields a plane drawing of G. Therefore G is planar.
The graph obtained by subdividing the edges of K5 is not a string graph.
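This follows because K5 is not planar, which can be seen from the edge bound implied by Euler's formula: a simple planar graph on n ≥ 3 vertices has at most 3n − 6 edges, and K5 exceeds it. An illustrative check:

```python
# Euler's formula gives |E| <= 3|V| - 6 for simple planar graphs on
# at least 3 vertices. K5 violates the bound, so K5 is not planar,
# and by the theorem above its subdivision is not a string graph.
n = 5
edges_K5 = n * (n - 1) // 2   # complete graph on 5 vertices: 10 edges
planar_bound = 3 * n - 6      # 9
not_planar = edges_K5 > planar_bound   # True
```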
TODO: include figures
Final Project: Open Cluster Photometry and HR Diagram Final Report
Goal: The goal of this project is to analyze two different clusters (NGC 7160 and NGC 6940) by creating color magnitude diagrams and finding their respective isochrones. We do this to categorize the different stellar populations and thereby learn more about clusters. All of the data come from Smith College’s 16-inch telescope, and both clusters are analyzed in the B, V, and R filters.
Technique: To create the color magnitude diagram, we used IDL to reduce and analyze the original dataset. We used the biases, flats, and darks to fully reduce the data and obtain a clean image of the clusters (free of noise, internal telescope fluctuations, and hot pixels). We used standard stars (with their own darks, flats, and biases) to calculate the zero point, which is subtracted from the magnitudes of the stars. We then aligned and trimmed the images so that the B, V, and R frames were the same size and all of the stars were aligned. From there, we subtracted the background brightness, created the color magnitude diagram, and compared it to known isochrones.
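The reduction arithmetic described above can be sketched in a few lines. This is an illustrative Python version (the actual analysis was done in IDL); the per-pixel calibration formula (raw − bias − dark)/flat and the zero-point magnitude correction are the standard formulas assumed here:

```python
import math

# Illustrative sketch of CCD reduction arithmetic, not the IDL pipeline.
def calibrate(raw, bias, dark, flat):
    # (raw - bias - dark) / flat, pixel by pixel
    return [(r - b - d) / f for r, b, d, f in zip(raw, bias, dark, flat)]

def magnitude(flux, zero_point):
    # instrumental magnitude with the standard-star zero point removed
    return -2.5 * math.log10(flux) - zero_point
```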
Results: After analyzing the data and finding isochrones, we compared our data to published data from WEBDA. Both of our clusters are similar to the published data, and our isochrones fit our data very well. The ages and metallicities of the isochrones we used are also very similar to the published WEBDA values for both clusters, indicating that our data are accurate and our isochrones are a good match.