Public Articles
UZFor2015 - Timing Analysis Report
The data have been taken from \cite{Potter_2011}. All time stamps are in BJD using the TDB time scale, so no further time-scale transformations are needed. A total of 42 timing measurements exist. However, Potter et al. have not included the data points from Dai et al. (2010); see the Potter et al. (2011) text for details. As a starting point, this analysis considers the full set of timings as presented in Potter et al. (2011).
In general I am using IDL for the timing analysis. The cycle or ephemeris numbers have been obtained from IDL> ROUND((BJDMIN-TZERO)/PERIOD), where BJDMIN holds all 42 timing measurements, TZERO is an arbitrary reference timing that defines CYCLE = E = 0, and PERIOD is the binary orbital period (0.087865425 days) taken from \cite{Potter_2011}, Table 2. In this work I will use TZERO = BJD 2,450,021.779388. It differs slightly from the TZERO used in \cite{Potter_2011}, partly to introduce some variation and partly because I think my choice lies closer to the center of mass of the data points.
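For readers without IDL, the cycle-number computation can be sketched in Python; TZERO and PERIOD are the values quoted above, while `bjd_min` and `cycle_numbers` are my own hypothetical names standing in for BJDMIN and the IDL one-liner:

```python
# Hypothetical Python equivalent of the IDL cycle-number computation.
import numpy as np

TZERO = 2450021.779388   # reference epoch defining CYCLE = E = 0 (BJD)
PERIOD = 0.087865425     # binary orbital period (days)

def cycle_numbers(bjd_min):
    """Nearest-integer cycles, as in IDL's ROUND((BJDMIN-TZERO)/PERIOD)."""
    return np.rint((np.asarray(bjd_min) - TZERO) / PERIOD).astype(int)
```

For the reference timing itself this returns cycle 0; a timing 100 periods later returns cycle 100.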
As a first step I used IDL’s LINFIT routine to fit a straight line, with the MEASURE_ERROR keyword set to an array holding the timing measurement errors (Table 2, 3rd column, Potter et al. 2011). This way the squared deviations are weighted by 1/σ², where σ is the standard timing error of each measurement. This is standard procedure and was also used in Potter et al. (2011). The average or mean timing error of the 42 measurements is 6.0 seconds (the standard deviation is also 6.0 seconds), with 0.74 seconds as the smallest and 17 seconds as the largest error. I have also rescaled the timing measurements by subtracting the first timing measurement from all the others. Rescaling introduces nothing spooky to the analysis and has the advantage of avoiding dynamic-range problems; this is in particular needed for a later analysis using MPFIT. Using LINFIT, the resulting reduced χ² value was 95.22 (χ² = 3808.82 with (42-2) degrees of freedom) with the ephemeris (or computed timings) given as \begin{equation} T(E) = BJD~2450021.77890(6) + E \times 0.0878654291(1) \end{equation} The corresponding root-mean-square (RMS) scatter of the data around the best-fit line is 27.5 seconds, and the corresponding standard deviation is 27.7 seconds; as expected, the two are similar. To measure the scatter of data around any best-fit model, I will use the RMS quantity. The RMS scatter is about 5 times the average timing error and could be indicative of a systematic process.
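The weighted straight-line fit can be sketched in Python as a minimal stand-in for LINFIT with MEASURE_ERROR; the synthetic cycle numbers, the injected period, and the random seed below are illustrative choices of mine, not the actual data:

```python
# Minimal sketch of a 1/sigma^2-weighted straight-line fit (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
P_TRUE = 0.087865425                      # days (illustrative "true" period)
E = np.arange(42) * 500.0                 # synthetic cycle numbers
sigma = np.full(E.size, 6.0 / 86400.0)    # ~6 s timing errors, in days
t_obs = P_TRUE * E + rng.normal(0.0, sigma)

# Weighted least squares: minimise sum((t - T0 - P*E)^2 / sigma^2)
sw = 1.0 / sigma
A = np.vstack([np.ones_like(E), E]).T
coef, *_ = np.linalg.lstsq(A * sw[:, None], t_obs * sw, rcond=None)
T0_fit, P_fit = coef

resid = t_obs - (T0_fit + P_fit * E)
chi2 = np.sum((resid / sigma) ** 2)
red_chi2 = chi2 / (E.size - 2)                    # (42 - 2) degrees of freedom
rms_sec = np.sqrt(np.mean(resid ** 2)) * 86400.0  # RMS scatter in seconds
```

With purely Gaussian noise the reduced χ² comes out near unity; the value of 95.22 found for the real data is what signals excess scatter.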
As a test, the CURVEFIT routine has been used in a similar manner. The resulting reduced χ² was also 95.22, matching and confirming the LINFIT result from the previous section. Using the /NODERIVATIVE keyword does not change anything, and explicit expressions for the partial derivatives have been included. The RMS also agrees with the result obtained from LINFIT. However, the formal 1σ uncertainties in the best-fit parameters (TZERO and PERIOD) are one order of magnitude smaller than the equivalent values obtained from LINFIT. The data and the best-fit line (obtained from LINFIT) are shown in Fig. [linearfit], with the residuals plotted in Fig. [linearfit_res]. There is absolutely no difference when using the results from CURVEFIT.
After fitting a straight line and visually inspecting the residual plots, I cannot see any convincing trend that would justify a quadratic ephemeris (linear plus a quadratic term). What I see is a sinusoidal variation around the best-fit line. Relative to the linear fit, the first timing measurement arrives 20 s earlier than expected. The trend then goes down, increases again to 40 s at E = 0, decreases again to a minimum of around 20 s, and increases thereafter. There is no obvious quadratic trend in the residuals in Fig. [linearfit_res].
Although there is no obvious reason to include a quadratic term, I will nevertheless consider a quadratic model. I will do this by again using IDL’s CURVEFIT procedure and the MPFIT package (also IDL), a more sophisticated fitting tool utilizing the Levenberg-Marquardt least-squares minimization algorithm developed by Markwardt.
The results from CURVEFIT are surprising. The best-fit χ² value was 3718.89, yielding a reduced χ² of 95.36 with (42-3) degrees of freedom. The RMS scatter of the residuals around the quadratic model fit was 31 seconds. This means that, in terms of reduced χ², the fit became worse compared to the linear ephemeris model. The resulting residual plot is shown in Fig. [quadfit_res]. The corresponding best-fit parameters, with formal uncertainties, for a quadratic ephemeris are \begin{eqnarray} T(E) &=& T + P \times E + A \times E^2 \\ &=& BJD~2450021.778895(6) + 0.0878654269(3) \times E + 4.3(5)\times 10^{-14} \times E^2 \end{eqnarray}
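As a sketch of this step (a stand-in for CURVEFIT/MPFIT, not the actual code used), a weighted quadratic-ephemeris fit can be done with numpy's polynomial fitter on synthetic, noise-only data; all numbers below are illustrative:

```python
# Sketch of a weighted quadratic-ephemeris fit T(E) = T0 + P*E + A*E^2.
import numpy as np

rng = np.random.default_rng(2)
E = np.arange(-21, 21) * 1000.0               # 42 synthetic cycle numbers
sigma = np.full(E.size, 6.0 / 86400.0)        # ~6 s errors, in days
t_obs = 0.087865425 * E + rng.normal(0.0, sigma)  # no quadratic term injected

# np.polyfit weights multiply the (unsquared) residuals, so pass 1/sigma
A2, P_fit, T0_fit = np.polyfit(E, t_obs, 2, w=1.0 / sigma)

resid = t_obs - np.polyval([A2, P_fit, T0_fit], E)
red_chi2 = np.sum((resid / sigma) ** 2) / (E.size - 3)  # three parameters
```

When no quadratic signal is present, the fitted coefficient A comes out consistent with zero, which mirrors the marginal 4.3(5) × 10⁻¹⁴ value found for the real data.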
I have also used MPFIT to fit a quadratic ephemeris to the Potter et al. (2011) timing data. The resulting χ² is 3718.94 with (42-3) degrees of freedom, yielding a reduced χ² of 95.36. This is essentially identical to the result obtained with CURVEFIT and thus confirms it independently. This is really surprising. The RMS scatter of the data around the quadratic ephemeris is around 31 seconds. I will not state the best-fit values for the three model parameters (and their uncertainties) as obtained from MPFIT.
Based on the above result I cannot see that the residuals relative to a linear ephemeris justify the inclusion of a secular term in the form of a quadratic ephemeris. The reduced χ² increases despite the extra parameter, which is not what is expected. I will now continue and fit 1- and 2-companion models.
We have considered a linear + 1-LTT model (excluding secular changes as described by a quadratic ephemeris). We have again used MPFIT for this task. The model is taken from Irwin (19??). We considered 10⁷ initial guesses. The initial guesses for the reference epoch and binary period were taken from the best fit of the linear ephemeris model. Initial guesses for the semi-amplitude of the light-time orbit were taken from an estimate of the amplitude shown in Fig. 2. Initial guesses for the eccentricity covered the interval [0, 0.9995], and those for the argument of pericenter the interval [0, 360] degrees. Initial guesses for the orbital period were also estimated from Fig. 2, and initial guesses for the time of pericenter passage were obtained from T0 and the orbital period of the light-time orbit. Initial guesses were drawn at random. The methodology follows the same techniques as described in Hinse et al. (2012). Best-fit parameter uncertainties were obtained from the best-fit covariance matrix as returned by MPFIT; these errors should be considered formal. The best fit had χ² = 185.2 with (42-7) degrees of freedom, resulting in a reduced χ²ν = 5.3. The corresponding RMS scatter of the data points around the best fit is 15.7 seconds. The best-fit parameters are listed in Table [BestFitParamsLinPlus1LTT] and shown in Fig. [BestFitModel_LinPlus1LTT]. Recalling that the average timing error (of the 42 measurements) is 6 seconds, the RMS residuals lie at a 2.6σ level.
Parameter | Value |
---|---|
T0 (BJD) | 2,450,021.77924 ± 3 × 10⁻⁵ |
P0 (days) | 0.0878654289 ± 2 × 10⁻¹⁰ |
asinI (AU) | 0.00043 ± 2 × 10⁻⁵ |
e | 0.65 ± 0.03 |
ω (radians) | 6.89 ± 0.04 |
Tp (BJD) | 2,408,616.0 ± 50 |
P (days) | 6020 ± 35 |
RMS (seconds) | 15.7 |
\label{BestFitParamsLinPlus1LTT}
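The linear + 1-LTT model can be sketched as follows. The delay term follows the classic Irwin light-travel-time formulation; the function names, the fixed-point Kepler solver, and all constants below are my own illustrative choices, not the IDL code actually used:

```python
# Sketch of a linear ephemeris plus one Irwin-style light-travel-time term.
import numpy as np

AU_LIGHT_DAYS = 499.004784 / 86400.0   # light travel time across 1 AU, in days

def ltt_delay(t, a1sini_au, e, omega, P_ltt, Tp):
    """LTT delay (days) at times t for a Keplerian companion orbit."""
    M = 2.0 * np.pi * (np.asarray(t, dtype=float) - Tp) / P_ltt
    # Solve Kepler's equation E - e*sin(E) = M by fixed-point iteration
    # (a contraction for e < 1, so it converges even for e ~ 0.7).
    Eanom = M.copy()
    for _ in range(200):
        Eanom = M + e * np.sin(Eanom)
    # True anomaly from the eccentric anomaly
    nu = 2.0 * np.arctan2(np.sqrt(1.0 + e) * np.sin(Eanom / 2.0),
                          np.sqrt(1.0 - e) * np.cos(Eanom / 2.0))
    K = a1sini_au * AU_LIGHT_DAYS      # semi-amplitude in days
    return K * ((1.0 - e**2) * np.sin(nu + omega) / (1.0 + e * np.cos(nu))
                + e * np.sin(omega))

def lin_plus_1ltt(E_cycle, T0, P0, a1sini_au, e, omega, P_ltt, Tp):
    """Computed eclipse times: linear ephemeris plus one LTT term."""
    t_lin = T0 + P0 * np.asarray(E_cycle, dtype=float)
    return t_lin + ltt_delay(t_lin, a1sini_au, e, omega, P_ltt, Tp)
```

For a circular orbit (e = 0) the delay reduces to K sin(ν + ω) with semi-amplitude K = a sin i / c, which is a quick sanity check on the implementation.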
At the present stage some inconsistencies were discovered in the timing uncertainties listed in Table 1 of Potter et al. (2011). For example, the timing uncertainty reported by \cite{Warren_1995} is 0.000023 days, while Potter et al. (2011) report 0.00003 and 0.00004 days. We tested the possibility that Potter et al. (2011) adopt timing uncertainties from the spread of data around a best-fit linear regression. However, that seems not to be the case: as a test, we used the five timing measurements from \cite{Beuermann1988} as listed in Table 1 of Potter et al. (2011). We fitted a straight line using CURVEFIT as implemented in IDL and found a scatter of 0.00004 to 0.00005 days, depending on the metric used to measure scatter around the best fit. The uncertainties quoted in Potter et al. (2011) are smaller by at least a factor of two. We conclude that Potter et al. (2011) must be in error when quoting timing uncertainties in their Table 1. Similar mistakes apply to the data listed in \cite{Ramsay1994}. Furthermore, after scrutinizing the literature for timing measurements of UZ For, we found several measurements that were omitted in Potter et al. (2011). For example, six eclipse timings were reported by \cite{BaileyCropper_1991} with a uniform uncertainty of 0.00006 days, but Potter et al. (2011) only report three of the six. Likewise, a total of five new timings were reported by \cite{Ramsay1994}, but only one was listed in Potter et al. (2011). We cannot come up with a good explanation why those extra timing measurements should be omitted or discarded. All of the extra data points were presented in the original works alongside data points that were used in the analysis of Potter et al. (2011).
In this research we make use of all timing measurements that have been obtained with reasonable accuracy. We have therefore recompiled all available timing measurements from the literature and list them in Table [NewTimingData]. The original HJD(UTC) time stamps from the literature were converted to the BJD(TDB) system using the on-line time utilities¹ \citep{Eastman_2010}. Not all sources of timing measurements provide explicit information on the time standard used. In such cases we assume that HJD time stamps are valid in the UTC standard. This assumption is to some extent justified, since the first timing measurement was taken in August 1983, when the UTC time standard for astronomical observations was already widespread. All new measurements presented in \cite{Potter_2011} were taken directly from their Table 1. Some remarks are in order. Having found additional timing measurements in the literature (otherwise omitted in Potter et al. 2011), we decided to follow a different approach to estimating timing uncertainties. For measurements that were taken over a short time period, one can determine a best-fit line and estimate timing uncertainties from the data scatter. The underlying assumption in this method is that no significant astrophysical signal (interaction between binary components or additional bodies) is contained in the timing measurements over a few consecutive observing nights. Therefore, the scatter around a linear ephemeris should be a reasonable measure of how well the timings were measured. In other words, only the first-order effect of a linear ephemeris is observed; higher-order eclipse timing variation effects are negligible for data sets obtained during a few consecutive nights. The advantage is that for a given data set the same telescope/instrument was used, and weather conditions were likely not to have changed much from night to night.
Furthermore, most likely the same technique was applied to infer the individual time stamps of a given data set. In Table [NewTimingData] we list the original uncertainties quoted in the literature as σ_lit. We also list the uncertainty obtained from the scatter of the data around a best-fit linear regression line. The corresponding reduced χ² statistic for each fit is tabulated in the third column. From the reduced χ² of each data set one can scale the corresponding uncertainties such that χ²ν = 1 is enforced \citep{Bevington2003Book}. This step is only permitted if high confidence in the applied model is justified; we think that this is the case when time stamps have been obtained over a short time interval. Ultimately, however, the timing uncertainty depends on the sampling of the eclipse event at a sufficiently high signal-to-noise ratio. The \cite{Imamura_1998} data set was split in two, since those time stamps were obtained from two observing runs each lasting a few days. Furthermore, we have calculated three data-scatter metrics around the best-fit line: a) the root-mean-square, b) the standard deviation, and c) the standard deviation as given by \cite{Bevington2003Book}, defined as \begin{equation} \sigma^2 = \frac{1}{N-2} \sum_{i=1}^{N}(y_{i} - a - bx_{i})^2 \label{BevEq6p15} \end{equation} where N is the number of data points, a and b are the two parameters of the straight line, and (x_i, y_i) is a given timing measurement at a given epoch. We have tested the dependence of the scatter on the weights used and found no difference in the scatter metrics when applying a weight of unity for all measurements. Finally, some additional details need to be mentioned. We only inferred new timing uncertainties for data sets with more than two measurements. For a given data set we used the published ephemeris (orbital period) to calculate the eclipse epochs. For the time stamps presented in \cite{BaileyCropper_1991} no ephemeris was stated.
We therefore used their eclipse cycle numbers as the independent variable to calculate a best-fit line. The reference epoch in each fit was placed in or near the middle of the data set. Two data points were discarded in the present analysis. We removed one time stamp from \cite{Ferrario_1989} owing to its excessively large timing uncertainty. Another time stamp was removed from the new data presented in Potter et al. (2011), namely BJD(TDB) 2,454,857.36480850. This eclipse is duplicated, as it was also observed with the much larger SALT/BVIT instrument, resulting in a lower timing error. We therefore use only the SALT/BVIT measurement, leaving a total of 54 time stamps in the present analysis. The average or mean timing error of the 54 measurements is 5.7 seconds (the standard deviation is 6.5 seconds), with 0.33 seconds as the smallest and 26.5 seconds as the largest error. We have again rescaled the timing measurements by subtracting the first time stamp from all the others; rescaling introduces nothing spooky to the analysis and avoids dynamic-range problems during least-squares minimization. The total baseline of the data set spans 27 years.
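The scatter metrics and the uncertainty rescaling described above can be sketched as follows (hypothetical helper names; `bev` implements the N-2 variance of Eq. [BevEq6p15]):

```python
# Sketch of the three residual-scatter metrics and the chi^2-based rescaling.
import numpy as np

def scatter_metrics(x, y, a, b):
    """Return (RMS, standard deviation, Bevington N-2 scatter) of residuals."""
    r = np.asarray(y) - (a + b * np.asarray(x))
    rms = np.sqrt(np.mean(r**2))
    std = np.std(r, ddof=1)
    bev = np.sqrt(np.sum(r**2) / (r.size - 2))   # Eq. [BevEq6p15]
    return rms, std, bev

def rescale_errors(sigma_lit, red_chi2):
    """Scale published errors so a refit yields a reduced chi^2 of unity."""
    return np.asarray(sigma_lit) * np.sqrt(red_chi2)
```

As a consistency check against Table [NewTimingData]: the \cite{Ramsay1994} entries give 0.00002 × √4.413 ≈ 4.2 × 10⁻⁵ days, matching the tabulated σ_lit,scaled column.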
BJD(TDB) | σ_lit | χ²_ν | σ_lit,scaled | σ_RMS | STD | Eq. [BevEq6p15] | Remarks |
---|---|---|---|---|---|---|---|
2455506.427034 | 0.0000100 | – | – | – | – | – | HIPPO/1.9m, \cite{Potter_2011} |
2455478.485831 | 0.0000100 | – | – | – | – | – | HIPPO/1.9m, \cite{Potter_2011} |
2455450.544621 | 0.0000100 | – | – | – | – | – | HIPPO/1.9m, \cite{Potter_2011} |
2454857.364805 | 0.0000086 | – | – | – | – | – | SALT/BVIT, \cite{Potter_2011} |
2454417.334722 | 0.0000086 | – | – | – | – | – | SALT/SALTICAM, \cite{Potter_2011} |
2453408.288086 | 0.0000086 | 0.198 | 3.83E-6 | 0.0000070 | 0.0000070 | 0.0000100 | UCTPOL/1.9m, \cite{Potter_2011} |
2453407.321574 | 0.0000100 | 0.198 | 4.45E-6 | 0.0000070 | 0.0000070 | 0.0000100 | UCTPOL/1.9m, \cite{Potter_2011} |
2453405.300663 | 0.0000350 | 0.198 | 1.56E-5 | 0.0000070 | 0.0000070 | 0.0000100 | UCTPOL/1.9m, \cite{Potter_2011} |
2453404.334042 | 0.0000600 | – | – | – | – | – | SWIFT, \cite{Potter_2011} |
2452494.839196 | 0.0000870 | – | – | – | – | – | XMM OM, \cite{Potter_2011} |
2452494.575626 | 0.0000350 | – | – | – | – | – | UCTPOL/1.9m, \cite{Potter_2011} |
2452493.609058 | 0.0000700 | – | – | – | – | – | UCTPOL/1.9m, \cite{Potter_2011} |
2451821.702394 | 0.0000100 | – | – | – | – | – | WHT/S-Cam, \cite{de_Bruijne_2002} |
2451528.495434 | 0.0000200 | 0.134 | 7.32E-6 | 0.0000040 | 0.0000050 | 0.0000070 | WHT/S-Cam, \cite{Perryman_2001} |
2451528.407579 | 0.0000200 | 0.134 | 7.32E-6 | 0.0000040 | 0.0000050 | 0.0000070 | WHT/S-Cam, \cite{Perryman_2001} |
2451522.432730 | 0.0000200 | 0.134 | 7.32E-6 | 0.0000040 | 0.0000050 | 0.0000070 | WHT/S-Cam, \cite{Perryman_2001} |
2450021.779400 | 0.0000600 | 2.237 | 8.97E-5 | 0.0000500 | 0.0000600 | 0.0000900 | CTIO 1m/photometer, set II, \cite{Imamura_1998} |
2450021.691660 | 0.0000600 | 2.237 | 8.97E-5 | 0.0000500 | 0.0000600 | 0.0000900 | CTIO 1m/photometer, set II, \cite{Imamura_1998} |
2450018.704120 | 0.0000600 | 2.237 | 8.97E-5 | 0.0000500 | 0.0000600 | 0.0000900 | CTIO 1m/photometer, set II, \cite{Imamura_1998} |
2449755.634995 | 0.0000600 | 0.427 | 3.92E-5 | 0.0000200 | 0.0000300 | 0.0000300 | CTIO 1m/photometer, set I, \cite{Imamura_1998} |
2449755.547165 | 0.0000600 | 0.427 | 3.92E-5 | 0.0000200 | 0.0000300 | 0.0000300 | CTIO 1m/photometer, set I, \cite{Imamura_1998} |
2449753.614046 | 0.0000600 | 0.427 | 3.92E-5 | 0.0000200 | 0.0000300 | 0.0000300 | CTIO 1m/photometer, set I, \cite{Imamura_1998} |
2449752.647586 | 0.0000600 | 0.427 | 3.92E-5 | 0.0000200 | 0.0000300 | 0.0000300 | CTIO 1m/photometer, set I, \cite{Imamura_1998} |
2449733.405017 | 0.0000400 | – | – | – | – | – | EUVE, \cite{Potter_2011} |
2449310.332595 | 0.0000230 | – | – | – | – | – | EUVE, \cite{Warren_1995} |
2449276.680076 | 0.0000230 | – | – | – | – | – | EUVE, \cite{Warren_1995} |
2448784.721419 | 0.0000300 | – | – | – | – | – | HST, \cite{Potter_2011} |
2448483.606635 | 0.0000200 | 4.413 | 4.20E-5 | 0.0000300 | 0.0000400 | 0.0000400 | ROSAT, \cite{Ramsay1994} |
2448483.430915 | 0.0000200 | 4.413 | 4.20E-5 | 0.0000300 | 0.0000400 | 0.0000400 | ROSAT, \cite{Ramsay1994} |
2448483.343045 | 0.0000200 | 4.413 | 4.20E-5 | 0.0000300 | 0.0000400 | 0.0000400 | ROSAT, \cite{Ramsay1994} |
2448482.903785 | 0.0000200 | 4.413 | 4.20E-5 | 0.0000300 | 0.0000400 | 0.0000400 | ROSAT, \cite{Ramsay1994} |
2448482.727955 | 0.0000200 | 4.413 | 4.20E-5 | 0.0000300 | 0.0000400 | 0.0000400 | ROSAT, \cite{Ramsay1994} |
2447829.184858 | 0.0000600 | 0.120 | 2.08E-5 | 0.0000170 | 0.0000190 | 0.0000200 | AAT, \cite{BaileyCropper_1991} |
2447829.096998 | 0.0000600 | 0.120 | 2.08E-5 | 0.0000170 | 0.0000190 | 0.0000200 | AAT, \cite{BaileyCropper_1991} |
2447829.009088 | 0.0000600 | 0.120 | 2.08E-5 | 0.0000170 | 0.0000190 | 0.0000200 | AAT, \cite{BaileyCropper_1991} |
2447828.130518 | 0.0000600 | 0.120 | 2.08E-5 | 0.0000170 | 0.0000190 | 0.0000200 | AAT, \cite{BaileyCropper_1991} |
2447828.042638 | 0.0000600 | 0.120 | 2.08E-5 | 0.0000170 | 0.0000190 | 0.0000200 | AAT, \cite{BaileyCropper_1991} |
2447827.954778 | 0.0000600 | 0.120 | 2.08E-5 | 0.0000170 | 0.0000190 | 0.0000200 | AAT, \cite{BaileyCropper_1991} |
2447437.920514 | 0.0000300 | – | – | – | – | – | 2.3m Steward obs., \cite{Allen_1989} |
2447128.809635 | 0.0009000 | 0.059 | 2.18E-4 | 0.0002000 | 0.0002000 | 0.0002000 | 2.3m Steward obs., \cite{Berriman_1988} |
2447128.722035 | 0.0009000 | 0.059 | 2.18E-4 | 0.0002000 | 0.0002000 | 0.0002000 | 2.3m Steward obs., \cite{Berriman_1988} |
2447127.843835 | 0.0009000 | 0.059 | 2.18E-4 | 0.0002000 | 0.0002000 | 0.0002000 | 2.3m Steward obs., \cite{Berriman_1988} |
2447127.755635 | 0.0009000 | 0.059 | 2.18E-4 | 0.0002000 | 0.0002000 | 0.0002000 | 2.3m Steward obs., \cite{Berriman_1988} |
2447145.064339 | 0.0000600 | 1.046 | 6.14E-5 | 0.0002000 | 0.0002000 | 0.0003000 | AAT, \cite{Ferrario_1989} |
2447127.227739 | 0.0003000 | 1.046 | 3.07E-4 | 0.0002000 | 0.0002000 | 0.0003000 | AAT, \cite{Ferrario_1989} |
2447127.139439 | 0.0003000 | 1.046 | 3.07E-4 | 0.0002000 | 0.0002000 | 0.0003000 | AAT, \cite{Ferrario_1989} |
2447097.792555 | 0.0002500 | 0.069 | 6.58E-5 | 0.0000600 | 0.0000500 | 0.0000700 | ESO/MPI 2.2m, \cite{Beuermann1988} |
2447094.717355 | 0.0002300 | 0.069 | 6.05E-5 | 0.0000600 | 0.0000500 | 0.0000700 | ESO/MPI 2.2m, \cite{Beuermann1988} |
2447091.554235 | 0.0002300 | 0.069 | 6.05E-5 | 0.0000600 | 0.0000500 | 0.0000700 | ESO/MPI 2.2m, \cite{Beuermann1988} |
2447090.587785 | 0.0001200 | 0.069 | 3.16E-5 | 0.0000600 | 0.0000500 | 0.0000700 | ESO/MPI 2.2m, \cite{Beuermann1988} |
2447089.709005 | 0.0003000 | 0.069 | 7.89E-5 | 0.0000600 | 0.0000500 | 0.0000700 | ESO/MPI 2.2m, \cite{Beuermann1988} |
2447088.742545 | 0.0003000 | 0.069 | 7.89E-5 | 0.0000600 | 0.0000500 | 0.0000700 | ESO/MPI 2.2m, \cite{Beuermann1988} |
2446446.973823 | 0.0001600 | – | – | – | – | – | EXOSAT, \cite{Osborne_1988} |
2445567.177636 | 0.0001600 | – | – | – | – | – | EXOSAT, \cite{Osborne_1988} |
\label{NewTimingData}
In this work we are not using the F-test as a statistical tool to perform model selection. The F-test is based on the assumption that uncertainties are Gaussian. This assumption might be violated if the data are affected by time-correlated red noise due to atmospheric effects and/or additional astrophysical effects that influence the shape of the eclipse profile. No studies in the literature have addressed this question, and we therefore judge the outcome of an F-test to be unreliable.
In the following we will consider the newly compiled data set with timing uncertainties obtained by rescaling the published uncertainties so as to ensure a reduced χ²ν = 1 over short time intervals. We have determined the following linear ephemeris using MPFIT. We followed a Monte Carlo approach and determined a best-fit model by generating 10 million random initial guesses. We used the best-fit parameters from LINFIT to obtain a first estimate of the initial epoch and period. Initial guesses were then drawn from a Gaussian distribution centered on the LINFIT values with a standard deviation of five times the formal LINFIT uncertainties. The linear ephemeris is shown in Fig. [Linearfit_NEW]. The resulting reduced χ² value was 162.5 (χ² = 8448.6 with (54-2) degrees of freedom) with the ephemeris (or computed timings) given as \begin{equation} T(E) = BJD_{TDB}~2,450,018.703604(3) + E \times 0.08786542817(9) \end{equation} Residuals are shown in Fig. [Linfit_NEW_Res] and display a systematic variation. The corresponding RMS scatter of the data around the best-fit line is 28.9 seconds. The scatter is about 5 times the average timing error and could be indicative of a systematic process of astrophysical origin.
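The Monte Carlo seeding of initial guesses can be sketched as follows; the central values and formal errors are taken from the linear ephemeris above, while the number of draws is reduced here for brevity (the analysis itself used 10 million):

```python
# Sketch of Gaussian seeding of initial guesses around the LINFIT solution.
import numpy as np

rng = np.random.default_rng(0)

T0_LIN, P_LIN = 2450018.703604, 0.08786542817  # linear-ephemeris best fit
SIG_T0, SIG_P = 3e-6, 9e-11                    # formal 1-sigma errors

N_GUESS = 1_000_000                            # 10_000_000 in the analysis
T0_guesses = rng.normal(T0_LIN, 5.0 * SIG_T0, N_GUESS)
P_guesses = rng.normal(P_LIN, 5.0 * SIG_P, N_GUESS)
```

Each (T0, P) pair, together with randomly drawn LTT-orbit parameters, would then seed one MPFIT run, and the solution with the lowest χ² is kept.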
We have also considered a quadratic model for the new data set. However, judged by eye from Fig. [Linfit_NEW_Res], there is no obvious upward or downward parabolic trend in the data. Nevertheless, we added a quadratic term and generated 10 million initial guesses to find a best-fit model. The resulting reduced χ² value increased to 165.7 with (54-3) degrees of freedom. We therefore decide not to consider a quadratic ephemeris in our further analysis.
Using the scaled uncertainties we have considered a linear + 1-LTT model, again using MPFIT. The model is taken from Irwin (19??) and described in Hinse et al. (2012). We considered 10⁷ initial guesses. The initial guesses for the reference epoch and binary period were taken from the best fit of the linear ephemeris model. Initial guesses for the semi-amplitude of the light-time orbit were taken from an estimate of the amplitude shown in Fig. 2. Initial guesses for the eccentricity covered the interval [0, 1], and those for the argument of pericenter the interval [0, 360] degrees. Initial guesses for the orbital period were also estimated from Fig. [Linfit_NEW_Res], and initial guesses for the time of pericenter passage were obtained from T0 and the orbital period of the light-time orbit. Initial guesses were drawn at random. The methodology follows the same techniques as described in Hinse et al. (2012). Best-fit parameter uncertainties were obtained from the best-fit covariance matrix as returned by MPFIT; these errors should be considered formal (final uncertainties will be obtained with a bootstrap technique). The best fit had χ² = 717.6 with 47 degrees of freedom, resulting in a reduced χ²ν = 15.3. The corresponding RMS scatter of the data points around the best fit is 20.0 seconds. The best-fit parameters are listed in Table [BestFitParamsLinPlus1LTT_New_AllData] and shown in Fig. [BestFitModel_LinPlus1LTT_New_AllData]. Recalling that the average timing error is 6 seconds, the RMS residuals lie at a 3.3σ level, indicating a significant signal of some origin. However, upon close inspection of Fig. [BestFitModel_LinPlus1LTT_New_AllData], the large scatter is mainly due to data obtained by \cite{Beuermann1988}, \cite{Berriman_1988}, \cite{Ferrario_1989} and a single point from \cite{Allen_1989}, located between cycle numbers -27,000 and -35,000.
In the following we investigate the effect on the resulting model of removing those data points.
Parameter | Value |
---|---|
T0 (BJD) | 2,450,021.77919 ± 3 × 10⁻⁵ |
P0 (days) | 0.0878654283 ± 1 × 10⁻¹⁰ |
asinI (AU) | 0.00048 ± 3 × 10⁻⁵ |
e | 0.76 ± 0.03 |
ω (radians) | 3.84 ± 0.04 |
Tp (BJD) | 2,461,743.0 ± 53 |
P (days) | 5964 ± 25 |
RMS (seconds) | 20.0 |
χ² | 717.6 |
red. χ² | 15.3 |
\label{BestFitParamsLinPlus1LTT_New_AllData}
To start, we removed a total of eight points: three from \cite{Ferrario_1989}, four from \cite{Berriman_1988} and a single point from \cite{Allen_1989}. The average deviation of those points from our best-fit model (Fig. [BestFitModel_LinPlus1LTT_New_AllData] and Table [BestFitParamsLinPlus1LTT_New_AllData]) was around 35 seconds. For the remaining data the minimum timing uncertainty is 0.33 seconds, the maximum is 13.8 seconds and the mean is 3.7 seconds. This data set is very similar to the one investigated by Potter et al. (2011). Our new model had χ² = 467.1 and a reduced χ² = 12 with 39 degrees of freedom, resulting in an RMS scatter of 13 seconds. We show the resulting best-fit parameters in Fig. [BestFitModel_LinPlus1LTT_RedDataSet1] and Table [BestFitParamsLinPlus1LTT_RedDataSet1]. We first note that removing the eight data points did not change the model significantly, which suggests that those discarded points do not contribute significantly to constraining the model during the fitting process. Further, we note that our model is significantly different from the first elliptical-term model presented in Potter et al. (2011). The most striking difference is in the eccentricity: while they found a near-circular model, we find a highly eccentric solution. Next we continue our analysis by removing an additional six data points.
Parameter | Value |
---|---|
T0 (BJD) | 2,450,021.69149 ± 4 × 10⁻⁵ |
P0 (days) | 0.0878654287 ± 1 × 10⁻¹⁰ |
asinI (AU) | 0.00047 ± 3 × 10⁻⁵ |
e | 0.73 ± 0.04 |
ω (radians) | 0.74 ± 0.03 |
Tp (BJD) | 2,455,832.0 ± 28 |
P (days) | 6012 ± 23 |
RMS (seconds) | 13.0 |
χ² | 467.1 |
red. χ² | 12.0 |
\label{BestFitParamsLinPlus1LTT_RedDataSet1}
In this section we investigate the effect of removing a total of 14 data points: six from \cite{Beuermann1988}, three from \cite{Ferrario_1989}, four from \cite{Berriman_1988} and a single point from \cite{Allen_1989}. The minimum timing uncertainty is 0.33 seconds, the maximum is 13.8 seconds and the mean is 3.5 seconds. The resulting best-fit model is shown in Fig. [BestFitModel_LinPlus1LTT_RedDataSet2] with best-fit parameters listed in Table [BestFitParamsLinPlus1LTT_RedDataSet2]. We note that the resulting best-fit model has not changed significantly. Also, the RMS scatter is now comparable with the mean timing uncertainty. From this we conclude that the timing errors should be scaled by $\sqrt{\chi^2_{\nu}}$ if the model is the correct description of the signal.
Parameter | Value |
---|---|
T0 (BJD) | 2,450,021.69150 ± 3 × 10⁻⁵ |
P0 (days) | 0.0878654279 ± 1 × 10⁻¹⁰ |
asinI (AU) | 0.00049 ± 3 × 10⁻⁵ |
e | 0.79 ± 0.03 |
ω (radians) | 6.91 ± 0.03 |
Tp (BJD) | 2,467,502 ± 57 |
P (days) | 5901 ± 20 |
RMS (seconds) | 4.4 |
χ² | 161.0 |
red. χ² | 4.9 |
\label{BestFitParamsLinPlus1LTT_RedDataSet2}
Finally, we have also discarded the first two timing measurements, from \cite{Osborne_1988}. The mean timing uncertainty of the remaining data is 3 seconds. Again we found a best-fit model, shown in Fig. [BestFitModel_LinPlus1LTT_RedDataSet3] with best-fit parameters listed in Table [BestFitParamsLinPlus1LTT_RedDataSet3]. In this case, too, the model did not change much compared to the previous investigations, which suggests that the (discarded) data taken at earlier epochs do not play an important role in constraining the model. The RMS scatter of 4 seconds is comparable with the mean uncertainty and does not point towards a signal that could be due to an additional companion.
Based on rescaled timing uncertainties we find the following. We find no qualitative (visual inspection of residuals) or quantitative (increased reduced χ²) justification for including a quadratic term in any model. We find that certain data points can be discarded without significantly affecting the best-fit model obtained when all data were included; those data points therefore do not play a significant role in constraining the model. Finally, we find no significant evidence for a second companion when considering only timing data of good quality.
Parameter | Value |
---|---|
T0 (BJD) | 2,450,021.69149 ± 4 × 10⁻⁵ |
P0 (days) | 0.0878654279 ± 1 × 10⁻¹⁰ |
asinI (AU) | 0.00050 ± 5 × 10⁻⁵ |
e | 0.79 ± 0.05 |
ω (radians) | 5.66 ± 0.05 |
Tp (BJD) | 2,467,498 ± 70 |
P (days) | 5900 ± 23 |
RMS (seconds) | 4.0 |
χ² | 160.0 |
red. χ² | 5.2 |
\label{BestFitParamsLinPlus1LTT_RedDataSet3}
¹ http://astroutils.astronomy.ohio-state.edu/time/
Data-driven, interactive article with d3.js plot and IPython Notebook
This week we are launching a brand new look for Authorea and a couple of exciting new features aimed at making scientific research more interactive. Since the very beginning of Authorea, we have been striving to make collaborative scientific writing as easy as possible. But in addition to writing, we are also creating a space for new ways of reading science, and executing it.
For example, if you are a scientist, chances are that you do a lot of data analysis, and you might want to visualize and provide access to your data in fun, new, interactive, more meaningful, data-driven ways, rather than the usual static, data-less plot. There are many ways to create this kind of interactive plot. In this short blog post we will look at two of them.
ProCS15: A DFT-based chemical shift predictor for backbone and C\(\beta\) atoms in proteins
We present ProCS15: a program that computes the isotropic chemical shielding values of backbone and Cβ atoms, given a protein structure, in less than a second. ProCS15 is based on around 2.35 million OPBE/6-31G(d,p)//PM6 calculations on tripeptides and small structural models of hydrogen bonding. The ProCS15-predicted chemical shielding values are compared to experimentally measured chemical shifts for Ubiquitin and the third IgG-binding domain of Protein G through linear regression, and yield RMSD values below 2.2, 0.7, and 4.8 ppm for carbon, hydrogen, and nitrogen atoms, respectively. These RMSD values are very similar to the corresponding RMSD values computed using OPBE/6-31G(d,p) for the entire structure of each protein. The maximum RMSD values can be reduced by using NMR-derived structural ensembles of Ubiquitin. For example, for the largest ensemble the largest RMSD values are 1.7, 0.5, and 3.5 ppm for carbon, hydrogen, and nitrogen. The corresponding RMSD values predicted by several empirical chemical shift predictors range between 0.7-1.1, 0.2-0.4, and 1.8-2.8 ppm for carbon, hydrogen, and nitrogen atoms, respectively.
Global TB Report 2015: Technical appendix on methods used to estimate the global burden of disease caused by TB
Estimates of the burden of disease caused by TB and measured in terms of incidence, prevalence and mortality are produced annually by WHO using information gathered through surveillance systems (case notifications and death registrations), special studies (including surveys of the prevalence of disease), mortality surveys, surveys of under-reporting of detected TB and in-depth analysis of surveillance data, expert opinion and consultations with countries. This document provides case definitions and describes the methods used in Global TB Report 2015 to derive TB incidence, prevalence and mortality.
Incidence is defined as the number of new and recurrent (relapse) episodes of TB (all forms) occurring in a given year. Recurrent episodes are defined as a new episode of TB in people who have had TB in the past and for whom there was bacteriological confirmation of cure and/or documentation that treatment was completed. In the remainder of this technical document, relapse cases are referred to as recurrent cases because the term is more useful when explaining the estimation of TB incidence. Recurrent cases may be true relapses or a new episode of TB caused by reinfection. In current case definitions, both relapse cases and patients who require a change in treatment are called retreatment cases. However, people with a continuing episode of TB that requires a treatment change are prevalent cases, not incident cases.
Prevalence is defined as the number of TB cases (all forms) at a given point in time.
Mortality from TB is defined as the number of deaths caused by TB in HIV-negative people occurring in a given year, according to the latest revision of the International classification of diseases (ICD-10). TB deaths among HIV-positive people are classified as HIV deaths in ICD-10. For this reason, estimates of deaths from TB in HIV-positive people are presented separately from those in HIV-negative people.
The case fatality rate is the risk of death from TB among people with active TB disease.
The case notification rate refers to new and recurrent episodes of TB notified to WHO for a given year. The case notification rate for new and recurrent TB is important in the estimation of TB incidence. In some countries, however, information on treatment history may be missing for some cases. Patients reported in the unknown history category are considered incident TB episodes (new or recurrent).
Regional analyses are generally undertaken for the six WHO regions (that is, the African Region, the Region of the Americas, the Eastern Mediterranean Region, the European Region, the South-East Asia Region and the Western Pacific Region). For analyses related to MDR-TB, nine epidemiological regions were defined (Figure [fig:epiregions]). These were African countries with high HIV prevalence, African countries with low HIV prevalence, Central Europe, Eastern Europe, high-income countries, Latin America, the Eastern Mediterranean Region (excluding high-income countries), the South-East Asia Region (excluding high-income countries) and the Western Pacific Region (excluding high-income countries).
Risk of Bias Assessments in Ophthalmology Systematic Reviews and Meta-Analyses
and 2 collaborators
Introduction
In order for systematic reviews to make accurate inferences concerning clinical therapy, the primary studies that constitute the review must provide valid results. The Cochrane Handbook for Systematic Reviews states that assessment of validity is an “essential component” of a review that “should influence the analysis, interpretation, and conclusions of the review”(p. 188) \cite{higgins2008cochrane}. The internal validity of a review’s primary studies must be considered to ensure that bias has not compromised the results, leading to inaccurate estimates of summary effect sizes.
In ophthalmology, there is a need for closer examination of the validity of the primary studies comprising a review. As an illustrative example, Chakrabarti et al. (2012) discussed emerging ophthalmic treatments for proliferative (PDR) and nonproliferative diabetic retinopathy (NPDR), noting that anti-vascular endothelial growth factor (VEGF) agents consistently received recognition as a possible alternative treatment for diabetic retinopathy. Treatment guidelines from the Scottish Intercollegiate Guidelines Network and the American Academy of Ophthalmology consider anti-VEGF treatment merely useful as an adjunct to laser for treatment of PDR; however, the Malaysian guidelines indicate that these same agents are to be considered in combination with intraocular steroids and vitrectomy. Most extensively, the National Health and Medical Research Council guidelines recommend the addition of anti-VEGF to laser therapy prior to vitrectomy \cite{Chakrabarti_2012}. The evidence base informing these guidelines comprises trials of questionable quality. Martinez-Zapata et al. (2014) conducted a systematic review of anti-VEGF treatment for diabetic retinopathy, which included 18 randomized controlled trials (RCTs). Of these trials, seven were at high risk of bias, while the rest were unclear in one or more domains. The authors concluded, “there is very low or low quality evidence from RCTs for the efficacy and safety of anti-VEGF agents when used to treat PDR over and above current standard treatments" \cite{martinez2014anti}. Thus, low-quality evidence provides less confidence in the efficacy of a treatment, casts doubt on guidelines advocating its use, and impairs the clinician’s ability to make sound judgements regarding treatment.
Over the years, researchers have conceived many methods in an attempt to evaluate the validity or methodological quality of primary studies. Initially, checklists and scales were developed to evaluate whether particular aspects of experimental design, such as randomization, blinding, or allocation concealment, were incorporated into the study. These approaches have been criticized for falsely elevating quality scores: many of these scales and checklists include items that have no bearing on the validity of study findings, such as whether investigators used informed consent or whether ethical approval was obtained \cite{7743790}. Furthermore, with the proliferation of quality appraisal scales, it was found that the choice of scale could alter the results of systematic reviews due to differences in the weighting of scale components \cite{10493204}. Two such scales, the Jadad scale (also called the Oxford Scoring System) \cite{8721797} and the Downs and Black checklist \cite{9764259}, were among the popular alternatives. Quality of Reporting of Meta-analyses (QUOROM) \cite{Moher_1999}, the dominant reporting guideline at that time, called for the evaluation of the methodological quality of the primary studies in systematic reviews. This recommendation was short-lived, as the Cochrane Collaboration began to advocate a new approach to assessing the validity of primary studies. This new method assesses the risk of bias in six particular design features of primary studies, with each domain receiving a rating of low, unclear, or high risk of bias \cite{higgins2008cochrane}. Following suit, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), the updated reporting guidelines, now calls for the evaluation of bias in all systematic reviews \cite{19622511}.
A previous review examining primary studies from multiple fields of medicine revealed that the failure to incorporate an assessment of methodological quality can result in the implementation of interventions founded on misleading evidence \cite{588948720011204}. Yet, questions remain regarding the assessment of quality and risk of bias in clinical specialties. Therefore, we examined ophthalmology systematic reviews to determine the degree to which methodological quality and risk of bias assessments were conducted. We also evaluated the particular method used in the evaluation, the quality components comprising these assessments, and how systematic reviewers integrated primary studies with low quality or high risk of bias into their results.
Common and phylogenetically widespread coding for peptides by bacterial small RNAs
and 5 collaborators
Background:
While eukaryotic noncoding RNAs have recently received intense scrutiny, it is becoming clear that bacterial transcription is at least as pervasive. Bacterial small RNAs and antisense RNAs (sRNAs) are often assumed to be noncoding, due to their lack of long open reading frames (ORFs). However, there are numerous examples of sRNAs encoding small proteins, whether or not they also have a regulatory role at the RNA level.
Results:
Here, we apply flexible machine learning techniques based on sequence features and comparative genomics to quantify the prevalence of sRNA ORFs under natural selection to maintain protein-coding function in phylogenetically diverse bacteria. A majority of annotated sRNAs have at least one ORF between 10 and 50 amino acids long, and we conservatively predict that 188 ± 25.5 unannotated sRNA ORFs are under selection to maintain coding, an average of 13 per species considered here. This implies that overall at least 7.5 ± 0.3% of sRNAs have a coding ORF, and in some species at least 20% do. 84 ± 9.8 of these novel coding ORFs have some antisense overlap to annotated ORFs. As experimental validation, many of our predictions are translated in ribosome profiling data and are identified via mass spectrometry shotgun proteomics. B. subtilis sRNAs with coding ORFs are enriched for high expression in biofilms and confluent growth, and S. pneumoniae sRNAs with coding ORFs are involved in virulence. sRNA coding ORFs are enriched for transmembrane domains and many are novel components of type I toxin/antitoxin systems.
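The first filtering step above, enumerating ORFs of 10 to 50 amino acids in an sRNA sequence, can be sketched with a minimal forward-strand scanner. This is a simplified illustration (real pipelines also handle the reverse strand and alternative start codons), and the function name is ours, not from the paper's toolchain.

```python
STOPS = {"TAA", "TAG", "TGA"}

def small_orfs(seq, min_aa=10, max_aa=50):
    """Find forward-strand ORFs whose peptide (incl. Met, excl. stop)
    is between min_aa and max_aa residues; returns (start, end) pairs."""
    seq = seq.upper()
    found = []
    for frame in range(3):
        i = frame
        while i + 3 <= len(seq):
            if seq[i:i + 3] == "ATG":
                j = i + 3
                # walk codon by codon until an in-frame stop (or sequence end)
                while j + 3 <= len(seq) and seq[j:j + 3] not in STOPS:
                    j += 3
                if j + 3 <= len(seq):        # an in-frame stop was found
                    aa_len = (j - i) // 3    # codons before the stop
                    if min_aa <= aa_len <= max_aa:
                        found.append((i, j + 3))
            i += 3
    return found

# a 12-residue ORF: ATG + 11 alanine codons + stop
print(small_orfs("ATG" + "GCT" * 11 + "TAA"))  # [(0, 39)]
```

Candidates from such a scan would then be scored with the sequence-feature and comparative-genomics classifiers described in the abstract.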
Conclusions:
We predict over a dozen new protein-coding genes per bacterial species, but crucially we also quantify the uncertainty in this estimate. Our predictions for sRNA coding ORFs, along with novel type I toxins and tools for sorting and visualizing genomic context, are freely available in a user-friendly format at http://disco-bac.web.pasteur.fr. We expect these easily-accessible predictions to be a valuable tool for the study not only of bacterial sRNAs and type I toxin-antitoxin systems, but also of bacterial genetics and genomics.
Final Draft Lab 3 (LL and NZ): Determination of Carrier Density through Hall Measurements and Determination of Transition Temperature (\(T_c\)) in a High-\(T_c\) Superconductor
and 1 collaborator
An electromagnet was used to apply a magnetic field to three different conducting samples: n-type germanium (n-Ge), p-type germanium (p-Ge), and silver (Ag). A calibrated Hall probe was used to obtain the current ($\vec{I}_{mag}$) to magnetic field ($\vec{B}$) calibration of the iron-core electromagnet. The Hall voltages ($V_H$) produced by each of the three samples were plotted against $B$ and, as expected, a linear relationship was observed. The slope ($\frac{\Delta V_H}{\Delta B}$) of each graph was used to calculate the Hall coefficient of each sample, which we found to be $-4.99\cdot 10^{-3}\pm 0.0998 \cdot 10^{-3}\,\frac{\textrm{Vm}}{\textrm{AT}}$, $5.64 \cdot 10^{-3}\pm 0.11 \cdot 10^{-3}\, \frac{\textrm{Vm}}{\textrm{AT}}$, and $-2.24 \cdot 10^{-10}\pm 0.04 \cdot 10^{-10}\, \frac{\textrm{Vm}}{\textrm{AT}}$, respectively. These do not fully agree with the manufacturer's values of $-5.6\cdot 10^{-3}\,\frac{\textrm{Vm}}{\textrm{AT}}$ for n-Ge, $6.6\cdot 10^{-3}\,\frac{\textrm{Vm}}{\textrm{AT}}$ for p-Ge, and $-8.9\cdot 10^{-11}\,\frac{\textrm{Vm}}{\textrm{AT}}$ for silver. Using the Hall coefficients, we found the carrier densities to be $(-1.25 \pm 0.025)\cdot 10^{21}\,\textrm{m}^{-3}$ for n-Ge, $(1.11 \pm 0.02)\cdot 10^{21}\,\textrm{m}^{-3}$ for p-Ge, and $(-2.79 \pm 0.06)\cdot 10^{28}\,\textrm{m}^{-3}$ for silver, all of which are of the same order of magnitude as the given absolute values of $1.2\cdot 10^{21}\,\textrm{m}^{-3}$, $1.1\cdot 10^{21}\,\textrm{m}^{-3}$, and $6.6\cdot 10^{28}\,\textrm{m}^{-3}$.
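As a quick consistency check of the numbers above: with the Hall coefficient $R_H$ expressed in $\mathrm{m^3/C}$ (numerically the same as V·m/(A·T)), the carrier density follows from $n = 1/(R_H q)$. A minimal sketch (the function name is ours):

```python
E_CHARGE = 1.602176634e-19  # elementary charge, C

def carrier_density(hall_coeff):
    """Carrier density (m^-3) from a Hall coefficient in m^3/C;
    the sign indicates electron (negative) or hole (positive) carriers."""
    return 1.0 / (hall_coeff * E_CHARGE)

print(carrier_density(-4.99e-3))  # n-Ge: about -1.25e21 m^-3
```

Plugging in the measured n-Ge Hall coefficient reproduces the quoted carrier density to within rounding.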
A current was applied to the superconductor Bi$_2$Sr$_2$Ca$_2$Cu$_3$O$_{10}$, which was cooled in liquid nitrogen until it became superconducting and then allowed to warm slowly. Its voltage and temperature were monitored during the warming process, which we used to produce a graph of voltage against temperature. The graph showed a transition temperature of about $118 \pm 2$ K, close to the provided critical temperature of 108 K.
Final Lab Report 3 (AV and EK): Earth’s Field NMR
and 1 collaborator
We examined the relationship between magnetization, polarizing field time, magnetic field and precession frequency using a 125 mL sample of water and the TeachSpin Earth’s Field NMR instrument. Through varying these different parameters, we could determine the Larmor precession frequency of protons within Earth’s field, spin-lattice relaxation time, and the gyromagnetic ratio for protons. We found the Larmor precession frequency to be 1852 ± 18 Hz corresponding to a local magnetic field of 43.3 ± 0.3μT due to Earth’s magnetic field, the spin-lattice relaxation time to be 2.15 ± 0.05 s, and the gyromagnetic ratio to be $(2.65\pm0.04) \cdot 10^8~\frac{1}{s\cdot T}$, agreeing with the known value of $2.68\cdot 10^8~\frac{1}{s\cdot T}$.
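The local field quoted above follows directly from the Larmor relation $\omega = \gamma B$, i.e. $B = 2\pi f / \gamma$. A minimal sketch using the known proton gyromagnetic ratio (the function name is ours):

```python
import math

def field_from_larmor(f_hz, gamma=2.68e8):
    """Magnetic field (T) from a proton Larmor frequency via B = 2*pi*f/gamma."""
    return 2.0 * math.pi * f_hz / gamma

print(field_from_larmor(1852) * 1e6)  # field in microtesla, roughly 43
```

Running this with the measured 1852 Hz precession frequency returns a field consistent with the 43.3 µT quoted in the abstract.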
Mapping stellar content to dark matter halos. II. Halo mass is the main driver of galaxy quenching
and 1 collaborator
\label{sec:intro}
The quenching of galaxies, namely, the relatively abrupt shutdown of star formation activities, gives rise to two distinctive populations of quiescent and active galaxies, most notably manifested in the strong bimodality of galaxy colours \citep{strateva2001, baldry2006}. The underlying driver of quenching, whether it be stellar mass, halo mass, or environment, should produce an equally distinct split in the spatial clustering and weak gravitational lensing of the red and blue galaxies. Recently, \citet[][hereafter Paper I]{zm15} developed a powerful statistical framework to interpret the spatial clustering (i.e., the projected galaxy autocorrelation function $w_p$) and the galaxy-galaxy (g-g) lensing (i.e., the projected surface density contrast $\ds$) of the overall galaxy population in the Sloan Digital Sky Survey \citep[SDSS;][]{york2000}, while establishing a robust mapping between the observed distribution of stellar mass and that of the underlying dark matter halos. In this paper, by introducing two empirically-motivated and physically-meaningful quenching models within this framework, we hope to robustly identify the dominant driver of galaxy quenching, while providing a self-consistent framework to explain the bimodality in the spatial distribution of galaxies.
Galaxies cease to form new stars and become quenched when there is no cold gas. Any physical process responsible for quenching has to operate in one of the following three modes: 1) it heats the gas to high temperatures and stops hot gas from cooling efficiently; 2) it depletes the cold gas reservoir via secular stellar mass growth or sudden removal by external forces \citep[e.g., tidal and ram pressure;][]{gunn1972}; or 3) it turns off the gas supply by slowly shutting down accretion \citep[e.g., strangulation;][]{balogh2000}. However, due to the enormous complexity of the formation history of individual galaxies, multiple quenching modes may play a role in the history of quiescent galaxies. Therefore, it is more promising to focus on the underlying physical driver of the average quenching process, which is ultimately tied to either the dark matter mass of the host halos, the galaxy stellar mass, or the small/large-scale environmental density in which the galaxies reside, hence the so-called “halo”, “stellar mass”, and “environment” quenching mechanisms, respectively.
Halo quenching has provided one of the most coherent quenching scenarios from the theoretical perspective. In halos above some critical mass ($M_{\mathrm{shock}}{\sim}10^{12}\hmsol$), virial shocks heat gas inflows from the intergalactic medium, preventing the accreted gas from directly fueling star formation. Additional heating from, e.g., active galactic nuclei (AGNs) then maintains the gas coronae at high temperature \citep{croton2006}. For halos with $M_h < M_{\mathrm{shock}}$, the incoming gas is never heated to the virial temperature due to rapid post-shock cooling, and therefore penetrates the virial boundary into the inner halo as cold flows. This picture, featuring a sharp switch from efficient stellar mass buildup via filamentary cold flows into low mass halos to a halt of star formation due to quasi-spherical hot-mode accretion in halos above $M_{\mathrm{shock}}$, naturally explains the colour bimodality, particularly the paucity of galaxies transitioning from the blue, star-forming population to the red sequence of quiescent galaxies \citep{cattaneo2006, dekel2006}. To first order, halo quenching does not discriminate between centrals and satellites, as both are immersed in the same hot gas coronae that inhibit star formation. However, since satellites generally lived in lower mass halos before their accretion and may have retained some cold gas after accretion, the dependence of satellite quenching on halo mass should have a softer transition across $M_{\mathrm{shock}}$, unless the quenching by hot halos is instantaneous.
Observationally, by studying the dependence of the red galaxy fraction $f\red$ on stellar mass $\ms$ and galaxy environment $\delta_{\mathrm{5NN}}$ (defined using the distance to the 5th nearest neighbour) in both the Sloan Digital Sky Survey (SDSS) and zCOSMOS, \citet[][hereafter P10]{peng2010} found that $f\red$ can be empirically described by the product of two independent trends with $\ms$ and $\delta_{\mathrm{5NN}}$, suggesting that both stellar mass and environment quenching are at play. By using a group catalogue constructed from the SDSS spectroscopic sample, \citet{peng2012} further argued that, while stellar mass quenching is ubiquitous in both centrals and satellites, environment quenching mainly applies to satellite galaxies.
However, despite the empirically robust trends revealed in P10, the interpretations of both the stellar mass and environment trends are obscured by the complex relations between the two observables and other physical quantities. In particular, since the observed $\ms$ of central galaxies is tightly correlated with halo mass $\mh$ (with a scatter of ∼0.22 dex; see Paper I), a stellar mass trend of $f\red$ is almost indistinguishable from an underlying trend with halo mass. By examining the inter-relations among $\ms$, $\mh$, and $\delta_{\mathrm{5NN}}$, \citet{woo2013} found that the quenched fraction is more strongly correlated with $\mh$ at fixed $\ms$ than with $\ms$ at fixed $\mh$, and that the satellite quenching by $\delta_{\mathrm{5NN}}$ can be re-interpreted as halo quenching by taking into account the dependence of the quenched fraction on the distance to the halo centre. The halo quenching interpretation of the stellar mass and environment quenching trends is further supported by \citet{gabor2015}, who implemented halo quenching in cosmological hydrodynamic simulations by triggering quenching in regions dominated by hot ($10^{5.4}\,$K) gas. They reproduced a broad range of the empirical trends detected in P10 and \citet{woo2013}, suggesting that halo mass remains the determining factor in the quenching of low-redshift galaxies.
An alternative quenching model is the so-called “age-matching” prescription of \citet{hearin2013} and its recent update in \citet{hearin2014}. Age-matching is an extension of the “subhalo abundance matching” \citep[SHAM;][]{conroy2006} technique, which assigns stellar masses to individual halos (including both main halos and subhalos) in N-body simulations based on halo properties like the peak circular velocity \citep{reddick2013}. In practice, after assigning $\ms$ using SHAM, the age-matching method further matches the colours of galaxies at fixed $\ms$ to the ages of their matched halos, so that older halos host redder galaxies. In essence, the age-matching prescription effectively assumes a stellar mass quenching, as the colour assignment is done at fixed $\ms$ regardless of halo mass or environment, with a secondary quenching via halo formation time. Therefore, the age-matching quenching is very similar to the $\ms$-dominated quenching of P10, except that the second variable is halo formation time rather than galaxy environment.
The key difference between the $\mh$- and $\ms$-dominated quenching scenarios lies in the way central galaxies become quiescent. One relies on the stellar mass while the other on the mass of the host halos, producing two very different sets of colour-segregated stellar-to-halo relations (SHMRs). At fixed halo mass, if stellar mass quenching dominates, the red centrals should have a higher average stellar mass than the blue centrals; in the halo quenching scenario the two coloured populations at fixed halo mass would have similar average stellar masses, but there is still a trend for massive galaxies to be red because higher mass halos host more massive galaxies. This difference in SHMRs directly translates to two distinctive ways the red and blue galaxies populate the underlying dark matter halos according to their $\ms$ and $\mh$, hence two different spatial distributions of galaxy colours.
Therefore, by comparing the $w_p$ and $\ds$ predicted by each quenching model to the measurements from SDSS, we expect to robustly distinguish the two quenching scenarios. The framework we developed in Paper I is ideally suited for this task. It is a global “halo occupation distribution” (HOD) model defined on a 2D grid of $\ms$ and $\mh$, which is crucial for modelling the segregation of red and blue galaxies in their $\ms$ distributions at fixed $\mh$. The quenching constraint is fundamentally different from, and ultimately more meaningful than, approaches in which colour-segregated populations are treated independently \citep[e.g.,][]{tinker2013, puebla2015}. Our quenching model automatically fulfills the consistency relation, which requires that the sum of the red and blue SHMRs be mathematically identical to the overall SHMR. More importantly, the quenching model employs only four additional parameters that are directly related to the average galaxy quenching, whereas most traditional approaches require ∼20 additional parameters, rendering the interpretation of the constraints difficult. Furthermore, the framework allows us to include ∼80% more galaxies than traditional HODs and to take into account the incompleteness of the stellar mass samples in a self-consistent manner.
This paper is organized as follows. We describe the selection of the red and blue samples in Section [sec:data]. In Section [sec:model] we introduce the parameterisations of the two quenching models and derive the HODs for each colour. We also briefly describe the signal measurements and model predictions in Sections [sec:data] and [sec:model], respectively, but refer readers to Paper I for more details. The constraints from both quenching model analyses are presented in Section [sec:constraint]. We perform a thorough model comparison using two independent criteria in Section [sec:result] and find that the halo quenching model is strongly favored by the data. In Section [sec:physics] we discuss the physical implications of the halo quenching model, and we compare it to other works in Section [sec:compare]. We conclude by summarising our key findings in Section [sec:conclusion].
Throughout this paper and Paper I, we assume a $\lcdm$ cosmology with (Ωm, ΩΛ, σ8, h) = (0.26, 0.74, 0.77, 0.72). All the length and mass units in this paper are scaled as if the Hubble constant were $100\,\kms\mpc^{-1}$. In particular, all separations are co-moving distances in units of either $\hkpc$ or $\hmpc$, and the stellar mass and halo mass are in units of $\hhmsol$ and $\hmsol$, respectively. Unless otherwise noted, the halo mass is defined by $\mh\,{\equiv}\,M_{200m}\,{=}\,200\bar{\rho}_m(4\pi/3)r_{200m}^3$, where $r_{200m}$ is the corresponding halo radius within which the average density of the enclosed mass is 200 times the mean matter density of the Universe, $\bar{\rho}_m$. For the sake of simplicity, $\ln x = \log_e x$ is used for the natural logarithm, and $\lg x = \log_{10} x$ for the base-10 logarithm.
Peeragogy Pattern Catalog
and 5 collaborators
\label{sec:Introduction}
This paper outlines an approach to the organization of learning that draws on the principles of free/libre/open source software (FLOSS), free culture, and peer production. Mako Hill suggests that one recipe for success in peer production is to take a familiar idea – for example, an encyclopedia – and make it easy for people to participate in building it \cite[Chapter 1]{mako-thesis}. We will take hold of “learning in institutions” as a map (Figure [madison-map]), although it does not fully conform to our chosen tacitly-familiar territory of peeragogy. To be clear, peeragogy is for any group of people who want to learn anything.
Despite thinking about learning and adaptation that may take place far outside of formal institutions, the historical conception of a university helps give shape to our inquiry. The model university is not separate from the life of the state or its citizenry, but aims to “assume leadership in the application of knowledge for the direct improvement of the life of the people in every sphere”. Research that adds to the store of knowledge is another fundamental obligation of the university. The university provides a familiar model for collaborative knowledge work, but it is not the only model available. Considering the role of collaboration in building Wikipedia, StackExchange, and free/libre/open source software development, we may be led to ask: What might an accredited free/libre/open university look like? How would it compare or contrast with the typical or stereotypical image of a university from Figure [madison-map]? Would it have similar structural features, like a Library, Dormitory, Science Hall and so on? Would participants take on familiar roles? How would it compare with historical efforts like the Tuskegee Institute that involved students directly in the production of physical infrastructure \cite{washington1986up,building-peeragogy-accelerator}?
We use the word peeragogy to talk about collaboration in relatively non-hierarchical settings. Examples are found in education, but also in business, government, volunteer, and NGO settings. Peeragogy involves both problem solving and problem definition. Indeed, in many cases it is preferable to focus on solutions, since people know the “problems” all too well \cite{ariyaratneXorganizationX1977}. Participants in a peeragogical endeavor collaboratively build emergent structures that are responsive to their changing context, and that in turn, change that context. In the Peeragogy project, we are developing the theory and practice of peeragogy.
Design patterns offer a methodological framework that we have used to clarify our focus and organize our work. A design pattern expresses a commonly-occurring problem, a solution to that problem, and rationale for choosing this solution \cite{meszaros1998pattern}. This skeleton is typically fleshed out with a pattern template that includes additional supporting material; individual patterns are connected with each other in a pattern language. What we present here is rather different from previous pattern languages that touch on similar topics – like Liberating Voices \cite{schuler2008liberating}, Pedagogical Patterns \cite{bergin2012pedagogical}, and Learning Patterns \cite{iba2014learning}. At the level of the pattern template, our innovation is simply to add a “What’s next” annotation, which anticipates the way the pattern will continue to “resolve”.
This addition mirrors the central considerations of our approach, which is all about human interaction, and the challenges, fluidity and unpredictability that come with it. Something that works for one person may not work for another or may not even work for the same person in a slightly different situation. We need to be ready to clarify and adjust what we do as we go. Even so, it is hard to argue with a sensible-sounding formula like “If W applies, do X to get Y.” In our view, other pattern languages often achieve this sort of common sense rationality, and then stop. Failure in the prescriptive model only begins when people try to define things more carefully and make context-specific changes – when they actually try to put ideas into practice. The problem lies in the inevitable distance between do as I say, do as I do, and do with me. If people are involved, things get messy. They may think that they are on the same page, only to find out that their understandings are wildly different. For example, everyone may agree that the group needs to go “that way.” But how far? How fast? It is rare for a project to be able to set or even define all of the parameters accurately and concisely at the beginning. And yet design becomes a “living language” just insofar as it is linked to action. Many things have changed since Alexander suggested that “you will get the most ‘power’ over the language, and make it your own most effectively, if you write the changes in, at the appropriate places in the book”. We see more clearly what it means to inscribe the changing form of design not just in the margins of a book, or even a shared wiki, but in the lifeworld itself. Other recent authors on patterns share similar views \cite{reiners2012approach, plast-project, schummer2014beyond}.
Learning and collaboration are of interest to both organizational studies and computer science, where researchers are increasingly making use of social approaches to software design and development, as well as agent-based models of computation \cite{minsky1967programming,poetry-workshop}. The design pattern community in particular is very familiar with practices that we think of as peeragogical, including shepherding, writers workshops, and design patterns themselves \cite{harrison1999language,coplien1997pattern,meszaros1998pattern}. We hope to help design pattern authors and researchers expand on these strengths.
The pattern template (Table [tab:pattern-template]) comprises the following fields:

- Motivation for using this pattern.
- Context of application.
- Forces that operate within the context of application, each with a mnemonic glyph.
- Problem the pattern addresses.
- Solution to the problem.
- Rationale for this solution.
- Resolution of the forces, named in bold.
- Example 1: How the pattern manifests in current Wikimedia projects.
- Example 2: How the pattern could inform the design of a future university.
- What’s Next in the Peeragogy Project: How the pattern relates to our collective intention in the Peeragogy project.
Table [tab:pattern-template] shows the pattern template that we use throughout the paper. Along with the traditional design patterns components \cite{meszaros1998pattern}, each of our patterns is fleshed out with two illustrative examples. The first is descriptive, and looks at how the pattern applies in current Wikimedia projects. We selected Wikimedia as a source of examples because the project is familiar, a demonstrated success, and readily accessible. The second example is prospective, and shows how the pattern could be applied in the design of a future university. Each pattern concludes with a boxed annotation: “What’s Next in the Peeragogy Project”.
Section [sec:Peeragogy] defines the concept of peeragogy more explicitly, in the form of a design pattern. Sections [sec:Roadmap]–[sec:Scrapbook] present the other patterns in our pattern language. Figure [fig:connections] illustrates their interconnections. Table [tab:core] summarizes the “nuts and bolts” of the pattern language. Section [sec:Distributed_Roadmap] collects our “What’s Next” steps and summarizes the outlook of the Peeragogy project. Section [sec:Conclusion] reviews the contributions of the work as a whole.
When one new contributor was still in the onboarding process of the Peeragogy project, she hit a wall in understanding the “patterns” section in the Peeragogy Handbook v1. A more seasoned peer invited her to a series of separate discussions to flesh out the patterns and make them more accessible. At that time the list of patterns was simply a list of paragraphs describing recurrent trends. During those sessions, the impact and meaning of patterns captured her imagination. She went on to become the champion for the pattern language and its application in the Peeragogy project. During a “hive editing” session, she proposed the template we initially used to give structure to the patterns. She helped further revise the pattern language for the Peeragogy Handbook v3, and attended PLoP 2015. While a new domain can easily be overwhelming, this newcomer found a place to start, and scaffolded her knowledge and contributions from that foundation.
Table [tab:core] summarizes the patterns as question/solution pairs:

- [sec:Peeragogy]. How can we find solutions together? Get concrete about what the real problems are.
- [sec:Roadmap]. How can we get everyone on the same page? Build a plan that we keep updating as we go along.
- [sec:Reduce, reuse, recycle]. How can we avoid undue isolation? Use what’s there and share what we make.
- [sec:Carrying capacity]. How can we avoid becoming overwhelmed? Clearly express when we’re frustrated.
- [sec:A specific project]. How can we avoid becoming perplexed? Focus on concrete, doable tasks.
- [sec:Wrapper]. How can people stay in touch with the project? Maintain a summary of activities and any adjustments to the plan.
- [sec:Heartbeat]. How can we make the project “real” for participants? Keep up a regular, sustaining rhythm.
- [sec:Newcomer]. How can we make the project accessible to new people? Let’s learn together with newcomers.
- [sec:Scrapbook]. How can we maintain focus as time goes by? Move things that are not of immediate use out of focus.
Determination of the Boltzmann Constant through Measurement of Johnson Noise and Determination of the Elementary Electron Charge through Measurement of Shot Noise
and 1 collaborator
Two sources of noise, Johnson noise and shot noise, are investigated in this experiment. Johnson noise, the voltage fluctuations across a resistor that arise from the random motion of electrons, is measured using the Noise Fundamentals box. The noise was measured across different resistances and at different bandwidths at room temperature, resulting in Boltzmann-constant values of (1.4600 ± 0.0054) ⋅ 10−23 m2 kg s−2 K−1 and (1.4600 ± 0.0052) ⋅ 10−23 m2 kg s−2 K−1. Shot noise occurs due to the quantization of charge, and was measured by varying the current in the system, from which we calculated an electron charge of (1.649 ± 0.007) ⋅ 10−19 Coulombs. These are close to the accepted values of 1.38064852 ⋅ 10−23 m2 kg s−2 K−1 for the Boltzmann constant and 1.602 ⋅ 10−19 C for the electron charge. Errors are discussed.
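The abstract does not spell out how the Boltzmann constant is extracted, but the standard Nyquist relation ⟨V²⟩ = 4·k_B·T·R·Δf makes it a straight-line fit of mean-square noise voltage against resistance. A minimal sketch with synthetic, illustrative numbers (not the report's data; the bandwidth and temperature values are assumptions):

```python
import numpy as np

# Nyquist: <V^2> = 4 * kB * T * R * df, so a linear fit of <V^2>
# against R at fixed temperature T and bandwidth df gives kB from the slope.
kB_true = 1.380649e-23   # J/K, used here only to synthesize example data
T = 295.0                # K, assumed room temperature
df = 1.0e5               # Hz, assumed measurement bandwidth

R = np.array([1e3, 1e4, 1e5, 1e6])   # resistances in ohms
V2 = 4 * kB_true * T * R * df        # mean-square noise voltage, V^2

slope = np.polyfit(R, V2, 1)[0]      # V^2 per ohm
kB_est = slope / (4 * T * df)        # recover kB from the fitted slope
print(f"kB ≈ {kB_est:.3e} J/K")
```

With real data, the scatter of ⟨V²⟩ about the fitted line sets the quoted uncertainty on k_B.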
Cosmic Ray Decay
Cosmic particles are found everywhere in the Universe, produced in various high- and low-energy interactions. Muon decay and gamma decay are two of the most frequently studied processes, because muons are among the most common particles and gamma rays arise throughout the Universe from many different types of radioactive decay. We used scintillators to observe both kinds of decay. For the gamma-decay measurements, the goals were to identify an unknown radioactive sample and to determine the ages of two Cesium-137 samples, using Cesium-137 and Cobalt-60 to calibrate the energies. After analysis, we found that the unknown sample given to us was Sodium-22. We also found that one of the Cesium-137 samples was 17.583 years old and the other was 37.80 years old. The goal of the muon-decay measurements was to analyze long-term data to see whether we could calculate the time-dilation effect commonly demonstrated with muons.
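A sample's age can be inferred from the exponential decay law A(t) = A₀·2^(−t/t½), using the Cs-137 half-life of about 30.17 years. A minimal sketch (the activity values below are made-up illustrations, not the report's measurements):

```python
import math

T_HALF = 30.17  # years, Cs-137 half-life

def age_from_activity(A0, A):
    """Years elapsed for a sample's activity to fall from A0 to A."""
    return T_HALF * math.log(A0 / A) / math.log(2)

# hypothetical activities in arbitrary (consistent) units
print(age_from_activity(10.0, 5.0))   # one half-life elapsed -> 30.17 years
```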
Earth’s Field NMR: first draft
We examined the relationship between magnetization, polarizing field time, magnetic field and precession frequency using a 125 mL sample of water and the TeachSpin Earth’s Field NMR instrument. Through varying these different parameters, we could determine the Larmor precession frequency of protons within Earth’s field, spin-lattice relaxation time, and the gyromagnetic ratio for protons. We found the Larmor precession frequency to be 1852 ± 18 Hz corresponding to a local magnetic field of 43.3 ± 0.3μT due to Earth’s magnetic field, the spin-lattice relaxation time to be 2.15 ± 0.05 s, and the gyromagnetic ratio to be $(2.65\pm0.04) \cdot 10^8~\frac{1}{s\cdot T}$, agreeing with the known value of $2.68\cdot 10^8~\frac{1}{s\cdot T}$.
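The local field quoted above follows from the Larmor relation 2πf = γB. A quick consistency check using the known proton gyromagnetic ratio (a sketch, not the authors' computation):

```python
import math

gamma = 2.675e8   # rad/(s*T), known proton gyromagnetic ratio
f = 1852.0        # Hz, measured Larmor precession frequency from the abstract

B = 2 * math.pi * f / gamma    # local magnetic field in tesla
print(f"B ≈ {B*1e6:.1f} µT")   # ~43.5 µT, close to the quoted 43.3 ± 0.3 µT
```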
Focusing on Interest: Do High School Students Like the Idea of Helping Astronomers Revive Data in “oldAstronomy”
Internet technologies make it easier and easier to share data globally, enabling a dramatic proliferation of online “citizen science” projects. One new project, called “oldAstronomy,” is in development by the Zooniverse team, based at Chicago’s Adler Planetarium, in collaboration with the WorldWide Telescope Ambassadors program at Harvard. The goal of the project is to restore hidden metadata to images in published astronomical articles, some more than 100 years old, making the images useful to researchers. In this paper, I investigate a possible role for high school students in the oldAstronomy project. Using two focus groups, one at Milton School and one at Cambridge Rindge and Latin School, I investigate which aspects of participating in oldAstronomy would be of most interest: connections to real data? To real scientists? Connecting to other students worldwide? Viewing interesting images? Researching a topic related to the images encountered? Before they were surveyed, the focus-group students were told that participation in oldAstronomy would require digesting a scientific paper, summarizing its results, and either writing a summary understandable to the general public or completing a more creative final project. Results show that students are very interested in working with real data and in the beauty and meaning of images. However, the results also show that students are, perhaps surprisingly, not interested in collaborating and communicating with other students, either in person (as group work) or online. Given these negative responses to group work, a reproduction of the peer-review process could provide similar benefits in place of a group final paper. The students' feedback also revealed interest in alternative forms of final assessment.
The results of our study suggest that instead of a standard write up, students can create: a 3D model of their object; a website about it; or a WorldWide Telescope tour.
Johnson and Shot Noise. First Draft
Two sources of noise, Johnson noise and shot noise, are investigated in this experiment. Johnson noise, the voltage fluctuations across a resistor that arise from the random motion of electrons, was measured across different resistances and at different bandwidths at room temperature, resulting in Boltzmann-constant values of 1.46 ⋅ 10−23 m2 kg s−2 K−1 ± 2.5 ⋅ 10−21 m2 kg s−2 K−1 and 1.46 ⋅ 10−23 m2 kg s−2 K−1 ± 2.6 ⋅ 10−21 m2 kg s−2 K−1. Shot noise occurs due to the quantization of charge, and was measured by varying the current in the system, from which we calculated an electron charge of 1.64 ⋅ 10−19 ± 7.0 ⋅ 10−22 C. These are close to the accepted values of 1.38064852 ⋅ 10−23 m2 kg s−2 K−1 for the Boltzmann constant and 1.602 ⋅ 10−19 C for the electron charge. Errors are discussed.
Determination of Carrier Density through Hall Measurements and Determination of Transition Temperature (\(Tc\)) in a High-Tc Superconductor
Common Mistakes in Proof Writing: A Cardinality Quiz as a Case Study
This short note uses the problems from one quiz to point out mistakes that commonly appear in written proofs. Since everyone writes differently, what follows is only a general guide; please judge for yourself whether the issues described match mistakes you have made.
True/false questions have a simple answering scheme: prove the true ones and give counterexamples for the false ones. They are nevertheless the hardest type to write, because deciding truth or falsity is itself not easy. Setting that aside, the most common problem with true/false answers is, strictly speaking, not an error but a loss of focus in the write-up. Consider the following example.
If x and y are integers of the same parity, then xy and (x + y)2 are of the same parity. (Two integers are of the same parity if they are both odd or both even.)
This proposition is false, so we only need to give one counterexample.
Solution.
Let x = y = 1. Then x and y are of the same parity. However, xy = 1 and (x + y)2 = 4 are of distinct parities. ▫
Some students separately work through the cases where x and y are both odd and both even, observe that the first case contradicts the stated conclusion, and conclude that the proposition is false. That is not wrong as such; it just adds a pile of unnecessary discussion.
The second common mistake is confusing operations, for example mixing up set difference with subtraction of real numbers.
Let A and B be two sets. If A \ B = B \ A, then A \ B = ⌀.
This proposition is true, so we must provide a proof. A common incorrect write-up uses the inference $$ A \setminus C = B \setminus C \quad \Rightarrow \quad A = B $$
(Incorrect write-up)
Since A \ B = A \ (A ∩ B) and B \ A = B \ (A ∩ B), we have \begin{align*}
A \setminus (A \cap B) = B \setminus (A \cap B) \quad \Rightarrow \quad A = B \quad \Rightarrow \quad A \setminus B = \varnothing
\end{align*}
The set operation is called "difference" while the real-number operation is called "minus"; the very names indicate that these are two different operations, so the inference $$ a - c = b - c \quad \Rightarrow \quad a = b $$ does not carry over to sets. Indeed, if we let A = {1, 2, 3}, B = {1, 2}, and C = {3}, then A \ C = B \ C but A ≠ B. A correct proof follows.
Proof.
Suppose that A \ B ≠ ⌀. Let x ∈ A \ B. Then x ∈ A but x ∉ B. Since A \ B = B \ A, it is seen that x ∈ B but x ∉ A. This shows that x ∈ A and x ∉ A, which is a contradiction. Hence A \ B = ⌀. ▫
This is probably the most common issue, and the hardest to calibrate. How rigorous an explanation must be varies from person to person, but in any case, writing in more detail is at least never wrong. From an exam point of view, the guiding principle is: any proposition that has not appeared in the textbook or in class must be proved.
For every positive irrational number b, there is an irrational number a such that 0 < a < b.
This problem is easy: most students simply take $a = \frac{b}{2}$ and declare the proof complete. There is indeed no real issue with 0 < a < b, but whether a is irrational still needs to be verified; please do not omit that part.
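The missing verification is short; one way to write it (a sketch):

```latex
% Claim: if b is irrational, then a = b/2 is irrational.
Suppose $a = \tfrac{b}{2}$ were rational, say $a = \tfrac{p}{q}$ with
integers $p, q$ and $q \neq 0$. Then $b = 2a = \tfrac{2p}{q}$ would be
rational, contradicting the assumption that $b$ is irrational. Hence $a$
is irrational, and since $b > 0$ we also have $0 < a = \tfrac{b}{2} < b$.
```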
Another common mistake is to assume the conclusion to be proved and then derive a statement that is always true. Here is an example.
n3 + 1 > n2 + n for every integer n ≥ 2.
(Incorrect write-up)
Suppose that n3 + 1 > n2 + n for every integer n ≥ 2. Then \begin{align*}
n^3 + 1 > n^2 + n \quad &\Rightarrow \quad n^3 + 1 - n^2 - n > 0 \\
&\Rightarrow \quad n^2(n-1) - (n-1) > 0 \\
&\Rightarrow \quad (n-1)(n^2-1) > 0 \\
&\Rightarrow \quad (n-1)^2 (n+1) > 0
\end{align*} The last inequality is always true for every integer n ≥ 2. ▫
Although the approach above is incorrect, it is not useless, because it points the way to a correct proof: simply write the chain starting from the last line and work backwards. In other words, this mistaken write-up is really the scratch work behind a correct proof; it only needs to be used properly.
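Written forwards, the correct proof runs the same chain in reverse (a sketch):

```latex
\begin{proof}
Let $n \ge 2$ be an integer. Then $(n-1)^2 (n+1) > 0$, and so
\begin{align*}
(n-1)^2 (n+1) > 0 \quad &\Rightarrow \quad (n-1)(n^2-1) > 0 \\
&\Rightarrow \quad n^2(n-1) - (n-1) > 0 \\
&\Rightarrow \quad n^3 + 1 - n^2 - n > 0 \\
&\Rightarrow \quad n^3 + 1 > n^2 + n.
\end{align*}
\end{proof}
```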
Franck-Hertz Experiment for Neon and Argon: final draft
The goal of the Franck-Hertz experiment is to demonstrate that electrons occupy only discrete, quantized energy states for neon and argon atoms.
Earth’s Field NMR: first draft
We performed an experiment to examine the relationship between magnetization, polarizing field time, magnetic field and precession frequency using a 125 mL sample of water and the TeachSpin Earth’s Field NMR instrument. Through varying these different parameters, we could determine the Larmor precession frequency of protons within Earth’s field, spin-lattice relaxation time, and the gyromagnetic ratio for protons. We found the Larmor precession frequency to be 1852 ± 18 Hz, the spin-lattice relaxation time to be 2.15 ± 0.05 s, and the gyromagnetic ratio to be $(2.65\pm0.04) \cdot 10^8~\frac{1}{s\cdot T}$, agreeing with the known value of $2.68\cdot 10^8~\frac{1}{s\cdot T}$.
Observing Convectively Excited Gravity Modes in Main Sequence Stars
Abstract
This paper primarily focuses on a study by \cite{Shiode_2013} of how gravity modes can be excited by convection in massive main-sequence stars. The first portion of the paper explains the more commonly understood mechanism by which gravity modes are driven by adiabatic expansion at the core, and why gravity modes produced this way are so difficult to observe. The second portion briefly reviews the methods for detecting gravity modes and the observational challenges involved. The third portion examines the models that \cite{Shiode_2013} constructed with the MESA stellar evolution code to estimate mode frequencies, excitation amplitudes, and where in the stellar interior gravity modes would propagate for main-sequence stars of various sizes. The final portion looks at future advances in detecting gravity modes and at promising observations from the Kepler space satellite.
Measurement of Faraday Rotation in SF57 glass at 670 nm Third and Final Draft
We performed an experiment to measure the Faraday rotation of polarized light passing through a magnetic field, and to measure the Verdet constant of an SF57 glass tube 0.1 m in length. Our results are consistent with the basic picture of Faraday rotation, in which linearly polarized light is rotated when a magnetic field is applied. We used three different methods to find the Verdet constant: a direct fit, a slope fit, and a lock-in method. The values we found are $21\pm 5 \frac{radians}{T \cdot m}$, $21.095\pm0.003 \frac{radians}{T \cdot m}$ and $20.43\pm0.06 \frac{radians}{T \cdot m}$ respectively, and those values are broadly consistent with one another.
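The Verdet constant relates rotation angle to field and path length through θ = V·B·L. A minimal sketch of the expected rotation for the quoted values, with an illustrative field strength (the field value below is an assumption, not the experiment's actual field):

```python
V = 21.0    # rad/(T*m), Verdet constant of SF57 at 670 nm (from the abstract)
L = 0.1     # m, length of the glass tube
B = 0.05    # T, hypothetical applied magnetic field

theta = V * B * L                        # Faraday rotation angle in radians
print(f"theta ≈ {theta*1e3:.0f} mrad")   # 21 * 0.05 * 0.1 = 0.105 rad
```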
Critique of Occupancy Schedules Learning Process Through a Data Mining Framework