The data have been taken from Potter et al. (2011). All time stamps are in BJD on the TDB time scale, so no further transformations are needed. A total of 42 timing measurements exist. However, Potter et al. (2011) did not include the data points from Dai et al. (2010); see the Potter et al. (2011) text for details. As a start, this analysis considers the full set of timings as presented in Potter et al. (2011).

In general I am using IDL for the timing analysis. The cycle (or ephemeris) numbers have been obtained from IDL> ROUND((BJDMIN-TZERO)/PERIOD), where BJDMIN holds all 42 timing measurements, TZERO is an arbitrary timing measurement defining CYCLE=\(E\)=0, and PERIOD is the binary orbital period (0.087865425 days) taken from Potter et al. (2011), Table 2. In this work I will use TZERO = BJD 2,450,021.779388. It differs slightly from the TZERO used in Potter et al. (2011), partly to introduce some variation and partly because I think my choice lies close to the center of mass of the data points.
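The cycle computation can be sketched in Python as well; the following is a minimal transcription of the IDL one-liner (the example mid-eclipse times are hypothetical, not the actual Potter et al. timings):

```python
import numpy as np

# Reference epoch (BJD, TDB) and binary orbital period (days), Potter et al. (2011), Table 2
TZERO = 2450021.779388
PERIOD = 0.087865425

def cycle_numbers(bjd_min, tzero=TZERO, period=PERIOD):
    """Integer cycle (ephemeris) number E for each mid-eclipse time.

    Equivalent to IDL's ROUND((BJDMIN - TZERO)/PERIOD).
    """
    return np.rint((np.asarray(bjd_min, dtype=float) - tzero) / period).astype(int)

# Hypothetical mid-eclipse times (BJD); yields cycles 0, 1 and 101
bjd_min = [2450021.779388, 2450021.867254, 2450030.653796]
print(cycle_numbers(bjd_min))
```

Rounding (rather than truncating) the cycle ratio makes the assignment robust against small timing errors, as long as those errors stay well below half an orbital period.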

As a first step I used IDL’s LINFIT routine to fit a straight line, with the MEASURE_ERRORS keyword set to an array holding the timing measurement errors (Table 2, 3rd column, Potter et al. 2011). This way the squared deviations are weighted by \(1/\sigma^2\), where \(\sigma\) is the standard timing error of each measurement. This is standard procedure and was also used in Potter et al. (2011). The average (mean) timing error of the 42 measurements is 6.0 seconds (the standard deviation is also 6.0 seconds), with 0.74 seconds the smallest and 17 seconds the largest error. I have also rescaled the timing measurements by subtracting the first measurement from all the others. Rescaling introduces nothing spooky to the analysis and has the advantage of avoiding dynamic-range problems; this is needed in particular for a later analysis using MPFIT. Using LINFIT, the resulting reduced \(\chi^2\) value was 95.22 (\(\chi^2 = 3808.82\) with (42-2) degrees of freedom), with the ephemeris (or computed timings) given as \[T(E) = \mathrm{BJD}~2450021.77890(6) + E \times 0.0878654291(1).\] The corresponding root-mean-square (RMS) scatter of the data around the best-fit line is 27.5 seconds, and the corresponding standard deviation is 27.7 seconds; as expected, the two are similar. To measure the scatter of data around any best-fit model I will use the RMS quantity. The RMS scatter is almost five times the average timing error and could be indicative of a systematic process.
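The weighted straight-line fit, its \(\chi^2\) statistics and the RMS scatter can be reproduced outside IDL. A minimal NumPy sketch (my own re-implementation of the LINFIT-style fit, not the actual analysis code):

```python
import numpy as np

def weighted_linear_fit(E, t, sigma):
    """Weighted least-squares fit t = T0 + P*E, analogous to IDL's LINFIT
    with MEASURE_ERRORS set: squared deviations are weighted by 1/sigma^2."""
    E, t, sigma = (np.asarray(a, dtype=float) for a in (E, t, sigma))
    w = 1.0 / sigma**2
    A = np.vstack([np.ones_like(E), E]).T            # design matrix [1, E]
    cov = np.linalg.inv(A.T @ (w[:, None] * A))      # covariance of (T0, P)
    T0, P = cov @ (A.T @ (w * t))
    model = T0 + P * E
    chi2 = np.sum(w * (t - model)**2)
    chi2_red = chi2 / (len(t) - 2)                   # 2 fitted parameters
    rms = np.sqrt(np.mean((t - model)**2))
    return T0, P, cov, chi2, chi2_red, rms

# Usage with synthetic, noise-free data: recovers T0=1.0, P=0.5 with chi^2 ~ 0
E = np.arange(10.0)
T0, P, cov, chi2, chi2_red, rms = weighted_linear_fit(E, 1.0 + 0.5*E, np.ones(10))
print(T0, P, chi2_red, rms)
```

The formal \(1\sigma\) parameter uncertainties are the square roots of the diagonal of the covariance matrix, which is how the quoted ephemeris errors are normally derived.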

As a test, the CURVEFIT routine has been used in a similar manner. The resulting reduced \(\chi^2\) was also 95.22, matching and thus confirming the LINFIT result. Setting the /NODERIVATIVE keyword does not change anything, and explicit expressions for the partial derivatives have been included. The RMS also agrees with the result obtained from LINFIT. However, the formal \(1\sigma\) uncertainties in the best-fit parameters (TZERO and PERIOD) are one order of magnitude smaller than the equivalent values obtained from LINFIT. The data and the best-fit line (obtained from LINFIT) are shown in Fig. \ref{linearfit}, with the residuals plotted in Fig. \ref{linearfit_res}. There is no visible difference when using the CURVEFIT results instead.

After fitting a straight line and visually inspecting the residual plots, I cannot see any convincing trend that would justify a quadratic ephemeris (linear plus a quadratic term). What I see is a sinusoidal variation around the best-fit line. Relative to the linear ephemeris, the first timing measurement arrives 20 s earlier than expected. The trend then decreases, rises again to 40 s at E=0, falls to a minimum of around 20 s, and increases again thereafter. There is no obvious quadratic trend in the residuals in Fig. \ref{linearfit_res}.

Although there is no obvious reason to include a quadratic term, I will nevertheless consider a quadratic model. I will do this by again using IDL’s CURVEFIT procedure as well as the MPFIT package (also IDL), a more sophisticated fitting tool utilizing the Levenberg-Marquardt least-squares minimization algorithm developed by Markwardt.

The results from CURVEFIT are surprising. The best-fit \(\chi^2\) value was 3718.89, yielding a reduced \(\chi^2\) of 95.36 with (42-3) degrees of freedom. The RMS scatter of the residuals around the quadratic model fit was 31 seconds. This means that the fit became worse compared to the linear ephemeris model. The resulting residual plot is shown in Fig. \ref{quadfit_res}. The corresponding best-fit parameters, along with formal uncertainties, for a quadratic ephemeris are \[\begin{aligned} T(E) &= T + P \times E + A \times E^2 \\ &= 2450021.778895(6) + 0.0878654269(3) \times E + 4.3(5)\times 10^{-14} \times E^2 \end{aligned}\]

I have also used MPFIT to fit a quadratic ephemeris to the Potter et al. (2011) timing data. The resulting \(\chi^2\) is 3718.94 with (42-3) degrees of freedom, yielding a reduced \(\chi^2\) of 95.36. This matches the CURVEFIT result and thus confirms it independently, which is really surprising. The RMS scatter of the data around the quadratic ephemeris is again around 31 seconds. I will not state the best-fit values of the three model parameters (and their uncertainties) as obtained from MPFIT.
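The linear-versus-quadratic comparison can be sketched with a weighted polynomial fit; the following Python snippet (illustrative synthetic data only, not the real timings) computes the reduced \(\chi^2\) for both models:

```python
import numpy as np

def fit_ephemeris(E, t, sigma, degree):
    """Weighted polynomial ephemeris fit; returns coefficients (highest power
    first) and the reduced chi^2. degree=1 is the linear ephemeris,
    degree=2 adds the quadratic term A*E^2."""
    E, t, sigma = (np.asarray(a, dtype=float) for a in (E, t, sigma))
    # np.polyfit weights multiply the residuals, so w = 1/sigma standardizes them
    coeffs = np.polyfit(E, t, degree, w=1.0/sigma)
    model = np.polyval(coeffs, E)
    chi2 = np.sum(((t - model) / sigma)**2)
    dof = len(t) - (degree + 1)
    return coeffs, chi2 / dof

# Illustration: for data that are genuinely linear plus noise, the quadratic
# term removes little chi^2 but costs a degree of freedom, so chi^2_nu can rise
rng = np.random.default_rng(1)
E = np.arange(42, dtype=float)
sigma = np.full(42, 0.5)
t = 2.0 + 0.1*E + rng.normal(0.0, 0.5, 42)
_, red_lin = fit_ephemeris(E, t, sigma, 1)
_, red_quad = fit_ephemeris(E, t, sigma, 2)
print(red_lin, red_quad)
```

This mirrors the behavior seen above: an unwarranted extra parameter lowers the raw \(\chi^2\) only marginally while the reduced \(\chi^2\) can increase.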

Based on the above results I cannot see that the residuals relative to a linear ephemeris justify the inclusion of a secular term in the form of a quadratic ephemeris. The reduced \(\chi^2\) increases with the extra parameter, which is not what one would expect for a warranted model extension. I will now continue and fit 1- and 2-companion models.

We have considered a linear + 1-LTT model (excluding secular changes as described by a quadratic ephemeris), again using MPFIT. The light-travel-time model is taken from Irwin (19??). We considered \(10^7\) initial guesses, drawn at random. The initial guesses for the reference epoch and binary period were taken from the best fit of the linear ephemeris model. Initial guesses for the semi-amplitude of the light-time orbit were taken from an estimate of the amplitude shown in Fig. 2. Initial guesses for the eccentricity covered the interval [0, 0.9995], and those for the argument of pericenter the interval [0, 360] degrees. The initial guess for the orbital period was also estimated from Fig. 2, and initial guesses for the time of pericenter passage were obtained from \(T_0\) and the orbital period of the light-time orbit. The methodology follows the techniques described in Hinse et al. (2012). Best-fit parameter uncertainties were obtained from the covariance matrix of the best-fit solution as returned by MPFIT and should be considered formal. The best fit had \(\chi^2 = 185.2\) with (42-7) degrees of freedom, resulting in a reduced \(\chi^2_{\nu} = 5.3\). The corresponding RMS scatter of the data points around the best fit is 15.7 seconds. The best-fit parameters are listed in Table \ref{BestFitParamsLinPlus1LTT} and shown in Fig. \ref{BestFitModel_LinPlus1LTT}. Recalling that the average timing error of the 42 measurements is 6 seconds, the RMS residuals are at a \(2.6\sigma\) level.

| Parameter | Best-fit value |
| --- | --- |
| \(T_0\) (BJD) | \(2,450,021.77924 \pm 3 \times 10^{-5}\) |
| \(P_0\) (days) | \(0.0878654289 \pm 2 \times 10^{-10}\) |
| \(a\sin I\) (AU) | \(0.00043 \pm 2 \times 10^{-5}\) |
| \(e\) | \(0.65 \pm 0.03\) |
| \(\omega\) (radians) | \(6.89 \pm 0.04\) |
| \(T_p\) (BJD) | \(2,408,616.0 \pm 50\) |
| \(P\) (days) | \(6020 \pm 35\) |
| RMS (seconds) | 15.7 |

\label{BestFitParamsLinPlus1LTT}
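The light-travel-time term of the linear + 1-LTT model can be sketched as follows. This is my own Python transcription of the standard Irwin-type LTT formulation with a simple Newton solver for Kepler's equation; it is not the actual MPFIT model function used in the fit:

```python
import numpy as np

AU_LIGHT_SEC = 499.004784  # light-travel time across 1 AU, in seconds

def kepler_E(M, e, tol=1e-12):
    """Solve Kepler's equation M = E - e*sin(E) by Newton iteration."""
    M = np.asarray(M, dtype=float)
    E = M.copy()  # adequate starting guess for moderate eccentricities
    for _ in range(100):
        dE = (E - e*np.sin(E) - M) / (1.0 - e*np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

def ltt_delay(t, asini_au, e, omega, Tp, P_ltt):
    """Irwin-type light-travel-time delay (seconds) at times t (days).
    asini_au: projected semi-major axis of the LTT orbit (AU),
    omega: argument of pericenter (rad), Tp: pericenter passage (days),
    P_ltt: period of the light-time orbit (days)."""
    t = np.asarray(t, dtype=float)
    M = 2.0*np.pi*np.mod((t - Tp)/P_ltt, 1.0)          # mean anomaly
    Ea = kepler_E(M, e)                                 # eccentric anomaly
    nu = 2.0*np.arctan2(np.sqrt(1+e)*np.sin(Ea/2),      # true anomaly
                        np.sqrt(1-e)*np.cos(Ea/2))
    K = asini_au * AU_LIGHT_SEC                         # semi-amplitude, seconds
    return K * ((1 - e**2)/(1 + e*np.cos(nu)) * np.sin(nu + omega)
                + e*np.sin(omega))
```

In the full model this delay is added to the linear ephemeris, \(T(E) = T_0 + P_0 E + \tau(t)\), and all seven parameters are fitted simultaneously.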

At the present stage, some inconsistencies were discovered in the reported timing uncertainties listed in Table 1 of Potter et al. (2011). For example, the timing uncertainty reported by Warren (1995) is 0.000023 days, while Potter et al. (2011) report 0.00003 and 0.00004 days. We tested the possibility that Potter et al. (2011) adopt timing uncertainties from the spread of the data around a best-fit linear regression. However, that does not seem to be the case: as a test, we used the five timing measurements from Beuermann (1988) as listed in Table 1 of Potter et al. (2011). We fitted a straight line using CURVEFIT as implemented in IDL and found a scatter of 0.00004 to 0.00005 days, depending on the metric used to measure the scatter around the best fit. The uncertainties quoted in Potter et al. (2011) are smaller by at least a factor of two. We conclude that Potter et al. (2011) must be in error when quoting timing uncertainties in their Table 1. Similar mistakes apply to the data listed in Ramsay (1994). Furthermore, after scrutinizing the literature for timing measurements of UZ For, we found several timing measurements that were omitted in Potter et al. (2011). For example, six eclipse timings were reported by Bailey (1991) with a uniform uncertainty of 0.00006 days, but Potter et al. (2011) only report three of the six. Likewise, a total of five new timings were reported by Ramsay (1994), but only one was listed in Potter et al. (2011). We cannot come up with a good explanation why those extra timing measurements should be omitted or discarded. All of the additional data points were presented in the original works alongside data points that were used in the analysis of Potter et al. (2011).

In this research we make use of all timing measurements that have been obtained with reasonable accuracy. We have therefore recompiled all available timing measurements from the literature and list them in Table \ref{NewTimingData}. The original HJD(UTC) time stamps from the literature were converted to the BJD(TDB) system using the on-line time utilities^{1} (Eastman et al., 2010). Not all sources of timing measurements provide explicit information on the time standard used; in those cases we assume that the HJD time stamps are valid in the UTC standard. This assumption is to some extent justified, since the first timing measurement was taken in August 1983, when the UTC time standard was in widespread use for astronomical observations. All new measurements presented in Potter et al. (2011) were taken directly from their Table 1. Some remarks are in order. Having found additional timing measurements in the literature (otherwise omitted in Potter et al. 2011), we decided to follow a different approach to estimate timing uncertainties. For measurements taken over a short time period, one can determine a best-fit line and estimate timing uncertainties from the data scatter. The underlying assumption is that no significant astrophysical signal (interaction between the binary components or additional bodies) is contained in the timing measurements over a few consecutive observing nights. Therefore, the scatter around a linear ephemeris should be a reasonable measure of how well the timings were measured. In other words, only the first-order effect of a linear ephemeris is observed; higher-order eclipse timing variation effects are negligible for data sets obtained during a few consecutive nights. The advantage is that for a given data set the same telescope/instrument was used, and weather conditions were likely not to have changed much from night to night.
Furthermore, most likely the same technique was applied to infer the individual time stamps of a given data set. In Table \ref{NewTimingData} we list the original uncertainties quoted in the literature as \(\sigma_{lit}\). We also list the uncertainty obtained from the scatter of the data around a best-fit linear regression line. The corresponding reduced \(\chi^2\) statistic for each fit is tabulated in the third column. From the reduced \(\chi^2\) of each data set one can scale the corresponding uncertainties such that \(\chi^2_{\nu} = 1\) is enforced (Bevington et al., 2003). This step is only permitted if high confidence in the applied model is justified; we think that this is the case when the time stamps have been obtained over a short time interval. Ultimately, however, the timing uncertainty depends on the sampling of the eclipse event at a sufficiently high signal-to-noise ratio. The Imamura (1998) data set was split in two, since those time stamps were obtained from two observing runs each lasting a few days. Furthermore, we have calculated three data-scatter metrics around the best-fit line: a) the root-mean-square, b) the standard deviation, and c) the standard deviation as given by Bevington (2003), defined as \[\sigma^2 = \frac{1}{N-2} \sum_{i=1}^{N}(y_{i} - a - bx_{i})^2
\label{BevEq6p15}\] where \(N\) is the number of data points, \(a, b\) are the two parameters of the linear line, and \((x_{i}, y_{i})\) is a given timing measurement at a given epoch. We have tested the dependence of the scatter on the weights used and found no difference in the scatter metrics when applying a weight of one for all measurements. Finally, some additional details need to be mentioned. We only inferred new timing uncertainties for data sets with more than two measurements. For a given data set we used the published ephemeris (orbital period) to calculate the eclipse epochs. For the time stamps presented in Bailey (1991) no ephemeris was stated; we therefore used their eclipse cycles as the independent variable to calculate a best-fit line. The reference epoch in each fit was placed in or near the middle of the data set. Two data points were discarded in the present analysis. We removed one time stamp from Ferrario (1989) because of its excessively large timing uncertainty. Another time stamp was removed from the new data presented in Potter et al. (2011), namely BJD(TDB) 2,454,857.36480850. This eclipse is duplicated, as it was also observed with the much larger SALT/BVIT instrument, resulting in a lower timing error; we therefore use only the SALT/BVIT measurement. The present analysis thus makes use of a total of 54 time stamps. The average (mean) timing error of the 54 measurements is 5.7 seconds (the standard deviation is 6.5 seconds), with 0.33 seconds the smallest and 26.5 seconds the largest error. We have also rescaled the timing measurements by subtracting the first time stamp from all the others. Rescaling introduces nothing spooky to the analysis and has the advantage of avoiding dynamic-range problems when carrying out the least-squares minimization. The total baseline of the data set spans 27 years.
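The per-data-set scatter metrics and the rescaling that enforces \(\chi^2_{\nu} = 1\) can be sketched as follows (a minimal Python re-implementation of the procedure, with hypothetical data; the Bevington estimator is Eq. \ref{BevEq6p15} above):

```python
import numpy as np

def scatter_metrics(x, y):
    """Unweighted straight-line fit; returns the three scatter metrics:
    (a) RMS of residuals, (b) sample standard deviation, and
    (c) the Bevington estimator sqrt(sum(r^2)/(N-2))."""
    b, a = np.polyfit(np.asarray(x, float), np.asarray(y, float), 1)  # slope b, intercept a
    r = np.asarray(y, float) - (a + b*np.asarray(x, float))
    rms = np.sqrt(np.mean(r**2))
    std = np.std(r, ddof=1)
    bev = np.sqrt(np.sum(r**2) / (len(r) - 2))
    return rms, std, bev

def rescale_errors(sigma, chi2_red):
    """Scale quoted uncertainties so the fit would yield chi^2_nu = 1.
    Since chi^2 scales as 1/sigma^2, multiply sigma by sqrt(chi^2_nu)."""
    return np.asarray(sigma, dtype=float) * np.sqrt(chi2_red)
```

For a perfectly linear data set all three metrics vanish; for real short-baseline eclipse runs they provide an uncertainty estimate that is independent of the values quoted in the original papers.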
