Our sample consists of 56 confirmed quasar light curves from the seventh data release of the Sloan Digital Sky Survey (Schneider et al. 2010). These quasars range in redshift from \(z = 0.19\) to \(z = 3.83\) and in luminosity from \(L = 10^{15}\,L_{\odot}\) to \(L = 10^{20}\,L_{\odot}\). The objects lie in the southern equatorial stripe known as Stripe 82, a 275 deg\(^{2}\) region of sky centered on the celestial equator with repeated sampling. Stripe 82 reaches 2 magnitudes deeper than SDSS single-pass data, down to magnitude 23.5 in the r band for galaxies, with a median seeing of 1.1" (Annis et al. 2014). Each light curve contains photometric information in two bands (g and r) spanning as much as 10 years. The number of epochs ranges from 29 to 81 observations in all photometric bands, with sampling intervals ranging from one day to two years; this irregular sampling complicates our analysis, as discussed later. PSF magnitudes are calibrated against a set of standard stars (Ivezić et al. 2007), reducing the photometric error to 1%. We then convert these magnitudes to fluxes for our analysis and convert the observed sampling intervals to the rest frame of each quasar. Following SDSS convention, we use asinh magnitudes (also referred to as "Luptitudes") for the flux conversion (Lupton, Gunn, & Szalay 1999; York et al. 2000).
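The conversion from asinh magnitudes to fluxes, and of observed time lags to the rest frame, can be sketched as follows. This is a minimal illustration, assuming the published SDSS softening parameters \(b\) for the g and r bands; fluxes are expressed as ratios \(f/f_0\) relative to a zero-magnitude source, and the function names are ours:

```python
import numpy as np

# SDSS softening ("b") parameters for the g and r bands
# (Lupton, Gunn & Szalay 1999; values from the SDSS documentation).
B = {"g": 0.9e-10, "r": 1.2e-10}

def asinh_mag_to_flux(mag, band):
    """Invert the SDSS asinh ("Luptitude") magnitude to a flux ratio f/f0,
    where f0 is the flux of a zero-magnitude source."""
    b = B[band]
    return 2.0 * b * np.sinh(-mag * np.log(10.0) / 2.5 - np.log(b))

def flux_to_asinh_mag(flux_ratio, band):
    """Forward transform: asinh magnitude from a flux ratio f/f0."""
    b = B[band]
    return -2.5 / np.log(10.0) * (np.arcsinh(flux_ratio / (2.0 * b)) + np.log(b))

def rest_frame_lag(dt_obs, z):
    """Convert an observed-frame time interval to the quasar rest frame."""
    return dt_obs / (1.0 + z)
```

For fluxes well above the softening scale \(b\), the asinh magnitude converges to the usual Pogson magnitude, so the two conversions agree for all but the faintest epochs.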

These objects also lie within the field of the *Kepler* K2 mission's Campaign 8. The *Kepler* space telescope's original purpose was planet finding, which required it to take many exposures quickly and consistently over long periods of time in order to detect small periodic dips in the light from potential planetary systems. After the failure of two of its four reaction wheels, the telescope's pointing was limited to its orbital plane, which is approximately the ecliptic. The K2 project was started to work within this limited mobility: the telescope observes single regions of the sky for approximately 75 days at a time, with observations every 30 minutes (Howell et al. 2014). Fortuitously, one of these regions overlaps Stripe 82, which means short-term time-series data will soon be available for these objects. This increase in sampling rate will allow us to probe far deeper into the short-term variability properties of these objects than SDSS can.


Analysis of astronomical time-dependent data sets requires methods of quantifying and parameterizing the properties of the observed process in order to learn about the underlying physical processes driving them. The most common method of parameterizing light-curve variability is the structure function, which relates the variation in brightness to the time interval over which the change is observed. The structure function can be characterized in a number of ways, the simplest of which is as a power law with slope and y-intercept as free parameters (e.g., Schmidt et al. 2010). While a useful tool, the structure function lacks the sophistication required to probe the complex behavior of AGN. Instead, we look to other tools commonly used in time-series analysis.
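A binned first-order structure function (the RMS magnitude difference as a function of time lag) for an irregularly sampled light curve can be sketched as follows; the function name and binning conventions are ours:

```python
import numpy as np

def structure_function(t, mag, bins):
    """First-order structure function of an irregularly sampled light curve:
    the RMS magnitude difference as a function of time lag, averaged in
    the supplied lag bins (bin edges)."""
    # All pairwise lags and squared magnitude differences.
    dt = np.abs(t[:, None] - t[None, :])
    dm2 = (mag[:, None] - mag[None, :]) ** 2
    iu = np.triu_indices(len(t), k=1)      # each pair once, no zero lags
    dt, dm2 = dt[iu], dm2[iu]
    sf = np.empty(len(bins) - 1)
    for i in range(len(bins) - 1):
        sel = (dt >= bins[i]) & (dt < bins[i + 1])
        sf[i] = np.sqrt(np.mean(dm2[sel])) if sel.any() else np.nan
    return sf
```

A power-law fit to the resulting \(\mathrm{SF}(\Delta t)\) then yields the slope and intercept parameters mentioned above.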

Before introducing these tools, we first establish the class of processes to which they apply. Most astronomical time series obey the properties of a stationary process. At the most basic level, a stationary light curve is one that has the same mean and variance regardless of where it is sampled. More formally, a process \(X_t,\ t \in \mathbb{Z}\) is said to be stationary if (i) \(X_t\) has finite variance for all \(t\), (ii) the expectation value of \(X_t\) is constant for all \(t\), and (iii) the autocovariance function satisfies \(\gamma_{X}(r,s) = \gamma_{X}(r+t, s+t)\) for all \(r,s,t \in \mathbb{Z}\). Property (iii) allows us to write the autocovariance function more simply: \(\gamma_{X}(r,s) = \gamma_{X}(r-s, s-s) = \gamma_{X}(r-s,0)\), or, substituting \(h = r-s\), \(\gamma_{X}(h) = \mathrm{Cov}(X_{t}, X_{t+h})\), the autocovariance of \(X_{t}\) at time lag \(h\).
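The sample analog of the autocovariance function can be estimated directly from an evenly sampled series. A minimal sketch, using the standard biased (\(1/n\)) estimator:

```python
import numpy as np

def sample_autocovariance(x, h):
    """Biased sample estimate of the autocovariance gamma(h) of a
    stationary series x.  The 1/n normalization (rather than 1/(n-h))
    keeps the estimated autocovariance sequence positive semi-definite."""
    n = len(x)
    xbar = x.mean()
    return np.sum((x[: n - h] - xbar) * (x[h:] - xbar)) / n
```

For a stationary series the estimate at lag 0 recovers the variance, and for uncorrelated data the estimates at nonzero lags scatter around zero.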

The simplest stationary process is one in which the individual observations are independent and identically distributed (IID) random variables. Such a process \(Z_{t}\), with zero mean and autocovariance function

\[\gamma_{Z}(h) =
\begin{cases}
\sigma^{2} & h = 0 \\
0 & h \neq 0 \\
\end{cases}\] is known as a *white noise process*, written \(Z_{t} \sim WN(0,\sigma^{2})\). White noise is rarely an interesting model on its own, but it serves as the fundamental building block for richer models: we can use a white noise process as a forcing term to build a useful set of linear difference equations for analyzing more complicated time series.

There exists a class of finite difference equations used in the analysis of discrete time series known as autoregressive moving average (ARMA) processes. These processes allow us to quantify the properties of a time series with a simple but thoroughly descriptive parametric structure. A stationary process \(\{X_t\}\) can be modeled by an ARMA(p,q) process if at every time \(t\)

\[X_t - \phi_1 X_{t-1} - \dots - \phi_p X_{t-p} = Z_t + \theta_1 Z_{t-1} + \dots + \theta_q Z_{t-q}\]

where \(\{Z_t\}\) is a white noise process with zero mean and variance \(\sigma^2\). For any autocovariance function \(\gamma(h)\) with \(\lim_{h\to\infty}\gamma(h) = 0\) and any integer \(k > 0\), there exists an ARMA(p,q) process with autocovariance function \(\gamma_X(h)\) such that \(\gamma_X(h) = \gamma(h)\) for all \(h \leq k\). This flexibility makes the ARMA process a very useful tool in the analysis and modeling of many different time series.
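A direct simulation of the ARMA recursion makes the stationary behavior of such processes easy to check numerically. The following is a sketch rather than a tuned implementation, with Gaussian white noise and a burn-in length of our choosing:

```python
import numpy as np

def simulate_arma(phi, theta, sigma, n, burn=500, seed=0):
    """Simulate an ARMA(p, q) process driven by Gaussian white noise
    WN(0, sigma^2), discarding an initial burn-in so that the returned
    samples are approximately drawn from the stationary distribution."""
    rng = np.random.default_rng(seed)
    p, q = len(phi), len(theta)
    z = rng.normal(0.0, sigma, n + burn + q)
    x = np.zeros(n + burn + q)
    for t in range(max(p, q), n + burn + q):
        ar = sum(phi[i] * x[t - 1 - i] for i in range(p))   # phi_1 ... phi_p
        ma = sum(theta[j] * z[t - 1 - j] for j in range(q)) # theta_1 ... theta_q
        x[t] = ar + z[t] + ma
    return x[-n:]
```

For an ARMA(1,1) process with \(\phi_1 = 0.5\), \(\theta_1 = 0.3\), and \(\sigma = 1\), the stationary variance is \(\sigma^2(1 + 2\phi_1\theta_1 + \theta_1^2)/(1-\phi_1^2) \approx 1.85\), which a long simulated realization reproduces.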

1. Introduce a little stochastic calculus (may not be necessary if we use the derivation example for AR(2)):
   - New definition of the limit
   - New definition of the derivative
   - Stochastic continuity
   - Stochastic integral (maybe introduce when actually needed)
   - Continuous white noise

2. Start CARMA derivation:
   - Introduce the AR(1) model (in the context of the currently popular method)
   - General-order AR model
   - General-order MA model
   - Introduce the stochastic integral briefly
   - Mixed AR/MA process (CARMA) (just do this in the next section)

In reality, the underlying phenomena driving AGN variability are not discrete processes. To properly understand the underlying physics and structure of AGN, we need a continuous analog of the ARMA process. A continuous-time ARMA (CARMA) process is the continuous-time counterpart of the discrete ARMA process. A system described by a CARMA(p,q) process obeys the stochastic differential equation

\[d^{p}f(t) + \alpha_{p-1}\, d^{p-1}f(t) + \dots + \alpha_{0}\, f(t) = \beta_{q}\, d^{q}w(t) + \beta_{q-1}\, d^{q-1}w(t) + \dots + w(t)\]

where \(d\) represents the change in a variable between times \(t\) and \(t + dt\), \(f\) represents the state of the system minus its mean, and \(w \sim WN(0, \sigma^{2})\) is a continuous-time white noise process representing the driving noise in the system due to non-linear effects; in our case, it represents temperature fluctuations due to magnetohydrodynamic instabilities in the accretion disk. In order for the process to be stationary we must require \(q < p\). The best-known example is the \(p=1\), \(q=0\) case, the CAR(1) process, also known in the astronomical community as a damped random walk (DRW).
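Because the CAR(1)/DRW process is an Ornstein-Uhlenbeck process, it can be simulated exactly at irregularly spaced times using its known conditional Gaussian distribution. A minimal sketch, with a parameterization (damping timescale \(\tau\) and asymptotic variance \(\sigma^2\)) of our own choosing:

```python
import numpy as np

def simulate_drw(t, tau, sigma2, mean=0.0, seed=0):
    """Exact simulation of a CAR(1) / damped-random-walk process at the
    (possibly irregular) times t.  tau is the damping timescale and
    sigma2 the asymptotic (long-term) variance; each step draws from the
    exact conditional Gaussian distribution of an OU process."""
    rng = np.random.default_rng(seed)
    x = np.empty(len(t))
    # Initialize from the stationary distribution N(0, sigma2).
    x[0] = rng.normal(0.0, np.sqrt(sigma2))
    for i in range(1, len(t)):
        a = np.exp(-(t[i] - t[i - 1]) / tau)
        x[i] = a * x[i - 1] + rng.normal(0.0, np.sqrt(sigma2 * (1.0 - a * a)))
    return mean + x
```

Because each step conditions only on the previous sample, arbitrary sampling gaps (such as the seasonal gaps in Stripe 82) are handled without interpolation.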

We construct the autoregressive polynomial as the characteristic polynomial of the autoregressive side of the CARMA process.
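Concretely, under the notation of the stochastic differential equation above (the explicit form below is a standard convention and should be checked against ours), the autoregressive polynomial is

\[A(z) = z^{p} + \alpha_{p-1} z^{p-1} + \dots + \alpha_{1} z + \alpha_{0},\]

whose roots govern the behavior of the process: stationarity requires every root of \(A(z)\) to have a negative real part.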
