# Data

Our sample consists of 56 confirmed quasar light curves from the seventh data release of the Sloan Digital Sky Survey (Schneider et al. 2010). These quasars range in redshift from $$z = 0.19$$ to $$z = 3.83$$ and in luminosity from $$L = 10^{15} L_{\odot}$$ to $$L = 10^{20} L_{\odot}$$. The samples were drawn from the Southern Equatorial Stripe, known as Stripe 82, a 275 square degree region of sky centered on the celestial equator with repeated sampling. It reaches 2 magnitudes deeper than SDSS single-pass data, going as deep as magnitude 23.5 in the r-band for galaxies, with median seeing of 1.1" (Annis et al. 2014). Each light curve contains photometric information in two bands (g and r) spanning as much as 10 years. The number of epochs ranges from 29 to 81 observations across all photometric bands, with sampling intervals ranging from one day to two years. This inconsistent sampling will lead to issues with our analysis, which will be discussed later. PSF magnitudes are calibrated against a set of standard stars (Ivezic et al. 2007), reducing the photometric error in our data to 1%. We then convert these magnitudes to fluxes for our analysis and convert the observed sampling intervals to the rest frame of each quasar. As is standard for SDSS, the magnitudes are asinh magnitudes (also referred to as "Luptitudes"; Lupton, Gunn, & Szalay 1999; York et al. 2000).
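The magnitude-to-flux and rest-frame conversions can be sketched as below. This is a minimal illustration, not our production pipeline: the function names are ours, fluxes are returned in units of the AB zero-point flux, and the softening parameters $b$ are the published SDSS per-band values.

```python
import numpy as np

# SDSS softening parameters b for the asinh ("Luptitude") system,
# per band, from the SDSS photometric calibration documentation.
ASINH_B = {"g": 0.9e-10, "r": 1.2e-10}

def luptitude_to_flux(mag, band):
    """Invert the asinh magnitude of Lupton, Gunn, & Szalay (1999):
    m = -2.5/ln(10) * (asinh(f/(2b)) + ln(b)), solving for f (in units
    of the AB zero-point flux f0)."""
    b = ASINH_B[band]
    return 2.0 * b * np.sinh(-mag * np.log(10) / 2.5 - np.log(b))

def to_rest_frame(t_obs, z):
    """Shift observed sampling times to the quasar rest frame
    by dividing out the cosmological time dilation factor (1+z)."""
    return np.asarray(t_obs) / (1.0 + z)
```

For large fluxes the asinh magnitude approaches the usual Pogson magnitude; the inversion above is exact in both regimes.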

These samples also lie within the field of the Kepler K2 mission's Campaign 8. The Kepler space telescope's original purpose was planet finding, which required it to take many exposures quickly and consistently over long periods of time in order to look for small periodic dips in the light from potential planetary systems. After the failure of two of its four reaction wheels, the telescope's pointing was limited to its orbital plane, which is approximately the ecliptic. Given this limited mobility, the K2 project was started, in which the telescope observes single regions of the sky for approximately 75 days at a time with observations every 30 minutes (Howell et al. 2014). Fortuitously, one of these regions overlaps with Stripe 82, which means short-term time-series data will soon be available for these objects. This increase in sampling rate will allow us to probe far deeper into the short-term variability properties of these objects than SDSS can.


# Continuous-Time Auto Regressive Moving Average

## ARMA

Analysis of astronomical time-dependent data sets requires methods of quantifying and parameterizing the properties of the observed process in order to learn about the underlying physical processes driving them. The most common method of parameterizing light curve variability is currently the structure function. The structure function is useful because it relates the variation in brightness to the period of time over which we observe the change. It can be characterized in a number of ways, the simplest of which is a power law with slope and y-intercept as free parameters (e.g., Schmidt et al. 2010). While a useful tool, the structure function lacks the sophistication required to probe the complex behavior of AGN. Instead, we look to other tools commonly used in time-series analysis.
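As an illustration, a first-order structure function can be estimated by binning the squared magnitude differences of all epoch pairs by time lag. The binning scheme and function names below are our own, and this sketch omits the noise correction applied in practice.

```python
import numpy as np

def structure_function(t, mag, bins):
    """First-order structure function: RMS magnitude difference as a
    function of time lag, averaged within the given lag bins."""
    t, mag = np.asarray(t), np.asarray(mag)
    i, j = np.triu_indices(len(t), k=1)        # all unique epoch pairs
    lags = np.abs(t[j] - t[i])
    dmag2 = (mag[j] - mag[i]) ** 2
    sf = np.full(len(bins) - 1, np.nan)        # NaN marks empty bins
    for k in range(len(bins) - 1):
        sel = (lags >= bins[k]) & (lags < bins[k + 1])
        if sel.any():
            sf[k] = np.sqrt(dmag2[sel].mean())
    return sf
```

Fitting a power law to the binned values then yields the slope and intercept used as variability parameters.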


Most astronomical time series obey the properties of a stationary process, a notion we define before introducing those tools. At the most basic level, a stationary light curve is one that has the same mean and variance regardless of where the light curve is sampled. More formally, a process $$X_t, t \in \mathbb{Z}$$ is said to be stationary if (i) $$X_t$$ has finite variance for all $$t$$, (ii) the expectation value of $$X_t$$ is constant for all $$t$$, and (iii) the autocovariance function satisfies $$\gamma_{X}(r,s) = \gamma_{X}(r+t, s+t)$$ for all $$r,s,t \in \mathbb{Z}$$. Property (iii) allows us to define the autocovariance function more simply: $$\gamma_{X}(r,s) = \gamma_{X}(r-s, s-s) = \gamma_{X}(r-s,0)$$, or, substituting $$h = r-s$$, we can write the autocovariance function of a stationary process as $$\gamma_{X}(h) = \mathrm{Cov}(X_{t}, X_{t+h})$$, the autocovariance of $$X_{t}$$ at time lag $$h$$.

The simplest stationary process is one where the individual observations are independent and identically distributed (IID) random variables. Such processes, $$Z_{t}$$, with zero mean and autocovariance function

$\gamma_{Z}(h) = \begin{cases} \sigma^{2} & h = 0 \\ 0 & h \neq 0 \end{cases}$ are known as white noise processes and are written as $$Z_{t} \sim WN(0,\sigma^{2})$$. Though featureless on its own, white noise is the building block for richer models: we can use a white noise process as a forcing term to build a useful set of linear difference equations with which to analyze more complicated time series.
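The autocovariance function above is easy to verify numerically: the sample autocovariance of simulated white noise is close to $\sigma^{2}$ at lag zero and close to zero at every other lag. The estimator, seed, and sample size here are illustrative.

```python
import numpy as np

def sample_autocovariance(x, h):
    """Biased sample autocovariance at lag h (dividing by n, as is
    conventional for stationary-process estimators)."""
    x = np.asarray(x) - np.mean(x)
    n = len(x)
    return np.dot(x[: n - h], x[h:]) / n

rng = np.random.default_rng(42)
z = rng.normal(0.0, 2.0, size=100_000)   # WN(0, sigma^2) with sigma = 2

gamma0 = sample_autocovariance(z, 0)     # close to sigma^2 = 4
gamma5 = sample_autocovariance(z, 5)     # close to 0
```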

There exists a class of finite difference equations used in the analysis of discrete time series known as autoregressive-moving average (ARMA) processes. These processes allow us to quantify properties of a time series with a simple but thoroughly descriptive parametric structure. A stationary process $$\{X_t\}$$ can be modeled by an ARMA(p,q) process if at every time $$t$$

$X_t - \phi_1X_{t-1} - ... - \phi_pX_{t-p} = Z_t + \theta_1Z_{t-1} + ... + \theta_qZ_{t-q}$

where $$\{Z_t\}$$ is a white noise process with zero mean and variance $$\sigma^2$$. For any autocovariance function $$\gamma(h)$$ for which $$\lim_{h\to\infty}\gamma(h) = 0$$ and any integer $$k > 0$$, there exists an ARMA(p,q) process with autocovariance function $$\gamma_X(h)$$ such that $$\gamma_X(h) = \gamma(h)$$ for all $$h \leq k$$. These relations make the ARMA process a very useful tool in the analysis and modeling of many different time-series.
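The difference equation above can be simulated directly. The coefficients below are illustrative, chosen so the roots of the autoregressive characteristic polynomial lie outside the unit circle and the process is stationary.

```python
import numpy as np

def simulate_arma(phi, theta, sigma, n, rng):
    """Simulate an ARMA(p, q) process from the difference equation
    X_t = sum_i phi_i X_{t-i} + Z_t + sum_j theta_j Z_{t-j},
    with {Z_t} ~ WN(0, sigma^2)."""
    p, q = len(phi), len(theta)
    z = rng.normal(0.0, sigma, size=n)
    x = np.zeros(n)
    for t in range(n):
        ar = sum(phi[i] * x[t - 1 - i] for i in range(p) if t - 1 - i >= 0)
        ma = sum(theta[j] * z[t - 1 - j] for j in range(q) if t - 1 - j >= 0)
        x[t] = ar + z[t] + ma
    return x

rng = np.random.default_rng(0)
# ARMA(2,1): roots of 1 - 0.5 z - 0.2 z^2 are at |z| > 1, so stationary.
x = simulate_arma(phi=[0.5, 0.2], theta=[0.3], sigma=1.0, n=50_000, rng=rng)
```

The simulated series has constant mean and finite variance, as required by the stationarity conditions above.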

## Taking the Continuous Limit

1. Introduce a little stochastic calculus (the following might not be necessary if we use the derivation example for AR(2))
   - New definition of the limit
   - New definition of the derivative
   - Stochastic continuity
   - Stochastic integral (maybe introduce when actually needed)

### Continuous White Noise Process

- Continuous white noise

- Start CARMA derivation
- Introduce AR(1) model (in context of the current popular method)
- General order AR model
- General order MA model
- Introduce the stochastic integral briefly
- Mixed AR/MA process (CARMA) (just do this in the next section)

## CARMA

In reality, the underlying phenomena driving AGN variability are not discrete processes. In order to get a proper understanding of the underlying physics and structure of the AGN, we need a continuous analog to the ARMA process. A continuous-time ARMA (CARMA) process is the continuous case of the discrete ARMA process. A system described by a CARMA process obeys the stochastic differential equation

$d^{p}f(t) + \alpha_{p-1}\,d^{p-1}f(t) + ... + \alpha_{0}f(t) = \beta_{q}\,d^{q}w(t) + \beta_{q-1}\,d^{q-1}w(t) + ... + w(t)$

where $$d^{k}$$ denotes the $$k$$th-order change in a variable between times $$t$$ and $$t + dt$$, $$f$$ represents the state of the system minus its mean, and $$w \sim WN(0, \sigma^{2})$$ is a continuous-time white noise process representing the driving noise in the system due to non-linear effects. In our case, it represents temperature fluctuations due to magnetohydrodynamic instabilities in the accretion disk. In order for the process to be stationary we must require that $$q < p$$. The most well-known example is the $$p=1$$, $$q=0$$ case, the CAR(1) process, also known in the astronomical community as a damped random walk (DRW).
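For the DRW the stochastic differential equation can be integrated exactly over each sampling gap, which makes simulation on an irregular grid straightforward. In this sketch $\tau = 1/\alpha_{0}$ is the damping timescale and $\sigma^{2}\tau/2$ is the stationary variance; the function name and parameterization are ours.

```python
import numpy as np

def simulate_drw(t, tau, sigma, rng):
    """Exact simulation of a CAR(1) / damped random walk on an
    arbitrary (possibly irregular) time grid."""
    t = np.asarray(t)
    var_inf = sigma**2 * tau / 2.0          # stationary variance
    x = np.empty(len(t))
    x[0] = rng.normal(0.0, np.sqrt(var_inf))
    for i in range(1, len(t)):
        a = np.exp(-(t[i] - t[i - 1]) / tau)    # decay over the gap
        # the lost variance is replenished by the driving white noise
        x[i] = a * x[i - 1] + rng.normal(0.0, np.sqrt(var_inf * (1 - a**2)))
    return x
```

Because the update is exact rather than an Euler step, the gaps between observations can be arbitrarily long without loss of accuracy.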

We construct the auto-regressive polynomial as the characteristic polynomial of the auto-regressive side of the CARMA process.

$A(z) = \sum_{k=0}^{p} \alpha_{k}z^{k}, \qquad \alpha_{p} = 1$

The roots of the auto-regressive polynomial, $$\rho_{k}$$, indicate the timescales on which variability presents itself. The autocovariance function of a CARMA(p,q) process goes as

$R(\tau) = \sigma^2\sum_{k=1}^{p}\frac{e^{\rho_{k}\tau}\big[\sum_{l=0}^{q}\beta_{l}(-\rho_{k})^{l}\big]\big[\sum_{l=0}^{q}\beta_{l}\rho_{k}^{l}\big]}{-2Re(\rho_{k})\prod_{l=1,l\ne k}^{p}(\rho_{l}-\rho_{k})(\rho_{l}^{*}+\rho_{k})}$

where $$Re(\cdot)$$ is the real part and $$z^{*}$$ is the complex conjugate of $$z$$. The roots here are assumed to be unique, a requirement imposed by the operation of the Kalman filter, which will be discussed later.
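The formula for $R(\tau)$ above translates directly into code. This sketch assumes distinct roots, takes $\beta_{0} \dots \beta_{q}$ as the moving-average coefficients, and can be checked against the CAR(1) limit, where $R(\tau) = \frac{\sigma^{2}\tau_{d}}{2}e^{-\tau/\tau_{d}}$ for a single root $\rho = -1/\tau_{d}$.

```python
import numpy as np

def carma_acf(tau, sigma, ar_roots, ma_coefs):
    """Autocovariance R(tau) of a CARMA(p, q) process, summing over the
    (assumed distinct) roots rho_k of the autoregressive polynomial.
    ma_coefs holds beta_0 ... beta_q."""
    rho = np.asarray(ar_roots, dtype=complex)
    beta = np.asarray(ma_coefs, dtype=float)
    tau = np.abs(np.asarray(tau, dtype=float))
    r = np.zeros_like(tau, dtype=complex)
    for k, rk in enumerate(rho):
        # numerator: [sum_l beta_l (-rho_k)^l][sum_l beta_l rho_k^l] e^{rho_k tau}
        num = (np.polyval(beta[::-1], -rk)
               * np.polyval(beta[::-1], rk)
               * np.exp(rk * tau))
        # denominator: -2 Re(rho_k) prod_{l != k} (rho_l - rho_k)(rho_l^* + rho_k)
        den = -2.0 * rk.real
        for l, rl in enumerate(rho):
            if l != k:
                den *= (rl - rk) * (np.conj(rl) + rk)
        r = r + num / den
    return sigma**2 * r.real
```

Complex-conjugate root pairs produce exponentially damped oscillations in $R(\tau)$, i.e., quasi-periodic variability.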

- Then the PSD
- Then maybe a little bit on the structure function
- Then introduce the Green's function

# The Kalman Filter

Testing a stochastic model for accuracy requires a method of recreating our observed data from our chosen parameters. Since the CARMA model is stochastic, unguided attempts to model our light curve could produce an infinite number of different results. We require a method of simulation that takes into account the value of the light curve at each observation, makes a prediction of what the next observation will be, compares the observation to the prediction, and finally corrects itself for the next step, as in a Markov chain. To do this, we implement a Kalman filter.

To make predictions about the future state of our light curve given its current state and a specific CARMA model, we need to represent our observations in state space. The state space representation is an estimation of the true value of the light curve at some point in time. By representing our light curve in state space, we can apply a simple transformation to predict the next step. Let $$X_{i}$$ be the state space vector of our light curve at observation $$i$$. We can predict the next observation by

$X_{i+1} = F_{i}X_{i}+V_{i+1}$

where $$F_{i}$$ is the transformation matrix at that observation, determined by our CARMA model, and $$V_{i}$$ is a vector of zero-mean Gaussian random variables with covariance matrices $$Q_{i}$$, also determined by our CARMA model, representing the intrinsic system noise.

Observations of a system are generally made by observing some distortion of the system, with some observation error. We can transform our state space representation into the observed variables, $$Y_{i}$$, via the observation equation

$Y_{i} = G_{i}X_{i}+W_{i}$

where $$G_{i}$$ is the observation matrix, which in our case simply encodes whether or not the value of the light curve was observed, and $$W_{i}$$ is a zero-mean random vector with known covariance matrices $$R_{i}$$, representing the observation error.

The operation of the Kalman filter can be broken down into a prediction step and an update step. The prediction step finds the best linear predictor of our system, $$\hat{X_{i}}$$, where we expect the system to be according to our CARMA model; this is done by applying the transformation matrix to the previous estimate. The update step compares the prediction to the observed value and corrects for the difference, weighting the prediction error by the Kalman gain, to find the best linear estimator, $$X^{\dagger}_{i}$$, of the state of the system. Because each innovation (the difference between observation and prediction) is Gaussian with a variance supplied by the filter, the likelihood of the full light curve under a given CARMA model is the product of the innovation likelihoods. Estimation of these vectors, and their corresponding covariance matrices, $$\Sigma_{i}$$ and $$\Sigma^{\dagger}_{i}$$, is accomplished through the following recursive algorithm:
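For the scalar CAR(1)/DRW case the full recursion fits in a few lines. This is a generic textbook Kalman filter sketch, not carma_pack's implementation; the parameterization ($\tau$, $\sigma$, $\mu$) and function name are ours.

```python
import numpy as np

def drw_loglike(t, y, yerr, tau, sigma, mu):
    """Kalman-filter log-likelihood of a CAR(1)/DRW model for an
    irregularly sampled light curve (scalar state space).

    Prediction: x_pred = a * x_est, with a = exp(-dt/tau).
    Update:     x_est = x_pred + K * innovation, K the Kalman gain.
    """
    var_inf = sigma**2 * tau / 2.0      # stationary variance of the DRW
    x_est, p_est = 0.0, var_inf         # start from the stationary state
    loglike, t_prev = 0.0, t[0]
    resid = y - mu                      # work with mean-subtracted flux
    for i in range(len(t)):
        a = np.exp(-(t[i] - t_prev) / tau)
        # --- prediction step ---
        x_pred = a * x_est
        p_pred = a**2 * p_est + var_inf * (1.0 - a**2)
        # --- update step ---
        s = p_pred + yerr[i]**2         # innovation variance
        k_gain = p_pred / s
        innov = resid[i] - x_pred
        x_est = x_pred + k_gain * innov
        p_est = (1.0 - k_gain) * p_pred
        # accumulate the Gaussian log-likelihood of this innovation
        loglike += -0.5 * (np.log(2 * np.pi * s) + innov**2 / s)
        t_prev = t[i]
    return loglike
```

For higher-order CARMA models the scalars become the state vectors and covariance matrices defined above, but the prediction/update structure is identical.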

# Fitting CARMA Models

## CARMA_Pack

Computationally analyzing and fitting CARMA models requires a fairly sophisticated package. Though many scripting and data analysis languages, such as R and Matlab, offer libraries for ARMA analysis, few offer tools for the analysis of continuous-time data sets. For our analysis we use carma_pack, a Python library implemented in C++. Carma_pack was originally written for the analysis of astronomical time series, specifically AGN. It has been used successfully to derive the parametric structure of AGN light curves taken by the Kepler satellite, and has been shown to be a useful tool in the analysis of SDSS light curves (Kelly et al. 2014). This package allows us to conveniently fit CARMA processes to light curves, determine the optimal model orders, and perform basic stochastic analysis of our best-fit models. Carma_pack fits CARMA processes to light curves using the Metropolis-Hastings Markov chain Monte Carlo method (Metropolis et al. 1953; Hastings 1970), with a Kalman filter providing the likelihood evaluations for the proposals.

## Model Orders

In order to determine the CARMA(p,q) model that best fits a specific light curve, we need to know the best possible values of $$p$$ and $$q$$. One method is to fit CARMA models for every combination of $$p$$ and $$q$$ up to some limit and compare the accuracy of each model by some metric. The metric by which we compare models should take into account the complexity of the chosen model, as an overly complex model will begin to fit the noise in the data rather than the underlying process. A commonly used metric for the comparison of time-series models is the Akaike Information Criterion (AIC). This criterion allows us to select the simplest model with the best fit by comparing likelihood estimates for each model while penalizing model complexity. The best model is the one which minimizes the AIC. The AIC for a specific CARMA model is

$AIC = 2k - 2\ln(L)$

where $$k$$ is the number of model parameters, in this case $$k = p + q + 1$$, and $$L$$ is the likelihood of the CARMA process produced by the Kalman filter. Since we have a finite number of samples for each light curve we analyze, we use a corrected version of the AIC, the AICc, which takes into account the sample size $$n$$.

$AICc = AIC + \frac{2k(k+1)}{n-k-1}$
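The resulting model selection loop is simple. In this sketch the dictionary of maximized log-likelihoods stands in for the per-model MCMC fits, which are not shown; the function names are ours.

```python
import numpy as np

def aicc(loglike, k, n):
    """Corrected Akaike Information Criterion for k parameters
    and n samples: AICc = AIC + 2k(k+1)/(n-k-1)."""
    aic = 2 * k - 2 * loglike
    return aic + 2 * k * (k + 1) / (n - k - 1)

def best_model_order(loglikes, n):
    """Pick the (p, q) pair minimizing the AICc, with k = p + q + 1.
    `loglikes[(p, q)]` holds the maximized log-likelihood of each fit."""
    best, best_val = None, np.inf
    for (p, q), ll in loglikes.items():
        val = aicc(ll, k=p + q + 1, n=n)
        if val < best_val:
            best, best_val = (p, q), val
    return best, best_val
```

Note that the correction term grows as $k$ approaches $n$, strongly penalizing complex models on the sparsest light curves.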

# Canonical Light Curves

## SDSS J134342.5-004243.8 (sample 36 in r-band)

This sample is a good example of a light curve well fit by a CARMA(2,1) model. We will use this example for much of the discussion, since most of its plots are relatively clean and easy to understand compared to some of the poorer fits. The PSD estimate is very simil