\subsection{ARMA}

%**Talk about using white noise to drive the process
%define stationary
%are lightcurves stationary?
%**mention light curves as the time-series we're talking about

Analysis of time-dependent astronomical data sets requires methods of quantifying and parameterizing the properties of the observed process in order to learn about the underlying physical processes driving them. The most common parameterization of light-curve variability at present is the structure function, which is useful because it relates the variation in brightness to the length of time over which we observe the change. The structure function is characterized by a power law, $SF(\tau) \propto \tau^{\gamma}$, so that in log--log space its slope and y-intercept are the two free parameters. While a useful tool, it lacks the sophistication required to probe the complex behavior of AGN, whose complicated structure requires far more parameters to model effectively. Instead, we look to other tools commonly used in time-series analysis. Most astronomical time series obey the properties of a stationary process.
%why?

A process $X_t$, $t \in \Z$, is said to be a {\em stationary process} if (i) $X_t$ has finite variance for all $t$, (ii) the expectation value of $X_t$ is constant for all $t$, and (iii) the autocovariance function satisfies $\gamma_{X}(r,s) = \gamma_{X}(r+t, s+t)$ for all $r,s,t \in \Z$.
%What does this do for us? What are some useful properties relating to ARMA models?
% Can redefine the Autocovariance function (talk about relation to structure function earlier, why it's important to what we want to know)

Property (iii) allows us to define the autocovariance function in a simpler way: choosing $t = -s$ gives $\gamma_{X}(r,s) = \gamma_{X}(r-s, 0)$, so, substituting $h = r-s$, we can write the autocovariance function of a stationary process as $\gamma_{X}(h) = \mathrm{Cov}(X_{t}, X_{t+h})$, the autocovariance of $X_{t}$ at time lag $h$.
%should we talk about white noise processes first? Probably, use the relation we found from the autocovariance function to define it. Then build the ARMA model from there.

The simplest of these stationary processes is one where the individual observations are independent and identically distributed (IID) random variables. Such a process $Z_{t}$, with zero mean and autocovariance function
$$\gamma_{Z}(h) =
\begin{cases}
\sigma^{2} & h = 0 \\
0 & h \neq 0
\end{cases}$$
is known as a {\em white noise process} and is written $Z_{t} \sim WN(0,\sigma^{2})$. We can use a white noise process as a forcing term to build a useful set of linear difference equations for analyzing a more complicated time series.

There exists a class of finite difference equations used in the analysis of discrete time series known as autoregressive moving-average (ARMA) processes. These processes give us a way of inspecting the behavior of a time series with a simple but thoroughly descriptive parametric structure. A stationary process $\{X_t\}$ can be modeled by an ARMA(p,q) process if at every time $t$
$$X_{t} - \phi_{1}X_{t-1} - \cdots - \phi_{p}X_{t-p} = Z_{t} + \theta_{1}Z_{t-1} + \cdots + \theta_{q}Z_{t-q},$$
where $Z_{t} \sim WN(0,\sigma^{2})$ and $\phi_{1},\ldots,\phi_{p}$ and $\theta_{1},\ldots,\theta_{q}$ are the autoregressive and moving-average coefficients, respectively.
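Equivalently, defining the backshift operator $B$ by $B^{j}X_{t} = X_{t-j}$, the ARMA(p,q) relation can be written compactly as $\phi(B)X_{t} = \theta(B)Z_{t}$, with $\phi(z) = 1 - \phi_{1}z - \cdots - \phi_{p}z^{p}$ and $\theta(z) = 1 + \theta_{1}z + \cdots + \theta_{q}z^{q}$.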
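As a concrete illustration of driving a process with white noise, the short sketch below simulates an ARMA($p$,$q$) series by iterating the difference equation above. It is a minimal example, not our analysis code; the function name \texttt{simulate\_arma}, the NumPy implementation, and the example coefficients are all illustrative assumptions.

\begin{verbatim}
import numpy as np

def simulate_arma(phi, theta, sigma=1.0, n=1000, burn_in=200, seed=0):
    """Iterate X_t = phi_1 X_{t-1} + ... + phi_p X_{t-p}
                   + Z_t + theta_1 Z_{t-1} + ... + theta_q Z_{t-q},
    with Z_t drawn as Gaussian white noise, Z_t ~ WN(0, sigma^2)."""
    rng = np.random.default_rng(seed)
    p, q = len(phi), len(theta)
    z = rng.normal(0.0, sigma, size=n + burn_in)  # white noise forcing term
    x = np.zeros(n + burn_in)
    for t in range(max(p, q), n + burn_in):
        ar = sum(phi[i] * x[t - 1 - i] for i in range(p))    # AR part
        ma = sum(theta[j] * z[t - 1 - j] for j in range(q))  # MA part
        x[t] = ar + z[t] + ma
    return x[burn_in:]  # drop the transient so the sample is ~stationary

# Example: one realization of an ARMA(2,1) process
x = simulate_arma(phi=[0.75, -0.25], theta=[0.65])
\end{verbatim}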
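The lag form of the autocovariance function also suggests a direct empirical check: estimate $\gamma_{X}(h)$ from a realization and verify that white noise gives $\gamma_{Z}(0) \approx \sigma^{2}$ and $\gamma_{Z}(h) \approx 0$ for $h \neq 0$. The estimator below (the standard $1/n$ sample autocovariance) is again only a sketch, assuming NumPy is imported as above.

\begin{verbatim}
def sample_autocovariance(x, h):
    """Estimate gamma(h) = Cov(X_t, X_{t+h}) from one realization."""
    n = len(x)
    xbar = x.mean()
    return np.sum((x[: n - h] - xbar) * (x[h:] - xbar)) / n

z = np.random.default_rng(1).normal(0.0, 1.0, size=5000)  # Z_t ~ WN(0, 1)
print([round(sample_autocovariance(z, h), 3) for h in range(4)])
# expect roughly [1.0, 0.0, 0.0, 0.0]
\end{verbatim}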