CARMA Models and MCMC Methods

Markov Chains

A Markov chain is a stochastic process, i.e. a random process that evolves over time or space, for which the probability of a state depends only on the previous state. Typically, a Markov chain describes a process with a countable number of states taking discrete steps in time. Given a state, the next state is chosen by a random variable that selects among the states in the chain, possibly including the current state. Thus, a walk can be taken between successive states that "chains" the states together and is entirely random. The property that a state's probability depends only on the previous state and none before it is known as the Markov property, also called "memorylessness". If, given that a system is in a particular state, we attempt to walk backwards through its Markov chain to deduce the previous states, we must consider each state that leads to the current one, then each state that leads to each of those, and so on, until we can say nothing about which state the system was more likely to have occupied a certain number of time steps ago. Similarly, if we begin with the system in a particular state and follow the tree of all possible future states, the total probability of each state converges to a value that is independent of the starting state. In this way, the system "forgets" the state it was in many time steps ago.
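This "forgetting" can be sketched numerically. The following example (a hypothetical three-state chain with illustrative transition probabilities) propagates two very different initial distributions forward and shows that they converge to the same distribution:

```python
import numpy as np

# Hypothetical 3-state transition matrix; row i gives the
# probabilities of moving from state i to each other state.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])

# Two very different starting distributions.
d1 = np.array([1.0, 0.0, 0.0])  # certainly in state 0
d2 = np.array([0.0, 0.0, 1.0])  # certainly in state 2

# Evolve both forward many steps: d <- d @ P at each step.
for _ in range(50):
    d1 = d1 @ P
    d2 = d2 @ P

# After many steps the two distributions agree: the chain
# has "forgotten" which state it started in.
print(np.allclose(d1, d2))  # -> True
```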

In general, Monte Carlo methods are numerical methods that use many draws of a random variable to estimate an unknown quantity or approximate a probability distribution. By the law of large numbers, with a large enough number of draws the estimate (for example, the mean of the samples) converges to a deterministic value that depends on the problem itself, not on the particular samples drawn. For example, the expected value of the roll of a standard die can be approximated by rolling the die several times and taking the mean of all rolls (the sample mean). As the number of rolls increases, the sample mean can be expected to converge to the true value.
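The die example can be sketched in a few lines (the seed and number of rolls are arbitrary choices for reproducibility):

```python
import random

random.seed(0)  # fixed seed so the result is reproducible

n_rolls = 100_000
rolls = [random.randint(1, 6) for _ in range(n_rolls)]
sample_mean = sum(rolls) / n_rolls

# The true expected value of a fair die is 3.5; the sample
# mean approaches it as n_rolls grows.
print(sample_mean)
```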

Stationary Processes

There exists a class of time-dependent processes known as *stationary processes* which have a few attributes making them useful for the analysis of a time series. A process is said to be stationary if its expectation value and variance remain constant at any time and its autocovariance function for a given time lag is constant over the entire process.

The simplest stationary process is one in which the value of the system at any time is an independently and identically distributed random variable with zero mean and variance \(\sigma^{2}\). These processes are known as *white noise* processes.

AR Processes

It is useful to represent many time series as a Markov chain in which the current value of the system depends on the value of the system at some previous time. Such a process is *autoregressive*. The simplest case of a time series that depends on itself is known as the AR(1) process. The AR(1) process is simply a process in which the current value of the system is the previous value scaled by some amount, plus an additional random term representing the random driving forces within the system. We can write such a time series \(X_t\) as

\(X_{t} = \phi X_{t-1} + \epsilon(t)\)

where \(\epsilon(t)\) is a white noise process with some variance \(\sigma^{2}\).
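An AR(1) process is straightforward to simulate directly from the recursion above. The coefficient and noise level below are illustrative choices; \(|\phi| < 1\) is the standard condition for the process to be stationary, in which case its variance is the known closed-form value \(\sigma^{2}/(1-\phi^{2})\):

```python
import numpy as np

rng = np.random.default_rng(42)

phi = 0.7      # autoregressive coefficient; |phi| < 1 for stationarity
sigma = 1.0    # white-noise standard deviation
n = 10_000

eps = rng.normal(0.0, sigma, size=n)  # white noise epsilon(t)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]   # X_t = phi * X_{t-1} + eps(t)

# For a stationary AR(1), the theoretical variance is
# sigma^2 / (1 - phi^2); the sample variance should be close.
print(x.var(), sigma**2 / (1 - phi**2))
```

The lag-1 sample autocorrelation of the simulated series should also come out close to \(\phi\), which is a quick sanity check on the simulation.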

We can represent a stochastic time series in terms of white noise processes.

It is possible to represent a time series as a linear combination of previous values of the system set equal to a white noise process. Since \(\epsilon(t)\) is a stationary process, it is useful to use a representation in which a linear combination of the values of the system at different time lags is stationary as well. We can do this by simply solving for \(\epsilon(t)\):

\(\epsilon(t) = X_t - \phi X_{t-1}\)
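This rearrangement can be checked numerically: simulating an AR(1) series from known white noise and then applying \(\epsilon(t) = X_t - \phi X_{t-1}\) recovers the driving noise terms (parameter values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
phi, n = 0.7, 5000

# Simulate an AR(1) series driven by known white noise.
eps = rng.normal(0.0, 1.0, size=n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

# Solving for epsilon(t) = X_t - phi * X_{t-1} recovers the
# white-noise driving terms.
eps_hat = x[1:] - phi * x[:-1]
print(np.allclose(eps_hat, eps[1:]))  # -> True
```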

MA Processes

A process in which the current value of the system is a finite linear combination of the current and previous values of a white noise process is known as a *Moving Average*, or MA(q), process:

\(X_t = Z_t + \theta_1Z_{t-1} + ... + \theta_qZ_{t-q}\)

where \(\{Z_t\}\) is a white noise process with zero mean and some variance \(\sigma^{2}\).

ARMA Models

We can develop a parametric form to represent a random process with both autoregressive and moving average behaviors by simply setting an AR process equal to an MA process. These processes allow us to probe the behavior of time series that are driven by stochastic impulses to the system as well as by their own previous behavior. We call these processes *AutoRegressive Moving Average* processes, or *ARMA* processes. ARMA processes are represented by the following finite difference equation.

\(X_t - \phi_1X_{t-1} - ... - \phi_pX_{t-p} = Z_t + \theta_1Z_{t-1} + ... + \theta_qZ_{t-q}\)

where \(\{Z_t\}\) is a white noise process with zero mean and some variance \(\sigma^{2}\).
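An ARMA process can be simulated by moving the AR terms to the right-hand side of the difference equation and iterating. The sketch below uses a hypothetical ARMA(2, 1) process with illustrative coefficient values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative ARMA(2, 1) coefficients (not from any fitted model).
phi = [0.5, 0.2]   # AR coefficients phi_1, phi_2
theta = [0.4]      # MA coefficient theta_1
sigma, n = 1.0, 2000

z = rng.normal(0.0, sigma, size=n)  # white noise {Z_t}
x = np.zeros(n)
for t in range(2, n):
    # X_t = phi_1 X_{t-1} + phi_2 X_{t-2} + Z_t + theta_1 Z_{t-1}
    ar_part = phi[0] * x[t - 1] + phi[1] * x[t - 2]
    ma_part = z[t] + theta[0] * z[t - 1]
    x[t] = ar_part + ma_part
```

Since the white noise has zero mean and the chosen AR coefficients satisfy the usual stationarity condition, the sample mean of the simulated series should hover near zero.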