Literature Review

The wealth of high quality observations gathered during the last two decades has shown that modern cosmology is capable of quantitatively reproducing the details of many observations, all indicating that the universe is undergoing an epoch of accelerated expansion (al. 2003, al. 2004).
Those observations include geometrical probes, such as standard candles like SNe Ia (Project 1999, al. 1998, al. 2008) [add] and gamma ray bursts (Friedman 2005) [add], and standard rulers like the CMB sound horizon and BAO (al. 2007, al. 2009) [add]; and dynamical probes, like the growth rate of cosmological perturbations probed by redshift space distortions (al. 2002) [add] or weak lensing (al. 2008) [add].
While those observations allowed cosmologists to rule out a flat matter dominated universe at several sigma, they failed to give us any further theoretical insight into the source of this late cosmic acceleration, which is why it has been dubbed “Dark Energy”. The simplest candidate explanation is the so-called cosmological constant $$\Lambda$$ (Carroll 2001).
In the concordance model, based on the assumptions of homogeneity, flatness and validity of general relativity, the cosmological constant is usually included in the right hand side of the Einstein field equations $R_{\mu \nu} - {1 \over 2}g_{\mu \nu}\,R = {8 \pi G \over c^4} T_{\mu \nu} - g_{\mu \nu} \Lambda$ and $$\Lambda$$ is treated as an unknown form of energy density that remains constant in time. It is distinguished from ordinary matter species, such as baryons and radiation, by its negative pressure, which counteracts the gravitational force and leads to the accelerated expansion (Carroll 2001).
What is even more surprising is that, according to the latest Planck results (Collaboration 2014), Dark Energy constitutes around 68% of the energy content of the universe, with the remainder being around 5% baryons and 27% dark matter. The latter is the name used to describe a form of non-relativistic matter that interacts very weakly with standard matter particles. Its existence was postulated by Vera Rubin in the 70s (Rubin 1970), who inferred its presence from the gravitational effects it exerts on visible matter, allowing her to explain the shape of the rotation curves of the observed galaxies.
It is now well known that Dark Matter plays a crucial role in the growth of large scale structure in the universe, because it can cluster through gravitational instability, creating the perfect environment for galaxy formation. The formation of structure begins when the pressureless dark matter starts to dominate the total energy density of the universe; during this era the energy density of dark energy needs to be strongly suppressed to allow sufficient growth of large scale structures. While the energy density of dark matter evolves as $$\rho_m \propto a^{-3}$$, where $$a$$ is the scale factor of an expanding universe, the dark energy density remains constant in time: it eventually catches up with the former, starts to dominate, and accelerates the expansion of the universe (Dodelson 2003).
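The epoch at which this hand-over happens follows directly from the two scaling laws just quoted. A quick numerical illustration, assuming the Planck-like density parameters given above (the variable names are mine):

```python
import math

# Present-day density parameters, roughly the Planck values quoted above
# (5% baryons + 27% dark matter vs 68% dark energy); illustrative numbers.
omega_m = 0.32   # total matter (baryons + dark matter)
omega_de = 0.68  # cosmological constant

# rho_m scales as a^-3 while rho_Lambda stays constant, so the two densities
# are equal when omega_m * a^-3 = omega_de, i.e. a_eq = (omega_m/omega_de)^(1/3)
a_eq = (omega_m / omega_de) ** (1.0 / 3.0)
z_eq = 1.0 / a_eq - 1.0

print(f"a_eq = {a_eq:.3f}, z_eq = {z_eq:.2f}")  # dark energy dominates below z ~ 0.3
```

The equality redshift lands close to the present epoch, which is exactly the observation behind the Coincidence Problem discussed below.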
This evolution history raises two main theoretical weak points for the Cosmological Constant (Weinberg 1989):

• The Fine Tuning Problem: from the viewpoint of particle physics, the cosmological constant can be interpreted as vacuum energy density, but summing up zero-point energies it is estimated to be around $$10^{74}$$ GeV$$^4$$, much larger than the observed value of $$10^{-47}$$ GeV$$^4$$, so a novel mechanism is needed to obtain the tiny value of $$\Lambda$$ consistent with observations.

• The Coincidence Problem: Dark Energy started to dominate the expansion history of the universe relatively close to the present epoch, allowing just the right amount of time for galaxies to form.
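The size of the Fine Tuning Problem can be made explicit by taking the ratio of the two figures quoted above: $\frac{\rho_\Lambda^{\text{theory}}}{\rho_\Lambda^{\text{obs}}} \sim \frac{10^{74}~\text{GeV}^4}{10^{-47}~\text{GeV}^4} = 10^{121}$ a mismatch of roughly 121 orders of magnitude between the naive particle physics estimate and the observed value.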

A current of thought argues that these are indeed the only conditions that allow the development of life as we know it, but this anthropic explanation is highly controversial. That is why a lot of effort has been put into increasing the complexity of the concordance LCDM model. A first approach is to add additional degrees of freedom to the standard model, as a way to parametrize our ignorance about the fundamental nature of Dark Energy with a few parameters that quantify possible deviations from the LCDM behavior. The first class of models falling into this category still describes Dark Energy as a homogeneous field in a universe described by the Friedmann-Lemaitre-Robertson-Walker metric (Baldi 2012): $ds^2 = -c^2dt^2 + a^2(t)\, \delta_{ij}dx^i dx^j$ whose time dependence is entirely encoded in the scale factor $$a(t)$$. The background evolution is thus usually described by the Hubble function $$H(a) \equiv \dot{a}/a$$, which describes how the expansion rate changes as a function of time. This in turn is related to the abundance of the different constituents of the universe through the Friedmann equation: $\frac{H^2(a)}{H_0^2} = \Omega_M a^{-3} + \Omega_r a^{-4} + \Omega_K a^{-2} + \Omega_{DE} \exp \left \{ -3 \int_1^a \frac{1+w(a')}{a'}da' \right \}$ where the $$\Omega$$'s are the present-day density parameters of matter, radiation, curvature and Dark Energy respectively.
The equation of state parameter $$w(a)$$ quantifies the ratio between the pressure and the energy density of the Dark Energy component. For a cosmological constant this parameter is constant, with value $$w = -1$$, but we can readily see how, just by allowing this parameter to be time dependent, we can obtain a completely new expansion history. Common phenomenological parametrizations of the equation of state parameter are the Chevallier-Polarski-Linder parametrization (Chevallier 2001, Linder 2003): $w(a) = w_0 + w_a (1-a)$ based on the behavior of $$w(a)$$ at low redshifts, and the early dark energy parametrization (Wetterich 2004): $w(a) = \frac{w_0}{1+b \ln(1/a)}$ where $$b$$ is a parameter that depends on the abundance of Dark Energy at early times.
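For the CPL parametrization the integral in the Friedmann equation above can be carried out in closed form, so the modified expansion history is easy to evaluate numerically. A minimal sketch, assuming a flat universe with illustrative density parameters (the function names are mine):

```python
import math

def de_density_factor(a, w0=-1.0, wa=0.0):
    """rho_DE(a)/rho_DE(1) for the CPL parametrization w(a) = w0 + wa*(1-a).

    The Friedmann-equation integral has the closed form
    exp{-3 * Int_1^a (1+w(a'))/a' da'} = a**(-3*(1+w0+wa)) * exp(-3*wa*(1-a)).
    """
    return a ** (-3.0 * (1.0 + w0 + wa)) * math.exp(-3.0 * wa * (1.0 - a))

def E(a, omega_m=0.32, omega_r=0.0, omega_k=0.0, w0=-1.0, wa=0.0):
    """Dimensionless Hubble rate H(a)/H0 from the Friedmann equation."""
    omega_de = 1.0 - omega_m - omega_r - omega_k  # flatness fixes Omega_DE
    return math.sqrt(omega_m * a ** -3 + omega_r * a ** -4
                     + omega_k * a ** -2 + omega_de * de_density_factor(a, w0, wa))

# For a cosmological constant (w0 = -1, wa = 0) the density factor is 1 always
print(de_density_factor(0.5))      # -> 1.0
print(E(1.0))                      # -> 1.0 by construction
print(E(0.5, w0=-0.9, wa=0.1))     # expansion rate for a mildly evolving w(a)
```

Note how the cosmological constant case is recovered exactly as a special point of the parameter space, which is what makes such parametrizations convenient for quantifying deviations from LCDM.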
Another class of widely studied scenarios views Dark Energy as a scalar field whose dynamical evolution is driven by a parametrized potential; the representative models of this class are Quintessence (Wetterich 1988, Ratra 1988), k-essence (Armendariz-Picon 2001), Phantom (Caldwell 2002), Quintom (al. 2005) and perfect fluid (al. 2001) Dark Energy models. For Quintessence models the most common choices of potential include slowly varying runaway potentials such as an inverse power law (Ratra 1988): $V(\phi) = A \phi^{-\alpha}$ an exponential (Wetterich 1988): $V(\phi) = A e^{-\alpha \phi}$ or the SUGRA potentials arising within supersymmetric theories of gravity (Brax 1999): $V(\phi) = A \phi^{-\alpha}e^{\phi^2/2}$ In k-essence models, instead, it is the scalar field's kinetic energy that drives the acceleration. In perfect fluid models, such as the Chaplygin gas model (al. 2001), Dark Energy is modeled as a perfect fluid with a specific equation of state.
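The three potentials above can be written down in a few lines, together with the standard scalar field equation of state $$w_\phi = (\dot{\phi}^2/2 - V)/(\dot{\phi}^2/2 + V)$$, which shows why a slowly rolling field mimics a cosmological constant. A sketch with arbitrary parameter values (the amplitudes and slopes here are illustrative, not fitted):

```python
import math

# Illustrative implementations of the three Quintessence potentials quoted
# above; A and alpha are free model parameters, the defaults are arbitrary.
def inverse_power_law(phi, A=1.0, alpha=2.0):   # Ratra-Peebles
    return A * phi ** (-alpha)

def exponential(phi, A=1.0, alpha=1.0):         # Wetterich
    return A * math.exp(-alpha * phi)

def sugra(phi, A=1.0, alpha=2.0):               # Brax-Martin SUGRA
    return A * phi ** (-alpha) * math.exp(phi ** 2 / 2.0)

def w_phi(phidot, V):
    """Scalar field equation of state: w = (K - V)/(K + V), K = phidot^2/2."""
    kinetic = 0.5 * phidot ** 2
    return (kinetic - V) / (kinetic + V)

# A frozen field (kinetic energy << potential) behaves like Lambda:
print(w_phi(0.0, inverse_power_law(1.0)))   # -> -1.0
```

When the kinetic term dominates instead, $$w_\phi$$ approaches $$+1$$, so the field can interpolate between very different expansion histories as it rolls.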
A first extension of those models can be obtained by letting the Dark Energy field interact with the matter species via a parametrized coupling, as in the extended quintessence models (Pettorino 2008). For these scenarios the strength of the coupling must be suppressed in high density environments to satisfy solar system tests of gravity. That is why models with a species-dependent coupling have recently been developed; examples are the Coupled Dark Energy models (Wetterich 1995, Amendola 2000) or the Growing Neutrino models (Amendola 2008). In these scenarios the coupling with baryons is highly suppressed, so that the model can easily satisfy solar system tests (Will 1993) while still allowing for an interesting range of modifications of the structure formation process (Baldi 2012).
Another approach along these lines is that of a slowly running cosmological constant, which arises as an effective vacuum energy density in quantum field theory in curved spacetime (Parker 2009). This idea is based on the possibility that quantum effects in curved spacetime are responsible for the renormalization group running of the vacuum energy, leading to an energy density that evolves with the Hubble rate: $\rho_\Lambda(H) = \rho_{\Lambda0} + \frac{3\nu}{8\pi}(H^2-H_0^2)$ This can be coupled to a running of Newton's gravitational constant $G(H) = \frac{G_0}{1+\nu \ln(H^2/H_0^2)}$ to ensure the conservation of the energy-momentum tensor (al. 2011). This model, known as the “Running FLRW model”, is thus able to describe a completely new expansion history from first principles.
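Both running quantities reduce to their measured values at the present epoch, which is easy to verify. A toy implementation of the two expressions above, in units where $$H_0 = G_0 = 1$$ (the value of the dimensionless running parameter $$\nu$$ is illustrative, not fitted):

```python
import math

def rho_lambda(H, rho_L0=0.68, nu=0.005, H0=1.0):
    """rho_Lambda(H) = rho_L0 + (3 nu / 8 pi) * (H^2 - H0^2), as in the text."""
    return rho_L0 + (3.0 * nu / (8.0 * math.pi)) * (H ** 2 - H0 ** 2)

def G_running(H, G0=1.0, nu=0.005, H0=1.0):
    """G(H) = G0 / (1 + nu * ln(H^2 / H0^2)), as in the text."""
    return G0 / (1.0 + nu * math.log(H ** 2 / H0 ** 2))

# At the present epoch (H = H0) both reduce to the measured values:
print(rho_lambda(1.0))   # -> 0.68
print(G_running(1.0))    # -> 1.0
# In the past (H > H0) the vacuum energy was larger and G slightly smaller:
print(rho_lambda(10.0), G_running(10.0))
```

Since observations require $$|\nu| \ll 1$$, the deviations from a strict cosmological constant stay small at low redshift while growing toward the early universe.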
A further step along these lines is to drop the assumption that General Relativity is valid on the largest scales and construct what are now known as Modified Gravity models.
A way to accomplish this is to take the Lagrangian density of the LCDM model, $$f(R) = R - 2\Lambda$$, and modify it by adding non linear terms in $$R$$, obtaining what are now known as f(R)-Gravity models (Sotiriou 2010, al. 2004). Scalar tensor theories instead couple the Ricci scalar $$R$$ to a scalar field; examples are the Brans-Dicke theory (Brans 1961) and Dilaton gravity (Gasperini 2001). A different approach is that of the braneworld models proposed by Dvali, Gabadadze and Porrati (Dvali 2000), where the late time acceleration of the universe is realized as a result of the gravitational leakage from our 3-dimensional brane into an extra dimension on Hubble distances.
All these models are able to satisfy local gravity constraints (Felice 2010) as well as the conditions set by the epochs preceding matter domination, and provide the acceleration of the universe without resorting to unknown forms of matter.

With this huge range of models available, it is important to place constraints on each of them using observational data. The usual approach is to compare the predictions of the $$\Lambda \text{CDM}$$ model and of the alternative models on the basis of a Bayesian analysis.
On the largest scales, observations of type Ia Supernovae provide information on the cosmic expansion at redshift $$z \lesssim 2$$ (al. 2012), and while those are the observables that first helped establish the $$\Lambda \text{CDM}$$ model, there are some statistically significant deviations of the brightness of Supernovae Ia at $$z > 1$$ from the $$\Lambda \text{CDM}$$ prediction, as stated by Kowalski et al. (al. 2008).
Another large scale probe is the measurement of the Cosmic Microwave Background radiation. This is the radiation coming from the last scattering surface at $$z \simeq 1090$$, when photons ceased to be tightly coupled with baryons and started to stream freely across the universe. This radiation presents anisotropies that are directly related to the matter perturbations originated during inflation. The presence of Dark Energy directly affects those anisotropies, for example by shifting the position of the acoustic peaks (al. 2005) or by changing the amplitude of the Integrated Sachs-Wolfe effect through the variation of the gravitational potentials (Sachs 1967).
But while $$\Lambda \text{CDM}$$ remains a very good fit to the above probes, there are other observational challenges that might require more drastic modifications in order to be resolved.
Some of those involve galactic scale phenomena, such as the density distribution inside dark matter halos, observed to have a central density core (Gentile 2004, al. 2005, Blok 2005) whereas $$\Lambda \text{CDM}$$ predicts a steeper inner cusp (Blok 2005); the high density of cluster halos compared to the shallow profiles predicted by the $$\Lambda \text{CDM}$$ model (Navarro 1996, al. 2005, Umetsu 2008); and their baryon fraction, measured to be systematically lower than expected (al. 2007, al. 2009). Another problem involves the amount and properties of the satellites of Milky Way sized halos (e.g. the Missing Satellite Problem (Bullock 2010) and the Too Big to Fail Problem (Papastergis 2014)).
On larger scales it is instead possible to measure the bulk flow corresponding to the CMB dipole. This has been done in a number of large scale velocity surveys (al. 1987) [add], and it has been observed that the galaxy velocities on scales larger than 100 Mpc/h are consistently larger than expected (Watkins 2009).
Another LCDM prediction concerns the amount of galaxies that should reside in the smaller underdense regions of the universe, called voids. $$\Lambda \text{CDM}$$ in fact predicts the presence of many small dark matter halos that should host dwarf galaxies, in larger numbers than actually observed (Tikhonov 2009, Peebles 2005, Peebles 2007).

While on the largest scales it is possible to make predictions for every dark energy model simply by ignoring non linear effects, whose contribution is negligible, for all the other observations presented above it is necessary, even for the simple $$\Lambda \text{CDM}$$ model, to resort to numerical simulations.
N-Body simulations follow the evolution and interaction of the matter species in the universe, starting from the tiny density fluctuations generated in the early universe by inflation, through the formation of large scale structures, down to the galaxies of the present epoch. In the last decade the advances in gravitational N-body algorithms, computational fluid dynamics and the accessibility of computational power allowed them to become remarkably precise and readily available (Baldi 2012). The wide dynamical range of modern cosmological simulations has been made possible by the continued effort spent in improving the level of detail, achieved by including the effects of baryonic physics (Teyssier 2002, Springel 2002, al. 2010) and a wide range of astrophysical processes such as gas cooling, star formation, supernova and AGN feedback (Springel 2003, Springel 2003, Kay 2002, al. 2007).
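The gravitational core of such codes can be illustrated with a toy sketch, not representative of any specific simulation cited here: a direct-summation gravity solver with softening and a second order leapfrog (kick-drift-kick) integrator. Units, softening and setup are arbitrary; production codes use tree or particle-mesh algorithms in comoving coordinates with many orders of magnitude more particles.

```python
def accelerations(pos, mass, G=1.0, eps=0.05):
    """Direct-summation gravitational accelerations with Plummer softening."""
    n = len(pos)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [pos[j][k] - pos[i][k] for k in range(3)]
            r2 = sum(d * d for d in dx) + eps * eps   # softened distance^2
            inv_r3 = r2 ** -1.5
            for k in range(3):
                acc[i][k] += G * mass[j] * dx[k] * inv_r3
    return acc

def leapfrog_step(pos, vel, mass, dt):
    """One kick-drift-kick leapfrog step (second order, symplectic)."""
    acc = accelerations(pos, mass)
    for i in range(len(pos)):
        for k in range(3):
            vel[i][k] += 0.5 * dt * acc[i][k]   # half kick
            pos[i][k] += dt * vel[i][k]         # drift
    acc = accelerations(pos, mass)
    for i in range(len(pos)):
        for k in range(3):
            vel[i][k] += 0.5 * dt * acc[i][k]   # closing half kick
    return pos, vel
```

Because the pairwise forces are equal and opposite, total momentum is conserved to machine precision, one of the basic sanity checks applied to real simulation codes.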
Only recently, though, has a bigger effort been spent in adapting the algorithms of N-Body simulations to different dark energy scenarios, allowing cosmologists to study their effects on the evolution of density perturbations in the non-linear regime.
Such simulations were pioneered in 2003 by Klypin et al. (al. 2003) with a dark-matter only simulation with an evolving equation of state parameter, which found no major differences in the power spectrum and halo mass function at redshift $$z=0$$ but noted how the differences became more significant at higher redshift.
Subsequently, multiple groups investigated the properties of dark matter structures in Dark Energy cosmologies (al. 2004, Grossi 2009, al. 2005), looking at halo concentrations, velocity dispersions and abundance relations in quintessence and early dark energy models. Along the same lines, Baldi (Baldi 2012, al. 2010) investigated the evolution of coupled dark energy models, which present an interesting signature due to new phenomena not present in the $$\Lambda \text{CDM}$$ scenario. The large scale implications of such models have instead been investigated by Jennings et al. (al. 2010), who studied the BAO peaks and the redshift space distortions in a variety of Quintessence and Early Dark Energy models.
Recently, new algorithms have been published that make it possible to simulate in great detail the evolution of Modified Gravity theories like f(R)-Gravity and DGP (al. 2012, Puchwein 2013), a feat that was not possible with the previously available codes. While most of those simulations concentrate on the dark matter properties, additional effort has recently been spent on simulating the observable part of the universe by including baryonic physics and following its impact on the formation of the smallest structures; this has been accomplished by Maio et al. (al. 2006) and Penzo et al. (al. 2014), who studied galaxy properties in hydrodynamical simulations of quintessence models, and by Fontanot et al. (Bianchi 2013, al. 2012), who studied the statistical properties of galaxies by applying a semi analytical model to Early Dark Energy and f(R)-Gravity simulations.
Both these kinds of simulations allow us to quantify the deviations from the LCDM behavior on medium to large scales using the matter distribution and halo properties, and then to constrain the new models by comparing them against observations; they also allow us to observe a virtual realization of the smaller scale regions, to understand how galactic properties are affected and whether those modifications are able to tackle the challenges that persist in the $$\Lambda \text{CDM}$$ scenario.