\section{Literature Review}

The wealth of high-quality observations performed during the last two decades has thoroughly shown how modern cosmology is capable of quantitatively reproducing the details of many observations, all indicating that the universe is undergoing an epoch of accelerated expansion [4]. Those observations include geometrical probes such as standard candles like SN Ia [1,2,3,4] and gamma-ray bursts [5], standard rulers like the CMB sound horizon and BAO [6,7], and dynamical probes like the growth rate of cosmological perturbations measured through redshift-space distortions [8] or weak lensing [9]. While those observations allowed cosmologists to rule out a flat matter-dominated universe at several sigma, they failed to give any further theoretical insight into the source of this late cosmic acceleration. That is why it is dubbed ``Dark Energy''. The simplest candidate explanation is the so-called cosmological constant $\Lambda$ which, in the concordance model based on the assumptions of homogeneity, flatness and the validity of general relativity, behaves as an unknown form of energy density that remains constant in time. It is distinguished from ordinary matter species, such as baryons and radiation, by its negative pressure, which counteracts the gravitational force and leads to the accelerated expansion [10]. What is even more surprising is the fact that, according to the latest Planck results [ ], Dark Energy constitutes around 68\% of the energy content of the universe, with the remainder being around 5\% baryons and 27\% dark matter. The latter is the name used to describe a form of non-relativistic matter that interacts only very weakly with standard matter particles. Its existence was inferred by Vera Rubin in the late 1960s [ ], who deduced its presence from the gravitational effect it exerts on visible matter, allowing her to explain the observed galactic rotation curves.
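The role of negative pressure in driving the acceleration can be made explicit through the standard acceleration equation for the scale factor $a$, written here in conventional notation:
\begin{equation}
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + 3p\right),
\end{equation}
so that any component with $p < -\rho/3$, and in particular a cosmological constant with $p = -\rho$, contributes positively to $\ddot{a}$.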
It is now well known that Dark Matter plays a crucial role in the growth of large-scale structure in the universe, because it can cluster by gravitational instability, creating the perfect environment for galaxy formation. The formation of structure begins when the pressureless dark matter starts to dominate the total energy density of the universe; during this era the energy density of dark energy needs to be strongly suppressed to allow sufficient growth of large-scale structure. While the energy density of dark matter evolves as $\rho_m \propto a^{-3}$, where $a$ is the scale factor of an expanding universe, the dark energy density remains constant in time: it eventually catches up with the former, starts to dominate, and accelerates the expansion history of the universe. This evolution history raises two main theoretical weak points for the cosmological constant [Weinberg]:
\begin{itemize}
\item The Fine-Tuning Problem: from the viewpoint of particle physics, the cosmological constant can be interpreted as vacuum energy density, but summing up zero-point energies it is estimated to be around $10^{74}\,\mathrm{GeV}^4$, much larger than the observed value of $10^{-47}\,\mathrm{GeV}^4$, so a novel mechanism is needed to obtain the tiny value of $\Lambda$ consistent with observations.
\item The Coincidence Problem: Dark Energy started to dominate the expansion history of the universe relatively close to the present epoch, allowing just the right time for galaxies to form.
\end{itemize}
A current of thought argues that these are indeed the only conditions that allow the development of life as we know it, but this anthropic explanation is highly controversial. That is why a lot of effort has been put into increasing the complexity of the concordance $\Lambda$CDM model.
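As a rough illustration of the coincidence problem, the redshift of matter--dark-energy equality follows directly from the scaling $\rho_m \propto a^{-3}$; using the approximate Planck abundances quoted above ($\Omega_\Lambda \simeq 0.68$, $\Omega_m \simeq 0.32$):
\begin{equation}
1 + z_{\rm eq} = \left(\frac{\Omega_\Lambda}{\Omega_m}\right)^{1/3} \simeq 1.3,
\end{equation}
i.e. $z_{\rm eq} \simeq 0.3$, remarkably close to the present epoch when compared, for instance, with the redshift $z \simeq 1100$ of recombination.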
A first approach in this direction is the development of Dark Energy models that aim to describe dark energy with alternative forms of the energy-momentum tensor with a negative pressure. The additional degrees of freedom with respect to the standard model offer a way to parametrize our ignorance about the fundamental nature of DE with a few parameters that quantify possible deviations from the $\Lambda$CDM behavior. The first class of models falling into this category still describes DE as a homogeneous field in a universe described by the FLRW metric [ ]:
\begin{equation}
ds^2 = -dt^2 + a^2(t)\left[\frac{dr^2}{1-kr^2} + r^2\left(d\theta^2 + \sin^2\theta\, d\phi^2\right)\right],
\end{equation}
whose time dependence is entirely described by the scale factor $a(t)$. The background evolution is thus usually described by the Hubble function $H = \dot{a}/a$, which quantifies how the expansion rate changes as a function of time. This in turn is related to the abundance of the different constituents of the universe through the Friedmann equation:
\begin{equation}
\frac{H^2(a)}{H_0^2} = \Omega_m a^{-3} + \Omega_r a^{-4} + \Omega_k a^{-2} + \Omega_{DE} \exp\left[3\int_a^1 \frac{1+w(a')}{a'}\,da'\right],
\end{equation}
where the $\Omega$'s are the present-day density parameters of matter, radiation, curvature and dark energy, respectively. The equation of state parameter $w(a)$ quantifies the ratio between the pressure and the energy density of the DE component. For a cosmological constant this parameter is constant, with value $w = -1$, but we can readily see how, just by allowing this parameter to be time-dependent, we can obtain a completely new expansion history. Common phenomenological parametrizations of the equation of state parameter are the CPL parametrization [ ]:
\begin{equation}
w(a) = w_0 + w_1 (1-a),
\end{equation}
based on the behavior of $w(a)$ at low redshifts, and the early dark energy parametrization [ ], which postulates a different, non-negligible abundance of DE at early times. Another class of widely studied scenarios views dark energy as a scalar field whose dynamical evolution is driven by a parametrized potential; representative models of this class are Quintessence [b], k-essence [b], Phantom [b], Quintom [b] and perfect fluid [ ] DE models. For Quintessence models the most common choices of potential include slowly varying runaway potentials such as an inverse power law [b]:
\begin{equation}
V(\phi) = M^{4+n}\,\phi^{-n},
\end{equation}
or an exponential [b]:
\begin{equation}
V(\phi) = V_0\, e^{-\lambda \kappa \phi},
\end{equation}
or SUGRA potentials arising within supersymmetric theories of gravity [b]:
\begin{equation}
V(\phi) = M^{4+n}\,\phi^{-n}\, e^{\kappa^2 \phi^2 / 2}.
\end{equation}
In k-essence models, instead, it is the kinetic energy of the scalar field that drives the acceleration. In the perfect fluid models the DE is modeled as a perfect fluid with a specific equation of state, such as the Chaplygin gas model [45 book]. A first extension to those models can be obtained by letting the Dark Energy field interact with the matter species via a parametrized coupling, as in the extended quintessence models [ ].
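For the scalar field models mentioned above, the background dynamics follow in the standard way from the field's kinetic and potential energies: the field obeys the Klein--Gordon equation in an expanding background, and its equation of state parameter reads
\begin{equation}
\ddot{\phi} + 3H\dot{\phi} + \frac{dV}{d\phi} = 0, \qquad
w_\phi = \frac{\dot{\phi}^2/2 - V(\phi)}{\dot{\phi}^2/2 + V(\phi)},
\end{equation}
so that a slowly rolling field with $\dot{\phi}^2 \ll V(\phi)$ yields $w_\phi \simeq -1$, mimicking a cosmological constant.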

The next step in this approach is to drop the assumption that General Relativity is valid on the largest scales and construct what are now known as Modified Gravity models. One way to accomplish this is to take the Lagrangian density of the $\Lambda$CDM model, $f(R) = R - 2\Lambda$, and modify it by adding nonlinear terms in $R$; these models are now known as $f(R)$-gravity [ ]. Scalar-tensor theories instead couple the Ricci scalar $R$ to a scalar field; examples are the Brans-Dicke theory [ ] and dilaton gravity [ ]. A different approach is that of the braneworld models proposed by Dvali, Gabadadze and Porrati (DGP) [ ], where the late-time acceleration of the universe is realized as a result of gravitational leakage from a 3-dimensional surface to a 5th extra dimension on Hubble distances. All these models are able to satisfy local gravity constraints, as well as the conditions set by the epochs preceding matter domination, and provide the acceleration of the universe without resorting to unknown forms of matter. With this huge range of models available, it is important to place constraints on each of them using observational data. The usual approach is to compare the predictions of $\Lambda$CDM and of the different models on the basis of a Bayesian analysis. On the largest scales, SN Ia observations provide information on the cosmic expansion up to redshift $z \lesssim 2$ [ ], and while those are the observables that first helped establish the $\Lambda$CDM model, there are some statistically significant deviations of the SN Ia brightness data at $z > 1$ from the $\Lambda$CDM prediction, as stated by Kowalski et al. [ ]. Another large-scale probe is the measurement of the CMBR. This is the radiation coming from the last scattering surface at $z \simeq 1090$, when photons ceased to be tightly coupled with baryons and started to stream freely across the universe. This radiation presents anisotropies that are directly related to the matter perturbations originated during inflation.
The presence of dark energy directly affects those anisotropies, for example by shifting the position of the acoustic peaks [68 book] or by changing the magnitude of the integrated Sachs-Wolfe effect through the time variation of the gravitational potential [118 book].
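The integrated Sachs-Wolfe contribution mentioned above can be written, in its standard form for a flat universe with negligible anisotropic stress, as an integral of the time derivative of the gravitational potential $\Phi$ along the line of sight in conformal time $\eta$:
\begin{equation}
\left(\frac{\Delta T}{T}\right)_{\rm ISW} = 2\int_{\eta_{\rm dec}}^{\eta_0} \frac{\partial \Phi}{\partial \eta}\, d\eta,
\end{equation}
which vanishes during matter domination, when $\Phi$ is constant, and becomes non-zero once dark energy begins to affect the growth of the potentials.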

Numerical simulations of these scenarios were pioneered in 2003 by Klypin et al. [penzo] with a dark-matter-only simulation with an evolving equation of state parameter, which found no major differences in the power spectrum and halo mass function at redshift $z=0$, but noted how the differences became more significant at higher redshift. Subsequently, multiple groups investigated the properties of dark matter structures in DE cosmologies [note 1 penzo], looking at halo concentrations, velocity dispersions and abundance relations in quintessence and early dark energy models. Along the same lines, Baldi [ ] investigated the evolution of coupled dark energy models, which present interesting signatures due to new phenomena not present in the $\Lambda$CDM scenario. On the largest scales, the BAO peaks and the redshift-space distortions have been investigated by Jenkins [ ]. Recently, new algorithms have been published that allow one to simulate in great detail the evolution of Modified Gravity theories like $f(R)$-gravity and DGP [ ], a feat that was not possible with the previously available codes. While most of those simulations concentrate on the dark matter properties, additional effort has been spent recently on simulating the observable part of the universe by including baryonic physics and following its impact on the formation of the smallest structures. This has been accomplished by Maio et al. [125 baldi] and Penzo et al. [ have print], who studied galaxy properties in hydrodynamical simulations of quintessence models, and by Fontanot [ two pap], who studied the statistical properties of galaxies by applying a semi-analytical model to early dark energy and $f(R)$-gravity simulations.
Both these kinds of simulations allow us to quantify the deviations from the $\Lambda$CDM behavior on medium to large scales using the matter distribution and halo properties, and then to constrain the new models by comparing them against observations. They also allow us to observe a virtual realization of the small-scale scenario, in order to understand how galactic properties are affected and whether those modifications are able to tackle the challenges that persist in the $\Lambda$CDM scenario.