
The Curvature method at low redshifts

\label{chp:The Curvature method}

According to the photo-heating model of the intergalactic medium, He ii reionization is expected to affect its thermal evolution. Evidence for additional energy injection into the IGM has been found at \(3\lesssim z\lesssim 4\), though the evidence for the subsequent fall-off below \(z\sim 2.8\) is weaker and depends on the slope of the temperature–density relation, \(\gamma\). Here we present, for the first time, an extension of the IGM temperature measurements down to the atmospheric cut-off of the H i Lyman-\(\alpha\) forest at \(z\simeq 1.5\). Applying the curvature method to a sample of 60 UVES spectra, we investigated the thermal history of the IGM at \(z<3\) with precision comparable to the higher-redshift results.

Boera, E., et al. 2014, MNRAS, 441, 1916

As discussed in Section \ref{IntroThRE}, the IGM thermal history can be an important source of information about reionizing processes that injected vast amounts of energy into this gas on relatively short timescales and, in particular, about He ii reionization. While direct observation, through detection of the He ii “Gunn–Peterson effect”, has recently suggested that He ii reionization ended at \(z\sim 2.7\) (e.g. (citation not found: Shull10); (citation not found: Worseck11); (citation not found: Syphers13); (citation not found: Syphers14)), any current constraint on the physics of this phenomenon is limited by the cosmic variance among the small sample of “clean” lines of sight: those along which the He ii Ly-\(\alpha\) transition is not blocked by higher-redshift H i Lyman-limit absorption. For this reason, indirect methods have been developed to obtain a detailed characterization of He ii reionization.

The reionization event is expected to reheat the intergalactic gas leaving the characteristic signature of a peak, followed by a gradual cooling, in the temperature evolution at the mean gas density (e.g. (citation not found: McQuinn09)). In the last decade, the search for this feature and the study of the thermal history of the IGM as a function of redshift have been the objectives of different efforts, not only to verify this basic theoretical prediction and constrain the timing of He ii reionization, but also to obtain information on the nature of the ionizing sources and the physics of the related ionizing mechanisms.

To obtain measurements of the temperature of the IGM, studying the absorption features of the H i Ly-\(\alpha\) forest has so far proven to be a useful method. The widths of the Ly-\(\alpha\) lines are sensitive to thermal broadening, among other effects. Therefore, by comparison with cosmological simulations, different approaches have been able to extract from them information about the “instantaneous” temperature of the gas at the moment of absorption. However, as summarized in Section \ref{PreviousMethods}, the observational picture drawn by the results of previous efforts does not have a straightforward interpretation.

Recently, (citation not found: Becker11) developed a statistical approach based on the flux curvature. This work constrained the temperature over \(2\lesssim z\lesssim 4.8\) of an “optimal” or “characteristic” overdensity, which evolves with redshift. The error bars were considerably reduced compared to previous studies, partially at the expense of determining the temperature at a single density only, rather than attempting to constrain the temperature–density relation. Some evidence was found for a gradual reheating of the IGM over \(3\lesssim z\lesssim 4\) but with no clear evidence for a temperature peak. Given these uncertainties, the mark of the He ii reionization still needs a clear confirmation. Nevertheless, the curvature method is promising because it is relatively robust to continuum placement errors: the curvature of the flux is sensitive to the shape of the absorption lines and not strongly dependent on the flux normalization. Furthermore, because it incorporates the temperature information from the entire Lyman-\(\alpha\) forest, this statistic has the advantage of using more of the available information, as opposed to the line-fitting method which relies on selecting lines that are dominated by thermal broadening.

Moreover, an injection of substantial amounts of thermal energy may also result in a change in the temperature–density relation (Equation \ref{eq:TDrelation}). The detailed study of this process has to take into consideration the effects of the IGM inhomogeneities driven by the diffusion and percolation of the ionized bubbles around single sources, and currently constitutes an important object of investigation through hydrodynamical simulations (e.g. (citation not found: Compostella13)). Some analyses of the flux PDF have indicated that the \(T\)–\(\rho\) relation may even become inverted (e.g. (citation not found: Becker07); (citation not found: Bolton08); (citation not found: Viel09); (citation not found: Calura12); (citation not found: Garzilli12)). However, the observational uncertainties in this measurement are considerable (see discussion in (citation not found: Bolton13)). A possible explanation was suggested by considering radiative transfer effects ((citation not found: Bolton08)). Although it appears difficult to produce this result considering only He ii photo-heating by quasars ((citation not found: McQuinn09); (citation not found: Bolton09)), the recent idea of volumetric heating from blazar TeV emission predicts an inverted temperature–density relation at low redshift and at low densities. According to these models, heating by blazar \(\gamma\)-ray emission would start to dominate at \(z\simeq 3\), obscuring the “imprint” of He ii reionization ((citation not found: Chang12); (citation not found: Puchwein12)). The most recent analyses with the line-fitting method ((citation not found: Rudie13); (citation not found: Bolton13)) have not confirmed an inversion of the temperature–density relation, but a general lack of knowledge about the behavior of the \(T\)–\(\rho\) relation at low redshift (\(z<3\)) still emerges, accompanied by no clear evidence for the He ii reionization peak. A further investigation of the temperature evolution in this redshift regime is therefore important for obtaining constraints on the physics of He ii reionization and the temperature–density relation of the IGM.

The purpose of the work presented in this Chapter and in Chapter 3 is to apply the curvature method to obtain new, robust temperature measurements at redshift \(z<3\), extending the previous results, for the first time, down to the optical limit for the Lyman-\(\alpha\) forest at \(z\simeq 1.5\). By pushing the measurement down to such a low redshift, we attempt to better constrain the thermal history in this regime, comparing the results with the theoretical predictions for the different heating processes. Furthermore, the exploration of this new redshift regime allows us to search for the end of the heating seen previously by (citation not found: Becker11); such evidence is needed to bolster the interpretation of that heating as being due to He ii reionization. We infer temperature measurements by computing the curvature on a new set of high-resolution quasar spectra obtained from the archive of the Ultraviolet and Visual Echelle Spectrograph (UVES) on the Very Large Telescope (VLT). Synthetic spectra, obtained from the hydrodynamical simulations used in the analysis of (citation not found: Becker11) and extended down to the new redshift regime, are used for the comparison with the observational data. Similarly to (citation not found: Becker11), we constrain the temperature of the IGM at a characteristic overdensity, \(\bar{\Delta}\), traced by the Lyman-\(\alpha\) forest, which evolves with redshift. We do not attempt to constrain the \(T\)–\(\rho\) relation, but we use fiducial values of the parameter \(\gamma\) in Equation \ref{eq:TDrelation} to present results for the temperature at the mean density, \(T_{0}\).

While the actual temperature measurements and their discussion will be presented in Chapter 3, this Chapter explains the curvature analysis procedure and the necessary preparation of the observational and synthetic spectra; it is organised as follows. In Section \ref{sec:obs} we present the observational data sample obtained from the VLT archive, while the simulations used to interpret the measurements are introduced in Section \ref{sec:sims}. In Section \ref{sec:curvature} the curvature method and our analysis procedure are summarized. In Section \ref{sec:analysis} we present the data analysis and discuss the strategies applied to reduce the systematic uncertainties. Finally, the calibration and analysis of the simulations, from which we obtain the characteristic overdensities, is described in Section \ref{sec:SimAnalysis}.

The observational data

\label{sec:obs}

In this work we used a sample of 60 quasar spectra uniformly selected on the basis of redshift, wavelength coverage and S/N in order to obtain robust results in the UV and optical parts (3100–4870 Å) of the spectrum, where the Lyman-\(\alpha\) transition falls for redshifts \(1.5<z<3\). The quasars and their basic properties are listed in Table \ref{table:datatable}. The spectra were retrieved from the archive of the UVES spectrograph on the VLT. In general, most spectra were observed with a slit width \(\lesssim 1\farcs 0\) and on-chip binning of 2\(\times\)2, which provides a resolving power \(R\simeq 50000\) (FWHM \(\simeq\) 7 km/s); this is more than enough to resolve typical Lyman-\(\alpha\) forest lines, which generally have FWHM \(\gtrsim\) 15 km/s. The archival quasar exposures were reduced using the ESO UVES Common Pipeline Language software. This suite of standard routines was used to optimally extract and wavelength-calibrate individual echelle orders. The custom-written program uves_popler, written and maintained by M. T. Murphy and available at http://astronomy.swin.edu.au/~mmurphy/UVES_popler, was then used to combine the many exposures of each quasar into a single normalized spectrum on a vacuum-heliocentric wavelength scale. For most quasars, the orders were redispersed onto a common wavelength scale with a dispersion of 2.5 km/s per pixel; for 4 bright (and high S/N), \(z\lesssim 2\) quasars the dispersion was set to 1.5 km/s per pixel. The orders were then scaled to optimally match each other and co-added with inverse-variance weighting, using a sigma-clipping algorithm to reject ‘cosmic rays’ and other spectral artefacts.

To ensure a minimum threshold of spectral quality and a reproducible sample definition, we imposed a “S/N” lower limit of 24 per pixel for selecting which QSOs and which spectral sections we used to derive the IGM temperature. A high S/N is, in fact, extremely important for the curvature statistic which is sensitive to the variation of the shapes of the Lyman-\(\alpha\) lines: in low S/N spectra this statistic will be dominated by the noise and, furthermore, by narrow metal lines that are difficult to identify and mask.

The “S/N” cut-off of 24 per pixel was determined using the hydrodynamical Lyman-\(\alpha\) forest simulations discussed in Section \ref{sec:sims}. By adding varying amounts of Gaussian noise to the simulated forest spectra and performing a preliminary curvature analysis like that described in Sections \ref{sec:curvature} & \ref{sec:analysis}, we could determine the typical uncertainty on the IGM temperature, as well as the extent of any systematic biases caused by low S/N. It was found that a competitive statistical uncertainty of \(\simeq 10\)% in the temperature could be achieved with the cut-off in “S/N” set to 24 per pixel, and that this was well above the level at which systematic biases become significant. However, in order to make the most direct comparison with these simulations, we have to carefully define “S/N”. In the Lyman-\(\alpha\) forest the S/N fluctuates strongly and so it is not very well defined. Therefore, the continuum-to-noise ratio, C/N, is the best means of comparison with the simulations. To measure this from each spectrum, we first had to establish a reasonable continuum.
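As an illustration of this test, the following minimal sketch shows one way Gaussian noise at a chosen C/N level could be added to a noise-free, continuum-normalized synthetic spectrum; the function and variable names are purely illustrative and not taken from the actual analysis code.

\begin{verbatim}
import numpy as np

def add_noise(flux_norm, c_to_n, seed=None):
    """Add Gaussian noise at a fixed continuum-to-noise ratio to a
    continuum-normalized synthetic spectrum (continuum = 1)."""
    flux_norm = np.asarray(flux_norm, dtype=float)
    rng = np.random.default_rng(seed)
    sigma = 1.0 / c_to_n          # noise amplitude relative to the continuum
    noisy = flux_norm + rng.normal(0.0, sigma, size=flux_norm.size)
    return noisy, np.full_like(flux_norm, sigma)

# Example: degrade a noise-free simulated sight-line to the C/N = 24 cut-off
# noisy_flux, err = add_noise(flux_sim, c_to_n=24.0)
\end{verbatim}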

The continuum fitting is a crucial aspect in the quasar spectral analysis and for this reason we applied to all the data a standard procedure in order to avoid systematic uncertainties due to the continuum choice. We used the continuum-fitting routines of uves_popler to determine the final continuum for all our quasar spectra. Initially, we iteratively fitted a 5th-order Chebyshev polynomial to overlapping 10000-km/s sections of spectra between the Lyman-\(\alpha\) and Lyman-\(\beta\) emission lines of the quasar. The initial fit in each section began by rejecting the lowest 50 % of pixels. In subsequent iterations, pixels with fluxes \(\geq\)3 \(\sigma\) above and \(\geq\)1 \(\sigma\) below the fit were excluded from the next iteration. The iterations continued until the ‘surviving’ pixels remained the same in two consecutive iterations. The overlap between neighboring sections was 50 % and, after all iterations were complete, the final continuum was formed by combining the individual continua of neighboring sections with a weighting which diminished linearly from unity at their centers to zero at their edges. After this initial treatment of all spectra we applied further small changes to the fitting parameters after visually inspecting the results. In most cases, we reduced the spectral section size, the threshold for rejecting pixels below the fit at each iteration, and the percentage of pixels rejected at the first iteration to values as low as 6000 km/s, 0.8 \(\sigma\) and 40 % respectively. In Figure \ref{fig:forestex} we show examples of continuum fits for Lyman-\(\alpha\) forest regions at different redshifts obtained with this method. This approach allowed us to avoid cases where the fitted continuum obviously dipped inappropriately below the real continuum, but still defined our sample with specific sets of continuum parameters without any further manual intervention, allowing a reproducible selection of the appropriate sample for this analysis. Furthermore, as described in Section \ref{sec:analysis}, to avoid any systematics due to the large-scale continuum placement, we re-normalized each section of spectra that contributed to our results.
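For concreteness, the sketch below outlines the per-section part of this procedure (a 5th-order Chebyshev fit with asymmetric sigma clipping), under the simplifying assumption that each section is treated independently; the linear blending of overlapping sections and the exact convergence details of uves_popler are omitted, and all names are illustrative.

\begin{verbatim}
import numpy as np
from numpy.polynomial import chebyshev as C

def fit_continuum_section(wave, flux, order=5, frac_reject=0.5,
                          hi_clip=3.0, lo_clip=1.0, max_iter=50):
    """Iterative Chebyshev continuum fit to one section of spectrum with
    asymmetric sigma clipping, as described in the text."""
    # First iteration: reject the lowest `frac_reject` fraction of pixels.
    keep = flux >= np.percentile(flux, 100.0 * frac_reject)
    for _ in range(max_iter):
        coeffs = C.chebfit(wave[keep], flux[keep], order)
        model = C.chebval(wave, coeffs)
        resid = flux - model
        sigma = np.std(resid[keep])
        # Exclude pixels >= hi_clip sigma above or >= lo_clip sigma below the fit.
        new_keep = (resid < hi_clip * sigma) & (resid > -lo_clip * sigma)
        if np.array_equal(new_keep, keep):   # surviving pixels unchanged: converged
            break
        keep = new_keep
    return C.chebval(wave, coeffs)
\end{verbatim}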

Table \ref{table:datatable} \(-\) Continued.
QSO \(z_{em}\) \(z_{Ly\alpha}\) (\(z_{start}\) \(z_{end}\)) C/N
J115122\(+\)020426 2.40 1.86 2.40 17\(-\)55
J222006\(-\)280323 2.40 1.87 2.40 80\(-\)180
J011143\(-\)350300 2.41 1.87 2.41 67\(-\)130
J033106\(-\)382404 2.42 1.88 2.42 41\(-\)72
J120044\(-\)185944 2.44 1.90 2.44 62\(-\)108
J234628\(+\)124858 2.51 1.96 2.51 40\(-\)42
J015327\(-\)431137 2.74 2.15 2.74 75\(-\)130
J235034\(-\)432559 2.88 2.27 2.88 140\(-\)248
J040718\(-\)441013 3.00 2.37 3.00 96\(-\)135
J094253\(-\)110426 3.05 2.42 3.05 63\(-\)129
J042214\(-\)384452 3.11 2.46 3.11 98\(-\)160
J103909\(-\)231326 3.13 2.48 3.13 59\(-\)80
J114436\(+\)095904 3.15 2.50 3.15 25\(-\)43
J212912\(-\)153841 3.26 2.60 3.26 105\(-\)190
J233446\(-\)090812 3.31 2.64 3.31 40\(-\)46
J010604\(-\)254651 3.36 2.68 3.36 27\(-\)43
J014214\(+\)002324 3.37 2.68 3.37 28\(-\)35
J115538\(+\)053050 3.47 2.77 3.47 43\(-\)64
J123055\(-\)113909 3.52 2.82 3.52 18\(-\)39
J124957\(-\)015928 3.63 2.91 3.63 52\(-\)87
J005758\(-\)264314 3.65 2.92 3.65 74\(-\)90
J110855\(+\)120953 3.67 2.94 3.67 33\(-\)36
J132029\(-\)052335 3.70 2.96 3.70 42\(-\)81
J162116\(-\)004250 3.70 2.96 3.70 109\(-\)138
J014049\(-\)083942 3.71 2.97 3.71 37\(-\)38

\label{fig:forestex}Examples of continuum fit (green solid line) in Lyman-\(\alpha\) regions for the quasar J112442-170517 with \(z_{em}=2.40\) (Top panel) and J051707-441055 at \(z_{em}=1.71\) (Bottom panel). The continuum fitting procedure used is described in the text.

The redshift distribution of the Lyman-\(\alpha\) forest (\(z_{Ly\alpha}\)) of the quasars in our selected sample is shown in Figure \ref{fig:histo}, along with the distribution of C/N in the same region. Due to instrumental sensitivity limits when observing the bluest part of the optical Ly-\(\alpha\) forest, it is more difficult to collect data with high C/N at \(z_{Ly\alpha}<1.7\). The generally lower quality of these data, and the smaller number of quasars contributing to this redshift region, will be reflected in larger uncertainties in the results.

\label{fig:histo}Histograms showing our sample of quasar spectra with C/N\(>24\) in the Lyman-\(\alpha\) forest region. Top panel: redshift distribution of the Lyman-\(\alpha\) forest. Bottom panel: distribution of the continuum-to-noise ratio (C/N) in the forest region. The histogram bins have a width of \(\delta z=0.025\), chosen for convenience. The vertical lines divide the histograms into the redshift bins of width \(\Delta z=0.2\) in which the measurements (at \(z\lesssim 3.1\)) will be collected in the following analysis.

The simulations

\label{sec:sims}

To interpret our observational results and extract temperature constraints from the analysis of the Lyman-\(\alpha\) forest, we used synthetic spectra derived from hydrodynamical simulations and accurately calibrated to match the real data conditions. We performed a set of hydrodynamical simulations that span a large range of thermal histories, based on the models of (citation not found: Becker11) and extended to lower redshifts (\(z<1.8\)) to cover the redshift range of our quasar spectra. The simulations were obtained with the parallel smoothed particle hydrodynamics code GADGET-3, an updated version of GADGET-2 ((citation not found: Springrl05)), with initial conditions constructed using the transfer function of (citation not found: Eisenstein99) and adopting the cosmological parameters \(\Omega_{m}=0.26\), \(\Omega_{\Lambda}=0.74\), \(\Omega_{b}h^{2}=0.023\), \(h=0.72\), \(\sigma_{8}=0.80\), \(n_{s}=0.96\), according to the cosmic microwave background constraints of (citation not found: Reichardt09) and (citation not found: Jarosik11). The helium fraction by mass of the IGM is assumed to be \(Y=0.24\) ((citation not found: Olive04)). Because the bulk of the Lyman-\(\alpha\) absorption corresponds to overdensities \(\Delta=\rho/\bar{\rho}\lesssim 10\), our analysis will not be affected by the star formation prescription, which applies only to gas particles with overdensities \(\Delta>10^{3}\) and temperature \(T<10^{5}\) K.

Starting at \(z=99\), the simulations describe the evolution of both dark matter and gas using \(2\times 512^{3}\) particles with a gas particle mass of \(9.2\times 10^{4}h^{-1}M_{\odot}\) in a periodic box of 10 comoving \(h^{-1}\) Mpc. Instantaneous hydrogen reionization is fixed at \(z=9\). From the one set of initial conditions, many simulations are run, all with gas that is assumed to be in the optically thin limit and in ionisation equilibrium with a spatially uniform ultraviolet background from (citation not found: Haardt01). However, the photo-heating rates, and so the corresponding values of the parameters \(T_{0}\) and \(\gamma\) of Equation \ref{eq:TDrelation}, were changed between simulations. In particular, the photo-heating rates from (citation not found: Haardt01) (\(\epsilon_{i}^{HM01}\)) for the different species (\(i\)=[H i, He i, He ii]) have been rescaled using the relation \(\epsilon_{i}=\zeta\Delta^{\xi}\epsilon_{i}^{HM01}\), where \(\epsilon_{i}\) are the adopted photo-heating rates and \(\zeta\) and \(\xi\) are constants that change depending on the thermal history assumed. A possible bimodality in the temperature distribution at fixed gas density, observed in the simulations of (citation not found: Compostella13) during the early phases of He ii reionization, has not been taken into consideration in our models. We assume, in fact, that the final stages of He ii reionization at \(z<3\), when the IGM is almost completely reionized, can be described to a good approximation by a single temperature–density relation and are no longer affected by the geometry of the diffusion of ionized bubbles. Our models do not include galactic winds or possible outflows from AGN. However, these are expected to occupy only a small proportion of the volume probed by the synthetic spectra and so they are unlikely to have an important effect on the properties of the Lyman-\(\alpha\) forest (see e.g. (citation not found: Bolton08) and also (citation not found: Theuns02) for a discussion in the context of the PDF of the Lyman-\(\alpha\) forest transmitted fraction, where this has been tested).

A summary of the simulations used in this work is reported in Table \ref{table:simulations}. We used different simulation snapshots that covered the redshift range of our quasar spectra (\(1.5\lesssim z\lesssim 3\)) and, to produce synthetic spectra of the Lyman-\(\alpha\) forest, 1024 randomly chosen “lines of sight” through the simulations were selected at each redshift. To match the observational data, we needed to calibrate the synthetic spectra with our instrumental resolution, with the same H i Lyman-\(\alpha\) effective optical depth and the noise level obtained from the analysis of the real spectra (see Section \ref{sec:calibration}).

\label{table:simulations}Parameters of the different simulations used in this work. For each simulation we report the name of the model (column 1), the constants used to rescale the photo-heating rates for the different thermal histories (columns 2 & 3), the temperature of the gas at the mean density at \(z=3\) (column 4) and the power-law index of the \(T\)–\(\rho\) relation at \(z=3\) (column 5).
Model \(\zeta\) \(\xi\) \(T_{0}^{z=3}\)[K] \(\gamma^{z=3}\)
A15 0.30 0.00 5100 1.52
B15 0.80 0.00 9600 1.54
C15 1.45 0.00 14000 1.54
D15 2.20 0.00 18200 1.55
E15 3.10 0.00 22500 1.55
F15 4.20 0.00 27000 1.55
G15 5.30 0.00 31000 1.55
D13 2.20 -0.45 18100 1.32
C10 1.45 -1.00 13700 1.02
D10 2.20 -1.00 18000 1.03
E10 3.10 -1.00 22200 1.04
D07 2.20 -1.60 17900 0.71

The curvature method

\label{sec:curvature}

The definition of curvature (\(\kappa\)), as used by (citation not found: Becker11), is the following:

\begin{equation} \label{c}\kappa=\frac{F^{\prime\prime}}{[1+(F^{\prime})^{2}]^{3/2}}\ , \end{equation}

with the first and second derivatives of the flux (\(F^{\prime}\), \(F^{\prime\prime}\)) taken with respect to wavelength or relative velocity. The advantage of this statistic is that, as demonstrated by (citation not found: Becker11), it is quite sensitive to the IGM temperature but does not require the forest to be decomposed into individual lines. In this way the systematic errors are minimised when the analysis is applied to high-resolution, high-S/N spectra. The calculation is relatively simple: the curvature can be computed from a single b-spline fit made directly to large regions of forest spectra. This statistic incorporates the temperature information from all lines, using more of the available information, as opposed to line-fitting which relies on selecting lines that are dominated by thermal broadening. If calibrated and interpreted using synthetic spectra, obtained from cosmological simulations, the curvature represents a powerful tool to measure the temperature of the IGM gas, \(T(\bar{\Delta})\), at the characteristic overdensities (\(\bar{\Delta}\)) of the Lyman-\(\alpha\) forest at different redshifts.
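As a minimal illustration (not the actual analysis code), the curvature of Equation \ref{c} can be evaluated from a fitted flux array with simple finite differences; here the derivatives are taken with respect to relative velocity and the array names are illustrative.

\begin{verbatim}
import numpy as np

def curvature(velocity, flux_fit):
    """kappa = F'' / [1 + (F')^2]^(3/2), with derivatives taken with
    respect to relative velocity (in km/s)."""
    f1 = np.gradient(flux_fit, velocity)   # first derivative F'
    f2 = np.gradient(f1, velocity)         # second derivative F''
    return f2 / (1.0 + f1**2) ** 1.5
\end{verbatim}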

However, at low redshifts (\(z\lesssim 3\)), the IGM gas tends to show characteristic overdensities (\(\bar{\Delta}\)) much higher than the mean density of the IGM (\(\bar{\rho}\)). Estimating the temperature at the mean density of the IGM (\(T_{0}\)) is then not straightforward. In fact, in the approximation of a gas collected into non-overlapping clumps of uniform density that have the same extent in redshift space as they have in real space, the Lyman-\(\alpha\) optical depth at a given overdensity (\(\Delta\)) will scale as:

\begin{equation} \label{eq:D}\tau(\Delta)\propto(1+z)^{4.5}\Gamma^{-1}T_{0}^{-0.7}\Delta^{2-0.7(\gamma-1)}\ , \end{equation}

where \(\Gamma\) is the H i photoionisation rate and \(T_{0}\) and \(\gamma\) are the parameters that describe the thermal state of the IGM at redshift \(z\) in Equation \ref{eq:TDrelation} ((citation not found: Weinberg97)). In general, if we assume that the forest is sensitive to overdensities that produce a Lyman-\(\alpha\) optical depth \(\tau(\Delta)\simeq 1\), it is then clear from Equation \ref{eq:D} that these characteristic overdensities will vary with redshift. At high redshift the forest traces gas near the mean density, while in the redshift range of interest here (\(z\lesssim 3\)) the absorption arises from densities increasingly above the mean. As a consequence, the translation of the \(T(\bar{\Delta})\) measurements at the characteristic overdensities into the temperature at the mean density becomes increasingly dependent on the value of the slope of the temperature–density relation (Equation \ref{eq:TDrelation}). Because \(\gamma\) is poorly constrained, a degeneracy is introduced between \(\gamma\) and the final results for \(T_{0}\) that can be overcome only with a more precise measurement of the \(T\)–\(\rho\) relation.

In this work we do not attempt to constrain the full \(T\)–\(\rho\) relation because that would require a simultaneous estimation of both \(T_{0}\) and \(\gamma\). Instead, consistently following the steps of the previous analysis of (citation not found: Becker11), we establish the characteristic overdensities (\(\bar{\Delta}\)) empirically and obtain the corresponding temperatures from the curvature measurements. We define the characteristic overdensity traced by the Lyman-\(\alpha\) forest at each redshift as the overdensity at which \(T(\bar{\Delta})\) is a one-to-one function of the mean absolute curvature, regardless of \(\gamma\) (see Section \ref{sec:SimAnalysis}). We then recover the temperature at the mean density (\(T_{0}\)) from the temperature \(T(\bar{\Delta})\) using Equation \ref{eq:TDrelation} with a range of values of \(\gamma\) (see Section \ref{sec:temperature}).
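For reference, assuming the usual power-law form of Equation \ref{eq:TDrelation}, \(T(\Delta)=T_{0}\Delta^{\gamma-1}\), the conversion applied in that final step is simply

\begin{equation} T_{0}=T(\bar{\Delta})\,\bar{\Delta}^{1-\gamma}\ , \end{equation}

so the quoted \(T_{0}\) values inherit the uncertainty in the adopted fiducial \(\gamma\).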

We can summarize our analysis in 3 main steps:

  • Data analysis (Section \ref{sec:analysis}): from the selected sample of Lyman-\(\alpha\) forest spectra we compute the curvature (\(\kappa\)) in the range of \(z\simeq 1.5-3.0\). From the observational spectra we also obtain measurements of the effective optical depth that we use to calibrate the simulations.

  • Simulations analysis (Section \ref{sec:SimAnalysis}): we calibrate the simulation snapshots at different redshifts in order to match the observational data. We obtain the curvature measurements from the synthetic spectra following the same procedure that we used for the real data and we determine the characteristic overdensities (\(\bar{\Delta}\)) empirically, finding for each redshift the overdensity at which \(T(\bar{\Delta})\) is a one-to-one function of \(log(\langle|\kappa|\rangle)\) regardless of \(\gamma\).

  • Final temperature measurements (Section \ref{sec:temperature}): we determine the \(T(\bar{\Delta})\) corresponding to the observed curvature measurements by interpolating the \(T(\bar{\Delta})\)-\(log(\langle|\kappa|\rangle)\) relationship in the simulations to the values of \(log(\langle|\kappa|\rangle)\) from the observational data.

Data analysis

\label{sec:analysis}

To directly match the box size of the simulated spectra, we compute the curvature statistic on sections of \(10h^{-1}\) Mpc (comoving distance) of “metal-free” Lyman-\(\alpha\) forest regions in our quasar spectra. Metal lines are, in fact, a potentially serious source of systematic errors in any measure of the absorption features of the Lyman-\(\alpha\) forest. These lines tend to show individual components significantly narrower than the Lyman-\(\alpha\) ones (\(b\lesssim 15\) km s\(^{-1}\)) and, if included in the calculation, they will bias the curvature measurements towards high values. As a consequence, the temperature obtained would be much lower. For these reasons we need to “clean” our spectra by adopting a comprehensive metal masking procedure (see Section \ref{sec:metals}).

However, metals are not the only contaminant: even in spectra effectively masked of contaminating lines, the direct calculation of the curvature can be affected by other sources of uncertainty, particularly noise and continuum errors. To be as consistent as possible with the previous work of (citation not found: Becker11), we adopted the same strategies to reduce these potential systematic errors.

Noise: Even in high-resolution, high-S/N spectra, curvature measurements computed directly from the flux will be dominated by noise. To avoid this problem we fit a cubic b-spline to the flux and then compute the curvature from the fit. The top panel of Figure \ref{fig:kex} shows a section of normalized Lyman-\(\alpha\) spectrum in which the solid green line is the b-spline fit from which we obtain the curvature. For consistency, we adopt the same specifications for the fitting routine as in the previous work of (citation not found: Becker11). We use an adaptive fit with break points that are iteratively added, from an initial separation of 50 km s\(^{-1}\), where the fit is poor. The iterations proceed until the spacing between break points reaches a minimum value or the fit converges. With this technique we reduce the sensitivity of the curvature to the amount of noise in the spectrum, as we can test using the simulations (see Section \ref{sec:SimAnalysis}).
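A minimal sketch of such an adaptive fit is given below, assuming scipy's LSQUnivariateSpline as the cubic b-spline fitter; the convergence threshold, the minimum break-point spacing and the variable names are illustrative choices, not the exact values used in the analysis.

\begin{verbatim}
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def adaptive_bspline(vel, flux, err, init_spacing=50.0, min_spacing=10.0,
                     chi2_thresh=4.0, max_iter=20):
    """Cubic b-spline fit with break points added where the fit is poor.
    `vel` is the relative velocity grid (km/s), `err` the per-pixel error."""
    knots = np.arange(vel[0] + init_spacing, vel[-1] - init_spacing, init_spacing)
    for _ in range(max_iter):
        spline = LSQUnivariateSpline(vel, flux, knots, k=3)
        resid2 = ((flux - spline(vel)) / err) ** 2
        # Add a break point at the centre of each inter-knot interval where
        # the mean chi^2 is poor, if the minimum spacing allows it.
        edges = np.concatenate(([vel[0]], knots, [vel[-1]]))
        new_knots = list(knots)
        for lo, hi in zip(edges[:-1], edges[1:]):
            in_bin = (vel >= lo) & (vel < hi)
            if in_bin.any() and resid2[in_bin].mean() > chi2_thresh \
               and (hi - lo) / 2.0 >= min_spacing:
                new_knots.append(0.5 * (lo + hi))
        new_knots = np.sort(new_knots)
        if len(new_knots) == len(knots):   # no break points added: converged
            break
        knots = new_knots
    return spline
\end{verbatim}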

Continuum: Equation \ref{c} shows a dependence of the curvature on the amplitude of the flux, which in turn depends on the accuracy with which the unabsorbed quasar continuum can be estimated. The difficulty in determining the correct continuum level in the Lyman-\(\alpha\) region can therefore be a source of uncertainty. To circumvent this issue we “re-normalize” each \(10h^{-1}\) Mpc section of data, dividing the flux of each section (already normalized by the longer-range fit of the continuum) by the maximum value of the b-spline fit in that interval. Computing the curvature from the re-normalized flux, we remove a potential systematic error due to inconsistent placement of the continuum. While this error could be important at high redshifts, where the Lyman-\(\alpha\) forest is denser, at \(z\lesssim 3\) we do not expect a large correction. The bottom panel of Figure \ref{fig:kex} shows the curvature computed from the b-spline fit of the re-normalized flux (applying Equation \ref{c}) for a section of forest.

\label{fig:kex}Curvature calculation example for one section of Lyman-\(\alpha\) forest. Top panel: b-spline fit (green line) of a section of \(10h^{-1}\) Mpc of normalized real spectrum. Bottom panel: the curvature statistic computed from the fit as defined in Equation \ref{c}.

We next measure the mean absolute curvature \(\langle|\kappa|\rangle\) for the “valid” pixels of each section. We consider valid all the pixels where the re-normalized b-spline fit (\(F^{\rm R}\)) falls in the range \(0.1\leq F^{\rm R}\leq 0.9\). In this way we exclude both the saturated pixels, which do not contain any useful information, and the pixels with flux near the continuum. The upper limit is adopted because the flux profile tends to be flatter near the continuum and, as a consequence, the curvature for these pixels is considerably more uncertain. This potential uncertainty is particularly important at low redshift because the increasing mean flux also increases the number of pixels near the continuum ((citation not found: Faucher-Giguere08)).
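In code, this last step reduces to a simple masking of the re-normalized fit before averaging; a minimal sketch (with illustrative names) is:

\begin{verbatim}
import numpy as np

def mean_abs_curvature(kappa, flux_renorm, lo=0.1, hi=0.9):
    """Mean absolute curvature over the 'valid' pixels, i.e. those where
    the re-normalized b-spline fit lies between lo and hi (0.1-0.9 in the
    text), excluding saturated pixels and pixels close to the continuum."""
    valid = (flux_renorm >= lo) & (flux_renorm <= hi)
    return np.mean(np.abs(kappa[valid]))
\end{verbatim}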

Observed curvature and re-normalized optical depth

\label{sec:Realtau}

The final results for the curvature measurements from the real quasar spectra are shown as the green data points in Figure \ref{fig:kcomp}. In this plot the values of \(\langle|\kappa|\rangle\) obtained from all the \(10h^{-1}\) Mpc sections of forest have been collected and averaged in redshift bins of \(\Delta z=0.2\). The error bars show the \(1\sigma\) uncertainty obtained with a bootstrap technique applied directly to the curvature measurements within each bin. In every redshift bin the mean absolute curvature values of a large number of sections (\(N>100\)) have been averaged, so the bootstrap can be considered an effective tool to recover the uncertainties. It is important to note that the smaller number of sections contributing to the lowest redshift bin (\(1.5\leq z\leq 1.7\); see Figure \ref{fig:histo}) is reflected in a larger error bar. For comparison, the curvature results of (citation not found: Becker11) (black triangles) are shown for redshift bins of \(\Delta z=0.4\). In the common redshift range the results are in general agreement, even if our values appear shifted slightly towards higher curvatures. Taking into consideration the fact that each point cannot be considered independent of its neighbours, this shift between the results from the two different data samples is not unexpected and may also reflect differences in the signal-to-noise ratio between the samples. At this stage we do not identify any obvious strong departure from a smooth trend in \(\langle|\kappa|\rangle\) as a function of redshift.
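The binned uncertainties can be reproduced with a standard bootstrap over the per-section \(\langle|\kappa|\rangle\) values; the following sketch (with an illustrative number of resamples) shows the idea.

\begin{verbatim}
import numpy as np

def bootstrap_error(section_values, n_boot=1000, seed=0):
    """1-sigma bootstrap uncertainty on the mean of the per-section
    <|kappa|> values collected in one Delta z = 0.2 bin."""
    rng = np.random.default_rng(seed)
    vals = np.asarray(section_values, dtype=float)
    means = [rng.choice(vals, size=vals.size, replace=True).mean()
             for _ in range(n_boot)]
    return np.std(means)
\end{verbatim}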

\label{fig:kcomp}Curvature measurements from the observational quasar spectra. The curvature values obtained in this work (green points) for redshift bins of \(\Delta z=0.2\) are compared with curvature points from (citation not found: Becker11) with \(\Delta z=0.4\) (black triangles). Horizontal error bars show the redshift range spanned by each bin. Vertical error bars in this work are 1\(\sigma\) and have been obtained from a bootstrap technique using the curvature measurements within each bin. In Becker et al. the errors are 2\(\sigma\), recovered using sets of artificial spectra. Those errors from the simulations are in agreement with a direct bootstrap of the data for bins which contain a large number of data points. Curvature measurements obtained in this work from spectra not masked for metals are also shown (red points).

From each section we also extract the mean re-normalized flux (\(F^{\rm R}\)), which we use to estimate the re-normalized effective optical depth (\(\tau_{\rm eff}^{\rm R}=-\ln\langle F^{\rm R}\rangle\)) needed for the calibration of the simulations (see Section \ref{sec:calibration}). In Figure \ref{fig:taucomp} we plot the \(\tau_{\rm eff}^{\rm R}\) obtained in this work for redshift bins of \(\Delta z=0.2\) (green data points) compared with the results of (citation not found: Becker11) for bins of \(\Delta z=0.4\) (black triangles). Vertical error bars are 1\(\sigma\) bootstrap uncertainties for our points and \(2\sigma\) for (citation not found: Becker11). For simplicity we fitted our data with a single power law (\(\tau=A(1+z)^{\alpha}\)) because we do not expect a possible small variation of the slope as a function of redshift to have a relevant effect on the final temperature measurements. Comparing the least-squares fit to our measurements (green solid line) with the re-normalized effective optical depth of Becker et al., it is evident that there is a systematic difference between the two samples, increasing towards lower redshifts: our \(\tau_{\rm eff}^{\rm R}\) values are \(\sim 10\%\) higher than the previous measurements and, even if the black triangles tend to return inside the \(\pm 1\sigma\) confidence interval on the fit (green dotted lines) for \(z\gtrsim 2.6\), the results are not in close agreement. However, the main quantities of interest in this work (i.e. the curvature, from which the IGM temperature estimates are calculated) derive from a comparison of real and simulated spectra which have been re-normalized in the same way, so we expect that they will not depend strongly on the estimation of the continuum in the way the two sets of \(\tau^{\rm R}_{\rm eff}\) results in Figure \ref{fig:taucomp} do (we compare the effective optical depth prior to the continuum re-normalization from the simulations calibrated with the \(\tau^{\rm R}_{\rm eff}\) results in Section \ref{sec:calibration}). The higher values shown by our re-normalized effective optical depth reflect the variance expected between different samples: a systematic scaling, of the order of the error bar sizes, between our results and the previous ones is not unexpected given the non-independence of the data points within each set. These differences will be reflected in different characteristic overdensities probed by the forest (see Section \ref{sec:characO}). Fortunately, the correct calibration between temperature and curvature measurements will wash out this effect, allowing a consistent temperature calibration as a function of redshift (see Section \ref{sec:temperature} and Appendix A).
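The power-law fit itself is a standard weighted least-squares problem; a minimal sketch using scipy (with an illustrative starting guess) is given below.

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def fit_tau_powerlaw(z, tau_eff, tau_err):
    """Least-squares fit of tau_eff^R(z) = A (1+z)^alpha to the binned values."""
    model = lambda zz, A, alpha: A * (1.0 + zz) ** alpha
    popt, pcov = curve_fit(model, z, tau_eff, sigma=tau_err,
                           p0=[0.002, 3.5], absolute_sigma=True)
    A, alpha = popt
    perr = np.sqrt(np.diag(pcov))          # 1-sigma parameter uncertainties
    return A, alpha, perr
\end{verbatim}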

\label{fig:taucomp}Effective re-normalized optical depth (\(\tau_{\rm eff}^{\rm R}\)) from our quasar spectra (green points) compared with the results from (citation not found: Becker11) (black triangles). Vertical error bars are 1\(\sigma\) bootstrap uncertainties for our points and \(2\sigma\) for the previous work, while the horizontal bars show the redshift range spanned by each bin. The solid line represents the least square fit from our measurements while the dotted lines show the \(\pm 1\sigma\) confidence interval on the fit. The measurements have been obtained directly from quasar spectra after correcting for metal absorption.

Part of this offset may be explained by the fact that, according to (citation not found: Rollinde13), bootstrap errors computed from sections of 10 \(h^{-1}\) Mpc (and hence \(\lesssim 25\) Å) could underestimate the variance; another possible cause could be differences in the metal masking procedures of the two studies. In the next section we explain and test our metal masking technique, showing that our results do not seem to imply a strong bias due to contamination from unidentified metal lines.

Metal correction

\label{sec:metals}

Metal lines can be a serious source of systematic uncertainty for both the measurement of the re-normalized flux (\(F^{\rm R}\)) and the curvature. While in the first case it is possible to choose between a statistical ((citation not found: Tytler04); (citation not found: Kirkman05)) and a direct ((citation not found: Schaye03)) estimation of the metal absorption, for the curvature it is necessary to directly identify and mask individual metal lines in the Lyman-\(\alpha\) forest. Removing these features accurately is particularly important at redshifts \(z\lesssim 3\), where there are fewer Lyman-\(\alpha\) lines and metals could therefore significantly affect the results. We chose to identify metal lines in two steps: an “automatic” masking procedure followed by a manual refinement. First, we use well-known pairs of strong metal-line transitions to find all the obvious metal absorbing redshifts in the spectrum. We then classify each absorber as high (e.g. C iv, Si iv) or low (e.g. Mg ii, Fe ii) ionisation and strong or weak absorption, and we evaluate the width of its velocity structure. To avoid contamination, we mask all the regions in the forest that could plausibly contain common metal transitions of the same type, at the same redshift and within the same velocity width as these systems. The next step is to double-check the forest spectra by eye, searching for remaining unidentified narrow lines (which may be metals) and other contaminants (such as damped Lyman-\(\alpha\) systems or corrupted chunks of data). Acknowledging that this procedure is somewhat subjective, at this stage we conservatively mask any feature with very narrow components or sharp edges.
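The first, “automatic” step amounts to projecting each identified absorber onto the forest through a list of common transitions and masking a velocity window around each predicted position. The sketch below illustrates this for a few transitions; the rest wavelengths are standard laboratory values, but the particular transition list, velocity width and function names are illustrative only.

\begin{verbatim}
import numpy as np

# Rest wavelengths (Angstrom) of a few transitions commonly used to flag
# metal systems; this list is illustrative, not the full set used.
LINES = {'CIV_1548': 1548.20, 'CIV_1550': 1550.78,
         'SiIV_1393': 1393.76, 'SiIV_1402': 1402.77,
         'MgII_2796': 2796.35, 'FeII_2382': 2382.76}
C_KMS = 299792.458

def mask_metal_system(wave, mask, z_abs, dv_kms, transitions):
    """Flag forest pixels that could contain the listed transitions of an
    absorber at redshift z_abs, within +/- dv_kms of its velocity structure."""
    for name in transitions:
        w_obs = LINES[name] * (1.0 + z_abs)
        half_width = w_obs * dv_kms / C_KMS
        mask |= (wave > w_obs - half_width) & (wave < w_obs + half_width)
    return mask

# Example: a strong high-ionisation system found at z_abs = 2.1 via its
# C iv doublet, with a ~150 km/s wide velocity structure:
# mask = np.zeros(wave.size, dtype=bool)
# mask = mask_metal_system(wave, mask, 2.1, 150.0,
#                          ['CIV_1548', 'CIV_1550', 'SiIV_1393', 'SiIV_1402'])
\end{verbatim}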

Figure \ref{fig:taumetals} presents the effect of the correction for metal-line absorption on the re-normalized effective optical depth measurements: from the raw spectra (red triangles), to the spectra treated with the first “automatic” correction (yellow stars), to the final results double-checked by eye (green points). For all three cases we show the least-squares fit (solid lines of corresponding colors) and the 1\(\sigma\) vertical error bars. Table \ref{table:metalcorr} reports the numerical values for our metal absorption compared with the previous results of (citation not found: Schaye03) and (citation not found: Kirkman05) used in the effective optical depth measurements of (citation not found: Faucher-Giguere08). The relative metal correction to \(\tau_{\rm eff}^{\rm R}\) decreases with increasing redshift, as expected if the IGM is monotonically enriched with time ((citation not found: Faucher-Giguere08)), and is in general consistent with the previous results. In their work, in fact, Faucher-Giguère et al. evaluated the relative percentages of metal absorption in their measurements of \(\tau_{\rm eff}\) when applying two different corrections: the one obtained with the direct identification and masking method of (citation not found: Schaye03), and the statistical estimate of (citation not found: Kirkman05), based on measurements of the amount of metal absorption redwards of the Lyman-\(\alpha\) emission line. At each redshift, (citation not found: Faucher-Giguere08) found good agreement between the estimates of their effective optical depth based on the two methods of removing metals. In individual redshift bins, applying our final relative metal absorption percentages, we obtain a \(\tau_{\rm eff}^{\rm R}\) that agrees well within \(1\sigma\) with the values obtained after applying the corrections of these previous works, giving confidence that our metal correction is accurate to the level of our statistical error bars. However, our corrections are overall systematically larger than the previous ones. If we have been too conservative in removing potentially metal-contaminated portions of spectra in our second, “by-eye” step, this will bias the effective optical depth to lower values.

\label{fig:taumetals}Comparison of our final measurements of the re-normalized effective optical depth (\(\tau_{\rm eff}^{\rm R}\)) (green points) with the values without applying any metal correction to the spectra (red triangles) and with only the first, automatic correction (yellow stars). Solid lines are the least square fits to the corresponding points, while vertical bars represent the \(1\sigma\) bootstrap errors. The metal absorption has been estimated following the procedure described in the text.

\label{table:metalcorr}Metal absorption correction to the raw measurements of \(\tau_{\rm eff}^{\rm R}\). For each redshift (column 1) we report the percentage metal absorption correction obtained in this work with the first, ‘automatic’ mask described in the text (column 2) and with the refinement by eye (column 3). For comparison, in the overlapping redshift range we report the results of (citation not found: Faucher-Giguere08) obtained applying the direct metal correction of (citation not found: Schaye03) (column 4) and the statistical one of (citation not found: Kirkman05) (column 5).
z automatic corr. final corr. Schaye corr. Kirkman corr.
1.6 \(13.8\%\) \(22.9\%\) n.a. n.a.
1.8 \(13.2\%\) \(21.5\%\) n.a. n.a.
2.0 \(12.6\%\) \(20.1\%\) \(13.0\%\) \(21.0\%\)
2.2 \(12.0\%\) \(18.8\%\) \(12.3\%\) \(16.0\%\)
2.4 \(11.5\%\) \(17.5\%\) \(11.4\%\) \(12.6\%\)
2.6 \(11.0\%\) \(16.3\%\) \(10.4\%\) \(10.4\%\)
2.8 \(10.5\%\) \(15.1\%\) \(9.7\%\) \(7.8\%\)
3.0 \(8.8\%\) \(14.0\%\) \(9.0\%\) \(6.0\%\)

Figure \ref{fig:kcomp} also shows the effect of our metal correction on the curvature measurements: red points are curvature values obtained from the raw spectra while the green points are the final measurements from the masked sections, with vertical bars being the \(1\sigma\) errors. Metal contamination has important effects on the curvature measurements: after the correction the curvature values decrease by \(\sim 30{-}40\%\) at each redshift, even if the relative differences among redshift bins seem to be maintained. The potential effects of an inaccurate metal correction on the final temperature measurements will be considered in Section \ref{sec:temperature}.

Proximity region

A final possible contaminant in the measurements of the effective optical depth is the inclusion in the analysis of the QSO proximity regions. The so-called “proximity regions” are the zones near enough to a quasar to be subject to the local influence of its UV radiation field. These areas may be expected to show lower Lyman-\(\alpha\) absorption with respect to the cosmic mean due to the high degree of ionisation. To understand whether the proximity effect can bias the final estimates of \(\tau_{\rm eff}^{\rm R}\), we compare our results with the measurements obtained after masking the chunks of spectra that are potentially affected by the quasar radiation. Typically the ionizing UV flux of a bright quasar is thought to affect regions of \(\lesssim 10\) proper Mpc along its own line of sight (e.g. (citation not found: Scott00); (citation not found: Worseck06)). To be conservative, we masked the 25 proper Mpc nearest to each quasar's Lyman-\(\alpha\) and Lyman-\(\beta\) emission lines and re-computed \(\tau_{\rm eff}^{\rm R}\). The final comparison is presented in Figure \ref{fig:tauprox}: masking the proximity regions does not have any significant effect on \(\tau_{\rm eff}^{\rm R}\). In fact, the results obtained excluding these zones (light green circles) closely match those inferred without this correction (green points), well within the 1\(\sigma\) error bars. We therefore do not expect the inclusion of the QSO proximity regions in our analysis to significantly affect the temperature measurements.
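As a rough check of the size of the masked windows, the redshift interval corresponding to a given proper distance at the quasar redshift can be estimated from \(dl_{\rm proper}=c\,dz/[(1+z)H(z)]\); a minimal sketch using astropy with the simulation cosmology (the function name is illustrative) is:

\begin{verbatim}
import astropy.units as u
from astropy.constants import c
from astropy.cosmology import FlatLambdaCDM

# Cosmology matching the simulation parameters quoted in the previous Section.
cosmo = FlatLambdaCDM(H0=72.0, Om0=0.26)

def proximity_dz(z_em, proper_mpc=25.0):
    """Redshift interval corresponding to `proper_mpc` proper Mpc at z_em,
    from dl_proper = c dz / [(1+z) H(z)]."""
    dz = (proper_mpc * u.Mpc * (1.0 + z_em) * cosmo.H(z_em) / c).decompose()
    return dz.value

# Example: 25 proper Mpc at z_em = 3 corresponds to roughly dz ~ 0.1.
\end{verbatim}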

\label{fig:tauprox}Effect of masking the proximity region on the re-normalized effective optical depth. The final results for \(\tau_{\rm eff}^{\rm R}\) without the masking of the proximity zones are shown as green points and those with this correction are shown as light green circles. Solid lines represent the least square fit of the data and vertical error bars are the \(1\sigma\) statistical uncertainties.

Simulations Analysis

\label{sec:SimAnalysis}

To extract temperature constraints from our measurements of the curvature we need to interpret our observational results using simulated spectra, accurately calibrated to match the real data conditions. In this Section we explain how we calibrate and analyse the synthetic spectra to find the connection between the curvature measurements and the temperature at the characteristic overdensities. We will use these results in Section \ref{sec:temperature}, where we will interpolate the \(T(\bar{\Delta})\)–\(log\langle|\kappa|\rangle\) relationship to the values of \(log\langle|\kappa|\rangle\) from the observational data to obtain our final temperature measurements.

The calibration

\label{sec:calibration}

To ensure a correct comparison between simulated and observational data, we calibrate our synthetic spectra to match the spectral resolution and pixel size of the real spectra. We adjust the simulated re-normalized effective optical depth (\(\tau_{\rm eff}^{\rm R}\)) to the one extracted directly from the observational results (see Section \ref{sec:Realtau}) and we add to the synthetic spectra the same level of noise recovered from our sample.

Addition of noise

To add the noise to the synthetic spectra we proceed in three steps. First, we obtain the distributions of the mean noise corresponding to the 10\(h^{-1}\) Mpc sections of the quasar spectra contributing to each redshift bin. As shown in Figure \ref{fig:noise} (top panel), these distributions can be complex, so to save computational time we simplify them by extracting grids of noise values with a separation of \(\Delta\sigma=0.01\) and weights rescaled proportionally to the original distribution (Figure \ref{fig:noise}, bottom panel). At each redshift the noise is then added at the levels of the corresponding noise grid, and the quantities computed from the synthetic spectra with different levels of noise are averaged with the weights of the respective noise grid.
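A minimal sketch of the second step, collapsing the measured per-section noise values onto a weighted grid with spacing \(\Delta\sigma=0.01\) (variable names illustrative), is:

\begin{verbatim}
import numpy as np

def noise_grid(section_sigmas, step=0.01):
    """Collapse the distribution of per-section mean noise values into a
    grid of noise levels spaced by `step`, with weights proportional to
    the number of sections falling in each grid cell."""
    sigmas = np.asarray(section_sigmas, dtype=float)
    edges = np.arange(sigmas.min(), sigmas.max() + step, step)
    counts, edges = np.histogram(sigmas, bins=edges)
    centres = 0.5 * (edges[:-1] + edges[1:])
    keep = counts > 0
    return centres[keep], counts[keep] / counts[keep].sum()
\end{verbatim}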

\label{fig:noise}Top panel: an example of the noise distribution for a \(\Delta z=0.2\) redshift bin of real quasar spectra (in this case one with \(z_{mean}=2.4\)). The x-axis of the histogram shows the mean noise per section of 10\(h^{-1}\) Mpc while the y-axis shows the number of sections contributing to the particular bin. Bottom panel: the same distribution presented in the top panel but simplified, collecting the data in a noise grid of \(\Delta\sigma=0.01\).

Recovered optical depth

\label{sec:simtau}

The simulated spectra are scaled to match the re-normalized effective optical depth, \(\tau_{\rm eff}^{\rm R}\), of the real spectra. They can then be used to recover the corresponding effective optical depth (\(\tau_{\rm eff}\); prior to the continuum re-normalization). In fact, the re-normalized effective optical depth cannot be compared directly with results from the literature; to do so we need to compute the mean flux and then \(\tau_{\rm eff}\) (\(\tau_{\rm eff}=-\ln\langle F\rangle\)) from the synthetic spectra using the same procedure applied previously to the real spectra (see Section \ref{sec:Realtau}) but without the re-normalization. Figure \ref{fig:taurecov} shows the trend of the recovered effective optical depth for three different simulations, A15, G15 and C15 in Table \ref{table:simulations}. For clarity we do not plot the curves for the remaining simulations, but they lie in between the curves of simulations A15 and G15. Depending on the different thermal histories, the recovered effective optical depths vary slightly, but the separation of these values is small compared with the uncertainties about the trend (e.g. green dotted lines for simulation C15).
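A common way to perform this matching is to rescale the simulated optical depths by a single constant and solve for the value that reproduces the target effective optical depth; whether this is exactly the scaling adopted here is left implicit in the text, so the sketch below (which ignores the re-normalization step for simplicity) should be read as illustrative.

\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def rescale_to_tau_eff(tau_sim, tau_eff_target):
    """Rescale the simulated Ly-alpha optical depths by a constant A so
    that the effective optical depth, -ln<exp(-A tau)>, matches the
    target value measured from the real spectra."""
    f = lambda A: -np.log(np.mean(np.exp(-A * tau_sim))) - tau_eff_target
    A = brentq(f, 1e-3, 1e3)        # bracket assumed to enclose the solution
    return A, np.exp(-A * tau_sim)  # scaling factor and rescaled fluxes
\end{verbatim}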

In Figure \ref{fig:taurecov} we also compare our results with the previous studies of (citation not found: Becker12) and (citation not found: Kirkman05). The results of (citation not found: Becker12), which for \(z\lesssim 2.5\) have been scaled to the (citation not found: Faucher-Giguere08) measurements, are significantly shifted toward lower \(\tau_{\rm eff}\), showing better agreement with Kirkman et al. For \(z<2.2\) the effective optical depth of Kirkman et al. still shows values \(\sim 30\%\) lower than ours. In this case, again, part of the difference between the results could be explained by the non-independence of the data points within each set. Such an offset could also be boosted by a possible selection effect: the lines of sight used in this work were taken from the UVES archive and, as such, may contain a higher proportion of damped Lyman-\(\alpha\) systems; even though these systems have been masked out of our analysis, their presence will increase the clustering of the forest around them and so our sample will have a higher effective optical depth as a consequence. The simulated \(\tau_{\rm eff}\) values presented in Figure \ref{fig:taurecov} were obtained by matching the observed \(\tau_{\rm eff}^{\rm R}\) (see Figure \ref{fig:taucomp}) and do not represent one of the main results of this work, so we did not further investigate possible selection effects driven by our UVES sample. Being aware of this possibility, we decided to maintain the consistency between our curvature measurements and the simulations used to infer the temperature values, calibrating the simulated spectra with the effective optical depth obtained from our sample (see Appendix A). In the comparison between our results and the previous ones of (citation not found: Becker11), the effect of a calibration with a higher \(\tau_{\rm eff}\) will manifest itself as a shift towards lower values in the characteristic overdensities traced by the Lyman-\(\alpha\) forest at the same redshift (as we will see in Section \ref{sec:characO}).

\label{fig:taurecov}The effective optical depth, prior to the re-normalization correction, recovered from the simulations which matches the re-normalized effective optical depth, \(\tau_{\rm eff}^{\rm R}\), of our real spectra. The effective optical depth for three different simulations is shown: A15 (blue solid line), G15 (pink solid line) and C15 (green solid line). The recovered \(\tau_{\rm eff}\) for the remaining simulations in Table 2 are not reported for clarity but they lie in between the trends of A15 and G15. The spread in values for different thermal histories is, in fact, small compared with the 1\(\sigma\) uncertainties about the trend of each of the effective optical depths (green dotted lines for simulation C15). Our results are compared with the effective optical depths of (citation not found: Becker12) (red points) and (citation not found: Kirkman05) (black points).

The curvature from the simulations

\label{sec:simcurv}

Once the simulations have been calibrated, we can measure the curvature on the synthetic spectra using the same method that we used for the observed data (see Section \ref{sec:analysis}). Figure \ref{fig:kfavorite} plots the values of \(log\langle|\kappa|\rangle\) obtained from our set of simulations in the same redshift range and with the same spectral resolution, effective optical depth and mix of noise levels as the real spectra. Different lines correspond to different simulations in which the thermal state parameters are changed. We can preliminarily compare our data points with the simulations, noting that the simulation whose curvature values lie closest to the real observations is C15, which assumes the fiducial parameter \(\gamma=1.54\) at redshift \(z\)=3. The variation in the properties of the real data causes the simulated curvature to be a slightly non-smooth function of redshift.

\label{fig:kfavorite}Curvature measurements: points with vertical error bars (\(1\sigma\) uncertainty) are for the real data and are compared with the curvature obtained from simulations with different thermal histories calibrated with the same spectral resolution, noise and effective optical depth of the observed spectra at each redshift.

As expected, the curvature values are sensitive to changes in the effective optical depth: as shown in Figure \ref{fig:kerror} for the fiducial simulation C15, the \(1\sigma\) uncertainty about the fit of the observed \(\tau_{\rm eff}^{\rm R}\) (see Figure \ref{fig:taucomp}) is in fact reflected in a scatter about the simulated curvature of about \(10\%\) at redshift \(z\sim 1.5\), decreasing at higher redshifts. The next section shows how this dependence of the simulated curvature on the matched effective optical depth will imply differences in the recovered characteristic overdensities between our work and (citation not found: Becker11).

\label{fig:kerror}Dependence of the simulated \(log\langle|\kappa|\rangle\) on the effective optical depth with which the simulations have been calibrated. The curvature recovered using the thermal history C15 (green solid line) is reported with the 1\(\sigma\) uncertainties about the trend (green dotted lines) corresponding to the \(1\sigma\) uncertainties about the fit of the observed \(\tau_{\rm eff}^{R}\) in Figure \ref{fig:taucomp}. The variation generated in the curvature is about \(10\%\) for \(z=1.5\) corresponding to a scatter of \(\pm 2-3\times 10^{3}\) K in the temperature calibration, and decreases at higher redshift.

The characteristic overdensities

\label{sec:characO}

The final aim of this work is to use the curvature measurements to infer information about the thermal state of the IGM, but this property depends on the density of the gas. The Lyman-\(\alpha\) forest, and so the curvature obtained from it, does not always trace the gas at the mean density; instead, at low redshift (\(z\lesssim 3\)) the forest lines will typically arise from densities that are increasingly above the mean. The degeneracy between \(T_{0}\) and \(\gamma\) in the temperature–density relation (Equation \ref{eq:TDrelation}) will therefore be significant. For this reason, at this stage we do not constrain both of these parameters but use the curvature to obtain the temperature at the characteristic overdensities (\(\bar{\Delta}\)) probed by the forest, which do not depend on the particular value of \(\gamma\). In this way we can uniquely associate our curvature values with the temperature at these characteristic overdensities, keeping in mind that the observed values of \(\kappa\) nevertheless represent an average over a range of densities.

The method

We determine the characteristic overdensities empirically, finding for each redshift the overdensity at which \(T(\bar{\Delta})\) is a one-to-one function of \(log\langle|\kappa|\rangle\) regardless of \(\gamma\). The method is illustrated in Figure \ref{fig:kDelta}: for each simulation type we plot the values of \(T(\Delta)\) versus \(log\langle|\kappa|\rangle\), corresponding to the points with different colors, and we fit the distribution with a simple power law. We change the value of the overdensity \(\Delta\) until we find the one (\(\bar{\Delta}\)) for which all the points from the different simulations (with different thermal histories and \(\gamma\) parameters) lie on the same curve, minimizing the \(\chi^{2}\) of the fit. The final \(T(\bar{\Delta})\) of our real data (see Section \ref{sec:temperature}) will be determined by interpolating the \(T(\bar{\Delta})\)–\(log\langle|\kappa|\rangle\) relationship in the simulations to the value of \(log\langle|\kappa|\rangle\) computed directly from the real spectra.
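A minimal sketch of this search (unweighted, in log-log space, with illustrative data structures) is given below; in practice the fit residuals would be weighted by their uncertainties.

\begin{verbatim}
import numpy as np

def best_overdensity(deltas, temps_at_delta, log_kappa):
    """Find the trial overdensity at which T(Delta) is closest to a single
    power law in <|kappa|> across all simulations at a given redshift.

    deltas         : trial overdensity values
    temps_at_delta : dict mapping each trial Delta to an array of T(Delta),
                     one entry per simulation
    log_kappa      : array of log10<|kappa|>, one entry per simulation
    """
    chi2 = []
    for d in deltas:
        logT = np.log10(temps_at_delta[d])
        # A power law T = a <|kappa|>^b is a straight line in log-log space.
        coeffs = np.polyfit(log_kappa, logT, 1)
        resid = logT - np.polyval(coeffs, log_kappa)
        chi2.append(np.sum(resid ** 2))
    return deltas[int(np.argmin(chi2))]
\end{verbatim}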

\label{fig:kDelta}Example of the one-to-one function between \(log\langle|\kappa|\rangle\) and temperature obtained for a characteristic overdensity (\(\bar{\Delta}=3.7\) at redshift \(z=2.173\)). Different colors correspond to different simulations. At each redshift we find the characteristic overdensity, \(\Delta=\bar{\Delta}\), for which the \(T(\bar{\Delta})\)–\(log\langle|\kappa|\rangle\) relationship does not depend on the choice of a particular thermal history or \(\gamma\) parameter.

The results

The characteristic overdensities for the redshifts of our data points are reported in Table \ref{table:results}, while Figure \ref{fig:overdensity} shows the evolution of \(\bar{\Delta}\) as a function of redshift: as expected, with decreasing redshift the characteristic overdensity to which the Lyman-\(\alpha\) forest is sensitive increases. Note that Figure \ref{fig:overdensity} also shows that, while the addition of noise in the synthetic spectra for \(z\lesssim 2.2\) has the effect of decreasing the values of the characteristic overdensities, this tendency is inverted at higher redshifts, where the noise shifts the overdensities slightly towards higher values with respect to the noise-free results. Figure \ref{fig:overdensity} also presents a comparison between the overdensities found in this work and the ones obtained by (citation not found: Becker11) in their analysis with the addition of noise. Even if the two trends are similar, the difference in the values of the characteristic overdensities at each redshift is significant (\(\sim 25\%\) at \(z\sim 3\) and increasing towards lower redshift). Because we used the same set of thermal histories and a consistent method of analysis with respect to the previous work, the reason for this discrepancy lies in the different data samples: the effective optical depth observed in our sample is higher than the one recorded by (citation not found: Becker11) (see Section \ref{sec:Realtau}). As we have seen in Section \ref{sec:simcurv}, the simulated curvature is sensitive to the effective optical depth with which the synthetic spectra have been calibrated and this is reflected in the values of the characteristic overdensities. It is then reasonable that, for higher effective optical depths at a particular redshift, we observe lower overdensities: we are tracing a denser universe and the Lyman-\(\alpha\) forest will arise from overdensities closer to the mean density.

\label{fig:overdensity}Evolution as a function of redshift of the characteristic overdensities obtained in this work with the addition of noise in the synthetic spectra (green solid line) and with noise-free simulations (green dotted line). As expected, the characteristic overdensities traced by the Lyman-\(\alpha\) forest increase toward lower redshift. For comparison, the result for the characteristic overdensities from the previous work of (citation not found: Becker11) (black solid line) is also presented. Our overdensities are lower and this can be associated with the higher effective optical depth observed in our sample that was used to calibrate the simulations.
