Anthony Cazasnoves

Corresponding Author: [email protected]


Abstract

Data from computed tomography (CT) scanners are typically reconstructed with the gold-standard Feldkamp (FDK) algorithm \cite{Feldkamp_1984}. Its analytical formalism requires a large number of projections for a robust reconstruction and therefore does little to limit the dose delivered to the patient. Iterative methods, such as ART \cite{ART}, EM \cite{EM} and their derivatives, yield reliable reconstructions from reduced datasets, but at a large computational cost. This cost stems mainly from the regular voxel lattice used for sampling: high spatial resolution requires fine 3D grids and leads to an oversampling of large homogeneous regions. This translates into a larger number of unknowns to estimate (computational cost) and into large files to store the reconstruction (memory requirement). Graphics processing units (GPUs) reduce the computation time of iterative methods, but processing large volumes remains an issue because of the limited internal memory of the devices, and the memory footprint of the stored volume is unchanged.

To address this issue, representations enabling an adaptive sampling of the reconstruction volume have been investigated, notably multi-scale basis functions and in particular blobs \cite{wang2011image}. Their main drawbacks are their high computational cost and the complexity of their extension to the 3D case. In tomography, meshed representations are of particular interest owing to their ability to achieve a sampling that mirrors the structure of the object. Meshed 2D CT reconstruction was investigated in \cite{Brankov} and the 3D case in \cite{Sitek,Sitek2}. The approach of Brankov et al. \cite{Brankov} is of a sampling nature: a coarse pixel-based reconstruction is first performed and is used to place the mesh nodes adequately; a maximum-likelihood (ML) algorithm adapted to the mesh representation is then applied for reconstruction. The initial pixel reconstruction needed to build the triangulated representation of the object is the main limitation of this approach, both because analytical algorithms require a large number of projections and because it constitutes an additional step. In \cite{Sitek}, Sitek et al. introduce a method based on a refinement scheme: a first regular grid of tetrahedral cells is generated and several iterations of the EM algorithm are performed; tetrahedra associated with large errors are then split by adding a node at their centroids, and EM is run again. In \cite{Sitek2} the approach is of a coarsening type: starting from a fine grid of tetrahedra, the method alternates iterative reconstruction with the collapsing of cells belonging to homogeneous regions. In both cases, the remeshing operations are computationally costly and, since the number of nodes added at each iteration is fixed by the user, performance depends on the user's expertise. The framework of Buyens et al. \cite{Quinto} combines the ideas of the previous approaches. Reconstruction is first performed on a 2D grid of triangles; the result is interpolated onto a pixel grid, where a level-set method is used to re-sample the nodes and generate a better adapted mesh; values are then interpolated back from the pixel grid to the mesh, and the process goes through another iteration. The results show that convergence of the reconstruction is substantially improved when the mesh matches the structure of the considered object. The issue, once again, is that the representation adapts itself to the object only along with the tomographic reconstruction. Moreover, interpolating values from the tessellated representation to the pixel grid and back proves costly in terms of computation and may introduce imprecision in the reconstruction.
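For reference, and to make the per-iteration cost on a voxel grid explicit, the classical ART and MLEM updates can be written as follows (with $x$ the vector of voxel values, $p$ the measured projection data and $A = (a_{ij})$ the system matrix; these are the standard textbook forms, quoted here only as background):
\[
x^{(k+1)} = x^{(k)} + \lambda_k \, \frac{p_i - \langle a_i, x^{(k)} \rangle}{\| a_i \|^2} \, a_i
\qquad \text{(ART, one ray $i$ per sub-iteration)},
\]
\[
x_j^{(k+1)} = \frac{x_j^{(k)}}{\sum_i a_{ij}} \sum_i a_{ij} \, \frac{p_i}{\sum_l a_{il} \, x_l^{(k)}}
\qquad \text{(MLEM)}.
\]
Each iteration thus involves forward and back projections over every voxel of the lattice, which is what motivates the search for sparser, structure-adapted representations.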
In this work, we build an adapted mesh prior to any reconstruction step by directly exploiting the acquired data, which makes fast 3D CBCT reconstructions achievable. Creating such a mesh requires knowledge of the location of the 3D interfaces that constitute the structure of the object. First, we exploit the 2D interfaces visible in the acquired data, which arise from the 3D ones, by performing edge detection on the acquired projections. Second, the 2D structural information is merged in 3D within the statistical framework of hypothesis testing.

This paper is organized as follows. Section \ref{Method} is devoted to the merging of the structural information and to the positioning of the mesh nodes as a point cloud. Section \ref{Results} shows the results of the complete method applied to numerical data. Conclusions and perspectives of this work are discussed in Section \ref{Conclusion}.
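As an illustration of the first step of the pipeline, a minimal sketch of per-projection edge detection is given below. It assumes the acquired data are available as a stack of 2D radiographs and uses a Canny detector from scikit-image as one possible choice of edge detector; the function name and array layout are purely illustrative.
\begin{verbatim}
import numpy as np
from skimage.feature import canny

def detect_projection_edges(projections, sigma=2.0):
    """Detect 2D interfaces in each acquired projection.

    `projections` is assumed to be a float array of shape
    (n_views, n_rows, n_cols) holding the radiographs; the
    returned boolean array flags detected edge pixels, to be
    merged in 3D in a subsequent step.
    """
    edges = np.zeros(projections.shape, dtype=bool)
    for k in range(projections.shape[0]):
        edges[k] = canny(projections[k], sigma=sigma)
    return edges
\end{verbatim}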