Variational attenuation correction in two-view confocal microscopy
- Thorsten Schmidt^{1},
- Jasmin Dürr^{2},
- Margret Keuper^{1},
- Thomas Blein^{2, 3},
- Klaus Palme^{2, 4, 5, 6} and
- Olaf Ronneberger^{1, 4}
DOI: 10.1186/1471-2105-14-366
© Schmidt et al.; licensee BioMed Central Ltd. 2013
Received: 22 March 2013
Accepted: 29 November 2013
Published: 18 December 2013
Abstract
Background
Absorption- and refraction-induced signal attenuation can seriously hinder the extraction of quantitative information from confocal microscopic data. This signal attenuation can be estimated and corrected by algorithms that use physical image formation models. Especially in thick heterogeneous samples, current single-view models are unable to solve the underdetermined problem of estimating the attenuation-free intensities.
Results
We present a variational approach to estimate both the real intensities and the spatially variant attenuation from two views of the same sample recorded from opposite sides. Assuming noise-free measurements throughout the whole volume and pure absorption, this would in theory allow a perfect reconstruction without further assumptions. To cope with real-world data, our approach respects photon noise, estimates apparent bleaching between the two recordings, and constrains the attenuation field to be smooth and sparse to avoid spurious attenuation estimates in regions lacking valid measurements.
Conclusions
We quantify the reconstruction quality on simulated data and compare it to the state-of-the-art two-view approach and to commonly used one-factor-per-slice approaches like the exponential decay model. Additionally, we show its real-world applicability on model organisms from zoology (zebrafish) and botany (Arabidopsis). The results from these experiments show that the proposed approach improves the quantification of confocal microscopic data of thick specimens.
Keywords
Attenuation correction, Absorption, Confocal microscopy, Image restoration, Calculus of variations
Background
Confocal microscopy has become a standard technique to record and localize fluorescent marker molecules within the 3-D context of organs and whole organisms at sub-cellular resolution. The confocal principle minimizes the blur introduced by the point spread function of the optics. However, signal degradations introduced by scattering and absorption within the inhomogeneous tissue still hamper many automatic image analysis steps like detection, registration, segmentation, or co-localization.
Light attenuation is a result of photon loss along the excitation and emission light paths. Photons get lost due to absorption, where the photons are converted to thermal energy, or due to scattering, where the photons leave the ray passing through the pinhole. Both effects result in a multiplicative reduction of the number of photons by a local tissue-specific factor and can therefore be modeled by the Beer-Lambert law. The opposite effect, an intensity increase, is caused by scattered photons that hit the pinhole by chance. In most tissues this second effect is small compared to the photon loss, and its exact simulation would require an immense computational effort. Throughout this article we therefore model only photon loss, using attenuation coefficients that account for both local absorption and scattering.
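As a minimal illustration of the Beer-Lambert model (a sketch under idealized assumptions, not the implementation used in this work), the following Python snippet attenuates a signal along a discretized ray given per-voxel attenuation coefficients:

```python
import numpy as np

def beer_lambert_transmission(alpha_along_ray, step_length):
    """Transmission factor exp(-integral of alpha) along a sampled ray.

    alpha_along_ray : 1-D array of attenuation coefficients (per length unit)
    step_length     : spacing between consecutive ray samples (length units)
    """
    # Approximate the line integral of alpha by a Riemann sum.
    optical_depth = np.sum(alpha_along_ray) * step_length
    return np.exp(-optical_depth)

# Hypothetical example: a fluorophore observed through 100 voxels of tissue
alpha = np.full(100, 0.005)   # assumed attenuation coefficient per voxel
measured = 1000.0 * beer_lambert_transmission(alpha, step_length=1.0)
print(measured)               # ~606.5, i.e. roughly 39% of the signal is lost
```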
Attenuation correction requires estimating two quantities at each recording position: the local attenuation coefficient and the true underlying intensity. Solving for both quantities without further assumptions would require two noise-free measurements per recording position. However, in most real-world applications only sparse measurements at the fluorescently marked structures are available (especially when imaging whole organs or organisms). Additionally, the measured signal is distorted by Poisson distributed photon noise and Gaussian distributed read-out noise.
Single view approaches try to estimate both quantities from one recording that provides only one measurement per recording position. This requires strong prior assumptions to constrain the solution space. A common approach is to assume that the attenuation is dominated by aberrations introduced by a mismatch between immersion and embedding media [1, 2]. In the resulting models, local attenuation effects are neglected, or constant absorption throughout the cuboid-shaped recording volume is assumed, resulting in an exponential decay with imaging depth [3]. Other approaches estimate the attenuation from the per-slice intensity statistics; the overall intensity distribution is adapted towards a reference, maximizing the overall coherence [4, 5].
The multi-view approach can be applied to a wide range of data from biology and medicine. To underline this claim, we reconstruct recordings of 500 μm thick zebrafish embryos and of Arabidopsis root tips. Two-view recording of tissue sections was already demonstrated in [6].
Contributions
Methods
Image formation model
where $\alpha :{\mathbb{R}}^{3}\supset \Omega \to {\mathbb{R}}_{\ge 0}$ are the spatially variant attenuation coefficients. s_{ i }:S→{0,1} are the cone sensitivity functions for both directions defined over the unit sphere S. s_{ i }(r) is one for all ray directions within the cone and zero otherwise. $I:\Omega \to {\mathbb{R}}_{\ge 0}$ denotes the attenuation-free intensities. The factors β_{ i }∈ℝ_{+} can be used to additionally scale all intensities of the recordings. We use β_{2} to model photo-bleaching-induced signal attenuation in the second recording. Only the focused beam leads to significant bleaching, since the excitation energy drops quadratically with the distance to the focal point. The assumption of constant bleaching for the whole volume is a zero-order approximation of the true bleaching function, which is non-linear and specific to the marker molecules used. We fix β_{1}:=1 and optimize β_{2} alongside the real intensities I and the attenuation coefficients α, as described in the following section.
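The cone transmission entering the image formation model can be illustrated by a simplified Python sketch that averages Beer-Lambert transmissions over sampled ray directions within the cone. This is only a rough, nearest-neighbour illustration with uniform ray weighting (an assumption for this sketch); the actual cone integration used in this work follows [7].

```python
import numpy as np

def cone_transmission(alpha, start, directions, step, n_steps):
    """Mean Beer-Lambert transmission over sampled ray directions in a cone.

    alpha      : 3-D array of attenuation coefficients
    start      : voxel position (z, y, x) of the focal point
    directions : list of unit vectors inside the cone (uniform weighting
                 assumed here for simplicity)
    """
    transmissions = []
    for r in directions:
        # Sample points along the ray and read alpha by nearest neighbour.
        pts = start + np.outer(np.arange(n_steps) * step, r)
        idx = np.clip(np.round(pts).astype(int), 0, np.array(alpha.shape) - 1)
        line_integral = alpha[idx[:, 0], idx[:, 1], idx[:, 2]].sum() * step
        transmissions.append(np.exp(-line_integral))
    return np.mean(transmissions)
```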
Energy formulation
We want to maximize the posterior probability for the attenuation coefficients α, the attenuation-free intensities I, and the factor β_{2} given the observed intensities of the two recorded data volumes I_{1} and I_{2}.
The prior P(I_{1},I_{2}) of the recorded images is independent of the attenuation, the true intensities, and the bleaching, and can therefore be dropped from the equation.
where $\Omega \subset {\mathbb{R}}^{3}$ is the recorded volume, λ,μ∈ℝ_{≥0} are weighting factors, and ε_{sp}∈ℝ_{+} is a small constant added for reasons of numerical stability. The loss function $\psi :\mathbb{R}\to \mathbb{R}$ is either the identity function, leading to quadratic (Tikhonov-Miller, TM) regularization, or approximates total variation (TV) regularization when set to ${\psi}_{\text{TV}}\left({x}^{2}\right):=\sqrt{{x}^{2}+{\epsilon}_{\text{TV}}^{2}}$ with ε_{TV}∈ℝ_{+} being a small constant. This approximation is closely related to the Huber norm. During the experiments we set ε_{sp}=ε_{TV}=10^{−10}.
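For reference, the two loss functions and their derivatives (which reappear in the energy gradients below) can be written compactly; the following Python sketch uses the value ε_TV = 10^{−10} from the experiments:

```python
import numpy as np

EPS_TV = 1e-10  # value used in the experiments

def psi_tm(x_sq):
    """Quadratic (Tikhonov-Miller) loss: identity on the squared argument."""
    return x_sq

def psi_tv(x_sq):
    """Smooth total-variation approximation (Huber-like)."""
    return np.sqrt(x_sq + EPS_TV ** 2)

def dpsi_tm(x_sq):
    """Derivative of the TM loss with respect to its squared argument."""
    return np.ones_like(x_sq)

def dpsi_tv(x_sq):
    """Derivative of the TV approximation with respect to its squared argument."""
    return 1.0 / (2.0 * np.sqrt(x_sq + EPS_TV ** 2))
```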
where Ω^{′} is the discretized recorded volume.
Note that for m=0 and σ=1 the new model coincides with the pure Gaussian model presented in [7]. The actual value of the Poisson scaling m (the number of collected photons per intensity level) and the standard deviation σ of the Gaussian noise can be estimated during a microscope calibration phase. If they are unknown, one of them can be fixed to an arbitrary value (we always fixed σ=1), and the other one can be adjusted to qualitatively obtain the optimum result. If additional sample information is available, e.g. if the recordings consist of large homogeneous regions of different intensities, one can also try to estimate the parameters from the images themselves, as done in [10]. However, for biological samples this is rarely the case.
where T_{ r } is the attenuation along the ray with direction r, C_{ i } is the cone transmission for recording direction i, F_{ i } are the simulated intensities, and D_{ i } are the differences between the recordings and simulations. Variables in square brackets indicate dependencies on the corresponding optimization variables.
For the optimization we employ the limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm with box constraints on the variables (L-BFGS-B for short) [11]. The solver minimizes the energy while respecting the positivity of the attenuations throughout the iterative optimization. L-BFGS-B implements a quasi-Newton method; therefore, we need to provide the derivatives of the energy with respect to the unknown intensities I, the attenuation coefficients α, and the bleaching factor β_{2}. These are given by
where the derivative of the loss function is ${\psi}_{\text{TM}}^{\prime}\left({x}^{2}\right)=1$ for the TM regularization and ${\psi}_{\text{TV}}^{\prime}\left({x}^{2}\right)=\frac{1}{2\sqrt{{x}^{2}+{\epsilon}_{\text{TV}}^{2}}}$ for the TV approximation. The detailed derivations are given in Additional file 1.
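The following Python sketch shows how such a bound-constrained quasi-Newton optimization can be set up with SciPy, whose 'L-BFGS-B' method is based on the same algorithm [11]. The function `energy_and_gradient` and the flat variable layout are hypothetical placeholders, not the C++ implementation described below.

```python
import numpy as np
from scipy.optimize import minimize

def run_lbfgsb(energy_and_gradient, x0, n_alpha, max_iter=50):
    """Minimize an energy with analytic gradient under box constraints.

    energy_and_gradient : callable returning (E, dE/dx) for the flattened
                          vector of unknowns (placeholder for this sketch)
    n_alpha             : number of leading entries holding attenuation
                          coefficients, constrained to be non-negative
    """
    bounds = [(0.0, None)] * n_alpha + [(None, None)] * (x0.size - n_alpha)
    result = minimize(energy_and_gradient, x0, jac=True, method='L-BFGS-B',
                      bounds=bounds, options={'maxiter': max_iter})
    return result.x
```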
Implementation
The variational attenuation correction was implemented in C++ and run under Linux (Ubuntu 12.04) on an Intel Xeon E5-2680 (2.7 GHz) dual-processor system. For the optimization we used the publicly available FORTRAN implementation of the L-BFGS-B optimizer [11]. One iteration on data sub-sampled to 80×80×80 voxels needed on average 1.8 seconds, so a full reconstruction can be computed within a few minutes. The computational complexity of each iteration scales linearly with the number of voxels to process (Additional file 1: Figure S3). The memory complexity also scales linearly with the raw data volume. Both quantities can be limited by sub-sampling the high resolution raw data. This has two advantages: first, fewer computational resources are needed, and second, the weighted averaging during the sub-sampling already considerably reduces the image noise. The cone transmission is computed in parallel for all ray directions, leading to a significant speed-up of the confocal microscope simulation. Depending on the non-deterministic computation order introduced by the thread scheduling, the results can deviate slightly from the numbers reported in the Results section. For real-world data we observed deviations of the estimated intensities of up to 3% after convergence of the algorithm. However, these differences are not visually recognizable.
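A simple stand-in for the sub-sampling step mentioned above is block averaging (a sketch; the actual implementation uses weighted averaging):

```python
import numpy as np

def block_mean_subsample(vol, factor):
    """Downsample a 3-D volume by averaging non-overlapping blocks.

    Averaging factor**3 voxels reduces uncorrelated noise by roughly a
    factor of factor**1.5 in addition to shrinking the problem size.
    """
    z, y, x = (s - s % factor for s in vol.shape)  # crop to block multiples
    v = vol[:z, :y, :x].reshape(z // factor, factor,
                                y // factor, factor,
                                x // factor, factor)
    return v.mean(axis=(1, 3, 5))
```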
Discrete derivative and integral computation
The gradients needed in the derivatives of the TM smoothness term and in the sparsity term are computed using central differences. For TV regularization we extended the numerical differentiation scheme from [12] to 3-D to obtain the divergence term in the derivative of the smoothness term. The corresponding equations are given in Additional file 1. The cone integrals are approximated as in [7] with a ray spacing of six degrees. This approximation of the cone integrals requires high regularization to lead to good reconstructions. We also did experiments with an alternative ray integration scheme that uses thin rays instead of the incrementally widening conic rays of [7]. To still capture all attenuations, the ray sampling was increased so that the cone is sampled densely at the largest cone diameter with respect to the volume grid. The result is given in Additional file 1 and shows that the energy formulation leads to the desired solution, but that the numerical approximations have a crucial influence on the resulting reconstruction.
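A minimal Python sketch of the 3-D central-difference gradient (with replicated boundary values as a simple boundary handling) illustrates the scheme; it is not the C++ code used here:

```python
import numpy as np

def central_gradient(vol, spacing=(1.0, 1.0, 1.0)):
    """3-D gradient of a volume by central differences.

    Boundary values are replicated before differencing, so one-sided
    differences are effectively used at the volume borders.
    """
    padded = np.pad(vol, 1, mode='edge')
    components = []
    for axis, h in enumerate(spacing):
        forward = np.roll(padded, -1, axis=axis)   # value at x + h
        backward = np.roll(padded, 1, axis=axis)   # value at x - h
        g = (forward - backward)[1:-1, 1:-1, 1:-1] / (2.0 * h)
        components.append(g)
    return np.stack(components)   # shape (3, Z, Y, X)
```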
Data generation
Synthetic data
Zebrafish
To show that the approach also copes well with real-world data, we tested it on samples from the ViBE-Z database consisting of confocal recordings of whole zebrafish (Danio rerio) embryos, which were fixed 72 h after fertilization. Sample preparation, recording setup and image preprocessing are described in detail in [7]. The processing was performed on sub-sampled data with isotropic voxel extents of 8 μm.
Arabidopsis thaliana
Finally, we tested the approach on recordings of the root tip of the model plant Arabidopsis thaliana. The samples were fixed 96 h after germination and the cell membranes were marked with an Alexa antibody stain. They were then embedded in SlowFade Gold Antifade (Invitrogen) and recorded from top and bottom using a confocal microscope equipped with a 40× oil-immersion objective. We applied the elastic registration algorithm from [7] to register the two views to each other, and performed a background subtraction prior to applying the attenuation correction. The embedding medium had a refractive index of 1.42, compared to a refractive index of 1.52 of the immersion oil for which the lens was adjusted. The attenuation correction was performed on sub-sampled data with isotropic voxel extents of 2 μm.
Parameter setup
We want all terms in the energy to have approximately the same influence on the optimization process. This leads to rough rules of thumb for the selection of λ and μ. Since all terms integrate over the whole image domain, the choice is independent of the number of voxels. The energy contribution of the data term is in the order of the squared expected intensity differences between recording and simulation, divided by the Poisson weights. The smoothness term's contribution is in the order of the magnitude of the expected attenuation gradient (TV) or its square (TM). Finally, the sparsity term's contribution is in the order of the expected attenuations. For example, for intensity data with pure Gaussian noise, an expected residual intensity difference of 5 (corresponding to the average noise intensity), expected attenuation coefficients of 0.005, and gradient magnitudes of 0.0005, initial choices of $\lambda =\frac{5^{2}}{0.0005^{2}}=4\cdot 10^{8}$ and $\mu =\frac{5^{2}}{0.005}=5000$ (TM), resp. $\lambda =\frac{5^{2}}{0.0005}=5\cdot 10^{4}$ and μ=5000 (TV) are appropriate. The approximate estimates for the expected attenuations and their gradients were empirically confirmed on real-world samples. For higher Poisson weighting m the factors have to be decreased accordingly. The optimal values depend on the image content and should be optimized for specific types of data.
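To make the rule of thumb concrete, a small helper can compute initial weights from the expected quantities; this is a sketch that assumes pure Gaussian noise (unit data-term weights), as in the example above:

```python
def rule_of_thumb_weights(expected_residual, expected_alpha,
                          expected_grad_alpha, regularization='TV'):
    """Initial lambda and mu so that the data, smoothness and sparsity
    terms contribute comparably (pure Gaussian noise assumed)."""
    data = expected_residual ** 2
    if regularization == 'TM':
        lam = data / expected_grad_alpha ** 2   # squared gradient magnitude
    else:
        lam = data / expected_grad_alpha        # TV: gradient magnitude
    mu = data / expected_alpha
    return lam, mu

# Example values from the text (TV case): residual 5, attenuation 0.005,
# gradient magnitude 0.0005  ->  lambda = 5e4, mu = 5e3
print(rule_of_thumb_weights(5.0, 0.005, 0.0005, 'TV'))
```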
For the synthetic phantoms we chose, for each fixed m, the optimal λ and μ which minimize the root mean squared error (rmse) between the true and the estimated intensities. The parameters were determined empirically with an exponential grid search over a parameter range of λ∈{0,10^{0},…,10^{9}} and μ∈{0,10^{2},10^{3},10^{4}} for the textured sphere phantom, and λ∈{0,10^{6},…,10^{12}} and μ∈{0,10^{3},…,10^{8}} for the Shepp-Logan phantom. For all experiments we set σ:=1. For the real-world data we used a conservative parameter set of λ=10^{7}, μ=0 and m=0.1 (Arabidopsis) or λ=10^{8}, μ=10^{4} and m=0 (zebrafish) for all experiments with TM regularization. For the zebrafish experiments with TV regularization we set λ=5·10^{4}, μ=0, and m=0. For the real-world data we stopped the iterative process when the visually optimal reconstruction of the intensities was reached, which was after 3 to 20 iterations. For the textured sphere phantom data we set a maximum of 50 iterations for TM regularization and ran the algorithm to convergence for TV regularization. All results reported for the Shepp-Logan phantom were obtained at convergence of the algorithm.
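The exponential grid search used for the phantoms can be sketched as follows; `reconstruct` and `rmse` are hypothetical wrappers around the attenuation correction and the error computation against the known ground truth:

```python
import numpy as np
from itertools import product

def grid_search(reconstruct, rmse, lambdas, mus):
    """Return the (lambda, mu) pair minimizing the intensity RMSE."""
    best_params, best_err = None, np.inf
    for lam, mu in product(lambdas, mus):
        err = rmse(reconstruct(lam, mu))
        if err < best_err:
            best_params, best_err = (lam, mu), err
    return best_params, best_err

# Parameter ranges used for the textured sphere phantom:
lambdas = [0.0] + [10.0 ** k for k in range(0, 10)]   # {0, 1e0, ..., 1e9}
mus = [0.0] + [10.0 ** k for k in range(2, 5)]        # {0, 1e2, 1e3, 1e4}
```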
Results and discussion
Figure 2 summarizes the influence of the different extensions to the original model from [7]. If no prior knowledge about the attenuations is introduced (Figure 2 (c)), the approach is already able to reasonably reconstruct the original intensities. However, the attenuation field is coarse and cannot be applied to the reconstruction of secondary channels. With regularization (Figure 2 (d) and (e)) the attenuation field is much smoother, but especially with Tikhonov-Miller regularization strong spurious attenuations outside the sample are estimated. Application of the sparsity term reduces these attenuation estimates (Figure 2 (f) and (g)). The residual apparent “bleeding” of the attenuation coefficients below the sample is the effect of different mean intensities in the top and bottom recordings, as introduced e.g. by photo bleaching. When this factor is additionally estimated during the optimization, the lower boundary becomes much clearer.
A detailed evaluation of the proposed model on the data sets described in the Methods section and a comparison to [7] are given in the remainder of this section.
Synthetic data
Results for the textured sphere phantom (+ noise)
Method | m | λ | μ | E | nIter | β | rms_{ I } | rms_{ α } |
---|---|---|---|---|---|---|---|---|
Ref. [7] | 0 | 0 | 0 | 37.8917 | 19 | 1.0 | 849.58 | 0.0032 |
proposed (TM) (β_{2}:=1) | 0.05 | 10^{6} | 0 | 20.6261 | 50 | 1.0 | 319.80 | 0.0067 |
proposed (TM) | 0 | 10^{9} | 10^{2} | 20.7742 | 50 | 0.80 | 177.79 | 0.0006 |
proposed (TM) | 0.02 | 10^{9} | 10^{4} | 20.3511 | 50 | 0.80 | 154.72 | 0.0006 |
proposed (TM) | 0.05 | 10^{8} | 10^{3} | 12.7709 | 50 | 0.80 | 117.31 | 0.0004 |
proposed (TM) | 0.07 | 10^{8} | 10^{3} | 10.8522 | 50 | 0.80 | 127.17 | 0.0004 |
proposed (TM) | 0.1 | 10^{6} | 10^{3} | 8.2228 | 50 | 0.80 | 130.29 | 0.0005 |
proposed (TV) | 0 | 5·10^{3} | 0 | 13.4404 | 89 | 0.80 | 156.14 | 0.0033 |
proposed (TV) | 0.02 | 10^{4} | 10^{3} | 13.0192 | 104 | 0.80 | 185.30 | 0.0028 |
proposed (TV) | 0.05 | 5·10^{4} | 10^{3} | 12.188 | 68 | 0.80 | 116.77 | 0.0008 |
proposed (TV) | 0.07 | 5·10^{4} | 10^{3} | 10.3538 | 79 | 0.80 | 115.98 | 0.0007 |
proposed (TV) | 0.1 | 5·10^{4} | 10^{2} | 8.1756 | 127 | 0.80 | 119.06 | 0.0007 |
Results for the Shepp-Logan phantom (+ noise)
Method | m | λ | μ | E | nIter | β | rms_{ I } | rms_{ α } |
---|---|---|---|---|---|---|---|---|
Ref. [7] | 0 | 0 | 0 | 84.2003 | 88 | 1.0 | 1138.68 | 0.0021 |
proposed (TM) (β_{2}:=1) | 0.05 | 10^{10} | 10^{6} | 72.1663 | 117 | 1.0 | 822.27 | 0.0011 |
proposed (TM) | 0 | 10^{6} | 10^{7} | 152.712 | 770 | 0.80 | 639.52 | 0.0032 |
proposed (TM) | 0.02 | 10^{8} | 10^{7} | 148.08 | 268 | 0.80 | 518.18 | 0.0019 |
proposed (TM) | 0.05 | 10^{9} | 10^{6} | 68.2854 | 66 | 0.80 | 487.45 | 0.0010 |
proposed (TM) | 0.07 | 10^{9} | 10^{6} | 63.4456 | 48 | 0.80 | 449.03 | 0.0008 |
proposed (TM) | 0.1 | 10^{8} | 10^{6} | 57.677 | 65 | 0.80 | 490.76 | 0.0010 |
proposed (TV) | 0 | 5·10^{6} | 10^{7} | 166.514 | 94 | 0.80 | 466.14 | 0.0008 |
proposed (TV) | 0.02 | 5·10^{5} | 10^{7} | 151.261 | 75 | 0.80 | 470.12 | 0.0009 |
proposed (TV) | 0.05 | 10^{5} | 10^{6} | 67.4981 | 88 | 0.80 | 567.72 | 0.0013 |
proposed (TV) | 0.07 | 10^{5} | 10^{6} | 63.3558 | 38 | 0.80 | 439.59 | 0.0008 |
proposed (TV) | 0.1 | 10^{5} | 10^{6} | 58.1152 | 40 | 0.80 | 478.66 | 0.0008 |
For the synthetic data, the new model clearly outperforms the baseline from [7], even for sub-optimal choices of the Poisson weight m. The increase in performance is clearer for the textured phantom, in which our approximation to the real noise is less affected by suboptimal mean value estimates in low-intensity regions, but even for the Shepp-Logan phantom the reconstruction quality is increased by almost a factor of three. The noise model and the bleaching factor β_{2} both affect the reconstruction significantly. The sparsity term also plays an important role for the reconstruction in two ways: first, it avoids high attenuation estimates in regions with insufficient information; second, it suppresses errors introduced by the discrete numerical approximations.
Zebrafish
Arabidopsis thaliana
Limitations of the approach
The exponential decay model along a ray is only strictly valid for pure absorption. In most cases local random refractions can also be described by this model. However, in areas with clearly structured refraction, such as the eyes of the zebrafish, where the light is actively focused, the model is violated and localized errors in the attenuation estimates are introduced. We minimize the influence of these errors with high regularization; however, a better modeling of refraction would be a desirable, though practically very challenging, extension.
Another source of error is the limited recording volume. Samples exceeding this volume introduce the problem of sensibly guessing the attenuation that the rays pass through outside the volume before entering it. Boundary effects can lead to solutions with low energies that are qualitatively far from the optimum, especially when performing many iterations. In our image formation model we assume zero attenuation outside the volume (natural boundary conditions), while for the regularization we assume Neumann boundary conditions. If possible, the recording volume should be increased to contain more background in cases of boundary problems. If this is not possible, the TV regularization with its sharp boundaries is preferable to the TM regularization. Additionally, a high weight on the sparsity term alleviates effects that lead to extreme attenuation estimates, as can happen when attenuation outside the volume is explained by a thin, highly absorbing region at the image boundary. An alternative that leads to visually good, but energetically suboptimal, results is to restrict the number of iterations (fewer than ten iterations usually lead to qualitatively good results). This has the additional advantage of very low computation times.
Conclusions
We could significantly improve the results of the variational attenuation correction presented in [7] by additionally modeling photo bleaching and by replacing the ad-hoc Gaussian noise assumption with the (approximate) Poisson-Gaussian statistics. The choice of the loss function in the smoothness term allows choosing between smoothly varying (TM) or piecewise constant (TV) attenuation fields. The appropriate regularization is application dependent. In our case both regularization strategies lead to equally plausible results in the rather inhomogeneous biological samples analyzed. TV regularization is more stable in practice because the attenuation is much better localized, and therefore fewer boundary artifacts, which may lead to convergence to undesired solutions, are introduced. For both regularizations the sparsity term also actively avoids boundary errors by keeping the attenuation field compact. However, high sparsity weights lead to an underestimation of the attenuation volume and should be avoided. Instead, an earlier manual termination of the iterative process leads to very good results without introducing this side effect.
We showed the efficacy of the presented method on highly complex real-world examples, where it was able to significantly increase the homogeneity of the measured signal and attenuation fields. This is crucial if the attenuation field is used to correct secondary channels containing sparse structures within the anatomy. Based on these findings, we conclude that the presented attenuation correction approach is an important step towards the quantification of confocal microscopic data.
Declarations
Acknowledgements
We thank the members of our teams for helpful comments on the manuscript. We also gratefully acknowledge the excellent technical support of Roland Nitschke (Life Imaging Centre, ZBSA, Freiburg). We thank Meta Rath, Alida Filippi and Wolfgang Driever for providing the zebrafish recordings. Finally, we gratefully acknowledge EMBO for the long-term postdoctoral fellowship at the University of Freiburg awarded to TB. This work was supported by the German Research Foundation (DFG), the Excellence Initiative of the German Federal and State Governments (EXC 294) and the Bundesministerium für Bildung und Forschung (BMBF) (Fkz. 031 56 90 A). The article processing charge was funded by the DFG and the Albert Ludwigs University Freiburg in the funding program Open Access Publishing.
Authors’ Affiliations
References
- Egner A, Hell SW: Aberrations in confocal and multi-photon fluorescence microscopy induced by refractive index mismatch. Handbook of Biological Confocal Microscopy. Edited by: Pawley JB. 2006, New York: Springer, 404-413.
- Booth M, Neil M, Wilson T: Aberration correction for confocal imaging in refractive-index-mismatched media. J Microsc. 1998, 192 (2): 90-98. 10.1046/j.1365-2818.1998.00412.x.
- Guan YQ, Cai YY, Zhang X, Lee YT, Opas M: Adaptive correction technique for 3D reconstruction of fluorescence microscopy images. Microsc Res Tech. 2008, 71 (2): 146-157. 10.1002/jemt.20536.
- Čapek M, Janáček J, Kubínová L: Methods for compensation of the light attenuation with depth of images captured by a confocal microscope. Microsc Res Tech. 2006, 69 (8): 624-635. 10.1002/jemt.20330.
- Stanciu SG, Stanciu GA, Coltuc D: Automated compensation of light attenuation in confocal microscopy by exact histogram specification. Microsc Res Tech. 2010, 73: 165-175. 10.1002/jemt.20767.
- Can A, Al-Kofahi O, Lasek S, Szarowski DH, Turner JN, Roysam B: Attenuation correction in confocal laser microscopes: a novel two-view approach. J Microsc. 2003, 211: 67-79. 10.1046/j.1365-2818.2003.01195.x.
- Ronneberger O, Liu K, Rath M, Rueß D, Mueller T, Skibbe H, Drayer B, Schmidt T, Filippi A, Nitschke R, Brox T, Burkhardt H, Driever W: ViBE-Z: a framework for 3D virtual colocalization analysis in zebrafish larval brains. Nat Methods. 2012, 9 (7): 735-742. 10.1038/nmeth.2076.
- Visser T, Groen F, Brakenhoff G: Absorption and scattering correction in fluorescence confocal microscopy. J Microsc. 1991, 163 (2): 189-200. 10.1111/j.1365-2818.1991.tb03171.x.
- Schmidt T, Dürr J, Keuper M, Blein T, Palme K, Ronneberger O: Variational attenuation correction of two-view confocal microscopic recordings. IEEE International Symposium on Biomedical Imaging (ISBI). 2013, San Francisco, CA, USA. Piscataway: IEEE, 169-172.
- Foi A, Trimeche M, Katkovnik V, Egiazarian K: Practical Poissonian-Gaussian noise modeling and fitting for single-image raw-data. IEEE Trans Image Process. 2008, 17 (10): 1737-1754. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4623175&tag=1.
- Zhu C, Byrd RH, Lu P, Nocedal J: Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound-constrained optimization. ACM Trans Math Softw. 1997, 23 (4): 550-560. 10.1145/279232.279236.
- Brox T: Von Pixeln zu Regionen: Partielle Differenzialgleichungen in der Bildanalyse. PhD thesis, Universität des Saarlandes, Saarbrücken, 2005. http://lmb.informatik.uni-freiburg.de/people/brox/pub/brox_PhDThesis.pdf.
- Shepp LA, Logan BF: The Fourier reconstruction of a head section. IEEE Trans Nucl Sci. 1974, 21 (3): 21-43. http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6499235&tag=1.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.