- Proceedings
- Open Access
High-performance blob-based iterative three-dimensional reconstruction in electron tomography using multi-GPUs
- Xiaohua Wan^{1, 2},
- Fa Zhang^{1},
- Qi Chu^{1, 2} and
- Zhiyong Liu^{1}
https://doi.org/10.1186/1471-2105-13-S10-S4
© Wan et al; licensee BioMed Central Ltd. 2012
- Published: 25 June 2012
Abstract
Background
Three-dimensional (3D) reconstruction in electron tomography (ET) has emerged as a leading technique to elucidate the molecular structures of complex biological specimens. Blob-based iterative methods are advantageous for 3D reconstruction in ET, but demand huge computational costs. Multiple graphics processing units (multi-GPUs) offer an affordable platform to meet these demands. However, a synchronous communication scheme between multi-GPUs leads to idle GPU time, and the weight matrix involved in iterative methods cannot be loaded into GPU memory, especially for large images, due to the limited available memory of GPUs.
Results
In this paper we propose a multilevel parallel strategy combined with an asynchronous communication scheme and a blob-ELLR data structure to efficiently perform blob-based iterative reconstructions on multi-GPUs. The asynchronous communication scheme is used to minimize the idle GPU time by asynchronously overlapping communications with computations. The blob-ELLR data structure needs only about 1/16 of the storage space of the ELLPACK-R (ELLR) data structure and yields significant acceleration.
Conclusions
Experimental results indicate that the multilevel parallel scheme combined with the asynchronous communication scheme and the blob-ELLR data structure allows efficient implementations of 3D reconstruction in ET on multi-GPUs.
Keywords
- Electron Tomography
- Projection Image
- Compute Unified Device Architecture
- Memory Copy
- Simultaneous Iterative Reconstruction Technique
Background
Electron tomography (ET) combines electron microscopy (EM) and tomographic imaging to elucidate three-dimensional (3D) descriptions of complex biological structures at molecular resolution [1]. In ET, a series of projection images is taken with an electron microscope from a single biological sample at different orientations around a single tilt axis [2]. From those projection images, the 3D structure of the sample can be obtained by means of tomographic reconstruction algorithms [3]. Weighted backprojection (WBP) is a standard reconstruction method in the field of 3D reconstruction in ET, due to its algorithmic simplicity and computational efficiency [4]. The major disadvantage of WBP, however, is that the results may be strongly affected by limited-angle data and noisy conditions [5]. Iterative methods, such as the Simultaneous Iterative Reconstruction Technique (SIRT), are one of the main alternatives to WBP in 3D reconstruction in ET, owing to their good performance in handling incomplete, noisy data [6, 7]. Furthermore, blob-based iterative methods perform better than voxel-based ones under incomplete and noisy conditions [5]. However, they have not been used extensively due to their high computational cost [8]. Moreover, the need for high resolution forces ET of complex biological specimens to use large projection images, which in turn yields large reconstructed volumes and requires extensive computational resources and considerable processing time [8, 9].
3D reconstruction in ET demands huge computational costs and resources that derive from the computational complexity of the reconstruction algorithms and from the size and number of the projection images involved. Traditionally, high-performance computing [8] has been used to address such computational requirements by means of parallel computing on supercomputers [9], large computer clusters [10] and multicore computers [11]. Recently, graphics processing units (GPUs) have offered an attractive alternative platform to cope with the demands of ET, in terms of high peak performance, cost effectiveness, and the availability of user-friendly programming environments, e.g. NVIDIA CUDA [12, 13]. Several advanced GPU acceleration frameworks have been proposed that allow 3D-ET reconstruction to be performed on the order of minutes [14, 15]. However, these parallel reconstructions on GPUs only adopt traditional voxel basis functions, which are less robust than blob basis functions under noisy conditions. Some previous work focuses on blob-based iterative reconstruction on a single GPU, which is still time-consuming [16, 17]. A single GPU cannot meet the computational and memory requirements of 3D-ET reconstruction as the size of the projection images grows constantly (typically 2 k × 2 k or even 4 k × 4 k). An architecture in which one CPU is serviced by multiple GPUs is an efficient solution for parallel 3D-ET reconstruction, since it increases both the available computational power and the available memory.
Achieving efficient blob-based iterative reconstructions on multi-GPUs is challenging: because of the overlapping nature of blobs, using blobs as basis functions requires communication between the GPUs during the iterative reconstruction process. CUDA provides a synchronous communication scheme to handle the communication between GPUs [18]. The downside of synchronous communication is that each GPU must stop and sit idle while data are exchanged. This idle time wastes resources and degrades the performance of reconstructions on multi-GPUs. Furthermore, as data collection strategies and electron detectors improve, the sparse weight matrix involved in blob-based iterative reconstruction methods needs ever larger memory storage. Due to the limited available memory, it is infeasible to store such a large sparse matrix on most GPUs. Computing the weight matrix on the fly was more efficient than storing the matrix in previous GPU-based ET implementations [14], but it introduces redundant computation, since the weight matrix has to be computed at least twice in each iteration.
To address the problems discussed above, in this paper we make the following contributions: first, we present a multilevel parallel strategy for blob-based iterative reconstructions in ET on multi-GPUs, which can significantly accelerate 3D reconstruction in ET. Second, we develop an asynchronous communication scheme on multi-GPUs to minimize idle GPU time by asynchronously overlapping communications with computations. Finally, a data structure named blob-ELLR, adopting three symmetric optimization techniques, is developed to significantly reduce the storage space of the weight matrix. It needs only about 1/16 of the storage space of ELLPACK-R (ELLR). The blob-ELLR format also achieves optimally coalesced access to global memory, which suits 3D-ET reconstruction on multi-GPUs. Furthermore, we implement all the above techniques on two different platforms: an NVIDIA GeForce GTX295 and two NVIDIA Tesla C2050s. Experimental results show that the parallel strategy greatly reduces memory requirements and exhibits a significant acceleration.
Related background
In ET, the projection images are acquired from a specimen through the so-called single-axis tilt geometry. The specimen is tilted over a range, typically from -60° (or -70°) to +60° (or +70°) due to physical limitations of microscopes, with small tilt increments (1° or 2°). An image of the same object area is recorded at each tilt angle and then the 3D reconstruction of the specimen can be obtained from a set of projection images by means of blob-based iterative methods. In this section, we give a brief overview of blob-based iterative reconstruction methods, describe an iterative method called SIRT, and present a GPU computational model.
Blob-based iterative reconstruction methods
Blobs are spherically symmetric generalized Kaiser-Bessel window functions [20, 21], defined within a radius a by

b(r) = (I_{ m }(α·sqrt(1 − (r/a)²)) / I_{ m }(α)) · (sqrt(1 − (r/a)²))^{ m } for 0 ≤ r ≤ a, and b(r) = 0 otherwise,

where I_{ m }(·) denotes the modified Bessel function of the first kind of order m, a is the radius of the blob, and α is a non-negative real number controlling the shape of the blob. The choice of the parameters m, a, and α influences the quality of blob-based reconstructions. The basis functions developed in [21] are used for the choice of the parameters in our algorithm (i.e., a = 2, m = 2 and α = 3.6).
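The blob basis function with these parameter values (a = 2, m = 2, α = 3.6) can be evaluated in a few lines. The following is a pure-Python sketch, using the power series for I_m; it is an illustration of the generalized Kaiser-Bessel blob, not the authors' implementation:

```python
import math

def bessel_i(m, x, terms=30):
    """Modified Bessel function of the first kind I_m(x), via its power series."""
    return sum((x / 2.0) ** (2 * k + m) / (math.factorial(k) * math.factorial(k + m))
               for k in range(terms))

def blob(r, a=2.0, m=2, alpha=3.6):
    """Generalized Kaiser-Bessel blob b(r); identically zero outside radius a."""
    if abs(r) > a:
        return 0.0
    s = math.sqrt(1.0 - (r / a) ** 2)
    return (s ** m) * bessel_i(m, alpha * s) / bessel_i(m, alpha)
```

The blob is smooth and radially decreasing: it peaks at the centre (b(0) = 1 with this normalization) and vanishes together with its derivative at r = a, which is what makes overlapping blobs produce smooth reconstructions.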
where rf_{ ij } is the projected value of the pixel x_{ j } at an angle θ_{ i }. W is a sparse matrix with M rows and N columns whose elements are the w_{ ij }. In general, the storage demand of the weight matrix W rapidly increases as the size and the number of projection images increase. For example, when the size of the images is 2 k × 2 k, the storage demand of the weight matrix approaches 3.5 GB. It is hard to store such a large matrix on most GPUs due to their limited memory.
Under those assumptions, the image reconstruction problem can be modelled as the inverse problem of estimating the x_{ j }'s from the p_{ i }'s by solving the system of linear equations given by Eq. (3). This problem is usually solved by means of iterative methods.
Simultaneous iterative reconstruction technique (SIRT)
SIRT belongs to the family of fully simultaneous iterative methods for solving the linear systems that appear in image reconstruction. Fully simultaneous methods (such as SIRT [22] and the component averaging method (CAV) [23]) use the projections in all directions to refine the current reconstruction in each iteration, so they are well suited to parallel computing on GPUs. In our work, we adopt SIRT to perform parallel reconstruction on multi-GPUs.
SIRT produces fairly smooth reconstruction results but requires a large number of iterations to converge, since it adopts a global strategy: the approximation is updated simultaneously using all the projection images [24]. SIRT updates each x_{ j } only once per iteration; its updating strategy is therefore pixel-by-pixel.
GPU computation model
Our algorithm is based on the NVIDIA GPU architecture and the compute unified device architecture (CUDA) programming model. A GPU is a massively multi-threaded data-parallel architecture. NVIDIA GPUs contain a scalable array of streaming multiprocessors (SMs), each of which contains several scalar processors (SPs). On the older Tesla architecture there are 8 SPs per SM, while an SM contains 32 SPs on the newer Fermi architecture. All the SPs in the same SM execute the same instructions synchronously in a Single Instruction Multiple Thread (SIMT) fashion [18]. During execution, 32 consecutive threads are grouped into a warp, which is the scheduling unit on each SM.
NVIDIA provides the CUDA programming model and software environment. CUDA is an extension to the C programming language. A CUDA program consists of a host program that runs on the CPU and kernel programs that execute on the GPU itself. The host program typically sets up data and transfers it to and from the GPU, while the kernels process that data. A kernel executes as thousands of threads. Threads have a three-level hierarchy: grid, block, thread. A grid is the set of blocks that execute a kernel, and each block consists of hundreds of threads. All threads within a block share the same on-chip memory and can be synchronized at a barrier. Each block can only be assigned to, and executed on, one SM.
CUDA provides a synchronous communication scheme (i.e. cudaThreadSynchronize()) to handle the communication between GPUs. With the synchronous scheme, all threads on the GPUs must be blocked until the data communication has completed. CUDA devices use several memory spaces, including global, local, shared, texture and constant memory as well as registers. Of these, global memory is the most plentiful. Global memory loads and stores by a half warp (16 threads) are coalesced into as few as one transaction (or two transactions in the case of 128-bit words) when certain access requirements are met. Coalesced memory accesses deliver much higher effective bandwidth than non-coalesced accesses, thus greatly affecting the performance of bandwidth-bound programs.
Methods
Multilevel parallel strategy for blob-based iterative reconstruction
The processing time of 3D reconstruction with blob-based iterative methods is a major challenge in ET due to the large reconstructed data volume, so parallel computing on multi-GPUs is paramount to cope with the computational requirements. We present a multilevel parallel strategy for blob-based iterative reconstruction and implement it on an OpenMP-CUDA architecture.
Coarse-grained parallel scheme using OpenMP
In the first level of the multilevel parallel scheme, a coarse-grained parallelization follows straightforwardly from the properties of ET reconstruction. The single-tilt-axis geometry allows data decomposition into slabs of slices orthogonal to the tilt axis. In this decomposition, the number of slabs equals the number of GPUs, and each GPU reconstructs its own slab. Consequently, the 3D reconstruction problem decomposes into a set of 3D slab reconstruction sub-problems. However, the slabs are interdependent due to the overlapping nature of blobs. Therefore, each GPU receives a slab composed of its own slices plus additional redundant slices reconstructed in neighbouring slabs. The number of redundant slices depends on the blob extension. Within a slab, the own slices are reconstructed by the corresponding GPU using information provided by the redundant slices from the neighbouring GPUs. During 3D-ET reconstruction, each GPU therefore has to communicate with its neighbours to exchange the redundant slices.
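The slab decomposition can be sketched as follows. This is a minimal illustration of the idea; the halo width tied to the blob radius and the even split of slices are assumptions for the sketch, not the authors' exact partitioning code:

```python
def decompose_into_slabs(num_slices, num_gpus, blob_radius=2):
    """Split slices orthogonal to the tilt axis into one slab per GPU.

    Each slab holds its own contiguous range of slices plus `halo`
    redundant slices on each interior boundary, because a blob centred
    in one slab extends into its neighbours.
    """
    halo = blob_radius  # redundant slices per side, set by the blob extension
    base, rest = divmod(num_slices, num_gpus)
    slabs, start = [], 0
    for g in range(num_gpus):
        n = base + (1 if g < rest else 0)
        own = (start, start + n)             # slices this GPU reconstructs
        lo = max(0, own[0] - halo)           # redundant slices from the left neighbour
        hi = min(num_slices, own[1] + halo)  # redundant slices from the right neighbour
        slabs.append({"own": own, "with_halo": (lo, hi)})
        start += n
    return slabs
```

For example, splitting 190 slices over two GPUs with a blob radius of 2 gives each GPU 95 own slices plus 2 redundant slices at the shared boundary.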
Fine-grained parallel scheme using CUDA
In the second level of the multilevel parallel scheme, the 3D reconstruction of one slab is implemented as a fine-grained parallelization on each GPU using CUDA. The generic iterative process for the reconstruction of a slab is as follows:
Initialization: compute the matrix W and form an initial value for X^{ (0) } by BPT;
Reprojection: estimate the computed projection data P' based on the current approximation X;
Backprojection: backproject the discrepancy ΔP between the experimental and calculated projections, and refine the current approximation X by incorporating the weighted backprojection ΔX.
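The three steps above can be sketched in a few lines. The following is a minimal dense NumPy sketch of SIRT-style iterations; the relaxation scheme with row- and column-sum normalization is a common SIRT formulation and stands in for, rather than reproduces, the authors' GPU kernels:

```python
import numpy as np

def sirt(W, p, n_iters=50, relax=1.0):
    """Iteratively solve W x ≈ p.  W: (M, N) weight matrix, p: (M,) projections."""
    M, N = W.shape
    row_sum = np.maximum(W.sum(axis=1), 1e-12)   # normalization per projection ray
    col_sum = np.maximum(W.sum(axis=0), 1e-12)   # normalization per pixel
    x = np.zeros(N)                              # initial approximation X^(0)
    for _ in range(n_iters):
        p_calc = W @ x                           # reprojection of the current approximation
        delta_p = (p - p_calc) / row_sum         # discrepancy, measured vs. computed
        x += relax * (W.T @ delta_p) / col_sum   # backprojection refines X
    return x
```

Because every ray contributes to every update, the reprojection and backprojection products are the dominant cost per iteration, which is exactly what the matrix storage format and the GPU parallelization target.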
Asynchronous communication scheme
As described above, the multilevel parallel scheme requires two communications between neighbouring GPUs during each iteration: one to exchange the computed projections of the redundant slices after the reprojection step, and one to exchange the reconstructed pixels of the redundant slices after the backprojection step. CUDA provides a synchronous communication scheme (i.e. cudaThreadSynchronize()) to handle the communication between GPUs. With the synchronous scheme, GPUs must sit idle while data are exchanged, which has a negative impact on the performance of the reconstruction in ET.
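The benefit of overlapping communication with computation can be illustrated in plain Python. This is only a conceptual sketch: a background thread stands in for an asynchronous device-to-device copy, whereas the paper's implementation relies on CUDA's asynchronous mechanisms:

```python
import threading
import time

def reconstruct_interior():
    time.sleep(0.05)   # stands in for reprojection/backprojection of interior slices

def exchange_halo(log):
    time.sleep(0.05)   # stands in for copying redundant slices between GPUs
    log.append("halo exchanged")

def iteration_async():
    """Start the halo exchange, then compute the interior while it is in flight."""
    log = []
    comm = threading.Thread(target=exchange_halo, args=(log,))
    t0 = time.perf_counter()
    comm.start()             # asynchronous communication begins
    reconstruct_interior()   # computation overlaps the communication
    comm.join()              # block only when the halo data is actually needed
    return time.perf_counter() - t0, log

def iteration_sync():
    """Synchronous version: communication and computation are serialized."""
    log = []
    t0 = time.perf_counter()
    exchange_halo(log)
    reconstruct_interior()
    return time.perf_counter() - t0, log
```

The overlapped iteration takes roughly max(communication, computation) instead of their sum, which is the source of the idle-time reduction.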
Blob-ELLR format with symmetric optimization techniques
In the parallel blob-based iterative reconstruction, another problem is the lack of GPU memory for the sparse weight matrix. Several data structures have been developed to store sparse matrices. Compressed row storage (CRS) is the most widely used format for storing sparse matrices on CPUs [26]. ELLPACK can be considered an approach to outperform CRS on GPUs [27]. Vazquez et al. proposed and evaluated a variant of the ELLPACK format called ELLPACK-R (ELLR) [28], which has been shown to outperform the other most efficient formats for storing sparse matrices on GPUs [29]. ELLR consists of two arrays, A[] and I[], of dimension N × MaxEntriesbyRows, plus an additional N-dimensional integer array rl[] that stores the actual number of nonzeroes in each row [13, 28]. As the size and number of projection images increase, the memory demand of the sparse weight matrix rapidly increases; the weight matrix becomes too large to load into most GPUs, even with the ELLR data structure.
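As an illustration of the baseline format, an ELLR-style sparse matrix-vector product can be sketched as follows. This is a simplified row-major Python sketch; the GPU version assigns one thread per row and stores the arrays column-major so that accesses coalesce:

```python
def to_ellr(dense):
    """Build ELLR arrays from a dense matrix: values A, column indices I,
    and per-row nonzero counts rl, padded out to the longest row."""
    nonzeros = [[(v, j) for j, v in enumerate(row) if v != 0] for row in dense]
    max_entries = max(len(r) for r in nonzeros)   # MaxEntriesbyRows
    A = [[v for v, _ in r] + [0.0] * (max_entries - len(r)) for r in nonzeros]
    I = [[j for _, j in r] + [0] * (max_entries - len(r)) for r in nonzeros]
    rl = [len(r) for r in nonzeros]
    return A, I, rl

def ellr_spmv(A, I, rl, x):
    """y = W x using the ELLR arrays; each row walks only its rl[i] entries,
    so the padding is never touched."""
    return [sum(A[i][k] * x[I[i][k]] for k in range(rl[i])) for i in range(len(A))]
```

The rl[] array is what distinguishes ELLR from plain ELLPACK: it lets each row stop at its true number of nonzeroes instead of iterating over the padding.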
Although the blob-ELLR without the symmetric techniques can already reduce the storage of the sparse matrix W, a storage size of (4B) × N is still rather large, especially as N increases rapidly. The optimization takes advantage of the following symmetry relationships:
Symmetry 1
So, only w_{ 1j } is stored in the blob-ELLR, whereas the other elements are easily computed from w_{ 1j }. This scheme reduces the storage spaces of A and I to 25%.
Symmetry 2
It is easy to see that the point (-x, -y) of a slice is projected to the point r1 = -r at the same tilt angle θ. The weighted value of the point (-x, -y) can therefore be computed from that of the point (x, y), so it is not necessary to store the weighted values of almost half of the points in the matrix W; the space requirements for A and I are further reduced by nearly 50%.
Symmetry 3
In general, the tilt angles used in ET are symmetric about 0°. Under this condition, a point (-x, y) at a tilt angle -θ is projected to the point r2 = -r, so the projection coefficients are shared with those of the point (x, y) at the tilt angle θ. This further reduces the storage spaces of A and I by nearly 50% again.
With the three symmetric optimizations above, the storage size of the two arrays in the blob-ELLR is almost (B/2) × (N/2), i.e. reduced to nearly 1/16 of the original (4B) × N.
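The two geometric symmetries can be checked numerically. With the standard parallel-beam projection r = x·cosθ + y·sinθ (an assumption made explicit here for the sketch), mirroring a point through the origin (Symmetry 2) or mirroring x together with the tilt angle (Symmetry 3) both map r to -r:

```python
import math

def project(x, y, theta):
    """Parallel-beam projection: signed coordinate r of point (x, y) at tilt angle theta."""
    return x * math.cos(theta) + y * math.sin(theta)

theta = math.radians(50.0)
x, y = 3.0, -1.5
r = project(x, y, theta)

# Symmetry 2: (-x, -y) at the same angle projects to r1 = -r.
r1 = project(-x, -y, theta)

# Symmetry 3: (-x, y) at the opposite angle -theta projects to r2 = -r.
r2 = project(-x, y, -theta)
```

Because both mirrored points land at -r, their projection coefficients can be looked up from those of (x, y) instead of being stored, which is exactly where the two successive 50% reductions come from.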
Results and discussions
In order to evaluate the performance of the multilevel parallel strategy, blob-based iterative reconstructions of caveolae from a porcine aorta endothelial (PAE) cell have been performed [30]. Three experimental datasets are used (denoted small, medium and large), with 56 images of 512 × 512 pixels, 112 images of 1024 × 1024 pixels, and 119 images of 2048 × 2048 pixels, reconstructing tomograms of 512 × 512 × 190, 1024 × 1024 × 310 and 2048 × 2048 × 430, respectively. All the experiments are carried out on both GT200 and Fermi platforms. The GT200 machine consists of a 2.66 GHz Intel Xeon X5650, 24 GB RAM, and an NVIDIA GeForce GTX 295 card containing two Tesla GPUs, each with 30 SMs of 8 SPs (i.e. 240 SPs) at 1.2 GHz and 896 MB of memory. The Fermi machine combines the same Intel Xeon X5650 CPU with two NVIDIA Tesla C2050 cards; each Tesla C2050 adopts the Fermi architecture and contains 14 SMs of 32 SPs (i.e. 448 SPs) at 1.15 GHz and 3 GB of memory. Both machines run RedHat EL5 (64-bit). To compare the performance of multi-GPUs with the CPU, we ran the corresponding serial program on a single core of the same CPU (Intel Xeon X5650). To evaluate the asynchronous communication scheme and the blob-ELLR data structure separately, we performed two sets of experiments, described below.
Running times (s) on the CPU (single core)

| dataset | iterations | standard | ELLR | blob-ELLR |
| --- | --- | --- | --- | --- |
| 512 × 512 | 1 | 1413.12 | 235.52 | 204.12 |
| 512 × 512 | 5 | 4025.39 | 685.64 | 586.37 |
| 512 × 512 | 10 | 6498.18 | 1116.54 | 968.39 |
| 512 × 512 | 25 | 16032.87 | 2788.84 | 2423.67 |
| 1024 × 1024 | 1 | 18708.48 | 3225.60 | 2763.23 |
| 1024 × 1024 | 5 | 60328.71 | 10473.51 | 9126.34 |
| 1024 × 1024 | 10 | 93052.42 | 16423.07 | 14228.64 |
| 1024 × 1024 | 25 | 210824.49 | 37524.64 | 32787.49 |
| 2048 × 2048 | 1 | 111288.32 | 19476.48 | 17147.66 |
| 2048 × 2048 | 5 | 253034.69 | 44543.49 | 39536.56 |
| 2048 × 2048 | 10 | 429278.31 | 76382.34 | 68356.32 |
| 2048 × 2048 | 25 | 1006392.84 | 182980.72 | 164443.73 |

Running times (s) on multi-GPUs

| dataset | iterations | GTX295 standard | GTX295 ELLR | GTX295 blob-ELLR | C2050 standard | C2050 ELLR | C2050 blob-ELLR |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 512 × 512 | 1 | 14.63 | 7.88 | 5.76 | 8.02 | 6.34 | 3.95 |
| 512 × 512 | 5 | 42.36 | 24.31 | 14.35 | 23.26 | 18.45 | 11.92 |
| 512 × 512 | 10 | 66.36 | 47.89 | 30.32 | 55.03 | 39.04 | 28.41 |
| 512 × 512 | 25 | 155.45 | 110.83 | 82.31 | 137.82 | 93.92 | 72.01 |
| 1024 × 1024 | 1 | 162.54 | - | 79.39 | 98.73 | 81.83 | 57.07 |
| 1024 × 1024 | 5 | 523.13 | - | 289.43 | 329.31 | 265.81 | 194.29 |
| 1024 × 1024 | 10 | 742.02 | - | 479.30 | 492.81 | 389.32 | 294.62 |
| 1024 × 1024 | 25 | 1784.75 | - | 937.41 | 1092.71 | 867.92 | 687.05 |
| 2048 × 2048 | 1 | 869.29 | - | 481.09 | 502.42 | - | 337.64 |
| 2048 × 2048 | 5 | 1954.38 | - | 1226.74 | 1192.51 | - | 811.03 |
| 2048 × 2048 | 10 | 3476.94 | - | 2143.93 | 2076.93 | - | 1423.95 |
| 2048 × 2048 | 25 | 7986.42 | - | 6105.83 | 4691.93 | - | 3374.52 |

A dash indicates that the weight matrix in that format was too large to fit in the GPU memory.
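Using the figures from the running-time tables above, the relative speedups are straightforward to compute; a small sketch (the numbers are taken directly from the 25-iteration, blob-ELLR rows):

```python
# CPU (single core) vs. two Tesla C2050s, blob-ELLR format, 25 iterations
cpu_small, gpu_small = 2423.67, 72.01        # 512 × 512 dataset
cpu_large, gpu_large = 164443.73, 3374.52    # 2048 × 2048 dataset

speedup_small = cpu_small / gpu_small        # ≈ 34×
speedup_large = cpu_large / gpu_large        # ≈ 49×
```

The blob-ELLR reconstruction on two Tesla C2050s is thus roughly 34× faster than the serial CPU version on the small dataset and roughly 49× faster on the large one.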
Conclusions
ET allows elucidation of the molecular architecture of complex biological specimens. Blob-based iterative methods yield better results than other methods, but are not used extensively in ET because of their huge computational demands. Multi-GPUs have emerged as powerful platforms to cope with these computational requirements, but face difficulties due to synchronous communication and the limited memory of GPUs. In this work, we present a multilevel parallel strategy combined with an asynchronous communication scheme and a blob-ELLR data structure to perform high-performance blob-based iterative reconstruction in ET on multi-GPUs. The asynchronous communication scheme minimizes idle GPU time. The blob-ELLR structure needs only about 1/16 of the storage space of the ELLR storage structure and yields significant acceleration compared to the standard and ELLR matrix methods. Adopting the multilevel parallel strategy with the asynchronous communication scheme and the blob-ELLR data structure, we have performed parallel 3D-ET reconstruction using SIRT on multi-GPUs. The parallel strategy can also easily be applied to other simultaneous methods, e.g. CAV. In future work, we will further investigate and implement the multilevel parallel strategy and the asynchronous communication scheme on a many-GPU cluster.
Declarations
Acknowledgements
This article has been published as part of BMC Bioinformatics Volume 13 Supplement 10, 2012: "Selected articles from the 7th International Symposium on Bioinformatics Research and Applications (ISBRA'11)". The full contents of the supplement are available online at http://www.biomedcentral.com/bmcbioinformatics/supplements/13/S10.
We would like to thank Prof. Fei Sun and Dr. Ka Zhang at the Institute of Biophysics for providing the experimental datasets. This work is supported by grants from the National Natural Science Foundation of China (61103139, 61070129 and 61003164), the National Grand Fundamental Research 973 Program of China (2011CB302501) and the National Core-High Tech-Basic Program (2011ZX01028-001-002).
Authors’ Affiliations
References
- Frank J: Electron Tomography: Methods for Three-Dimensional Visualization of Structures in the Cell. 2006, New York: Springer Press.
- Marabini R, Rietzel E, Schroeder R, Herman GT, Carazo JM: Three-dimensional reconstruction from reduced sets of very noisy images acquired following a single-axis tilt schema: application of a new three-dimensional reconstruction algorithm and objective comparison with weighted back-projection. Journal of Structural Biology. 1997, 120(3): 363-371. 10.1006/jsbi.1997.3923.
- Herman GT: The Fundamentals of Computerized Tomography: Image Reconstruction from Projections. 2009, London: Springer Press, 2nd edition.
- Radermacher M: Weighted back-projection methods. Electron Tomography: Methods for Three-Dimensional Visualization of Structures in the Cell. Edited by: Frank J. 2006, New York: Springer Press, 245-273.
- Fernández JJ, Lawrence AF, Roca J, García I, Ellisman MH, Carazo JM: High performance electron tomography of complex biological specimens. Journal of Structural Biology. 2002, 138: 6-20. 10.1016/S1047-8477(02)00017-5.
- Vazquez F, Garzon EM, Fernandez JJ: Matrix implementation of simultaneous iterative reconstruction technique (SIRT) on GPUs. The Computer Journal. 2011, doi:10.1093/comjnl/bxr033.
- Sorzano CO, Marabini R, Boisset N, Rietzel E, Schroder R, Herman GT, Carazo JM: The effect of overabundant projection directions on 3D reconstruction algorithms. Journal of Structural Biology. 2001, 133: 108-118. 10.1006/jsbi.2001.4338.
- Fernandez JJ: High performance computing in structural determination by electron cryomicroscopy. Journal of Structural Biology. 2008, 164: 1-6. 10.1016/j.jsb.2008.07.005.
- Fernandez JJ, Carazo JM, García I: Three-dimensional reconstruction of cellular structures by electron microscope tomography and parallel computing. Journal of Parallel and Distributed Computing. 2004, 64: 285-300. 10.1016/j.jpdc.2003.06.005.
- Wan X, Zhang F, Liu Z: Modified simultaneous algebraic reconstruction technique and its parallelization in cryo-electron tomography. Proceedings of the International Conference on Parallel and Distributed Systems. 2009, 384-390.
- Agulleiro JI, Fernandez JJ: Fast tomographic reconstruction on multicore computers. Bioinformatics. 2011, 27: 582-583. 10.1093/bioinformatics/btq692.
- Castano-Diez D, Mueller H, Frangakis AS: Implementation and performance evaluation of reconstruction algorithms on graphics processors. Journal of Structural Biology. 2007, 157: 288-295. 10.1016/j.jsb.2006.08.010.
- Vazquez F, Garzon EM, Fernandez JJ: A matrix approach to tomographic reconstruction and its implementation on GPUs. Journal of Structural Biology. 2010, 170: 146-151. 10.1016/j.jsb.2010.01.021.
- Xu W, Xu F, Jones M, Keszthelyi B, Sedat JW, Agard DA: High-performance iterative electron tomography reconstruction with long-object compensation using graphics processing units (GPUs). Journal of Structural Biology. 2010, 171: 142-153. 10.1016/j.jsb.2010.03.018.
- Zheng SQ, Branlund E, Kesthelyi B, Braunfeld MB, Cheng Y, Sedat JW, Agard DA: A distributed multi-GPU system for high speed electron microscopic tomographic reconstruction. Ultramicroscopy. 2011, 111(8): 1137-1143. 10.1016/j.ultramic.2011.03.015.
- Wan X, Zhang F, Chu Q, Zhang K, Sun F, Yuan B, Liu Z: Three-dimensional reconstruction by adaptive simultaneous algebraic reconstruction technique in electron tomography. Journal of Structural Biology. 2011, 175: 277-287. 10.1016/j.jsb.2011.06.002.
- Andreyev A, Sitek A, Celler A: Acceleration of blob-based iterative reconstruction algorithm using Tesla GPU. IEEE Nuclear Science Symposium Conference. 2009, HP3-4: 4095-4098.
- NVIDIA: CUDA Programming Guide. 2008, http://www.nvidia.com/cuda.
- Censor Y: Finite series-expansion reconstruction methods. Proceedings of the IEEE. 1983, 71(3): 409-419.
- Lewitt RM: Alternatives to voxels for image representation in iterative reconstruction algorithms. Physics in Medicine and Biology. 1992, 37: 705-716. 10.1088/0031-9155/37/3/015.
- Matej S, Lewitt RM: Efficient 3D grids for image-reconstruction using spherically-symmetric volume elements. IEEE Transactions on Nuclear Science. 1995, 42: 1361-1370. 10.1109/23.467854.
- Penczek P, Radermacher M, Frank J: Three-dimensional reconstruction of single particles embedded in ice. Ultramicroscopy. 1992, 40(1): 33-53. 10.1016/0304-3991(92)90233-A.
- Censor Y, Gordon D, Gordon R: Component averaging: an efficient iterative parallel algorithm for large and sparse unstructured problems. Parallel Computing. 2001, 27: 777-808. 10.1016/S0167-8191(00)00100-9.
- Gilbert P: Iterative methods for the 3D reconstruction of an object from projections. Journal of Theoretical Biology. 1972, 76: 105-117.
- Herman GT, Meyer LB: Algebraic reconstruction can be made computationally efficient. IEEE Transactions on Medical Imaging. 1993, 12: 600-609. 10.1109/42.241889.
- Bisseling RH: Parallel Scientific Computation. 2004, Oxford University Press.
- Rice JR, Boisvert RF: Solving Elliptic Problems Using ELLPACK. 1985, New York: Springer Press.
- Vazquez F, Garzon EM, Martinez JA, Fernandez JJ: Accelerating sparse matrix-vector product with GPUs. Proceedings of Computational and Mathematical Methods in Science and Engineering. 2009, 1081-1092.
- Vazquez F, Garzon EM, Fernandez JJ: A new approach for sparse matrix vector product on NVIDIA GPUs. Concurrency and Computation: Practice and Experience. 2011, 23: 815-826. 10.1002/cpe.1658.
- Sun S, Zhang K, Xu W, Wang G, Chen J, Sun F: 3D structural investigation of caveolae from porcine aorta endothelial cell by electron tomography. Progress in Biochemistry and Biophysics. 2009, 36(6): 729-735.
Copyright
This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.