 Research Article
 Open Access
Modeling, validation and verification of three-dimensional cell-scaffold contacts from terabyte-sized images
BMC Bioinformatics volume 18, Article number: 526 (2017)
Abstract
Background
Cell-scaffold contact measurements are derived from pairs of co-registered volumetric fluorescent confocal laser scanning microscopy (CLSM) images (z-stacks) of stained cells and three types of scaffolds (i.e., spun coat, large microfiber, and medium microfiber). Our analysis of the acquired terabyte-sized collection is motivated by the need to understand the nature of the shape dimensionality (1D vs 2D vs 3D) of cell-scaffold interactions relevant to tissue engineers that grow cells on biomaterial scaffolds.
Results
We designed five statistical and three geometrical contact models, and then down-selected them to one from each category using a validation approach based on measurements physically orthogonal to CLSM. The two selected models were applied to 414 z-stacks with three scaffold types, and all contact results were visually verified. A planar geometrical model for the spun coat scaffold type was validated from atomic force microscopy images by computing a surface roughness of 52.35 nm ± 31.76 nm, which was 2 to 8 times smaller than the CLSM resolution. A cylindrical model for fiber scaffolds was validated from multi-view 2D scanning electron microscopy (SEM) images. The fiber scaffold segmentation error was assessed by comparing fiber diameters from SEM and CLSM to be between 0.46% and 3.8% of the SEM reference values. For contact verification, we constructed a web-based visual verification system with 414 pairs of images with cells and their segmentation results, and with 4968 movies with animated cell, scaffold, and contact overlays. Based on visual verification by three experts, we report the accuracy of cell segmentation to be 96.4% with 94.3% precision, and the accuracy of cell-scaffold contact to be 62.6% with 76.7% precision for a statistical model and 93.5% with 87.6% precision for a geometrical model.
Conclusions
The novelty of our approach lies in (1) representing cell-scaffold contact sites with statistical intensity and geometrical shape models, (2) designing a methodology for validating 3D geometrical contact models, and (3) devising a mechanism for visual verification of hundreds of 3D measurements. The raw and processed data are publicly available from https://isg.nist.gov/deepzoomweb/data/ together with the web-based verification system.
Background
The problem of 3D contact measurements between a cell and its surrounding scaffold is related to co-localization of two objects from dual-color fluorescent microscopy z-stacks [1,2,3,4], where each channel is imaged to excite either the cell or the scaffold stain. The z-stacks are 3D images formed by a set of uniformly-spaced cross-sectional 2D images along a z-axis. In general, co-localization refers to the spatial overlap of at least two fluorescent labels (staining dyes) emitting distinct wavelengths. The mathematical definition of the spatial overlap in volumetric data (z-stacks) can be viewed as a co-occurrence of two labels at the same or neighboring locations, or as a correlation of intensities at the co-occurring locations. In this work, we use the term 3D contact to refer to the co-occurrence of fluorescent labels because of our interest in measuring the shape of cell-scaffold spatial interactions.
The shape measurements of cell-scaffold contacts are important for tissue engineers who grow cells on a variety of biomaterial scaffolds. One of the many challenges in growing cells is to discover how cellular processes (for instance, differentiation and proliferation) and cell shape changes are coordinated during morphogenesis [5]. In the past, it has been reported that (1) the type of scaffold drives the cell shape [6, 7], and (2) scaffold substrate effects on the shape of human bone marrow stromal cells (hBMSCs) can influence their behavior and differentiation [8,9,10,11]. However, there is a lack of understanding of the relationship between cell shape and cell-scaffold contact shape, and how these measurements may serve as predictors of cell differentiation fate. The biological motivation is illustrated in Fig. 1.
Current approaches to designing 3D scaffold niches focus on assessing the effect of a design for a desirable cell function, such as proliferation, expansion, or differentiation towards a target lineage. Although this approach is very useful, it does not enable a reasoned approach to scaffold design where the scaffold is constructed to drive the cells into a particular morphology that will preferentially guide the cells towards the desired function. Although there is extensive evidence linking cell shape and function [10, 12,13,14], there is a lack of quantitative data regarding the 3D morphology of cell-material interactions in biomaterial scaffolds. In order to address these issues, the 3D shape of contacts between scaffolds and primary human bone marrow stromal cells (hBMSCs) was quantitatively evaluated on three biomaterial substrates made from poly(lactic-co-glycolic acid) (PLGA): Spun Coat (SC), Medium Microfibers (MMF) and Large Microfibers (MF). hBMSCs were used for this study because of their clinical relevance to tissue engineering and regenerative medicine [15] and due to the intense interest in guiding their behavior through environmental cues [12, 16,17,18,19,20,21,22]. The three chosen substrates make an interesting system to study because fibrous scaffolds (MF and MMF) have been observed to drive osteogenic differentiation of hBMSCs while the flat substrates did not [9, 11, 22]. By constructing all three substrates from the same material (PLGA), the effect of substrate structure could be studied in the absence of changes in composition. A 24 h cell culture time point was selected for imaging to give the cells enough time to achieve a stable morphology but not so much time that the cells had proliferated or differentiated.
Although tissue engineers aim to improve scaffold design in order to guide cell behavior, the role of the geometry of cell-scaffold contacts has not been adequately considered. Cell shape is dictated by the geometry of cell-matrix contacts, as the cell can only spread and adhere to the matrix which surrounds it. In addition, cell-adhesion sites, often described as focal adhesions, may trigger signaling events that guide gene expression and cell behavior. Thus, the geometry and spacing of cell adhesion sites will influence gradients and timing of these signaling events. For these reasons, tissue engineers can benefit from 3D mapping of cell-scaffold contact sites in order to generate new insights for designing scaffolds that guide cell function. For example, cell shape alone might not convey information about the cell-scaffold contact surface for cells residing on hydrophobic versus hydrophilic scaffolds with the same geometry. Nevertheless, such contact measurements have not been acquired due to the complexity of these measurements, as they require information about both the cell and the scaffold. Our motivation for the work comes from the need to design a measurement methodology for cell-scaffold contact sites so that cell differentiation fate can be reliably predicted.
Several challenges of measuring cellscaffold contact shapes can be summarized as follows:

(1) Our insufficient knowledge about the spatial and intensity statistics, as well as the geometry, of foreground objects (cell membrane, scaffold) limits our ability to detect foreground reliably (see Fig. 2a).

(2) The difficulties in acquiring orthogonal cell-scaffold contact measurements and validating automated analytical algorithms constrain our measurement confidence (i.e., orthogonal measurements refer to those physics-based methods that rely on an imaging modality other than fluorescence).

(3) Large RAM (random access memory) requirements (≈3 GB just to load a pair of input z-stacks) and a large data volume (>1 TB) impose computational and execution-time burdens on the analyses.

(4) Fluorescent staining dyes emit light at overlapping wavelength ranges, which introduces intensity bleed-through across co-registered cell and scaffold z-stacks (see Fig. 2b). This leads to a bias in co-localization (i.e., locations of a stained cell have higher intensity values in the scaffold channel than background, and vice versa).

(5) The design of an efficient and geographically accessible visual verification system for complicated 3D contact shapes over several hundred z-stacks is difficult.
The specific problem addressed in this work can be formulated as the design of a measurement methodology for cell-scaffold contacts over terabyte-sized collections of dual-color fluorescent confocal microscopy z-stacks. Following the past work [7], the measurement methodology consists of three components:

(1) Modeling of (a) an object of interest (cell or scaffold) in each z-stack for foreground segmentation and (b) a cell-scaffold contact based on the relative spatial positions of the segmented objects.

(2) Validation of the accuracy of segmentation and contact models.

(3) Verification of several hundred automatically detected cell-scaffold contacts through visual inspection.
The experimental design includes three types of scaffolds (SC, MF, and MMF), eight cell-scaffold contact methods, and three human experts performing verification. Figure 3 shows one example of a pair of cell and scaffold z-stacks. These three scaffolds represent geometries that cause cells to have contacts with scaffolds at one or multiple z-planes (SC – one contact plane; MF and MMF – more than 3 contact planes).
We approach the design problem for the three-component measurement methodology by addressing challenges specific to each component, as summarized in Table 1. The modeling challenges related to our insufficient knowledge about the statistics and geometry of foreground objects are approached as an optimization problem over a set of statistical and geometrical models. The modeling challenges also include large RAM and execution-time requirements due to the terabyte-sized data collection. To alleviate these challenges, regions of interest (ROIs) are cropped from each image z-stack such that all cells and their surrounding scaffold are included. The validation challenges related to the difficulties in acquiring physics-based orthogonal cell-scaffold contact measurements are addressed by using Scanning Electron Microscopy (SEM) and Atomic Force Microscopy (AFM). Finally, the verification challenges related to raw and processed data quality due to complicated 3D contact shapes are handled by designing a web-based visual inspection system that accommodates verification time and accuracy trade-offs.
Our work on modeling, validation and verification can be related to published methods that focus on the problems of co-localization, foreground modeling, 3D segmentation validation, and verification of 3D contacts at large scale. The past work is summarized in Table 2 together with its relationship to our presented work. Detailed descriptions of related work can be found in Additional file 1.
Based on the reviewed related work, the novelty and contributions of our work come from:

(1) creating and optimizing cell-scaffold contact representations that incorporate five statistical and three geometrical models,

(2) designing a methodology for validating fiber segmentation using reference SEM and fluorescent confocal measurements of single fibers, and

(3) devising a mechanism for rapid visual verification of hundreds of 3D measurements.
An additional contribution comes from the fact that we created the largest collection of 3D cell-scaffold measurements in the biomanufacturing community. The data are available at https://isg.nist.gov/deepzoomweb/data and the web-based verification system for cell segmentation and cell-scaffold contacts is available at https://isg.nist.gov/CellScaffoldContact/app/index.html.
The main manuscript presents materials and methods, experimental results, discussion of quantitative and qualitative results, and conclusions. The appendices contain the detailed description of related work (Additional file 1), cell segmentation algorithm (Additional file 2), model for cropping contact regions of interest (Additional file 3), statistical model of background (Additional file 4), statistical models for segmenting all scaffold types (Additional file 5), algorithms based on statistical models for segmenting all scaffold types (Additional file 6), algorithm based on a planar geometrical model for segmenting spun coat scaffolds (Additional file 7), algorithms based on cylindrical geometrical models for segmenting fiber scaffolds (Additional file 8), evaluations of goodness-of-fit for the planar model used for modeling spun coat scaffolds (Additional file 9), and validation steps based on 2D SEM and 3D CLSM data of single fibers (Additional file 10).
Methods
Although we focus on shape metrology, one could view the materials and methods as a foundation for answering a question: “How would cell, scaffold, and cell-scaffold interaction shape characteristics affect cell fate (differentiation and proliferation)?” Answering this and other related questions is the driving factor behind the next sections.
Materials
The materials and digital data are divided into a set supporting cell-scaffold contact measurements and a set acquired for algorithmic validation purposes.
Cell-scaffold contact measurements
In this paper, the data acquisition focuses on the measurements establishing the effect of scaffold types on cell morphology and on cell behavior. The data sets are acquired by CLSM as images (z-stacks) of cells cultured on three different scaffolds. The three scaffolds are described in Table 3.
Cell preparation
Primary human bone marrow stromal cells (hBMSCs, Tulane Center for Gene Therapy, donor #8004 L, 22 yr. male, iliac crest) were cultured in medium (α-MEM containing 16.5% by vol. fetal bovine serum, 4 mmol/L L-glutamine, and 100 units/mL penicillin and 100 μg/mL streptomycin) in a humidified incubator (37 °C with 5% CO2 by vol.) to 70% confluency, trypsinized (0.25% trypsin by mass containing 1 mmol/L ethylenediaminetetraacetate (EDTA), Invitrogen) and seeded onto substrates (scaffolds) at passage 4. SC, MF and MMF substrates (see Table 3) were placed in multi-well plates, and cells suspended in medium were seeded onto them at a density of 1250 cells/cm². hBMSCs were cultured for 1 day for all treatments prior to imaging. After 1 day of culture, cells on scaffolds were fixed with 3.7% (vol./vol.) formaldehyde and stained for cell membrane (5 μmol/L Oregon Green maleimide, Life Technologies) and nucleus (0.03 mmol/L 4′,6-diamidino-2-phenylindole, DAPI, Life Technologies). More than 100 cells were imaged per scaffold type to provide statistically meaningful results.
Scaffold preparation
The MMF and MF scaffolds for cell culture were created by electrospinning a blend of two types of poly(lactic-co-glycolic acid) (PLGA), using the same polymer mixture for the MMF and MF treatments. The polymer mixture was 90% mass fraction PLGA (50:50 molar ratio of L to G, relative molecular mass ≈110,000 g/mol, Lactel Absorbable Polymers) and 10% mass fraction PLGA-Flamma Fluor FKR648 (PLGA, 50:50 molar ratio of L to G, relative molecular mass ≈25,000 g/mol, Flamma Fluor FKR648 ester-linked to the PLGA, Akina Inc., Polyscitech). Flamma Fluor FKR648 was covalently bound to the PLGA via an ester linkage to prevent leaching into the cell culture medium. For MF scaffolds, the PLGA/PLGA-FKR648 blend was dissolved in 3:1 acetone:ethyl acetate and electrospun (18 gauge steel needle, 2.3 mL/h, tip-to-collector distance of 15 cm, aluminum foil target) at 14 kV (high voltage generator, ES30P-5W, Gamma High Voltage Research) to yield monodisperse PLGA fibers. For MMF scaffolds, the PLGA/PLGA-FKR648 blend was dissolved in acetone and electrospun (22 gauge steel needle, 1.25 mL/h, tip-to-collector distance of 15 cm, aluminum foil target) at 12 kV (high voltage generator, ES30P-5W, Gamma High Voltage Research) to yield monodisperse PLGA fibers. For scanning electron microscope (SEM) imaging, the PLGA mats were removed from the foil and cut into 5 mm × 5 mm squares.
Imaging
The samples were imaged with CLSM (Leica SP5 II, Leica Microsystems) using a 63× water-immersion objective (numerical aperture 0.9, 1 Airy unit). Prior to imaging, cell culture medium was removed and replaced with phosphate buffered saline (PBS) to reduce the background fluorescent signal. A z-stack with two channels was collected for each of 711 cells. The two channels corresponded to the cell membrane (Oregon Green – excitation 488 nm, emission 501 nm to 570 nm) and the fiber scaffold (Flamma Fluor FKR648 – excitation 633 nm, emission 652 nm to 708 nm). We also collected a single image of the nucleus (DAPI – excitation 405 nm, emission 413 nm to 467 nm) to confirm that measured objects were cells (objects without a nucleus were discarded). Based on the manufacturer’s defined resolution for the 63× objective (XY = 217 nm and Z = 626 nm for a 488 nm wavelength), we defined our acquisition fluorescent voxel dimensions in X, Y and Z respectively at 0.12 μm × 0.12 μm × 0.462 μm [2048 pixels × 2048 pixels in X and Y, up to 175 frames in Z]. Each z-frame in the z-stacks was exported as an 8 MB tif image with a resolution of 2048 pixels × 2048 pixels (246 μm × 246 μm) and 16 bits per pixel. Examples of z-frame tif images are shown in Fig. 4.
Data summary and quality assurance
The data collection initially generated 711 z-stack pairs of [cell, scaffold] that were visually inspected. We kept only z-stacks with individual hBMSCs that were not touching other cells so that the contact measurements are per cell. Out of the initial 711 pairs, we eliminated 256 pairs due to out-of-stack cells (automated cell localization and focus failed) and 41 pairs due to a very low background offset that would not allow us to estimate a background intensity distribution model. After eliminating a total of 297 pairs, the remaining 414 [cell, scaffold] pairs are summarized in Table 4. Each z-stack was between 922.75 MB and 1468 MB [2048 pixels (X) × 2048 pixels (Y) × 110 to 175 pixels (Z)], which mapped to about 3 GB of RAM when a pair of [cell, scaffold] z-stacks was loaded.
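The quoted z-stack sizes follow directly from the acquisition geometry (2048 × 2048 pixels, 110 to 175 z-frames, 16 bits per voxel). As a minimal sanity check (an illustrative sketch, not part of the paper's pipeline):

```python
# Sanity check of the quoted z-stack sizes: 2048 x 2048 pixels,
# 110 to 175 z-frames, 16 bits (2 bytes) per voxel.

def zstack_bytes(x=2048, y=2048, z=110, bytes_per_voxel=2):
    """Raw size of a single-channel z-stack in bytes."""
    return x * y * z * bytes_per_voxel

small = zstack_bytes(z=110)   # 922,746,880 B  -> 922.75 MB
large = zstack_bytes(z=175)   # 1,468,006,400 B -> 1468 MB
pair_gb = 2 * large / 1e9     # a [cell, scaffold] pair: ~2.94 GB,
                              # consistent with the ~3 GB RAM figure
print(round(small / 1e6, 2), round(large / 1e6, 2), round(pair_gb, 2))
```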
Algorithmic model validation measurements
Surface roughness reference measurements of a spun coat scaffold to validate a planar geometrical contact model
Surface roughness of the SC films was measured using atomic force microscopy (AFM, Dimension Icon, Bruker, Billerica, MA). Six uniformly distributed spots on a SC film sample were analyzed, each with a spot size of 50 μm × 50 μm (256 samples per scan line, 0.195 μm spatial resolution). The images were analyzed with Nanoscope Analysis (Bruker), and the root mean square (RMS) roughness was reported for each analyzed spot and averaged to produce a single value for the SC film.
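The per-spot RMS roughness is the root-mean-square deviation of the AFM height map from its mean height. A minimal sketch (function name and the mean-subtraction convention are ours; AFM analysis software typically subtracts a fitted plane before this computation):

```python
import numpy as np

def rms_roughness(height_map):
    """RMS roughness: root-mean-square deviation of an AFM height map
    from its mean height (mean subtraction only, for simplicity)."""
    h = np.asarray(height_map, dtype=float)
    return float(np.sqrt(np.mean((h - h.mean()) ** 2)))

# Per the protocol above, six 50 um x 50 um spots each yield one RMS value,
# and the film value is their average:
# film_rms = np.mean([rms_roughness(spot) for spot in six_spots])
```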
Single fiber radius reference measurements to validate a cylindrical geometrical contact model and to assess accuracy of fiber scaffold segmentation
SEM was chosen to verify results from confocal epifluorescence mode (CLSM) because SEM has higher resolution than CLSM. However, SEM is conducted in the dry state, whereas CLSM was conducted under water immersion, and PLGA fibers can swell when hydrated. To address this issue, the fibers that were imaged by SEM in the dry state were imaged by confocal via water immersion within 2 h of being immersed in PBS. Thus, swelling of the PLGA in buffer should be minimal since it takes several days for PLGA to swell in buffer [23].
We used the same polymer and spinning conditions (as indicated before) as for the large microfiber (MF) sample. However, rather than spinning the fibers onto an aluminum foil target, fibers were spun onto aluminum mounts. Aluminum mounts were 25 mm × 75 mm × 0.5 mm and were made from folded aluminum foil. Each mount had five 1.5 mm-diameter holes punched into it using 1.5 mm biopsy punches (Miltex) and distributed across its surface as shown in Fig. 5. The mounts were then covered in carbon tape (except over the holes) and mounted with carbon tape to a grounded spinning metal drum that was 62.5 mm in diameter. The drum was spun at 60 RPM and allowed to collect fibers for 60 s. Mounts were then detached from the drum and imaged with SEM.
Single fiber measurements were acquired using an SEM (Hitachi S4700 SEM, 5 kV, 10 mA, ≈13 mm working distance) and CLSM (Leica SP5 II, Leica Microsystems) with settings similar to those used during the acquisition of [cell, scaffold] contact data. The single electrospun PLGA microfibers were placed flat on a surface and imaged by SEM at 31.25 nm resolution in the X and Y dimensions [1280 pixels × 960 pixels in X and Y] from two viewpoints at 90° and 65° from the flat surface. The two viewpoints allow us to verify that the fibers are cylindrical. After SEM, samples were immersed in PBS and imaged via water-immersion CLSM within 2 h of being hydrated (to minimize swelling) since PLGA can swell in buffer. The CLSM z-stacks were acquired at a resolution of 120 nm × 120 nm × 419 nm [2048 pixels × 2048 pixels in X and Y] with approximately 10% spatial area overlap between z-stacks, and were manually stitched in a manner similar to the SEM 2D images. Figure 5 shows a single fiber sample collector and the SEM and CLSM images acquired along one fiber.
Based on these single fiber measurements, we could validate the segmentation accuracy of fiber scaffolds from CLSM z-stacks against the reference measurements obtained from SEM images. Furthermore, we could use the reference measurements for selecting the two best of the eight segmentation models to minimize the time-consuming contact verification effort.
Methodology
Following our approach to address the multiple challenges of 3D contact measurements, we designed a methodology as shown in Fig. 6. The validation of a cell model refers to our previous work [7].
In comparison to previous work on co-localization (see Additional file 1), our definition of cell-scaffold contact is aligned with object-based analysis as opposed to spatial image cross-correlation spectroscopy. While we model objects (cell, scaffold, and background) in two CLSM z-stacks using continuous statistical and geometrical models, the cell-scaffold contact sites are defined based on the spatial proximity of categorical cell and scaffold labels, as illustrated in Fig. 7. In order to obtain categorical labels, the probability values are adaptively thresholded using the maximum entropy criterion [24]:
T* = argmax_T [H_FRG(T) + H_BKG(T)]

where H_FRG(T) is the entropy of the foreground, H_BKG(T) is the entropy of the background, and the optimization is over all values of the threshold T. The same adaptive thresholding method is used for the geometrical methods after cell masking of the z-stacks processed based on a geometrical model.
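The maximum entropy criterion can be sketched as follows, assuming the per-voxel probability (or intensity) values are first binned into a histogram; this is an illustrative Kapur-style implementation, not the paper's exact code:

```python
import numpy as np

def max_entropy_threshold(values, bins=256):
    """Kapur-style maximum entropy threshold: choose T maximizing
    H_FRG(T) + H_BKG(T) over a normalized histogram of the values."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()
    cdf = np.cumsum(p)
    best_T, best_H = edges[1], -np.inf
    for t in range(1, bins):
        w_bkg = cdf[t - 1]      # probability mass below the candidate T
        w_frg = 1.0 - w_bkg     # probability mass above it
        if w_bkg <= 0 or w_frg <= 0:
            continue
        pb = p[:t] / w_bkg      # normalized background distribution
        pf = p[t:] / w_frg      # normalized foreground distribution
        H_bkg = -np.sum(pb[pb > 0] * np.log(pb[pb > 0]))
        H_frg = -np.sum(pf[pf > 0] * np.log(pf[pf > 0]))
        if H_frg + H_bkg > best_H:
            best_H, best_T = H_frg + H_bkg, edges[t]
    return best_T
```

On a well-separated bimodal distribution the selected threshold falls between the two modes, which is the behavior needed to turn probability maps into categorical labels.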
Modeling
Modeling is divided into cell, scaffold, and contact modeling, as illustrated in the overview of the methodology in Fig. 6. Cells are segmented using a statistical approach, while scaffolds are segmented using multiple statistical and geometrical approaches. Cell-scaffold contacts are obtained based on the law of total probability for statistical models and on surface intersection for geometrical models.
Cell model for segmentation and ROI model for cropping
We started with cell segmentation by leveraging the previous work [7] and using the permutation-based design of an optimized algorithm selected based on analyses of thousands of cell z-stacks. The algorithm is provided in Additional file 2. All cell segmentation results were visually inspected for quality assurance using the web-based verification system. Out of 414 cell z-stacks, 15 cell z-stacks were manually segmented using ImageJ since the experts rated the results from the automated segmentation as poor or missed. In order to handle the large volumetric data, we cropped regions of interest (ROIs) from cell and scaffold z-stacks according to the bounding boxes of visually verified cell segmentation results. Our assumption was that the cell-scaffold contact points occur only within a one-voxel neighborhood of the cell surface, and therefore the rest of the z-stacks could be discarded. In order to preserve enough voxels around cells, we added 10% margins on each side of the x and y boundaries of the cell bounding box enclosing the verified cell segment. For the z boundary, we analyzed the z-axis intensity profile of the scaffold z-stack and selected the frames with high intensity values. The cropping method is described in Additional file 3.
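The x-y part of the cropping step can be sketched as follows (names and array layout are illustrative assumptions; the z-range selection from the scaffold intensity profile, described in Additional file 3, is omitted):

```python
import numpy as np

def crop_roi(cell_stack, scaffold_stack, cell_mask, xy_margin=0.10):
    """Crop both z-stacks to the bounding box of a verified cell segment,
    padded by 10% of the box size on each x and y side.
    Arrays are assumed to be indexed as (z, y, x)."""
    _, yy, xx = np.nonzero(cell_mask)
    y0, y1 = yy.min(), yy.max() + 1
    x0, x1 = xx.min(), xx.max() + 1
    my = int(round((y1 - y0) * xy_margin))
    mx = int(round((x1 - x0) * xy_margin))
    y0, y1 = max(0, y0 - my), min(cell_mask.shape[1], y1 + my)
    x0, x1 = max(0, x0 - mx), min(cell_mask.shape[2], x1 + mx)
    return cell_stack[:, y0:y1, x0:x1], scaffold_stack[:, y0:y1, x0:x1]
```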
Scaffold models
Our modeling approaches to segmenting scaffolds were divided into statistical and geometrical based on the modeling assumptions incorporated by the algorithms. Scaffold z-stacks can be modeled using statistical assumptions similar to the ones used for segmenting cells. However, the scaffold z-stacks typically have smaller amplitude signals than cell z-stacks and hence are more affected by bleed-through and noise. We designed eight specific models as representative samples of a larger body of image processing models. Our goal was to include a model that was optimal in the context of cell-scaffold contact point estimation. Furthermore, the two types of models allowed us to compare segmentation accuracies derived based on general (statistical) and scaffold-specific (geometrical) assumptions. These assumptions reflected the amount of prior knowledge embedded into the measurement algorithms and the level of effort required to customize models for each type of scaffold. Table 5 provides a short summary of all models. We describe each statistical method in Additional file 5 and provide the algorithmic details in Additional file 6. The geometrical methods are described in Additional file 7 (algorithm based on a planar geometrical model for segmenting spun coat scaffolds) and in Additional file 8 (algorithms based on cylindrical geometrical models for segmenting fiber scaffolds).
Cell-scaffold contact model
For the statistical models, the contact probability is computed according to the law of total probability by:

P(Contact) = P(Contact | Cell) · P(Cell) + P(Contact | Scaffold) · P(Scaffold)
The aforementioned five statistical models yield two conditional probabilities: P(Contact | Cell) from the cell channel and P(Contact | Scaffold) from the scaffold channel. In order to estimate the probabilities P(Cell) and P(Scaffold), we use K-means clustering to partition the 2D data points formed by intensity values from the cell and scaffold channels at each voxel location (i.e., the cell-scaffold intensity scatterplot). Figure 8 illustrates 3 clusters corresponding to cell, scaffold, and background. The probabilities at each voxel point are defined as relative distances to the cluster centroids, constrained by the sum of probabilities being equal to 1 (i.e., P(Cell) + P(Scaffold) + P(BKG) = 1). Figure 9 shows examples of cluster assignments of voxel points for each of the scaffold types, where the scatterplot points are color-coded as cell (red), scaffold (blue), and background (black) according to the K-means clustering assignment.
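The combination of the two conditional probabilities with the K-means-derived priors can be sketched as follows. The inverse-distance normalization is one plausible convention for "relative distances to the cluster centroids" (an assumption; the paper only requires that the three probabilities sum to 1 at each voxel):

```python
import numpy as np

def cluster_probabilities(points, centroids):
    """P(Cell), P(Scaffold), P(BKG) per voxel from relative distances to
    the three K-means centroids (inverse-distance weights, normalized to
    sum to 1 -- one plausible convention, see lead-in)."""
    d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    inv = 1.0 / np.maximum(d, 1e-12)  # guard against a point on a centroid
    return inv / inv.sum(axis=1, keepdims=True)

def contact_probability(p_contact_cell, p_contact_scaffold, p_cell, p_scaffold):
    """Law of total probability over the two foreground hypotheses:
    P(Contact) = P(Contact|Cell) P(Cell) + P(Contact|Scaffold) P(Scaffold)."""
    return p_contact_cell * p_cell + p_contact_scaffold * p_scaffold
```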
For the geometrical models, contact surfaces are the ultimate objective of the measurement. We define the contact model for any geometrical method as the intersection of a binary cell segment with a binary scaffold segment (denoted as the geometrical intersection model). Due to the discrete nature of z-stacks, the intersection is defined as a co-occurrence or one-voxel adjacency of cell-scaffold binary labels at the same voxel location, as illustrated in Fig. 7.
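A minimal sketch of the geometrical intersection model, interpreting "one-voxel adjacency" as a 6-connected neighborhood (the choice of connectivity is our assumption):

```python
import numpy as np

def dilate_one_voxel(mask):
    """Grow a binary mask by one voxel along each axis (6-connected)."""
    out = mask.copy()
    for axis in range(mask.ndim):
        lo = [slice(None)] * mask.ndim
        hi = [slice(None)] * mask.ndim
        lo[axis] = slice(None, -1)
        hi[axis] = slice(1, None)
        out[tuple(lo)] |= mask[tuple(hi)]  # neighbor in +direction
        out[tuple(hi)] |= mask[tuple(lo)]  # neighbor in -direction
    return out

def contact_voxels(cell_mask, scaffold_mask):
    """Contact sites: cell voxels that co-occur with, or lie one voxel
    away from, scaffold voxels."""
    return cell_mask & dilate_one_voxel(scaffold_mask)
```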
While a plane as a contact surface for spun coat is clearly defined, a contact surface for fibers (MF and MMF) can be defined in multiple ways. A piecewise linear cylinder can be strictly defined by its skeleton points and a set of radii at those points. It can also be defined in a relaxed sense as a set of voxels obtained by thresholding z-stacks after a vesselness/tubeness filter has been applied. The vesselness filter is based on the eigenvalue decomposition of the Hessian matrix. This filter computes the Hessian at every pixel (voxel) of the input image by convolving the image with second and cross derivatives of the Gaussian function [25]. The sigma parameter (the standard deviation of the Gaussian function) has an impact on the enhanced image appearance. The vesselness filter enhances intensities of tubular structures with radii corresponding to the sigma value. This enhancement is important for selecting a set of tubular voxel candidates in a z-stack by thresholding. Given the uncertainty of contact measurements due to spatial resolution and contact representation (i.e., a cylinder represented by a sequence of spheres at each skeleton point), we opted for the simpler relaxed cylindrical model. To identify the surface points, we computed a 3D gradient for the cell-masked and thresholded scaffold z-stacks and then reported those contact surface points that have non-zero gradient values.
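The final surface-point extraction can be sketched as follows: after cell masking and thresholding, boundary voxels of the binary scaffold segment are those where the 3D gradient is non-zero (array names are illustrative):

```python
import numpy as np

def surface_points(scaffold_segment):
    """Surface voxels of a binary (cell-masked, thresholded) scaffold
    segment: segment voxels where the 3D gradient of the binary volume
    is non-zero, i.e. voxels on the boundary of the segment."""
    vol = scaffold_segment.astype(float)
    gz, gy, gx = np.gradient(vol)  # central differences along z, y, x
    grad_mag = np.sqrt(gz**2 + gy**2 + gx**2)
    return scaffold_segment & (grad_mag > 0)
```

For a solid 3 × 3 × 3 block, this reports all 26 boundary voxels and excludes the single fully interior voxel.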
Validation
Validation of geometrical models
The validation of a planar geometrical model for the SC scaffold is performed directly by comparing the surface roughness reference measurements from AFM images with the voxel dimensions of each CLSM z-stack. If the surface roughness is smaller than the voxel dimensions, then the planar model is suitable. Similarly, the validation of a cylindrical geometrical model for fiber scaffolds (MF and MMF) is achieved by comparing the diameters of a single fiber from multi-view 2D SEM images.
Assessing accuracy of fiber scaffold segmentation
Given five statistical models and three geometrical models, we compare their accuracy and select one model from each category to minimize the contact verification effort. The accuracy assessment is achieved by measuring the accuracy of the algorithms on the single fiber data acquired in SEM and CLSM imaging modalities (see section "Algorithmic model validation measurements"). The validation is performed by extracting radius measurements along a single fiber (multiple fields of view) and comparing the radius histogram obtained from the eight algorithms applied to CLSM z-stacks to the radius histogram obtained from 2D SEM images.
The validation methodology consists of the following steps:

(1) acquire multiple spatially overlapping fields of view (FOVs) from a sample with single fibers in the SEM and fluorescent modalities described in section "Algorithmic model validation measurements",

(2) process 2D SEM images to extract radius measurements,

(3) process 3D CLSM z-stacks to extract radius measurements, and

(4) rank-order the designed algorithms applied to the CLSM z-stacks based on the comparison of their radius histograms with the radius histogram derived from the SEM images.
The above processing steps involve stitching multiple FOVs, fiber segmentation, skeletonization of fiber segments, identification of the main reference fiber, and selection of fiber skeleton points that correspond to the main reference fiber. Figure 10 illustrates the sequence of steps used to extract radii from CLSM z-stacks (i.e., step 3 of the validation). The entire validation sequence is detailed in Additional file 10.
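Step 4, the rank-ordering of algorithms by histogram comparison, can be sketched as follows. The L1 distance between normalized histograms is one simple choice of comparison metric (an assumption; the paper does not specify the metric here), and the names and radius range are illustrative:

```python
import numpy as np

def histogram_distance(radii_a, radii_b, bins=50, r_range=(0.0, 10.0)):
    """L1 distance between normalized radius histograms (one simple
    comparison metric; see lead-in)."""
    ha, _ = np.histogram(radii_a, bins=bins, range=r_range, density=True)
    hb, _ = np.histogram(radii_b, bins=bins, range=r_range, density=True)
    return float(np.abs(ha - hb).sum())

def rank_algorithms(sem_radii, clsm_radii_by_algorithm):
    """Rank-order segmentation algorithms by how closely their CLSM radius
    histograms match the SEM reference histogram (best first)."""
    scores = {name: histogram_distance(sem_radii, radii)
              for name, radii in clsm_radii_by_algorithm.items()}
    return sorted(scores, key=scores.get)
```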
Verification of cell segmentation and cell-scaffold contacts
Due to the large volume of [cell, scaffold] image data, we employed automated software-based contact point measurements. For performance evaluation of the software, an efficient mechanism for visually verifying all contact results was devised, since it is very difficult to create ground truth for 3D contact points. The challenges of designing such a verification system include:

(1) 3D inspection from multiple view angles,

(2) simultaneous presentation of co-registered 3D channels and contacts,

(3) access to the verification system from multiple remote locations due to geographically distributed experts, and

(4) definition of verification labels to assure consistency of label assignment.
These verification challenges must be resolved under the constraints of minimum verification time and maximum accuracy.
To address the first challenge, we designed a web-based verification system for cell segmentation and cell-scaffold contacts. For cell segmentation, the multiple-view challenge is addressed by presenting side-by-side three orthogonal max projections of the raw cell and cell segment z-stacks per cell. The max projections are sufficient to verify the shape accuracy of cell segments because the cell processing steps are designed to report a compact cell shape. For contacts, the same challenge is tackled by creating six movies embedded in a web page per [cell, scaffold] pair. However, due to the 3D complexity of contact shapes, max projections are insufficient for contact verification. We opted for creating animations to convey multiple views and to accommodate the time vs. accuracy constraints. Animations are accompanied by controls that allow the movies to play, pause, and rewind, as well as to synchronize any subset of them. Figure 11 displays examples of the web-based verification of cell segmentation and cell-scaffold contacts.
The second challenge of simultaneous presentation of co-registered channels is only relevant to contact verification. It is addressed by forming pseudo-color video frames that contain information about the cell, the scaffold, and their contact. The semantic meaning of the [red, green, blue] pseudo-colors is overlaid in yellow text on the videos in Fig. 11. Furthermore, the cell and scaffold channels have different dynamic ranges, which affects the rendering. To determine the optimal value for gamma correction, we performed a small user study using a set of z-stacks enhanced by a range of gamma values and presented as movies.^{Footnote 1} Based on the user study, the gamma value for correcting scaffold intensities was set to 1.4 in all movies presented for contact verification.
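Gamma correction of the dimmer scaffold channel can be sketched as follows; the encoding convention out = max · (in/max)^{1/γ}, which brightens mid-range intensities for γ > 1, is our assumption since the paper does not state the formula:

```python
def gamma_correct(intensity, gamma=1.4, max_val=255):
    """Gamma-correct one intensity value on an 8-bit scale.
    Assumes the convention out = max * (in/max)**(1/gamma), so gamma > 1
    brightens mid-range scaffold voxels before rendering."""
    return max_val * (intensity / max_val) ** (1.0 / gamma)
```

Applied per voxel with γ = 1.4, this raises dim scaffold intensities while leaving black (0) and full-scale (255) unchanged.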
The third challenge of accessing the verification system is approached by designing a web solution. The design uses the AngularJS JavaScript library [26], which supports declaring dynamic views in web applications (transitions between any two data sets for verification). The web solution also leverages the current support for movie formats in the HTML5 web technology.
To address the fourth challenge and establish consistent verification of labels across multiple viewers, it is important to define quantitative metrics for all labels. Although the verification labels are assigned subjectively, they are defined as percentages or ratios of voxels that are accurately assigned to a cell or a contact based on a visual inspection. The labels for cell segmentation are created by thresholding the percentage of correctly labeled cell pixels at [90%, 100%] (“good”), [75%, 90%) (“correct”), and [0%, 75%) (“incorrect”), and by recognizing missed cells with a label “missed.” The case of “missed” occurs when multiple cells are in one FOV and the segment of interest is not selected by the algorithm. The labels for contacts are expressed in terms of error ratios with respect to the total volume (statistical model) or surface (geometrical model) of a cell as [0, 1/12] (“excellent”), [1/12, 1/3] (“acceptable”) and [1/3, 1] (“bad”).
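The label thresholds above can be expressed as a small mapping; treating the interval boundaries as inclusive on the lower-error side is our assumption, since the stated intervals share endpoints:

```python
def segmentation_label(pct_correct):
    """Map the visually estimated percentage of correctly labeled cell pixels
    to a verification label, per the thresholds in the text.
    (A cell not selected at all by the algorithm would be labeled "missed".)"""
    if pct_correct >= 90:
        return "good"
    if pct_correct >= 75:
        return "correct"
    return "incorrect"

def contact_label(error_ratio):
    """Map a contact error ratio (relative to total cell volume for the
    statistical model, or total cell surface for the geometrical model)
    to a verification label. Boundary handling (<=) is our assumption."""
    if error_ratio <= 1 / 12:
        return "excellent"
    if error_ratio <= 1 / 3:
        return "acceptable"
    return "bad"
```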
Results
The experimental results are presented in the order in which the steps of the cell-scaffold contact methodology are executed. The steps are denoted so as to map to the methodology overview shown in Fig. 6.
Model and segment cell
The cell segmentation algorithm was executed on all 414 cell channel z-stacks (see Additional file 2 and [7]). The segmentation computation took 84.5 h (24 h for 165 SC cells, 33 h for 135 MF cells, and 27.5 h for 114 MMF cells). The time was benchmarked using a single-threaded Java program on a Mac Pro desktop computer running Mac OS X (CPU: 3.2 GHz Quad-Core Intel Xeon, RAM: 16 GB 1066 MHz DDR3, and data residing on a network server with 1 Gbit/s bandwidth).
Verify cell segmentation results
To verify the quality of cell segmentation [7], we deployed a web-based system on a public NIST server at https://isg.nist.gov/CellScaffoldContact/app/index.html. The web-based system contains 414 cells that have been labeled by three cell biologists for this study. We summarized the ratios of label agreement between any two experts in Table 6.
Following the precision computation in [7] and based on the values in Table 6, the cell segmentation precision per initial label set is (0.86 + 0.82 + 0.87)/3 = 0.85. Similarly, the cell segmentation precision per combined label is (0.94 + 0.95 + 0.94)/3 = 0.943. Out of 414 cell z-stacks, we identified 15 pairs for which all experts assigned the label {incorrect or missed}. Thus, the cell segmentation accuracy is estimated as (414 − 15)/414 = 0.964. These 15 cells were manually segmented using ImageJ/Fiji (plugin crop3D) [27].
Model and crop region of interest (ROI)
Cell and scaffold z-stacks are cropped according to the bounding boxes of visually verified cell segmentation results to reduce the computational time of further processing. The cropping step leads to a significant data size reduction, as summarized in Table 7. The cropping also reduces RAM requirements since the dimensions of z-stacks are cut down from 2048 × 2048 pixels in X and Y to (200 to 1906) × (153 to 2045) pixels, and from up to 175 frames in Z to 25 to 114 frames. The number of voxels in one z-stack ranges from 1,827,705 to 188,095,516 voxels.
Since we assumed that contact points only exist around cells, we derived the cropping box by adding 10% margins to the cell segment dimensions on each of the X and Y sides. To derive the Z dimension of a cropping box, we looked at the intensity distributions across frames in scaffold z-stacks. The start and end frames of a z-stack for cropping are determined as the inflection points in the second derivative of the z-profile closest to the maximum intensity point along the z-axis. The z-profile is obtained by computing the [X, Z] max projection of a scaffold z-stack, collapsing intensities along the X axis by taking the maximum intensity value for each Z, and smoothing the resulting signal with a Gaussian filter of size 21 and standard deviation 5 (empirically determined). The analysis is illustrated in Fig. 12.
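The z-profile analysis can be sketched in a simplified form; the border clamping, the discrete second derivative, and the "first sign change on each side of the peak" search are our assumptions about details the text leaves open:

```python
import math

def gaussian_kernel(size=21, sigma=5.0):
    """Normalized 1-D Gaussian kernel (size and sigma from the text)."""
    half = size // 2
    k = [math.exp(-((i - half) ** 2) / (2 * sigma ** 2)) for i in range(size)]
    s = sum(k)
    return [v / s for v in k]

def smooth(signal, kernel):
    """Convolve with border clamping so the output keeps the input length."""
    half = len(kernel) // 2
    n = len(signal)
    return [sum(w * signal[min(max(i + j - half, 0), n - 1)]
                for j, w in enumerate(kernel)) for i in range(n)]

def z_profile(xz_max_projection):
    """Collapse an [X, Z] max projection (rows indexed by z) to a 1-D
    z-profile by taking the maximum intensity over X for each z slice."""
    return [max(row) for row in xz_max_projection]

def crop_bounds(profile):
    """Start/end frames as the sign changes (inflections) of the discrete
    second derivative closest to the smoothed profile's maximum (a sketch)."""
    sm = smooth(profile, gaussian_kernel())
    d2 = [sm[i - 1] - 2 * sm[i] + sm[i + 1] for i in range(1, len(sm) - 1)]
    peak = sm.index(max(sm))
    start = next((i + 1 for i in range(peak - 2, 0, -1)
                  if d2[i] * d2[i - 1] < 0), 0)
    end = next((i + 1 for i in range(peak, len(d2) - 1)
                if d2[i] * d2[i + 1] < 0), len(sm) - 1)
    return start, end
```

For a bell-shaped z-profile, the returned bounds sit roughly one effective standard deviation on either side of the intensity peak.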
Statistical modeling: Cell-scaffold contact probabilities from 5 methods
While the algorithms based on geometrical models use implicit shape assumptions, the algorithms based on statistical models use assumptions about intensity models for the background. To estimate the parameters of a background intensity model, we performed a set of experiments described in Additional file 4 and then derived the average and standard deviation of the background from either the first or the last frame of a z-stack (see the algorithms in Additional file 6). Examples of probability results of the five statistical methods are shown in Fig. 13. The figure illustrates that all five algorithms produce visually similar results in a single view, indicating the need for multiple viewing angles during visual verification.
To compare the results quantitatively, we computed the Euclidean distance \( d_{ij} \) between contact point probability estimations from algorithms i and j using the following equation:

$$ d_{ij}=\sqrt{\sum_{x,y,z}{\left({p}_i\left(x,y,z\right)-{p}_j\left(x,y,z\right)\right)}^2} $$

where \( {p}_i\left(x,y,z\right) \) is the contact point probability estimated by algorithm i. Figure 14 and Table 8 summarize the Euclidean distances of the results from the five statistical-model-based algorithms. The Euclidean distance results correspond to an integral in Table 8 and a histogram distribution in Fig. 14 computed from 414 cropped z-stacks (around 11 × 10^{11} voxels) and all pairwise combinations of algorithms A1 to A5. Based on the integral value for A2 compared to A3 (A2-A3) equal to 0.53 in Table 8, we concluded that the A2 and A3 algorithms have very similar probability assignments.
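The pairwise distance between two probability volumes reduces to a standard Euclidean norm over voxels; a minimal sketch, assuming the volumes are flattened into equal-length sequences with the same voxel ordering:

```python
import math

def euclidean_distance(p_i, p_j):
    """Euclidean distance between two contact-probability volumes given as
    flat sequences of per-voxel probabilities in [0, 1] (same voxel order)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p_i, p_j)))
```

Identical probability assignments yield a distance of zero, which is how near-duplicate algorithms (such as A2 and A3) show up in the comparison.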
The methods were implemented in Matlab 2015a and their computational times are documented in Table 9. The benchmarks were acquired on a desktop computer running the Ubuntu 14.04 operating system with an Intel Xeon E5-269 2.4 GHz CPU (8 processors), 32 GB of RAM, and all z-stacks stored on an external drive connected via USB3. Note from the second plot in Fig. 14 that the A2-A3 (red) trace lies on the x-axis. This allows us to eliminate the A3 statistical algorithm since its accuracy is similar to A2 while its computational time is on average 22.1% higher than the execution time of A2.
Geometrical modeling: Cell-scaffold contact from 3 methods
Following the plane model and its corresponding algorithmic implementation in Additional file 7, we computed the plane coefficients for the upper and lower surfaces of each spun coat z-stack. To quantify the goodness-of-fit for weighted least squares, we computed the residual standard deviation \( {STD}_k^{RES} \) per spun coat z-stack and the pooled standard deviation \( {STD}^{POOLED} \) for all SC scaffolds as follows:

$$ {STD}_k^{RES}=\sqrt{\frac{\sum_{i=1}^{n_k}{w}_{ki}{\left({f}_k\left({x}_i,{y}_i,{z}_i\right)\right)}^2}{\overline{w_k}\left({n}_k-p\right)}},\kern2em {STD}^{POOLED}=\sqrt{\frac{\sum_{k=1}^K\left({n}_k-p\right){\left({STD}_k^{RES}\right)}^2}{\sum_{k=1}^K\left({n}_k-p\right)}} $$

where \( {w}_{ki} \) is the weight at position \( \left({x}_i,{y}_i,{z}_i\right) \) in the k-th z-stack, \( \overline{w_k} \) is the average weight of all voxels in the k-th z-stack, \( {f}_k\left({x}_i,{y}_i,{z}_i\right)={ax}_i+{by}_i+{cz}_i+d \) is the residual of the fitted plane evaluated at the i-th point, p = 3 is the number of independent parameters in the plane model, K = 165 is the number of SC scaffold type z-stacks, and \( {n}_k \) is the number of voxels in the k-th z-stack. The minimum and maximum residual standard deviations \( {STD}_k^{RES} \) are 54.1 nm and 188.2 nm, respectively. The pooled standard deviation \( {STD}^{POOLED} \) is 105.1 nm. The distribution of residual standard deviations, as well as the alignment of the planar surface with the data, are included in Additional file 8.
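A sketch of the weighted residual standard deviation and the pooled standard deviation, assuming the conventional forms (weighted sum of squared residuals normalized by the mean weight and the degrees of freedom, and degrees-of-freedom-weighted pooling across z-stacks):

```python
import math

def residual_std(weights, residuals, p=3):
    """Weighted residual standard deviation for one z-stack's plane fit.
    `residuals` are signed deviations of surface voxels from the fitted
    plane; p = 3 independent plane parameters."""
    n = len(weights)
    w_bar = sum(weights) / n                       # average voxel weight
    ss = sum(w * r * r for w, r in zip(weights, residuals))
    return math.sqrt(ss / (w_bar * (n - p)))

def pooled_std(stds, ns, p=3):
    """Pool per-z-stack residual standard deviations over K stacks,
    weighting each by its degrees of freedom n_k - p."""
    num = sum((n - p) * s * s for s, n in zip(stds, ns))
    den = sum(n - p for n in ns)
    return math.sqrt(num / den)
```

With equal weights, `residual_std` reduces to the ordinary residual standard deviation, and pooling identical per-stack values returns that value unchanged.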
Figure 15 shows intermediate results of the geometrical model-based algorithm A6. They include a z-stack after modified Frangi vessel enhancement filtering, cell masking, and thresholding and 3D gradient computation. Due to the 3D nature of the contact surfaces and the large number of fibers intersecting the cell segment, it is hard to visually assess the contact quality from a single frame. To facilitate visual inspection of scaffold segmentation and cell-scaffold contacts, we applied post-processing (skeletonization, radius estimation) and represented the fibers as a sequence of spheres extruded along the skeletal points. However, the additional post-processing steps introduce several sources of uncertainty in contact detection and therefore we used the results shown in Fig. 15 for further processing. During the experimentation, we visually compared the performance and parameter choices of the vesselness methods by Frangi [28], by Sato [29], and by Erdt et al. [30] before choosing Frangi's method. We also considered 2D steerable filters [31] and their 3D extensions [32]. While the 3D steerable filters are theoretically related to the vesselness filters, their publicly available implementation requires much more CPU and RAM resources than the vesselness filter implementation (according to the available implementation of [32], the minimum RAM must be at least 17 times the original volume size). Based on our visual comparison of the steerable filters and vesselness filters, the steerable filters underperformed the vesselness filters in detecting cylindrical fiber surfaces.
Validation of planar and cylindrical geometrical models
From the AFM measurements described in section "Algorithmic model validation measurements", we computed the average RMS roughness and its standard deviation for spun coat films to be 52.35 nm ± 31.76 nm. These statistics are calculated from six spatially distributed spots (RMS: 109, 59, 43.2, 50.6, 39, 13.3). Given the voxel resolution of the z-stacks of 120 nm × 120 nm × 462 nm, the voxel dimensions are 2 to 8 times larger than the average RMS and its standard deviation. This supports our conclusion that the use of a planar geometrical model for spun coat scaffolds is appropriate.
The single fiber SEM measurements from two imaging angles described before allowed a comparison of fiber diameters extracted using DiameterJ (a plugin to ImageJ/Fiji [33]). The differences in fiber diameters were within the 3% error introduced by the SEM image processing needed to extract diameters. Thus, the assumption of a cylindrical fiber model is appropriate.
Assessing fiber scaffold segmentation accuracy using single fibers
We followed the processing workflow shown in Fig. 10. The 2D SEM image analyses are based on the ImageJ/Fiji library [27] and DiameterJ (a plugin to ImageJ/Fiji [33]) while the 3D CLSM z-stack analyses are based on in-house implementations. The stitching vector is constrained to translation and has been estimated using (a) automated stitching (max projection and pairwise stitching, max projection and grid stitching, 3D z-stack pairwise correlation on 4× downsampled data), (b) semi-automated stitching by defining pairs of corresponding points, and (c) manual stitching using max projection and visual alignment of tiles. The skeletonization is based on 3D medial axis thinning algorithms [34]. The radius estimation is computed as the smallest eigenvalue of a covariance matrix from all point coordinates selected using an equal angular spacing in 2D or 3D.
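The covariance-based radius estimate can be illustrated in 2D. For points sampled with equal angular spacing on an ideal circular cross-section of radius r, both in-plane covariance eigenvalues equal r²/2, so the radius is recoverable from the smallest eigenvalue as sqrt(2λ_min); this recovery step is our illustrative assumption, as the paper does not spell out the conversion:

```python
import math

def covariance_2d(points):
    """2x2 covariance of (x, y) cross-section points, returned as (cxx, cxy, cyy)."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    cxx = sum((p[0] - mx) ** 2 for p in points) / n
    cyy = sum((p[1] - my) ** 2 for p in points) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    return cxx, cxy, cyy

def smallest_eigenvalue(cxx, cxy, cyy):
    """Closed-form smallest eigenvalue of a symmetric 2x2 matrix."""
    tr, det = cxx + cyy, cxx * cyy - cxy * cxy
    return tr / 2 - math.sqrt(max(tr * tr / 4 - det, 0.0))

def ring(radius, n=360):
    """Equal-angular-spacing samples on a circle (ideal fiber cross-section)."""
    return [(radius * math.cos(2 * math.pi * k / n),
             radius * math.sin(2 * math.pi * k / n)) for k in range(n)]
```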
Due to the challenges related to stitching FOVs containing straight lines (i.e., stitching offset uncertainty), we evaluated statistics of radius histograms from two sets of detected fiber skeleton points. The two sets contain skeleton points from either all z-stack FOVs (denoted as ALL) or the non-overlapping parts of z-stack FOVs determined based on the estimated stage position and approximate stitching vectors (denoted as Internal). Following the validation steps presented in section "Validation" and Additional file 10, the histograms of radii for the set denoted as ALL are shown in Fig. 16 and a comparative summary of the histogram statistics is presented in Table 10.
Based on the single fiber experiments, we concluded that the mixed-pixel statistical model A2 and the vesselness geometrical model A6 (with σ = 1.0) applied to fluorescence CLSM z-stacks produced the average radius estimates closest to the SEM-based average radius. The SEM radius estimate is obtained from 104,341 skeleton points while the CLSM radius estimates come from 20,000 to 36,000 skeleton points. Given the ratio of SEM to CLSM spatial resolutions 0.12/0.0312 = 3.84, a one-to-one match between SEM and CLSM skeleton points would correspond to 104,341/3.84 ≈ 27,000 CLSM points. The standard deviation of the SEM radius is 0.075 while the standard deviations for method A2 are 0.31 and 0.35, and for method A6 are 0.20 and 0.21. The ratios of radius standard deviations CLSM/SEM (A2: [4.13, 4.67], A6: [2.67, 2.80]) should theoretically be close to the ratio of spatial resolutions 3.84. The maximum difference between 3.84 and the ratio values within the ranges is larger for method A6 than for method A2 (3.84 − 2.67 = 1.17 > 4.67 − 3.84 = 0.83). This reflects the fact that the A6 model is more constrained (it selects only voxels that meet the vesselness model).
With respect to the SEM-based estimates, the error of segmenting a single fiber from fluorescent CLSM and estimating its radius using the statistical A2 method is between |1.1242 − 1.1190|/1.1242 = 0.46% and |1.1242 − 1.1562|/1.1242 = 2.85%. The same error for the geometrical A6 method is between |1.1242 − 1.1139|/1.1242 = 0.92% and |1.1242 − 1.0815|/1.1242 = 3.80%. These errors range between 0.46% and 3.8% of the SEM radius, while our visual estimate of the SEM radius error is about 3%. When the SEM-based errors are compared to the CLSM-based radius standard deviations of the two methods (A2: [0.31, 0.35], A6: [0.2, 0.21]), the errors represent no more than 9.19% and 19% of each method's one standard deviation, respectively (A2: 0.0285/0.31 = 0.0919, A6: 0.0380/0.2 = 0.1900).
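The error percentages above follow from a simple relative-error computation against the SEM reference radius of 1.1242 μm; a sketch that reproduces the reported values:

```python
def relative_error(reference, estimate):
    """Magnitude of the relative radius error with respect to a reference."""
    return abs(reference - estimate) / reference

SEM_RADIUS = 1.1242  # SEM reference radius in micrometers (from the text)
```

For example, `relative_error(SEM_RADIUS, 1.1190)` gives about 0.46% and `relative_error(SEM_RADIUS, 1.0815)` gives about 3.80%, matching the A2 and A6 error bounds in the text.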
Verification of cell-scaffold contact sites
The web-based verification system described in section "Verification of cell segmentation and cell-scaffold contacts" was populated with six movies per [cell, scaffold] pair, which yields 414 × 6 = 2484 movies. This number of movies is generated for each of the two selected contact methods A2 and A6. Each movie is constructed by generating 128 frames of size 640 × 640 pixels with 3 color channels, presented at 15 frames per second. The movies are compressed from 157.3 MB (640 × 640 × 3 bytes × 128 frames = 157.3 MB) to 2.6 MB MP4 videos (H.264 codec) with visually acceptable blur at a bit rate of 4000. The viewing time per video is about 9 s. Total movie time is 372.6 min = 6.21 h per method. An online help document is available to explain the movie content (pseudo-colors, spatial layout of movies, and movie controls).
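The storage and viewing-time budget above can be checked with two one-line computations (the parameter defaults simply restate the numbers from the text):

```python
def raw_movie_bytes(width=640, height=640, channels=3, frames=128):
    """Uncompressed movie size in bytes before MP4/H.264 encoding."""
    return width * height * channels * frames

def total_viewing_hours(n_movies=2484, seconds_per_movie=9):
    """Total sequential viewing time for one contact method, in hours."""
    return n_movies * seconds_per_movie / 3600.0
```

This confirms the 157.3 MB raw frame budget per movie and the 6.21 h of total viewing time per method.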
The movie frame generation is accomplished by loading data using the ITK^{Footnote 2} library with the libNifti^{Footnote 3} loader, creating a window using the Qt library and QtCreator^{Footnote 4} environment, rendering the window content using OpenGL^{Footnote 5}, and then saving frames with OpenCV^{Footnote 6}. The generated frames are aggregated into a movie using the ffmpeg library.^{Footnote 7} Computational benchmarks of the movie generation are summarized in Table 11. The benchmarks were collected on the Ubuntu 16.04 64-bit operating system with 49.5 GB RAM, 16 processors (Intel Xeon CPU E5620 @ 2.4 GHz), 2× GF 100GL [Tesla C2050/C2070] NVIDIA cards with 6 GB of RAM, and 1× GeForce GTX 760 NVIDIA card with 2 GB of RAM.
The visual verification was conducted by three experts over the two contact detection methods (statistical A2 and geometrical A6) and three scaffold types. The labels for each cell-scaffold contact detection span excellent, acceptable, and bad. For the statistical model-based method, the following labels were defined:

Excellent: visually, the error does not exceed 1/12th of the total volume of the cell.

Acceptable: visually, the combined errors do not exceed ~1/3rd of the total volume of the cell.

Bad: visually, the combined errors exceed ~1/3rd of the total volume of the cell.
For the geometrical model-based method, the labels were defined in the same way but the total volume of the cell was replaced by the total surface of the cell. Figure 17 illustrates two cell-scaffold contact examples that were unanimously labeled by all three experts as excellent (top) and bad (bottom) for both the statistical A2 and geometrical A6 methods.
The total time to complete the verification by the three experts was 6 h + 8 h + 6 h = 20 h. The results of the visual verification are reported as proportions of the three labels (excellent, acceptable, bad) per model (A2, A6), scaffold type (SC, MF, MMF), and expert (E1, E2, E3) in Fig. 18. The proportional values are most distinguishable for SC, most compressed for MF, and most unpredictable for MMF. For the SC scaffold type and the excellent rating, the geometrical model A6 is clearly better than the statistical model A2. For the MF scaffold type, the dominant rating is bad for the statistical model while it is acceptable for the geometrical model. For the MMF scaffold type, the ranking of proportion values varies across experts.
Overall accuracy of the contact measurement was defined as the ratio of the contacts labeled by at least one of the experts as excellent or acceptable over the total number of [cell, scaffold] pairs. According to this definition, the accuracy of the statistical method A2 is (414 − 155)/414 = 0.626 and the accuracy of the geometrical method A6 is (414 − 27)/414 = 0.935. Precision of the contact measurements was derived as an average of the probabilities that two experts agreed on a label. These pairwise ratios of label agreement are summarized in Table 12. From the results in Table 12, the contact precision for the statistical method (A2) is (0.74 + 0.75 + 0.81)/3 = 0.767 and for the geometrical method (A6) is (0.86 + 0.86 + 0.91)/3 = 0.876.
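The accuracy and precision definitions above can be stated as two short functions whose defaults reproduce the reported numbers:

```python
def accuracy(total_pairs, bad_by_all_experts):
    """Fraction of [cell, scaffold] pairs rated excellent or acceptable
    by at least one expert (i.e., not rated bad by all of them)."""
    return (total_pairs - bad_by_all_experts) / total_pairs

def precision(pairwise_agreement_ratios):
    """Mean pairwise label-agreement ratio over all expert pairs."""
    return sum(pairwise_agreement_ratios) / len(pairwise_agreement_ratios)
```

For example, `accuracy(414, 155)` reproduces the 0.626 value for A2 and `precision([0.74, 0.75, 0.81])` reproduces its 0.767 precision.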
Discussion
Quantitative discussion
Verification-based accuracy and precision of cell-scaffold contact methods
Based on section "Verify cell segmentation results", we assessed the accuracy of the cell segmentation method based on visual verification by three experts to be 0.964 with precision 0.943 for the two groups of labels {good, correct} and {incorrect, missed}. Based on a similar assessment of cell-scaffold contacts with labels {excellent, acceptable} and {bad} in section "Verification of cell-scaffold contact sites", the accuracy of the statistical method A2 is 0.626 with 0.767 precision and the accuracy of the geometrical method A6 is 0.935 with 0.876 precision. By comparing the accuracies of cell segments and cell-scaffold contact sites based on visual verifications, the cell segmentation algorithm is more accurate and more precise than the cell-scaffold contact algorithms. These differences present the tradeoffs between the reliability and potential prediction power of cell shape versus cell-scaffold contact shape according to Fig. 1.
Validation, model fitting and verification-based accuracy of the planar model for spun coat scaffolds
Based on the planar model validation in section "Validation of planar and cylindrical geometrical models", the AFM-derived surface roughness of 52.35 nm ± 31.76 nm was 2 to 8 times smaller than the CLSM resolution, which supported the use of a planar model. Based on the model fitting-based accuracy in section "Geometrical modeling: Cell-scaffold contact from 3 methods", the planar fit of spun coat in measured CLSM z-stacks had a pooled standard deviation \( {STD}^{POOLED} \) of 105.1 nm, which is smaller than any of the three voxel dimensions (120 nm × 120 nm × 462 nm). The visual verification of the SC scaffold type confirmed the low value of the pooled standard deviation since the three experts reported only 18, 6, and 8 contacts, respectively, as "bad" out of 165 pairs, which corresponds to 10.91%, 3.64%, and 4.85% of the number of SC scaffolds.
Validation-based accuracy of the relaxed cylindrical geometrical model for fiber scaffolds
For MF scaffolds, the fiber radius fit was evaluated using the single fiber experiments in section "Assessing fiber scaffold segmentation accuracy using single fibers" and the errors ranged between 0.46% and 3.8% of the SEM radius. These error values were comparable to the 3% radius error from single fiber SEM images based on our visual inspection. The fiber radius in the single fiber experiments was 1.1242 μm ± 0.075 μm and can be related to the results of visual verification for the MF scaffold (radius ≈ 1.3 μm). The visual verification of the MF scaffold type yielded 41, 38, and 33 contacts labeled as "bad" out of 135 pairs (30.37%, 28.15%, and 24.44%). By comparing the algorithmic errors observed from the single fiber experiment of fiber radii and from the visual verification of contacts, we could conclude that the cell-MF scaffold contact has about 13× worse error than a single fiber radius error (contact: 30.37%, 28.15%, and 24.44% errors per expert versus radius: 0.46% to 3.8% errors per point selection; average contact error/average radius error = 27.65/2.13 ≈ 13). The magnitude of this ratio illustrates the complexity of cell-MF scaffold contact versus single fiber radius measurements and the challenges associated with multiple touching fibers and channel bleed-through. Interestingly, the visual verification of MMF scaffolds (radius ≈ 0.55 μm) led to 27, 11, and 9 contacts labeled as "bad" respectively out of 114 pairs (23.68%, 9.65%, and 7.89%), which were smaller errors than those for the MF scaffolds.
Computational and human labor costs
As with many "big data" experiments, the computational and human labor costs are not insignificant. To execute all computational steps of the methodology, it took approximately (a) 84.5 h to segment cells and generate max projections for cell visual verification (data on a network drive), (b) 13.33 h to crop data (all z-stacks stored on an external drive connected via USB3), (c) 19 h to run all five statistical methods (A1-A5) on the 414 z-stacks of [cell, scaffold] pairs and obtain probabilities of contacts, (d) 3.45 h to convert statistical probabilities to binary contacts and 17.25 h to convert vesselness-filtered values to binary contacts (data on a local drive), (e) approximately 3.49 h to run the three geometrical methods (A6, A7, A8) on all 414 pairs (data on a local drive), and (f) 12.37 h to generate movies. The total computational time was approximately 153.39 h. The computational times were collected on five computers and include some input/output overhead in order to accommodate the heterogeneous platforms of the major contributors. We also approximated that the total time spent by the three experts on verifying the cell segmentation was around 4 h + 4 h + 4 h = 12 h and on verifying the cell-scaffold contact sites around 6 h + 8 h + 6 h = 20 h.
Qualitative discussion
Modeling tradeoffs
The cell-scaffold contact methodology consists of modeling, validation, and verification with several tradeoffs. The first tradeoff is related to choosing a model: a general statistical model versus a custom geometrical model. In other words, geometrical models can be specific to each scaffold type (e.g., a planar model for SC and a tube model for MF) while statistical models are more general. Thus, statistical methods can be reused for other experimental scaffolds while geometrical methods would have to be developed for each type of scaffold.
Another tradeoff is between the labor/computational complexity and the number of plausible contact models included in the search space of models. The word "plausible" should be interpreted with caution because a priori assumptions about plausible models are injected into an algorithm. For this reason, to avoid biases, the geometrical models, which make stronger assumptions about the scaffolds than the statistical models, are validated in our study by physics-based orthogonal measurements rather than by visual verification alone.
Physics-based validation and visual verification tradeoffs
We obtained measurement accuracies based on visual verification and on validation using physics-based orthogonal measurements. This poses a tradeoff between the value and cost of the two approaches. The value of visual verification lies in delivering confidence in accuracy measurements at the cost of manual labor. Visual inspection also allows for identifying errors in algorithms and discovering new phenomena. The drawback is that it is a qualitative assessment at a coarse level and that visualization quality varies with each user's display. The value of validation using orthogonal measurements lies in removing human bias, at the cost of lower confidence in accuracy measurements because of different measurement conditions. The advantage of validation is in establishing quantitative assessment at a fine (voxel) level.
There is also the option of establishing accuracy and robustness by using data-driven simulations. In our case, simulations started with the segmentation of an existing [cell, scaffold] pair and the extracted skeletons and radii of fibers, followed by model-based simulations of cross-channel bleeding, optical distortions, and Gaussian noise. However, we are not reporting simulations because they require validating all simulation models, estimating their parameters, and comparing simulations against reference cases to show that the simulations are accurate.
Quality control considerations
The quality of cell-scaffold contact measurements depends highly on the quality of data and models. There is a tradeoff between doing quality control after acquisition and after obtaining a contact measurement. In our study, we eliminated 297 z-stacks of [cell, scaffold] pairs from the 711 automatically acquired z-stacks, which lowered the computational and verification efforts. The elimination of out-of-focus cells and touching cells took approximately 2 h using CLSM software (Leica LAS AF) for browsing the acquired z-stacks. If we had not eliminated the 297 pairs, then the total computational and visual verification time would have increased by a factor of 711/414 = 1.72.
Based on our observations, the most detrimental effect on contact measurements comes from channel bleed-through. In the case of bleed-through, we are unable to extract reliable contact measurements, as opposed to other cases where the effects can be corrected manually (e.g., cell segmentation of touching cells) or by algorithmic design (e.g., cells extending outside of a FOV).
Complexity and heterogeneity considerations
Finally, the complexity of cell-scaffold contact measurements from a TB-sized collection of z-stacks must be addressed by a team with diverse expertise. The diversity leads to a chain of heterogeneous contributions to the final contact measurement in terms of software languages, operating systems, and hardware platforms on which the measurement is performed. Thus, multiple verification milestones become critical to address the complexity and data scale of contact measurement, as well as to eliminate sources of computational errors.
Conclusions
The described object-based contact measurement methodology enabled (a) optimized cell-scaffold contact representations that incorporate a range of statistical and geometrical models, (b) validated 3D contacts using reference measurements, and (c) visual verification and efficient contact measurement of 414 cell-scaffold interactions with two analysis methods over three types of scaffolds, totaling about 1 TB of data. The key contributions come from (1) the contact modeling and validation methodology, (2) the large scale of contact measurements with 100% visual verification, and (3) the web mechanism for disseminating and reviewing contact measurements from a TB-sized collection of z-stacks.
In the near future, the resulting well-characterized cell-scaffold contact measurements will be used to extract and classify shape dimensionality, while the methodology and computational parts can be reused for other colocalization studies. We also plan to compare the accuracy and time needed for contact verification with approaches that utilize the state-of-the-art National Institute of Standards and Technology virtual reality metrology facility.
Abbreviations
AFM: atomic force microscopy

CLSM: confocal laser scanning microscopy

CPU: central processing unit

FOV: field of view

hBMSCs: human bone marrow stromal cells

ICCS: image cross-correlation spectroscopy

MF: large microfibers

MMF: medium microfibers

PLGA: poly(lactic-co-glycolic acid)

RAM: random-access memory

SC: spun coat

SEM: scanning electron microscopy
References
 1.
Comeau JWD, Costantino S, Wiseman PW. A guide to accurate fluorescence microscopy colocalization measurements. Biophys J [Internet]. Elsevier; 2006;91:4611–4622. Available from: http://www.ncbi.nlm.nih.gov/pubmed/17012312.
 2.
Crowley MR, Head KL, Kwiatkowski DJ, Asch HL, Asch BB. The mouse mammary gland requires the actin-binding protein gelsolin for proper ductal morphogenesis. Dev Biol. 2000;225:407–23.
 3.
Fletcher PA, Scriven DRL, Schulson MN, Moore EDW. Multi-image colocalization and its statistical significance. Biophys J [Internet]. Biophysical Society; 2010;99:1996–2005. Available from: https://doi.org/10.1016/j.bpj.2010.07.006.
 4.
Costes SV, Daelemans D, Cho EH, Dobbin Z, Pavlakis G, Lockett S. Automatic and quantitative measurement of protein-protein colocalization in live cells. Biophys J [Internet]. Elsevier; 2004;86:3993–4003. Available from: http://www.ncbi.nlm.nih.gov/pubmed/15189895.
 5.
Basson MA. Signaling in cell differentiation and morphogenesis. Cold Spring Harb Perspect Biol. 2012;4:1–21.
 6.
Florczyk SF, Simon M, Juba D, Pine PS, Sarkar S, Chen D, et al. 3D cellular morphotyping of scaffold niches. 32nd South Biomed Eng Conf [Internet]. Shreveport, Louisiana; 2016. Available from: http://coes.latech.edu/sbec2016/.
 7.
Bajcsy P, Simon M, Florczyk S, Simon C, Juba D, Brady M. A method for the evaluation of thousands of automated 3D stem cell segmentations. J Microsc [Internet]. 2015;260:363–376. Available from: http://www.ncbi.nlm.nih.gov/pubmed/26268699.
 8.
García AJ, Reyes CD. The control of human mesenchymal cell differentiation using nanoscale symmetry and disorder. J Dent Res. 2005;84:407–13.
 9.
Kumar G, Tison CK, Chatterjee K, Pine PS, Mcdaniel H, Salit ML, et al. The determination of stem cell fate by 3D scaffold structures through the control of cell shape. Biomaterials. 2011;32:9188–96.
 10.
McBeath R, Pirone DM, Nelson CM, Bhadriraju K, Chen CS. Cell shape, cytoskeletal tension, and RhoA regulate stem cell lineage commitment. Dev Cell. 2004;6:483–95.
 11.
Ruckh TT, Kumar K, Kipper MJ, Popat KC. Osteogenic differentiation of bone marrow stromal cells on poly(epsilon-caprolactone) nanofiber scaffolds. Acta Biomater [Internet]. 2010;6:2949–59. Available from: http://dx.doi.org/10.1016/j.actbio.2010.02.006
 12.
Kilian KA, Bugarija B, Lahn BT, Mrksich M. Geometric cues for directing the differentiation of mesenchymal stem cells. Proc Natl Acad Sci [Internet]. 2010;107:4872–7. Available from: http://www.pnas.org/cgi/doi/10.1073/pnas.0903269107
 13.
Dalby MJ, Gadegaard N, Tare R, Andar A, Riehle MO, Herzyk P, et al. The control of human mesenchymal cell differentiation using nanoscale symmetry and disorder. Nat Mater. 2007;6:997–1003.
 14.
Chen CS, Mrksich M, Huang S, Whitesides GM, Ingber DE. Geometric control of cell life and death. Science [Internet]. 1997;276:1425–1428. Available from: https://www.ncbi.nlm.nih.gov/pubmed/9162012.
 15.
Mendicino M, Bailey AM, Wonnacott K, Puri RK, Bauer SR. MSC-based product characterization for clinical trials: an FDA perspective. Cell Stem Cell [Internet]. Elsevier Inc.; 2014;14:141–145. Available from: http://dx.doi.org/10.1016/j.stem.2014.01.013.
 16.
Smith LA, Liu X, Hu J, Wang P, Ma PX. Enhancing osteogenic differentiation of mouse embryonic stem cells by nanofibers. Tissue Eng Part A. 2009;15:1855–64.
 17.
Smith LA, Liu X, Hu J, Ma PX. The influence of threedimensional nanofibrous scaffolds on the osteogenic differentiation of embryonic stem cells. Biomaterials. 2009;30:2516–22.
 18.
Kumar G, Waters MS, Farooque TM, Young MF, Simon CG Jr. Freeform fabricated scaffolds with roughened struts that enhance both stem cell proliferation and differentiation by controlling cell shape. Biomaterials. 2012;33:4022–30.
 19.
Khetan S, Guvendiren M, Legant WR, Cohen DM, Chen CS, Burdick JA. Degradationmediated cellular traction directs stem cell fate in covalently crosslinked threedimensional hydrogels. Nat Mater. 2013;12:458–65.
 20.
Chatterjee K, LinGibson S, Wallace WE, Parekh SH, Lee YJ, Cicerone MT, et al. The effect of 3D hydrogel scaffold modulus on osteoblast differentiation and mineralization revealed by combinatorial screening. Biomaterials. 2010;31:5051–62.
 21.
Florczyk SJ, Leung M, Li Z, Huang JI, Hopper RA, Zhang M. Evaluation of threedimensional porous chitosanalginate scaffolds in rat calvarial defects for bone regeneration applications. J Biomed Mater Res Part A. 2013;101:2974–83.
 22.
Liao S, Nguyen LTH, Ngiam M, Wang C, Cheng Z, Chan CK, et al. Biomimetic nanocomposites to control osteogenic differentiation of human mesenchymal stem cells. Adv Healthc Mater. 2014;3:737–51.
 23.
Gasmi H, Danede F, Siepmann J, Siepmann F. Does PLGA microparticle swelling control drug release? New insight based on single particle swelling studies. J Control Release [Internet]. Elsevier B.V.; 2015;213:120–127. Available from: http://dx.doi.org/10.1016/j.jconrel.2015.06.039.
 24.
Sezgin M, Sankur B. Survey over image thresholding techniques and quantitative performance evaluation. J Electron Imaging [Internet]. 2004;13:146–165. [cited 2 Oct 2017] Available from: http://pequan.lip6.fr/~bereziat/pima/2012/seuillage/sezgin04.pdf.
 25.
Manniesing R, Viergever M, Niessen W. Vessel enhancing diffusion. Insight [Internet]. 2009;10:1–2. Available from: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.124.2131&rep=rep1&type=pdf.
 26.
Google. AngularJS [Internet]. 2017 [cited 14 Sep 2017]. Available from: https://angularjs.org/
 27.
Schindelin J, Arganda-Carreras I, Frise E, Kaynig V, Longair M, Pietzsch T, Preibisch S, Rueden C, Saalfeld S, Schmid B, Tinevez JY, White DJ, Hartenstein V, Eliceiri K, Tomancak P, Cardona A. Fiji: an open-source platform for biological-image analysis. Nat Methods. 2012;9:676–82.
 28.
Frangi AF, Niessen WJ, Vincken KL, Viergever MA. Multiscale vessel enhancement filtering. Med Image Comput Comput Assist Interv – MICCAI’98. Lect Notes Comput Sci vol 1496. 1998;1496:130–137.
 29.
Sato Y, Nakajima S, Atsumi H, Koller T, Gerig G, Yoshida S, et al. 3D multi-scale line filter for segmentation and visualization of curvilinear structures in medical images. CVRMed-MRCAS’97: First Jt. Conf. Comput. Vision, Virtual Real. Robot. Med. Med. Robot. Comput. Assist. Surg., Grenoble, France, March 19–22, 1997, Proc. [Internet]. Springer Berlin Heidelberg; 1997. p. 213–222. Available from: http://link.springer.com/chapter/10.1007/BFb0029240.
 30.
Erdt M, Raspe M, Suehling M. Automatic hepatic vessel segmentation using graphics hardware. In: Dohi T, Sakuma I, Liao H, editors. Med. Imaging Virtual Real. Lect. Notes Comput. Sci. Tokyo, Japan: Springer Berlin Heidelberg; 2008. p. 403–12.
 31.
Jacob M, Unser M. Design of steerable filters for feature detection using Canny-like criteria. IEEE Trans Pattern Anal Mach Intell. 2004;26:1007–19.
 32.
Aguet F, Jacob M, Unser M. Threedimensional feature detection using optimal steerable filters. Proc  Int Conf Image Process ICIP. 2005;2:1158–61.
 33.
Hotaling NA, Bharti K, Kriel H, Simon CG Jr. DiameterJ: a validated open source nanofiber diameter measurement tool. Biomaterials. 2015;61:327–38.
 34.
Lee TC, Kashyap RL, Chu CN. Building skeleton models via 3-D medial surface/axis thinning algorithms. CVGIP Graph Model Image Process. 1994;56:462–78.
 35.
Bolte S, Cordelieres FP. A guided tour into subcellular colocalisation analysis in light microscopy. J Microsc. 2006;224:213–32.
 36.
Indhumathi C, Cai YY, Guan YQ, Opas M. An automatic segmentation algorithm for 3D cell cluster splitting using volumetric confocal images. J Microsc [Internet]. 2011 [cited 15 Sep 2014];243:60–76. Available from: http://www.ncbi.nlm.nih.gov/pubmed/21288236.
 37.
Chen J, Kim OV, Litvinov RI, Weisel JW, Alber MS, Chen DZ. An automated approach for fibrin network segmentation and structure identification in 3D confocal microscopy images. 2014 IEEE 27th Int. Symp. Comput. Based Med. Syst. [Internet]; 2014. p. 173–8. [cited 15 Sep 2014]. Available from: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6881871
 38.
McCullough DP, Gudla PR, Harris BS, Collins JA, Meaburn KJ, Nakaya MA, et al. Segmentation of whole cells and cell nuclei from 3D optical microscope images using dynamic programming. IEEE Trans Med Imaging [Internet]. 2008;27:723–34. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2730109/?tool=pmcentrez
 39.
Lin G, Adiga U, Olson K, Guzowski JF, Barnes CA, Roysam B. A hybrid 3D watershed algorithm incorporating gradient cues and object models for automatic segmentation of nuclei in confocal image stacks. Cytometry A [Internet]. 2003;56:23–36. [cited 15 Sep 2014]. Available from: http://www.ncbi.nlm.nih.gov/pubmed/14566936
 40.
Herberich G, Windoffer R, Leube R, Aach T. 3D segmentation of keratin intermediate filaments in confocal laser scanning microscopy. Annu Int Conf IEEE Eng Med Biol Soc [Internet]. Boston, MA; 2011. p. 7751–4. Available from: http://www.ncbi.nlm.nih.gov/pubmed/22256135.
 41.
Bajcsy P, Cardone A, Chalfoun J, Halter M, Juba D, Kociolek M, Majurski M, et al. Survey statistics of automated segmentations applied to optical imaging of mammalian cells. BMC Bioinformatics. 2015;16:1–28.
 42.
Parrilli A, Pagani S, Maltarello MC, Santi S, Salerno A, Netti PA, et al. Three-dimensional cellular distribution in polymeric scaffolds for bone regeneration: a micro-CT analysis compared to SEM, CLSM and DNA content. J Microsc. 2014;255:20–9.
 43.
Welf ES, Driscoll MK, Dean KM, Schäfer C, Chu J, Davidson MW, et al. Quantitative multiscale cell imaging in controlled 3D microenvironments. Dev Cell. 2016;36:462–75.
Acknowledgements
We would like to acknowledge the team members of the computational science in biological metrology project at NIST for providing invaluable inputs to our work. Specifically, we would like to thank Joe Chalfoun and Michael Majurski for assisting Steve Florczyk with automating the Leica acquisition of z-stack pairs. We would also like to acknowledge Andrew Wang (SURF student), who performed initial comparisons of multiple existing fiber segmentation software packages and initial validation of fiber segmentation using simulations of single fibers. Finally, we would like to express our gratitude for the review comments from James Filliben and Antonio Cardone during our NIST-internal review.
The work has been approved by the NIST Institutional Review Board under the project title “3D Tissue Scaffolds” with the determination that the research project does not involve human subjects for the purposes of the common rule for the protection of human subjects. The NIST principal investigator performing or overseeing research involving materials/data referenced in this recommendation is Carl G. Simon, Jr.
Funding
The funding was provided by the National Institute of Standards and Technology. Stephen Florczyk, Nathan Hotaling, and Nicholas Schaub were supported by a postdoctoral fellowship from the National Research Council. The cells (hBMSCs) employed in this work were purchased from the Tulane Center for Gene Therapy (NCRR-NIH P40RR017447).
Availability of data and materials
The web-based verification system is publicly accessible at https://isg.nist.gov/CellScaffoldContact/app/index.html. It contains (1) 2D images of three orthogonal projections of raw cell z-stacks, side-by-side with three orthogonal projections of segmented cell z-stacks, for 414 cells, (2) six movies of rotating combinations of pseudo-color layers with the segmented cell, the raw scaffold channel with gamma correction, and binary contact points for each of the 414 cell-scaffold contacts, where the 3D contacts were computed using the statistical mixed-pixel spatial model, and (3) six movies of the same rotating combinations for each of the 414 cell-scaffold contacts, where the 3D contacts were computed using the geometrical spatial model for scaffolds (plane for spun coat, cylinder for large microfiber and medium microfiber scaffolds).
The scaffold z-stacks enhanced by a range of gamma values are available at https://isg.nist.gov/CellScaffoldContact/app/pages/docs/gammaCorrection.html. They are presented as movies and were used during a user study to select an optimal gamma value.
To enable easy dissemination of the raw and processed data, we converted each series of TIFF files representing one z-stack into a single file stored in the FITS file format. To lower the download time, we prepared all files after the cropping step and compressed them using the 7-Zip utility. The raw cell and scaffold z-stacks were compressed from 41.01 GB to 29.73 GB, while the segmented cell z-stacks were compressed from 10.30 GB to 38.91 MB. The data are available for download from https://isg.nist.gov/deepzoomweb/data/stemcellmaterialinteractions and contain the cropped raw z-stacks of cells and scaffolds, the masks of cell segmentation, and the masks of cell-scaffold contacts obtained by the statistical and geometrical methods.
Disclaimer
Commercial products are identified in this document in order to specify the experimental procedure adequately. Such identification is not intended to imply recommendation or endorsement by the National Institute of Standards and Technology, nor is it intended to imply that the products identified are necessarily the best available for the purpose.
Author information
Affiliations
Contributions
All coauthors contributed to some part of the wet lab experimental and computational modeling work and/or to the manuscript preparation. PB – design of statistical segmentation algorithms, analysis of cell segmentation verification labels and manual cropping for low-quality segments, fluorescent single-fiber analysis, comparison of SEM and fluorescent CLSM single-fiber results, adaptation and deployment of the web-based cell-scaffold contact verification system, execution of movie creation and geometrical segmentation methods, computational benchmarking, project coordination, data and software dissemination, and writing the original and camera-ready manuscript versions. SY – implementation and execution of statistical segmentation algorithms with acquired pairs, spun coat scaffold hyperplane estimation and residual standard deviation quantification, z-stack cropping analysis and execution, cell/scaffold/background and contact probability estimation, alignment of z-stack probability maps, software dissemination, and contributions to preparing the camera-ready manuscript. SF – preparation of imaged biological specimens, fluorescent acquisition of [cell, scaffold] pairs of z-stacks, visual verification of acquired pairs, and visual verification of cell segmentations and cell-scaffold contacts. NH – preparation of nanofiber scaffolds, SEM acquisition of single fibers, SEM single-fiber analysis, and visual verification of cell segmentations and cell-scaffold contacts. MS – adaptation and execution of cell channel segmentation from past work, design and deployment of the web-based cell segmentation verification system, and software dissemination. PS – design of geometrical segmentation algorithms, skeleton and radius estimation from single-fiber z-stacks, and movie creation for acquired pairs. NS – fluorescent acquisition of single fibers, and visual verification of acquired pairs of z-stacks. CS – visual verification of cell segmentations and cell-scaffold contacts, writing the manuscript, and project coordination. MB and RS – overall strategic direction for the Information Systems Group and the Software and Systems Division. All authors read and approved the final manuscript.
Corresponding authors
Ethics declarations
Authors’ information
Peter Bajcsy received his Ph.D. in Electrical and Computer Engineering in 1997 from the University of Illinois at Urbana-Champaign (UIUC) and an M.S. in Electrical and Computer Engineering in 1994 from the University of Pennsylvania (UPENN). He worked for machine vision (Cognex), government contracting (Demaco/SAIC), and research and educational (NCSA/UIUC) institutions before joining the National Institute of Standards and Technology (NIST) in June 2011. Peter’s area of research is large-scale image-based analyses and syntheses using mathematical, statistical, and computational models while leveraging computer science fields such as image processing, machine learning, computer vision, and pattern recognition. For more information, see https://www.nist.gov/people/bajcsypeter.
Soweon Yoon received the Ph.D. degree from the Department of Computer Science and Engineering, Michigan State University, in 2014, and the B.S. and M.S. degrees from the School of Electrical and Electronic Engineering, Yonsei University, Seoul, South Korea, in 2006 and 2008, respectively. She has been a Scientific Research Specialist at Dakota Consulting Inc., associated with the National Institute of Standards and Technology, since September 2015. She was a research associate at Michigan State University and the National Institute of Standards and Technology from July 2014 to July 2015. Her research interests include image processing, pattern recognition, and computer vision in the areas of biometrics, bioimage analysis, and 3D computer vision.
Stephen Florczyk earned a Ph.D. in Materials Science and Engineering at the University of Washington in 2012. He earned a B.S. and an M.S. at Alfred University in Ceramic Engineering (2004) and in Biomedical Materials Engineering Science (2006), respectively. He completed a National Research Council Postdoctoral Fellowship at the National Institute of Standards and Technology (NIST) from 2013 to 2015. Dr. Florczyk joined the University of Central Florida (UCF) in August 2015 as an Assistant Professor in Materials Science & Engineering and Director of the Biomaterials for Tissue Engineering and Cancer Research lab. Dr. Florczyk is an expert in materials processing and characterization, cell culture trials, and animal studies. His research group focuses on the development of biomaterial scaffolds for studying cell-material interactions and for tissue engineering and tumor microenvironment applications. For more information, see http://mse.ucf.edu/florczyk/.
Nathan A. Hotaling received his B.S. degree in Mechanical Engineering at the University of Central Florida in May 2007. He then obtained an M.S. in clinical research and a Ph.D. in Biomedical Engineering from Emory University and the Georgia Institute of Technology in 2013. He then held a postdoctoral position at the National Institute of Standards and Technology from 2014 to 2015. He is now at the National Institutes of Health working as a Research Fellow. His research interests include tissue engineering, stem cell biology, biomaterials, image analysis, computer vision, statistical modeling, and translational research.
Mylene Simon received her A.A.S. degree in biology and bioinformatics engineering from the University of Auvergne, France, in 2010 and her M.S. degree in computer science from the engineering school ISIMA (Institut Supérieur d’Informatique, de Modélisation et de leurs Applications), France, in 2013. She worked for 4 months as a bioinformatician trainee for the French CNRS Institute of Human Genetics in 2010 and for 11 months as a software engineer trainee for two French IT companies in 2012 and 2013. She joined the National Institute of Standards and Technology (NIST) in March 2014 as a Guest Researcher. Mylene’s research focuses on 3D image processing and big data computations.
Piotr M. Szczypinski received his M.Sc. degree in electronics and telecommunications at the Lodz University of Technology (LUT) in 1995, Ph.D. degree in digital image processing at the same university in 2001 and his D.Sc. degree in biocybernetics and biomedical engineering at the Faculty of Automatic Control, Electronics and Computer Science, Silesian University of Technology in 2013. Currently Dr. Szczypinski is an associate professor at the Faculty of Electrical, Electronic, Computer and Control Engineering, LUT. His research interests include deformable models applied to image segmentation and to object recognition, computer analysis of biomedical images for medical diagnosis, and computer vision applications for food quality assessment. For more information, see http://www.eletel.p.lodz.pl/pms/Home.html.
Nicolas J. Schaub received his B.S. degree in Biomedical Engineering (magna cum laude) at Michigan Technological University, Houghton, MI, and his Ph.D. in Biomedical Engineering at Rensselaer Polytechnic Institute, Troy, NY. He currently holds a postdoctoral position at the National Institute of Standards and Technology under a National Research Council (NRC) fellowship. His research interests include biomaterials, tissue engineering, microscopy imaging, and image analysis.
Carl G. Simon, Jr. earned a B.S. in Biology from Bucknell University and a Ph.D. in Biochemistry from the University of Virginia, focusing on signal transduction during human platelet aggregation. He trained as a postdoctoral fellow in the NIST Polymers Division and is currently a staff scientist and Project Leader in the NIST Biosystems and Biomaterials Division. Dr. Simon holds leadership positions in the Society for Biomaterials and is on the Editorial Boards of “Biomaterials” and the “Journal of Biomedical Materials Research – Applied Biomaterials”. His research interests include cell-material interactions, measurement assurance strategies for cell therapy products, the effect of scaffold properties on stem cell morphology and differentiation, and measurements for scaffold characterization. For more information, see https://www.nist.gov/people/carlsimonjr.
Mary Brady received the B.S. degree in Computer Science and Mathematics from Mary Washington College in May 1985, and the M.S. degree in Computer Science from George Washington University in May 1990. She worked at the Naval Surface Warfare Center in Carderock, MD, with primary duties to provide systems- and network-level support for the Center’s Central Computing Facilities. Since joining NIST in July 1992, she has worked in a variety of groups within the Information Technology Laboratory. During this period, she has been responsible for the development and implementation of projects related to the research, standardization, and commercialization of distributed systems technologies.
Ram D. Sriram received a B.Tech. from IIT, Madras, India, and an M.S. and a Ph.D. from Carnegie Mellon University, Pittsburgh, USA. Prior to joining NIST, he was on the engineering faculty (1986–1994) at the Massachusetts Institute of Technology (MIT) and was instrumental in setting up the Intelligent Engineering Systems Laboratory. Dr. Sriram is currently the chief of the Software and Systems Division, Information Technology Laboratory, at NIST. His scientific interests include knowledge-based expert systems, natural language interfaces, machine learning, object-oriented software development, life-cycle product and process models, geometrical modelers, object-oriented databases for industrial applications, health care informatics, bioinformatics, and bioimaging. For more information, see https://www.nist.gov/people/ramdsriram.
Ethics approval and consent to participate
Not applicable
Consent for publication
Not applicable
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Additional files
Additional file 1:
Detailed description of related work. (DOCX 59 kb)
Additional file 2:
Cell segmentation algorithm. (DOCX 25 kb)
Additional file 3:
Model for cropping contact regions of interest. (DOCX 17 kb)
Additional file 4:
Statistical model of background. (DOCX 90 kb)
Additional file 5:
Statistical models for segmenting all scaffold types. (DOCX 192 kb)
Additional file 6:
Algorithms based on statistical models for segmenting all scaffold types. (DOCX 48 kb)
Additional file 7:
Algorithm based on planar geometrical model for segmenting spun coat scaffolds. (DOCX 36 kb)
Additional file 8:
Algorithms based on cylindrical geometrical models for segmenting fiber scaffolds. (DOCX 25 kb)
Additional file 9:
Evaluation of goodnessoffit for planar model used for modeling spun coat scaffolds. (DOCX 796 kb)
Additional file 10:
Validation steps based on 2D SEM and 3D CLSM data of Single Fibers. (DOCX 34 kb)
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
About this article
Cite this article
Bajcsy, P., Yoon, S., Florczyk, S.J. et al. Modeling, validation and verification of three-dimensional cell-scaffold contacts from terabyte-sized images. BMC Bioinformatics 18, 526 (2017). https://doi.org/10.1186/s12859-017-1928-x
Received:
Accepted:
Published:
Keywords
 Colocalization
 Cellular measurements
 Cell-scaffold contact
 Segmentation models
 Contact evaluation
 Web-based verification
 Large-volume 3D image processing