 Research article
 Open Access
Beyond colocalization: inferring spatial interactions between subcellular structures from microscopy images
 Jo A Helmuth^{1, 2},
 Grégory Paul^{1, 2} and
 Ivo F Sbalzarini^{1, 2}
https://doi.org/10.1186/1471-2105-11-372
© Helmuth et al; licensee BioMed Central Ltd. 2010
 Received: 10 March 2010
 Accepted: 7 July 2010
 Published: 7 July 2010
Abstract
Background
Subcellular structures interact in numerous direct and indirect ways in order to fulfill cellular functions. While direct molecular interactions crucially depend on spatial proximity, other interactions typically result in spatial correlations between the interacting structures. Such correlations are the target of microscopy-based colocalization analysis, which can provide hints of potential interactions. Two complementary approaches to colocalization analysis can be distinguished: intensity correlation methods capitalize on pattern discovery, whereas object-based methods emphasize detection power.
Results
We first reinvestigate the classical colocalization measure in the context of spatial point pattern analysis. This allows us to unravel the set of implicit assumptions inherent to this measure and to identify potential confounding factors commonly ignored. We generalize object-based colocalization analysis to a statistical framework involving spatial point processes. In this framework, interactions are understood as position co-dependencies in the observed localization patterns. The framework is based on a model of effective pairwise interaction potentials and the specification of a null hypothesis for the expected pattern in the absence of interaction. Inferred interaction potentials thus reflect all significant effects that are not explained by the null hypothesis. Our model enables the use of a wealth of well-known statistical methods for analyzing experimental data, as demonstrated on synthetic data and in a case study considering virus entry into live cells. We show that the classical colocalization measure typically underexploits the information contained in the data.
Conclusions
We establish a connection between colocalization and spatial interaction of subcellular structures by formulating the object-based interaction analysis problem in a spatial statistics framework based on nearest-neighbor distance distributions. We provide generic procedures for inferring interaction strengths and quantifying their relative statistical significance from sets of discrete objects as provided by image analysis methods. Within our framework, an interaction potential can refer either to a phenomenological or to a mechanistic model of a physicochemical interaction process. This increased flexibility in designing and testing different hypothetical interaction models can be used to quantify the parameters of a specific interaction model or may catalyze the discovery of functional relations.
Keywords
 Monte Carlo
 Enhanced Green Fluorescent Protein
 Interaction Strength
 Near Neighbor
 Distance Threshold
Background
A general biological principle states that cellular function results from the combined interactions of subcellular structures in space and time. Interactions typically manifest themselves through statistical dependencies in the spatial distributions of the involved structures. Here, we adopt this general definition and understand interaction as the collection of all effects that cause significant correlations (above the level predicted by a null hypothesis) in the positions of the participating objects.
Over the last decades, advances in fluorescent markers have enabled probing interactions of subcellular structures in the microscope, either directly or indirectly. The direct approach relies on experiments that generate a signal upon the proximity required for molecular interaction. Indirect approaches are based on independently imaging two populations of interest, and searching for clues of interaction in their spatial distributions. This approach is based on the paradigm that spatial proximity (or colocalization) is a hallmark of many types of physical and chemical interactions between subcellular structures. If two or more structures interact, their spatial distributions hence appear correlated. The reverse, however, is not necessarily true. Presence or absence of significant colocalization does not imply presence or absence of interaction. The reason is that colocalization depends on the specific interaction mechanism: An unobserved third structure may act as a confounding factor (in the statistical sense), making the observed structures appear colocalized even though they do not interact. Furthermore, one can imagine interaction mechanisms that lead to spatial distributions with correlations that are not captured by simple colocalization measures. Hence, the interaction has to be statistically inferred from the data.
Such inference, however, entails a trade-off between the objectives of pattern discovery and statistical detection power. According to these objectives, two complementary approaches to colocalization analysis can be distinguished: Intensity correlation methods capitalize on pattern discovery [1], whereas object-based methods [2] emphasize detection power. Intensity correlation methods quantify correlations in the intensities of different color channels on individual pixels. Intensity correlation methods are straightforward to implement and use. The results, however, may be difficult to interpret since interactions need to be inferred from correlations in intensity space, which is sensitive to the blurring and noise inherent to microscopic imaging systems [3]. Object-based methods quantify the spatial relationships between sets of discrete objects. This requires reducing the image to a set of geometric objects using, e.g., image segmentation or fitting of structure models. Object-based approaches infer interactions from correlations in physical space, which allows constructing intuitive and simple colocalization measures, such as counting the number of overlapping objects [2].
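As a concrete contrast to the object-based measures discussed next, the prototypical intensity correlation method reduces to a pixel-wise Pearson coefficient between the two color channels. A minimal Python sketch (the paper's own implementation is in MATLAB; the flattened intensity vectors here are hypothetical):

```python
import math

def pearson(ch1, ch2):
    """Pixel-wise Pearson correlation coefficient of two intensity
    channels, given as flattened vectors of pixel values."""
    n = len(ch1)
    m1, m2 = sum(ch1) / n, sum(ch2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(ch1, ch2))
    var1 = sum((a - m1) ** 2 for a in ch1)
    var2 = sum((b - m2) ** 2 for b in ch2)
    return cov / math.sqrt(var1 * var2)

# perfectly correlated channels give a coefficient of 1.0
print(pearson([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0
```

Note that this measure operates entirely in intensity space, which is why blur and noise propagate directly into the score.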
The intensity-based approach is limited to interactions on a spatial scale on the order of the resolution of the microscope. While the object-based approach is not necessarily limited to any particular length scale (note that the localization accuracy for an isolated object is not limited by the spatial resolution of the microscope, but rather by the signal-to-noise ratio [4–6]), a spatial scale is nevertheless assumed in practice. Many object-based colocalization methods rely on a hard threshold for the distances between objects in order to classify each individual pair of objects as "colocalized" or "not colocalized" [2]. The choice of distance threshold greatly influences the types of interactions that can be reliably detected. The actual physical or chemical interactions between subcellular objects can be of short temporal duration, and the objects can quickly separate thereafter. In such situations, high thresholds can increase the detection power, but only at the expense of increased false-positive rates. When interactions take place over long distances, the choice of threshold implicitly determines a range limit of the analysis.
Apart from fixing the interaction scale a priori, using a hard distance threshold also implies a binary distinction of pairwise distances: either a distance is below the threshold, and hence the objects are assumed to interact, or it is not. A colocalization percentage thus corresponds to an indirect measure for the preference of "interaction" over "non-interaction". This preference reflects the strength of the interaction. However, it also depends on the frequency of possible distances that the population of objects can assume.
More specifically, the cellular context in which the interactions take place is a confounding factor. A high colocalization percentage can, for example, be observed in a cell with densely packed subcellular structures of interest, irrespective of their interaction strength. This artifact needs to be considered in statistical tests [7] or corrected for in order to construct an interaction score [8].
Taken together, object-based approaches provide intuitive colocalization measures whose statistical interpretation, however, is not straightforward. Here, we establish a connection between colocalization and the notion of interaction as used in spatial statistics [9], namely the non-independence of the relative positions of objects under study. This is based on modeling the nearest-neighbor distance distribution between the observed objects. These distances are the result of interactions, measurement inaccuracies, and the geometry of the domain in which the objects are distributed. This modeling provides generic procedures for inferring interaction strengths and quantifying their statistical significance. Our approach helps formalize design decisions in colocalization and interaction studies and shows how they translate to biological hypotheses. Standard object-based colocalization analysis is included as a special case, which makes explicit the connections between interaction and colocalization. After developing and characterizing the statistical interaction analysis framework, we exemplify its utility in a biological study of virus entry.
Results and Discussion
Basic scenario: colocalization analysis
We review the basic concepts of classical objectbased colocalization analysis and its interpretation in terms of interactions.
where 1(·) is the indicator function and t an application-specific distance threshold. The form of Eq. 2 implies assumptions about how the objects in X and Y interact. The interaction process is considered to be translation- and rotation-invariant, since only the distance between interacting objects is taken into account. Based on this distance, only two categories of positions of the objects in X are distinguished: either they are sufficiently close to an object in Y to be considered interacting, or they are not. Furthermore, objects in X interact with at most one object in Y, and they do not experience the presence of any y_j unless they cross the distance threshold t. The choice of t reflects an assumption about the length scale of the interaction to be detected.
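In code, the colocalization measure of Eq. 2 is simply the fraction of nearest-neighbor distances that fall below the threshold. A minimal Python sketch (the original implementation is in MATLAB; the example distances are hypothetical):

```python
def coloc_fraction(distances, t):
    """Classical object-based colocalization measure C^t (Eq. 2):
    the fraction of objects in X whose nearest-neighbor distance
    to an object in Y lies below the threshold t."""
    return sum(1 for d in distances if d < t) / len(distances)

# three of the five (hypothetical) distances fall below t = 2.0
print(coloc_fraction([0.5, 1.2, 3.0, 0.1, 7.5], t=2.0))  # 0.6
```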
This density q(d) is determined by the positions, dimensions, and number density of the objects in Y (see Fig. 1). Independent random positions will result in a relatively wide density q(d) (Fig. 1C). With regularly placed objects in Y, large distances do not occur (Fig. 1B). Clustering increases the frequency of long distances at the expense of short distances (Fig. 1D). Objects with large surfaces or a high number density give rise to shorter distances. In case there are interactions between the objects in X and Y, some of the possible distances are additionally favored over others, deforming the density q(d) into p(d).
The colocalization measure C^t is, therefore, not sufficient to separate the contributions of the cellular context from those of the interactions. Information about the interactions is only contained in the deviation from an expected base level in the absence of interactions. This base level, C_0, is the colocalization measure that would be observed under the hypothesis H_0: "no interaction" (obtained by letting p(d) = q(d) and numerically evaluating the integral in Eq. 2). But how does a certain deviation from the base level relate to interactions between the objects, and what deviations can be considered significant? We address this question in the following sections by generalizing colocalization analysis to interaction analysis. Ideally, an interaction score is independent of the cellular context and reflects variations of the interaction strength in a monotonic fashion. The first step toward constructing such a score is a precise definition of the term interaction strength in the context of an interaction model.
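The no-interaction base level (call it C_0) can be evaluated numerically once the state density q(d) is tabulated, by integrating q over all distances below the threshold. A Python sketch under the assumption that q is given on a discrete grid (trapezoidal rule; the grid and values below are hypothetical):

```python
def base_level(d_grid, q_vals, t):
    """C_0: the colocalization fraction expected under H0
    'no interaction', obtained by integrating the tabulated state
    density q(d) over all distances below the threshold t."""
    c0 = 0.0
    for i in range(len(d_grid) - 1):
        if d_grid[i + 1] <= t:  # trapezoid entirely below the threshold
            c0 += 0.5 * (q_vals[i] + q_vals[i + 1]) * (d_grid[i + 1] - d_grid[i])
    return c0

# uniform q(d) = 0.1 on [0, 10]: a fifth of the mass lies below t = 2
grid = list(range(11))           # hypothetical distance grid
q = [0.1] * 11                   # integrates to 1 over [0, 10]
print(base_level(grid, q, t=2))  # 0.2
```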
Generalization: interaction analysis
where the function ϕ(d) specifies the distance dependence of the interaction.
where, unlike in Eq. 4, an explicit dependence of the potential on x_i is no longer present.
The normalization constant Z (the partition function) renders p(d|q) a true probability density function.
This is the central class of models that we use to extend colocalization analysis to interaction analysis. All interaction models will be formulated as specific instances of such a model.
The quantity ϵ̂ corrects for the cellular context and, therefore, fulfills our requirements for a valid interaction score. Eq. 11 relates the purely descriptive colocalization measure C^t to an interaction model between the objects in X and Y. It builds a bridge between patterns in the data (the cellular context summarized in q and the measure C^t) and functional relationships (interactions) between subcellular components.
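Eq. 11 itself is not reproduced in this extract, but for the step potential the Gibbs model p(d) ∝ q(d)·exp(−ϕ(d)) can be inverted by hand: with ϕ(d) = −ϵ for d < t, one obtains C^t = C_0·e^ϵ/(C_0·e^ϵ + 1 − C_0), so the interaction strength estimate is a log-odds ratio of the observed measure against the base level. The Python sketch below implements this plausible reconstruction and should be read as an assumption, not as the paper's exact formula:

```python
import math

def interaction_strength(c_t, c_0):
    """Step-potential interaction strength, reconstructed from the
    Gibbs model p(d) ~ q(d) * exp(-phi(d)) with phi(d) = -eps for
    d < t. Inverting C^t = C_0*e^eps / (C_0*e^eps + 1 - C_0)
    yields a log-odds ratio. NOTE: this is a plausible reading of
    Eq. 11, which is not reproduced in this extract."""
    return math.log(c_t * (1.0 - c_0) / (c_0 * (1.0 - c_t)))

# no deviation from the base level means no interaction
print(interaction_strength(0.3, 0.3))  # 0.0
```

By construction, the score is zero whenever the observed measure equals the base level, positive for attraction, and negative for repulsion, independently of the cellular context summarized in C_0.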
Whether an observed estimate ϵ̂ is indicative of the actual presence of an interaction, however, has to be addressed using statistical inference, as presented in the following section.
Hypothesis testing and power analysis for the step potential
In the parameterization of our interaction model (Eqs. 8 and 9), the presence of an interaction is equivalent to ϵ ≠ 0. Since ϵ̂ is an estimator, it is a random variable. Even if the hypothesis H_0: "no interaction" is true, a nonzero ϵ̂ can occur with finite probability (ϵ̂ ≠ 0 does not imply ϵ ≠ 0). Inference about interactions requires finding a critical estimated interaction strength above which one can reject H_0 at a prescribed significance level α.
This critical interaction strength is determined by the distribution of ϵ̂ under H_0 (the null distribution), which depends on the sample size N, on q, and on the prescribed α. Under H_0, C^t·N is binomially distributed with parameters (C_0, N). Hence, the critical C^t can be computed from the (numerically) inverted cumulative distribution function of the binomial distribution. The corresponding critical ϵ̂ follows from Eq. 11.
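The critical count of colocalized objects thus follows from the inverse binomial CDF, with the no-interaction base level C_0 as the success probability. A stdlib-only Python sketch (a library routine such as scipy's `binom.ppf` would serve equally well; the numbers in the example are illustrative):

```python
from math import comb

def critical_count(n, c0, alpha=0.05):
    """Smallest k with P(Binomial(n, c0) >= k) <= alpha: observing
    at least k colocalized objects out of n rejects H0 'no
    interaction' at level alpha (upper-tail test)."""
    tail = 1.0  # P(X >= 0)
    for k in range(n + 1):
        if tail <= alpha:
            return k
        # subtract P(X = k), leaving P(X >= k + 1)
        tail -= comb(n, k) * c0**k * (1.0 - c0)**(n - k)
    return n + 1  # alpha never reached: no rejection possible
```

For example, with N = 10 objects and C_0 = 0.5, at least 9 colocalized objects are needed to reject H_0 at α = 0.05, since P(X ≥ 8) ≈ 0.055 but P(X ≥ 9) ≈ 0.011.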
The curves in Fig. 2B show the decision of the statistical test based on the estimated interaction strength ϵ̂. A true interaction with a strength ϵ greater than this critical value does, however, not guarantee that it will always be detected by the test (type II error: β). Furthermore, a weak interaction may lead to unwanted rejection of H_0. The behavior of the test critically depends on the effect size, which quantifies the departure from H_0. Here, effect size refers to the true interaction strength ϵ = a > 0. The statistical power (1 − β) quantifies the probability of rejecting H_0 when H_1: "ϕ = ϕ^st, ϵ = a" is true. In Fig. 2C, the detection power for a true strength of a = 1 is shown as a function of C_0. As expected from Fig. 2B, the power is low at the extremes of C_0, eventually dropping significantly below the recommended value of 0.8, even for N = 100. Weak interactions are harder to detect, requiring larger sample sizes to yield a certain power.
In the design of experimental interaction studies, a key objective is to maximize the robustness and reliability of detecting effects of unknown size. Power can be increased by optimizing the experimental design or the subsequent statistical analysis. While increasing the sample size might be possible, controlling the cellular context is not feasible in most situations. Our analysis is based on the interaction model introduced in the previous section. It allows specifying different shapes f(·) and scales σ of the interaction potential. Power could potentially be increased by better modeling the interaction potential. In the next section, we thus quantify the influence of alternative model potentials on statistical power.
Improving statistical power with non-step interaction potentials
Constructing statistical tests as described above requires assuming a specific shape and scale of the interaction potential. In the absence of prior knowledge, however, this model potential can be arbitrarily different from the true potential of the actual biological interactions under observation. Test statistics that are based on a model potential close to the real one may achieve greater power.
This potential has an overall 1/d shape, but finite value and slope everywhere. The parameter ϵ again controls the interaction strength (potential depth). The parameter σ sets the length scale of the interaction (potential range) and allows gradually changing ϕ(d) from a step-like shape to a potential that causes significant attraction toward the objects in Y over large distances (see Fig. 3B).
The resulting statistic T^pl is a sufficient test statistic for ϵ [12].
For a set of distances D, distributed according to Eq. 9 with ϕ(d) = ϕ^pl(d), a test for the presence of interactions can thus be constructed based on the distribution of T^pl under H_0: "no interaction", where the scale parameter σ is assumed to be known. The null distribution can be approximated by K i.i.d. Monte Carlo (MC) samples (see Materials and Methods). An observed value of T^pl is then ranked among these samples. If it ranks higher than the ⌈(1 − α)K⌉-th, H_0 is rejected at the significance level α [12]. The statistical power of this test to reject H_0 when H_1: "ϕ = ϕ^pl, ϵ = a" is true can be estimated with additional MC simulations: for a fixed effect size a > 0, one draws N distances d_i from p(d), computes T^pl, and conducts the test as described above [12]. This procedure is repeated many times, and the fraction of rejected tests serves as an estimator of the power.
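The Monte Carlo rank test itself is only a few lines of code. A Python sketch, assuming the K null samples of the statistic have already been generated under H_0:

```python
import math

def mc_rank_test(t_obs, t_null, alpha=0.05):
    """Monte Carlo rank test: reject H0 if the observed statistic
    ranks higher than the ceil((1 - alpha) * K)-th among the K
    null samples generated under H0."""
    k = len(t_null)
    position = sum(1 for t in t_null if t < t_obs) + 1  # rank of t_obs
    return position > math.ceil((1.0 - alpha) * k)

# with K = 100 null samples 0..99, an observed value of 99.5
# ranks 101st, so H0 is rejected at alpha = 0.05
null = list(range(100))
print(mc_rank_test(99.5, null))  # True
print(mc_rank_test(50.5, null))  # False
```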
In order to quantify the influence of the model potential on statistical power, we test H_0 against H_1 and against H_2: "ϕ = ϕ^st, ϵ = a" on data generated under H_1 for varying σ (see Fig. 3B for the true interaction potentials under H_1). Testing H_0 against H_2 makes use of the sufficient statistic T^st, which is proportional to C^t with t = 0. As opposed to T^pl, this statistic only contains information about the signs of the d_i and should thus yield a less powerful test.
Fig. 3C shows the number of samples required to reach 80% power as a function of the strength a of the true interaction potential. It can be seen that the power of a test based on the true interaction potential (solid lines) is higher than the power of a test based on a step potential (dashed lines). Moreover, this difference strongly increases with increasing potential range σ: for σ = 5 (blue lines) using the step model potential requires 4 times more samples. If the true potential is close to a step potential (σ = 0.2, red lines), both tests perform comparably well. Moreover, the figure also shows that interactions over longer distances are harder to detect. We therefore conclude that one needs to be careful when assuming a step potential (as implicitly done in traditional colocalization analysis). Controlling power requires prior knowledge about the interaction potential. Such prior knowledge can easily be included in the present framework by choosing t, σ, and f(·).
Example: virus trafficking
The uptake and intracellular transport of virus particles is a complex process that involves temporary association with membrane receptors and multiple organelles of the endocytic machinery, such as early and late endosomes [13]. In many cases, fluorescence microscopy allows resolving the involved entities as discrete objects. This has previously motivated the use of object-based colocalization measures to quantify association kinetics and unravel infection pathways. Here, we show how the generalized framework of interaction analysis presented above can be applied in a practical experimental situation, and how it enables using a large toolbox of well-known statistical techniques.
We consider a set of 274 two-color fluorescence microscopy images of single HER911 cells expressing the small GTPase Rab5 tagged with enhanced green fluorescent protein (EGFP), recorded in the green color channel. Rab5 is a regulator of clathrin-mediated endocytosis and a marker for early endosomes. These dynamic, lipid-bounded organelles are formed by invaginations of the plasma membrane. They are the first sorting compartment for clathrin-derived cargo [13]. Either fluorescently tagged Adenovirus serotype 2 (Ad2) or its temperature-sensitive mutant (TS1) was recorded in the red color channel. Images were taken between 2 and 46 min post infection. The same data have already been used in a previous study [5]. Virus positions and endosome outlines were extracted from the images as described in the Materials and Methods section. Based on these object representations, the set D of virus-to-nearest-endosome distances and the state density q(d) were computed for each of the imaged cells.
Like Ad2, TS1 is known to enter the cell by clathrin-mediated endocytosis, but the mutation inhibits escape from endosomes [14, 15]. This should be reflected in a deviation of the empirical distribution of observed distances D from the null distribution p(d) = q(d), which is stronger for TS1 than for Ad2. In our framework, this translates to a non-flat interaction potential between virus centroids and the outlines of Rab5-positive endosomes.
Results of nonparametric statistical tests for interaction in the virus trafficking data:

  Virus   #cells   p < 0.05    p < 0.01    N
  Ad2     135      70 (52%)    25 (19%)    180 ± 50
  TS1     139      128 (92%)   100 (72%)   157 ± 59
The estimated nonparametric potential serves as a template for the shape of parametric models. Parametric potentials can be identified more robustly from the sets of observed distances of individual cells. This allows correlating their parameters with covariates such as the virus type or the time at which a cell was imaged after infection. We consider four different potentials, two that resemble the shape in Fig. 4 (Hernquist and Linear type 1) and two that are generalizations of the step potential with a plateau below d = 0 (Linear type 2 and Plummer). For all potentials, we fix the threshold to t = 0. Definitions of the potential shapes f(·) are given in the Materials and Methods section.
Conclusions
We have introduced a statistical inference framework for robustly estimating interaction parameters from experimentally observed object distributions.
This allowed establishing a connection between spatial codistributions of objects and interaction, by formulating the object-based interaction analysis problem in a spatial statistics framework based on nearest-neighbor distance distributions. The present framework provides generic procedures for inferring interaction strengths and quantifying their statistical significance. Standard object-based colocalization analysis is included as a limit case, making explicit the connections between the present framework and more classical approaches.
In the present framework, two novel key quantities emerge: (i) the state density q(d), which is the distribution of nearest-neighbor distances expected under the null hypothesis of no interaction, and (ii) the interaction potential ϕ(d), which defines the strength and distance dependence of the interaction. We have shown that classical colocalization analysis amounts to estimating the parameters of a step potential. This requires a notion of "inside" and "outside", either naturally defined by the physical extent of the objects or imposed through the step function's distance threshold. For point-like objects, or weak correlations between object positions, the choice of distance threshold is arbitrary.
This limitation can be relaxed by affording more general shapes of the interaction potential, which naturally extends colocalization analysis to (spatial) codistribution analysis without requiring any additional assumptions. The additional flexibility allows capturing information about a wider range of subcellular interactions. This was demonstrated by statistical power analysis of the classical and generalized measures. Our results highlight that the probability of detecting an interaction strongly depends on the cellular context. We furthermore illustrated the influence of the range of an interaction on its detectability. Test statistics that include knowledge about the shape of the true interaction potential can greatly reduce the number of samples required to achieve a certain target power. Physicochemical models might provide such prior knowledge. Alternatively, a nonparametric phenomenological potential can be estimated from the data as demonstrated here. This potential can then serve as a template for the parametric potentials used in subsequent analyses. In addition, the present framework enables comparison of the likelihoods of different hypothetical physicochemical interaction models directly on the original image data.
The present approach enables applying a wide range of established statistical tools for analyzing experimental data, from parameter identification to model selection. This workflow was illustrated by studying the spatial patterns of endosomes and viruses infecting live human cells. In this case study, the experimental data were very well explained using only a single free parameter per cell. Among the five potentials considered, the step potential (corresponding to the classical colocalization measure) was worst in explaining the data. This highlights the benefit of the present method over classical colocalization analysis. Moreover, the fitted potentials provided additional quantitative readouts that could be used in subsequent machine learning analyses.
For simplicity, the case study was done on 2D projections of 3D images. The presented approach, however, is equally applicable in three dimensions without any changes, provided three-dimensional object detection and segmentation are available. Projecting the data into two dimensions alters the estimated potentials (as it does for any other colocalization measure), since it distorts both the distance data D and the state density q(d). We empirically found that the strengths of the potentials estimated from the projected 2D data may be smaller than those estimated directly on the raw 3D data (data not shown). Although all distances D are systematically reduced by the projection, this effect is overcompensated by the nonlinear distortion of q(d), which is strongest for intermediate distances, but negligible for very small and very large distances. Besides projection artifacts, errors in the image processing may also influence the estimated colocalization measures. Depending on the accuracy of the image segmentation method used, object sizes can be under- or overestimated, or entire objects can be missed altogether. This problem is inherent to all forms of colocalization or distribution analysis. We have assessed the sensitivity of our method with respect to image segmentation errors by successively eroding or dilating the endosomes from the presented case study. The results show that the mean of the estimated strength of the Hernquist potential remains unaffected, yet the variance of the estimate increases for strong erosion, when entire endosomes start to be missed (data not shown). This robustness of the present method is due to the state density q(d) correcting for size errors. The classical colocalization measure, naively corrected for the cellular context by subtracting the amount of unspecific colocalization C_0, changes significantly when object sizes are under- or overestimated. For strong erosion, leading to very small and frequently missing objects, it even drops to a meaningless value of zero (data not shown). Since image segmentation errors are always present in practical applications, we consider the robustness of our method one of its major advantages over classical measures.
The presented framework is limited by the same assumptions that also underlie classical colocalization analysis: (i) spatial homogeneity and (ii) isotropy of the interaction within the observation window, and (iii) exclusively nearest-neighbor interactions between objects of different classes. Assumption (i) is, e.g., violated if large areas of the analyzed images do not contain any objects. In this case, estimation of q(d) is not robust. Assumption (iii) imposes limits on admissible distances between objects: if the objects in X are attracted toward the objects in Y, the distances between objects within the set Y need to be larger than the typical interaction range.
All of these limitations could be relaxed by using position-dependent interaction potentials or by allowing for many-body interactions, as described by general Gibbs processes. Considering such processes, however, is theoretically and numerically challenging. The presented framework could also be extended by including additional confounding factors, such as imaging artifacts causing spurious colocalization. Temporal plasticity of interactions, cell-to-cell variations, and experiment-to-experiment variations could be accounted for through additional covariates (time, cell index, experiment index) in the statistical model. Already in its present form, the statistical framework can be used to test more general hypotheses, such as "interactions are stronger in strain A than in strain B".
The interpretation of fitted potentials is limited to their relative strengths. In the absence of a mechanistic or physical model of the process that has created the observed spatial pattern, biophysical interpretation of the identified parameter values is difficult or misleading. This is because the fitted interaction potentials reflect the collection of all intracellular phenomena that lead to the observed point pattern. Interestingly, however, a relation exists between the steady-state distribution of a diffusion process with added deterministic forces and the distribution of the Gibbs process (Eq. 4): if the deterministic force acting between the diffusing objects is given by ∂ϕ/∂d, the two distributions become identical (in appropriate units). This fact points to a possibility of connecting fitted interaction potentials with biophysical processes.
Methods
Image acquisition and processing
Endosomes and virus particles were imaged with a high-resolution spinning disk confocal microscope (NA 1.35, 100× objective plus an additional 1.6× lens, 100 nm pixel size) as described [5]. We acquired z-stacks of 8 images each with a 400 nm z-spacing. Stacks were maximum-projected prior to image analysis. Endosome outlines were represented as piecewise linear closed splines in the focal plane. Outlines were estimated from the images using a specialized model-based image analysis technique [5], yielding sub-pixel localization accuracy and precision. Virus particles were modeled as points and represented by their estimated intensity centroid positions [6]. Prior to distance measurement, relative shifts between virus and endosome positions due to chromatic aberration were corrected using an empirical calibration function [5, 16]. The boundary ∂Ω of the region Ω was defined as the cell boundary. An approximation of it was found by low-pass filtering and thresholding of the endosome images.
Measuring q(d)
The state density q(d) was determined from the objects {y_i} contained in the region Ω. Positions x in Ω were sampled exhaustively on a uniform Cartesian grid with spacing h = 0.25 pixel. For each x, the distance d_i to the nearest neighbor in Y was computed. Using this finite sample of distances D = {d_i}, an approximation of q(d) was found by Gaussian kernel smoothing density estimation using the MATLAB (The MathWorks, Inc.) function ksdensity.m with default settings.
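The sampling step can be sketched as follows in Python, under the simplifying assumption of point-like objects in Y and a rectangular region (the paper uses endosome outlines within the cell boundary and then smooths the distance sample with MATLAB's ksdensity):

```python
import math

def state_density_sample(y_points, region, h=0.25):
    """Exhaustively sample nearest-neighbor distances for the state
    density q(d): positions x on a uniform Cartesian grid of
    spacing h over a rectangular region, each paired with the
    distance to the nearest (point-like) object in Y."""
    width, height = region
    distances = []
    x = 0.0
    while x <= width:
        y = 0.0
        while y <= height:
            distances.append(min(math.dist((x, y), p) for p in y_points))
            y += h
        x += h
    return distances  # smooth with a kernel density estimator to get q(d)

# 3 x 3 grid on a unit square with a single object at the origin
sample = state_density_sample([(0.0, 0.0)], region=(1.0, 1.0), h=0.5)
print(len(sample), min(sample))  # 9 0.0
```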
Test for interaction
Second, T and U were computed for the set D of observed distances. U was then ranked among the corresponding values obtained from an additional Monte Carlo sample generated as described above. If it ranked higher than the ⌈(1 − α)K⌉-th, H_0 was rejected at the significance level α.
The parametric tests used in the sections "Hypothesis testing and power analysis for the step potential" and "Improving statistical power with non-step interaction potentials" followed a simpler protocol. The ranking was performed directly among the scalar test statistics T^st and T^pl, avoiding the detour via U. A priori estimation of the expectation and variance of T^st and T^pl was therefore not required.
ML estimation of potentials
with respect to the parameters {Θ_{ k }} = {(ϵ_{ k }, σ*)}. This was done by numerically maximizing (using the Nelder-Mead simplex method) the sum of the per-cell maxima of l((ϵ_{ k }, σ*) | D_{ k }, k) with respect to σ*.
with respect to Θ = (w_{1}, ..., w_{P−1}). Smoothness of ϕ^{n.p.} was controlled by the parameter s = 2. The quadratic penalty in Eq. 19 corresponded to a Gaussian prior with zero mean and standard deviation s on the differences w_{ p } − w_{p+1}.
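The nested maximization can be sketched as follows, under stated assumptions: a Gibbs-type distance likelihood of the form p(d) ∝ q(d) exp(−ϕ(d)), and an illustrative Gaussian shape for ϕ (the paper's parametric shapes differ). The per-cell strengths ϵ_k are profiled out with Nelder-Mead for each candidate common scale σ*, and the summed per-cell optima are then optimized over σ*. All function names here are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

def _trapz(y, x):
    # simple trapezoidal rule (avoids version-dependent NumPy helpers)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def neg_loglik(eps, sigma, d_obs, q, d_grid):
    """Negative log-likelihood of one cell's distances under the assumed
    model p(d) = q(d) exp(-phi(d)) / Z with phi(d) = eps*exp(-(d/sigma)^2)."""
    phi_grid = eps * np.exp(-(d_grid / sigma) ** 2)
    logZ = np.log(_trapz(q(d_grid) * np.exp(-phi_grid), d_grid))
    phi_obs = eps * np.exp(-(d_obs / sigma) ** 2)
    return float(np.sum(phi_obs)) + len(d_obs) * logZ

def profiled_sum(sigma, cells, q, d_grid):
    """For fixed sigma, minimize each cell's negative log-likelihood over
    eps_k (Nelder-Mead) and return the sum of the per-cell minima."""
    total = 0.0
    for d_obs in cells:
        res = minimize(lambda th: neg_loglik(th[0], sigma, d_obs, q, d_grid),
                       x0=[0.0], method='Nelder-Mead')
        total += res.fun
    return total

def fit_common_scale(cells, q, d_grid, bounds=(0.1, 10.0)):
    """Outer 1-D optimization over the common scale parameter sigma*."""
    res = minimize_scalar(profiled_sum, bounds=bounds,
                          args=(cells, q, d_grid), method='bounded')
    return res.x
```

The nesting mirrors the protocol above: the inner loop handles cell-specific interaction strengths, while the outer loop fits the single scale shared by all cells.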
List of parametric potentials
Potentials were parameterized as ϕ(d) = ϵ f((d − t)/σ) with interaction strength ϵ, length scale σ, and threshold t = 0. Their shapes f(·) were defined as:

Plummer potential: defined in Eq. 12.
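The parameterization ϕ(d) = ϵ f((d − t)/σ) can be illustrated with a small sketch. The shape functions below are illustrative assumptions, not necessarily the paper's exact definitions; the Plummer-like entry follows the classical form −1/√(1 + x²):

```python
import numpy as np

# Shape functions f(.) for phi(d) = eps * f((d - t)/sigma), with t = 0.
# Definitions are assumed for illustration (cf. Eq. 12 for the Plummer case).
SHAPES = {
    'step':    lambda x: -(x < 1.0).astype(float),       # attractive within sigma
    'plummer': lambda x: -1.0 / np.sqrt(1.0 + x ** 2),   # soft-core attraction
    'exp':     lambda x: -np.exp(-x),                    # exponential decay
}

def phi(d, eps, sigma, shape='plummer', t=0.0):
    """Evaluate the pairwise interaction potential phi(d) = eps*f((d - t)/sigma)."""
    x = (np.asarray(d, dtype=float) - t) / sigma
    return eps * SHAPES[shape](x)
```

Separating strength ϵ, scale σ, and shape f keeps the two fitted parameters (ϵ, σ) comparable across different potential families.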
Implementation
All software was implemented in MATLAB version 7.9 (The MathWorks, Inc.) and run on a 2.66 GHz Intel Core 2 Duo machine. Estimation of two-parameter potentials (Eqs. 12 and 20 to 22) took a few milliseconds per cell. Computation of q(d) took about one second; this time, however, depended strongly on the sampling resolution used. The non-parametric test for interaction took about half a second per cell. The time needed to estimate the common scale parameter for all cells was around ten minutes. A constantly updated version of the developed software is freely available from the authors' web site at http://www.mosaic.ethz.ch/Downloads. The MATLAB functions, scripts, and sample data at the time of writing are contained in additional file 1.
Declarations
Acknowledgements
JAH was financed by the ETH Research Commission under grant TH10071. GP was funded through CTI grant 9325.2 PFLSLS from the Swiss Federal Commission for Technology and Innovation. This project was also supported by a grant from the Swiss SystemsX.ch initiative, grant LipidX2008/011 to IFS. The authors thank Christoph J. Burckhardt (Harvard University, Cambridge, MA) and Urs F. Greber (University of Zurich) for providing experimental data. JAH further thanks Rajesh Ramaswamy (ETH Zurich) for encouraging comments on early results and Christian Müller (ETH Zurich) for his help with CMA-ES.
References
1. Costes SV, Daelemans D, Cho EH, Dobbin Z, Pavlakis G, Lockett S: Automatic and quantitative measurement of protein-protein colocalization in live cells. Biophys J 2004, 86(6):3993–4003. doi:10.1529/biophysj.103.038422
2. Bolte S, Cordelieres FP: A guided tour into subcellular colocalization analysis in light microscopy. J Microsc 2006, 224:213–232. doi:10.1111/j.1365-2818.2006.01706.x
3. Anlauf E, Derouiche A: A practical calibration procedure for fluorescence colocalization at the single organelle level. J Microsc 2009, 233:225–233. doi:10.1111/j.1365-2818.2009.03112.x
4. Thompson RE, Larson DR, Webb WW: Precise nanometer localization analysis for individual fluorescent probes. Biophys J 2002, 82(5):2775–2783. doi:10.1016/S0006-3495(02)75618-X
5. Helmuth JA, Burckhardt CJ, Greber UF, Sbalzarini IF: Shape reconstruction of subcellular structures from live cell fluorescence microscopy images. J Struct Biol 2009, 167:1–10. doi:10.1016/j.jsb.2009.03.017
6. Sbalzarini IF, Koumoutsakos P: Feature point tracking and trajectory analysis for video imaging in cell biology. J Struct Biol 2005, 151(2):182–195. doi:10.1016/j.jsb.2005.06.002
7. Zhang B, Chenouard N, Olivo-Marin JC, Meas-Yedid V: Statistical colocalization in biological imaging with false discovery control. Proceedings of the 2008 IEEE International Symposium on Biomedical Imaging: From Nano to Macro: 14–17 May 2008; Paris, France 2008, 1327–1330.
8. Lachmanovich E, Shvartsman DE, Malka Y, Botvin C, Henis YI, Weiss AM: Colocalization analysis of complex formation among membrane proteins by computerized fluorescence microscopy: application to immunofluorescence co-patching studies. J Microsc 2003, 212(2):122–131. doi:10.1046/j.1365-2818.2003.01239.x
9. Møller J, Waagepetersen R: Statistical Inference and Simulation for Spatial Point Processes. CRC Press; 2004.
10. Stoyan D, Penttinen A: Recent applications of point process methods in forestry statistics. Stat Sci 2000, 15:61–78. doi:10.1214/ss/1009212674
11. Diggle PJ: Statistical Analysis of Spatial Point Patterns. 2nd edition. Hodder Arnold; 2003.
12. Assunção R: Score test for pairwise interaction parameters of Gibbs point processes. Braz J Probab Stat 2003, 17:169–178.
13. Mellman I, Warren G: The road taken: past and future foundations of membrane traffic. Cell 2000, 100:99–112. doi:10.1016/S0092-8674(00)81687-6
14. Imelli N, Ruzsics Z, Puntener D, Gastaldelli M, Greber UF: Genetic reconstitution of the human adenovirus type 2 temperature-sensitive 1 mutant defective in endosomal escape. Virol J 2009, 6:174.
15. Gastaldelli M, Imelli N, Boucke K, Amstutz B, Meier O, Greber UF: Infectious adenovirus type 2 transport through early but not late endosomes. Traffic 2008, 9(12):2265–2278. doi:10.1111/j.1600-0854.2008.00835.x
16. Kozubek M, Matula P: An efficient algorithm for measurement and correction of chromatic aberrations in fluorescence microscopy. J Microsc 2000, 200:206–217. doi:10.1046/j.1365-2818.2000.00754.x
17. Heikkinen J, Penttinen A: Bayesian smoothing in the estimation of the pair potential function of Gibbs point processes. Bernoulli 1999, 5(6):1119–1136. doi:10.2307/3318562
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.