The proposed data calibration process addresses several key factors that are commonly ignored or dismissed. Registration of frames ensures consistency between spatial locations, yields better resolution in stacked images, and improves the construction of the basal conditions. The natural decay of the Fura-2 marker must be considered when performing model fittings and comparisons. To achieve better calibration and basal-condition estimations, we suggest acquiring several frames before the mechanical stimulus; we used between 10 and 50 frames.
Noise management is another key feature of this calibration pipeline. Small median filters are effective at removing large variations (noise) in pixel values while preserving boundaries and relevant features. Since the calcium concentration is derived from the ratio of two images, and the noise in each image is poorly correlated with intensity, the noise tends to be amplified by the division.
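As a minimal sketch of this step, the following filters both channels with a small median kernel before forming the ratio. The 3x3 kernel size, the channel names, and the epsilon guard are illustrative assumptions, not the pipeline's exact parameters:

```python
import numpy as np
from scipy.ndimage import median_filter

def denoise_ratio(frame_340, frame_380, size=3, eps=1e-6):
    """Median-filter both channels, then form the ratio image.

    Kernel size and eps are illustrative; eps avoids division by zero."""
    f340 = median_filter(frame_340.astype(float), size=size)
    f380 = median_filter(frame_380.astype(float), size=size)
    return f340 / (f380 + eps)

# Example: an isolated impulse (salt-and-pepper noise) is removed by the
# median filter, so the ratio stays flat at 100/50 = 2.
noisy = np.full((32, 32), 100.0)
noisy[10, 10] = 1000.0  # impulse noise
ratio = denoise_ratio(noisy, np.full((32, 32), 50.0))
```

Filtering before the division matters here precisely because the ratio amplifies uncorrelated noise from the two exposures.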
Analysis of the calcium wave through a global approach simplifies the data, enables the extraction of a few parameters that characterize the process, and provides a quick way to compare different groups or populations.
One of these parameters is the velocity of the traveling wave. Since we assume radial symmetry, determining the origin of the wave is of great importance. The proposed method of iterative center-of-mass calculations retrieves a user-independent point with a high success rate (in our experiments, 57% showed a difference smaller than 5 px from a manual estimation, and 83% showed a difference of less than 20 px). This increases the reproducibility of the experiment and reduces inter-operator sources of error. As seen in Figure 4, frames usually contain secondary calcium release sources or high noise levels. Such factors prevent a single-step center-of-mass calculation from being a good estimator of the mechanical stimulus location. Instead, our iterative approach detects the brightest and largest object and then calculates its mass center efficiently.
Integration over one-pixel-wide ring samples gives a higher sampling rate than alternative methods, such as averaging square samples on a matrix distribution or along lines. Additionally, this method does not mix information from different distances, increasing spatial resolution and isolation. Manual extraction and integration of such an amount of information would be infeasible for a human operator.
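The ring sampling described above can be sketched compactly: each pixel is binned by its rounded radial distance from the wave origin, and the mean intensity per bin gives the radial profile. Names and the rounding convention are illustrative:

```python
import numpy as np

def radial_profile(img, center):
    """Mean intensity over one-pixel-wide rings around `center`."""
    cy, cx = center
    yy, xx = np.indices(img.shape)
    r = np.rint(np.hypot(yy - cy, xx - cx)).astype(int)  # ring index per pixel
    sums = np.bincount(r.ravel(), weights=img.ravel())
    counts = np.bincount(r.ravel())
    return sums / counts  # mean intensity at each integer radius

# Sanity check: an image whose value equals the distance from the center
# yields a profile where profile[r] is approximately r.
yy, xx = np.indices((64, 64))
img = np.hypot(yy - 32, xx - 32)
profile = radial_profile(img, (32, 32))
```

Because every pixel belongs to exactly one ring, no information is mixed across distances, which is the property the text emphasizes.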
We propose two methods to estimate the velocity of the wave: measuring, for every sampled radial distance, either the time of maximum calcium concentration or the time of maximum increase in the calcium levels. The wave-front is more closely related to the maximum gradient, since it captures the fastest change in the calcium concentration. Analysis of the maximum concentration is affected by delays in the release of calcium by the cells and by the homogenization of intracellular levels. Increasing distance also disperses the activation times, which explains the lower velocities measured with the maximum-concentration method compared to the maximum-gradient method.
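The two timing markers can be extracted from the ring-averaged data as follows. Here `profiles` is assumed to be a (time, radius) array of ring means; the step-like synthetic wave is only for illustration:

```python
import numpy as np

def activation_times(profiles, dt=1.0):
    """Per-radius times of maximum level and of maximum temporal increase.

    `profiles` has shape (n_frames, n_radii); `dt` is the frame interval."""
    t_max = profiles.argmax(axis=0) * dt                    # time of peak level
    t_grad = np.diff(profiles, axis=0).argmax(axis=0) * dt  # time of steepest rise
    return t_max, t_grad

# Synthetic wave: each radius activates one frame later than the previous one.
n_t, n_r = 30, 10
profiles = np.zeros((n_t, n_r))
for r in range(n_r):
    profiles[r + 2:, r] = 1.0  # step activation at frame r + 2
t_max, t_grad = activation_times(profiles)
```

On real data the two markers differ, for the reasons given above; on this idealized step wave they are offset by exactly one frame.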
Nevertheless, both velocities are highly correlated, and either one characterizes the wave, enabling statistical analysis of different groups (in our case controls, leptin, and adiponectin exposures). Our measurements are in agreement with previous experiments by other groups. Chopra et al. reported intracellular velocities of up to 100 µm/s in cardiac myocytes. Sanderson et al., working with epithelial cells in the respiratory tract, measured velocities in the cytoplasm of nearly 25 µm/s. Since we measure mean propagation velocities, which are affected by delays between cells, lower results are expected (we obtained a mean propagation velocity of 15 µm/s).
The optimization process proved to be an efficient and powerful tool for retrieving deeper knowledge of the calcium wave. Our naive functions (truncated exponential decay and square decay) yielded useful qualitative information. However, alternative fitting functions might be investigated to improve the match with the acquired data.
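As a minimal sketch of this kind of optimization, the following fits a plain exponential decay to a ring's time course with scipy. The parameterization (amplitude, rate, baseline) and the onset handling are simplifications; the original truncated functions and their exact forms are not reproduced here:

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(t, A, k, b):
    """Simple exponential decay toward a baseline b."""
    return A * np.exp(-k * t) + b

# Synthetic post-peak time course: amplitude 3, rate 0.5, baseline 1,
# plus a small amount of noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 20.0, 200)
y = exp_decay(t, 3.0, 0.5, 1.0) + 0.01 * rng.normal(size=t.size)

# Least-squares fit; p0 is a rough initial guess.
popt, _ = curve_fit(exp_decay, t, y, p0=[1.0, 0.1, 0.0])
```

Starting the optimizer from values derived from the calibration statistics, as suggested later in the text, would reduce the iteration count of exactly this kind of fit.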
Global Analysis can be improved using signal masks. In this way, the data are less corrupted by background noise, and mean intensities are more representative of the pixels of stimulated cells than of those unaffected by the calcium wave. Since the number of non-stimulated cells is neither proportional to distance nor any simple function of it, there is no simple way to correct this effect other than masking the signal by thresholding with a factor of the maximum measured intensity. Fittings should be performed over masked data. When wave velocities are measured, masks give better statistical significance to the study, since computations are performed over data with a high signal-to-noise ratio.
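A sketch of such a mask follows: threshold at a fraction of the maximum measured intensity, then clean the result with a small morphological opening. The fraction (0.2) and the 3x3 structuring element are assumptions for illustration:

```python
import numpy as np
from scipy.ndimage import binary_opening

def signal_mask(img, frac=0.2):
    """Threshold at frac * max intensity, then remove isolated speckle."""
    mask = img > frac * img.max()
    return binary_opening(mask, structure=np.ones((3, 3)))

# Example: a smooth signal blob plus one isolated bright noise pixel.
yy, xx = np.indices((64, 64))
img = 100 * np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 200.0)
img[2, 2] = 60.0  # isolated noisy pixel above the threshold
mask = signal_mask(img)
```

The opening step is what keeps lone bright pixels out of the mask, so subsequent fittings run only over connected, high-SNR regions.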
Masks are also helpful in the local analysis, since they restrict the number of analyzed pixels and therefore lower execution times. They also improve the visualization of the results.
Local analysis yields a set of parameters for each pixel. It may be difficult to exploit such an amount of information quantitatively, but a qualitative inspection of maps containing those parameters yields a better understanding of the global analysis findings. Differences between neighboring pixels within the same cell are small. There is a high correlation between the wave amplitude, the time of activation, and the statistical output of the data calibration process (the activation time is close to the time of maximum intensity, and the wave amplitude is similar to the maximum intensity). Feeding the optimization process with those values as starting points reduces the number of iterations, and hence the execution time, and may also improve the robustness of the algorithm. Decay rates seem to diminish with distance for cells near the mechanical stimulus, but beyond that they begin to rise again. Local analysis reveals that this behavior may be explained by low-intensity, noisy pixels. Further studies are needed to confirm this and to derive a model of the decay rates. Also, since the parameters seem to be cell-related (in terms of distance to the stimulus), large variations inside a cell indicate a poor signal-to-noise ratio, and the results may lack confidence. Nevertheless, the propagation velocity of the wave may be derived using only the Global Analysis strategy enhanced with a signal mask, so Local Analysis is left as a tool to be used when specific spatially varying information is needed.
Studies focused on measurements of wave propagation velocity can be significantly improved by capturing data with one filter instead of the standard two-filter method. Data acquisition with two filters involves the mechanical rotation of a filter wheel, introducing a time delay between exposures. The frame rate is then limited by the speed of the mechanical system, the exposure times, and the properties of the camera and of data transfer or storage. Using one filter saves time, allowing a higher sampling rate with the same exposure times. This increased frequency is needed to achieve a higher temporal resolution in the maximum increase of the calcium levels, used to characterize the wave-front.
Data from the 380 nm filter showed a strong correlation with the maximum gradient, and velocities measured by either method are equivalent. Maximum intensities in the 380 nm filter, however, are not well correlated with the maximum calcium concentration; thus, measuring the wave velocity with that strategy is discouraged.
Executing the entire sequence may take up to half an hour, including visual inspection of the data to manually detect the frame where the wave propagation begins. User input may be reduced to a minimal set of parameters that are easily determined by inspection.
If no model fitting is needed, extracting only the propagation velocity by linear regression reduces the computation time to a few minutes.
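This fast path amounts to a linear regression of radial distance against activation time, with the slope as the propagation velocity. The data below are synthetic (a perfect 15 µm/s wave, matching our reported mean only by construction); real activation times would come from the ring profiles:

```python
import numpy as np
from scipy.stats import linregress

# Sampled radial distances (in micrometers) and their activation times.
radius_um = np.arange(10, 110, 10, dtype=float)
t_act = radius_um / 15.0 + 0.2   # synthetic 15 um/s wave with a small delay

# Velocity = slope of radius vs. activation time.
fit = linregress(t_act, radius_um)
velocity = fit.slope             # micrometers per second
```

The regression also provides a correlation coefficient (`fit.rvalue`), which can flag distances where linearity is lost, as discussed at the end of this section.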
Compared to manual methods, even the full process represents a considerable saving of time, with more reliable statistical results (from several hours down to just a few minutes). Considering computing time only, we need less than 15 minutes (data calibration, mask generation, global and local analysis). Manual work consists mainly of the steps necessary to set the process parameters, such as identifying the frame of the mechanical stimulation, or evaluating results and fine-tuning parameters (as in the signal mask creation, or the maximum distance used in the global analysis). In the current scheme, this manual workload may take from 5 to 10 minutes, including the time consumed by the computer on non-optimal results. Since human supervision is desirable for optimal results, further automation should focus on those steps that set parameters which do not require fine-tuning and are not highly case dependent. The most time-demanding tasks in that respect are those related to inspection of the image series, or of an animation of the frames.
From our experimental data, meaningful differences in propagation velocities were found between samples containing leptin, adiponectin, and the control group, as presented in . Nevertheless, further studies are needed to confirm this hypothesis, with a larger set of experiments and concentrations.
Future development may be achieved by modeling the calcium wave with diffusion equations and taking advantage of the quantitative properties of the optimization process. This should also involve experimental modeling of the parameters as a function of radial distance.
The proposed one-filter method should be used to measure fast traveling waves and to compare the behavior of different groups. Statistical confidence analysis of individuals would enable studying the effect of different hormones and proteins on calcium propagation and on biological mechanisms.
Additional image processing techniques for automation may enable larger studies and faster results, with less manual work and fewer operators, as suggested in the previous subsection. Again, this should have a positive impact on the statistical reliability of the studies.
In terms of automation, one step that might be further improved is the detection of the frame where the propagation begins. This is the first image where a large calcium release is detected, or where major morphological changes of the stimulated cells appear. We implemented a prototype function to detect this automatically, using the total flux plot. This strategy fits a line to the frames preceding the maximum calcium level and looks for its intersection with the linear fit that describes the basal conditions. In most cases, we achieved an accuracy of ±2 frames. This could be a good starting point for a more robust search, possibly based on the iterative center-of-mass calculation described in section 2.3.1, applied over the candidate frames. There, the last frame (in terms of acquisition time) may be used to create a signal mask to avoid spurious data. Data may also be preprocessed with a soft-thresholding scheme, also described in 2.3.1, using a value that depends on the standard deviation of the background pixel values. Since the calcium propagation is very symmetric at the beginning of the experiment, a large deviation of the mass center, together with changes in intensity, might help in this detection.
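An illustrative version of that prototype detector is sketched below: one line is fitted to the rising flank just before the flux maximum, another to the basal frames, and their intersection is taken as the stimulus frame. The window sizes (`n_basal`, `n_rise`) are assumptions, not the prototype's exact values:

```python
import numpy as np

def stimulus_frame(total_flux, n_basal=10, n_rise=5):
    """Estimate the stimulus frame from the total-flux curve."""
    t_peak = int(np.argmax(total_flux))
    t = np.arange(len(total_flux), dtype=float)
    # Line through the rising flank just before the peak.
    a1, b1 = np.polyfit(t[t_peak - n_rise:t_peak + 1],
                        total_flux[t_peak - n_rise:t_peak + 1], 1)
    # Line through the basal (pre-stimulus) frames.
    a0, b0 = np.polyfit(t[:n_basal], total_flux[:n_basal], 1)
    return (b0 - b1) / (a1 - a0)  # intersection of the two lines

# Synthetic total-flux curve: flat basal level, then a linear rise at frame 20.
flux = np.concatenate([np.full(20, 5.0), 5.0 + 2.0 * np.arange(1, 16)])
frame = stimulus_frame(flux)
```

On this idealized curve the intersection lands one frame before the first elevated sample, consistent with the ±2-frame accuracy reported above.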
Even though the current manual inspection is easy and fast to perform, signal masking might be further automated through the minimization of a “quality potential” with a two-step search. To optimize the threshold selection, a binary search may be used, expressing the threshold in terms of the median value of the image and measuring amplitudes in standard deviations. Since the combinations of morphological operations are limited, an exhaustive search over them would take less than 5 seconds. The quality potential may compare the 2-norm difference between a low-pass filtered version of the image and the image multiplied by the mask.
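A minimal sketch of that quality potential follows. For brevity it uses a Gaussian low-pass and a grid search over threshold amplitudes instead of the binary search, and omits the morphological-operation search; all of these substitutions are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def quality_potential(img, mask):
    """2-norm difference between a low-pass image and the masked image."""
    smooth = gaussian_filter(img.astype(float), sigma=2.0)
    return np.linalg.norm(smooth - img * mask)

def best_threshold(img, amplitudes=np.arange(0.0, 3.0, 0.25)):
    """Pick the threshold (median + k * std) minimizing the potential."""
    med, sd = np.median(img), img.std()
    scores = [(quality_potential(img, img > med + k * sd), k)
              for k in amplitudes]
    return min(scores)[1]  # amplitude k, in standard deviations

# Example image: one smooth signal blob over a flat background.
yy, xx = np.indices((64, 64))
img = 100 * np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 100.0) + 1.0
k = best_threshold(img)
```

Replacing the grid search with a binary search over `k` recovers the two-step scheme described above.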
Finally, automation of the selection of the maximum distance used in the Global Analysis may also be implemented, using two criteria: the local dispersion of the maximum-intensity or maximum-gradient point, and a signal-to-noise ratio estimation for each radial distance. This strategy should reject distances that are too large, where linearity is lost or no meaningful data were extracted, while including as many samples as possible to ensure good statistics and consistency. This is especially relevant for the two-filter method, where the sampling frequency is limited and, thus, larger distances are needed for accurate measurements.