
A bag-of-words approach for Drosophila gene expression pattern annotation

Abstract

Background

Drosophila gene expression pattern images document the spatiotemporal dynamics of gene expression during embryogenesis. A comparative analysis of these images could provide a fundamentally important way for studying the regulatory networks governing development. To facilitate pattern comparison and searching, groups of images in the Berkeley Drosophila Genome Project (BDGP) high-throughput study were annotated with a variable number of anatomical terms manually using a controlled vocabulary. Considering that the number of available images is rapidly increasing, it is imperative to design computational methods to automate this task.

Results

We present a computational method to annotate gene expression pattern images automatically. The proposed method uses the bag-of-words scheme to utilize the existing information on pattern annotation and annotates images using a model that exploits correlations among terms. The proposed method can annotate images individually or in groups (e.g., according to the developmental stage). In addition, the proposed method can integrate information from different two-dimensional views of embryos. Results on embryonic patterns from BDGP data demonstrate that our method significantly outperforms other methods.

Conclusion

The proposed bag-of-words scheme is effective in representing a set of annotations assigned to a group of images, and the model employed to annotate images successfully captures the correlations among different controlled vocabulary terms. The integration of existing annotation information from multiple embryonic views improves annotation performance.

Background

Study of the interactions and functions of genes is crucial to deciphering the mechanisms governing cell-fate differentiation and embryonic development. The DNA microarray technique is commonly used to measure the expression levels of a large number of genes simultaneously. However, this technique primarily documents the average expression levels of genes, with information on spatial patterns often unavailable [1, 2]. In contrast, RNA in situ hybridization uses gene-specific probes and reveals the spatial patterns of gene expression precisely. Recent advances in this high-throughput technique have generated spatiotemporal information for thousands of genes in organisms such as Drosophila [1, 3] and mouse [4]. Comparative analysis of the spatiotemporal patterns of gene expression can potentially provide novel insights into the functions and interactions of genes [5–7].

The embryonic patterning of Drosophila melanogaster along the anterior-posterior and dorsal-ventral axes represents one of the best understood examples of a complex cascade of transcriptional regulation during development. Systematic understanding of the mechanisms underlying this patterning is facilitated by the comprehensive atlas of spatial patterns of gene expression during Drosophila embryogenesis, which has been produced by the in situ hybridization technique and documented in the form of digital images [1, 8]. To provide flexible tools for pattern searching, the images in the Berkeley Drosophila Genome Project (BDGP) high-throughput study are annotated with anatomical and developmental ontology terms using a controlled vocabulary (CV) [1] (Figure 1). These terms integrate the spatial and temporal dimensions of gene expression by describing a developmental "path" that documents the dynamic process of Drosophila embryogenesis [1, 2]. Currently, the annotation is performed manually by human curators. However, the number of available images is now rapidly increasing [5, 9–11]. It is therefore desirable to design computational methods to automate this task.

Figure 1. Sample image groups and their associated terms in the BDGP database ( http://www.fruitfly.org ) for the segmentation gene engrailed in 5 stage ranges.

The particular nature of this problem means that several challenging questions must be addressed when designing an automated method. Owing to the effects of stochastic processes during development, no two embryos develop identically. In addition, the quality of the obtained data is limited by current image processing techniques. Hence, the shape and position of the same embryonic structure may vary from image to image. Indeed, this has been considered one of the major impediments to automating this task [1]. Thus, invariance to local distortions in the images is an essential requirement for an automatic annotation system. Furthermore, in the original BDGP study, gene expression pattern images are annotated collectively in small groups using a variable number of terms. Images in the same group may share certain anatomical and developmental structures, but not all terms assigned to a group apply to every image in the group. This requires approaches that retain the original group membership information of the images, because we need to test the accuracy of the new method using the existing (and independent) annotation data. Prior work on this task [12] ignored such groups and assumed that all terms are associated with every image in a group, which may adversely impact its effectiveness on the BDGP data. Finally, Drosophila embryos are 3D objects, and they are documented as 2D images taken from multiple views. Since certain embryonic structures can only be seen in specific two-dimensional projections (views), it is beneficial to integrate images with different views when making the final annotation.

In this article we present a computational method for annotating gene expression pattern images. The method is based on the bag-of-words approach, in which invariant visual features are first extracted from local patches on the images and are then quantized to form the bag-of-words representation of the original images. This approach is known to be robust to distortions in the images [13, 14], and it has demonstrated impressive performance on object recognition problems in computer vision [15] and on image classification problems in cell biology [16]. In our approach, invariant features are first extracted from local patches on each image in a group. These features are then quantized based on precomputed "visual codebooks", and images in the same group with the same view are represented as a bag of words. Thus, our approach can take advantage of the group membership information of images as in the BDGP study. To integrate images with different views, we propose to construct a separate codebook for the images of each view. Image groups containing images with multiple views can then be represented as multiple bags, each containing words from the corresponding view. We show that multiple bags can be combined to annotate the image group collectively. After representing each image group as multiple bags of words, we employ a recently developed classification model [17] to annotate the image groups. This model can exploit the correlations among different terms, leading to improved performance. Experimental results on the gene expression pattern images obtained from the FlyExpress database ( http://www.flyexpress.net ) show that the proposed approach consistently outperforms other methods. Results also show that integrating images with multiple views improves annotation performance. The overall flowchart of the proposed method is depicted in Figure 2.

Figure 2. Flowchart of the proposed method for annotating gene expression patterns.

Methods

The proposed method is based on the bag-of-words approach, which was originally used in text mining and is now commonly employed in image and video analysis problems in computer vision [15, 18–20]. In this approach, invariant features [21] are first extracted from local regions of images or videos, and a visual codebook is constructed by applying a clustering algorithm to a subset of the features, where the cluster centers are treated as the "visual words" of the codebook. Each feature in an image is then quantized to the closest word in the codebook, and an entire image is represented as a global histogram counting the number of occurrences of each word in the codebook. The size of the resulting histogram is equal to the number of words in the codebook, and hence to the number of clusters obtained from the clustering algorithm. The codebook is usually constructed by applying the flat k-means clustering algorithm or other hierarchical algorithms [14]. This approach is derived from the bag-of-words models in text document categorization, and it has been shown to be robust to distortions in images. One potential drawback of this approach is that the spatial information conveyed in the original images is not represented explicitly. This, however, can be partially compensated for by sampling dense and redundant features from the images. The bag-of-words representation for images has been shown to yield competitive performance on object recognition and retrieval problems after some postprocessing procedures such as normalization or thresholding [14, 15]. The basic idea behind the bag-of-words approach is illustrated in Figure 3.

Figure 3. Illustration of the bag-of-words approach. First, a visual codebook is constructed by applying a clustering algorithm to a subset of the local features from training images, and the center of each cluster is considered a unique "visual word" in the codebook. Each local feature in a test image is then mapped to the closest visual word, and each test image is represented as a (normalized) histogram of visual words.

For our problem, the images are annotated collectively in small groups in the BDGP database. Hence, we propose to extract invariant visual features from each image in a group and represent the images in the same group with the same view as a bag of visual words. The 3D nature of the embryos and the 2D layout of the images determine that certain body parts can only be captured by images taken from certain views. For example, the body part "ventral midline" can only be identified from images taken from the ventral view. Hence, one of the challenges in automated gene expression pattern annotation is the integration of images with different views. We propose to construct a separate codebook for images with each view and quantize the image groups containing images with multiple views as multiple bags of visual words, one for each view. The bags for multiple views can then be concatenated to annotate the image groups collectively. After representing each image group as a bag-of-words, we propose to apply a multi-label classification method developed recently [17] that can extract shared information among different terms, leading to improved annotation performance.

Feature extraction

The images in the FlyExpress database have been standardized semi-automatically, including alignment. Three common methods for generating local patches on images are based on affine region detectors [22], random sampling [23], and regular patches [24]. We extract dense features on regular patches on the images, since such features are commonly used for aligned images. The radius and spacing of the regular patches are set to 16 pixels in the experiments (Figure 4). Owing to the limitations of image processing techniques, local variations may exist in the images. Thus, we extract invariant features from each regular patch. In this article, we apply the SIFT descriptor [21, 25] to extract local visual features, since it has been applied successfully to other image-related applications [21]. In particular, each feature vector is computed as a 4 × 4 array of orientation histograms with 8 bins each, leading to a SIFT feature vector with 128 (4 × 4 × 8) dimensions for each patch. Note that although invariance to scale and orientation no longer holds, since we do not apply the SIFT interest point detector, the SIFT descriptor remains robust to variations in position, illumination, and viewpoint [25].

Figure 4. Illustration of the image patches on which the SIFT features are extracted. We extract local features on regular patches on the images, where the radius and spacing of the regular patches are set to 16 pixels.
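As a concrete illustration, the sketch below shows how dense SIFT descriptors can be computed on such a regular grid of patches. It is a minimal example, not the authors' implementation; it assumes OpenCV (cv2) is available and that the input images are already aligned as in FlyExpress, and the function name dense_sift is illustrative only.

```python
# A minimal sketch (not the authors' code) of dense SIFT extraction on a
# regular grid of patches, assuming OpenCV is installed and images are aligned.
import cv2

PATCH_RADIUS = 16   # radius of each regular patch, in pixels
PATCH_SPACING = 16  # spacing between patch centers, in pixels

def dense_sift(image_path):
    """Return an (n_patches, 128) array of SIFT descriptors on a regular grid."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    h, w = gray.shape
    # Place keypoints on a regular grid; the keypoint "size" is the patch diameter.
    keypoints = [
        cv2.KeyPoint(float(x), float(y), float(2 * PATCH_RADIUS))
        for y in range(PATCH_RADIUS, h - PATCH_RADIUS + 1, PATCH_SPACING)
        for x in range(PATCH_RADIUS, w - PATCH_RADIUS + 1, PATCH_SPACING)
    ]
    sift = cv2.SIFT_create()
    # compute() evaluates the 4 x 4 x 8 = 128-dimensional SIFT descriptor at
    # these fixed locations, so no interest-point detection (and hence no scale
    # or rotation invariance) is involved, matching the setup described above.
    _, descriptors = sift.compute(gray, keypoints)
    return descriptors
```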

Codebook construction

In this article, we consider images taken from the lateral, dorsal, and ventral views, since the number of images from other, intermediate views is small. For each stage range, we build a separate codebook for the images of each view. Since the visual words of the codebooks are expected to serve as representatives of the embryonic structures, the images used to build the codebooks should contain all the embryonic structures that the system is expected to annotate. Hence, we select codebook images so that each embryonic structure appears in at least a certain number of images. This number is set to 10, 5, and 3 for the codebooks of lateral, dorsal, and ventral images, respectively, based on the total number of images with each view (Table 1). The SIFT features computed from regular patches on the codebook images are then clustered using the k-means algorithm. Since this algorithm depends on the initial centers, we repeat it with ten random initializations and select the run with the smallest summed within-cluster distance. We study the effect of the number of clusters (i.e., the size of the codebook) on performance below and set this number to 2000, 1000, and 500 for lateral, dorsal, and ventral images, respectively.

Table 1 Summary of the statistics of the BDGP images.
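The codebook-construction step described above can be sketched as follows. This is a minimal example, assuming the SIFT descriptors of the codebook images for one view and one stage range have been stacked into a single (n_descriptors, 128) array; scikit-learn's KMeans with n_init=10 plays the role of the ten random restarts, and the names build_codebook and CODEBOOK_SIZE are illustrative.

```python
# A minimal sketch of per-view codebook construction with flat k-means.
from sklearn.cluster import KMeans

CODEBOOK_SIZE = {"lateral": 2000, "dorsal": 1000, "ventral": 500}

def build_codebook(descriptors, view, seed=0):
    """Cluster SIFT descriptors; the cluster centers are the visual words."""
    km = KMeans(n_clusters=CODEBOOK_SIZE[view], n_init=10, random_state=seed)
    km.fit(descriptors)
    # KMeans keeps the run with the smallest inertia (summed within-cluster
    # squared distance) among the n_init random initializations.
    return km.cluster_centers_  # shape: (codebook size, 128)
```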

Pattern representation

After the codebooks for all views are constructed, the images in each group are quantized separately for each view. In particular, features computed on regular patches on images with a certain view are compared with the visual words in the corresponding codebook, and the word closest to the feature in terms of Euclidean distance is used to represent it. The entire image group is then represented as multiple bags of words, one for each view. Since the order of the words in a bag is irrelevant as long as it is fixed, the bag can be represented as a vector counting the number of occurrences of each word in the image group. Let $c_1, \ldots, c_m \in \mathbb{R}^d$ be the $m$ cluster centers (codebook words) and let $v_1, \ldots, v_n \in \mathbb{R}^d$ be the $n$ features extracted from images in a group with the same view, where $d$ is the dimensionality of the local features ($d = 128$ for SIFT). Then the bag-of-words vector $w$ is $m$-dimensional, and the $k$-th component $w_k$ of $w$ is computed as

$$w_k = \sum_{i=1}^{n} \delta\Big(k,\ \arg\min_{1 \le j \le m} \|v_i - c_j\|\Big),$$

where $\delta(a, b) = 1$ if $a = b$, and 0 otherwise, and $\|\cdot\|$ denotes the vector 2-norm. Note that $\sum_{k=1}^{m} w_k = n$, since each feature is assigned to exactly one word.

Based on this design, the vector representations for the individual views can be concatenated so that the images in a group with different views are integrated (Figure 3). Let $w_l$, $w_d$, and $w_v$ be the bag-of-words vectors for the images in a group with the lateral, dorsal, and ventral views, respectively. Then the bag-of-words vector $w$ for the entire image group can be represented as the concatenation

$$w = \big[\,w_l^T,\ w_d^T,\ w_v^T\,\big]^T.$$

To account for the variability in the number of images in each group, we normalize the bag-of-words vector to unit length. Note that since not all image groups contain images from all views, the corresponding sub-vector is a vector of zeros if a specific view is absent.
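The quantization, concatenation, and normalization steps for a single image group can be sketched as below. This is a minimal illustration under the assumption that `codebooks` maps each view to its array of cluster centers and `group_features` maps each view to the stacked SIFT descriptors of the group's images with that view; a missing view simply contributes zeros. The function names are hypothetical.

```python
# A minimal sketch of the multi-view bag-of-words vector for one image group.
import numpy as np
from scipy.spatial.distance import cdist

def bag_of_words(features, codebook):
    """Count how many descriptors in the group are quantized to each visual word."""
    counts = np.zeros(len(codebook))
    if features is not None and len(features) > 0:
        nearest = cdist(features, codebook).argmin(axis=1)  # closest word (Euclidean)
        counts = np.bincount(nearest, minlength=len(codebook)).astype(float)
    return counts

def group_vector(group_features, codebooks, views=("lateral", "dorsal", "ventral")):
    """Concatenate the per-view bags and normalize the result to unit length."""
    w = np.concatenate([
        bag_of_words(group_features.get(view), codebooks[view]) for view in views
    ])
    norm = np.linalg.norm(w)
    return w / norm if norm > 0 else w
```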

Pattern annotation

After representing each image group as a global histogram using the bag-of-words representation, the gene expression pattern image annotation problem is reduced to a multi-label classification problem, since each group of images can be annotated with multiple terms. (We use the terms "label" and "term" interchangeably, since the former is common in the machine learning literature and the latter is more relevant for our application.) Multi-label problems have been studied extensively in the machine learning community, and one simple and popular approach is to construct a binary classifier for each label, resulting in a set of independent binary classification problems. However, this approach fails to capture the correlation information among different labels, which is critical for applications such as gene expression pattern image annotation, where the semantics conveyed by different labels are correlated. To this end, various methods have been developed to exploit the correlation information among different labels so that performance can be improved [17, 26–29]. In [17], a shared-subspace learning framework was proposed to exploit the correlation information in multi-label problems. We apply this formulation to the gene expression pattern image annotation problem in this article.

We are given a set of $n$ input data vectors $x_1, \ldots, x_n \in \mathbb{R}^d$ ($d = 3500$ if all three views are used), which are the bag-of-words representations of $n$ image groups. Let the terms associated with the $n$ image groups be encoded into the label indicator matrix $Y \in \{-1, 1\}^{n \times m}$, where $m$ is the total number of terms, $Y_{i\ell} = 1$ if the $i$-th image group has the $\ell$-th term, and $Y_{i\ell} = -1$ otherwise. In the shared-subspace learning framework proposed in [17], a binary classifier is constructed for each label to discriminate that label from the rest. However, unlike approaches that build the binary classifiers independently, a low-dimensional subspace is assumed to be shared among multiple labels. The predictive function for the $\ell$-th label consists of two parts, one contributed from the original data space and the other derived from the shared subspace, as follows:

$$f_\ell(x) = w_\ell^T x + v_\ell^T \Theta x, \qquad (1)$$

where $w_\ell \in \mathbb{R}^d$ and $v_\ell \in \mathbb{R}^r$ are the weight vectors, $\Theta \in \mathbb{R}^{r \times d}$ is the linear transformation used to parameterize the shared low-dimensional subspace, and $r$ is the dimensionality of the shared subspace. The transformation $\Theta$ is common to all labels, and it has orthonormal rows, that is, $\Theta\Theta^T = I$. In this formulation, the input data are projected onto a low-dimensional subspace by $\Theta$, and this low-dimensional projection is combined with the original representation to produce the final prediction.

In [17], the parameters $\{w_\ell, v_\ell\}_{\ell=1}^{m}$ and $\Theta$ are estimated by minimizing the following regularized empirical risk:

$$\min_{\{w_\ell, v_\ell\},\, \Theta} \ \sum_{\ell=1}^{m} \left( \frac{1}{n} \sum_{i=1}^{n} L\big(f_\ell(x_i),\, Y_{i\ell}\big) + \alpha \|w_\ell\|^2 + \beta \|w_\ell + \Theta^T v_\ell\|^2 \right), \qquad (2)$$

subject to the constraint that $\Theta\Theta^T = I$, where $L$ is some loss function, and $\alpha > 0$ and $\beta > 0$ are the regularization parameters. It can be shown that when the least squares loss is used, the optimization problem in Eq. (2) can be expressed as

$$\min_{U,\, V,\, \Theta} \ \frac{1}{n}\|XU - Y\|_F^2 + \alpha \|U - \Theta^T V\|_F^2 + \beta \|U\|_F^2, \qquad (3)$$

where $X = [x_1, \ldots, x_n]^T \in \mathbb{R}^{n \times d}$ is the data matrix, $\|\cdot\|_F$ denotes the Frobenius norm of a matrix [30], $u_\ell = w_\ell + \Theta^T v_\ell$, $U = [u_1, \ldots, u_m]$, and $V = [v_1, \ldots, v_m]$. The optimal $\Theta^*$ can be obtained by solving a generalized eigenvalue problem, as summarized in the following theorem:

Theorem 1 Let X, Y, and Θ be defined as above. Then the optimal Θ that solves the optimization problem in Eq. (3) can be obtained by solving the following trace maximization problem:

(4)

where $S_1$ and $S_2$ are defined as:

(5)
(6)
(7)

For high-dimensional problems where d is large, an efficient algorithm for computing the optimal Θ is also proposed in [17]. After the optimal Θ is obtained, the optimal values of U and V can be computed in closed form.
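For readers who want a concrete baseline, the sketch below minimizes the least-squares objective in Eq. (3) by simple alternating minimization over U and Θ, with V eliminated as V = ΘU. This is a minimal illustration under the stated assumptions, not the generalized-eigenvalue solver of [17], and all function names are hypothetical.

```python
# A minimal alternating-minimization sketch of the objective in Eq. (3).
# X is the n x d matrix of bag-of-words vectors, Y the n x m label matrix with
# +/-1 entries, and r the shared-subspace dimensionality (assumed r <= m).
import numpy as np

def fit_shared_subspace(X, Y, r, alpha=1.0, beta=1.0, n_iter=50, seed=0):
    n, d = X.shape
    XtX, XtY = X.T @ X, X.T @ Y
    rng = np.random.default_rng(seed)
    Theta = np.linalg.qr(rng.standard_normal((d, r)))[0].T  # r x d, orthonormal rows
    for _ in range(n_iter):
        # U-step: with V = Theta U, the objective in U is quadratic, giving
        # (X^T X + n(alpha+beta) I - n*alpha*Theta^T Theta) U = X^T Y.
        A = XtX + n * (alpha + beta) * np.eye(d) - n * alpha * (Theta.T @ Theta)
        U = np.linalg.solve(A, XtY)                          # d x m, columns u_l
        # Theta-step: maximize tr(Theta U U^T Theta^T) subject to Theta Theta^T = I,
        # i.e. take the top-r left singular vectors of U as the rows of Theta.
        left, _, _ = np.linalg.svd(U, full_matrices=False)
        Theta = left[:, :r].T
    V = Theta @ U                                            # r x m, columns v_l
    return U, V, Theta

def predict_scores(X_new, U):
    # f_l(x) = u_l^T x; a term is assigned when the score exceeds a threshold.
    return X_new @ U
```

Each step solves its subproblem exactly, so the objective decreases monotonically; the solver of [17] reaches the optimum more directly via the eigenvalue problem of Theorem 1.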

Results and discussion

We report and analyze the experimental results on gene expression pattern annotation in this section. We also demonstrate the performance improvements achieved by integrating images with multiple views and study the effect of the codebook size on the annotation performance. The performance for each individual term is also presented and analyzed.

Data description

In our experiments, we use Drosophila gene expression pattern images retrieved from the FlyExpress database [8], which contains standardized versions of the images obtained from the BDGP high-throughput study [1, 2]. The images are standardized semi-automatically, and all images are scaled to 128 × 320 pixels. The embryogenesis of Drosophila was divided into six discrete stage ranges (stages 1–3, 4–6, 7–8, 9–10, 11–12, and 13–16) in the BDGP high-throughput study [1]. Since most of the CV terms are stage-range specific, we annotate the images in each stage range separately. Drosophila embryos are 3D objects, and the FlyExpress database contains 2D images taken from several different views (lateral, dorsal, ventral, and other intermediate views) of the 3D embryos. The number of CV terms, the number of image groups, and the number of images from each view in each stage range are summarized in Table 1. We can observe that most of the images are taken from the lateral view. In stage range 13–16, the number of dorsal images is also comparable to that of the lateral images. We study the performance improvement obtained by using images with different views, and the results show that incorporating dorsal-view images improves performance consistently, especially in stage range 13–16, where the number of dorsal images is large. In contrast, the integration of ventral images results in only a marginal performance improvement at the price of increased computational cost, since the number of ventral images is small. Hence, we only use the lateral and dorsal images in evaluating the relative performance of the compared methods.

Evaluation of annotation performance

We apply the multi-label formulation proposed in [17] to annotate the gene expression pattern images. To demonstrate the effectiveness of this formulation in exploiting the correlation information among different labels, we also report the annotation performance achieved by one-against-rest linear support vector machines (SVM), in which each linear SVM builds a decision boundary between image groups with and without one particular term. Note that in this method the labels are modeled separately, and hence no correlation information is captured. To compare the proposed method with existing approaches for this task, we report the annotation performance of a prior method [31], which used the pyramid match kernel (PMK) algorithm [32–34] to construct the kernel between two sets of feature vectors extracted from two sets of images. We report the performance of kernels constructed from the SIFT descriptor and that of composite kernels combined from multiple kernels as in [31]. In the case of composite kernels, we apply the three kernel combination schemes (i.e., star, clique, and kCCA) and report the best performance on each data set. Note that the method proposed in [12] requires that the training set contain embryos that are annotated individually, and it has been shown [31] that such a requirement leads to low performance when applied to the BDGP data, in which the images are annotated in small groups. Hence, we do not report these results. In the following, the multi-label formulation proposed in [17] is denoted as MLLS, and the one-against-rest SVM is denoted as SVM. The pyramid match kernel approaches based on the SIFT and the composite features are denoted as PMKSIFT and PMKcomp, respectively. All of the model parameters are tuned using 5-fold cross validation in the experiments.

From Table 1 we can see that the first stage range (1–3) is annotated with only two terms, so we do not report results for this stage range. In the other five stage ranges, we remove terms that appear in fewer than 5 training image groups in a stage range, which yields data sets in which at most 60 terms need to be considered in every case. The two primary reasons for this decision are that (1) terms appearing in too few image groups are statistically too weak to be learned effectively, and (2) we use 5-fold cross-validation to tune the model parameters, so each term should appear in each fold at least once. Therefore, the maximum numbers of terms reported in Table 2, Table 3, Table 4, Table 5, and Table 6 represent the "all terms" test.

Table 2 Annotation performance in terms of AUC, macro F1, micro F1, sensitivity, and specificity for image groups in stage range 4–6.
Table 3 Annotation performance in terms of AUC, macro F1, micro F1, sensitivity, and specificity for image groups in stage range 7–8.
Table 4 Annotation performance in terms of AUC, macro F1, micro F1, sensitivity, and specificity for image groups in stage range 9–10.
Table 5 Annotation performance in terms of AUC, macro F1, micro F1, sensitivity, and specificity for image groups in stage range 11–12.
Table 6 Annotation performance in terms of AUC, macro F1, micro F1, sensitivity, and specificity for image groups in stage range 13–16.

The experiments are designed to examine how the accuracy of our annotation method changes as an increasingly large set of vocabulary terms is used. In each experiment, we begin with the 10 terms that appear in the largest number of image groups and then add terms in decreasing order of their frequencies. By virtue of this design, experiments with 10 terms should show higher performance than those with, for example, 50 terms, because the 10 most frequent terms appear more often in the training image groups than the less frequent terms do. The extracted data set is partitioned into training and test sets using a 1:1 ratio for each term, and the training data are used to construct the classification model.
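The term-selection step can be sketched as follows. This is an illustrative reconstruction only (the exact per-term balancing of the 1:1 split is not reproduced), assuming Y is the group-by-term binary annotation matrix for one stage range; the function names are hypothetical.

```python
# A minimal sketch of term selection by frequency and a random 1:1 group split.
import numpy as np

def select_terms(Y, k, min_groups=5):
    """Return column indices of the k most frequent terms that annotate at
    least `min_groups` image groups."""
    freq = (Y == 1).sum(axis=0)            # number of annotated groups per term
    keep = np.where(freq >= min_groups)[0]
    order = keep[np.argsort(-freq[keep])]  # most frequent terms first
    return order[:k]

def split_half(n_groups, seed=0):
    """Randomly partition the image groups into training and test halves."""
    perm = np.random.default_rng(seed).permutation(n_groups)
    half = n_groups // 2
    return perm[:half], perm[half:]
```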

The agreement between the predicted annotations and the expert annotations provided by human curators is measured using the area under the receiver operating characteristic (ROC) curve (AUC) [35], the F1 measure [36], sensitivity, and specificity. For AUC, the value for each term is computed and the average across terms is reported. For the F1 measure, there are two ways to average performance across multiple terms, macro-averaged F1 and micro-averaged F1, and we report both. For each data set, the training and test sets are randomly generated 30 times, and the average performance and standard deviations are reported in Table 2, Table 3, Table 4, Table 5, and Table 6. To compare the performance of all methods across different values of sensitivity and specificity, we show the ROC curves of 9 randomly selected terms on two data sets from stage ranges 11–12 and 13–16 in Figures 5 and 6.
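A minimal sketch of these measures is given below, assuming Y_true is the {0,1} group-by-term annotation matrix of the test set, S the real-valued prediction scores, and Y_pred the thresholded {0,1} predictions; the function name evaluate is illustrative.

```python
# A minimal sketch of the reported evaluation measures.
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score

def evaluate(Y_true, S, Y_pred):
    # AUC is computed per term and averaged; terms with only one class in the
    # test set are skipped because AUC is undefined for them.
    auc_per_term = [
        roc_auc_score(Y_true[:, j], S[:, j])
        for j in range(Y_true.shape[1])
        if len(np.unique(Y_true[:, j])) == 2
    ]
    return {
        "AUC": float(np.mean(auc_per_term)),
        "macro F1": f1_score(Y_true, Y_pred, average="macro"),
        "micro F1": f1_score(Y_true, Y_pred, average="micro"),
    }
```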

Figure 5. The ROC curves for 9 randomly selected terms on a data set from stage range 11–12. Each panel shows the ROC curves for one term. The circles on the curves mark the corresponding decision points, which are tuned on the training set based on the F1 score.

Figure 6. The ROC curves for 9 randomly selected terms on a data set from stage range 13–16. Each panel shows the ROC curves for one term. The circles on the curves mark the corresponding decision points, which are tuned on the training set based on the F1 score.

We can observe from Table 2, Table 3, Table 4, Table 5, and Table 6 and Figures 5 and 6 that the approaches based on the bag-of-words representation (MLLS and SVM) consistently outperform the PMK-based approaches (PMKSIFT and PMKcomp). Note that since both the shared-subspace formulation and SVM are based on the bag-of-words representation, the benefit of this representation is best seen by comparing both of them against the two PMK-based approaches. In particular, MLLS outperforms PMKSIFT and PMKcomp on all 18 data sets in terms of all three performance measures (AUC, macro F1, and micro F1). In all cases, the performance improvements tend to be larger for the two F1 measures than for AUC. It can also be observed from Figures 5 and 6 that the ROC curves for SVM and the shared-subspace formulation are always above those based on the pyramid match algorithm. This indicates that both SVM and the shared-subspace formulation outperform the previous methods across all classification thresholds. A similar trend is observed on the other data sets, but their detailed results are omitted due to space constraints. This shows that the bag-of-words scheme is more effective in representing the image groups than the PMK-based approach. Moreover, we can observe that MLLS outperforms SVM on most of the data sets for all three measures. This demonstrates that the shared-subspace multi-label formulation can improve performance by capturing the correlation information among different labels. Among the PMK-based approaches, PMKcomp outperforms PMKSIFT on all of the data sets. This is consistent with the prior results in [31] showing that the integration of multiple kernel matrices derived from different features improves performance.

Performance of individual terms

To evaluate the relative performance of the individual terms, we report the AUC values achieved by the proposed formulation on 6 data sets in Figures 7 and 8. One major outcome of our analysis is that some terms are consistently assigned to the wrong image groups. For example, the terms "hindgut proper primordium", "Malpighian tubule primordium", "garland cell primordium", "salivary gland primordium", and "visceral muscle primordium" in stage range 11–12 achieve low AUC on all three data sets. Similarly, the terms "ring gland", "embryonic anal pad", "embryonic proventriculus", "gonad", and "embryonic/larval garland cell" achieve low AUC on all three data sets in stage range 13–16. For most of these terms, the low performance is caused by the fact that they appear in only a few image groups. Such low frequencies result in weak learning for statistical reasons. Therefore, the number of images available for training our method will need to be increased to improve performance on these terms.

Figure 7. The AUC of individual terms on three data sets from stage range 11–12. The three panels, from top to bottom, show the performance on data sets with 30, 40, and 50 terms, respectively.

Figure 8. The AUC of individual terms on three data sets from stage range 13–16. The three panels, from top to bottom, show the performance on data sets with 40, 50, and 60 terms, respectively.

Integration of images with multiple views

To evaluate the effect of integrating images with multiple views, we report the annotation performance when using only lateral images, lateral and dorsal images, and lateral, dorsal, and ventral images. In particular, we extract six data sets from stage range 13–16 with the number of terms ranging from 10 to 60 in steps of 10. The average performance in terms of AUC, macro F1, and micro F1 achieved by MLLS over 30 random trials is shown in Figure 9. We observe that performance can be improved significantly by incorporating the dorsal-view images. In contrast, incorporating ventral images results in only a slight performance improvement. In the other stage ranges, the integration of images with multiple views either improves performance or keeps it comparable. This may be because the dorsal-view images are most informative for annotating embryos in stage range 13–16, as large morphological movements occur on the dorsal side in this stage range. Similar trends are observed when the SVM classifier is applied.

Figure 9. Comparison of annotation performance achieved by MLLS when images from different views (lateral, lateral+dorsal, and lateral+dorsal+ventral) are used on 6 data sets from stage range 13–16. In each panel, the x-axis denotes the data sets with different numbers of terms. For each data set, 30 random partitions of the training and test sets are generated and the averaged performance is reported.

Effect of codebook size

The size of the codebook is a tunable parameter, and in this experiment we evaluate its effect on annotation performance using a subset of lateral images from stage range 13–16 with 60 terms. In particular, the size of the codebook for this data set is increased gradually from 500 to 4000 in steps of 500, and the performance of MLLS and SVM is plotted in Figure 10. In most cases the performance improves with a larger codebook, but it can also decrease in certain cases, such as the performance of MLLS measured by macro F1. In general, the performance does not change dramatically with codebook size. Hence, we set the codebook size to 2000 for lateral images in the previous experiments to balance performance and computational cost. An interesting observation from Figure 10 is that the performance differences between MLLS and SVM tend to be larger for small codebook sizes. This may reflect the fact that small codebooks cannot capture the complex patterns in image groups. This representation insufficiency can be compensated for effectively by sharing information among labels using the shared-subspace multi-label formulation. For large codebook sizes, the performance of MLLS and SVM tends to be close.

Figure 10. The change in performance as the codebook size increases gradually from 500 to 4000 in steps of 500 on a data set from stage range 13–16 with 60 terms. In each case, the average performance and standard deviation over 30 random partitions of the training and test sets are shown. A similar trend has been observed in other stage ranges.

Conclusion

In this article we present a computational method for automated annotation of Drosophila gene expression pattern images. This method represents image groups using the bag-of-words approach and annotates the groups using a shared-subspace multi-label formulation. The proposed method annotates images in groups, and hence retains the image group membership information as in the original BDGP study. Moreover, multiple sources of information conveyed by images with different views can be integrated naturally in the proposed method. Results on images from the FlyExpress database demonstrate the effectiveness of the proposed method.

In constructing the bag-of-words representation in this article, we only use SIFT features. Prior results on other image-related applications show that integrating multiple feature types may improve performance [37]. We plan to extend the proposed method to integrate multiple feature types in the future. In addition, the bag-of-words representation is obtained by the hard assignment approach, in which a local feature vector is assigned only to the closest visual word. A recent study [38] shows that the soft assignment approach, which assigns each feature vector to multiple visual words based on their distances, usually results in improved performance. We will explore this in the future.

References

  1. Tomancak P, Beaton A, Weiszmann R, Kwan E, Shu S, Lewis SE, Richards S, Ashburner M, Hartenstein V, Celniker SE, Rubin GM: Systematic determination of patterns of gene expression during Drosophila embryogenesis. Genome Biology 2002, 3(12).

  2. Tomancak P, Berman B, Beaton A, Weiszmann R, Kwan E, Hartenstein V, Celniker S, Rubin G: Global analysis of patterns of gene expression during Drosophila embryogenesis. Genome Biology 2007, 8(7):R145.

  3. Lécuyer E, Yoshida H, Parthasarathy N, Alm C, Babak T, Cerovina T, Hughes T, Tomancak P, Krause H: Global analysis of mRNA localization reveals a prominent role in organizing cellular architecture and function. Cell 2007, 131:174–187.

  4. Lein ES, et al.: Genome-wide atlas of gene expression in the adult mouse brain. Nature 2006, 445:168–176.

  5. Kumar S, Jayaraman K, Panchanathan S, Gurunathan R, Marti-Subirana A, Newfeld SJ: BEST: a novel computational approach for comparing gene expression patterns from early stages of Drosophila melanogaster development. Genetics 2002, 169:2037–2047.

  6. Samsonova AA, Niranjan M, Russell S, Brazma A: Prediction of gene expression in embryonic structures of Drosophila melanogaster. PLoS Comput Biol 2007, 3(7):e144.

  7. Costa I, Krause R, Opitz L, Schliep A: Semi-supervised learning for the identification of syn-expressed genes from fused microarray and in situ image data. BMC Bioinformatics 2007, 8(Suppl 10):S3.

  8. Van Emden B, Ramos H, Panchanathan S, Newfeld S, Kumar S: FlyExpress: an image-matching web-tool for finding genes with overlapping patterns of expression in Drosophila embryos. 2006. [http://www.flyexpress.net]

  9. Gurunathan R, Emden BV, Panchanathan S, Kumar S: Identifying spatially similar gene expression patterns in early stage fruit fly embryo images: binary feature versus invariant moment digital representations. BMC Bioinformatics 2004, 5(202):13.

  10. Ye J, Chen J, Li Q, Kumar S: Classification of Drosophila embryonic developmental stage range based on gene expression pattern images. Comput Syst Bioinformatics Conf 2006, 293–298.

  11. Ye J, Chen J, Janardan R, Kumar S: Developmental stage annotation of Drosophila gene expression pattern images via an entire solution path for LDA. ACM Trans Knowl Discov Data 2008, 2(1).

  12. Zhou J, Peng H: Automatic recognition and annotation of gene expression patterns of fly embryos. Bioinformatics 2007, 23(5):589–596.

  13. Jurie F, Triggs B: Creating efficient codebooks for visual recognition. Proceedings of the Tenth IEEE International Conference on Computer Vision 2005, 604–610.

  14. Moosmann F, Nowak E, Jurie F: Randomized clustering forests for image classification. IEEE Trans Pattern Anal Mach Intell 2008, 30(9):1632–1646.

  15. Sivic J, Zisserman A: Efficient visual search of videos cast as text retrieval. IEEE Trans Pattern Anal Mach Intell 2009, 31(4):591–606.

  16. Marée R, Geurts P, Wehenkel L: Random subwindows and extremely randomized trees for image classification in cell biology. BMC Cell Biology 2007, 8:S2.

  17. Ji S, Tang L, Yu S, Ye J: Extracting shared subspace for multi-label classification. Proceedings of the Fourteenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 2008, 381–389.

  18. Sivic J, Zisserman A: Efficient visual search for objects in videos. Proceedings of the IEEE 2008, 96(4):548–566.

  19. Philbin J, Chum O, Isard M, Sivic J, Zisserman A: Object retrieval with large vocabularies and fast spatial matching. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2007.

  20. Nilsback ME, Zisserman A: A visual vocabulary for flower classification. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2006, 2:1447–1454.

  21. Mikolajczyk K, Schmid C: A performance evaluation of local descriptors. IEEE Trans Pattern Anal Mach Intell 2005, 27(10):1615–1630.

  22. Mikolajczyk K, Tuytelaars T, Schmid C, Zisserman A, Matas J, Schaffalitzky F, Kadir T, Van Gool L: A comparison of affine region detectors. International Journal of Computer Vision 2005, 65(1–2):43–72.

  23. Nowak E, Jurie F, Triggs B: Sampling strategies for bag-of-features image classification. Proceedings of the 2006 European Conference on Computer Vision 2006, 490–503.

  24. Fei-Fei L, Perona P: A Bayesian hierarchical model for learning natural scene categories. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Washington, DC, USA: IEEE Computer Society; 2005:524–531.

  25. Lowe DG: Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision 2004, 60(2):91–110.

  26. Zhang ML, Zhou ZH: Multilabel neural networks with applications to functional genomics and text categorization. IEEE Transactions on Knowledge and Data Engineering 2006, 18(10):1338–1351.

  27. Zhang ML, Zhou ZH: ML-KNN: a lazy learning approach to multi-label learning. Pattern Recognition 2007, 40(7):2038–2048.

  28. Zhou ZH, Zhang ML: Multi-instance multi-label learning with application to scene classification. In Advances in Neural Information Processing Systems 19. Edited by Schölkopf B, Platt J, Hoffman T. Cambridge, MA: MIT Press; 2007:1609–1616.

  29. Sun L, Ji S, Ye J: Hypergraph spectral learning for multi-label classification. Proceedings of the Fourteenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 2008, 668–676.

  30. Golub GH, Van Loan CF: Matrix Computations. Third edition. Baltimore, Maryland, USA: The Johns Hopkins University Press; 1996.

  31. Ji S, Sun L, Jin R, Kumar S, Ye J: Automated annotation of Drosophila gene expression patterns using a controlled vocabulary. Bioinformatics 2008, 24(17):1881–1888.

  32. Grauman K, Darrell T: The pyramid match kernel: efficient learning with sets of features. Journal of Machine Learning Research 2007, 8:725–760.

  33. Grauman K, Darrell T: Approximate correspondences in high dimensions. In Advances in Neural Information Processing Systems. Edited by Schölkopf B, Platt J, Hoffman T. Cambridge, MA: MIT Press; 2007:505–512.

  34. Lazebnik S, Schmid C, Ponce J: Beyond bags of features: spatial pyramid matching for recognizing natural scene categories. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Washington, DC, USA: IEEE Computer Society; 2006:2169–2178.

  35. Gribskov M, Robinson NL: Use of receiver operating characteristic (ROC) analysis to evaluate sequence matching. Comput Chem 1996, 20:25–33.

  36. Datta R, Joshi D, Li J, Wang JZ: Image retrieval: ideas, influences, and trends of the new age. ACM Computing Surveys 2008, 40(2):1–60.

  37. Zhang J, Marszalek M, Lazebnik S, Schmid C: Local features and kernels for classification of texture and object categories: a comprehensive study. International Journal of Computer Vision 2007, 73(2):213–238.

  38. Philbin J, Chum O, Isard M, Sivic J, Zisserman A: Lost in quantization: improving particular object retrieval in large scale image databases. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2008.


Acknowledgements

We thank Bernard Van Emden for help with access to the gene expression data. This work is supported in part by the National Institutes of Health grant No. HG002516, the National Science Foundation grant No. IIS-0612069, the National Science Foundation of China grant Nos. 60635030 and 60721002, and Jiangsu Science Foundation grant No. BK2008018.

Author information

Corresponding author

Correspondence to Jieping Ye.

Additional information

Authors' contributions

All authors analyzed the results and wrote the paper. SJ designed the methodology, implemented the programs, and drafted the manuscript. SK and JY supervised the project and guided the implementation. All authors have read and approved the final manuscript.


Rights and permissions

Open Access. This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Ji, S., Li, YX., Zhou, ZH. et al. A bag-of-words approach for Drosophila gene expression pattern annotation. BMC Bioinformatics 10, 119 (2009). https://doi.org/10.1186/1471-2105-10-119
