Automated identification of cell-type-specific genes in the mouse brain by image computing of expression patterns
© Li et al.; licensee BioMed Central Ltd. 2014
Received: 16 December 2013
Accepted: 29 May 2014
Published: 20 June 2014
Differential gene expression in cells of the mammalian brain gives rise to their morphological, connectional, and functional diversity. A wide variety of studies have shown that certain genes are expressed only in specific cell-types. Analysis of cell-type-specific gene expression patterns can provide insights into the relationship between genes, connectivity, brain regions, and cell-types. However, automated methods for identifying cell-type-specific genes are currently lacking.
Here, we describe a set of computational methods for identifying cell-type-specific genes in the mouse brain by automated image computing of in situ hybridization (ISH) expression patterns. We applied invariant image feature descriptors to capture local gene expression information from cellular-resolution ISH images. We then built image-level representations by applying vector quantization on the image descriptors. We employed regularized learning methods for classifying genes specifically expressed in different brain cell-types. These methods can also rank image features based on their discriminative power. We used a data set of 2,872 genes from the Allen Brain Atlas in the experiments. Results showed that our methods are predictive of cell-type-specificity of genes. Our classifiers achieved AUC values of approximately 87% when the enrichment level is set to 20. In addition, we showed that the highly-ranked image features captured the relationship between cell-types.
Overall, our results showed that automated image computing methods could potentially be used to identify cell-type-specific genes in the mouse brain.
Although all cells in the brain are genetically identical, they can develop into different cell-types that are distinct in morphology, connectivity, and function. For example, the mammalian brain contains an enormous number of neuronal and glial cells. The neuronal cells are responsible for information communication and processing, while the glial cells are traditionally considered to provide supportive functions. Cell-type diversity results from the different sets of molecules that cells of each type contain, which is in turn due to the differential expression and regulation of genes in the genome. Thus, analysis of gene expression patterns provides an informative way of studying cellular diversity [1, 2]. In these studies, it has been commonly observed that some genes are specifically expressed in certain cell-types. These genes serve as cell-type markers and might define cell-type-specific transcriptional programs [3, 4]. A complete catalogue of cell-type-specific genes would be valuable in elucidating the relationship between gene expression patterns, connectivity, brain regions, and cell-types [5–9].
Currently, both experimental and computational approaches are used to study cell-type-specific gene expression patterns. Experimental methods involve separating cells of different types from heterogeneous tissues and measuring gene expression levels in the separated populations using microarrays. Along this line, multiple techniques have been developed for tissue processing; they, however, suffer from different limitations. As an alternative, current computational methods identify cell-type-specific genes by comparing expression profiles captured by either microarrays [10–12] or in situ hybridization (ISH) voxel-level data. These approaches lack either the fine spatial resolution or the high-order expression characteristics that are needed for resolving cell-type-specificity.
Our results showed that the high-level representations computed directly from cellular-resolution ISH images are predictive of cell-type-specificity of genes in major brain cell-types. We used the area under the receiver operating characteristic curve (AUC) as the performance measure [15, 16]. We achieved AUC values of approximately 87% in five out of the six tasks when the threshold value for fold enrichment is set to 20, a recommended value based on experimental data. Our results also showed that the image-based invariant representations for ISH images generally yielded better performance than voxel-based features in discriminating genes enriched in different brain cell-types. The average AUC value given by our image-based approach on data sets with >1.5-fold enrichment was approximately 75%, while the voxel-based features achieved an average AUC value of 65%. Visualization of highly-ranked features showed that they corresponded to the locations that are most discriminative among brain cell-types. We also compared the performance of different tasks to investigate the intrinsic relationship between various brain cell-types. Our results showed that the relative performance differences among the tasks are generally consistent with current knowledge of cell-type functions.
Materials and methods
Allen mouse brain atlas
The Allen Mouse Brain Atlas provides genome-wide, three-dimensional, high-resolution in situ hybridization (ISH) gene expression data for approximately 20,000 genes in sagittal sections of 56-day-old male mice. In addition, coronal sections at a higher resolution are available for a set of about 4,000 genes showing restricted expression patterns. For each experiment, a set of high-resolution, two-dimensional image series is generated. These image slices are subsequently processed by an informatics data processing pipeline to generate grid-level voxel data in the Allen Reference Atlas space. The output of the pipeline is quantified expression values at the grid voxel level [19, 20]. The voxel-level data have been used to identify cell-type-specific genes based on correlation search. Note that the selection of coronal genes was biased toward genes enriched in cortical and/or hippocampal regions.
ISH image feature extraction
To fully exploit the cellular-resolution ISH images and extract high-order information for classification, we computed features directly from the original ISH images. The ISH images we used were taken from different mouse brains. Thus, the shape and size of the brain and various anatomical structures might vary from image to image. Additionally, tissue processing and image acquisition might also introduce distortions into the images. To account for these image-level variations, we employed the scale-invariant feature transform (SIFT) descriptor to capture expression patterns on local patches of ISH images [22, 23]. This approach produces robust representations that are invariant to various distortions of the images. To compute SIFT features, an image is first convolved with a sequence of Gaussian filters of different scales to produce difference-of-Gaussian (DOG) images. Stable key-point locations are then detected from these DOG images. A set of orientation histograms over a 4×4 grid of neighborhoods at each location is subsequently computed; each histogram contains 8 orientation bins recording the pixel gradients in 8 directions, yielding a 4×4×8=128-dimensional descriptor.
In many current image classification systems, key-point extractors are typically not used [24, 25]. Instead, SIFT features are commonly computed on a regularly spaced grid over the images, leading to densely populated SIFT descriptors. Following [26, 27], we also applied dense SIFT features to the ISH images. This generated approximately 1 million SIFT feature vectors from each ISH image section. In our work, we used the most medial slice of each sagittal section image series. For the coronal section image series, we used the slice with the median Section ID, which corresponds to the middle location between the most posterior section showing the cerebellum and hindbrain and the most anterior section showing the olfactory bulb. Using more slices would incur high computational cost; in addition, it has been shown in  that performance may not improve when more slices are used. In the Allen Mouse Brain Atlas, a detection algorithm was applied to each ISH image to create a mask identifying the pixels that correspond to gene expression. Foreground pixels are thus considered to correspond to gene expression, while background pixels are not. Only the SIFT descriptors computed from the foreground pixels were used in our study.
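The dense, foreground-restricted feature extraction described above can be sketched as follows. This is a minimal illustration, not the actual pipeline (which used standard SIFT descriptors): `sift_like_descriptor` and `dense_descriptors` are hypothetical names, and the descriptor omits SIFT's Gaussian weighting and trilinear interpolation, keeping only the 4×4 grid of 8-bin gradient-orientation histograms.

```python
import numpy as np

def sift_like_descriptor(patch, n_cells=4, n_bins=8):
    """Simplified SIFT-style descriptor: a 4x4 grid of 8-bin
    gradient-orientation histograms -> 128-dimensional vector."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ori = np.arctan2(gy, gx)                      # orientation in [-pi, pi]
    bins = ((ori + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    h, w = patch.shape
    ch, cw = h // n_cells, w // n_cells
    desc = np.zeros((n_cells, n_cells, n_bins))
    for i in range(n_cells):
        for j in range(n_cells):
            cell_b = bins[i*ch:(i+1)*ch, j*cw:(j+1)*cw].ravel()
            cell_m = mag[i*ch:(i+1)*ch, j*cw:(j+1)*cw].ravel()
            np.add.at(desc[i, j], cell_b, cell_m)  # magnitude-weighted bins
    desc = desc.ravel()
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc

def dense_descriptors(image, mask, patch=16, step=8):
    """Extract descriptors on a regular grid, keeping only patches
    centred on foreground (expression-detected) pixels."""
    descs = []
    h, w = image.shape
    for y in range(0, h - patch, step):
        for x in range(0, w - patch, step):
            if mask[y + patch // 2, x + patch // 2]:
                descs.append(sift_like_descriptor(image[y:y+patch, x:x+patch]))
    return np.array(descs)
```

Applied to a full-resolution ISH section, a grid of this density is what produces on the order of a million descriptors per image before foreground masking.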
High-level feature construction
In order to derive an image-level representation for cell-type-specific gene classification, we employed the bag-of-words method to construct ISH image representations [29–31]. To construct a visual codebook, we randomly sampled the non-zero descriptors of every image to obtain a descriptor pool of size 100,000. In some of the classification tasks, the numbers of images in the two classes differ significantly. To take this situation into account, we equalized the number of descriptors chosen from both classes. That is, approximately half of the sampled descriptors were from each of the two classes. The descriptors from each class were equally distributed among all images in that class.
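The class-balanced sampling scheme above can be sketched roughly as follows, assuming each image is represented by an array of local descriptors together with a binary class label; `build_descriptor_pool` is a hypothetical name for illustration.

```python
import numpy as np

def build_descriptor_pool(images_desc, labels, pool_size=100_000, seed=0):
    """Sample a descriptor pool with the two classes equalized:
    about half the pool from each class, spread evenly over that
    class's images, using non-zero descriptors only."""
    rng = np.random.default_rng(seed)
    pool = []
    for cls in (0, 1):
        idx = [i for i, y in enumerate(labels) if y == cls]
        per_image = (pool_size // 2) // len(idx)   # even split over images
        for i in idx:
            d = images_desc[i]
            d = d[np.any(d != 0, axis=1)]          # drop all-zero descriptors
            take = min(per_image, len(d))
            sel = rng.choice(len(d), size=take, replace=False)
            pool.append(d[sel])
    return np.vstack(pool)
```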
We applied the K-means algorithm to cluster the SIFT descriptors in this pool. Since the K-means algorithm depends on the initialization, we repeated the algorithm three times with random initializations and used the one with the smallest summed within-cluster distance. The cluster centers were considered as “visual words” in the codebook. We then represented an entire image as a global histogram counting the number of occurrences of each visual word in the codebook. The size of the resulting histogram is equal to the number of words in the codebook, which is also the number of clusters used in the clustering algorithm.
Formally, let d_1,…,d_m denote the descriptors of an image and c_1,…,c_k the visual words (cluster centers) of the codebook. The j-th entry of the histogram is computed as

h(j) = ∑_{i=1}^{m} δ(j, arg min_{ℓ} ||d_i − c_ℓ||),

where δ(a,b)=1 if a=b, and 0 otherwise, and ||·|| denotes the vector ℓ2-norm.
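The codebook construction and quantization just described can be sketched in NumPy as follows. This is a simplified stand-in for the actual implementation; `kmeans` and `bow_histogram` are our own names, and the restart-selection rule (smallest summed within-cluster distance) follows the text.

```python
import numpy as np

def kmeans(X, k, n_restarts=3, n_iter=50, seed=0):
    """K-means with several random initializations; the run with the
    smallest summed within-cluster distance is kept."""
    rng = np.random.default_rng(seed)
    best_cost, best_centers = np.inf, None
    for _ in range(n_restarts):
        centers = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(n_iter):
            dist = np.linalg.norm(X[:, None] - centers[None], axis=2)
            assign = dist.argmin(axis=1)
            for j in range(k):
                if np.any(assign == j):
                    centers[j] = X[assign == j].mean(axis=0)
        dist = np.linalg.norm(X[:, None] - centers[None], axis=2)
        cost = dist.min(axis=1).sum()              # summed within-cluster distance
        if cost < best_cost:
            best_cost, best_centers = cost, centers
    return best_centers

def bow_histogram(descs, centers):
    """Global histogram: count how often each visual word (cluster
    centre) is the nearest one to a descriptor."""
    dist = np.linalg.norm(descs[:, None] - centers[None], axis=2)
    return np.bincount(dist.argmin(axis=1), minlength=len(centers))
```

The histogram length equals the codebook size k, matching the number of clusters used in the clustering step.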
To capture spatial expression patterns at different scales, we constructed four separate codebooks for images at four different resolutions. We then quantized each image using multiple bags of visual words, one for each resolution. The representations for the different resolutions were then concatenated to form a single representation for the image. Following , we fixed the number of clusters to 500 in the reported results. To account for the zero descriptors, we introduced an extra dimension in the histogram to record the number of zero descriptors for each image at each resolution. Eventually, an ISH image was represented by a high-level feature vector of dimension p=(500+1)×4=2004. Note that the bag-of-words representation has been successfully applied to represent biological images in the past [26, 32]. In addition, the local binary pattern (LBP) features have been used in  to identify genes expressed in cerebellar layers. We compared the LBP features with the bag-of-words features and observed that the latter performed better for the problem studied in this work.
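The multi-resolution concatenation, including the extra bin for zero descriptors, can be sketched as follows (a simplified illustration; `image_representation` is a hypothetical name, and the inputs are one descriptor array and one codebook per resolution).

```python
import numpy as np

def image_representation(per_res_descs, per_res_centers):
    """Concatenate one (k+1)-dimensional histogram per resolution:
    k visual-word counts plus one extra bin counting the all-zero
    descriptors at that resolution."""
    parts = []
    for descs, centers in zip(per_res_descs, per_res_centers):
        zero = np.all(descs == 0, axis=1)
        nz = descs[~zero]
        dist = np.linalg.norm(nz[:, None] - centers[None], axis=2)
        hist = np.bincount(dist.argmin(axis=1), minlength=len(centers))
        parts.append(np.append(hist, zero.sum()))  # extra zero-descriptor bin
    return np.concatenate(parts)
```

With four resolutions and 500-word codebooks this yields the (500+1)×4=2004-dimensional vector used in the paper.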
Cell-type-specific gene classification
In this work, we trained and evaluated our methods based on the genes enriched in astrocytes, neurons, and oligodendrocytes. For each gene studied in , we checked the availability of ISH images from the Allen Mouse Brain Atlas. This yielded a database consisting of 6,660 ISH image series representing 2,872 genes in total. Note that a gene in this database could be associated with more than one cell-type, though this is uncommon.
Table 1: Statistics on the numbers of genes and images for each of the six tasks with different thresholds for fold enrichment (A = astrocyte, N = neuron, O = oligodendrocyte, Neg. = negative set; each entry gives positive-class vs. negative-class counts)

Fold enrichment >1.5
  Task          Number of genes    Number of images
  A vs. Neg.    711 vs. 939        775 vs. 981
  N vs. Neg.    775 vs. 939        844 vs. 981
  O vs. Neg.    541 vs. 939        577 vs. 981
  O vs. A       501 vs. 671        532 vs. 730
  A vs. N       690 vs. 754        754 vs. 823
  N vs. O       753 vs. 519        819 vs. 552

Fold enrichment >10
  Task          Number of genes    Number of images
  A vs. Neg.    72 vs. 939         80 vs. 981
  N vs. Neg.    178 vs. 939        209 vs. 981
  O vs. Neg.    47 vs. 939         50 vs. 981
  O vs. A       47 vs. 72          50 vs. 80
  A vs. N       72 vs. 178         80 vs. 209
  N vs. O       178 vs. 47         209 vs. 50

Fold enrichment >20
  Task          Number of genes    Number of images
  A vs. Neg.    26 vs. 939         31 vs. 981
  N vs. Neg.    67 vs. 939         78 vs. 981
  O vs. Neg.    17 vs. 939         18 vs. 981
  O vs. A       17 vs. 26          18 vs. 31
  A vs. N       26 vs. 67          31 vs. 78
  N vs. O       67 vs. 17          78 vs. 18
Classification and image feature selection
Given a set of n training samples {(x_i, y_i)}, where x_i is the feature vector of the i-th image and y_i is its class label, we learn a linear classifier by solving the regularized optimization problem

min_{w,b} ∑_{i=1}^{n} L(y_i, w^T x_i + b) + λ Ω(w),

where w and b denote the model weight vector and bias term, respectively, L denotes the loss function, Ω(w) denotes the regularization term, and λ is the regularization parameter.
In this study, we employed the logistic regression loss function, as this loss yielded competitive performance in classification tasks [34, 35]. The ℓ2-norm regularization Ω(w)=∥w∥₂² was used when making predictions. Additionally, we were interested in identifying the most important image features that contributed to the classification performance. This can be achieved by employing the ℓ1-norm regularization Ω(w)=∥w∥₁, which drives some entries of w to zero, leading to feature selection [37–42].
To make the ℓ1-norm based feature selection robust and stable, we employed an ensemble learning technique known as stability selection [43, 44]. In this technique, a set of λ values was selected, and data sets of size ⌊n/2⌋ were repeatedly sampled, without replacement, from the original data of size n. For each sampled data set, a set of models, corresponding to the different λ values, was trained. The selection probability of each feature under a particular λ value was then computed as the fraction of the random samples in which this feature received a non-zero weight. Finally, the maximum selection probability across the λ values was computed and used to rank the features.
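The stability-selection procedure can be sketched as follows. This is a simplified stand-in, not the paper's implementation (which used dedicated solvers such as SLEP/LIBLINEAR): the ℓ1-regularized logistic regression here is solved by plain proximal gradient descent, and `l1_logreg` and `stability_selection` are our own names.

```python
import numpy as np

def l1_logreg(X, y, lam, n_iter=300, lr=0.1):
    """l1-regularized logistic regression via proximal gradient descent
    (soft-thresholding); y in {0, 1}. The bias term is left unpenalized."""
    n, p = X.shape
    w, b = np.zeros(p), 0.0
    for _ in range(n_iter):
        z = 1.0 / (1.0 + np.exp(-(X @ w + b)))     # predicted probabilities
        g = X.T @ (z - y) / n                      # gradient of logistic loss
        w = w - lr * g
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # prox of l1
        b -= lr * np.mean(z - y)
    return w, b

def stability_selection(X, y, lams, n_samples=50, seed=0):
    """Per-feature selection probability: frequency of a non-zero weight
    over subsamples of size n/2, maximized across the lambda grid."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    freq = np.zeros((len(lams), p))
    for _ in range(n_samples):
        idx = rng.choice(n, n // 2, replace=False)  # subsample w/o replacement
        for li, lam in enumerate(lams):
            w, _ = l1_logreg(X[idx], y[idx], lam)
            freq[li] += (w != 0)
    return (freq / n_samples).max(axis=0)           # rank features by this score
```

Features are then ranked by the returned score, with values near 1 indicating stable selection across subsamples and regularization levels.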
Results and discussion
We formulated the prediction of cell-type-specific genes as a set of six binary classification tasks. The prediction was performed using ℓ2-norm regularized logistic regression. We also employed ℓ1-norm regularized logistic regression and stability selection for image feature ranking. For each prediction task, we used the area under the ROC curve (AUC) as the performance measure [15, 16]. We randomly partitioned the entire data set for each task into training and test sets so that 2/3 of the data were in the training set and the remaining 1/3 were in the test set. To obtain robust performance estimates, this random partition was repeated 30 times, and the statistics computed over these 30 trials are reported.
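The evaluation protocol can be sketched as follows. The AUC is computed via the rank-sum (Mann-Whitney) statistic, ignoring score ties for simplicity; `repeated_split_auc` is a hypothetical helper that takes any fitting routine returning a scoring function, and it assumes both classes appear in every test split.

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic.
    Ties in the scores are not averaged (simplification)."""
    labels = np.asarray(labels, bool)
    scores = np.asarray(scores, float)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos, n_neg = labels.sum(), (~labels).sum()
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def repeated_split_auc(X, y, fit, n_trials=30, seed=0):
    """Mean test AUC over repeated random 2/3-1/3 train/test partitions.
    `fit(X_train, y_train)` must return a callable mapping X -> scores."""
    rng = np.random.default_rng(seed)
    n = len(y)
    aucs = []
    for _ in range(n_trials):
        perm = rng.permutation(n)
        tr, te = perm[: 2 * n // 3], perm[2 * n // 3:]
        model = fit(X[tr], y[tr])
        aucs.append(auc(model(X[te]), y[te]))
    return float(np.mean(aucs))
```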
In , genes with >1.5-fold enrichment were reported for each of the astrocyte, neuron, and oligodendrocyte cell-types. It was also stated in  that genes with >20-fold enrichment should be considered cell-type-specific, based on the enrichment levels of well-established cell-type markers. In , genes with >10-fold enrichment were considered cell-type-specific. We thus generated multiple data sets by using 1.5, 10, and 20 as cutoff enrichment levels for each of the six tasks. The numbers of genes and images in each task are summarized in Table 1.
In the Allen Mouse Brain Atlas, ISH images are provided in both the sagittal and the coronal sections, and we used only those genes with both coronal and sagittal data. We extracted SIFT features and constructed high-level representations for the coronal and the sagittal images separately. Since images from different sections might capture different and complementary information, we also concatenated the coronal and sagittal representations in the classification tasks. To ensure that all features have the same dimensionality, the codebook size was reduced to 250 so that the concatenated features have the same dimensionality as the features constructed from the coronal or the sagittal images alone. We also used the same set of genes for the coronal and the sagittal images so that the results are directly comparable.
Performance of cell-type-specific gene identification
We now consider the performance achieved by the combination of the coronal and sagittal images, as these data sets yielded the best performance. When the enrichment fold cutoff value was set to 1.5, the performance on five out of the six tasks was higher than 0.7. When the cutoff value was increased to 10, the performance on five out of the six tasks reached 0.85. When the cutoff value was further increased to 20, the performance on five out of the six tasks became higher than 0.87. Note that a comparative study in  showed that genes enriched with >20-fold should be considered as cell-type-specific. At this level, our proposed methods can achieve high predictive performance. These results demonstrated that our image-based predictive methods were able to identify cell-type-specific genes in major brain cell types.
Comparison with voxel-based results
The initial attempt to identify cell-type-specific genes from the ISH data used the grid-level voxel data generated from the registered ISH images. In particular,  used well-established cell-type marker genes as queries to identify genes enriched in the same cell-type. This was achieved by computing the correlations of all other genes with these marker genes based on the voxel-level expression grid data. A high correlation value was taken to indicate a high probability of enrichment in the same cell-type. We compared the voxel-based features and our image-based features in identifying cell-type-specific genes in a discriminative learning framework.
Statistical test results comparing our image-based method with the voxel-based method were obtained for all six tasks (A vs. Neg., N vs. Neg., O vs. Neg., O vs. A, A vs. N, and N vs. O).
Ranking of image features
Performance comparison among different tasks
We observed that the six tasks achieved different performance, and these differences might be related to the intrinsic relationships between various brain cell-types. To facilitate cross-task comparison, we show the performance of the six tasks on the combination of coronal and sagittal images in Figure 5. The relative performance differences among the six tasks are generally consistent across the three data sets with different levels of enrichment.
We can see that the classification of genes enriched in astrocytes versus the negative set yielded the lowest performance on all three data sets. Indeed, astrocytes are currently among the least-understood brain cells, even though they account for a high proportion of the cells in the brain. These cells fill the space between neurons and were traditionally considered to provide supportive functions to neurons. However, recent studies showed that they might control the concentration of extracellular molecules, thereby providing important regulatory functions [46–48]. Thus, the difficulty of distinguishing astrocytes from other cells might be due to the fact that they are spatially very close to the other major brain cell-types and are found in all areas of the brain [46, 48, 49].
On the other hand, the classification of genes enriched in neurons versus those enriched in oligodendrocytes yielded the highest performance on all three data sets. Indeed, oligodendrocytes are examples of well-understood glia in the brain. Their primary function is to create the myelin sheath that insulates axons and thus speeds the conduction of impulses between neurons [46, 48, 49]. Consequently, oligodendrocytes mainly reside in the white matter, while neurons mainly reside in the gray matter. This spatial complementarity between oligodendrocytes and neurons might explain the relatively high performance in distinguishing genes enriched in these two cell-types.
Conclusion and outlook
In this study, we aimed at identifying cell-type-specific genes in the mouse brain automatically. This was achieved by combining the high-resolution ISH images from the Allen Brain Atlas with the experimentally-generated lists of genes enriched in astrocytes, neurons, and oligodendrocytes. We constructed invariant, high-level representations from the ISH images directly and employed advanced machine learning techniques to perform the classification and image feature selection. Results showed that our image-based representations were predictive of cell-type enrichment. We also showed that the highly-ranked image features identified by our method explained the intrinsic relationships among brain cell-types. Overall, our results demonstrated that automated image computing could lead to more quantitative and accurate computational modeling and results [50–52].
In the current study, the features used for identifying cell-type-specific genes are generic representations that are not trained or tuned for specific tasks. We will explore deep models that are trained end-to-end for fully automated cell-type-specific gene prediction [53, 54]. We also formulated the cell-type-specific gene identification problem as six separate classification tasks in this work. However, predictions of specificity in multiple cell-types might be related. In the future, we will employ multi-task learning techniques [55–57] to identify cell-type-specific genes in multiple cell-types simultaneously.
We thank the Allen Institute for Brain Science for making the Allen Brain Atlas data available. This work was supported by National Science Foundation grant DBI-1147134.
- Cahoy JD, Emery B, Kaushal A, Foo LC, Zamanian JL, Christopherson KS, Xing Y, Lubischer JL, Krieg PA, Krupenko SA, Thompson WJ, Barres BA: A transcriptome database for astrocytes, neurons, and oligodendrocytes: A new resource for understanding brain development and function. J Neurosci. 2008, 28 (1): 264-278.
- Grange P, Hawrylycz M, Mitra PP: Cell-type-specific microarray data and the Allen atlas: quantitative analysis of brain-wide patterns of correlation and density. arXiv:1303.0013 [q-bio.NC]. 2013.
- Okaty BW, Sugino K, Nelson SB: A quantitative comparison of cell-type-specific microarray gene expression profiling methods in the mouse brain. PLoS One. 2011, 6 (1): e16493.
- Ko Y, Ament SA, Eddy JA, Caballero J, Earls JC, Hood L, Price ND: Cell type-specific genes show striking and distinct patterns of spatial expression in the mouse brain. Proc Natl Acad Sci. 2013, 110 (8): 3095-3100.
- French L, Tan PPC, Pavlidis P: Large-scale analysis of gene expression and connectivity in the rodent brain: insights through data integration. Front Neuroinform. 2011, 5: 12.
- Tan PPC, French L, Pavlidis P: Neuron-enriched gene expression patterns are regionally anti-correlated with oligodendrocyte-enriched patterns in the adult mouse and human brain. Front Neurosci. 2013, 7: 5.
- Grange P, Mitra PP: Computational neuroanatomy and gene expression: Optimal sets of marker genes for brain regions. Proceedings of the 46th Annual Conference on Information Sciences and Systems. 2012, Princeton, NJ, USA: IEEE, 1-6.
- Ji S: Computational genetic neuroanatomy of the developing mouse brain: dimensionality reduction, visualization, and clustering. BMC Bioinformatics. 2013, 14: 222.
- Ji S, Fakhry A, Deng H: Integrative analysis of the connectivity and gene expression atlases in the mouse brain. NeuroImage. 2014, 84 (1): 245-253.
- Zuckerman NS, Noam Y, Goldsmith AJ, Lee PP: A self-directed method for cell-type identification and separation of gene expression microarrays. PLoS Comput Biol. 2013, 9 (8): e1003189.
- Oldham MC, Konopka G, Iwamoto K, Langfelder P, Kato T, Horvath S, Geschwind DH: Functional organization of the transcriptome in human brain. Nat Neurosci. 2008, 11 (11): 1271-1282.
- Hawrylycz MJ, Lein ES, Guillozet-Bongaarts AL, Shen EH, Ng L, Miller JA, Van de Lagemaat LN, Smith KA, Ebbert A, Riley ZL, Abajian C, Beckmann CF, Bernard A, Bertagnolli D, Boe AF, Cartagena PM, Chakravarty MM, Chapin M, Chong J, Dalley RA, Daly BD, Dang C, Datta S, Dee N, Dolbeare TA, Faber V, Feng D, Fowler DR, Goldy J, Gregor BW, et al: An anatomically comprehensive atlas of the adult human brain transcriptome. Nature. 2012, 489 (7416): 391-399.
- Lein ES, Hawrylycz MJ, Ao N, Ayres M, Bensinger A, Bernard A, Boe AF, Boguski MS, Brockway KS, Byrnes EJ, Chen L, Chen L, Chen TM, Chin MC, Chong J, Crook BE, Czaplinska A, Dang CN, Datta S, Dee NR, Desaki AL, Desta T, Diep E, Dolbeare TA, Donelan MJ, Dong HW, Dougherty JG, Duncan BJ, Ebbert AJ, Eichele G, et al: Genome-wide atlas of gene expression in the adult mouse brain. Nature. 2007, 445 (7124): 168-176.
- Ganguli S, Sompolinsky H: Compressed sensing, sparsity, and dimensionality in neuronal information processing and data analysis. Annu Rev Neurosci. 2012, 35 (1): 485-508.
- Green DM, Swets JA: Signal Detection Theory and Psychophysics. 1966, New York, NY, USA: John Wiley and Sons Inc.
- Spackman KA: Signal detection theory: valuable tools for evaluating inductive learning. Proceedings of the Sixth International Workshop on Machine Learning. 1989, San Francisco, CA, USA: Morgan Kaufmann Publishers Inc., 160-163.
- Allen Institute for Brain Science: Allen Mouse Brain Atlas [Internet]. http://mouse.brain-map.org/.
- Dong H-W: The Allen Reference Atlas: A Digital Color Brain Atlas of the C57BL/6J Male Mouse. 2009, Hoboken, NJ: John Wiley & Sons Inc.
- Allen Institute for Brain Science: Allen Mouse Brain Atlas: Technical White Paper: Informatics Data Processing. http://help.brain-map.org/download/attachments/2818169/InformaticsDataProcessing.pdf.
- Ng L, Pathak S, Kuan C, Lau C, Dong H, Sodt A, Dang C, Avants B, Yushkevich P, Gee J, Haynor D, Lein E, Jones A, Hawrylycz M: Neuroinformatics for genome-wide 3-D gene expression mapping in the mouse brain. IEEE/ACM Trans Comput Biol Bioinform. 2007, 4: 382-393.
- Ng L, Bernard A, Lau C, Overly CC, Dong HW, Kuan C, Pathak S, Sunkin SM, Dang C, Bohland JW, Bokil H, Mitra PP, Puelles L, Hohmann J, Anderson DJ, Lein ES, Jones AR, Hawrylycz MJ: An anatomic gene expression atlas of the adult mouse brain. Nat Neurosci. 2009, 12 (3): 356-362.
- Lowe DG: Distinctive image features from scale-invariant keypoints. Int J Comput Vis. 2004, 60 (2): 91-110.
- Mikolajczyk K, Schmid C: A performance evaluation of local descriptors. IEEE Trans Pattern Anal Mach Intell. 2005, 27 (10): 1615-1630.
- Nowak E, Jurie F, Triggs B: Sampling strategies for bag-of-features image classification. Proceedings of the 9th European Conference on Computer Vision. 2006, Berlin, Heidelberg: Springer, 490-503.
- Bosch A, Zisserman A, Muñoz X: Image classification using random forests and ferns. Proceedings of the IEEE 11th International Conference on Computer Vision. 2007, Rio de Janeiro, Brazil: IEEE Computer Society, 1-8.
- Liscovitch N, Shalit U, Chechik G: FuncISH: learning a functional representation of neural ISH images. Bioinformatics. 2013, 29 (13): i36-i43.
- Ji S, Sun L, Jin R, Kumar S, Ye J: Automated annotation of Drosophila gene expression patterns using a controlled vocabulary. Bioinformatics. 2008, 24 (17): 1881-1888.
- Vedaldi A, Fulkerson B: VLFeat: An Open and Portable Library of Computer Vision Algorithms. 2008, http://www.vlfeat.org/.
- Csurka G, Dance C, Fan L, Willamowski J, Bray C: Visual categorization with bags of keypoints. ECCV Workshop on Statistical Learning in Computer Vision. 2004, Prague, Czech Republic, 1-22.
- Fei-Fei L, Perona P: A Bayesian hierarchical model for learning natural scene categories. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. 2005, San Diego, CA, USA: IEEE Computer Society, 524-531.
- Lazebnik S, Schmid C, Ponce J: Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. 2006, New York, NY, USA: IEEE Computer Society, 2169-2178.
- Ji S, Li Y-X, Zhou Z-H, Kumar S, Ye J: A bag-of-words approach for Drosophila gene expression pattern annotation. BMC Bioinformatics. 2009, 10 (1): 119.
- Kirsch L, Liscovitch N, Chechik G: Localizing genes to cerebellar layers by classifying ISH images. PLoS Comput Biol. 2012, 8 (12): e1002790.
- Ryali S, Supekar K, Abrams DA, Menon V: Sparse logistic regression for whole-brain classification of fMRI data. NeuroImage. 2010, 51 (2): 752-764.
- de Brecht M, Yamagishi N: Combining sparseness and smoothness improves classification accuracy and interpretability. NeuroImage. 2012, 60 (2): 1550-1561.
- Lin C-J, Weng RC, Keerthi SS: Trust region Newton method for logistic regression. J Mach Learn Res. 2008, 9: 627-650.
- Tibshirani R: Regression shrinkage and selection via the lasso. J R Stat Soc Series B. 1996, 58 (1): 267-288.
- Yuan G-X, Ho C-H, Lin C-J: Recent advances of large-scale linear classification. Proc IEEE. 2012, 100 (9): 2584-2603.
- Liu J, Ye J, Ji S: SLEP: Sparse Learning with Efficient Projections. 2009, Arizona State University, http://www.public.asu.edu/~jye02/Software/SLEP/.
- Yuan G-X, Chang K-W, Hsieh C-J, Lin C-J: A comparison of optimization methods and software for large-scale L1-regularized linear classification. J Mach Learn Res. 2010, 11: 3183-3234.
- Xing F, Su H, Neltner J, Yang L: Automatic Ki-67 counting using robust cell detection and online dictionary learning. IEEE Trans Biomed Eng. 2014, 61 (3): 859-870.
- Su H, Xing F, Lee J, Peterson C, Yang L: Automatic myonuclear detection in isolated single muscle fibers using robust ellipse fitting and sparse optimization. IEEE/ACM Trans Comput Biol Bioinform. 2013, PP (99): 1-1.
- Bühlmann P: Bagging, boosting and ensemble methods. Handbook of Computational Statistics: Concepts and Methods. Edited by: Gentle J, Härdle W, Mori Y. 2004, Berlin: Springer, 877-907.
- Meinshausen N, Bühlmann P: Stability selection. J R Stat Soc Series B (Stat Methodol). 2010, 72 (4): 417-473.
- Fan R-E, Chang K-W, Hsieh C-J, Wang X-R, Lin C-J: LIBLINEAR: A library for large linear classification. J Mach Learn Res. 2008, 9: 1871-1874.
- Kandel ER, Schwartz JH, Jessell TM, Siegelbaum SA, Hudspeth AJ: Principles of Neural Science. 2012, New York, NY, USA: McGraw-Hill Professional.
- Walz W: Role of astrocytes in the clearance of excess extracellular potassium. Neurochem Int. 2000, 36 (4-5): 291-300.
- Bear MF, Connors BW, Paradiso MA: Neuroscience: Exploring the Brain. 2006, Baltimore, MD, USA: Lippincott Williams & Wilkins.
- Watson C, Kirkcaldie M, Paxinos G: The Brain: An Introduction to Functional Neuroanatomy. 2010, NY, USA: Academic Press.
- Peng H, Roysam B, Ascoli G: Automated image computing reshapes computational neuroscience. BMC Bioinformatics. 2013, 14 (1): 293.
- Peng H: Bioimage informatics: a new area of engineering biology. Bioinformatics. 2008, 24 (17): 1827-1836.
- Ugolotti R, Mesejo P, Zongaro S, Bardoni B, Berto G, Bianchi F, Molineris I, Giacobini M, Cagnoni S, Di Cunto F: Visual search of neuropil-enriched RNAs from brain in situ hybridization data through the image analysis pipeline Hippo-ATESC. PLoS ONE. 2013, 8 (9): e74481.
- Bengio Y, Courville A, Vincent P: Representation learning: A review and new perspectives. IEEE Trans Pattern Anal Mach Intell. 2013, 35 (8): 1798-1828.
- Ji S, Xu W, Yang M, Yu K: 3D convolutional neural networks for human action recognition. IEEE Trans Pattern Anal Mach Intell. 2013, 35 (1): 221-231.
- Pong TK, Tseng P, Ji S, Ye J: Trace norm regularization: Reformulations, algorithms, and multi-task learning. SIAM J Optimization. 2010, 20 (6): 3465-3489.
- Liu J, Ji S, Ye J: Multi-task feature learning via efficient ℓ2,1-norm minimization. Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence. 2009, Montreal, Canada: Association for Uncertainty in Artificial Intelligence, 339-348.
- Zhang D, Shen D: Multi-modal multi-task learning for joint prediction of multiple regression and classification variables in Alzheimer's disease. NeuroImage. 2012, 59 (2): 895-907.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.