
Model of visual attention for video sequences

Background

In our previous work [1] we generalized Olshausen's algorithm [2] and designed a perceptual learning model using video sequences. In this study we propose to model conjunction search. Conjunction search (search for a unique combination of two features, e.g., orientation and spatial frequency, among distractors that share only one of these features) examines how the visual system combines features into perceptual wholes. We propose to improve the effectiveness of the decomposition algorithm by making it classification-aware: attentional guidance does not depend solely on local visual features, but must also include the effects of interactions among features. The idea is to group together filters that are responsible for extracting similar features. It is well known that knowledge of which features define the target improves search speed and/or accuracy [3]. The nearest neighbors of a fixation will then share a common feature.

Methods

The main goal of this work is to use sequences of images to design an attention model that includes conjunction search based on unsupervised self-learning. First, an Independent Component Analysis (ICA) algorithm is used to determine an initial set of basis functions from the first image (Figure 1). Second, unsupervised self-organizing learning is used to group together similar basis functions (Figure 2).
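The first step, learning an initial basis by ICA from image patches, can be sketched as follows. The patch size, number of components, random test image, and the use of scikit-learn's FastICA are illustrative assumptions; the abstract does not specify the implementation.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical stand-in for the first video frame (the original data
# is not published); a random image keeps the sketch self-contained.
rng = np.random.default_rng(0)
image = rng.standard_normal((64, 64))

def sample_patches(img, patch=8, n=500, rng=rng):
    """Sample n random patch x patch windows and flatten each to a vector."""
    rows = rng.integers(0, img.shape[0] - patch, n)
    cols = rng.integers(0, img.shape[1] - patch, n)
    return np.stack([img[r:r + patch, c:c + patch].ravel()
                     for r, c in zip(rows, cols)])

X = sample_patches(image)
X -= X.mean(axis=0)  # center the patches before ICA

# FastICA recovers statistically independent components; each column of
# the mixing matrix, reshaped to 8x8, is one basis function (cf. Figure 1b).
ica = FastICA(n_components=16, random_state=0, max_iter=1000)
ica.fit(X)
bases = ica.mixing_.T.reshape(16, 8, 8)
print(bases.shape)  # (16, 8, 8)
```

Convolving the original frame with any one of these 8x8 bases reproduces the kind of feature maps shown in Figure 1c and 1d.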

Figure 1

Initial basis images. a) the original image; b) a set of basis functions obtained by ICA from several patches of one image; c) and d) the convolution results of two basis functions with the original image.

Figure 2

Components for unsupervised self-organizing learning.
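The second step, grouping similar basis functions by unsupervised self-organizing learning, can be illustrated with a minimal one-dimensional Kohonen map. The abstract does not give the exact learning rule, so the unit count, learning rate, neighborhood schedule, and toy data below are assumptions for illustration only.

```python
import numpy as np

def som_1d(data, n_units=4, epochs=50, lr=0.5, sigma=1.0, seed=0):
    """Minimal 1-D self-organizing map: returns the trained unit weights."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((n_units, data.shape[1]))
    for t in range(epochs):
        decay = 1.0 - t / epochs  # shrink learning rate and neighborhood
        for x in rng.permutation(data):
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))  # best-matching unit
            d = np.arange(n_units) - bmu
            h = np.exp(-(d ** 2) / (2 * (sigma * decay + 1e-3) ** 2))
            w += lr * decay * h[:, None] * (x - w)
    return w

# Toy "basis vectors" forming two clusters; after training, each vector's
# best-matching unit indicates its group, and neighboring units code
# similar features, giving the bases a particular order (cf. Figure 3).
rng = np.random.default_rng(1)
bases = np.vstack([rng.normal(0.0, 0.1, (10, 8)),
                   rng.normal(3.0, 0.1, (10, 8))])
weights = som_1d(bases)
labels = [int(np.argmin(np.linalg.norm(weights - b, axis=1))) for b in bases]
print(labels)
```

Because the neighborhood function updates units adjacent to the winner, similar basis vectors end up mapped to nearby units, which is the ordering property exploited by the model.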

Conclusion

It is shown that performing sparse coding on video sequences of natural scenes produces results with spatio-temporal properties qualitatively similar to those of simple-cell receptive fields. The basis functions are similar to those obtained by sparse coding, but in our model they have a particular order (Figure 3). The proposed framework was tested using neurobiological (event-related potentials, ERPs) and behavioral (eye-tracking) data.

Figure 3

Ordered resulting basis functions.

References

  1.

    Milanova M, Wachowiak M, Rubin S, Elmaghraby A: A perceptual learning model based on topological representation, neural networks. Proceedings of the IJCNN'01 International Joint Conference on Neural Networks 2001, 406–411.


  2.

    Olshausen B: Sparse codes and spikes. In Probabilistic Models of Perception and Brain Function. Edited by: Rao RPN, Olshausen BA, Lewicki MS. MIT Press; 2001.


  3.

    Rutishauser U, Koch C: Probabilistic modeling of eye movement data during conjunction search via feature-based attention. Journal of Vision 2007, 7(6):5, 1–20. 10.1167/7.6.5



Acknowledgements

The project described was supported by NIH Grant Number P20 RR-16460 from the IDeA Networks of Biomedical Research Excellence (INBRE) Program of the National Center for Research Resources.

Author information

Corresponding author

Correspondence to Mariofanna Milanova.

Rights and permissions

Open Access This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Milanova, M. Model of visual attention for video sequences. BMC Bioinformatics 9, P14 (2008). https://doi.org/10.1186/1471-2105-9-S7-P14


Keywords

  • Video Sequence
  • Receptive Field
  • Independent Component Analysis
  • Event Related Potential
  • Perceptual Learning