Model of visual attention for video sequences
© Milanova; licensee BioMed Central Ltd. 2008
Published: 8 July 2008
In our previous work [1] we generalized Olshausen's algorithm [2] and designed a perceptual learning model using video sequences. In this study we propose to model conjunction search. Conjunction search (search for a unique combination of two features – e.g., orientation and spatial frequency – among distractors that share only one of these features) examines how the system combines features into perceptual wholes. We propose to improve the effectiveness of the decomposition algorithm by providing classification awareness. Attentional guidance does not depend solely on local visual features; it must also include the effects of interactions among features. The idea is to group together filters that are responsible for extracting similar features. It is well known that knowledge of which features define the target improves search performance and/or accuracy [3]. The nearest neighbors of the fixations will then share a certain feature.
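The conjunction-search setting described above can be sketched as a toy simulation. This is a minimal illustration of feature-based guidance, not the authors' model: the display composition, the noise level, and the additive scoring rule are all assumptions chosen only to show why knowing the target-defining features speeds up search.

```python
import random

random.seed(0)

# Hypothetical toy display: each item carries two features,
# orientation (degrees) and spatial frequency (cycles/deg).
# Each distractor shares exactly one feature with the target,
# so only the target is the unique conjunction of both.
target = (45.0, 4.0)
distractors = [(45.0, 8.0)] * 10 + [(90.0, 4.0)] * 10
items = [target] + distractors
random.shuffle(items)

def guidance(item):
    """Top-down, feature-based guidance: one unit of activation per
    target-defining feature the item shares, plus internal noise."""
    score = float(item[0] == target[0]) + float(item[1] == target[1])
    return score + random.gauss(0.0, 0.1)

# Fixate items in decreasing order of guidance.  Because the target
# matches both defining features while every distractor matches only
# one, the target is pushed to the front of the fixation sequence.
order = sorted(range(len(items)), key=lambda i: -guidance(items[i]))
target_rank = [items[i] for i in order].index(target)
print("target fixated at rank", target_rank)
```

With the assumed noise level the target's guidance signal (about 2 units) cleanly separates from every distractor's (about 1 unit), so the simulated observer fixates the target first; raising the noise or removing one defining feature from the score degrades search toward the distractor-by-distractor scanning seen without feature knowledge.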
The project described was supported by NIH Grant Number P20 RR-16460 from the IDeA Networks of Biomedical Research Excellence (INBRE) Program of the National Center for Research Resources.
- [1] Milanova M, Wachowiak M, Rubin S, Elmaghraby A: A perceptual learning model based on topological representation. Proceedings, IJCNN'01 International Joint Conference on Neural Networks 2001, 406–411.
- [2] Olshausen B: Sparse codes and spikes. In Probabilistic Models of Perception and Brain Function. Edited by: Rao RPN, Olshausen BA, Lewicki MS. MIT Press; 2001.
- [3] Rutishauser U, Koch C: Probabilistic modeling of eye movement data during conjunction search via feature-based attention. Journal of Vision 2007, 7(6):5, 1–20. doi:10.1167/7.6.5