Background
In our previous work [1] we generalized Olshausen's algorithm [2] and designed a perceptual learning model trained on video sequences. In this study we propose to model conjunction search. Conjunction search (search for a unique combination of two features, e.g. orientation and spatial frequency, among distractors that share only one of these features) probes how the visual system combines features into perceptual wholes. We propose to improve the effectiveness of the decomposition algorithm by adding classification awareness. Attentional guidance does not depend solely on local visual features; it must also include the effects of interactions among features. The idea is to group together filters that are responsible for extracting similar features. It is well known that knowledge of which features define the target improves search performance and/or accuracy [3]. The nearest neighbors of the fixations will then share a common feature.
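As a minimal illustration of the grouping idea (not the model of [1]), the sketch below clusters a bank of filters by the correlation of their responses to image patches, so that filters extracting similar features fall into the same group. The random filter bank, the probe patches, and the k-means grouping criterion are all assumptions introduced here for illustration only.

    # Illustrative sketch: group filters that respond to similar features.
    # W stands in for filters learned by a sparse coding step; here it is
    # random and unit-normalized only so the example is self-contained.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)

    n_filters, patch_size = 64, 8
    W = rng.standard_normal((n_filters, patch_size * patch_size))
    W /= np.linalg.norm(W, axis=1, keepdims=True)   # unit-norm filters

    # Stand-in for video-derived patches used to probe the filters.
    X = rng.standard_normal((5000, patch_size * patch_size))

    # Filters whose responses are correlated are assumed to extract
    # similar features (e.g. orientation or spatial frequency).
    R = W @ X.T                                     # filters x patches
    C = np.corrcoef(R)                              # response correlations

    # Cluster the correlation profiles into k "feature classes"; each
    # cluster plays the role of one classification-aware filter group.
    k = 4
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(C)

    for g in range(k):
        print(f"group {g}: {np.sum(labels == g)} filters")

In such a grouping, restricting the search to the group whose feature defines the target would implement the feature foreknowledge effect described above.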