Simultaneous bottom-up/top-down processing in early and mid level vision

Erdem, Mehmet Erkut
The prevalent view in computer vision since Marr has been that visual perception is a data-driven, bottom-up process. In this view, image data is processed in a feed-forward fashion, with a sequence of independent visual modules transforming simple low-level cues into more complex, abstract perceptual units. Over the years, a variety of techniques have been developed within this paradigm. Yet an important realization is that low-level visual cues are generally so ambiguous that purely bottom-up methods can be quite unsuccessful. These ambiguities cannot be resolved without taking high-level contextual information into account. In this thesis, we explore different ways of enriching early and mid-level computer vision modules with the capacity to extract and use contextual knowledge. Mainly, we integrate low-level image features with contextual information in unified formulations where bottom-up and top-down processing take place simultaneously.
Citation
M. E. Erdem, “Simultaneous bottom-up/top-down processing in early and mid level vision,” Ph.D. - Doctoral Program, Middle East Technical University, 2008.