Holcombe & Cavanagh (2001). Early binding of feature pairs for visual perception. Nature Neuroscience, 4(2), 127-128.

Abstract: If features such as color and orientation are processed separately by the brain at early stages, how does the brain subsequently match the correct color and orientation? We found that spatially superposed pairings of orientation with either color or luminance could be reported even at extremely high rates of presentation, which suggests that these features are coded in combination explicitly by early stages, thus eliminating the need for any subsequent binding of information. In contrast, reporting the pairing of spatially separated features required rates an order of magnitude slower, suggesting that perceiving these pairs requires binding at a slow, attentional stage.


binding across locations

binding at same location

In the first movie above, it is difficult to determine that green is presented at the same time as the leftward-tilted contour: binding fails.

Even though this second movie alternates much faster, the color-tilt pairing is easy to perceive: green with rightward tilt.

NOTE: These movies are not calibrated for your display and do not exactly duplicate the conditions of the actual experiments. Movie playback depends on your computer, operating system, web browser, and web browser version.
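The demonstrations above cycle through two feature pairings at a fixed alternation rate. As a rough illustration of that schedule (a hypothetical sketch only; the rate, duration, and pairing values here are assumptions, not the parameters used in the actual experiments or movies):

```python
def alternation_schedule(pairings, alternation_hz, duration_s):
    """Return (onset_time_s, color, tilt) tuples cycling through the
    feature pairings at the given alternation rate.

    Illustrative sketch of the rapid-alternation paradigm; not the
    authors' stimulus code.
    """
    frame_duration = 1.0 / alternation_hz
    n_frames = int(round(duration_s * alternation_hz))
    frames = []
    for i in range(n_frames):
        color, tilt = pairings[i % len(pairings)]
        frames.append((i * frame_duration, color, tilt))
    return frames

# Two superposed pairings, as in the second demonstration movie:
pairings = [("green", "right"), ("red", "left")]
schedule = alternation_schedule(pairings, alternation_hz=10, duration_s=1.0)
```

At 10 Hz each pairing is on screen for 100 ms; the paper's central contrast is that superposed pairings remain reportable at far higher rates than spatially separated ones.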


The figure below illustrates the theoretical framework that motivated the original experiments. At the end of the paper we conclude that in the spatially separated condition, temporal thresholds are limited by the need to combine features that are represented separately. Bodelón, Fallah, & Reynolds (2007) have since found that there is also a regime, albeit a temporally narrow one, in which one can experience the individual features in the superposed case yet cannot report their pairing.


See citations of this work, and a blog post explaining the concept of temporal resolution.