Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.06.27.175505v1?rss=1

Authors: DiMattina, C., Baker, C. L.

Abstract: Segmenting the visual scene into distinct surfaces is one of the most basic tasks of visual perception, and luminance differences between two adjacent surfaces often provide an important segmentation cue. However, a mean luminance difference between two surfaces may exist without any sharp change in albedo at their boundary, arising instead from differences in the proportion of light and dark areas within each surface; we refer to this as a luminance texture boundary. In this study, we investigate the performance of human observers segmenting luminance texture boundaries, with the ultimate goal of understanding the underlying visual computations. We demonstrate that a simple model involving a single stage of filtering is inadequate to explain observer performance when segmenting luminance texture boundaries, but that performance can be explained when contrast normalization is added to this one-stage model. In additional experiments in which observers segment luminance texture boundaries while ignoring superimposed luminance step boundaries, we demonstrate that the one-stage model cannot explain segmentation performance in this task. We then present a Filter-Rectify-Filter (FRF) model positing two cascaded stages of filtering, and demonstrate that it fits our data well and explains our ability to segment luminance texture boundary stimuli both in the presence of interfering luminance step boundaries and in the absence of masking stimuli. We propose that such computations may be useful for boundary segmentation in natural scenes, where shadows often give rise to luminance step edges that do not correspond to surface boundaries. Finally, we suggest directions for future work to further investigate possible neural implementations of the computations suggested by our psychophysical study.

Copyright belongs to the original authors.
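To make the FRF idea concrete, here is a minimal sketch of a Filter-Rectify-Filter cascade applied to a synthetic luminance texture boundary. All specifics (difference-of-Gaussians first stage, absolute-value rectification, filter scales, the toy stimulus) are illustrative assumptions, not the fitted model or stimuli from the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Toy luminance texture boundary: both halves are binary micropattern
# textures; the right half has a higher proportion of light elements,
# so mean luminance differs with no sharp albedo step at the border.
img = rng.choice([-1.0, 1.0], size=(64, 64), p=[0.5, 0.5])
img[:, 32:] = rng.choice([-1.0, 1.0], size=(64, 32), p=[0.3, 0.7])

# Stage 1: fine-scale linear filtering (a small difference of Gaussians
# standing in for an early bandpass filter).
stage1 = gaussian_filter(img, sigma=1.0) - gaussian_filter(img, sigma=2.0)

# Rectification: a pointwise nonlinearity between the two linear stages,
# so the second stage sees local texture energy rather than signed luminance.
rectified = np.abs(stage1)

# Stage 2: coarse-scale filtering pools the rectified responses across
# space, making the output sensitive to texture statistics on either
# side of the boundary.
stage2 = gaussian_filter(rectified, sigma=8.0)
```

The essential design point is the rectifying nonlinearity: without it, the two linear filters would collapse into a single equivalent linear filter, i.e., the one-stage model the paper shows is inadequate.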