Dr. Julie Golomb
Associate Professor, Cognitive
200K Lazenby Hall
1835 Neil Avenue
- B.S.: Brandeis University
- Ph.D.: Yale University
My research explores the interactions between visual attention, memory, perception, and eye movements using human behavioral and computational cognitive neuroscience techniques. Our brains construct rich perceptual experiences from the rawest of visual inputs: spots of light hitting different places on our eyes. In a fraction of a second, we integrate this information to recognize objects, deduce their locations, and plan complex actions and behaviors. But although visual perception feels smooth and seamless, this process is far more complex than it appears. My lab studies human behavior and brain function to investigate how visual properties such as color, shape, and spatial location are perceived and coded in the brain, and how these representations are influenced by eye movements, shifts of attention, and other dynamic cognitive processes.
Julie received her B.S. in Neuroscience from Brandeis University, working with Art Wingfield and Mike Kahana, and her Ph.D. in Neuroscience from Yale University, working with Marvin Chun and Jamie Mazer. She was a post-doctoral research fellow with Nancy Kanwisher at MIT before joining the faculty at Ohio State in 2012. She has received early career awards including a Sloan Research Fellowship in Neuroscience, the APA Distinguished Scientific Award for Early Career Contribution to Psychology, the APF Fantz Award, and the Federation of Associations in Behavioral and Brain Sciences Early Career Impact Award.
Please visit my lab webpage for an updated CV and full list of publications.
Dowd, E.W. and Golomb, J.D. (in press). Object feature binding survives dynamic shifts of spatial attention. Psychological Science.
Shafer-Skelton, A. and Golomb, J.D. (2018). Retinotopic memory is more accurate than spatiotopic memory, even for visually guided reaching. Psychonomic Bulletin & Review. 25(4): 1388-98.
Shafer-Skelton, A., Kupitz, C.N., and Golomb, J.D. (2017). Object-location binding across a saccade: A retinotopic Spatial Congruency Bias. Attention, Perception, & Psychophysics. 79(3): 765-81.
Finlayson, N.J., Zhang, X., and Golomb, J.D. (2017). Differential patterns of 2D location versus depth decoding along the visual hierarchy. NeuroImage. 147: 507-516.
Finlayson, N.J. and Golomb, J.D. (2016). Feature-location binding in 3D: Feature judgments are biased by 2D location but not position-in-depth. Vision Research. 127: 49-56.
Lescroart, M.D., Kanwisher, N., and Golomb, J.D. (2016). No evidence for automatic remapping of stimulus features or location found with fMRI. Frontiers in Systems Neuroscience. 10: 53. (Special Issue on Perisaccadic Vision)
Golomb, J.D., Kupitz, C.N., and Thiemann, C.T. (2014). The influence of object location on identity: A “spatial congruency bias”. Journal of Experimental Psychology: General. 143(6): 2262-78.
Golomb, J.D., L’Heureux, Z.E., and Kanwisher, N. (2014). Feature-binding errors after eye movements and shifts of attention. Psychological Science. 25(5): 1067-78.
Golomb, J.D. and Kanwisher, N. (2012). Higher-level visual cortex represents retinotopic, not spatiotopic, object location. Cerebral Cortex. 22: 2794-2810.
Golomb, J.D. and Kanwisher, N. (2012). Retinotopic memory is more precise than spatiotopic memory. Proceedings of the National Academy of Sciences USA. 109(5): 1796-1801.
Chun, M.M., Golomb, J.D., and Turk-Browne, N.B. (2011). A taxonomy of external and internal attention. Annual Review of Psychology. 62: 73-101.
Golomb, J.D., Nguyen-Phuc, A.Y., Mazer, J.A., McCarthy, G., and Chun, M.M. (2010). Attentional facilitation throughout human visual cortex lingers in retinotopic coordinates after eye movements. Journal of Neuroscience. 30(31): 10493-10506.
Golomb, J.D., Chun, M.M., and Mazer, J.A. (2008). The native coordinate system of spatial attention is retinotopic. Journal of Neuroscience. 28(42): 10654-10662.