Research
Our visual system constantly receives a large amount of information from the environment, and attention plays a critical role in selecting the most relevant aspects of that input. My research explores how attention shapes visual processing and, in turn, our perception of the world. I use computational models and behavioral experiments to investigate the mechanisms underlying attention and how they support object recognition, grouping, and visual search in humans. By developing artificial vision systems that mimic and predict human attentional processes more accurately, I aim to make human-machine interaction in visual environments safer and more sophisticated, enabling more intuitive and effective collaboration between humans and machines across a range of applications. Currently, I am exploring the use of generative models to explain human object-based attention.
Keywords: Vision, Attention, Eye Movements, Gaze Prediction, Computational Modeling
Highlighted Projects and Publications:
For the most up-to-date and comprehensive list of publications, please visit my Google Scholar profile.
Generated Object Reconstructions for Object-based Attention
Selected Publications:
- Ahn S, Adeli H, Zelinsky G. Reconstruction-guided attention improves the robustness and shape processing of neural networks. SVRHM Workshop at NeurIPS. 2022
- Adeli H, Ahn S, Zelinsky GJ. A brain-inspired object-based attention network for multiobject recognition and visual reasoning. Journal of Vision, 23(5), 16. 2023
Decoding Cognitive States from Eye-Movements
Selected Publications:
- Ahn S, Kelton C, Balasubramanian A, Zelinsky G. Towards predicting reading comprehension from gaze behavior. ETRA. 2020
- Kelton C, Wei Z, Ahn S, Balasubramanian A, Das SR, Samaras D, Zelinsky G. Reading detection in real time. ETRA. 2019
- Ahn S, Lee D, Hinojosa A, Koh S. Task Effects on Perceptual Span during Reading: Evidence from Eye Movements in Scanning, Proofreading, and Reading for Comprehension [under review]
Gaze Modeling and Prediction
Selected Publications:
- Mondal S, Yang Z, Ahn S, Samaras D, Zelinsky G, Hoai M. Gazeformer: Scalable, Effective and Fast Prediction of Goal-Directed Human Attention. CVPR. 2023
- Yang Z, Huang L, Chen Y, Wei Z, Ahn S, Zelinsky G, Samaras D, Hoai M. Predicting goal-directed human attention using inverse reinforcement learning. CVPR. 2020
- Zelinsky G, Yang Z, Huang L, Chen Y, Ahn S, Wei Z, Adeli H, Samaras D, Hoai M. Benchmarking gaze prediction for categorical visual search. CVPR Workshops. 2019