Seoyoung Ahn

final-year PhD student @eyecog lab
Stony Brook University, New York, USA

CV: PDF (updated October 2023)
email: ahnIseoyoungdon't@wantspam! Sopleasegmaleave ilme.alonec!om
social: Twitter, Google Scholar


ABOUT

I’m a Cognitive Science PhD student in Dr. Greg Zelinsky’s lab at Stony Brook University. I also frequently collaborate with the Computer Vision lab of Dr. Dimitris Samaras and Dr. Minh Hoai Nguyen. I completed my MA and BA at Seoul National University, where I studied the computational modeling of eye movements with Dr. Sungryong Koh. I’m broadly interested in understanding how humans obtain stable yet flexible representations of their visual environment. I’m currently exploring the use of generative models to explain human object-based attention. In my free time, I write for animated film!


RECENT NEWS

I am honored to receive the 2023 APA Dissertation Research Award!

We presented several talks and posters at VSS 2023 on modeling human object-based attention.

I received a travel award from Females of Vision et al. (FoVea)!

I was awarded the Endowed Award Fund for Cognitive Science from Stony Brook Psychology!

Our paper “Reconstruction-guided attention improves the robustness and shape processing of neural networks” has been accepted to the SVRHM Workshop at NeurIPS 2022. I will be there in person to present my poster.

Our IRL model can now predict human search behavior even when the target is absent. Accepted at ECCV 2022!

Gave a talk at MODVIS 2022, the VSS workshop on Computational and Mathematical Models in Vision. Really enjoyed meeting new colleagues and discussing interesting work!

I will be giving a talk at NAISys 2022 about how top-down object reconstruction improves the model’s recognition robustness. Very excited for the first in-person conference in years, after the COVID break!

My second-year project, “Use of superordinate labels yields more robust and human-like visual representations in convolutional neural networks,” has been accepted at the Journal of Vision!

Gave a live talk at VSS 2020 on how the hierarchical semantic structure of training labels helps visual category learning in convolutional neural networks.

We published a large-scale search eye-tracking dataset for training deep learning models. See our project page: COCO-Search18.
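
For anyone who wants to poke at the data, here is a minimal loading sketch in Python. It assumes the fixation annotations ship as JSON with per-trial fields like "name", "task", "X", "Y", and "T" (stimulus filename, target category, and fixation coordinates/durations); the actual file names and schema on the project page may differ, so treat this as illustrative:

    import json

    def load_scanpaths(json_path):
        """Load per-trial scanpaths from a COCO-Search18-style annotation file."""
        with open(json_path) as f:
            trials = json.load(f)
        scanpaths = []
        for t in trials:
            scanpaths.append({
                "image": t["name"],    # stimulus filename (assumed field name)
                "target": t["task"],   # search-target category, 1 of 18 (assumed)
                # fixation sequence as (x, y, duration-in-ms) triples
                "fixations": list(zip(t["X"], t["Y"], t["T"])),
            })
        return scanpaths

    # e.g., paths = load_scanpaths("fixations_train.json")  # hypothetical filename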

Our GazePrediction team published our first paper at CVPR 2020 and received a best paper nomination. We used inverse reinforcement learning (IRL) to model human goal-directed attention.
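
To give a flavor of the IRL idea (a toy sketch only, not our actual adversarial-IRL implementation from the paper): treat the image as a grid of cells, assume each demonstrated fixation is drawn from a softmax policy over cells, and recover a per-cell "reward" by gradient ascent on the demonstrations' likelihood, which matches the model's visitation frequencies to the expert's:

    import numpy as np

    rng = np.random.default_rng(0)
    G = 8                                    # image split into G x G cells
    true_r = rng.normal(size=G * G)          # hidden per-cell attraction
    p_true = np.exp(true_r) / np.exp(true_r).sum()
    demos = rng.choice(G * G, size=5000, p=p_true)   # demo fixations

    expert_freq = np.bincount(demos, minlength=G * G) / len(demos)
    r = np.zeros(G * G)                      # learned per-cell reward
    for _ in range(5000):
        p = np.exp(r - r.max())
        p /= p.sum()                         # softmax policy under current reward
        r += 2.0 * (expert_freq - p)         # log-likelihood gradient ascent
    print("corr(true, learned):", np.corrcoef(true_r, r)[0, 1].round(3))

The learned reward converges (up to an additive constant) to the log of the expert's fixation frequencies, so the correlation printed at the end is close to 1.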

My first-year project, “Towards Predicting Reading Comprehension From Gaze Behavior,” has been accepted at ETRA 2020!

Our WebGaze team presented our real-time reading vs skimming detector at ETRA 2019