Seoyoung Ahn

postdoc @Tsao lab
UC Berkeley, California, USA

CV: PDF (updated May 2024)
email: ahnIseoyoungdon't@wantspam! Sopleasegmaleave ilme.alonec!om
social: twitter, google scholar


ABOUT

I’m a postdoc working in Doris Tsao’s lab at UC Berkeley. I did my PhD with Dr. Greg Zelinsky at Stony Brook University, and I also frequently collaborate with the Computer Vision Lab of Dr. Dimitris Samaras and Dr. Minh Hoai Nguyen. Before that, I did my MA and BA at Seoul National University, where I studied the computational modeling of eye movements with Dr. Sungryong Koh. I’m broadly interested in understanding how humans obtain stable but flexible representations of their visual environment. I’m currently exploring the use of generative models to explain human object-based perception and attention, and to understand how the brain accomplishes this. In my free time, I write for animated films!


RECENT NEWS

I’m co-organizing the ‘Using Deep Networks to Re-imagine Object-based Attention and Perception’ symposium at VSS 2024. We will discuss topics ranging from classic theories of object-based attention and recent neural and behavioral data to new deep learning models that can help us better understand how the visual system forms meaningful, coherent object percepts.

I’m moving to UC Berkeley to start a new postdoc position at Doris Tsao’s lab in 2024. I’m excited to begin a new journey studying the monkey brain!

The 3D animated film I participated in as a writer won the Jury’s Special Award at SIGGRAPH Asia 2023! Huge congratulations to our amazing director, Jinuk!

I am honored to receive the 2023 APA Dissertation Research Award!

We presented several talks and posters at VSS 2023 on modeling human object-based attention.

I received a travel award from Females of Vision et al. (FoVea)!

I was awarded the Endowed Award Fund for Cognitive Science from Stony Brook Psychology!

Our paper “Reconstruction-guided attention improves the robustness and shape processing of neural networks” has been accepted to the SVRHM Workshop at NeurIPS 2022. I will be there in person to present my poster.

Our IRL model can now predict human search behavior even when the target is not there. Accepted at ECCV 2022.

Gave a talk at MODVIS 2022, the VSS workshop on Computational and Mathematical Models in Vision. Really enjoyed meeting new colleagues and discussing interesting work!

I will be giving a talk at NAISys 2022 about how top-down object reconstruction improves the model’s recognition robustness. Very excited for the first in-person conference in years after the Covid break!

My second-year project “Use of superordinate labels yields more robust and human-like visual representations in convolutional neural networks” has been accepted to the Journal of Vision.

Gave a live talk at VSS 2020 on how the hierarchical semantic structure of training labels helps visual category learning in convolutional neural networks.

We published a large-scale visual search eye-tracking dataset for training deep learning models. See our project page: COCO-Search18

Our GazePrediction team published our first paper at CVPR 2020 and received a best paper nomination. We used inverse reinforcement learning (IRL) to model human goal-directed attention.

My first-year project “Towards Predicting Reading Comprehension From Gaze Behavior” has been accepted at ETRA 2020

Our WebGaze team presented our real-time reading vs skimming detector at ETRA 2019