Seongmin Lee CV

Effective Guidance for Model Attention with Simple Yes-no Annotations

Oral Paper, IEEE International Conference on Big Data (BigData), 2024

Abstract

Modern deep learning models often make predictions by focusing on irrelevant areas, leading to biased performance and limited generalization. Existing methods aimed at rectifying model attention require explicit labels for irrelevant areas or complex pixel-wise ground truth attention maps. We present CRAYON (Correcting Reasoning with Annotations of Yes Or No), offering effective, scalable, and practical solutions to rectify model attention using simple yes-no annotations. CRAYON empowers classical and modern model interpretation techniques to identify and guide model reasoning: CRAYON-ATTENTION directs classic interpretations based on saliency maps to focus on relevant image regions, while CRAYON-PRUNING removes irrelevant neurons identified by modern concept-based methods to mitigate their influence. Through extensive experiments with both quantitative and human evaluation, we showcase CRAYON’s effectiveness, scalability, and practicality in refining model attention. CRAYON achieves state-of-the-art performance, outperforming 12 methods across 3 benchmark datasets, surpassing approaches that require more complex annotations.
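A minimal sketch of the CRAYON-PRUNING idea is shown below, assuming a PyTorch ResNet-50 classifier; the channel indices, hook placement, and variable names are hypothetical illustrations and not the paper's implementation.

import torch
import torchvision

# Untrained ResNet-50 stands in for the biased classifier (sketch only).
model = torchvision.models.resnet50(weights=None)

# Hypothetical: penultimate-layer channels whose associated concepts were
# annotated "no" (irrelevant) via a concept-based interpretation method.
irrelevant_channels = [12, 87, 301]

def prune_hook(module, inputs, output):
    # Zero the activations of the flagged channels before the classifier.
    output[:, irrelevant_channels] = 0
    return output

# The global average pool output ([N, 2048, 1, 1]) feeds the final linear
# classifier, so pruning here removes the flagged neurons' influence.
model.avgpool.register_forward_hook(prune_hook)

logits = model(torch.randn(1, 3, 224, 224))  # forward pass with pruned neurons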

BibTeX

@inproceedings{lee2024towards,
  title={Effective Guidance for Model Attention with Simple Yes-no Annotations},
  author={Lee, Seongmin and Payani, Ali and Chau, Duen Horng (Polo)},
  booktitle={IEEE International Conference on Big Data (BigData)},
  year={2024}
}