A Comparative Analysis of Attention Mechanisms in Vision Transformers for Fine-Grained Image Recognition
Keywords:
vision transformer, attention mechanism, fine-grained image recognition, comparative evaluation

Abstract
Fine-grained image recognition demands the ability to distinguish visually similar subcategories by capturing subtle, localized discriminative features. Vision Transformers (ViTs) have emerged as strong contenders in this domain, yet their diverse attention mechanisms encode different inductive biases and have rarely been compared under unified experimental conditions. This study presents a controlled empirical evaluation of seven representative attention mechanisms on three fine-grained benchmarks: CUB-200-2011, Stanford Cars, and FGVC Aircraft. All methods are trained with identical preprocessing, augmentation, optimization, and ImageNet-21K pretrained weights at 448×448 resolution. The evaluation spans classification accuracy; computational cost measured through FLOPs, parameter counts, and throughput; and attention interpretability quantified through part-localization precision. Results indicate that deformable attention achieves the highest accuracy across all three benchmarks, with a margin of 1.0–1.5 percentage points over the global self-attention baseline. Window-based and cross-shaped mechanisms offer favorable accuracy-efficiency tradeoffs, while class-attention decoupling improves parameter efficiency without sacrificing competitive accuracy. A strong rank correlation (Spearman ρ = 0.93) between localization precision and classification accuracy supports the interpretation that spatial selectivity is a key driver of fine-grained recognition quality. These findings provide actionable guidance for selecting attention strategies under varying computational budgets.
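For readers reproducing the correlation analysis, the sketch below shows how a Spearman ρ between per-mechanism localization precision and classification accuracy can be computed with scipy.stats.spearmanr. All numeric scores are hypothetical placeholders rather than results from the paper, and two of the mechanism labels ("linear", "sparse") are assumptions, since the abstract names only five of the seven mechanisms.

# Minimal sketch (Python, assumes SciPy) of the rank-correlation check
# described in the abstract. Scores are hypothetical placeholders, not
# values reported in the paper.
from scipy.stats import spearmanr

mechanisms = [
    "global", "window", "cross-shaped", "deformable",
    "class-attention", "linear", "sparse",  # last two labels are assumptions
]
# Hypothetical per-mechanism scores (one entry per mechanism).
localization_precision = [0.61, 0.66, 0.67, 0.74, 0.63, 0.58, 0.64]
top1_accuracy = [89.2, 90.1, 90.3, 90.6, 89.5, 88.4, 89.8]

for name, loc, acc in zip(mechanisms, localization_precision, top1_accuracy):
    print(f"{name:>16}: localization={loc:.2f}, top-1={acc:.1f}")

# Spearman rho is rank-based, so it captures the monotone association
# between spatial selectivity and accuracy without assuming linearity.
rho, p_value = spearmanr(localization_precision, top1_accuracy)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")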
Published
2026-05-13