PulseAugur

New TsallisPGD attack method improves adversarial attacks on semantic segmentation models

Researchers have developed TsallisPGD, a novel adversarial attack method designed to target semantic segmentation models more effectively. The approach uses Tsallis cross-entropy, a generalized form of standard cross-entropy, to adaptively adjust gradient weighting across pixels. Experiments on datasets such as Cityscapes and Pascal VOC show that TsallisPGD outperforms existing methods at reducing model accuracy and mean intersection over union (mIoU).
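The truncated abstracts below do not show the paper's exact loss, so the following is only a minimal sketch of what a Tsallis (q-generalized) cross-entropy over segmentation pixels could look like, assuming the common q-logarithm form; the function names and the entropic index `q` are illustrative, not taken from the paper.

```python
import numpy as np

def q_log(x, q):
    """q-logarithm; recovers the natural log as q -> 1."""
    if np.isclose(q, 1.0):
        return np.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def tsallis_cross_entropy(probs, labels, q=0.8):
    """Mean per-pixel Tsallis cross-entropy for a segmentation map.

    probs:  (H, W, C) softmax outputs
    labels: (H, W) integer class labels
    """
    h, w = labels.shape
    # Gather the predicted probability of each pixel's true class.
    p_true = probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    p_true = np.clip(p_true, 1e-12, 1.0)
    return -np.mean(q_log(p_true, q))
```

Setting q = 1 recovers the standard pixel-wise cross-entropy; other values of q rescale each pixel's contribution to the gradient, which is the adaptive weighting effect the summary describes.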

Summary written by gemini-2.5-flash-lite from 2 sources.

IMPACT Introduces a more potent attack vector for evaluating the robustness of semantic segmentation models.

RANK_REASON This is a research paper detailing a new method for adversarial attacks on semantic segmentation models.

Read on arXiv cs.CV →

COVERAGE [2]

  1. arXiv cs.CV TIER_1 · Alexander Matyasko, Xin Lou, Indriyati Atmosukarto, Wei Zhang ·

    TsallisPGD: Adaptive Gradient Weighting for Adversarial Attacks on Semantic Segmentation

    arXiv:2605.03405v1 Announce Type: new Abstract: Attacking semantic segmentation models is significantly harder than image classification models because an attacker must flip thousands of pixel predictions simultaneously. Standard pixel-wise cross-entropy (CE) is ill-suited to thi…

  2. arXiv cs.CV TIER_1 · Wei Zhang ·

    TsallisPGD: Adaptive Gradient Weighting for Adversarial Attacks on Semantic Segmentation

    Attacking semantic segmentation models is significantly harder than image classification models because an attacker must flip thousands of pixel predictions simultaneously. Standard pixel-wise cross-entropy (CE) is ill-suited to this setting: it tends to overemphasize already-mis…
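The attack setting the abstracts describe (iteratively perturbing an image to flip pixel predictions) follows the standard L-infinity PGD template. A generic sketch, not the paper's exact procedure; `grad_fn`, `eps`, and `alpha` are illustrative placeholders, and the Tsallis loss would supply the gradient:

```python
import numpy as np

def pgd_attack(x, grad_fn, eps=8 / 255, alpha=2 / 255, steps=10):
    """L-infinity PGD: ascend the loss gradient, project into the eps-ball."""
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv)                      # gradient of the attack loss w.r.t. the input
        x_adv = x_adv + alpha * np.sign(g)      # signed gradient ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep a valid image
    return x_adv
```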