Computer vision research advances multimodal understanding and robust segmentation
By PulseAugur Editorial
Summary by gemini-2.5-flash-lite
from 13 sources
Researchers have developed WeatherSeg, a semi-supervised segmentation framework designed to improve autonomous driving perception in adverse weather conditions by using a dual teacher-student model for knowledge distillation and a classifier weight updating mechanism. Separately, a new pose-only geometric constraint for multi-camera systems has been proposed to enhance computational efficiency in bundle adjustment for visual navigation and 3D scene reconstruction. Another advancement addresses the scalability limitations of multi-projector calibration by embedding cameras into calibration targets, allowing for simultaneous estimation of projector parameters. Additionally, DeepTaxon offers a retrieval-augmented multimodal framework for unified species identification and discovery in biodiversity research, while TSMNet integrates textual supervision with visual representations for open-vocabulary semantic segmentation in remote sensing.
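The dual teacher-student distillation and classifier weight updating mentioned for WeatherSeg follow a widely used semi-supervised pattern: teachers track the student through an exponential moving average (EMA) of its weights, and unlabeled pixels are supervised only where the teachers' pseudo-labels agree. A minimal sketch of that generic pattern (function names and the agreement rule are illustrative simplifications, not WeatherSeg's actual design):

```python
# Generic dual-teacher semi-supervised pattern (illustrative sketch only,
# not WeatherSeg's actual implementation): each teacher drifts toward the
# student via an EMA weight update, and unlabeled pixels receive a
# pseudo-label only where both teachers agree.

def ema_update(teacher_w, student_w, momentum=0.99):
    """Teacher weights track the student: t <- m*t + (1 - m)*s."""
    return [momentum * t + (1.0 - momentum) * s
            for t, s in zip(teacher_w, student_w)]

def agreed_pseudo_labels(teacher_a_preds, teacher_b_preds, ignore_index=255):
    """Keep a pixel's pseudo-label only where the two teachers agree;
    disagreeing pixels are marked ignore and excluded from the loss."""
    return [a if a == b else ignore_index
            for a, b in zip(teacher_a_preds, teacher_b_preds)]

# Example: per-pixel class predictions from two teachers.
labels = agreed_pseudo_labels([0, 1, 2], [0, 2, 2])  # [0, 255, 2]
```

The agreement filter trades pseudo-label coverage for reliability, which is the usual motivation in adverse-weather settings where single-teacher predictions are noisy.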
arXiv:2604.26031v1 Announce Type: new Abstract: This report summarizes the objectives, datasets, and top-performing methodologies of the 2026 Pixel-level Video Understanding in the Wild (PVUW) Challenge, hosted at CVPR 2026, which evaluates state-of-the-art models under highly un…
arXiv:2604.23704v1 Announce Type: new Abstract: Multi-camera systems offer rich observation capabilities for visual navigation and 3D scene reconstruction; however, the resulting feature redundancy often compromises computational efficiency. This challenge is particularly pronoun…
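For context on the objective being optimized: bundle adjustment minimizes reprojection error jointly over camera poses and 3D points, which is why redundant features inflate its cost. The snippet above does not detail the paper's pose-only constraint, so the sketch below shows only the conventional reprojection residual that such methods aim to evaluate more efficiently:

```python
# Textbook bundle-adjustment reprojection residual (the paper's pose-only
# formulation is not given in this snippet; this is only the conventional
# baseline it seeks to accelerate).

def project(point3d, R, t, fx, fy, cx, cy):
    """Project a world point through pose (R, t) and pinhole intrinsics."""
    x, y, z = (
        sum(R[i][j] * point3d[j] for j in range(3)) + t[i]
        for i in range(3)
    )
    return (fx * x / z + cx, fy * y / z + cy)

def reprojection_residual(observed_px, point3d, R, t, fx, fy, cx, cy):
    """Pixel-space error that bundle adjustment drives toward zero."""
    u, v = project(point3d, R, t, fx, fy, cx, cy)
    return (observed_px[0] - u, observed_px[1] - v)

# Identity pose, point 4 m in front of the camera:
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
res = reprojection_residual((75.0, 100.0), [1.0, 2.0, 4.0],
                            I, [0.0, 0.0, 0.0], 100, 100, 50, 50)
# res == (0.0, 0.0)
```

In a multi-camera rig every feature observation contributes one such residual per camera, so reducing the number of residuals that must involve 3D point parameters is where efficiency gains come from.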
arXiv:2604.24024v1 Announce Type: new Abstract: Conventional multi-projector calibration requires projecting and capturing structured light patterns for each projector sequentially, causing calibration time and effort to increase linearly with the number of projectors. This scalability bottleneck has long limited the deploymen…
arXiv:2604.24029v1 Announce Type: new Abstract: Identifying species in biology among tens of thousands of visually similar taxa while discovering unknown species in open-world environments remains a fundamental challenge in biodiversity research. Current methods treat identification and discovery as separate problems, with cla…
arXiv cs.CV
Jinkun Dai, Yuanxin Ye, Peng Tang, Tengfeng Tang, Xianping Ma, Jing Xiao, Mi Wang
arXiv:2604.24125v1 Announce Type: new Abstract: Semantic segmentation of multi-modal remote sensing imagery plays a pivotal role in land use/land cover (LULC) mapping, environmental monitoring, and precision earth observation. Current multi-modal approaches mainly focus on integrating complementary visual modalities, yet negle…
arXiv:2604.24167v1 Announce Type: new Abstract: Implicit neural representations (INRs) are increasingly being used as tools to map coordinates to signals, encompassing applications from neural fields to texture compression, shape representations, and beyond. Most INR methods are based on using high-dimensional projections of t…
arXiv cs.CV
Hanyu Chen, Ruojin Cai, Steve Marschner, Noah Snavely
arXiv:2604.22202v1 Announce Type: new Abstract: Symmetry detection is a fundamental problem in computer vision, and symmetries serve as powerful priors for downstream tasks. However, existing learning-based methods for detecting 3D symmetries from single images have been almost exclusively trained and evaluated on object-centr…