nuscenes-devkit
PulseAugur coverage of nuscenes-devkit — every cluster mentioning nuscenes-devkit across labs, papers, and developer communities, ranked by signal.
-
New method improves HD map construction with cross-view supervision
Researchers have developed a new method called Cross-View Supervision (CVS) to improve the construction of high-definition maps using bird's-eye-view (BEV) representations from multiple cameras. Traditional methods stru…
-
Random-Set GNNs enhance uncertainty quantification in graph learning
Researchers have introduced Random-Set Graph Neural Networks (RS-GNNs) to address uncertainty quantification in graph learning. This new framework models node-level epistemic uncertainty using a belief function formalis…
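The belief-function formalism the summary mentions can be illustrated with a minimal Dempster-Shafer sketch (this is generic belief-function arithmetic, not the RS-GNN architecture itself; all names and numbers are illustrative):

```python
# Illustrative Dempster-Shafer sketch (not the RS-GNN architecture):
# a belief function assigns mass to *sets* of labels, so leftover mass
# on the full label set expresses epistemic uncertainty directly.

def belief(masses, hypothesis):
    """Sum of mass over all focal sets contained in the hypothesis."""
    return sum(m for focal, m in masses.items() if set(focal) <= set(hypothesis))

def plausibility(masses, hypothesis):
    """Sum of mass over all focal sets intersecting the hypothesis."""
    return sum(m for focal, m in masses.items() if set(focal) & set(hypothesis))

# Hypothetical node-level prediction over classes {car, truck}: 0.5 on
# "car", 0.1 on "truck", and 0.4 undecided mass on the whole frame.
masses = {("car",): 0.5, ("truck",): 0.1, ("car", "truck"): 0.4}

bel = belief(masses, ("car",))        # committed support for "car"
pl = plausibility(masses, ("car",))   # support not ruled out for "car"
# The gap pl - bel (here 0.4) is one way to read off epistemic uncertainty.
```

The gap between plausibility and belief shrinks to zero when all mass sits on single classes, which is the sense in which the formalism separates "uncertain" from "confidently split" predictions.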
-
Driving models' performance hinges on temporal sampling frequency
Researchers have investigated the impact of temporal sampling frequency on end-to-end autonomous driving trajectory prediction models. They found that while dense frame sampling is often assumed to improve performance, …
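A study of this kind compares models fed the same trajectory at different rates. A minimal sketch of the subsampling step (the function and pose values are hypothetical, not taken from the paper):

```python
# Hypothetical sketch: subsample an ego-pose trajectory recorded at 10 Hz
# down to lower frequencies, as an experiment comparing temporal sampling
# rates for trajectory prediction might do.

def subsample(trajectory, source_hz, target_hz):
    """Keep every k-th pose so the sequence effectively runs at target_hz."""
    if target_hz > source_hz or source_hz % target_hz != 0:
        raise ValueError("target_hz must evenly divide source_hz")
    step = source_hz // target_hz
    return trajectory[::step]

# 20 poses at 10 Hz (2 seconds of driving), here just (x, y) points.
poses_10hz = [(0.1 * i, 0.0) for i in range(20)]

poses_5hz = subsample(poses_10hz, source_hz=10, target_hz=5)  # 10 poses
poses_2hz = subsample(poses_10hz, source_hz=10, target_hz=2)  # 4 poses
```

Holding the time horizon fixed while varying the rate, as above, is what lets such a study attribute performance changes to sampling density rather than to a shorter history.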
-
Autonomous driving research tackles adaptive perception and novel adversarial attacks
Researchers have developed an adaptive perception system for autonomous driving that dynamically adjusts its computational resources based on scene complexity, significantly reducing latency without sacrificing accuracy…
-
New neuro-symbolic architecture improves autonomous driving scene understanding
Researchers have developed InfoCoordiBridge, a novel neuro-symbolic architecture designed to enhance the reliability of scene understanding in autonomous driving systems. This architecture addresses issues where languag…
-
BEVCALIB model uses bird's-eye view features for LiDAR-camera calibration
Researchers have developed BEVCALIB, a novel method for calibrating LiDAR and camera sensors, crucial for autonomous driving systems. This approach utilizes bird's-eye view (BEV) features extracted from both sensor type…
-
LiDAR-only HD map construction method enhances semantic cues via knowledge distillation
Researchers have developed LIE, a novel method for constructing High-Definition (HD) maps for autonomous driving using only LiDAR data. This approach overcomes the limitations of camera-based methods by leveraging knowl…
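The knowledge-distillation step can be sketched with the standard temperature-softened KL objective (this is the generic Hinton-style recipe, not the paper's LIE formulation; logits and names are illustrative):

```python
import math

# Generic knowledge-distillation sketch (not the paper's LIE method):
# a student's logits are pushed toward a teacher's softened distribution
# via KL divergence -- the usual way semantic cues from a richer teacher
# (e.g. a camera branch) are transferred to a LiDAR-only student.

def softmax(logits, temperature=1.0):
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_kl(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened class probabilities."""
    p = softmax(teacher_logits, temperature)  # teacher distribution
    q = softmax(student_logits, temperature)  # student distribution
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [2.0, 0.5, -1.0]  # e.g. per-class map-element scores
student = [1.0, 0.8, -0.2]
loss = distillation_kl(teacher, student)  # > 0; exactly 0 when they match
```

A temperature above 1 flattens the teacher distribution, so the student is trained on the teacher's relative preferences among classes rather than only its top prediction.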
-
SimPB++ model unifies 2D and 3D object detection for autonomous driving
Researchers have developed SimPB++, an end-to-end model designed to simultaneously detect both 2D objects in perspective views and 3D objects in a bird's-eye view for multi-camera autonomous driving systems. The model e…
-
MapRF uses NeRF-guided self-training for weakly supervised HD map construction
Researchers have developed MapRF, a novel framework for constructing high-definition (HD) maps for autonomous driving systems using only 2D image labels. This weakly supervised approach leverages Neural Radiance Fields …
-
DynFlowDrive model enhances autonomous driving with flow-based dynamic world modeling
Researchers have introduced DynFlowDrive, a novel latent world model designed to enhance the reliability of autonomous driving systems. This model utilizes flow-based dynamics to predict future scene evolutions under va…
-
Unified Map Prior Encoder enhances autonomous driving mapping and planning
Researchers have developed a Unified Map Prior Encoder (UMPE) designed to integrate diverse map data, such as HD/SD vector maps, rasterized maps, and satellite imagery, into autonomous driving systems. This encoder addr…
-
Researchers develop noise-aware training for robust 3D object detection using V2X data
Researchers have developed a new method for integrating vehicle-to-everything (V2X) communication data into 3D object detection systems for autonomous driving. This approach aims to overcome the limitations of onboard s…
-
BEV segmentation models for autonomous driving lack generalizability across datasets
A new study published on arXiv evaluates the performance of Bird's-Eye View (BEV) segmentation models used in autonomous driving. Researchers found that models trained on single datasets, like nuScenes, tend to overfit …
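The kind of cross-dataset check such a study runs can be sketched as a per-dataset mean-IoU comparison (masks here are toy flat lists of class ids; all names and values are illustrative, not the study's numbers):

```python
# Hypothetical sketch of a cross-dataset generalization check: score one
# model's BEV segmentation masks on its training dataset and on a held-out
# dataset, then compare mean IoU.

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union across classes present in pred or gt."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, g in zip(pred, gt) if p == c and g == c)
        union = sum(1 for p, g in zip(pred, gt) if p == c or g == c)
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious) if ious else 0.0

# A model that fits its training dataset perfectly...
iou_in_domain = mean_iou([0, 1, 1, 2], [0, 1, 1, 2], num_classes=3)
# ...but degrades on another dataset exhibits the overfitting at issue.
iou_cross_domain = mean_iou([0, 1, 1, 2], [0, 2, 1, 2], num_classes=3)
```

A large gap between the two scores, repeated across dataset pairs, is what supports a claim that single-dataset training (e.g. on nuScenes alone) fails to generalize.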
-
ConFusion detector achieves state-of-the-art camera-radar fusion for autonomous driving
Researchers have introduced ConFusion, a novel camera-radar fusion method for 3D object detection in autonomous driving. This approach utilizes heterogeneous query interaction, combining image, radar, and world queries …
-
New framework uses prior map data to improve camera-based 3D object detection
Researchers have developed a novel framework called DualViewMapDet for camera-only 3D object detection and tracking, particularly beneficial for autonomous driving systems that lack LiDAR sensors. This method leverages …
-
OpenVO framework enhances visual odometry with temporal awareness and geometric priors
Researchers have developed OpenVO, a new framework for open-world visual odometry that accounts for temporal dynamics and works with uncalibrated cameras. Unlike previous methods that assume fixed observation frequencie…
-
ARETE paper details new method for HD map generation using vehicle fleet data
Researchers have developed ARETE, a new method for generating High-Definition (HD) maps for autonomous driving using crowdsourced vehicle data. The approach employs a Detection Transformer (DETR) model to predict vector…
-
CLLAP framework enhances radar-camera fusion for autonomous driving with LiDAR pretraining
Researchers have developed CLLAP, a new pretraining framework that uses contrastive learning to improve radar-camera fusion for 3D object detection in autonomous driving. The method generates pseudo-radar data from abun…
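The contrastive-pretraining idea can be sketched with a generic InfoNCE-style objective (this is the standard contrastive recipe, not CLLAP itself; the tiny embeddings and names are illustrative):

```python
import math

# Generic contrastive (InfoNCE-style) alignment sketch, not CLLAP itself:
# paired embeddings (e.g. a pseudo-radar feature and its corresponding
# camera/LiDAR feature) are pulled together while mismatched pairs in the
# batch are pushed apart.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def info_nce(queries, keys, temperature=0.1):
    """Average -log softmax score of each query's matching key (same index)."""
    loss = 0.0
    for i, q in enumerate(queries):
        sims = [dot(q, k) / temperature for k in keys]
        log_denominator = math.log(sum(math.exp(s) for s in sims))
        loss += -(sims[i] - log_denominator)
    return loss / len(queries)

# Two aligned pairs of (already normalized) 2-d embeddings.
radar_feats = [[1.0, 0.0], [0.0, 1.0]]
lidar_feats = [[1.0, 0.0], [0.0, 1.0]]
aligned_loss = info_nce(radar_feats, lidar_feats)

# Shuffling the keys breaks the pairing, so the loss rises sharply.
shuffled_loss = info_nce(radar_feats, lidar_feats[::-1])
```

Because the positive pair sits in the softmax numerator, minimizing this loss simultaneously aligns matched modalities and separates unmatched ones, which is what makes it usable as a pretraining signal before fusion fine-tuning.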
-
DVGT-2 model advances autonomous driving with real-time geometry and planning
Researchers have introduced DVGT-2, a novel Vision-Geometry-Action (VGA) model designed for autonomous driving. Unlike previous vision-language-action models, DVGT-2 prioritizes dense 3D geometry for decision-making. Th…