single-precision floating-point format
PulseAugur coverage of single-precision floating-point format — every cluster mentioning single-precision floating-point format across labs, papers, and developer communities, ranked by signal.
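For reference, the single-precision format tracked by this topic is the IEEE 754 binary32 layout: 1 sign bit, an 8-bit biased exponent (bias 127), and a 23-bit fraction. A minimal standard-library Python sketch (the helper name `fp32_fields` is illustrative) decodes those fields from a float's 32-bit encoding:

```python
import struct

def fp32_fields(x: float):
    # Reinterpret x's IEEE 754 binary32 encoding as a 32-bit integer,
    # then split it into sign (1 bit), exponent (8 bits), fraction (23 bits).
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF   # biased exponent, bias = 127
    fraction = bits & 0x7FFFFF       # 23-bit significand field
    return sign, exponent, fraction

print(fp32_fields(1.0))   # (0, 127, 0): exponent field 127 encodes 2**0
print(fp32_fields(-2.0))  # (1, 128, 0): sign set, exponent field 128 encodes 2**1
```

Round-tripping through `struct.pack(">f", ...)` is also a quick way to observe the precision loss incurred when a double is narrowed to single precision.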
No coverage in the last 90 days.
-
LLM Study Diary #3: PyTorch tensors, float types, and training infrastructure
This LLM study diary entry focuses on PyTorch fundamentals for training large language models. It covers tensor basics and explores floating-point data types such as FP32, BF16, and FP8, weighing efficiency against numerical stability.…
-
EdgeLPR paper explores neural-network precision-versus-performance trade-offs for LiDAR place recognition
Researchers have developed EdgeLPR, a method for efficient LiDAR-based place recognition on edge devices. The approach utilizes Bird's Eye View representations to enable lightweight image-based networks for autonomous n…
-
Object detection models show mixed robustness to quantization and input degradations
A new study investigates how post-training quantization (PTQ) affects the robustness of YOLO object detection models when faced with real-world input degradations like noise and blur. Researchers evaluated various preci…
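The post-training quantization setup described above can be illustrated with a minimal sketch of symmetric per-tensor int8 quantization (standard-library Python; `quantize_int8` and the example values are illustrative, not the study's actual pipeline):

```python
def quantize_int8(weights):
    """Symmetric per-tensor post-training quantization to int8.

    A single scale maps the float range [-max|w|, max|w|] onto [-127, 127];
    dequantizing back reveals the rounding error the model must tolerate.
    """
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    dequantized = [qi * scale for qi in q]
    return q, dequantized, scale

q, deq, scale = quantize_int8([0.5, -1.0, 0.25])
print(q)                              # integer codes in [-128, 127]
print([round(d, 4) for d in deq])     # reconstructed weights, each off by < scale
```

Each dequantized weight differs from the original by less than one quantization step (the scale), which is the error budget the robustness study probes under added noise and blur.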
-
New methods QFlash and ELSA boost Vision Transformer attention efficiency
Researchers have developed two new methods to improve the efficiency of attention mechanisms in vision transformers. QFlash focuses on enabling integer-only operations for FlashAttention, achieving significant speedups …