PulseAugur
research · [3 sources]

Researchers accelerate discrete autoregressive models with Wasserstein flow and Jacobi decoding

Researchers have developed a new method to accelerate inference in discrete autoregressive normalizing flows, a class of generative model. The proposed technique, Selective Jacobi Decoding, replaces strictly sequential token generation with parallel iterative refinement, applying Jacobi decoding selectively and yielding up to 4.7 times faster generation without sacrificing quality. A second paper learns discrete autoregressive priors via Wasserstein gradient flow, aiming to improve compatibility between image tokenizers and generative models by matching their distributions during training.
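The Jacobi decoding idea behind the first paper can be sketched as a fixed-point iteration: instead of generating tokens one at a time, initialize a guess for the full sequence and update every position in parallel until nothing changes. The toy model below uses a hypothetical deterministic next-token rule, not the paper's discrete normalizing flow, and all function names are invented for illustration.

```python
# Minimal sketch of Jacobi (parallel fixed-point) decoding vs. sequential
# autoregressive decoding. The "model" is a toy deterministic rule, not the
# architecture from the paper.

def ar_step(seq, i):
    """Hypothetical next-token rule: token i depends only on tokens < i."""
    return (sum(seq[:i]) + i) % 7

def sequential_decode(n):
    """Standard autoregressive decoding: n strictly sequential steps."""
    seq = []
    for i in range(n):
        seq.append(ar_step(seq, i))
    return seq

def jacobi_decode(n, max_iters=50):
    """Jacobi decoding: update all n positions in parallel each iteration,
    stopping when the sequence reaches a fixed point."""
    seq = [0] * n  # arbitrary initial guess for the whole sequence
    for it in range(1, max_iters + 1):
        new = [ar_step(seq, i) for i in range(n)]  # parallel update
        if new == seq:  # fixed point: decoding has converged
            return new, it
        seq = new
    return seq, max_iters
```

Because each Jacobi iteration corrects at least one more prefix position, the fixed point matches the sequential output; speedups come when many positions stabilize early enough that far fewer than n iterations are needed, which the paper's selective variant exploits.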

Summary written by gemini-2.5-flash-lite from 3 sources.

IMPACT These papers introduce techniques to improve the efficiency and quality of generative models, potentially impacting future research and applications in image generation and other areas.

RANK_REASON The cluster contains two academic papers detailing novel methods in generative modeling and discrete autoregressive priors.


COVERAGE [3]

  1. arXiv cs.LG TIER_1 · Bowen Zheng, Yihong Luo, Tianyang Hu

    Learning Discrete Autoregressive Priors with Wasserstein Gradient Flow

    arXiv:2605.06148v1 Announce Type: cross Abstract: Discrete image tokenizers are commonly trained in two stages: first for reconstruction, and then with a prior model fitted to the frozen token sequences. This decoupling leaves the tokenizer unaware of the model that will later ge…

  2. arXiv cs.LG TIER_1 · Jiaru Zhang, Juanwu Lu, Xiaoyu Wu, Ziran Wang, Ruqi Zhang

    Accelerating Inference of Discrete Autoregressive Normalizing Flows by Selective Jacobi Decoding

    arXiv:2505.24791v2 Announce Type: replace Abstract: Discrete normalizing flows are promising generative models with advantages such as analytical log-likelihood computation and end-to-end training. However, the architectural constraints to ensure invertibility and tractable Jacob…

  3. arXiv cs.CV TIER_1 · Tianyang Hu

    Learning Discrete Autoregressive Priors with Wasserstein Gradient Flow

    Discrete image tokenizers are commonly trained in two stages: first for reconstruction, and then with a prior model fitted to the frozen token sequences. This decoupling leaves the tokenizer unaware of the model that will later generate its tokens. As a result, the learned tokens…