PulseAugur
ENTITY lpips

PulseAugur coverage of lpips — every cluster mentioning lpips across labs, papers, and developer communities, ranked by signal.

Total · 30d: 6 (6 over 90d)
Releases · 30d: 0 (0 over 90d)
Papers · 30d: 6 (6 over 90d)
TIER MIX · 90D
RECENT · PAGE 1/1 · 6 TOTAL
  1. TOOL · CL_22026 ·

    ViTok-v2 scales to 5B parameters, advancing image autoencoder reconstruction and generation

    Researchers have introduced ViTok-v2, a 5-billion parameter image autoencoder that scales to larger resolutions and parameter counts than previous models. This new model utilizes native resolution support and a DINOv3 p…

  2. TOOL · CL_22436 ·

    PixelGen paper introduces perceptual supervision to boost pixel diffusion image generation

    Researchers have introduced PixelGen, a novel end-to-end pixel diffusion framework designed to enhance image generation quality. PixelGen incorporates perceptual losses, specifically LPIPS for local textures and P-DINO …

  3. TOOL · CL_15791 ·

    Researchers develop new differentiable VQ for optimized generative image compression

    Researchers have developed RDVQ, a novel framework for optimizing generative image compression. This approach uses a differentiable relaxation of the codebook distribution to enable end-to-end rate-distortion optimizati…

  4. RESEARCH · CL_14075 ·

    GOR-IS framework improves 3D object removal with intrinsic space inpainting

    Researchers have developed GOR-IS, a new framework for removing objects from 3D scene reconstructions generated by methods like 3D Gaussian Splatting. This approach addresses limitations in existing techniques by explic…

  5. RESEARCH · CL_11347 ·

    Researchers develop pHVI-ISPNet for improved night photography rendering

    Researchers have developed a new framework called pHVI-ISPNet to improve night photography rendering by addressing perceptual distortions and color bias. This RAW-to-RGB model utilizes specific refinements like RAW-doma…

  6. RESEARCH · CL_11367 ·

    Parameter-Efficient Architectural Modifications for Translation-Invariant CNNs

    Researchers have developed a novel 'Online Architecture' strategy for Convolutional Neural Networks (CNNs) that significantly enhances translation invariance. By strategically inserting Global Average Pooling (GAP) laye…
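For context on the entity itself: LPIPS (Learned Perceptual Image Patch Similarity) scores image similarity by comparing unit-normalized deep features, reweighted per channel and averaged over spatial positions. Below is a minimal NumPy sketch of that aggregation rule using stand-in feature maps and weights; the real metric extracts features from a pretrained network (e.g. AlexNet or VGG) via the `lpips` Python package, which this sketch does not reproduce.

```python
import numpy as np

def lpips_style_distance(feats_a, feats_b, weights):
    """Toy sketch of the LPIPS aggregation rule (not the real library).

    feats_a, feats_b: lists of per-layer feature maps, each shaped (C, H, W).
    weights: list of per-layer channel weight vectors, each shaped (C,).
    """
    total = 0.0
    for fa, fb, w in zip(feats_a, feats_b, weights):
        # Unit-normalize the channel vector at every spatial position.
        na = fa / (np.linalg.norm(fa, axis=0, keepdims=True) + 1e-10)
        nb = fb / (np.linalg.norm(fb, axis=0, keepdims=True) + 1e-10)
        # Channel-weighted squared difference, summed over channels,
        # averaged over spatial positions, accumulated across layers.
        diff = (w[:, None, None] * (na - nb)) ** 2
        total += diff.sum(axis=0).mean()
    return total

# Stand-in "features" for two images across two layers (hypothetical shapes).
rng = np.random.default_rng(0)
fa = [rng.normal(size=(8, 4, 4)) for _ in range(2)]
fb = [rng.normal(size=(8, 4, 4)) for _ in range(2)]
ws = [np.ones(8) for _ in range(2)]

print(lpips_style_distance(fa, fa, ws))      # identical inputs → 0.0
print(lpips_style_distance(fa, fb, ws) > 0)  # distinct inputs → True
```

This is why LPIPS appears as a training loss in items like PixelGen above: each term is differentiable, so the feature-space distance can be backpropagated to the generator.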