PulseAugur

ViBE framework maps visual stimuli to M/EEG brain signals

Researchers have developed ViBE, a new framework for brain encoding that translates visual stimuli into magnetoencephalography (MEG) and electroencephalography (EEG) signals. The system uses a spatio-temporal convolutional variational autoencoder (TSC-VAE) to reconstruct neural responses and a Q-Former to align visual features with the neural representations. Experiments on the THINGS-EEG2 and THINGS-MEG datasets show ViBE's ability to generate high-quality M/EEG signals, potentially aiding the development of visual prostheses.
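To make the pipeline concrete, here is a minimal toy sketch of the visual-to-M/EEG encoding idea: a visual feature vector is pushed through a VAE-style bottleneck (mean, log-variance, reparameterized sample) and decoded into a channels-by-time signal. All dimensions, weight matrices, and function names are hypothetical illustrations, not the paper's actual architecture, which uses a trained spatio-temporal convolutional VAE and a Q-Former rather than random linear maps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; the paper's real sizes differ)
FEAT_DIM, LATENT_DIM = 512, 64
N_CHANNELS, N_TIMES = 63, 250   # e.g. 63 EEG channels, 250 time samples

# Random weights stand in for trained encoder/decoder parameters
W_mu = rng.normal(0.0, 0.02, (FEAT_DIM, LATENT_DIM))
W_logvar = rng.normal(0.0, 0.02, (FEAT_DIM, LATENT_DIM))
W_dec = rng.normal(0.0, 0.02, (LATENT_DIM, N_CHANNELS * N_TIMES))

def encode_visual_to_eeg(visual_feat: np.ndarray) -> np.ndarray:
    """Map a visual feature vector to a (channels, time) EEG-like signal
    through a VAE-style latent bottleneck."""
    mu = visual_feat @ W_mu
    logvar = visual_feat @ W_logvar
    # Reparameterization trick: sample z = mu + sigma * eps
    z = mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)
    # Decode the latent into a multichannel time series
    return (z @ W_dec).reshape(N_CHANNELS, N_TIMES)

feat = rng.normal(size=FEAT_DIM)
eeg = encode_visual_to_eeg(feat)
print(eeg.shape)  # (63, 250)
```

In the actual ViBE framework, the random projections above would be replaced by learned spatio-temporal convolutions, and the latent alignment between visual features and neural representations would be handled by the Q-Former.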

Summary written by gemini-2.5-flash-lite from 2 sources.

IMPACT Presents a novel method for brain encoding, potentially advancing visual prosthetics and neural interface research.

RANK_REASON Academic paper detailing a new method for brain encoding.

Read on arXiv cs.CV →

COVERAGE [2]

  1. arXiv cs.CV TIER_1 · Ganxi Xu, Zhao-Rong Lai, Yuting Tang, Yonghao Song, Shuyan Zhou, Guoxu Zhou, Boyu Wang, Jian Zhu, Jinyi Long

    ViBE: Visual-to-M/EEG Brain Encoding via Spatio-Temporal VAE and Distribution-Aligned Projection

    arXiv:2604.26218v1 · Abstract: Brain encoding models not only serve to decipher how visual stimuli are transformed into neural responses, but also represent a critical step toward visual prostheses that restore vision for patients with severe vision disorders. Br…

  2. arXiv cs.CV TIER_1 · Jinyi Long

    ViBE: Visual-to-M/EEG Brain Encoding via Spatio-Temporal VAE and Distribution-Aligned Projection

    Brain encoding models not only serve to decipher how visual stimuli are transformed into neural responses, but also represent a critical step toward visual prostheses that restore vision for patients with severe vision disorders. Brain encoding involves two fundamental steps: ach…