PulseAugur
research · [3 sources]

Aitchison geometry powers new compositional graph embeddings and flavor tagger calibration

Two new arXiv papers apply Aitchison geometry to representation learning. One formulates flavor-tagger calibration in high-energy physics as an optimal-transport problem on the probability simplex. The other presents a compositional graph embedding framework that uses Aitchison geometry to produce interpretable embeddings for graph machine learning tasks such as node classification and link prediction.
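Aitchison geometry equips the probability simplex with its own vector-space operations (perturbation, powering) and an isometry to Euclidean space via the centered log-ratio transform. The sketch below illustrates these standard operations in NumPy; it is background for the summary above, not code from either paper.

```python
import numpy as np

def closure(x):
    """Project a strictly positive vector onto the simplex (normalize to sum 1)."""
    x = np.asarray(x, dtype=float)
    return x / x.sum()

def perturb(x, y):
    """Aitchison perturbation: the simplex analogue of vector addition."""
    return closure(np.asarray(x, dtype=float) * np.asarray(y, dtype=float))

def power(x, a):
    """Aitchison powering: the simplex analogue of scalar multiplication."""
    return closure(np.asarray(x, dtype=float) ** a)

def clr(x):
    """Centered log-ratio transform: maps the simplex isometrically
    into the zero-sum hyperplane of Euclidean space."""
    logx = np.log(np.asarray(x, dtype=float))
    return logx - logx.mean()

def aitchison_distance(x, y):
    """Aitchison distance = Euclidean distance between clr images."""
    return np.linalg.norm(clr(x) - clr(y))
```

The uniform composition acts as the neutral element for perturbation, which is what makes these operations a genuine vector-space structure on the simplex.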

Summary written by gemini-2.5-flash-lite from 3 sources.

IMPACT These papers offer new geometric frameworks for improving interpretability and performance in graph machine learning and physics data analysis.

RANK_REASON Two arXiv papers introduce novel methods for representation learning using Aitchison geometry.

Read on arXiv cs.LG →

COVERAGE [3]

  1. arXiv cs.LG TIER_1 · Yeonjoon Kim, Un-ki Yang ·

    Data-Driven, Geometry-Aware Optimal-Transport Calibration of Flavor Tagger

    arXiv:2605.01363v1 Announce Type: cross Abstract: Flavor-tagging calibrations are often provided either as scale factors measured at a finite set of working points or as binned corrections to a chosen one-dimensional discriminant. However, this approach falls short of providing c…

  2. arXiv cs.LG TIER_1 · Nikolaos Nakis, Chrysoula Kosma, Panagiotis Promponas, Michail Chatzianastasis, Giannis Nikolentzos ·

    Aitchison Embeddings for Learning Compositional Graph Representations

    arXiv:2605.00716v1 Announce Type: new Abstract: Representation learning is central to graph machine learning, powering tasks such as link prediction and node classification. However, most graph embeddings are hard to interpret, offering limited insight into how learned features r…

  3. arXiv cs.LG TIER_1 · Giannis Nikolentzos ·

    Aitchison Embeddings for Learning Compositional Graph Representations

    Representation learning is central to graph machine learning, powering tasks such as link prediction and node classification. However, most graph embeddings are hard to interpret, offering limited insight into how learned features relate to graph structure. Many networks naturall…
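The calibration paper frames flavor-tagger corrections as optimal transport on the probability simplex; its exact formulation is not shown in the excerpt above. As a generic, assumed illustration (not the paper's method), entropic-regularized optimal transport between two discrete distributions on the simplex can be computed with Sinkhorn iterations:

```python
import numpy as np

def sinkhorn_plan(a, b, cost, eps=1.0, n_iter=500):
    """Entropic-regularized optimal transport between discrete
    distributions a and b (points on the probability simplex).
    Returns a transport plan whose marginals approximate a and b."""
    K = np.exp(-cost / eps)      # Gibbs kernel from the cost matrix
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iter):      # alternating marginal-matching scalings
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]
```

Larger `eps` blurs the plan toward the independent coupling but speeds convergence; smaller `eps` approaches the unregularized optimal transport plan.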