Researchers at Apple have published a paper exploring information leakage from large language models. Using vision-language models, their study shows that even the final logit values, which directly determine a model's output, can reveal task-irrelevant information from an image query. This leakage can be as significant as that from more direct projections of the model's internal representations, highlighting a potential privacy risk for model owners.
Summary written by gemini-2.5-flash-lite from 1 source.
The submission is an academic paper from a major tech company's research division detailing a novel finding about information leakage in LLMs.