Researchers have explored whether large language models (LLMs) can make personalized access control decisions, particularly for smartphone application permissions. A study covering 307 user privacy statements and 14,682 permission decisions found that LLMs could align with user preferences in up to 86% of cases and could guide users toward safer choices. However, a trade-off was observed: while personalization improved agreement with individual decisions, strict adherence to user preferences could lead to less secure outcomes, because users tend to over-permission.
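The setup described above can be sketched as prompting an LLM with a user's stated privacy preferences plus one concrete permission request, then parsing an allow/deny answer. The function names, prompt wording, and default-deny behavior below are illustrative assumptions, not the paper's actual implementation:

```python
# Illustrative sketch only: names and prompt wording are assumptions,
# not taken from the study's implementation.

def build_permission_prompt(privacy_statement: str, app: str, permission: str) -> str:
    """Combine a user's stated privacy preferences with one concrete
    permission request, asking the model for a one-word decision."""
    return (
        f"User privacy preferences: {privacy_statement}\n"
        f"App '{app}' requests the '{permission}' permission.\n"
        "Based only on the preferences above, answer ALLOW or DENY."
    )

def parse_decision(model_reply: str) -> str:
    """Map a free-form model reply to ALLOW or DENY, defaulting to the
    safer DENY when the reply is ambiguous (echoing the finding that
    strictly following user preferences can be less secure)."""
    reply = model_reply.strip().upper()
    if reply.startswith("ALLOW"):
        return "ALLOW"
    return "DENY"

prompt = build_permission_prompt(
    "I never share my location with social apps.",
    "PhotoShare",
    "ACCESS_FINE_LOCATION",
)
print(parse_decision("Deny. The user opted out of location sharing."))  # DENY
```

The default-deny fallback in `parse_decision` is one way a system could bias ambiguous model output toward the safer choice rather than mirroring over-permissive user habits.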
Summary written by gemini-2.5-flash-lite from 1 source.
IMPACT Investigates LLM potential for enhancing security and user experience in access control, with implications for personalized system interactions.
RANK_REASON The cluster contains an academic paper detailing research on LLM capabilities.