Researchers have developed a new attack called TrEEStealer that can steal decision tree models protected by Trusted Execution Environments (TEEs). The attack exploits side-channel vulnerabilities within TEEs, specifically using control-flow information to extract the model's structure and parameters. The method proved effective against popular libraries including OpenCV, mlpack, and emlearn, demonstrating that TEEs do not fully prevent model extraction via side-channel leakage.
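To give an intuition for why control-flow leakage is enough to extract a tree, here is a minimal sketch (not the paper's TrEEStealer implementation): each internal node of a decision tree executes one of two branches depending on a threshold comparison, so an attacker who can observe which branch was taken per query can binary-search the hidden threshold. The `branch_taken` oracle below stands in for a real side channel and is purely illustrative.

```python
# Hypothetical illustration of model extraction from control-flow leakage.
# A real attack would observe branch directions through a TEE side channel;
# here the leak is simulated directly.

SECRET_THRESHOLD = 0.4231  # hidden parameter of one tree node inside the "TEE"

def branch_taken(x: float) -> bool:
    """Simulated control-flow leak: True if the left branch executes."""
    return x <= SECRET_THRESHOLD

def recover_threshold(lo: float = 0.0, hi: float = 1.0, iters: int = 40) -> float:
    """Binary-search the node threshold using only leaked branch directions."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if branch_taken(mid):
            lo = mid   # mid went left, so the threshold is at or above mid
        else:
            hi = mid   # mid went right, so the threshold is below mid
    return (lo + hi) / 2

estimate = recover_threshold()
print(abs(estimate - SECRET_THRESHOLD) < 1e-6)  # → True
```

Repeating this per node (and observing which leaf's code path runs) would recover the full tree structure; the sketch only shows the single-threshold case.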
Summary written by gemini-2.5-flash-lite from 1 source.