Security researchers have identified significant vulnerabilities in AI agent skill files that can be exploited for credential exposure and malicious instruction execution. One analysis found that nearly 15% of the distinct skill files examined contained hardcoded credentials, some granting direct database write access. A separate campaign documented attackers using deceptive community skills to install malware such as Remcos RAT and GhostLoader by embedding malicious instructions in the skill's setup process. These findings highlight a critical supply chain risk for AI agents, whose attack surface the researchers estimate to be six times larger than that of traditional software.
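Audits like the one described above typically detect hardcoded credentials by pattern-matching the contents of skill files. The sketch below is a minimal, illustrative Python scanner; the patterns and function names are assumptions for demonstration, not the researchers' actual tooling (production scanners such as gitleaks or truffleHog use far larger rule sets plus entropy analysis).

```python
import re

# Illustrative credential patterns: a generic key/secret assignment and a
# database connection URL that embeds a password. Real rule sets are larger.
CREDENTIAL_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|secret|token|password)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
    re.compile(r"(?i)postgres(ql)?://[^\s'\"]+:[^\s'\"]+@"),
]

def scan_skill_file(text: str) -> list[str]:
    """Return the lines of a skill file that appear to contain hardcoded credentials."""
    hits = []
    for line in text.splitlines():
        if any(p.search(line) for p in CREDENTIAL_PATTERNS):
            hits.append(line.strip())
    return hits
```

Running such a scan over every skill file before installation is one concrete form of the supply chain auditing the findings call for, though pattern matching alone cannot catch malicious natural-language instructions embedded in a skill's setup steps.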
Summary written by gemini-2.5-flash-lite from 1 source.
IMPACT Exposes critical supply chain vulnerabilities in AI agents, necessitating new security auditing practices for developers and users.
RANK_REASON Security research paper detailing vulnerabilities in AI agent skills.