METR has proposed a framework for increased transparency in the development of frontier AI models, arguing that current disclosure practices are insufficient to identify potential risks. The organization suggests that developers should share more information about their training processes and internal model capabilities, even for models that are not publicly deployed. METR acknowledges potential downsides, such as incentivizing developers to avoid discovering risks or leaking competitively sensitive information, and proposes tiered disclosure options, including sharing with government bodies, with external researchers, or through trusted intermediaries.
Summary written by gemini-2.5-flash-lite from 1 source.