PulseAugur
research · [1 source]

EU AI standard prEN 18228 fails to address decision-making

A new European standard, prEN 18228, aims to formalize AI risk assessment by requiring organizations to identify hazards, evaluate risks, and monitor controls throughout an AI system's lifecycle. While the standard brings structure and a product-safety mindset, it may not adequately address the unique failure modes of AI systems, which can be context-dependent and visible only after deployment. Its reliance on traditional probability-times-severity models may obscure the critical distinction between frequent, low-impact errors and rare, severe failures, potentially leading to poor decisions about whether a system should be deployed.
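The probability-times-severity concern can be made concrete with a minimal sketch. The scoring function below is a generic risk-matrix formula for illustration, not anything defined by prEN 18228 itself, and the example numbers are hypothetical:

```python
# Illustration (not from the standard): a naive probability-times-severity
# score can assign identical ratings to very different failure profiles.

def risk_score(probability: float, severity: float) -> float:
    """Classic risk-matrix score: likelihood multiplied by impact."""
    return probability * severity

# A frequent, low-impact error (e.g. occasional minor misclassifications)
frequent_minor = risk_score(probability=0.50, severity=2)

# A rare, severe failure (e.g. one harmful automated decision at scale)
rare_severe = risk_score(probability=0.01, severity=100)

print(frequent_minor)  # 1.0
print(rare_severe)     # 1.0 -- same score, very different deployment risk
```

Because both profiles collapse to the same number, a deployment decision driven by the score alone cannot distinguish them, which is the failure mode the summary describes.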

Summary written by gemini-2.5-flash-lite from 1 source.

IMPACT New European standard prEN 18228 may not fully address AI's unique failure modes, potentially impacting deployment decisions.

RANK_REASON Discusses a new European standard for AI risk assessment and its potential shortcomings. [lever_c_demoted from significant: ic=1 ai=0.4]

Read on Mastodon — sigmoid.social →


COVERAGE [1]

  1. Mastodon — sigmoid.social TIER_1 · [email protected] ·

    The prEN 18228 Problem: Why Your AI Risk Assessment Will Fail the First Real Test Most AI risk assessments look solid on paper and collapse the moment a regulator, client, or auditor asks a simple question. What exactly can go wrong, how likely is it, and what does it cost when i…