Speakers
Sessions
FAQ

Breakout Session | Data Poisoning

AI systems do not simply “run on data.” They form beliefs, judgments, and action plans through complex data supply chains: training sets, fine-tuning inputs, embedding, retrieval corpora (RAG), sensor feeds, and the untrusted content they summarize or act upon. In contested information environments, adversaries treat this entire pipeline as a battlespace, conducting data warfare to corrupt, bias, or hijack the evidence AI systems rely on to perceive reality and make decisions. Attacks are evolving beyond traditional dataset poisoning into broader forms of knowledge corruption and system hijacking. Adversaries, commercial or individual agents are weaponizing the entire AI lifecycle starting from exploiting system design flaws to poison retrieval corpora, manipulating system integration gaps to embed malicious instructions in tool responses, and capitalizing on a lack of AI literacy to introduce subtle, persistent manipulations that users fail to catch. Rather than degrading models outright, such techniques reshape what systems treat as credible information. Driven by development pressures and AI hype, organizations are rushing to integrate models into data processing and decision-making. How do we address the inherent design flaws that compromise trust and accuracy in these semi-automated systems?

12:00 - 13:00
UTC+3 (EEST)