

A practical overview of how human feedback, cognitive biases, and behavioral models can improve AI alignment.
Understanding and modeling human feedback is central to building aligned AI systems, yet current approaches often rely on simplified assumptions about how people evaluate, reward, and correct AI behavior.
This session explores how insights from machine learning, cognitive science, behavioral psychology, and economics can lead to more realistic and reliable feedback models. Topics include how humans actually generate feedback, how cognitive biases shape evaluation, how AI systems interpret and aggregate signals, and how mathematical models can better reflect real-world behavior. The aim is to highlight practical pathways for designing alignment methods that work with genuine human preferences rather than idealized abstractions, enabling more robust, predictable, and trustworthy AI systems.
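To make the "simplified assumptions" concrete: much current alignment work, such as reward modeling from pairwise comparisons, assumes labelers follow a Boltzmann-rational, Bradley-Terry-style choice rule. Below is a minimal sketch of that standard model; it is illustrative background rather than code from the talk, and the `beta` rationality coefficient and function name are assumptions made for the example.

```python
import math

def preference_prob(reward_a: float, reward_b: float, beta: float = 1.0) -> float:
    """Probability that a rater prefers option A over option B under a
    Bradley-Terry / Boltzmann-rational choice model.

    beta is a rationality coefficient (an assumption of this sketch):
    large beta -> near-deterministic, consistent feedback;
    beta near 0 -> feedback close to a coin flip, regardless of quality.
    """
    return 1.0 / (1.0 + math.exp(-beta * (reward_a - reward_b)))

# A highly consistent rater almost always prefers the genuinely better option.
print(preference_prob(1.0, 0.0, beta=10.0))  # ~0.99995

# A noisy or distracted rater yields a much weaker preference signal,
# even though option A is still better.
print(preference_prob(1.0, 0.0, beta=0.5))   # ~0.62
```

Richer behavioral models of the kind this session surveys typically relax this single-parameter picture, for example by accounting for anchoring, ordering effects, or systematic evaluation biases rather than treating all deviation from the ideal rater as uniform noise.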
Explore related talks that complement this research

TECHNICAL
Agents
Introduction to Agentic AI
A practical introduction to the core components required to build reliable, production-ready agentic AI systems.

TECHNICAL
Agents
Agent Architectures and Design Patterns
Design patterns for robust, controllable multi-agent systems.

TECHNICAL
Agents
Vector Databases for LLM Systems: Foundations, Architectures, and Emerging Directions
A practical look at how vector databases power RAG systems, improve retrieval quality, and support real-world LLM applications.
