Artificial Intelligence, Empathy, and Morality

Many researchers in the Consortium are interested in how people navigate moral emotions and decisions through interactions with artificial intelligence, including LLMs, chatbots, and social robots. We study how people form moral emotions and judgments about AI, how they react when AI appears to convey such reactions in return, and the scientific and normative implications of how people might use human-AI interaction to develop moral capacities.

We hosted an event on Empathy, Morality, and AI in April 2024; for more about that event, see here. Archived videos from the conference are available on the Events page. We plan to follow up with another AI-Empathy event in spring 2026.

Related Members