Moral Measures Workshop

This event will bring together researchers to give short talks on methodology and measurement questions in the study of moral and ethical decision-making. Although many of our conference events have focused on theory development and new empirical findings, this is the first Consortium event devoted specifically to methodology. We aim for this long-form workshop to be the first of several that directly address how best to conceptualize and test questions about moral decision-making.

Conference Date: Monday, March 23, 1:30 p.m.–5:00 p.m. EDT

The event will be held in Sparks 124, as well as online as a Zoom webinar: Register for the webinar.

Read the Social Science Research Institute's release about the event.

Conference Contact: Daryl Cameron, cdc49@psu.edu

Monday, March 23, 1:30–5:00 p.m. EDT
124 Sparks Building and via Zoom

Speakers

  • Nick Byrd (Geisinger Health System)
  • Becca Ruger (Penn State)
  • Paige Amormino (Penn State)
  • Faruk Yalcin (Penn State)
  • Raluca Szekely-Copindean (Romanian Academy)
  • Jillian Meyer (Indiana University)
  • Ben Hardin (Washington University in St. Louis)
  • Vladimir Chituc (Yale University)
Poster art by Izzy Griffith

Be sure to check out Jillian Meyer’s Substack post about this workshop event!

Conference Schedule

1:30–1:35 p.m. | Opening Remarks by Daryl Cameron

1:35–2:05 p.m. | Opening Session

Psychometrics in the Age of AI: Two Problems, Two Solutions, and Two Resources | Nick Byrd (Geisinger Health System)

At least two major problems have emerged in the past decade of cognitive science: junk data and AI agents. Low-quality, inattentive, or bot-generated responses are rife in online psychology studies. And some of our best measures of data quality cannot detect the AI agents that take paid studies for people trying to make a buck. These issues raise both questions and opportunities. What are we actually trying to measure in cognitive science? And how can we measure it? I hope this talk will get you started on your way to your own answer. First, I review some famous cases of these problems. Then I review some of the “process tracing” and other methods we have had to employ to overcome them. I will close by pointing you to two resources that make the problems and solutions more vivid: one shows you how to test how well AI agents perform on surveys (including your survey), and the other allows AI to interview participants to reveal the decision-making that produced their responses.
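As a rough illustration of the junk-data problem, the sketch below flags respondents who finish implausibly fast or fail an embedded attention check. It is a generic data-quality screen, not the process tracing methods or resources Byrd describes, and all data and thresholds are hypothetical:

```python
import statistics

def flag_suspect_responses(responses, min_seconds=30.0):
    """Flag rows that fail an attention check or finish implausibly fast.

    Each response is a dict with 'seconds' (completion time) and
    'attention_pass' (bool). Returns indices of flagged rows.
    """
    median_t = statistics.median(r["seconds"] for r in responses)
    flagged = []
    for i, r in enumerate(responses):
        too_fast = r["seconds"] < min_seconds or r["seconds"] < 0.3 * median_t
        if too_fast or not r["attention_pass"]:
            flagged.append(i)
    return flagged

# Hypothetical data: respondent 1 is implausibly fast, 2 fails the check.
data = [
    {"seconds": 310, "attention_pass": True},
    {"seconds": 12,  "attention_pass": True},
    {"seconds": 280, "attention_pass": False},
    {"seconds": 295, "attention_pass": True},
]
print(flag_suspect_responses(data))  # → [1, 2]
```

As Byrd notes, screens like this are increasingly insufficient on their own, since capable AI agents can pass them; that is precisely the gap his process tracing resources aim to address.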

2:05–2:50 p.m. | Early Afternoon Session

Mapping the Moral Domain: A Gamified, Interdisciplinary Measure of Moral Decision-Making | Jillian Meyer (Indiana University)

Most measures of morality rely on self-report or single-framework approaches, limiting their ability to capture how people practically make moral decisions. This gamified task introduces a behavioral, interdisciplinary measure in which participants respond to everyday moral dilemmas in real time. The task integrates four domains—anthropology (autonomy vs. community), biology (harm sensitivity vs. harm tolerance), philosophy (utilitarian vs. deontological), and religion (spiritual vs. secular)—to generate individualized profiles of moral value prioritization. By moving beyond self-report and combining multiple frameworks, this approach offers a more ecologically valid way to measure moral decision-making and supports applications in both research and education.

Measurement Invariance and Generalizability in Moral Psychology | Raluca Szekely-Copindean (Romanian Academy)

Generalizability has become a central concern in moral psychology, particularly as the field seeks to move beyond narrow and homogeneous samples. Yet broader sampling alone is not sufficient. When measures are applied across linguistic, cultural, or temporal contexts, an important question is whether they reflect the same construct in a sufficiently comparable manner. Measurement invariance therefore becomes an empirical question rather than something that can be assumed. This presentation addresses that issue through the Oxford Utilitarianism Scale as a cross-language example and the Portrait Value Questionnaire as a longitudinal example. Together, these cases suggest that broadening the scope of moral psychology requires not only more diverse samples, but also stronger psychometric evidence of comparability across contexts.
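For readers new to invariance testing, nested models are typically compared step by step (configural: same structure; metric: equal loadings; scalar: equal intercepts), and a common rule of thumb flags non-invariance when CFI drops by more than .01 between adjacent steps. The sketch below illustrates that comparison with invented fit indices; the talk's actual analyses may differ:

```python
def invariance_steps(cfi_by_model, threshold=0.01):
    """Compare CFI across nested invariance models, least to most
    constrained. Returns (label, delta_cfi, holds) per constrained step."""
    out = []
    for (_, prev_cfi), (label, cfi) in zip(cfi_by_model, cfi_by_model[1:]):
        delta = prev_cfi - cfi
        out.append((label, round(delta, 3), delta <= threshold))
    return out

# Hypothetical fit indices for a two-group comparison: metric invariance
# holds, but the scalar step shows a CFI drop larger than .01.
fits = [("configural", 0.962), ("metric", 0.958), ("scalar", 0.941)]
print(invariance_steps(fits))
```

In practice these indices come from fitting multi-group factor models in dedicated software; the point here is only the logic of comparing adjacent nested models rather than assuming comparability.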

Using Multiple-Informant Designs to Study the Social Impact of Morality | Ben Hardin (Washington University in St. Louis)

In moral psychology, researchers typically rely on a single source of information (i.e., behavior, self-report, or informant report) to assess moral phenomena. In this talk, I suggest that some limitations of these approaches can be overcome using multiple-informant designs, in which a target receives assessments from several others who know them well. Multiple-informant designs allow researchers to examine moral characteristics from multiple perspectives, assess moral characteristics more reliably by aggregating across multiple raters, and pursue important questions about how morality functions within relationships. To illustrate the unique insights that can be gained from these designs, I present results from a Round Robin study (N = 577, nested within 145 friend groups) that tested whether people’s friendship satisfaction and well-being are associated with their friends’ moral character.
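The reliability gain from aggregating across raters can be sketched with the Spearman-Brown prophecy formula, a standard psychometric result not specific to this study; the single-informant reliability of .30 below is hypothetical:

```python
def aggregate_reliability(r_single, k):
    """Spearman-Brown prophecy: reliability of the mean of k parallel
    informant reports, given single-informant reliability r_single."""
    return (k * r_single) / (1 + (k - 1) * r_single)

# A modest single-informant reliability improves quickly with aggregation.
for k in (1, 2, 4, 8):
    print(k, round(aggregate_reliability(0.30, k), 2))
```

This is one reason multiple-informant designs can outperform any single source: even noisy individual reports yield a dependable aggregate once several raters are combined.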

2:50–3:05 p.m. | Break

3:05–3:50 p.m. | Methods Roundtable

3:50–4:00 p.m. | Break

4:00–5:00 p.m. | Late Afternoon Session

Do moral judgments follow the Weber-Fechner laws? Yes and no (respectively) | Vlad Chituc (Yale University)

On a scale from 1 to 10, how bad is assault? Looking at moral wrongness and 26 other constructs, I find that Likert ratings are logarithmic compressions of psychological magnitude, with each step corresponding to a constant ratio rather than a constant interval. A psychophysical measure called magnitude estimation better predicts real-world outcomes such as the length of 25,000 federal prison sentences, and it better preserves additive and ratio structure in simple psychophysical paradigms. Thus, averages of rating scale data systematically underestimate true psychological magnitudes, and researchers should instead use magnitude estimation.
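The core claim (that equal rating steps correspond to constant magnitude ratios, so averages of ratings understate true magnitudes) can be illustrated with a toy example; the magnitudes below are hypothetical and the log base is chosen for convenience:

```python
import math

# Suppose each rating is a logarithmic compression of an underlying
# magnitude: rating = log10(magnitude). Averaging the ratings and
# inverting then recovers the geometric mean of the magnitudes, which
# understates their arithmetic mean whenever the magnitudes vary.
magnitudes = [10, 100, 1000]          # hypothetical "wrongness" magnitudes
ratings = [math.log10(m) for m in magnitudes]

mean_rating = sum(ratings) / len(ratings)       # ≈ 2.0
implied = 10 ** mean_rating                     # geometric mean, ≈ 100
true_mean = sum(magnitudes) / len(magnitudes)   # 370
print(implied, true_mean)
```

The gap between 100 and 370 is the underestimation the abstract describes: each one-point rating step here corresponds to a tenfold ratio, not a fixed interval.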

Prosocial Discounting: Synthesizing Social, Moral, and Intergenerational Tradeoffs | Paige Amormino (Penn State)

This talk introduces a “prosocial discounting” framework that synthesizes social, moral, and intergenerational discounting paradigms to study how people allocate care and concern across relational distance, moral status, and time. I’ll highlight both conceptual and measurement issues across these paradigms (e.g., what different discounting parameters capture, how task structure shapes inference), and suggest directions for improving comparability and construct validity across tasks. I’m especially interested in using this as a starting point for discussion about how we operationalize prosocial preferences in moral psychology.
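For readers unfamiliar with discounting paradigms, a standard hyperbolic form, v = A / (1 + kN), is widely used across social, temporal, and related tasks, where N is distance (social or temporal) and k indexes how steeply value falls off. The sketch below is illustrative only; the amount, distances, and k are hypothetical, and the parameterizations discussed in the talk may differ:

```python
def discounted_value(amount, distance, k):
    """Hyperbolic discounting: v = A / (1 + k * N), where N is social
    distance (or delay) and k indexes how steeply value falls off."""
    return amount / (1 + k * distance)

# A $75 reward evaluated at increasing social distance, with k = 0.05:
for n in (1, 10, 50, 100):
    print(n, round(discounted_value(75, n, 0.05), 1))
```

Comparability questions of the kind the talk raises arise because k estimated from one task structure need not mean the same thing as k estimated from another.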

Thou Shalt Not Neglect Thy Scale Anchors: Anchor Neglect and Moral Psychology | Faruk Yalcin (Penn State University)

In this talk I will introduce the concept of anchor neglect. Anchor neglect is the practice of interpreting scale means without sufficient attention to the meaning of the scale anchors. This may lead to questionable inferences when statistically different responses correspond to similar anchor meanings, or when conclusions imply a directional difference that the observed scores do not clearly support. I distinguish between proximate and directional forms of anchor neglect, illustrate them with examples from moral psychology, and discuss some practical ways researchers can reduce this problem in measurement and interpretation.
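A minimal sketch of the proximate case: two group means can differ statistically yet sit nearest the same verbal anchor. The anchors and means below are hypothetical:

```python
ANCHORS = {1: "strongly disagree", 2: "disagree", 3: "neutral",
           4: "agree", 5: "strongly agree"}

def nearest_anchor(mean):
    """Map a scale mean to the verbal anchor it sits closest to."""
    point = min(ANCHORS, key=lambda a: abs(a - mean))
    return ANCHORS[point]

# Hypothetical group means that differ significantly (say, p < .05) yet
# fall nearest the same anchor: a "proximate" case of anchor neglect.
print(nearest_anchor(3.62), nearest_anchor(3.91))  # both "agree"
```

Reporting only "group B scored higher" while both means sit in "agree" territory is exactly the kind of inference the talk cautions against.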

Toward Mundane Morality: Suggestions for Stimuli Development  | Becca Ruger (Penn State University)

Moral judgment research often involves stimuli featuring extreme acts of altruism or villainy. However, the daily moral experience of most people involves more mundane acts. Specifically, everyday moral infractions tend to (1) be mild, (2) have relationally embedded context, and (3) often be inactions. Beyond presenting an analysis of the frequency of these topics in participants’ open-ended responses, I will offer suggestions, with examples, for how to include them in stimuli.

Penn State Presenters

C. Daryl Cameron

Daryl Cameron

Psychology, Rock Ethics Institute, Penn State

Daryl Cameron, Ph.D. is an Associate Professor of Psychology and Senior Research Associate in the Rock Ethics Institute at Penn State. He earned his Ph.D. in Social Psychology from the University of North Carolina at Chapel Hill and holds B.A. degrees in Philosophy and Psychology from the College of William and Mary. Daryl Cameron’s research focuses on the psychological processes underlying empathy and moral decision-making, particularly examining motivational factors that influence empathic emotions and behaviors. His work has been published in leading journals, including the Journal of Personality and Social Psychology and Psychological Science. He has received multiple grants from the National Science Foundation and has been recognized with awards such as the Early Career in Affective Science Award from the Society for Affective Science in 2022.

Paige Amormino

Paige Amormino

Edna Bennett Pierce Prevention Research Center, Penn State

I am an NIH T32 Postdoctoral Fellow at Pennsylvania State University working with Daryl Cameron, Ph.D. in the Empathy and Moral Psychology Lab. My research focuses on altruism, prosocial decision-making, and moral psychology. I received my Ph.D. in Psychology at Georgetown University under the mentorship of Abigail Marsh, Ph.D. in the Laboratory on Social and Affective Neuroscience. I received my B.A. in Psychology from Princeton University, where I completed my undergraduate thesis under the mentorship of Susan Fiske, Ph.D. My goal is to continue working in academia and use both my moral psychology and social psychology expertise to study the intersection of social and moral obligation.

https://www.paigeamormino.com

Becca Ruger

Becca Ruger

Psychology, Penn State

Becca has a wide range of research interests, but she is particularly interested in the factors people use when forming moral judgments of others, which factors we tend to weight more heavily than others, and how we deal with particularly complex moral conundrums. Becca’s current research is on praise, blame, and judgments of moral or immoral inactions.

Thoughts about measurement and moral psychology: “Although it’s good to investigate the cognition behind extreme acts of villainy and heroism, most of the moral or immoral actions in our everyday lives are much more modest. Our science should cover both ends of the spectrum: the flashy acts and the mundane ones.”


Faruk Yalcin

Faruk Yalcin

Psychology, Penn State

Faruk Tayyip Yalcin is a Ph.D. student in social psychology at Penn State. He studies moral judgment, empathy, outrage, and metascience. His metascientific work focuses on bibliometric patterns in psychology and other social sciences, and how researchers make inferences using Likert scales.

External Presenters

Nick Byrd

Nick Byrd

Geisinger Health System

Nick Byrd is a cognitive scientist who synthesizes quantitative methods and technology with the history of ideas. His overarching goal is to understand and improve decisions and well-being. As a principal investigator on projects supported by federal, philanthropic, and for-profit institutions, he has managed over $800,000 in funding, data from over 100,000 people, and collaborations with scientists on six continents. His most useful contributions might be reproducible and scalable process tracing methods that were previously prohibitively laborious, such as think-aloud protocols, discussion experiments, and transcript annotation. All the papers from Nick’s teams are available for free in both text and audio, with accompanying data in public repositories like the Open Science Framework.

Thoughts about measurement and moral psychology: “We may be at a tipping point. This decade’s agent-ification of artificial intelligence could be more disruptive to behavioral and cognitive scientists than the ‘mTurk-ification’ of the prior decade. Just as agentic chatbots can now complete complex tasks for researchers, bots can now generate passive income for research participants by automatically completing paid survey experiments. So, insofar as our online research programs aim to study humans, we must continually test, refine, and disseminate methods to reliably distinguish humans from machines. Otherwise, we may have to revert to less scalable, in-person, offline lab experiments.”

Vlad Chituc

Vlad Chituc

Post-Doctoral Researcher in Psychology, Yale University

I’m a postdoctoral researcher at Yale University, where I work with Laurie Santos. Psychologists often ask people to rate their subjective experience — on a scale from one to ten, how happy are you? how wrong would that be? how strongly are you feeling that emotion? My research draws from sensory psychophysics and information theory to understand what those numbers actually mean. I finished my Ph.D. in 2024 (also at Yale) where I worked with Brian Scholl, and my work was awarded the Glushko Dissertation Prize from the Cognitive Science Society. I’ve written publicly for outlets like The New York Times, The New Republic, and The Daily Beast.

Ben Hardin

Ben Hardin

Graduate Student in Social Psychology, Washington University in St. Louis

I am a Ph.D. student in Psychological and Brain Sciences at Washington University in St. Louis. My research seeks to understand the nature of morally valued character traits (e.g., compassion, honesty, patience), addressing questions like: How and when are moral character traits expressed in everyday life? How do these traits influence people’s lives and relationships? And how can we best measure individual differences in morality? I received my Master’s from Wake Forest University in 2023 and my Bachelor’s from the University of Mississippi in 2021.

Jillian Lee Meyer

Jillian Lee Meyer

Social Psychology and Cognitive Science, Indiana University

I am a doctoral student at Indiana University Bloomington pursuing a dual Ph.D. in social psychology and cognitive science. My research focuses on moral decision-making, particularly how people judge, retell, and evaluate moral stories, and how social context such as ingroup/outgroup dynamics and social media environments shapes those judgments. Alongside this work, I am developing an interdisciplinary, empirically grounded moral decision-making game designed to help students and non-academic audiences understand how they make moral decisions. I am also deeply invested in public science communication, such as short-form educational content on TikTok (@psychtok101) and Substack (The Inner Workout), where I connect moral psychology, character, and athletics. 

Thoughts about measurement and moral psychology: “Moral psychology has often relied on abstract surveys that simplify the lived complexity of moral decision-making. If we want rigor, we need tools that are theoretically grounded, empirically validated, and ecologically realistic. My work integrates interdisciplinary theories of morality into behavior-based methods that capture how people navigate relevant, real-world moral trade-offs. Importantly, advancing the field requires learning from the methodological strengths of other disciplines that study morality in order to build a more complete picture of this complex decision-making process.”

Raluca Szekely-Copindean

Raluca Szekely-Copindean

Romanian Academy

I am a researcher interested in moral psychology, with a background in affective psychology, cognitive neuroscience, and advanced data analysis. My research examines how emotions interact with moral cognition and moral behavior, and how these processes can be studied with conceptual precision and methodological rigor. I also work as a data scientist in metascience and reproducibility, with a focus on open science, transparent research practices, and robust analytical methods. I am a member of the Psychological Science Accelerator and am committed to collaborative initiatives that strengthen the credibility and cumulative progress of psychological research.

Thoughts about measurement and moral psychology: “Beyond transparency and reproducibility, I think moral psychology would benefit from greater conceptual clarity and stronger psychometric foundations. As an interdisciplinary field, we should work toward clearer definitions of core constructs and more systematic attention to construct validity in study design. Many paradigms still rely on measures with limited validation and single-occasion designs. Strengthening measurement—through rigorous instrument development grounded in theory, the use of other types of designs such as intensive longitudinal ones, multimethod approaches, and broader collaboration via Big Team Science to increase sample diversity—would help us build a more cumulative science of moral judgment and behavior.”