Humans face a fundamental challenge: how to balance selfish interests against moral considerations. Such trade-offs are implicit in moral decisions about what to do, in judgments of whether an action is morally right or wrong, and in inferences about the moral character of others. To date, these three dimensions of moral cognition (decision-making, judgment, and inference) have been studied largely independently, using very different experimental paradigms. However, important aspects of moral cognition occur at the intersection of multiple dimensions. This talk will demonstrate the advantages of investigating all three dimensions within a single computational framework. A core component of this framework is harm aversion, a moral sentiment defined as a distaste for harming others. The framework integrates economic utility models of harm aversion with Bayesian reinforcement learning models describing beliefs about others' harm aversion. Examples from several studies will show how this framework can provide novel insights into the mechanisms of moral decision-making, judgment, and inference.
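
To make the two ingredients of the framework concrete, the sketch below shows one common way such models are formalized in this literature: a utility function that trades off money against harm to another person via a harm-aversion parameter kappa, and a Bayesian observer who updates a belief over another agent's kappa from observed choices. This is an illustrative assumption, not the speaker's exact model; all function names, the softmax choice rule, and the numerical values are invented for illustration.

```python
import numpy as np

# Illustrative sketch (not the speaker's exact model): harm aversion as a
# parameter kappa in [0, 1] trading off money for self against harm to another,
# with utility U = (1 - kappa) * money - kappa * harm. An observer holds a
# discretized Bayesian belief over another agent's kappa and updates it from
# that agent's observed choices.

def utility(kappa, money, harm):
    """Subjective value of an option paying `money` at a cost of `harm` to another."""
    return (1.0 - kappa) * money - kappa * harm

def choice_prob(kappa, option_a, option_b, beta=2.0):
    """Softmax probability of choosing option A over B (beta = choice precision)."""
    u_a = utility(kappa, *option_a)
    u_b = utility(kappa, *option_b)
    return 1.0 / (1.0 + np.exp(-beta * (u_a - u_b)))

def update_belief(prior, kappa_grid, option_a, option_b, chose_a):
    """Bayesian update of a discretized belief over kappa after one observed choice."""
    p_a = choice_prob(kappa_grid, option_a, option_b)
    likelihood = p_a if chose_a else 1.0 - p_a
    posterior = prior * likelihood
    return posterior / posterior.sum()

kappa_grid = np.linspace(0.0, 1.0, 101)
belief = np.ones_like(kappa_grid) / kappa_grid.size  # uniform prior over kappa

# Hypothetical observations: the agent repeatedly declines extra money that
# would harm another person, so posterior mass should shift toward high kappa.
harmful, harmless = (10.0, 8.0), (6.0, 0.0)  # (money, harm) pairs
for _ in range(10):
    belief = update_belief(belief, kappa_grid, harmful, harmless, chose_a=False)

estimated_kappa = float((belief * kappa_grid).sum())  # posterior mean of kappa
```

In this kind of setup, the same utility model can serve all three dimensions the talk links together: it generates the agent's own decisions, supplies a standard against which actions can be judged, and provides the likelihood an observer inverts when inferring another person's character.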