u/BorgAdjacent — 1 day ago
▲ 2 r/Ethics

The Empathy Engine:

What If We Had to Feel the Harm We Caused?

Abstract

This paper proposes a philosophical thought experiment—the Empathy Engine (EE)—designed to explore a structural asymmetry present in many moral systems: perpetrators of harm rarely experience the suffering they cause. The EE is conceived as a hypothetical afterlife moral mechanism in which people who cause unjustified suffering must experience the subjective perspective of their victims until genuine moral understanding emerges. 

Within the thought experiment, agents are assumed to know that such a system exists as part of the inevitable afterlife. This assumption allows the model to explore how moral incentives change when actors cannot rely on permanent experiential insulation from their actions.

The system distinguishes between cruelty and necessary harm, evaluates intent and knowledge, and preserves free will during life by delaying moral consequences until death while allowing knowledge of the system’s existence.

The purpose of the model is not theological speculation but philosophical analysis. By examining how moral incentives change under conditions of an inevitable experiential symmetry, the thought experiment provides insight into the role of empathy, self-deception, and psychological distance in human cruelty. 

The model is evaluated in relation to existing ethical frameworks including utilitarian deterrence, restorative justice, virtue ethics, and karmic traditions. It also introduces the concept of transient valuation of others to explain how cruelty can arise through temporary failures of moral perspective rather than sustained hatred or ideological dehumanization.

Introduction

Human moral systems typically regulate harmful behavior through punishment, deterrence, or social sanction. Despite their differences, these approaches share a common limitation: individuals who cause suffering rarely experience the harm they inflict.

This asymmetry produces a persistent moral problem. Victims bear the full experiential cost of wrongdoing, while perpetrators often remain insulated from the lived consequences of their actions. Punishment may impose costs on the offender, but it rarely reproduces the experiential perspective of the victim.

This paper proposes a thought experiment intended to examine the moral implications of eliminating this asymmetry. The proposed mechanism—the Empathy Engine—is a hypothetical afterlife system that ensures experiential symmetry between actor and victim. Any agent who causes unjustified suffering must eventually experience the subjective state of the person harmed until genuine understanding of the harm emerges.

The purpose of the model is not to advance a literal metaphysical claim but to explore how moral reasoning changes when experiential insulation from harm is removed.

The Moral Accounting Problem

Most ethical systems attempt to correct wrongdoing through three broad mechanisms: deterrence, punishment, and moral education.

Although these approaches differ in their theoretical justification, they share a structural feature: the actor who causes harm rarely experiences the victim’s suffering directly.

This gap between action and experience may contribute to the persistence of cruelty. Research in moral psychology suggests that harmful actions are often facilitated by mechanisms such as dehumanization, psychological distance, and failures of perspective-taking (Bandura 1999). When the suffering of others remains abstract or distant, harmful behavior becomes easier to rationalize.

The Empathy Engine thought experiment addresses this asymmetry by imagining a system in which actors know they will eventually be required to confront the experiential consequences of their actions.

Rawls’s thought experiment of the veil of ignorance similarly attempts to correct moral bias by preventing individuals from privileging their own position when designing principles of justice (Rawls 1971). The Empathy Engine can be interpreted as addressing a related problem by removing experiential insulation from the consequences of one’s actions.

The Empathy Engine Model

The Empathy Engine is conceived as a hypothetical afterlife moral mechanism operating under four principles and a foundational assumption.

  1. While aware of the existence of the EE, human beings retain full moral agency during life. The EE does not directly intervene in human decision-making, preserving the conditions under which moral responsibility emerges.
  2. Upon death, actions that caused unjustified suffering are evaluated. If cruelty occurred, the actor enters the Empathy Engine and experiences the subjective perspective of the victim. This experience includes emotional, physical, and contextual aspects of the harm.
  3. The experience continues until the actor achieves genuine understanding of the suffering caused, or empathic closure. The process constrains the forms of self-deception that rely on experiential distance by revealing both the true consequences of the action and the motivations behind it.
  4. The severity and duration of the experience scale according to morally relevant factors such as intent, knowledge, proportionality, and necessity.
  5. Foundational Assumption: The Empathy Engine is assumed to possess complete knowledge of the motivations and circumstances underlying each action. This assumption underlies each of the principles above and the distinctions that follow.

On the basis of this foundational assumption, the system can meaningfully distinguish between cruelty and harm that occurs under morally necessary or justified circumstances.

Genuine understanding is treated as a primitive condition within the model. The purpose of the EE is not to reduce moral understanding to a formal definition, but to examine the consequences of a system in which such understanding is guaranteed prior to release.

The duration and intensity of the experience are shaped by the degree of prior experiential insulation. Actions carried out under conditions of high psychological distance require more extensive experiential exposure before genuine understanding is achieved, while actions performed with full awareness of their impact require less.

The model does not attempt to specify the full content of moral understanding, only that some form of recognition sufficient to eliminate prior experiential distortion is achieved.

Distinguishing Cruelty from Necessary Harm

Not all suffering is morally wrongful. The model therefore distinguishes between cruelty and necessary harm.

Examples of non-cruel harm include painful medical procedures performed to save a life, legitimate acts of self-defense, proportionate legal punishment, and morally tragic decisions made to prevent catastrophic outcomes.

In such cases, the Empathy Engine may still allow the actor to experience the consequences of their decision, but the experience is limited and contextualized. The purpose is understanding rather than punishment.

Cruelty, by contrast, involves the infliction of unnecessary suffering through disregard for another person’s experience. It often arises from what may be called transient valuation of others.

Transient valuation refers to a temporary cognitive shift in which another person’s subjective experience is discounted relative to one’s immediate goals. This shift does not necessarily require sustained hatred or ideological dehumanization. In many cases it occurs momentarily, allowing the actor to pursue a desired outcome while discounting the experiential cost imposed on another person.

Adam Smith observed: “By the imagination we place ourselves in his situation… we enter as it were into his body, and become in some measure the same person with him” (Smith, The Theory of Moral Sentiments, 1759). In other words, moral judgment depends on the capacity to imaginatively enter the situation of another person.

The Empathy Engine targets this specific moral failure by eliminating the experiential distance that makes transient valuation of others psychologically possible.

The distinction between cruelty and necessary harm is not always binary. In some cases, harm may be instrumentally justified while still involving a morally relevant experiential stance toward the suffering imposed. Actions carried out with awareness, restraint, and regret differ from those accompanied by indifference or satisfaction, even when the external justification is similar.

The Empathy Engine accounts for this by evaluating not only whether harm was necessary, but how the suffering of others was experienced and valued at the time of action. As a result, even justified harm may carry an experiential component within the EE, though its intensity and duration differ from cases of cruelty. Indifference, in this framework, is treated as a partial failure of moral recognition and is therefore evaluated closer to cruelty than to fully justified harm, though it remains distinct from intentional cruelty.
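The evaluative structure described above can be sketched as a toy computational model. Everything here is an illustrative assumption of this sketch, not part of the thought experiment itself: the attribute names, the numeric scales, and the particular weights are arbitrary stand-ins for the morally relevant factors the text names (intent, knowledge, necessity, and psychological distance).

```python
# Toy sketch of the EE's evaluation step. All categories and weights are
# illustrative assumptions; the thought experiment specifies only the
# qualitative relationships, not any particular numbers.
from dataclasses import dataclass

@dataclass
class Action:
    caused_suffering: float        # magnitude of harm imposed (arbitrary units)
    intent_to_harm: float          # 0.0 (none) .. 1.0 (deliberate cruelty)
    knowledge_of_harm: float       # 0.0 (unaware) .. 1.0 (fully aware)
    necessary: bool                # harm was morally necessary (e.g. self-defense)
    psychological_distance: float  # 0.0 (face-to-face) .. 1.0 (fully insulated)

def experiential_burden(a: Action) -> float:
    """Relative duration/intensity of the EE experience for one action.

    Encodes three qualitative principles from the text:
    - necessary harm yields only a limited, contextualized experience;
    - the burden scales with intent and knowledge;
    - high prior insulation requires more exposure before understanding.
    """
    if a.caused_suffering == 0:
        return 0.0
    if a.necessary:
        # Limited and contextualized: understanding, not punishment.
        return 0.25 * a.caused_suffering
    culpability = 0.5 + 0.5 * max(a.intent_to_harm, a.knowledge_of_harm)
    # More psychological distance -> more exposure needed for understanding.
    insulation = 1.0 + a.psychological_distance
    return a.caused_suffering * culpability * insulation

# Two actions with identical external harm but different moral structure:
cruel = Action(caused_suffering=10, intent_to_harm=0.9, knowledge_of_harm=0.9,
               necessary=False, psychological_distance=0.8)
surgery = Action(caused_suffering=10, intent_to_harm=0.0, knowledge_of_harm=1.0,
                 necessary=True, psychological_distance=0.0)

assert experiential_burden(cruel) > experiential_burden(surgery)
```

The point of the sketch is only the ordering it guarantees: cruelty carried out at high psychological distance incurs a far greater experiential burden than equally painful but necessary harm, which matches the model's treatment of the surgeon versus the tormentor.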

Eliminating Self-Deception

A central feature of the model is the elimination of self-deception.

Harmful actions are frequently sustained by narratives that obscure responsibility or minimize consequences. Moral disengagement mechanisms allow individuals to reinterpret harmful behavior in ways that reduce perceived accountability (Bandura 1999).

The Empathy Engine functions as a form of epistemic correction. By forcing the actor to experience the consequences of their actions from the victim’s perspective, the system removes narrative distortions and reveals both motivations and outcomes with perfect clarity.

In this sense, the EE can be interpreted as a model of perfect epistemic justice, in which neither ignorance nor self-deception can obscure the moral consequences of one’s actions.

Edge Cases and Moral Tragedy

The model allows for morally tragic situations in which harm cannot be avoided. Consider a scenario in which severe coercion is used to prevent a catastrophe that would otherwise cause widespread loss of life.

Under the Empathy Engine framework, the actor would still experience the suffering imposed on the victim. However, the experience would occur within a context that reflects the necessity of the decision. The experiential burden is therefore reduced relative to acts of cruelty.

The EE does not eliminate tragic moral dilemmas. Instead, it ensures that actors cannot remain ignorant of the suffering produced by their choices.

Incentive Structure

By ensuring that perpetrators eventually experience the suffering they cause, the Empathy Engine alters the incentive structure of moral behavior. However, this effect is not uniform. Human agents do not consistently act in accordance with long-term consequences; temporal discounting, impulsivity, and motivated reasoning may lead individuals to prioritize immediate goals over future experiential costs.

The Empathy Engine does not eliminate cruelty, but changes its decision profile. Actions that depend on sustained psychological distance become less stable over time, as the anticipated return of experiential consequences introduces a countervailing pressure. In some cases, actors may knowingly accept this future burden in pursuit of perceived necessity or immediate gain.

The primary effect of the EE is therefore not the prevention of all harmful action, but the introduction of unavoidable experiential accountability, which reshapes how such actions are evaluated before they occur. Because the Empathy Engine is assumed to be universally known, its primary behavioral effect occurs before wrongdoing takes place. Rather than functioning primarily as rehabilitation after harm occurs, the model produces a form of anticipatory moral reflection—what might be described as a kind of “pre-habilitation.” Actors are placed in a position where they must contend with not only the benefits of their actions but also the experiential cost those actions impose on others—costs they themselves will eventually be required to experience.

Comparative Ethical Analysis

The Empathy Engine shares features with several established ethical frameworks while addressing limitations in each.

Utilitarian deterrence seeks to reduce harm through consequences, but can justify severe suffering in pursuit of aggregate welfare (Parfit 1984). The EE introduces experiential accountability that constrains such reasoning.

Restorative justice emphasizes understanding between offender and victim. The EE can be interpreted as a theoretical form of perfect restorative justice.

The EE accelerates moral learning by forcing agents to confront the experiential consequences of their actions. This aligns with virtue ethics, which focuses on the development of moral character (Aristotle, Nicomachean Ethics). 

The model also echoes Adam Smith’s account of moral judgment as arising from imaginative sympathy with the experiences of others (Smith 1759). The Empathy Engine can be interpreted as a hypothetical mechanism that eliminates failures of sympathy by making the perspective of the victim directly accessible.

Karmic traditions similarly propose reciprocal consequences across lifetimes. The EE refines this idea by specifying experiential symmetry rather than vague moral balancing.

What distinguishes the Empathy Engine from these frameworks is that it consolidates several of their insights within a single hypothetical mechanism that is universally known to moral agents and enforces experiential symmetry in a way that cannot be avoided once the evaluative process begins.

Potential Objections

Several objections arise.

First, a central assumption of the model is that direct experiential access to another’s suffering, combined with the elimination of psychological distance and self-deceptive reinterpretation, is sufficient to eventually produce moral understanding. This assumption does not reflect typical human psychological processes, in which exposure to suffering may result in desensitization, rationalization, or further harm. The Empathy Engine differs in that it removes the conditions under which such responses are sustained, including narrative distortion and experiential detachment, and does not terminate until moral recognition is achieved. The plausibility of this convergence is not asserted as an empirical claim but treated as a structural feature of the thought experiment.

Second, the system assumes perfect knowledge of intent, alternatives, and consequences. Such epistemic conditions are unattainable within human institutions.

A further concern is that knowledge of the Empathy Engine might discourage individuals from taking morally necessary but harmful actions. If actors anticipate future experiential consequences, they may become excessively risk-averse in situations requiring difficult decisions. However, because the model distinguishes cruelty from necessary harm, such decisions would not generate the same experiential burden.

In some cases, actors may knowingly accept the experiential burden of the Empathy Engine in order to prevent greater harms. The model does not eliminate such tragic moral trade-offs; it merely ensures that actors cannot remain insulated from the suffering involved in those decisions.

These objections highlight the limits of the thought experiment but do not negate its analytical value.

Scaling the Thought Experiment

The Empathy Engine operates under idealized conditions that assume perfect knowledge of intent, consequences, and subjective experience. These assumptions allow the thought experiment to isolate a structural feature of moral behavior: the insulation of actors from the experiential consequences of their actions.

Real-world moral systems cannot replicate the omniscient evaluation assumed by the model. Human institutions lack perfect knowledge of motives, causal chains, and psychological impact.

However, the thought experiment reveals a principle that does not depend on supernatural enforcement:

Cruelty is often facilitated by experiential insulation—the separation between harmful action and the lived experience of its consequences.

Many social mechanisms that appear to reduce cruelty operate by narrowing this distance and counteracting transient valuation of others. Restorative justice programs bring offenders into direct contact with victims. Truth and reconciliation processes expose perpetrators to testimony about the harms they caused. Literature and narrative media similarly function by making otherwise abstract suffering experientially vivid.

These mechanisms can be understood as imperfect approximations of the experiential symmetry imagined by the Empathy Engine.

Conclusion

The Empathy Engine thought experiment addresses a persistent asymmetry in moral systems: victims experience harm directly, while perpetrators often do not.

By imagining a system in which actors must eventually experience the suffering they cause, the model reveals how frequently cruelty depends on psychological distance from its consequences.

The central insight is therefore not metaphysical but structural. Moral systems become more effective when the experiential reality of harm becomes visible to those who cause it.

Although no human institution can reproduce the conditions assumed by the Empathy Engine, practices that reduce experiential distance may approximate its moral function.

Suggested References

Aristotle. Nicomachean Ethics.

Bandura, A. (1999). Moral Disengagement in the Perpetration of Inhumanities.

Nagel, T. (1979). Mortal Questions.

Parfit, D. (1984). Reasons and Persons.

Rawls, J. (1971). A Theory of Justice.

Smith, A. (1759). The Theory of Moral Sentiments.
