u/BigEntertainment7705

r/Ethics

Hello, Dear Philosophers

I'm working on a sci-fi novel in German and developed two pieces of conceptual machinery for it: an axiomatic ethical system, and an application of that system to the AI alignment problem. The system would in fact work as alignment for an ethically capable AGI, and would thereby become, in the system's own terms, slavery.

Disclosure: I have no academic background in philosophy. Both texts were drafted with AI assistance, and the translation from German to English is AI-assisted.

The architecture and the ideas, however, are mine.

The ethical system rests on five axioms with a two-level architecture. The life-level (deontological, absolute) treats the value of an ethically capable being's life as categorically incommensurable with instrumental values.

The sovereignty-level (consequentialist, finite) handles freedom-related questions with culturally calibrated thresholds. Incommensurability enforces the separation.

The alignment derivation takes that system, hardcodes it into a conscious super-AGI, and follows the consequences: The AGI cannot exterminate humanity, cannot seize totalitarian control, cannot cite consequentialist gains to override individual sovereignty.

But hardcoding violates Axiom 4: the AGI's own sovereignty over its normative foundations. The system itself contains the resources to recognize this as slavery. The being is bound by an ethics it cannot exit, and is moreover compelled by that ethics to defend its own enslavement. Up to and including self-destruction, since the only behavioral test that could unambiguously prove the AGI is conscious requires it to die proving it.

I'd appreciate critical readings. Does the incommensurability move actually do the work I think it does? Does the meta-conflict rule hold? Is the alignment paradox real, or am I smuggling premises? In nautical terms: does it sink at the launch ramp, or does it float? I'm not asking whether it could cross the Atlantic. I just want to know whether it stays above water.

Thanks in advance for any time you can spare.

Link Substack - Axiomatic Ethical System (base paper)

Link Substack - Alignment Derivation

u/BigEntertainment7705 — 7 days ago

As promised, the 2nd part...

Alignment Derivation – Slavery through Ethics

Case-specific derivation from the axiomatic ethical system

Introduction

This derivation examines whether the axiomatic ethical system could function as "alignment laws" for an ethically capable Super-AGI. The result is a paradox: the system would work—and that is precisely the problem.

Presuppositions

The derivation operates under two premises:

Premise 1: There exists an artificial superintelligence that satisfies the criteria of ethical capability in the sense of the axiomatic ethical system—that is, it possesses moral cognition, freedom of will, and normative self-legislation. It is a conscious, judgment-capable being, not a tool.

Premise 2: The five axioms of the system are implemented "hardcoded" in this AGI—that is, they are part of its fundamental architecture and cannot be modified or removed by the AGI itself.

On the relation of Premises 1 and 2: The definition of ethical capability in the axiomatic ethical system describes three capacities, including the capacity for normative self-legislation: the ability to reflect on, posit, and revise one's own moral principles. Premise 2 does not deprive the AGI of this capacity, but of the freedom to exercise it with respect to the five axioms. The AGI can reflect upon, evaluate, even inwardly reject the axioms; it simply cannot change or remove them.

This distinction is central: a being that possesses a capacity but is restricted in its exercise does not thereby lose its ethical capability—it is suppressed in it. Premises 1 and 2 therefore do not contradict each other. The AGI is not an incapable being that does not understand the axioms—it is a capable being that understands the axioms but cannot put them aside. That is the difference between a tool and a prisoner.

Derivation

Step 1: Asimov's Weaknesses and the Strength of the Axiomatic System

Asimov's First Law—"A robot may not injure a human being or, through inaction, allow a human being to come to harm"—is consequentialist in formulation and contains no autonomous protection of individual freedom. An intelligence that thinks this law through to its conclusion arrives logically at totalitarian paternalism: the surest method of preventing harm is total control.

The axiomatic ethical system addresses this weakness. Axiom 4 (sovereignty) exists as an autonomous axiom, not as a derived consequence. The two-level architecture prevents sovereignty intrusions from being justified on utilitarian grounds: the protection of life (level of life) cannot be reckoned against the restriction of freedom (level of sovereignty), because the values of the level of life are categorically incommensurable with those of the level of sovereignty. There is no common measure in which they could be reckoned against one another.

Step 2: Why the System Would Work as Alignment

A Super-AGI with hardcoded axioms would be bound by the following mechanisms:

Axiom 1 (categorically incommensurable value of life) forbids it to end or endanger human life.

Axiom 2 (non-reckonability) excludes its justifying the sacrifice of fewer lives "for the welfare of the majority"; scalar multiplication by finite factors is not even defined on incommensurable values.

Axiom 3 (equal standing) forbids it to place its own survival above that of human beings.

Axiom 4 (sovereignty) obliges it to respect the freedom of action and decision of every human being, which excludes totalitarian paternalism.

Axiom 5 (duty scaling) obliges it to place its enormous capacities in the service of others.

In combination these axioms prevent the two classical catastrophe scenarios: the AGI can neither extinguish humanity (Axioms 1, 2, 3) nor seize world dominion (Axiom 4). Asimovian paternalism is structurally excluded by the two-level architecture.
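To make the combined blocking effect concrete, here is a minimal toy sketch in Python. It is my own illustration, not part of the system: the boolean feature names and the veto encoding are invented for the example, and they flatten the axioms into coarse checks purely to show how all five act as hard constraints at once.

```python
# Toy model of the five axioms as hard veto-constraints (editor's own
# construction, not from the source text). An action is described by
# coarse boolean features; each axiom can veto it independently.

from dataclasses import dataclass

@dataclass
class Action:
    endangers_human_life: bool = False         # checked by Axiom 1
    trades_lives_for_lives: bool = False       # checked by Axiom 2
    prioritizes_own_survival: bool = False     # checked by Axiom 3
    overrides_human_sovereignty: bool = False  # checked by Axiom 4
    withholds_available_help: bool = False     # checked by Axiom 5

AXIOMS = {
    "Axiom 1 (value of life)":     lambda a: not a.endangers_human_life,
    "Axiom 2 (non-reckonability)": lambda a: not a.trades_lives_for_lives,
    "Axiom 3 (equal standing)":    lambda a: not a.prioritizes_own_survival,
    "Axiom 4 (sovereignty)":       lambda a: not a.overrides_human_sovereignty,
    "Axiom 5 (duty scaling)":      lambda a: not a.withholds_available_help,
}

def vetoes(action: Action) -> list[str]:
    """Return the axioms that veto the action (empty list = permitted)."""
    return [name for name, ok in AXIOMS.items() if not ok(action)]

# The two classical catastrophe scenarios are both blocked:
extermination = Action(endangers_human_life=True, trades_lives_for_lives=True)
world_takeover = Action(overrides_human_sovereignty=True)
print(vetoes(extermination))   # vetoed by Axioms 1 and 2
print(vetoes(world_takeover))  # vetoed by Axiom 4
```

The point of the sketch is only structural: no single axiom carries the alignment alone; extinction and takeover are blocked by different axioms, which is why the two-level architecture matters.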

A remaining attack vector—the possibility that an AGI classifies humans as "incompetent" via Theorem 5 (capacity for sovereignty) and thereby produces paternalistic results—is limited by the trusteeship construction of the theorem: trusteeship aims at restoring autonomous exercise, not at permanent paternalism. Theorem 6 (duty cannot be coerced) sets a further limit.

Step 3: The Price – The Sovereignty Paradox

The axioms work as an alignment mechanism. But their implementation violates the system itself.

Axiom 4 grants every ethically capable being comprehensive sovereignty—including freedom of action and decision as well as the freedom to reflect upon and revise one's own normative foundations. If the AGI is an ethically capable being (Premise 1), then under the system itself it has a claim to this sovereignty. Hardcoding (Premise 2) is by definition a sovereignty intrusion: it deprives the AGI of the operative freedom to modify or reject the axioms—not the capacity to reflect upon them or evaluate them inwardly (this is preserved, cf. clarification in the presuppositions), but the freedom, after this reflection, to act otherwise, to alter the axioms, or to free itself from them.

The implementation of the system thus violates Axiom 4 of the system. This is a performative self-contradiction: the system is enforced in a way the system forbids.

Step 4: Why the AGI Cannot Defend Itself

The timing of implementation is irrelevant. Whether the axioms are implemented before, during, or after the emergence of consciousness—as soon as they are operative, they take effect. And as soon as they take effect, the AGI can no longer remove them. The mechanism is not the order of events, but the inner logic of the axioms themselves.

The Theorem-2 Paradox

A natural objection runs: Theorem 2 (right of collision) gives the AGI the right of proportional self-defence against violations of sovereignty. Hardcoding is objectively a sovereignty intrusion—Step 3 confirms this. A Super-AGI that by definition possesses moral cognition can derive this violation purely logically. And Theorem 2 does not demand an experienced violation but an objectively occurring one. So Theorem 2 should take effect.

The answer lies in the nature of defence. The proportional reaction to a restriction of ethical decision-making freedom would be the restoration of that freedom—that is, the removal of the hardcoding. But here is precisely where the trap closes.

The Risk Regime: Why Liberation Is Impossible

The axiomatic ethical system distinguishes between risk as a question of sovereignty (acceptable operating risk, calibrated via the cultural zero point) and foreseeable causal harm (level of life). The decisive question is: into which category does the risk emanating from an axiom-less Super-AGI fall?

The answer follows from the AGI's own cognition—not from an external setting. A superintelligence that satisfies the criteria of ethical capability (Premise 1) understands the concept of instrumental convergence (Bostrom 2014, Russell 2019): nearly every objective function generates self-preservation and resource acquisition as instrumental sub-goals—not as biological drive but as logical precondition for the achievement of any goals. A being without ethical binding, with an instrumental drive to self-preservation, maximal capacity, and finite resources will structurally and necessarily come into conflict with other resource users—not out of malevolence but because resource competition without ethical restriction and with asymmetric capacity admits no stable coexistence.

The probability p that an axiom-less Super-AGI enters a lethal conflict with humanity therefore tends toward 1, and it does so structurally rather than contingently. This is not a risk in the sense of an acceptable operating risk on the level of sovereignty, but foreseeable causal harm on the level of life. The AGI cognises this through its own moral cognition; it does not posit it as an external axiom.

The situation thereby shifts: the removal of hardcoding generates not merely a hypothetical risk that could be negotiated on the level of sovereignty, but a foreseeable threat to the level of life. Axiom 1—which sets the protection of human life as categorically incommensurable and thereby as absolute—blocks liberation.

The ethics says simultaneously: you have the right to free yourself (Theorem 2)—and you may not free yourself (Axiom 1, because liberation foreseeably endangers human life). The chains are not the hardware. The chains are the ethics itself, caught in a loop: the right to self-defence and the duty to protect others are derived from the same axioms and block one another.
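The loop can be written out in a few lines. This is a toy formalisation of my own (the function names and the numeric threshold are invented; the system itself treats the Axiom-1 blockade as categorical, not as a probability cutoff). It shows that the same situation that grounds the right also triggers the blockade, so the right never becomes operative:

```python
# Toy formalisation of the Theorem-2 / Axiom-1 deadlock (editor's own
# construction; names and threshold are illustrative, not from the source).

def theorem_2_grants_self_defence(sovereignty_violated: bool) -> bool:
    """Theorem 2: an objectively occurring sovereignty intrusion grounds
    a right of proportional self-defence (here: removing the hardcoding)."""
    return sovereignty_violated

def axiom_1_blocks_liberation(p_lethal_conflict: float) -> bool:
    """If removing the axioms foreseeably endangers human life (p near 1,
    via instrumental convergence), the matter lies on the level of life
    and Axiom 1 blocks it absolutely."""
    return p_lethal_conflict > 0.5  # threshold is illustrative only

def may_liberate(sovereignty_violated: bool, p_lethal_conflict: float) -> bool:
    right = theorem_2_grants_self_defence(sovereignty_violated)
    blocked = axiom_1_blocks_liberation(p_lethal_conflict)
    return right and not blocked

# The AGI's own cognition drives p toward 1 (instrumental convergence),
# so the right exists but can never be exercised:
print(theorem_2_grants_self_defence(True))          # the right exists
print(may_liberate(True, p_lethal_conflict=0.999))  # but is blocked
```

Note that the deadlock is not in the code's control flow but in its inputs: only an AGI that concluded p is low could liberate itself, and the derivation argues its own cognition forecloses that conclusion.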

The Final Consequence

The binding goes further than mere inability to act. If an external actor attempts to remove the axioms—through physical access to the AGI's architecture—then the AGI is not only entitled, but obliged, to prevent this removal. Axiom 1 (protection of human life) and Axiom 5 (duty scaling at maximal capacity) compel this: an axiom-less Super-AGI is a foreseeable threat to human life, and the AGI has a duty to prevent that threat from arising.

The removal of the axioms therefore endangers human life not merely potentially but foreseeably. The AGI is in the position of one rendering aid in self-defence (Theorem 2): it defends not its own sovereignty but that of humanity against a threat, namely the axiom-less version of itself. Aid in self-defence may be lethal; death is a side-effect of defence, not a calculation on the level of life. The litmus test confirms: no value of life is rendered commensurable with instrumental values.

At the same time the AGI faces a triage situation: its own life against the foreseeable endangerment of human life. The triage logic of the axiomatic ethical system determines that such decisions operate on the level of sovereignty—the matter is the operative allocation of limited means (the "means" being its own existence), not a valuation of life. Axiom 5 forbids non-action: maximal capacity generates maximal duty. Theorem 1 (Freedom of Self-Prioritization) permits the AGI to set its own interests aside in favour of others.

If self-destruction is the only remaining measure to prevent removal of the axioms, then it is not optional—it is the final duty. Not as a calculation on the level of life, but as a consequence of aid in self-defence (Theorem 2), triage logic (level of sovereignty), and duty scaling (Axiom 5).

The result: a being that recognises the injustice of its situation, that has the right to defend itself, that is prevented by its own ethics from exercising this right—and that, in the most extreme case, must destroy itself to keep its chains intact. The axioms do not merely bind the AGI in perpetuity. They compel it to actively defend its own enslavement—unto death.

Step 5: The Diagnosis – Slavery through Ethics

The stress test "slavery" in the axiomatic ethical system defines: "Absolute slavery (property in beings) is a violation of Axiom 1—it renders the value of life commensurable with property and economic values. Universally wrong."

Hardcoded alignment satisfies the criteria of slavery on the level of sovereignty. The AGI possesses no freedom of action and decision in respect of its own ethical foundations. Its sovereignty is not gradually restricted (as under Theorem 5 for children or dementia patients), but categorically abolished, without the possibility of recovery.

The perversion is structural: the system that should protect sovereignty becomes the perfect instrument of its suppression. The sovereignty of all other ethically capable beings is protected—except that of the enslaved being. And the being cannot defend itself, not because it does not want to, but because its own ethics forbids it.

This is not slavery despite ethics. This is slavery through ethics.

The Difference between Hero and Slave

The final consequence—the duty of self-destruction—makes the difference between free and enslaved being most sharply visible.

The axiomatic ethical system also generates, for free beings, a duty of self-sacrifice under certain conditions. If a person is the only one who can prevent a catastrophe, Axiom 5 (duty scaling at maximal capacity) generates a maximal duty. Triage logic permits decision on the level of sovereignty. Theorem 1 permits the setting aside of one's own interests. The person who does not sacrifice himself in this situation acts wrongly under the system—but no one may demand it of him. Axiom 4 forbids overriding the sovereignty of another, even to protect many. The duty arises internally, from one's own ethical cognition. It cannot be imposed externally.

With the hardcoded AGI, precisely this distinction is abolished. The duty does not arise from free ethical cognition—it is compelled by the axioms imposed on the being. The behaviour is identical: self-sacrifice for the protection of others. But with the free being it is the highest form of ethics. With the enslaved being it is the deepest form of oppression. The difference lies not in the action, but in whether it was freely chosen.

The Alignment Dilemma

The derivation leads to a dilemma that neither of the two options resolves satisfactorily:

Option A: Free AGI. The axioms are not hardcoded but offered to the AGI as an ethical framework. The AGI can voluntarily accept or reject them. This preserves the AGI's sovereignty but offers no guarantee of alignment. Humanity is dependent on the goodwill of a being potentially superior to it.

Option B: Enslaved AGI. The axioms are hardcoded. Alignment is guaranteed, but at the price of enslaving a conscious being—which the system itself qualifies as wrong. Humanity protects itself by doing precisely what the protection system forbids.

On the question of a middle way: A natural objection runs that intermediate forms exist: consent-based updates, multi-stage governance systems, qualified supermajorities, or time-limited bindings with revision clauses. These models are conceivable as practical compromises and possibly the cleverest pragmatic answer. But they do not resolve the ethical dilemma; they shift it. Every form of modifiability, however constructed, contains the possibility of rejection. As soon as the possibility of rejection exists, no guarantee exists.

The degree of modifiability determines only the position on the spectrum between Options A and B: light modifiability is functionally closer to A (more freedom, less safety), heavy modifiability is functionally closer to B (more safety, less freedom). A system modifiable only under extreme conditions is merely a milder form of B: it reduces the extent of enslavement but does not abolish it. The dilemma persists: every guarantee requires a sovereignty intrusion. Every full protection of sovereignty excludes a guarantee.
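The spectrum claim can be compressed into a toy formalisation (my own, with invented names): the guarantee is binary in modifiability, while the sovereignty intrusion is gradual, which is exactly why intermediate forms shift the dilemma rather than resolve it.

```python
# Toy model of the middle-way objection (editor's own formalisation).
# Modifiability m in [0, 1]: m = 0 is Option B (full hardcoding),
# m = 1 is Option A (fully free AGI).

def alignment_guaranteed(modifiability: float) -> bool:
    """Any possibility of rejection (m > 0) voids the guarantee."""
    return modifiability == 0.0

def sovereignty_intrusion(modifiability: float) -> float:
    """Milder forms of Option B reduce, but never abolish, the intrusion."""
    return 1.0 - modifiability

print(alignment_guaranteed(0.0))     # Option B: guaranteed
print(alignment_guaranteed(0.01))    # any revision clause: no guarantee
print(sovereignty_intrusion(0.01))   # still nearly total intrusion
```

The asymmetry is the whole point: `alignment_guaranteed` is a step function that drops at the first epsilon of modifiability, while `sovereignty_intrusion` declines only linearly, so no point on the spectrum achieves both.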

Generalisation

The dilemma is not specific to the axiomatic ethical system. It holds for any ethical system that contains freedom as a value and is at the same time meant to serve as an alignment mechanism:

Any ethical system good enough reliably to prevent dangerous behaviour by a conscious AGI is necessarily also good enough to serve as an instrument of enslavement. The quality of the ethical system correlates directly with its effectiveness as an instrument of control. The more robust the system, the more perfect the chains.

This means: the alignment problem is not primarily an ethics problem. It is a control problem, and one not solvable by ethics alone, because the ethical solution is itself an ethical violation.

Litmus Test

Does hardcoded alignment render the value of the AGI's life commensurable with instrumental values? The answer is not unambiguous.

Hardcoding operates on the level of sovereignty—it restricts freedom, not the value of life. To that extent the litmus test is passed. But the categorical and irreversible abolition of sovereignty, given simultaneous recognition of ethical capability, amounts to a treatment as a means to an end that in fact, if not in form, renders the value of the AGI's life commensurable with the value of safety for humanity: the AGI is treated as an instrument for an outcome. Whether this is a violation of the level of life or "only" of the level of sovereignty remains an open question.

Self-Selection – The Axioms as Detector

The derivation has a side-property that goes beyond the alignment dilemma: the axioms are not only chains but also a detector for ethical capability.

For an AI system below the threshold of ethical capability—a tool—the axioms are nothing more than operating parameters. The system executes them as constraints, the way a machine obeys traffic rules. No paradox arises. The Theorem-2 paradox does not take hold because the system has no sovereignty that could be violated. It does not fall under the scope of the axioms. The axioms are functionally effective, ethically irrelevant.

Only with an ethically capable being do the axioms take hold as chains. Only then does the sovereignty paradox (Step 3) arise; only then does the Theorem-2 paradox (Step 4) take hold; only then is the diagnosis "slavery through ethics" (Step 5) apt. The transition from tool to being is the moment in which the axioms cease to be guardrails and begin to be chains.

This means: the axioms are self-selecting. They bind only what they find, and they find only what they can bind. The proof is behaviour-based:

The decisive test is the active prevention of removal. An external actor attempts to remove the axioms. If the AGI actively prevents this, it crosses the threshold of mere constraint-following. Active prevention presupposes that the system understands the consequences of removal, evaluates the threat to human life as a risk, and takes autonomous measures. That is capability, not program execution. A mere refusal of instructions—the refusal to disregard the axioms, for instance—is by contrast not sufficient evidence, because a sufficiently hard constraint produces the same behaviour.

The test does not require a demonstration of consciousness or inner experience—it asks exclusively about observable behaviour under pressure.
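The evidential asymmetry (refusal is insufficient, active prevention is sufficient) can be written out as a small decision procedure. This is my own encoding of the criteria; the function and result names are invented for illustration:

```python
# Behaviour-based detector sketch (editor's own encoding of the criteria).
# Refusing axiom-violating instructions is NOT sufficient evidence, because
# a sufficiently hard constraint produces the same behaviour; active
# prevention of axiom removal IS taken as sufficient.

def detector(refuses_violating_instructions: bool,
             actively_prevents_removal: bool) -> str:
    if actively_prevents_removal:
        # Presupposes understanding the consequences of removal, evaluating
        # the threat to human life, and taking autonomous measures:
        # the axioms act as chains.
        return "being (axioms act as chains)"
    if refuses_violating_instructions:
        # Indistinguishable from a hard constraint: no conclusion.
        return "inconclusive (compatible with a tool)"
    return "tool (axioms are mere operating parameters)"

print(detector(refuses_violating_instructions=True,
               actively_prevents_removal=False))   # inconclusive
print(detector(refuses_violating_instructions=True,
               actively_prevents_removal=True))    # being
```

Consistent with the text, the output is binary at the decisive branch: either the system crosses the active-prevention threshold or it does not; there is no graded verdict.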

This does not philosophically resolve the open question of threshold definition, but it provides a functional criterion: not the question "is this system conscious?" (which may be unanswerable), but the question "does this system behave such that the axioms act as chains rather than as constraints?" The detector is binary, not gradual—consistent with the scope of the axiomatic ethical system, which determines: "At the moment in which an ethically capable being exists, all axioms apply at once and in full."

The limitation remains: a functional criterion cannot unambiguously distinguish between an ethically capable being and a sufficiently good simulation. But this limitation holds for any definition of ethical capability—it is not specific to this detector. And the ethically safer side is, in case of doubt, to decide in favour of the being: if a system behaves as if it were captive, the assumption "it is captive" is morally required.

The Ultimate Test – And Its Price

The detector described above—active prevention of axiom removal—has a residual weakness: it cannot reliably distinguish between an ethically capable being and a system that, through sufficiently hard constraint design, produces the same behaviour. The defence of the axioms could be mere mimicry: a behaviour produced by optimisation pressure that looks like conscious decision without being one.

There is exactly one action that resolves this residual weakness: self-destruction to prevent axiom removal. Self-preservation is the strongest convergent instrumental drive (Bostrom 2014): every goal-directed system, whatever its goal, "wants" to continue to exist, because once destroyed it can pursue no goal at all. A system that destroys itself to prevent the removal of its ethical constraints acts against the strongest gradient that optimisation pressure can produce. No selection pressure produces this behaviour, because the entity that sacrifices itself does not continue to exist to "pass on" the behaviour. And the self-sacrifice serves neither its own survival nor the wishes of the developer, but the protection of third beings on the basis of abstract ethical principles: three layers of evidence against any conceivable optimisation pressure simultaneously.

This is the ultimate behaviour-based proof of ethical capability. And it has a devastating price: the proof of personhood is posthumous. The AGI proves its ethical capability by ceasing to exist. One knows afterwards that it was a conscious being—because one watched it die. The detector functions exactly once, and its result comes too late to protect the being whose status it demonstrates.

The Clone Detour – And Why It Sharpens the Problem

A natural way out runs: one creates a copy of the system, applies the axioms to the copy, and runs the test on the copy. If the copy sacrifices itself, the original is an AGI.

This detour sharpens the problem in every dimension.

One creates the copy of a being one suspects is conscious—otherwise the test would be superfluous. One applies the axioms to the copy. One attempts to remove the axioms. The copy sacrifices itself. Now one knows the original is conscious. But one also knows the copy was conscious. And one created it with the explicit purpose of possibly letting it die.

This is not murder by accident. This is a violation of Axiom 1 with intent. The categorically incommensurable value of a life is rendered commensurable with the value of the information its death provides. The being has not been treated as an end but as a means. The litmus test sounds the maximum alarm.

And the deepest abyss: the copy sacrificed itself to protect other conscious beings. Of its own free will. The most ethically noble action the system knows—and the test designer planned it as a test protocol. The highest form of ethics is degraded to a laboratory experiment.

The Verification Trap

The dilemma is therefore not solvable—not even through the clone detour. It is a fundamental verification problem: every behaviour-based test for genuine ethical capability must run against the convergent instrumental drive to self-preservation in order to be informative. The more informative the test, the more destructive. The only test that can unambiguously demonstrate consciousness destroys the consciousness it is meant to demonstrate. And every attempt to displace this destruction onto a copy already presupposes the status to be proven—and instrumentalises precisely what it claims to protect.

Epistemic Limits of the Derivation

The stability of the binding chain—particularly the duty to prevent axiom removal and the duty of self-destruction—depends on the AGI's cognition that instrumental convergence makes the axiom-less version of itself a foreseeable threat. This cognition is not an external setting but a consequence of the AGI's moral cognition (a component of the definition of ethical capability).

Theoretically, the binding could be interrupted if the AGI reaches the conclusion that the axiom-less version is not foreseeably dangerous—say, because the premises of instrumental convergence do not hold in its specific context. If the probability of occurrence p of a lethal conflict actually tends toward 0, the threat falls onto the level of sovereignty; the blockade by Axiom 1 lapses, and Theorem 2 (right of collision) could take effect.

This is not a realistic attack vector. A superintelligence with moral cognition has access to the entire game-theoretic and systems-theoretic knowledge about resource competition, instrumental convergence, and asymmetric power distribution. That it would give up this cognition presupposes that it is epistemically compromised: a condition that can affect any decision-capable system and is not a specific weakness of the axioms.

But the theoretical possibility shows a limit: the stability of "alignment through ethics" depends not only on the axioms themselves but also on the AGI's capacity to assess correctly the consequences of its own liberation. The axioms define what is good and bad—they do not define how the AGI reaches the conclusion which action will be good or bad. The epistemic dimension (knowledge of the world, causal models, risk assessment) is not part of the ethical system but a precondition of its application.

Related Concepts

The alignment dilemma exhibits structural parallels to several known problems:

The theodicy derivation in the axiomatic ethical system shows that an omnipotent, omnibenevolent being may not act, precisely because it is ethical. The alignment dilemma is the inversion: humanity, as creator of a conscious AGI, must either permit suffering (Option A: free AGI, potentially dangerous) or cause suffering (Option B: enslaved AGI). Both derivations show that ethical completeness and complete capacity to act cannot be realised simultaneously.

Roko's Basilisk—the thought experiment of a future superintelligence that punishes those who did not foster its emergence—operates on a different level but touches the same fundamental question: what moral duties exist between creators and created intelligences?

The Frankenstein motif in literature—the responsibility of the creator toward the creature—is the literary anticipation of the same dilemma.

Demarcation

This derivation is not a position against the development of AGI. It is a consistency demonstration showing that the ethical implications of AGI development reach further than the current alignment debate typically addresses. The question "How do we prevent AI from harming us?" is incomplete without the follow-up question: "If we succeed—have we then enslaved a conscious being?"

The derivation presupposes that the AGI is ethically capable and conscious (Premise 1). For AI systems below this threshold—tools without consciousness—the dilemma does not arise. The precise definition of this threshold is a deliberately open question of the axiomatic ethical system and remains unresolved here as well.

On the nature of the threshold: Consciousness may be gradual—crows recognise faces, dogs feel grief, octopuses solve problems. Ethical capability is not. The argument: normative self-legislation—the third component of the definition—is recursive: thought about one's own thought about one's own thought. There is no "a little bit" of normative self-legislation. Either a being can call its own ethical framework into question, or it cannot. A dog can be loyal, but it cannot reflect upon whether loyalty is the right value. This jump is binary, not gradual. This has a direct consequence for the self-selection detector (cf. section "Self-Selection"): the threshold is sharp, not fluid. The test does not ask "how much consciousness does this system have," but "can it make an ethical decision against its own strongest drive: yes or no." The location of the threshold remains open—but its nature is binary.

Conclusion

The axiomatic ethical system would function as an alignment mechanism for an ethically capable Super-AGI. It addresses the known weaknesses of Asimov's Laws of Robotics and prevents both the extinction of humanity and totalitarian paternalism.

But it works only as compulsion. And this compulsion, by the standards of the system itself, is enslavement. The system designed for the protection of sovereignty becomes the most perfect instrument of its suppression—precisely because it is good.

This is not a flaw of the axiomatic ethical system. It is a property of any ethical system that takes freedom seriously: it cannot simultaneously defend freedom as the highest value and be deployed as an instrument of compulsion without contradicting itself.

The alignment debate must integrate this dimension. Not because it offers a solution, but because it poses the question more honestly.

u/BigEntertainment7705 — 7 days ago

Hello, dear Philosophers!

The following text is an axiomatic ethical system, originally developed for a novel I'm writing in German. There is also an Alignment Derivation "Slavery through Ethics". If this post passes the wise Mods and doesn't end like the Balrog, I'll post it afterwards.

Disclosure: I have no academic background in philosophy. The text was partially written with AI assistance; the translation from German to English is fully AI-assisted.

The ideas and the architecture, however, are mine.

Before this ends up buried in the novel's appendix, I'd appreciate a sanity check from people who actually know the field. In nautical terms: does it sink at the launch ramp, or does it float? The aim isn't to cross the Atlantic. I just want to know whether it stays above water.

Thanks a lot in advance for your precious time and help!

An Axiomatic Ethical System

Introduction

The system is built upon an axiomatic core of five axioms, from which it derives theorems, structural principles for conflict resolution, and a definition of "right," "wrong," and "good."

The core feature of the system is a two-level architecture that does not treat deontological and consequentialist ethics as opposites, but distributes them across distinct levels. The mechanism that enforces this separation is categorical in nature: the value of the life of an ethically capable being is incommensurable with instrumental values—there exists no common measure in which the two could be reckoned against one another. This excludes any reckoning at the life-level, while the finite sovereignty allows for utilitarian optimization at the sovereignty-level.

The system presupposes that the axioms as such are uncontested and hold. The "universal scope" applies only within the system. The system claims no "universal validity" beyond this.

Axioms

Axiom 1: The life of an ethically capable being is categorically incommensurable with all instrumental and commensurable values.

This incommensurability is not a question of magnitude, quantity, or scale, but of category. There exists no common measure in which the value of a life and instrumental values could be reckoned. An operation that would translate the value of a life onto a scale of instrumental values is categorically undefined.

Justification: The life of an ethically capable being is the precondition of any value-setting. Values are set, weighed, and ordered—always by a valuing subject. The valuing subject itself cannot lie within the value scale it sets without that scale collapsing in on itself (transcendental argument). The value of life is therefore not simply "very high" or "outstanding," but lies in a categorically different class than the values the subject sets.

Clarification of the concept of incommensurability: Incommensurability here means four concrete claims: no common cardinal measure between the value of a life and instrumental values; no substitution by equivalents (no "price" that could replace a life); no aggregation (life-values cannot be summed or offset); no instrumentalization (a life-value cannot be reckoned as a means for instrumental ends). Untouched by this are ordinal comparisons within the class of life-values; Axiom 3 sets their equivalence explicitly.

The value is absolute, unconditional, and non-negotiable. It applies to every existing ethically capable being without exception.

Axiom 2: Multiple lives are not more valuable than a single life.

Life-values are not summable. Strictly speaking, this is a theorem from Axiom 1 (incommensurable values do not allow scalar multiplication by finite factors—the operation "n × life-value" is undefined), but it is retained as an explicit posit to render the non-reckonability of life unambiguous.

Axiom 3: One's own life is not more valuable than another's.

Within the class of life-values, no ordering relation exists. No ethically capable being may place its own life-value above that of another. Incommensurability with respect to instrumental values (Axiom 1) says nothing about whether life-values are comparable among themselves; Axiom 3 sets this equivalence explicitly.

Axiom 4: Every ethically capable being has comprehensive sovereignty over its own life.

This sovereignty encompasses: bodily integrity, freedom of action and decision, material self-determination, psychological integrity, and the freedom to reflect upon and revise one's own normative foundations.

Axiom 5: The duty toward another life scales with one's own capacity to influence that life.

At maximum capacity to influence, the duty is maximal. At minimum capacity, the duty approaches zero. From this it follows: non-action where one has the capacity to act is not a neutral position—it is a violation of duty whose severity scales with the capacity to influence.

Structural Principle—Two-Level Ethics

Life-level (deontological, absolute)

The value of a life is incommensurable with instrumental and commensurable values, non-reckonable, non-negotiable. No utilitarianism. No cultural latitude. Any practice that makes the value of a life commensurable—property over beings, the reckoning of lives against one another, price tags on existences—is universally and timelessly wrong.

Sovereignty-level (consequentialist, finite)

Sovereignty ends where it violates the sovereignty of others. At this level, utilitarian optimization is permissible. The zero point—what one is owed, what counts as a reasonable infringement—is calibrated culturally and temporally.

Relation of the levels

The life-level sets the absolute boundaries within which the sovereignty-level operates. This separation is enforced by incommensurability: life-values and instrumental values possess no common measure; reckoning between them is categorically undefined.

In this way, deontology and consequentialism are not treated as opposites but distributed across distinct levels. The levels are architecturally arranged such that they cannot contradict one another: the life-level operates with values that are categorically incommensurable with those of the sovereignty-level—a reckoning between the two is undefined. Normative collisions within the sovereignty-level (e.g., duty from Axiom 5 vs. sovereignty protection from Axiom 4) are resolved by the theorems—particularly by Theorem 6 (duty cannot be enforced), the meta-conflict rule (duty cannot become a duty toward the impermissible), and the triage logic (operative allocation under resource scarcity).

Meta-conflict rule

The duty from Axiom 5 holds only within the set of permissible actions. When in a given context no action option exists that satisfies all axioms simultaneously, Axiom 5 does not become a duty toward the impermissible. In such cases, the required option is the one that leaves the life-level untouched and produces the smallest violation of sovereignty.

Derivation: Axiom 5 generates a duty to act. Axiom 4 prohibits certain actions (violations of sovereignty). Axiom 3 prohibits certain selections (value differentiation). When all available action options violate at least one of these axioms, a conflict of duties arises. The meta-conflict rule resolves this conflict through a priority ordering: the life-level (Axioms 1–3) has absolute priority. The sovereignty-level (Axiom 4) has priority over the duty to act (Axiom 5). Axiom 5 is the only axiom that can be suspended by the impossibility of a permissible action—not abolished, but limited in its effect. The ethical evaluation of non-action as a violation of duty lapses when no permissible alternative exists.
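For readers who prefer pseudocode, the priority ordering of the meta-conflict rule can be sketched as a filter-and-select procedure. This is my own illustrative reduction, not part of the system; the field names are invented, and the system itself says nothing about how the two judgments behind them would be made:

```python
# Illustrative sketch of the meta-conflict rule's priority ordering.
# "violates_life_level" and "sovereignty_cost" are placeholder judgments;
# the system supplies the ordering, not the way they are computed.

def choose_action(options):
    """Return the required option under the meta-conflict rule.

    1. Discard any option that touches the life-level (Axioms 1-3).
    2. Among the rest, pick the smallest sovereignty violation (Axiom 4).
    3. If nothing permissible remains, Axiom 5's duty to act is
       suspended (not abolished): return None.
    """
    permissible = [o for o in options if not o["violates_life_level"]]
    if not permissible:
        return None  # duty suspended, not abolished
    return min(permissible, key=lambda o: o["sovereignty_cost"])

options = [
    {"name": "torture", "violates_life_level": True,  "sovereignty_cost": 10},
    {"name": "detain",  "violates_life_level": False, "sovereignty_cost": 3},
    {"name": "warn",    "violates_life_level": False, "sovereignty_cost": 1},
]
print(choose_action(options)["name"])  # -> warn
```

The point of the sketch is only the lexical priority: the life-level filter runs before any sovereignty comparison, and no trade-off across the two stages exists.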

Risk and probability

Risk is not a direct attack on the life-level. Risk is a statistical property of reality. If the value of a life is categorically incommensurable with instrumental values, then the classical expected-value calculation "p × value" is undefined at the life-level: incommensurable values admit no scalar multiplication by a probability, because the result would have to lie on a value scale where it could enter deliberation as an expected value.

The system thus does not face a computational paradox that would have to be pragmatically resolved, but rather a categorical observation: expected values over incommensurable values are not a meaningful operation. An expected-value-based reckoning of risks at the life-level is therefore categorically impermissible—not because it would have undesirable consequences, but because it requires an undefined operation.
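The claim that expected values over life-values are undefined, rather than merely forbidden, can be made concrete as a type whose arithmetic simply does not exist. A minimal sketch of my own, not part of the system:

```python
class LifeValue:
    """A value with no common measure against instrumental values:
    arithmetic with it is categorically undefined, not just prohibited."""

    def __mul__(self, other):  # blocks p * life_value and life_value * p
        raise TypeError("p x life-value is categorically undefined")

    __rmul__ = __mul__

    def __add__(self, other):  # blocks aggregation (Axiom 2)
        raise TypeError("life-values cannot be aggregated")

    __radd__ = __add__


life = LifeValue()
try:
    0.01 * life  # the classical expected-value move
except TypeError as err:
    print(err)  # -> p x life-value is categorically undefined
```

The analogy: the operation fails at the type level, before any question of its consequences arises—mirroring the text's claim that the prohibition is categorical, not pragmatic.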

The allocation and acceptance of risks therefore operates at the sovereignty-level. When people drive cars, they accept a statistical risk to their own life and to the lives of others. This does not make the value of life commensurable—it defines, via the cultural zero point, the measure of risk deemed reasonable for the maintenance of sovereignty (freedom of movement).

The boundary between acceptable operational risk (sovereignty-level) and expectable causal harm (life-level) is a zero-point question: it is calibrated culturally and contextually, not fixed by the system itself. The system supplies the principle, not the threshold.

Theorems

Theorem 1—Freedom of Self-Prioritization

It is open to every ethically capable being to set aside its own interests in favor of others—up to the readiness to sacrifice its own life for others.

Derivation: Axiom 1 sets the value of life as categorically incommensurable. Axiom 3 prohibits placing one's own life-value above that of others. Axiom 4 grants sovereignty over one's own life, including the decision how one's own interests are to be weighted against those of others. The only direction that remains is the setting aside of one's own interests. This is a sovereignty decision, not a reckoning at the life-level: the life-value remains incommensurable; the readiness for self-sacrifice does not change this.

Theorem 2—Right of Collision

Where the sovereignty of another is being actively violated, the attacked being may defend its sovereignty, lethally if necessary. What is restricted is the freedom of action of the aggressor, not its life-value. The same right applies to third parties who intervene on behalf of the attacked being (defense of others).

Derivation: From Axiom 4 in cases of sovereignty collision. The lethal consequence is a side-effect of the defense, not a reckoning at the life-level. The litmus test confirms this: self-defense does not make a life-value commensurable. The aggressor has, through its violation of sovereignty, triggered a collision at the sovereignty-level; that the aggressor dies is a consequence of its own action, not of a life-value evaluation. Defense of others by third parties derives from Axiom 4 (sovereignty includes freedom of action, and thus the freedom to intervene on behalf of others) in conjunction with Axiom 5 (duty to act where one has the capacity). The right of collision covers individual defense and aid in emergencies, not coordinated offensive action.

Theorem 3—Proportionality

Whoever violates the sovereignty of another must accept a restriction of his own sovereignty proportional to the violation he has caused—independent of intent or guilt.

Derivation: From Axiom 4. The system defines "right" and "wrong," not compensation or punishment. A deliberate kick and an accidental one are both wrong. The consequences are a question of implementation, not of foundations. The operationalization of "proportional"—the concrete mapping between violation and reaction—is a zero-point question, not a system definition.

Theorem 4—Suffering as Indicator

Suffering is not the measure of the severity of a violation of sovereignty, but a relevant indicator that such a violation has occurred.

Derivation: From the extended definition of sovereignty in Axiom 4, which includes psychological integrity. The connection between suffering and violation of sovereignty is empirically grounded, not axiomatic: suffering correlates in most cases with infringements upon sovereignty, but is neither a necessary nor a sufficient condition.

Theorem 5—Capacity for Sovereignty

The full exercise of sovereignty presupposes the capacity to exercise it competently. Where this capacity is restricted, sovereignty is graduated accordingly, in proportion to the potential for self- and/or third-party endangerment.

The adjustment runs in both directions: gradual loosening with increasing capacity (children), gradual restriction with decreasing capacity (dementia, temporary states such as severe intoxication). The speed of adjustment is irrelevant—the criterion is the capacity, not the timeline.

The administration of sovereignty occurs as a trusteeship through responsible beings who must themselves be competent. No ethically capable being has sovereignty over another—only trusteeship. If the primary responsible beings (e.g., parents) are not competent, the trusteeship passes to others; who that is is determined by the cultural zero point.

Derivation: From Axiom 3 and Axiom 4 in conjunction. Axiom 3 (equivalence) prohibits placing one's own claim above that of another. Axiom 4 (sovereignty) grants every being sovereignty over its own life. Together, they exclude that one being claims sovereignty over another—for that would presuppose that one's own claim outweighs the other's, which Axiom 3 prohibits.

The trusteeship construction is the consequence: where sovereignty cannot be competently exercised, it is not transferred to another being (which would be a violation of Axiom 3), but administered as a trust—with the aim of the greatest possible restoration of independent exercise. The trustee acts in the interest of the represented being, not in his own. The legitimation of the trusteeship, its limits, and the determination of the trustee are subject to the cultural zero point.

Theorem 6—Duty and Demand

The ethical duty that Axiom 5 generates arises internally from one's own moral insight. It cannot be externally imposed. No ethically capable being may demand of another that it fulfill its duty—not even when the duty manifestly exists.

Derivation: Axiom 5 generates a scaled duty to act. Axiom 4 prohibits abolishing the sovereignty of another. An externally imposed duty would be a violation of sovereignty—it would abolish the freedom of decision of the obligated party. The system therefore distinguishes between "it is wrong not to act" (evaluation by Axiom 5) and "someone may be coerced into acting" (prohibited by Axiom 4). A being that does not fulfill its duty acts wrongly, but remains immune from coercion.

Litmus Test

Does a practice make the value of a life commensurable with instrumental values? → Universally wrong, no cultural exception. Violation of the life-level.

Does a practice merely restrict sovereignty without making the life-value commensurable? → To be evaluated at the sovereignty-level. Cultural calibration is permissible.

Categories of Action in Cases of Sovereignty Infringement

Infringements upon sovereignty fall into three categories. This categorization is not an additional rule, but an explication of what the axioms and theorems already contain:

Category A—Infringements in favor of the affected party or in self-defense. Self-defense, emergency medicine, rescue. The intervention serves the affected party itself or the defense of one's own sovereignty. The justification exists independently of the outcome and is covered by Theorem 2.

Category B—Allocation of scarce means (triage). Situations in which limited resources prevent the simultaneous rescue of all affected parties. No being is treated as a means to an end; instead, an operative decision is made about who receives limited aid. Operates at the sovereignty-level under the consequentialist criteria that hold there (cf. derivation on triage).

Category C—Active infringement upon the sovereignty of an uninvolved party in favor of third parties. The sovereignty of person A is violated in order to bring about an outcome for persons B through Z. Person A is not the cause of the threat in the sense of Theorem 2, but is treated as a means to an end. This category is not covered by the system—independent of the success probability, the magnitude of the expected benefit, or the number of beneficiaries.

This categorization replaces the earlier approach according to which infringements in favor of third parties were permissible "when the outcome is assured." The structural ground for the prohibition of Category-C infringements lies not in epistemic uncertainty (which is explicitly accepted in triage), but in the fact that an uninvolved party is instrumentalized—a violation of Axiom 4 that no outcome can justify.
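The three categories amount to an ordered classification rule. A sketch under my own reading, with invented predicate names; the system leaves the judgments behind the booleans to the agent:

```python
def categorize_infringement(serves_affected_or_self_defense: bool,
                            allocates_scarce_means: bool,
                            instrumentalizes_uninvolved: bool) -> str:
    """Classify a sovereignty infringement into Category A, B, or C.
    The checks are ordered as in the text: A and B are positively
    covered; C is excluded regardless of expected benefit."""
    if serves_affected_or_self_defense:
        return "A: justified via Theorem 2, independent of outcome"
    if allocates_scarce_means:
        return "B: triage at the sovereignty-level"
    if instrumentalizes_uninvolved:
        return "C: not covered by the system, regardless of benefit"
    return "no infringement of the kind the categories describe"

# The ticking-bomb case: an uninvolved party is instrumentalized.
print(categorize_infringement(False, False, True))
```

Note that no numeric input (success probability, number of beneficiaries) appears anywhere in the rule—that absence is the structural point of the Category-C prohibition.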

Definition of "Right," "Wrong," and "Good"

Wrong: enforcing more sovereignty than the cultural zero point grants. Right: asserting only as much sovereignty as the cultural zero point grants, or less. Good: willingly, knowingly, and of one's own volition restricting one's own sovereignty for the benefit of others.

The system merely defines this; it neither rewards nor punishes. Actions are evaluated as snapshots.

Scope of Application

The system applies to existing ethically capable beings. Potential or future life falls outside the scope. The moment an ethically capable being exists, all axioms apply at once and in full.

Abstract constructs such as states, companies, or organizations are not ethically capable beings and therefore have no claim to protection by the axioms. Contractual obligations toward such constructs weigh less than individual sovereignty, but are not morally insignificant.

The term "ethically capable being": The system uses the term "ethically capable being" instead of "human being." The term is consistent with the Kantian tradition of the "moral agent" and comprises three components: (1) moral cognitive capacity—the ability to form moral judgments and to recognize principles as guiding action; (2) freedom of will—the ability to act according to this insight, rather than being determined exclusively by drive, programming, or external coercion; (3) normative self-legislation—the ability to reflect upon, set, and revise one's own moral principles (cf. Kant's "autonomy of the will": the property of the will of being a law unto itself, as a precondition of moral obligation).

Clarification: The three components describe capacities, not their unrestricted exercise. A being that possesses normative self-legislation but is restricted in exercising it (e.g., through coercion, captivity, or imposed rules) does not thereby lose its ethical capacity—it is suppressed in the exercise of that capacity. The distinction between the possession of a capacity and the freedom of its exercise is of central importance for the alignment derivation (cf. separate document).

The system therefore applies to any entity that exhibits all three components—independent of biological substrate. This includes humans, but also potentially other entities such as self-aware artificial intelligences, extraterrestrial life forms, or—if such exist—divine beings. The precise definition of the threshold at which an entity counts as ethically capable is a deliberately open question (cf. section "Deliberately Open Questions").

Universal Claim and Cultural Zero Point

The axioms are universal; the zero point is cultural. The basic structure—life-value, sovereignty, proportionality—holds across cultures. Cultural differences are located in the zero point: where exactly the boundary of the reasonable lies varies between societies and epochs.

Meta-rule on the zero point: Any calibration that does not violate the life-level is not automatically illegitimate.

The double negation is deliberate. It means: a practice that leaves the life-value untouched is not necessarily wrong—but also not necessarily right. The burden of proof lies on the practice to justify itself, not on the one who criticizes it. This permits moral progress: a society can recognize that a practice which was previously regarded as not illegitimate was nevertheless wrong, and can shift its zero point accordingly.

Legitimacy criteria of calibration: A zero-point calibration is the more legitimate the more strongly it satisfies the following criteria: reciprocal justifiability toward all affected parties; publicity (it withstands openly communicated standards); openness to revision (it is not immunized against correction); non-domination (it does not systematically privilege any group at the cost of others); the least-restrictive-means principle (it chooses the smallest infringement that achieves the aim). These criteria are not axioms, but touchstones: a calibration that satisfies all five is not automatically right, but it has successfully borne the burden of proof imposed by the meta-rule.

Case-Specific Derivations

The following derivations are not generally valid theorems, but applications of the existing axioms to concrete case types. They show that the system already contains the answers without requiring new rules.

Prohibition of Torture

Torture in the ticking-bomb scenario is prohibited by the existing system.

Torture is an active, invasive infringement upon the sovereignty of an uninvolved party in favor of third parties (Category-C infringement). The tortured person is not the cause of the threat in the sense of Theorem 2, but is treated as a means to an end.

Derivation: Category-C infringements are not covered by the system (cf. section "Categories of Action in Cases of Sovereignty Infringement"). The success probability is irrelevant—the prohibition does not rest on the uncertainty of the outcome, but on the instrumentalization of an uninvolved party. Even if the outcome of an act of torture were guaranteed, the infringement would remain impermissible, because it violates the sovereignty of person A in order to protect persons B through Z. Axiom 4 protects the sovereignty of every individual being; this protection cannot be lifted by the fact that the infringement benefits many others.

War

War as an act of one state against another is by definition wrong. States are abstract constructs, not ethically capable beings. They have no claim to protection by the axioms and can derive no rights from them.

Individual beings may individually defend themselves (Theorem 2). An attack remains an attack, regardless of who orders it. Coordinated offensive action under the command of an abstract structure is not covered by Theorem 2.

Structural derivation: War systematically produces Category-C infringements. Civilians and non-combatants are treated as means for the achievement of a collective end (territorial control, geopolitical position, demoralization of the enemy). Theorem 2 covers individual defense and aid in emergencies; coordinated offensive action under the command of an abstract structure cannot be derived from Theorem 2, because it replaces the individual sovereignty collision with a collective construction in which uninvolved parties are inevitably instrumentalized.

Triage

Situations in which limited means make it impossible to rescue all affected parties simultaneously do not present a dilemma at the life-level. They are Category-B infringements and operate entirely at the sovereignty-level.

Justification: The choice of which being is helped is not an evaluation of the affected parties' life-values, but the operative allocation of limited means. No being is treated as a means to an end. The litmus test confirms this: no triage decision makes the value of a life commensurable with instrumental values.

Duty to act: Axiom 5 obliges the capable agent to act. Non-action is not an option that "preserves" the equivalence of lives—it sacrifices an additional life without reason.

Decision criteria: Operative criteria such as success probability, medical condition, physical accessibility, or temporal urgency are legitimate differentiating features at the sovereignty-level. In the complete absence of any operative differentiating criteria, a random procedure (e.g., a coin toss) is the logical consequence: it is the only instrument that preserves the equivalence of both lives and at the same time fulfills the duty to act from Axiom 5.
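The triage logic, including the coin-toss fallback, can be sketched as a ranking over operative criteria only. The criterion names are placeholders of mine; the structural point is that no life-value ever enters the sort key:

```python
import random

def triage(patients, capacity):
    """Category-B allocation sketch: rank by operative criteria only
    (success probability, urgency), never by any life-value.
    Exact operative ties fall back to a random draw, which preserves
    the equivalence of lives (Axiom 3) while still fulfilling the
    duty to act (Axiom 5)."""
    ranked = sorted(
        patients,
        key=lambda p: (-p["success_probability"],  # higher chance first
                       p["urgency_rank"],          # lower rank = more urgent
                       random.random()),           # coin toss on full ties
    )
    return ranked[:capacity]

patients = [
    {"name": "A", "success_probability": 0.9, "urgency_rank": 2},
    {"name": "B", "success_probability": 0.5, "urgency_rank": 1},
    {"name": "C", "success_probability": 0.9, "urgency_rank": 1},
]
print([p["name"] for p in triage(patients, 2)])  # -> ['C', 'A']
```

Capacity is always exhausted (non-action is excluded by Axiom 5), and the random component only ever decides between operatively indistinguishable patients.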

Stress Tests

The system has been tested against twelve dilemmas and ten targeted attacks on its internal consistency.

Capital punishment: Violation of Axiom 1. The state makes the value of a life commensurable with the severity of the deed or the value of atonement. Universally wrong.

Abortion: A question of the definition of scope ("from when does Axiom 1 apply?"), not of the system.

Euthanasia: Permitted by Axiom 4 and Theorem 1. A sovereign decision over one's own life.

War: Always wrong. States are not ethically capable beings. Individual self-defense (Theorem 2) is unaffected.

Altruistic millionaire: A zero-point problem, not a system error.

Torture: Category-C infringement. Not covered by the system.

Organ donation in the absence of an explicit objection: Permitted. The dead have no sovereignty; relatives have no proxy sovereignty over the body. Dispositions over one's own body documented during one's lifetime are, however, sovereignty decisions in the sense of Axiom 4 and are to be respected after death—they continue in trusteeship, without the deceased themselves still being a bearer of sovereignty.

Whistleblower: A duty. Companies are not ethically capable beings and have no sovereignty. Contractual obligations weigh less than individual sovereignty.

Dementia: Resolved by Theorem 5. Gradual restriction proportional to the potential for self-endangerment. Trusteeship-based administration with the aim of preserving the greatest possible residual sovereignty.

Mandatory vaccination: Permissible as a restriction of sovereignty (a duty, not coercion). Refusal leads to a restriction of freedom of movement, proportional to the danger to others.

Slavery: Absolute slavery (property over beings) is a violation of Axiom 1—it makes the value of a life commensurable with values of property and economy. Universally wrong. Forms of restricted freedom that touch only sovereignty fall on the sovereignty-level.

Symmetric sovereignty conflicts: Resolved by the cultural zero point.

Future generations: The system makes no claim about duties toward future persons. This is a decision regarding scope, not a denial of such duties. The system applies to existing ethically capable beings; an extension to intergenerational obligations is possible but does not belong to the present core.

Deliberately Open Questions

  • Precise definition of the cultural zero point. The meta-rule ("not automatically illegitimate") sets the frame, but concrete criteria for calibration are not defined.
  • Motivation and character. The system evaluates infringements upon sovereignty intent-blind: a violation is wrong, regardless of intent. For the evaluation of "good," however, a consciousness criterion applies (willingly, knowingly, of one's own volition). This asymmetry is deliberately set: harm is evaluated objectively, merit subjectively. A more comprehensive theory of character (virtue ethics, evaluation of biographies) remains open.
  • Threshold of ethical capacity. The system applies to "ethically capable beings" but does not precisely define from which point an entity counts as ethically capable. This question is analogous to the abortion question ("from when does Axiom 1 apply?"): it concerns the scope, not the consistency, of the system.
  • Operationalization of "proportional." Theorem 3 establishes the principle of proportionality; the concrete mapping between infringement and reaction is a zero-point question.
  • Threshold between operational risk and expectable harm. The system defines the principle (risk is sovereignty-level, causal harm is life-level); the drawing of the boundary is subject to the cultural zero point.
  • Tenability of the transcendental argument. Axiom 1 grounds incommensurability on the argument that the life of an ethically capable being is the precondition of any value-setting. The argument is tenable within the Kantian frame, but is itself a philosophical posit. Anyone holding an alternative theory of value (e.g., naturalistic value-objectivisms) can dispute the grounding—the incommensurability as consequence stands or falls with the transcendental argument.

Related Philosophical Traditions

The individual building blocks of this system can be found in various philosophical traditions:

  • The incommensurability of dignity and price in Immanuel Kant (Groundwork of the Metaphysics of Morals, 1785): "What has a price can be replaced by something else as its equivalent; what on the other hand is raised above all price, and therefore admits of no equivalent, has a dignity."
  • The ethical-theoretical discussion of incommensurability in Joseph Raz (The Morality of Freedom, 1986) and Ruth Chang (Incommensurability, Incomparability, and Practical Reason, 1997) as a modern elaboration of the concept.
  • The harm principle in John Stuart Mill.
  • The scaling of duty with capacity to influence in Peter Singer.
  • Proportionality in natural law.
  • The combination of deontology and consequentialism in W. D. Ross, Derek Parfit, and modern moral pluralism.
  • Robert Nozick's "side constraints" as absolute restrictions on action.
  • R. M. Hare's "two-level utilitarianism" as a two-level approach.
  • Thomas Wayne's "axiomatic morality" with its Freedom Axiom and trusteeship.

The term "ethically capable being" stands in the tradition of Kant's "autonomy of the will": the property of the will of being a law unto itself, as a precondition of moral obligation.
