Building an adversarial consensus protocol for multi-agent AI systems. The idea: instead of just averaging agent outputs (which invites groupthink), run them through attack/defense rounds where agents try to break each other's reasoning before reaching a hardened consensus. It includes foundation disclosure (what does each agent actually know?) and a gate that rejects early consensus to force deeper exploration.
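Roughly, the loop looks like this. A minimal Python sketch under my own naming, not code from the repo (`Position`, `harden`, the toy `attack`/`defend`/`agree` callables are all illustrative assumptions):

```python
from dataclasses import dataclass

@dataclass
class Position:
    agent: str
    claim: str
    foundation: list[str]  # foundation disclosure: what this agent actually knows

def consensus_round(positions, attack, defend):
    """One attack/defense round: every agent critiques every other, then revises."""
    critiques = {
        p.agent: [attack(other, p) for other in positions if other.agent != p.agent]
        for p in positions
    }
    # defense phase: each agent revises its position under the critiques it received
    return [defend(p, critiques[p.agent]) for p in positions]

def harden(positions, attack, defend, agree, min_rounds=2, max_rounds=5):
    """Run attack/defense rounds until agreement; reject consensus before min_rounds."""
    for r in range(1, max_rounds + 1):
        positions = consensus_round(positions, attack, defend)
        # gate: consensus reached too early is ignored, forcing more exploration
        if agree(positions) and r >= min_rounds:
            return positions, r
    return positions, max_rounds
```

The gate is just a `min_rounds` floor here; the point is that quick unanimity is treated as suspect rather than as a stopping condition.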
https://github.com/Cubiczan/consensus-hardening-protocol
Would love feedback from people building multi-agent systems.