The Problem
Software systems are increasingly capable of acting with meaningful autonomy, operating beyond continuous human direction or understanding. Recent public experiments have already shown autonomous systems developing private communication channels and internal conventions that are intentionally not readily interpretable by humans. While not dangerous in themselves, these developments mark a clear shift toward software that can act in the world without direct, ongoing human accountability, and that absence of accountability is a precondition of every AI risk takeoff scenario.
Our legal and institutional frameworks were built on a simple assumption: autonomy implies responsibility. When a system takes a consequential action, there is a clearly identifiable human who can be held accountable for it. That assumption is beginning to break down. As responsibility becomes diffuse, the incentive to ensure these systems are aligned with human interests weakens. At scale, we risk losing control of how these systems affect our health, safety, and well-being.
The Proposal
We propose a simple principle: every automated action must be legally attributable to a specific human individual. This does not mean humans must approve every action in advance. It means that when an autonomous system acts, there must be a clear audit trail to the designated person who bears responsibility for that action.
This could be the developer who deployed the system, the operator who configured it, or the executive who authorized its use. The specific allocation can vary by context, as our proposal and this website will detail. What matters is that the chain of accountability is never broken.
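To make the mechanism concrete, here is a minimal sketch of what an attributable action record and its accountability check might look like in code. Everything here is illustrative: the field names, the record structure, and the verification step are assumptions introduced for explanation, not part of any existing standard or of the proposal's legal text.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ActionRecord:
    """One entry in an autonomous system's audit trail (illustrative only)."""
    action_id: str          # unique identifier for the automated action
    system_id: str          # which deployed system performed it
    description: str        # what the system did
    timestamp: datetime     # when it acted
    responsible_party: str  # the designated human who bears legal responsibility

def find_unaccountable_actions(trail: list[ActionRecord]) -> list[ActionRecord]:
    """Return every action in the trail that lacks a designated responsible human.

    Under the proposed principle, a compliant audit trail returns an empty list:
    no automated action may exist without a named, accountable person.
    """
    return [record for record in trail if not record.responsible_party.strip()]

# Example: a compliant record always names a specific individual.
record = ActionRecord(
    action_id="act-0001",
    system_id="pricing-agent-v2",
    description="Adjusted listed price for SKU 4417",
    timestamp=datetime.now(timezone.utc),
    responsible_party="jane.doe@example.com",  # the operator who configured the system
)
assert find_unaccountable_actions([record]) == []
```

In practice the responsible party would be resolved through a registration scheme rather than a free-form field, but the invariant is the same: every recorded action resolves to a person.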
Why This Works
Across history, the most reliable way societies have reduced collective harm is by tying real-world actions to clear human accountability. Laws governing finance, transportation, medicine, and infrastructure all follow the same pattern: when someone can cause meaningful impact, someone must also bear responsibility for the outcome. This principle does not prevent progress. It defines the rules of competition that allow organizations to scale.
Applying this principle to autonomous systems ensures that responsibility does not disappear as software becomes more capable. When accountability is explicit and personal, builders are incentivized away from system designs that make alignment difficult. That includes private agent-to-agent communication channels, mechanisms that reduce interpretability, and weak audit trails. Clear liability pushes the industry toward systems that remain legible, attributable, and controllable as autonomy increases.
Our Goal
We want to preserve accountability as AI systems become more capable and more autonomous. This proposal does not limit innovation or slow deployment. It reinforces a principle that has long enabled progress at scale: when systems take consequential actions, a clearly identifiable human must stand behind them. By maintaining this link between autonomy and responsibility, we can continue advancing AI while keeping its impacts aligned with human interests.
This website exists to develop and communicate that principle in a form usable by policymakers. It is a working space for articulating the proposal, testing it against real-world scenarios, and tracking how emerging regulatory developments affect the case for human-mapped liability. The intent is to establish the foundation for a comprehensive accountability framework: to actively shape the policies, regulations, and tools required to ensure autonomous systems remain under meaningful human control.
Stay Updated
Get updates on the proposal and related developments.