The Human Agency Manifesto

The Bill of Rights

"Where, after all, do universal human rights begin? In small places, close to home—so close and so small that they cannot be seen on any maps of the world. Unless these rights have meaning there, they have little meaning anywhere." — Eleanor Roosevelt

Version 1.0 · December 2025

Human Agency Bill of Rights in the Age of AI

As artificial intelligence systems become increasingly integrated into our daily lives, influencing decisions from healthcare to employment, from education to justice, we must establish fundamental principles that preserve human autonomy and dignity. This Bill of Rights articulates five essential protections that ensure humans remain the ultimate authors of their own lives, even as AI systems grow more capable and pervasive.

Article I: The Right to Understand

Every person has the right to comprehend how the AI systems that affect their lives operate and how they reach decisions.

This foundational right recognizes that agency requires comprehension. When an AI system denies your loan application, recommends your medical treatment, or filters what information you see, you deserve to understand the reasoning behind these consequential actions.

What this means in practice:

AI systems must provide explanations appropriate to the stakes and context. For a music recommendation, a simple indication that "users who liked X also enjoyed Y" may suffice. For a denied mortgage application, you deserve to know which specific factors—credit history, income ratios, employment stability—most influenced the decision and how they were weighted.
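
To make the mortgage example concrete, the sketch below shows one way, in Python, that a deployer might structure a weighted, plain-language explanation. Every field name, factor, and weight here is hypothetical; it illustrates the principle, not a prescribed format.

```python
# Illustrative only: one way to structure a plain-language explanation
# for a consequential decision. All names and weights are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Factor:
    name: str       # e.g. "credit history"
    weight: float   # relative influence on the decision, 0..1
    direction: str  # "for" or "against" the applicant

@dataclass
class DecisionExplanation:
    outcome: str                       # e.g. "mortgage application denied"
    factors: list[Factor] = field(default_factory=list)

    def plain_language(self) -> str:
        """Render the weighted factors as a sentence a non-expert can read."""
        ranked = sorted(self.factors, key=lambda f: f.weight, reverse=True)
        parts = [f"{f.name} (weight {f.weight:.0%}, {f.direction})" for f in ranked]
        return f"{self.outcome}. Most influential factors: " + "; ".join(parts)

explanation = DecisionExplanation(
    outcome="Mortgage application denied",
    factors=[
        Factor("debt-to-income ratio", 0.45, "against"),
        Factor("length of credit history", 0.30, "against"),
        Factor("employment stability", 0.25, "for"),
    ],
)
print(explanation.plain_language())
```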

This right extends beyond mere transparency to genuine intelligibility. Technical documentation may satisfy legal requirements, but it fails the spirit of this right if ordinary people cannot grasp it. Explanations must be tailored to the user's level of expertise and to the importance of the decision.

Key protections include:

  1. Access to plain-language explanations of AI decisions that materially affect you
  2. Information about what data the system used and how it was processed
  3. Disclosure of the system's limitations, error rates, and known biases
  4. Understanding of whether a decision was made entirely by AI or involved human judgment
  5. Knowledge of the system's purpose and the interests it serves

Organizations deploying AI systems bear the burden of making their systems understandable, not users the burden of deciphering them. This right acknowledges that consent without comprehension is not truly informed consent, and choice without understanding is not genuine agency.
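
One way to operationalize the five protections above is a machine-readable disclosure record published alongside each system. The sketch below is a minimal illustration; the schema and field names are assumptions, not an existing standard.

```python
# A minimal sketch of a disclosure record covering the protections above.
# Field names are hypothetical, not a standard schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class SystemDisclosure:
    purpose: str                  # what the system is for and whose interests it serves
    data_sources: list[str]       # what data it used and how it was obtained
    known_limitations: list[str]  # documented failure modes and biases
    error_rate: float             # headline error rate from evaluation
    human_in_loop: bool           # whether a human shares in the final decision

disclosure = SystemDisclosure(
    purpose="Rank loan applications for manual underwriting review",
    data_sources=["credit bureau report", "stated income", "bank statements"],
    known_limitations=["less accurate for applicants with thin credit files"],
    error_rate=0.08,
    human_in_loop=True,
)
```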

Article II: The Right to Question

Every person has the right to challenge AI decisions and access meaningful human review.

Understanding alone is insufficient if you cannot contest decisions you believe are wrong. This right ensures that AI systems are not treated as infallible oracles whose judgments are final and unchallengeable.

What this means in practice:

When an AI system makes a decision affecting you, you must have accessible channels to dispute that decision. This is not merely a right to complain, but a right to have your challenge genuinely considered by someone with the authority and expertise to intervene.

The review process must be substantive, not perfunctory. A human reviewer who simply rubber-stamps AI decisions without independent analysis violates the spirit of this right. The reviewer must have access to relevant information, the ability to override the AI system, and accountability for their decisions.
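
As a sketch of what "substantive, not perfunctory" could look like in software, the record below refuses to store a review that lacks independent reasoning, names an accountable reviewer, and lets the outcome differ from the AI's. All identifiers are hypothetical.

```python
# Purely illustrative: a review record designed so a human reviewer cannot
# simply rubber-stamp the automated outcome. Names are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ReviewDecision:
    case_id: str
    ai_outcome: str                 # what the automated system decided
    reviewer_id: str                # an identifiable, accountable reviewer
    reviewer_outcome: str           # may differ from ai_outcome (an override)
    independent_reasoning: str      # required analysis beyond "the AI said so"
    evidence_considered: list[str]  # context the challenger supplied
    decided_at: datetime

def record_review(case_id: str, ai_outcome: str, reviewer_id: str,
                  reviewer_outcome: str, reasoning: str,
                  evidence: list[str]) -> ReviewDecision:
    # Refuse to log a review that carries no independent analysis.
    if not reasoning.strip():
        raise ValueError("review must include independent reasoning")
    return ReviewDecision(case_id, ai_outcome, reviewer_id, reviewer_outcome,
                          reasoning, evidence, datetime.now(timezone.utc))
```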

Key protections include:

  1. Clear, accessible procedures for challenging AI decisions
  2. Timely review by qualified humans who can override automated decisions
  3. The right to present additional context or evidence the AI may have missed
  4. Protection from retaliation for questioning AI systems
  5. Explanations of the review outcome and reasoning
  6. Escalation pathways when initial reviews are unsatisfactory

This right is especially critical for vulnerable populations who may face algorithmic discrimination or whose circumstances don't fit neatly into the patterns AI systems recognize. A single parent with irregular income, a person with a disability requiring accommodations, or someone from an underrepresented community may be systematically disadvantaged by AI trained on majority patterns. The right to question creates a safety valve for cases where algorithmic efficiency produces individual injustice.

Article III: The Right to Constrain

Every person has the right to set boundaries on how AI systems interact with them and use their information.

This right recognizes that human agency requires the power to say "no" or "not like that." AI systems should serve human purposes on human terms, not operate according to predetermined defaults that prioritize corporate or institutional interests.

What this means in practice:

You should be able to establish meaningful limits on AI's role in your life. This might mean restricting what data an AI can access about you, limiting how it can use that data, or choosing to keep AI out of certain decisions entirely.

These constraints must be granular and specific. An all-or-nothing choice between "accept our AI on our terms or lose access to our service" is not true agency. You might want AI assistance with scheduling but not with personal communications, or AI recommendations for products but not for medical treatments.
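
A minimal sketch of such granular controls might look like the per-domain permission table below. The domains and defaults are hypothetical; the key design choice is that unknown domains fail closed.

```python
# Per-domain consent instead of all-or-nothing terms (hypothetical domains).
AI_PERMISSIONS = {
    "scheduling":     {"allowed": True,  "data": ["calendar"]},
    "communications": {"allowed": False, "data": []},
    "product_recs":   {"allowed": True,  "data": ["purchase_history"]},
    "medical_advice": {"allowed": False, "data": []},  # an "AI-free zone"
}

def may_use_ai(domain: str) -> bool:
    # Unknown domains fail closed: no consent recorded means no AI processing.
    return AI_PERMISSIONS.get(domain, {"allowed": False})["allowed"]

assert may_use_ai("scheduling")
assert not may_use_ai("medical_advice")
assert not may_use_ai("anything_unlisted")
```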

Key protections include:

  1. Genuine opt-out options for AI processing that don't result in severe service degradation
  2. Granular controls over data collection, retention, and usage
  3. The ability to request that AI systems forget or delete your information
  4. Choice in how much AI mediates your experiences and relationships
  5. Protection of decisions made in "AI-free zones" you've established
  6. Rights to know when you're interacting with AI versus humans

This right also encompasses collective constraints. Communities should be able to establish boundaries around AI deployment in their schools, neighborhoods, or workplaces. The power to constrain recognizes that those affected by systems should have a say in how those systems are bounded.

Constraints must be technically enforced, not merely nominal. If you've limited data collection, systems must actually respect those limits, with auditing and consequences for violations. This right is meaningless if it's routinely circumvented through dark patterns or buried consent mechanisms.
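
The sketch below illustrates one possible enforcement pattern: every data access passes through a gate that checks the user's stated limits and writes an audit entry either way. The storage and field names are assumptions.

```python
# Hypothetical enforcement gate: constraints are checked and audited on
# every access, not merely recorded in a consent form.
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("consent.audit")

# Fields each user has declared off-limits (hypothetical storage).
USER_DENIED_FIELDS = {"user-42": {"location", "browsing_history"}}

def fetch_field(user_id: str, field: str):
    """Gate every access through the user's constraints and audit it."""
    if field in USER_DENIED_FIELDS.get(user_id, set()):
        audit_log.warning("BLOCKED %s -> %s (user constraint)", user_id, field)
        raise PermissionError(f"{field!r} is constrained by {user_id}")
    audit_log.info("ALLOWED %s -> %s", user_id, field)
    return ...  # the actual retrieval would happen here
```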

Article IV: The Right to Verify

Every person has the right to confirm that AI systems are operating as claimed and to access evidence of their performance.

Trust without verification is blind faith. This right ensures you can independently confirm that AI systems actually function as advertised and continue to serve human interests over time.

What this means in practice:

Organizations deploying AI must provide evidence that their systems perform as claimed, particularly regarding accuracy, fairness, and safety. This is not a demand to expose trade secrets; it is a demand to demonstrate that systems meet the standards their deployers publicly claim.

Verification extends to ongoing monitoring. An AI system that was fair at deployment may drift over time, developing biases as it encounters new data. You have the right to know that systems affecting you are continuously evaluated and that problems are addressed when discovered.
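
A drift check can be as simple as comparing recent accuracy against the accuracy measured at deployment, as in the hypothetical sketch below; the tolerance threshold is an arbitrary illustration.

```python
# Illustrative drift monitor: flag degradation relative to a deployment
# baseline. The tolerance value is hypothetical.
def check_drift(baseline_accuracy: float,
                recent_outcomes: list[tuple[bool, bool]],
                tolerance: float = 0.05) -> bool:
    """recent_outcomes holds (predicted, actual) pairs; True means drifted."""
    correct = sum(pred == actual for pred, actual in recent_outcomes)
    recent_accuracy = correct / len(recent_outcomes)
    drifted = baseline_accuracy - recent_accuracy > tolerance
    if drifted:
        print(f"ALERT: accuracy fell from {baseline_accuracy:.2f} to "
              f"{recent_accuracy:.2f}; trigger re-evaluation")
    return drifted

# Accuracy at deployment was 0.91; two of four recent predictions were wrong.
check_drift(0.91, [(True, True), (True, False), (False, False), (True, False)])
```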

Key protections include:

  1. Access to performance metrics, including accuracy rates and error distributions
  2. Information about testing for fairness across different demographic groups (illustrated in the sketch after this list)
  3. Independent audits by qualified third parties
  4. Disclosure of significant failures, security breaches, or bias discoveries
  5. Evidence that systems remain reliable over time and across contexts
  6. Ability to request audits when you suspect malfunction or discrimination
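
As one example of fairness testing (item 2 above), the toy probe below computes approval rates per demographic group and reports the gap between the best- and worst-treated groups, a simplified demographic parity check. It is one metric among many, shown only to make the idea concrete.

```python
# Toy fairness probe: demographic parity difference across groups.
from collections import defaultdict

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group_label, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok          # True counts as 1
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions: list[tuple[str, bool]]) -> float:
    """Difference between the highest and lowest group approval rates."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(approval_rates(sample))  # A ~0.67, B ~0.33
print(parity_gap(sample))      # ~0.33: a gap worth investigating
```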

This right recognizes the information asymmetry between AI deployers and those affected by AI systems. Organizations have extensive data about their systems' performance; individuals have little. Verification requirements level this imbalance.

For consequential systems—those affecting employment, credit, healthcare, criminal justice, or education—verification standards should be rigorous and publicly documented. The public has a legitimate interest in knowing that AI systems making high-stakes decisions meet high standards of reliability and fairness.

This right also protects whistleblowers and researchers who identify problems with AI systems. Creating barriers to legitimate verification efforts—through legal threats, access restrictions, or contractual prohibitions—undermines this fundamental right.

Article V: The Right to Delegate Safely

Every person has the right to use AI as a tool while maintaining ultimate authority and responsibility for decisions.

This right acknowledges that AI can be tremendously valuable for augmenting human capabilities. But delegation should never become abdication. You must be able to use AI assistance without losing control over decisions or being held responsible for AI mistakes you couldn't reasonably prevent.

What this means in practice:

When you use AI tools to help with tasks—writing, analysis, decision-making, creation—you should receive assistance that genuinely remains under your direction. The AI should not subtly redirect your goals, make decisions without your awareness, or lock you into irreversible paths.

Safe delegation means you can rely on AI assistance without accepting liability for hidden flaws you couldn't detect. If you use an AI medical assistant and it provides incorrect information, you shouldn't bear sole responsibility for an error the AI made and concealed through overconfident presentation.

Key protections include:

  1. Clear delineation of where AI assistance ends and your judgment begins
  2. Meaningful human control over consequential decisions, with AI in advisory rather than determinative roles
  3. Reasonable liability protection when AI errors occur despite responsible use
  4. Transparency about AI's confidence levels and uncertainties (see the sketch after this list)
  5. Ability to understand when AI assistance might be unreliable
  6. Right to human backup and oversight for critical tasks
  7. Protection against over-reliance on AI through design that maintains human engagement
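
To illustrate item 4, the sketch below shows an assistant that discloses its confidence and explicitly defers to the human below a threshold. The wording, values, and threshold are all hypothetical.

```python
# Hypothetical confidence disclosure: defer to the human when uncertain.
def present_recommendation(answer: str, confidence: float,
                           defer_below: float = 0.75) -> str:
    """Surface uncertainty instead of presenting every answer with equal force."""
    if confidence < defer_below:
        return (f"Low confidence ({confidence:.0%}): '{answer}' may be "
                "unreliable; please review before acting.")
    return f"Suggested: '{answer}' (confidence {confidence:.0%}; you decide)."

print(present_recommendation("Refill prescription X", 0.55))
print(present_recommendation("Move the 2pm meeting to 3pm", 0.92))
```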

This right addresses the paradox of automation: the better AI becomes at tasks, the harder it is for humans to maintain the skills and attention necessary to verify AI output or take over when AI fails. Safe delegation requires systems designed to keep humans in the loop in meaningful ways.

Organizations providing AI tools have responsibilities here too. They should design systems that support appropriate human oversight rather than encouraging blind trust. Warning users that they remain responsible while hiding the information needed to exercise that responsibility violates this right.

This right also recognizes gradations of delegation. You might fully delegate routine scheduling but want to maintain close control over financial decisions. AI systems should accommodate different levels of delegation for different types of tasks, always preserving your ability to intervene, redirect, or take over.
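
A graded delegation policy could be expressed as simply as the task-to-level mapping sketched below. The task names and levels are hypothetical; the default is the most cautious level, so new task types never silently gain autonomy.

```python
# Illustrative mapping of tasks to delegation levels; the human can
# intervene, redirect, or take over at any level.
from enum import Enum

class Delegation(Enum):
    ADVISE_ONLY = 1        # AI suggests; the human decides and acts
    ACT_WITH_CONFIRM = 2   # AI prepares the action; the human must confirm
    ACT_AUTONOMOUSLY = 3   # AI acts; the human can review and reverse

TASK_POLICY = {
    "routine_scheduling":  Delegation.ACT_AUTONOMOUSLY,
    "email_drafting":      Delegation.ACT_WITH_CONFIRM,
    "financial_decisions": Delegation.ADVISE_ONLY,
}

def allowed_to_act(task: str) -> bool:
    # Unknown task types default to the most cautious level.
    return TASK_POLICY.get(task, Delegation.ADVISE_ONLY) is Delegation.ACT_AUTONOMOUSLY

assert allowed_to_act("routine_scheduling")
assert not allowed_to_act("financial_decisions")
assert not allowed_to_act("brand_new_task")
```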

Enforcement and Implementation

These rights are meaningless without mechanisms for enforcement. Implementation requires:

  1. Legal frameworks that make these rights actionable in courts and regulatory proceedings, with real consequences for violations.
  2. Technical standards that make rights-respecting AI the default, not an afterthought, with established best practices and auditing protocols.
  3. Institutional responsibility placing the burden of compliance on AI developers and deployers, not on individual users to protect themselves.
  4. Education and awareness to ensure people understand their rights and how to exercise them.
  5. Democratic governance allowing communities to adapt these principles to their specific contexts and values.

These five rights work together as a system. Understanding enables meaningful questioning. The power to constrain is hollow without the ability to verify compliance. Safe delegation requires all four preceding rights. Together, they form a framework for preserving human agency as AI capabilities expand.

The goal is not to restrict AI development but to ensure it unfolds in ways that enhance rather than diminish human flourishing. These rights establish guardrails that keep AI aligned with human values and under human control, creating a future where powerful AI serves humanity rather than the reverse.