Ron Bronson

State Capacity AI

AI is making decisions, and nobody has to tell you. There's no standard for disclosure, no shared language for what kind of authority a system has, and no easy way to contest a decision made by software.

What I Saw

AI systems are making decisions about benefits, housing, employment, and criminal justice—but there’s no requirement to disclose when it happens. No shared vocabulary for what level of authority a system has. No standardized way to contest a decision you don’t even know was automated.

This isn’t a hypothetical. It’s happening now:

  • Benefits applications scored by algorithms with no explanation
  • Housing eligibility determined by models caseworkers can’t interrogate
  • Appeals processes that assume human review when there wasn’t any

What I’m Building

1. Decision Authority Taxonomy

A framework for classifying what role AI plays in a decision:

  • Inform: System surfaces information, human decides
  • Recommend: System suggests action, human approves
  • Default: System decides, human can override
  • Automate: System decides, human review only on appeal

Most systems blur these lines. The taxonomy forces clarity about who—or what—has authority at each step.
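The four levels can be made concrete as a small data model. This is a minimal sketch, not part of any published standard; the class and field names are illustrative:

```python
from dataclasses import dataclass
from enum import Enum


class DecisionAuthority(Enum):
    """Role an automated system plays in a decision, per the taxonomy above."""
    INFORM = "inform"        # system surfaces information, human decides
    RECOMMEND = "recommend"  # system suggests action, human approves
    DEFAULT = "default"      # system decides, human can override
    AUTOMATE = "automate"    # system decides, human review only on appeal


@dataclass
class DecisionRecord:
    """Minimal record tying a decision to its declared authority level."""
    system_name: str
    authority: DecisionAuthority

    def requires_pre_decision_review(self) -> bool:
        # Inform and Recommend keep a human in the loop before any effect;
        # Default and Automate do not.
        return self.authority in (DecisionAuthority.INFORM,
                                  DecisionAuthority.RECOMMEND)
```

Forcing every system to declare one of these four values, in code and in documentation, is what turns the taxonomy from a vocabulary into a constraint.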

2. Disclosure Standards

People should know when AI is involved in decisions that affect them. I’m developing:

  • Required disclosure language for different decision types
  • Transparency requirements for model inputs and outputs
  • Plain-language explanations of how automated systems work
  • Documentation standards for caseworkers and administrators
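One way to think about required disclosure language is as a template keyed to the system's authority level. The wording below is a hypothetical sketch for illustration, not proposed standard language:

```python
# Illustrative disclosure text keyed by the role the automated system played.
# The wording is a placeholder, not required or standardized language.
DISCLOSURE_TEMPLATES = {
    "inform": ("An automated system provided information to the person "
               "who made this decision."),
    "recommend": ("An automated system recommended an outcome; a person "
                  "reviewed and approved it."),
    "default": ("An automated system made this decision; a person can "
                "override it on request."),
    "automate": ("An automated system made this decision; a person will "
                 "review it only if you appeal."),
}


def disclosure_notice(authority: str, system_name: str) -> str:
    """Compose a plain-language disclosure line for a decision letter."""
    template = DISCLOSURE_TEMPLATES[authority]
    return f"{template} System: {system_name}."
```

The point of keying the template to the taxonomy is that the notice changes when the authority level does, so an agency cannot quietly move a system from "recommend" to "automate" without the disclosure changing too.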

3. Contestability Design

If you can’t challenge a decision, it’s not really a decision system—it’s a decree. I’m building:

  • Human review triggers for automated decisions
  • Audit trails that show what the system considered
  • Override mechanisms that preserve human judgment
  • Feedback loops that surface when models fail
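Two of these mechanisms, audit trails and review triggers, can be sketched together. This is a minimal illustration under assumed names and thresholds, not an implementation:

```python
import time
from typing import Optional


def log_decision(trail: list, system: str, inputs: dict,
                 output: str, overridden_by: Optional[str] = None) -> dict:
    """Append one auditable entry recording what the system considered."""
    entry = {
        "timestamp": time.time(),
        "system": system,
        "inputs_considered": inputs,     # what the model actually saw
        "output": output,
        "overridden_by": overridden_by,  # caseworker ID if a human overrode
    }
    trail.append(entry)
    return entry


def needs_human_review(entry: dict, confidence: float,
                       threshold: float = 0.8) -> bool:
    """Trigger review on low confidence, or on any denial no human touched.

    The 0.8 threshold is an illustrative placeholder; a real policy would
    set it per decision type.
    """
    denied_unreviewed = (entry["output"] == "deny"
                         and entry["overridden_by"] is None)
    return confidence < threshold or denied_unreviewed
```

The design choice worth noting: the trigger fires on *denials without human involvement*, so the costliest failure mode routes to a person by default rather than only on appeal.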

4. Case Studies & Implementation Guidance

Working with agencies to test these frameworks in real contexts:

  • Benefits determination systems
  • Housing eligibility screening
  • Employment assistance programs
  • Child welfare risk assessment

The Approach

This isn’t academic research disconnected from practice. I’m working with caseworkers, administrators, and policy staff who use these systems daily. The frameworks have to work in real bureaucracies, with real constraints, or they’re useless.

Key principles:

  • Augment, don’t replace: AI should help caseworkers make better decisions, not remove their judgment
  • Transparency by default: If a system influences a decision, people deserve to know how
  • Design for contestability: Every automated decision should have a path to human review
  • Build for institutions: Frameworks must work within procurement, compliance, and political realities

What’s Next

I’m expanding this into a full governance framework that agencies can adopt:

  • Model cards adapted for government contexts
  • Procurement language for AI vendor contracts
  • Training materials for caseworkers and administrators
  • Policy templates for disclosure and oversight

The goal is to shift how government thinks about AI—not as automation that replaces workers, but as infrastructure that requires the same care, maintenance, and oversight as any other public system.

What This Demonstrates

I can bridge technical implementation, policy design, and institutional practice. I don’t just critique how AI gets deployed—I’m building the frameworks and practices that make responsible deployment possible in high-stakes public contexts.