What If We Built Legal Systems Like We Build Everything Else?
A Tale of Two Systems
Imagine you're running a hospital and facing two very different problems. First, you have thousands of routine blood pressure checks that need processing—simple measurements against clear standards, no judgment required. Second, you have a few dozen complex cases where doctors must weigh competing treatment options, consider quality of life, and make difficult ethical decisions about end-of-life care.
Would you handle both problems the same way? Of course not. You'd use automated systems for the routine measurements and reserve your most experienced doctors for the complex ethical decisions. You'd never force brain surgeons to personally read every blood pressure monitor, nor would you let algorithms make life-and-death treatment decisions.
Yet this is exactly how we run our legal systems.
Every day, courts waste enormous resources having highly trained judges personally handle thousands of routine disputes—parking tickets, simple debt collection, straightforward contract violations. Meanwhile, these same judges pretend that genuinely complex moral questions (about justice, rights, and fairness) can be resolved through mechanical rule-following rather than the kind of sophisticated ethical reasoning we'd expect from any other profession dealing with life-changing decisions.
The result? A system that's simultaneously inefficient and illegitimate—too slow and expensive for simple cases, too dishonest about its moral reasoning for complex ones.
The Idea: Match Tools to Tasks
The core insight is embarrassingly simple: different types of problems require different types of solutions. This principle guides every other domain of human organization, from manufacturing (different assembly lines for different products) to medicine (different treatments for different conditions) to education (different pedagogies for different subjects).
Legal systems are the bizarre exception. We've convinced ourselves that a single institutional framework—judges applying "law"—should handle everything from parking violations to constitutional crises. This one-size-fits-all approach fails because it misunderstands both the nature of legal problems and the requirements for legitimate authority.
Most legal disputes are actually algorithmic puzzles. When someone drives 65 mph in a 55 mph zone, when a tenant is three months behind on documented rent, when a taxpayer fails to report clearly recorded income, there's no moral complexity requiring human judgment. These cases need consistency, speed, and low cost—exactly what algorithms provide.
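To see why these cases are algorithmic, notice that each one reduces to a pure check of recorded facts against a fixed threshold, with no discretionary input anywhere in the logic. A minimal sketch (the function names, fields, and thresholds are my own illustration, not anything the legal system actually uses):

```python
# Hypothetical routine-case checks: each is a pure function of
# documented facts against a fixed standard. No judgment required.

def speeding_violation(recorded_mph: float, limit_mph: float) -> bool:
    """True if the recorded speed exceeds the posted limit."""
    return recorded_mph > limit_mph

def rent_default(months_behind: int, grace_months: int = 0) -> bool:
    """True if documented arrears exceed any grace period."""
    return months_behind > grace_months

print(speeding_violation(65, 55))  # the driver doing 65 in a 55 zone
print(rent_default(3))             # the tenant three months behind
```

The point of the sketch is not the trivial arithmetic but its shape: once the facts are documented, the outcome follows mechanically.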
A smaller set of cases involves genuine moral reasoning. When judges must balance free speech against public safety, weigh competing interpretations of constitutional rights, or decide whether technical rule-following would produce obviously unjust outcomes, they're making the kind of ethical judgments that require sophisticated human reasoning.
The problem isn't that judges engage in moral reasoning—it's that they lie about it. They pretend to be neutral rule-followers when convenient (to avoid criticism for harsh outcomes) while obviously exercising moral judgment when it suits them (in constitutional interpretation, sentencing, equity cases). This institutional bad faith undermines both democratic legitimacy and moral coherence.
Development: A Three-Tier Architecture
What would an honest legal system look like? One that matches decision-making tools to the types of problems they're best suited to handle.
Tier 1: AI for Routine Cases
The vast majority of legal disputes—perhaps 80-90%—involve straightforward rule applications. These could be resolved instantly by AI systems at negligible cost. Your parking ticket gets processed in milliseconds rather than months. Your simple contract dispute gets resolved for pennies rather than thousands of dollars. The system is perfectly consistent and completely transparent about its decision-making criteria.
This isn't science fiction. The technology exists today. Online dispute resolution systems already handle millions of cases. The barriers are institutional, not technological.
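The tier only works with a triage step in front of it: cases with undisputed facts and a single controlling rule go to automated resolution, and anything raising genuine rights questions goes to a human judge. A deliberately simplified routing sketch (the criteria, field names, and tier labels are illustrative assumptions, not a specification from this proposal):

```python
from dataclasses import dataclass

@dataclass
class Case:
    facts_undisputed: bool        # both parties agree on the record
    clear_rule_applies: bool      # one rule determines the outcome
    raises_rights_question: bool  # constitutional or ethical stakes

def route(case: Case) -> str:
    """Send routine cases to Tier 1 (AI), complex ones to Tier 2 (judges)."""
    if case.raises_rights_question:
        return "tier2_philosopher_judge"
    if case.facts_undisputed and case.clear_rule_applies:
        return "tier1_automated"
    return "tier2_philosopher_judge"  # when in doubt, default to human review

print(route(Case(True, True, False)))  # a parking ticket -> tier1_automated
```

Note the design choice in the last line of `route`: ambiguous cases default to human review, so automation only ever handles the disputes it was built for.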
Tier 2: Moral Philosophers for Complex Cases
Cases involving genuine moral complexity would go to judges who are explicitly trained and empowered to engage in ethical reasoning. Unlike current judges who pretend to be neutral rule-followers, these "philosopher-judges" would be honest about making moral judgments and held accountable for the quality of their moral reasoning.
Think of how medical professionals handle life-and-death decisions. They receive training in bioethics, participate in ethics committees, and are held accountable for moral as well as technical competence. Legal professionals could develop similar sophistication about justice, rights, and fairness.
Tier 3: Democratic Feedback Loops
Here's the truly innovative part: AI systems would monitor the philosopher-judges for consistency and bias, but more importantly, they would identify systematic problems with the rules themselves. When multiple judges consistently find that a particular law produces unjust outcomes, that information would automatically be routed back to democratic institutions for reconsideration.
This creates a learning legal system. Bad laws get identified and fixed systematically rather than persisting for decades. Democratic institutions receive concrete evidence about how their laws actually function in practice. The system evolves based on real-world performance rather than theoretical speculation.
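The feedback tier can be pictured as simple aggregation: track how often judges override a given rule on justice grounds, and flag any rule whose override rate crosses a threshold for legislative review. A toy sketch of that loop (the rule names, the 30% threshold, and the counting scheme are all illustrative assumptions):

```python
from collections import Counter

def flag_rules_for_review(rulings, threshold=0.3):
    """rulings: iterable of (rule_id, overridden) pairs, where overridden
    is True when a judge set the rule aside as producing an unjust result.
    Returns rule_ids whose override rate meets or exceeds the threshold."""
    totals, overrides = Counter(), Counter()
    for rule_id, overridden in rulings:
        totals[rule_id] += 1
        if overridden:
            overrides[rule_id] += 1
    return sorted(
        rule_id for rule_id in totals
        if overrides[rule_id] / totals[rule_id] >= threshold
    )

sample = [("statute_A", True), ("statute_A", True), ("statute_A", False),
          ("statute_B", False), ("statute_B", False)]
print(flag_rules_for_review(sample))  # statute_A overridden in 2 of 3 cases
```

In the sketch, statute_A gets flagged for democratic reconsideration while statute_B does not; the real system would presumably need far richer signals, but the loop has this basic shape.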
Implication: Beyond the False Choice
This three-tier approach resolves the fundamental legitimacy crisis facing modern legal systems. Currently, we're trapped in a false choice between two bad options: mechanical rule-following that ignores moral complexity, or unaccountable moral reasoning by unelected judges.
The three-tier system provides a third way. Routine cases get handled mechanically (honestly and efficiently). Complex cases get genuine moral reasoning (honestly and expertly). And both operate within democratic oversight through systematic feedback mechanisms.
Consider how this would transform access to justice. Right now, most people can't afford legal representation for even simple disputes. A parking ticket can cost hundreds of dollars to contest. A straightforward contract dispute can take years and cost thousands. This effectively denies legal remedies to everyone except the wealthy.
Instant, algorithmic resolution would make justice accessible to ordinary people for the first time. Meanwhile, the cases that actually need human wisdom would get closer attention from judges trained in moral reasoning, rather than from judges pretending such reasoning doesn't exist.
The democratic legitimacy problem also dissolves. Instead of unelected judges making policy through constitutional interpretation while claiming they're just "following law," you'd have explicit moral reasoning operating within systems designed to channel problems back to democratic institutions when systematic issues arise.
Think of it as constitutional democracy with better information flows. Democratic institutions retain ultimate authority over legal rules, but they receive sophisticated feedback about how those rules actually work in practice.
Gesture Outward: The Deeper Questions
This proposal raises profound questions about the future of human judgment in an algorithmic age. If we can automate routine legal decisions, what other domains might benefit from similar approaches? How do we design AI systems that enhance rather than replace human moral reasoning?
More fundamentally, it suggests that many problems we attribute to human nature or theoretical disagreement might actually be problems of institutional design. The legitimacy crisis of legal systems isn't inevitable—it's the result of institutional arrangements that force incompatible functions into single frameworks.
What other institutions suffer from similar design flaws? How might we rebuild democratic governance, economic regulation, or educational systems using similar principles of matching tools to tasks?
The three-tier legal system is really a case study in institutional innovation for democratic societies. It demonstrates that we don't have to accept the false choices that current institutions present. With sufficient imagination and political will, we can design institutions worthy of both our democratic aspirations and our moral convictions.
The question isn't whether such systems are possible—the technology and democratic theory already exist. The question is whether we have the courage to abandon familiar but failing institutions in favor of unfamiliar but promising alternatives.
In a forthcoming academic paper, I'll explore the philosophical foundations of this approach in greater detail, including how it resolves classical debates in legal theory between positivists and natural law theorists. But the practical implications are clear enough: it's time to build legal systems like we build everything else—thoughtfully, efficiently, and honestly about what they're actually trying to accomplish.
What do you think? Are there other institutions that suffer from similar "one-size-fits-all" problems? I'd love to hear your thoughts in the comments, and if you found this interesting, you might enjoy my upcoming post on why judges can't escape moral responsibility or my analysis of the democratic legitimacy problem in constitutional law.