Working in a field where we're required to define things with mathematical precision, I find it frustrating to listen to lawyers and judges flail about trying to figure out how to interpret laws.
I don't want to claim that the law is a trivial matter or that it could be replaced with software, but I do think our legal system could be greatly enhanced by some ideas from math and software development. There is always going to be ambiguity in legal matters, but that doesn't mean we can't greatly reduce that ambiguity with good tools.
I know this is a really tall order, but one idea in particular I'd like to see is the introduction of some sort of unit testing or model checking during the legislative process, where legal scenarios are parameterized and enumerated to act as a guide for legislators, lawyers, and judges. I'd love to see something like Alloy (http://alloy.mit.edu/alloy/) adapted so that non-programmers could use it to model check things like bylaws, guiding the user through ambiguous and conflicting scenarios. I'm not suggesting we try to make laws satisfiable. What I am suggesting is that we model them in a way that highlights and distinguishes the laws that are poorly crafted from those which are clearly interpreted.
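To make that less abstract, here's a toy sketch in Python rather than Alloy (the bylaw, the two "readings", and every name in it are invented): enumerate every scenario a provision can apply to and flag the ones where two plausible readings disagree.

    from itertools import product

    # Toy sketch of "model checking" a bylaw: enumerate every scenario the
    # law can apply to and flag those where two plausible readings disagree.
    # Provision A: "No amplified sound between 22:00 and 07:00."
    # Provision B: "Commercial venues may operate until midnight."

    def reading_strict(hour: int, commercial: bool) -> bool:
        # Reading 1: the noise ban is absolute; B only covers daytime.
        return 7 <= hour < 22

    def reading_exemption(hour: int, commercial: bool) -> bool:
        # Reading 2: B carves commercial venues out of the ban until 24:00.
        if commercial:
            return 7 <= hour < 24
        return 7 <= hour < 22

    ambiguous = [
        (hour, commercial)
        for hour, commercial in product(range(24), (False, True))
        if reading_strict(hour, commercial) != reading_exemption(hour, commercial)
    ]

    for hour, commercial in ambiguous:
        print(f"ambiguous scenario: hour={hour:02d}:00, commercial={commercial}")
    # Flags 22:00 and 23:00 for commercial venues, exactly the cases a
    # legislator would want to resolve before anyone has to sue over them.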
Legal ambiguity is a really expensive weight on our society. Not only does this sort of ruling create investment uncertainty, it benefits incumbents who can buy armies of lawyers and intimidate competitors with drawn-out lawsuits. Our legal system should be the great equalizer, but it never will be as long as we let judges and lawyers convince us to resign ourselves to the idea that our current system "is as good as it gets".
Subsection (a) says that copyright extends to "original works of authorship." Subsection (b) says that copyright does not extend to "any idea, procedure, process, system, method of operation, concept, principle, or discovery."
Where does an "original work of authorship" end and where does "method of operation" begin? How would you propose to define those concepts in a way that is less ambiguous? That is, without arbitrarily throwing away the distinction the law is trying to make just because you can't model it?
I don't think this provision is as unambiguous as you make it out to be if different judges are coming to different conclusions. And that makes sense: it may not be entirely clear to them whether a very specific collection of function signatures is an original work of authorship.
I think being able to test cases pre-emptively has tremendous value because it forces one to consider and define things up front. Opening up such a system might allow software associations to work with legislators to clarify test cases and definitions, creating a process where legislators define things more specifically so that judges have a guide when ruling in esoteric areas.
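Concretely, I imagine something like parameterized test cases that legislators sign off on. A toy sketch (the is_original_work rule and the scenario table are invented, not anything from an actual statute):

    import pytest

    # Hypothetical sketch: each row is a scenario legislators have signed
    # off on, pinned down as a test case before the law is ever litigated.
    # is_original_work() is an invented stand-in for whatever the statute defines.

    def is_original_work(kind: str, creative_choices: bool) -> bool:
        # Invented toy rule: bare function signatures don't qualify;
        # expressive works involving creative choices do.
        return kind != "function signatures" and creative_choices

    @pytest.mark.parametrize("kind,creative_choices,expected", [
        ("novel",               True,  True),
        ("source code",         True,  True),
        ("function signatures", False, False),  # the contested case in this thread
        ("alphabetical list",   False, False),
    ])
    def test_copyrightability(kind, creative_choices, expected):
        assert is_original_work(kind, creative_choices) == expected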
I don't imagine we'd ever be able to create a satisfiable legal system, but I do think we could concoct a system that highlights ambiguity and legal risk so that it prompts individuals and associations to get legislators to clarify messy laws.
Like I said, this is a tall order, but I think we should be thinking about systems like this.
A modelable law would have defined that distinction ahead of time, and when that failed, been adjusted accordingly. Those terms are incredibly vague to begin with, as well.
This particular case could have been anticipated at least as far back as the 1970s.
I mean it was at least possible, and (while we're fantasizing about hypothetical legal systems) it would be desirable if laws could be updated with new information rather than reinterpreted through layers of precedent.
Is that desirable? To me, the existing model embodies a much-valued computer science concept: lazy processing. That allows you to avoid resolving hypotheticals that never actually come up, and when you're forced to resolve a question of law, it lets you do so based on concrete application of the law instead of a mere hypothetical.
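To put the analogy in code, here's a rough Python sketch (names invented): a generator defers work until a value is actually demanded, so hypotheticals nobody raises are never paid for.

    def rulings(docket):
        for case in docket:
            print(f"deliberating on {case}...")  # the expensive part
            yield f"ruling on {case}"

    pending = rulings(["case A", "case B", "case C"])
    # Nothing has been deliberated yet; `pending` is just a promise.

    print(next(pending))  # only now is case A actually decided
    # Cases B and C stay unresolved unless someone brings them forward.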
On the other hand, maybe a less charitable analogy would be prototyping in a dynamic language: thinking through edge cases only when running on some input throws an exception.
While lazy evaluation is desirable, I do think our present legal system evaluates things too lazily. I'd gladly trade some of the current lazy evaluation for greater up-front certainty of outcome.
Relying extensively on the courts to lazily evaluate ambiguous laws biases outcomes toward those who have capital. Since I do not belong to the capital class, I would prefer less ambiguous laws.
Economists have a term, "economic imperialism", which means applying economics outside its traditional domains, in areas such as the economic analysis of law or, say, family relations. I think you've just suggested "software imperialism"...
However, while it's an interesting idea, how about we deal with the ambiguities of things like programming languages (this program runs differently in these two different environments) and computer programs in general ("things stopped working after the update" - that wouldn't happen if programs depended on a formally defined and verified set of things from their environment)? And maybe tackle more complex things like precisely defining the line between flirting and sexual harassment later?
I prefer the term "interdisciplinary research". Specialization of labor will lead us to a societal dead end if we just build impenetrable silos of epistemology. We have to interact to progress.
An article currently trending on the front page about "C in practice" (https://news.ycombinator.com/item?id=9799069) helps make my point (that we barely understand the dark corners of our tools, so perhaps venturing into other territory is a bit premature.)
(Also - the extent to which almost every real-life programming language makes simple things unbelievably complicated sounds like a good example of "impenetrable silos of epistemology"... "language lawyer" is an idiom for a reason. And BTW we have waaaaay fewer excuses for the huge barriers to understanding that we continuously erect around our work, since our subject matter is not nearly as inherently fuzzy as the stuff lawyers, lawmakers and judges deal with.)
The biggest problem is that there is no concept of DRY in law. No single source of laws about $X. It's layers upon layers of conflicting code and it becomes a maze.
Legal precedent is the last thing we would want to mirror in software... Imagine if every resolved bug from every software project, every hacky workaround, was implicitly included in your code, rather than fixing bugs upstream at the source.
(BTW, "DRY" is an inadequate reprise of refactoring. That's always bothered me. "Don't repeat yourself" is one big repeat of an earlier and more clearly and rigorously defined idea.)
I think everyone should be able to program in the same way that everyone should be literate, and everyone should have a grasp of basic mathematics too, for that matter.
But the people who write software professionally must be held to a higher standard, just as we expect so much more from professional writers (but we expect everybody to be able to read street signs and write shopping lists.)
I don't wish to be provocative but anyone who can't understand factoring shouldn't be writing software. DRY is an oxymoron.
Because that whole logical positivism thing worked out so well!
Putting flippancy to one side, the issues here will not be solved by any form of notation. The difficulty lies with concepts enshrined in law that will forever be soft and in need of reinterpretation as the context changes.
I have honestly no skin in this game. I just pointed to a project I'd heard of which seems relevant to this discussion.
It would be nice if lawmakers used some tools from code, e.g. some kind of version control for laws which are being batted between different houses and have amendments added and removed. And perhaps the language could be a little more formalized.
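For a taste of what that could look like, here's a rough Python sketch using the standard difflib module to diff two invented drafts of a statute:

    import difflib

    # Rough sketch of "version control for laws": diff two invented drafts
    # of a statute the way a code review tool diffs source files.

    house_draft = """\
    Sec. 1. No vehicle may enter the park.
    Sec. 2. Violations incur a fine of $50.
    """

    senate_draft = """\
    Sec. 1. No motor vehicle may enter the park.
    Sec. 1a. Bicycles are permitted on marked paths.
    Sec. 2. Violations incur a fine of $100.
    """

    diff = difflib.unified_diff(
        house_draft.splitlines(keepends=True),
        senate_draft.splitlines(keepends=True),
        fromfile="house_draft",
        tofile="senate_draft",
    )
    print("".join(diff))
    # Each amendment shows up as a reviewable hunk, so "who changed what,
    # when" is preserved the same way a commit log preserves it for code.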
Here's what the project's executive director has to say on the matter[1]:
"One technical problem with Computational Law, familiar to many individual with legal training, is due to the open texture of laws. Consider a municipal regulation stating "No vehicles in the park". On first blush this is fine, but it is really quite problematic. Just what constitutes a vehicle? Is a bicycle a vehicle? What about a skateboard? How about roller skates? What about a baby stroller? A horse? A repair vehicle? For that matter, what is the park? At what altitude does it end? If a helicopter hovers at 10 feet, is that a violation? What if it flies over at 100 feet?
The resolution of this problem is to limit the application of Computational Law to those cases where such issues can be externalized or marginalized. We allow human users to make judgments about such open texture concepts in entering data or we avoid regulatory applications where such concepts abound.
A different sort of challenge to Computational Law stems from the fact that not all legal reasoning is deductive. Edwina Rissland [Rissland et al.] notes that, "Law is not a matter of simply applying rules to facts via modus ponens"; and, when regarding the broad application of AI techniques to law, this is certainly true. The rules that apply to a real-world situation, as well as even the facts themselves, may be open to interpretation, and many legal decisions are made through case-based reasoning, bypassing explicit reasoning about laws and statutes. The general problem of open texture when interpreting rules, along with the parallel problem of running out of rules to apply when resolving terms, presents significant obstacles to implementable automated rule-based reasoning."
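To make that "externalize the open texture" resolution concrete, here's a minimal Python sketch of my own (the ordinance and every name in it are invented): the deductive part is encoded, while the open-texture judgment is recorded as a human decision at data-entry time.

    from dataclasses import dataclass

    # Sketch of "externalizing open texture": the mechanical rule
    # ("no vehicles in the park") is encoded, but the open-texture judgment
    # (is THIS thing a vehicle?) is supplied by a human when entering data.

    @dataclass
    class Entry:
        thing: str
        in_park: bool
        is_vehicle: bool  # a human judgment recorded at data-entry time

    def violates_ordinance(entry: Entry) -> bool:
        # The deductive step is trivial once the open-texture call is made.
        return entry.in_park and entry.is_vehicle

    log = [
        Entry("delivery truck", in_park=True,  is_vehicle=True),
        Entry("baby stroller",  in_park=True,  is_vehicle=False),  # the clerk's call
        Entry("skateboard",     in_park=False, is_vehicle=True),
    ]

    for e in log:
        print(e.thing, "->", "violation" if violates_ordinance(e) else "ok")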
> What I am suggesting is that we model them in a way that highlights and distinguishes the laws that are poorly crafted from those which are clearly interpreted.
There are as many elements in the set of laws that are clearly interpreted as there are programs with zero bugs. This is not an analogy. These are direct expressions of the same root cause.
I don't mean to be rude but how can you be so foolish? Have you met humans?
There is no algorithm for "distinguish[ing] the laws that are poorly crafted from those which are clearly interpreted." Law is an imperfect tool for managing human wickedness. No symbolic perfection can force a person to be good. The essence of the problem is that people just like to be cussed bastards. Math won't help.
I never said not to work to make law better, that's very important and needful. I am pointing out that logic and reason have limits in the messy human legal sphere. It's strange to me that this needs pointing out but there you are.