I want to voice my support for keeping the driving use case "experiments".
I'm coming from the perspective of a software engineer here. To a software engineer, a "program" is a collection of stateless routines and behavior. Data is external and separate; the same program should be able to process a wide range of data. "Reproducibility", as much as that matters, is having a tested system that responds in a predictable and reliable way to its inputs, and data is one such input.
When I first worked extensively with a scientist on an experiment, I was shocked at how much common wisdom from computer science was turned on its head. One is expected to load up a Matlab workspace with data and code all in the same file? Scripts irreversibly mutate data, and often run exactly once? How could one possibly keep track of such an environment? How does one fix bugs in a series of commands typed into an interactive prompt? Reproducibility, to a scientist, is a log of actions that another human could repeat, but the environments in use often just dropped such things on the floor, to be caught only by the most diligent researcher with an unusually well-kept notebook.
I think there is definitely a happy medium somewhere. Reproducibility as a scientist understands it; interactivity in a way that makes sense to a scientist writing a one-off script. Program state stored easily so that the scientist doesn't feel lost every time they restart their environment, as I imagine they must do when editing python scripts in vim as a software engineer might. But all this in a world where scripts can be maintained and versioned and fixed without their hair catching fire.
Thanks for the kind words. Until I left academia, I worked with Matlab a lot, and pyexperiment is probably the result of trying to get that experience while making scripts that can easily be shared along with the data needed to run them.
E.g., the issue with irreversible mutation of data is addressed in pyexperiment with rotating state (and log) files: if you store the state of your experiment in one run and then change it in the next, you get a backup of the old state with a numerical extension (by default, up to 5 backups are rotated). Moreover, pyexperiment by default comes with commands to display stored state and configuration options (though they still need to be improved), and both are stored in formats compatible with a host of other software (including Matlab).
Btw., along the same lines, I love ipython notebooks, but the way I use them makes them very hard to share, and compared to plain python scripts, version control is a pain (even with the usual hacks to make diffs readable).
It's easy enough to skip ahead. I like that it doesn't leave any of that out; it's not so tedious that you can't scan for interesting bits, with the rest as reference material. Maybe I am biased from reading on a laptop where scrolling and scanning is easy?
> Maybe I am biased from reading on a laptop where scrolling and scanning is easy?
Probably. I first looked at the page on a tablet, where the viewport is much smaller and so scanning/skimming is much harder. I later got back to the page on a laptop, and I scrolled past uninteresting parts without any problems.
One thing I think would be good on learnxinyminutes is an ability to link to a specific line. Is it possible and I just missed it?
> Contracts that are mutually beneficial typically don't need to be enforced because they are not
This is a fundamental misunderstanding of the purpose of contracts. Contracts are almost by definition mutually beneficial, at least nominally; otherwise the only reason parties would enter into them is under duress or out of ignorance. And in fact, contracts backed by a legal system are routinely struck down if either of those cases occurs.
What you are describing is a situation where neither party has an incentive to defect or default on the contract. And if that is the case then there was no need for a contract in the first place. For example, you and I agree to meet Friday night for dinner. This is a beneficial arrangement for both of us, neither of us need sign anything.
The purpose of contracts is to solve a game-theoretic problem: it is to our mutual benefit to cooperate in some way, but if we cooperate, then one of us can do even better by defecting. For example, it might be mutually beneficial for me to give you $1,000 today, and for you to give me 100 widgets on Thursday. But if I give you $1,000 today, you can do better than the original agreement by keeping both the money and the widgets. So the only reason I would ever give you $1,000 is if there were some way to enforce that you give me 100 widgets. That can be by legal means, or technological means. The threat of punishment under a legal system is one traditionally effective way to incentivize you to hold up your end of the bargain, and in many ways is at the very heart of what makes modern society work. A technological solution that avoids law enforcement and judiciary systems has the potential to be even better, because there are obvious negative effects to having a large and over-reaching police force, for example.
TL;DR The enforceability of the contract is the whole point. The only reason you need a contract in the first place is because someone has an incentive to defect.
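The widget example can be written down as a tiny payoff model. The dollar figures beyond the $1,000 price (my valuation of the widgets, your production cost, the penalty) are invented for illustration; the point is only that a credible penalty flips the defector's best response:

```python
# One-shot exchange from the widget example: I pay $1000 up front, you
# promise 100 widgets (worth, say, $1500 to me and costing you $800 to
# make). All numbers except the $1000 price are illustrative.
PRICE, MY_VALUE, YOUR_COST = 1000, 1500, 800

def payoffs(you_deliver, penalty=0):
    """(my_gain, your_gain) given that I already paid. `penalty` models
    enforcement: what defecting costs you (fines, reputation, jail)."""
    if you_deliver:
        return (MY_VALUE - PRICE, PRICE - YOUR_COST)  # (500, 200)
    return (-PRICE, PRICE - penalty)

# Without enforcement, defecting pays you more than delivering,
# so I should never hand over the money in the first place:
assert payoffs(False)[1] > payoffs(True)[1]

# With a credible penalty above $800, delivering becomes your best
# response, and the mutually beneficial trade can actually happen:
assert payoffs(True)[1] > payoffs(False, penalty=900)[1]
```

The contract itself doesn't create the mutual benefit; it changes the payoff of defection so that cooperation is the rational choice.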
Thanks. That's exactly it. People might try to counter by pointing to specific cases that don't work that way, but that's not what game theory predicts: it won't tell you the outcome of a specific instance of a conflict, but it will predict the overall trend when a winning strategy exists. Traditionally, the counter to exploitable strategies was the honor system: you trusted the people you interacted with not to take advantage of you because you believed in their character. While that can work in specific cases, it's not a good way to run a large-scale system of social interactions, because it invites exploitation. We tend to think of corruption as inevitable, but at large scale, the degree of corruption in a system is actually somewhat predictable from the incentives for it built into the system's rules.
Thanks for some fascinating contributions to this thread.
I see both sides of the debate, and lean toward your side. But I also think both sides are probably wrong to speak as if we will have either one system or the other in the foreseeable future. I think it is more likely that government resources and law enforcement will exist alongside any trustless contract system, acting as a failsafe (or guarantor of exploitation, depending on your POV, I suppose). Few contracts would be irreversible, and few holders of contracts would resist reversal, if physical violence from the state were brought to bear, even if the issue had already been "settled" by software. I believe the reality is that Ethereum-style contracts would increase efficiencies in more routine, less controversial contracts, and the state would remain the final recourse in more extreme cases. As some have said, 30 years down the road when machines are more intelligent, all bets are off. But for the foreseeable future, the state isn't going to wither away just yet.
As far as people obfuscating with complex contracts, yes, of course they will do that. But the obvious implication is that a class of techno-jurists will have to arise to help advise users. Again, that's not much different than the current system of clueless folk consulting attorneys, and I imagine both systems would coexist for some time to come.
I definitely agree that smart contracts are going to be implemented as another layer. They'll erode the need for traditional enforcement slowly. And while there's a need for resolution when there's a bad contract, that resolution system doesn't always have to be a person.
Here's a simple theoretical example. People operate under a common, universal law system that defines the most basic rights. The rules of this system are only those which everyone who participates can agree to. For example, everyone agrees not to kill each other. People who do not accept these rules can form alternative societies, but they will not be able to access areas under common law through the use of smart locks and the like which restrict access to public spaces only to those who accept basic terms of behavior.
These rules can be under constant revision because the code that runs the base social platform is publicly audited, and anyone can submit a request to edit it. These revisions can be tested at smaller scales just by running the rules in private spaces. In your home, you can set whatever rules you want, but anyone running software with conflicting preferences will be alerted to the changes when they enter. If your home rules become popular with your friends, they may choose to implement them in their home. If enough people adopt them, the community may choose to implement them in certain spaces, and they can propagate outward.
The basis of this type of system would be mutual consent, and what a person consents to do can be adjusted at any time by their own software. It would be important that a person's personal platform was entirely under their control and only interacted with the outside world through approved protocols. Likely this would require a direct brain connection that could not be hacked because of physical safeguards.
Business could be conducted between communities that operated under entirely different rules through automatic contract negotiation. New ideas could spread from private to public spaces automatically through smart social contracts.
This is just one, very basic example of a society run by smart contracts. There's plenty of ways for these types of systems to be exploited, but that's part of the design process of any system. Particularly difficult is the problem of letting a machine which can be manipulated dictate so much of life.
But people's brains can be hacked already. Look at advertising, religious cults, mob mentality. People have always been vulnerable to manipulation, and we have developed an interconnected financial system where almost anything can be bought, especially power. We can learn from existing flaws as we develop new systems to fix the bugs in our biology and political systems that have made us vulnerable. We will definitely introduce new vulnerabilities. We just have to make sure we have a robust plan for continued development to respond to them.
Game theory is usually presented as the science of human interaction, but I admit that I have no special knowledge about what Nash might think. I do think that individual interactions are very different from overall trends in the same way that the structure of how two grains of sand interact with each other is different from the structure of a beach as a whole.
I have no doubt that every new technology invites exploitation, and you're right to think about the consequences. That's an essential part of the design process for any technology, especially one that's generally useful. That's the reason why we have such complicated software licenses.
But EvilCorp doesn't need smart contracts to send robots to steal your children. It will definitely use them if they're convenient, but they could just as easily go with the traditional human blood on dead trees contract that's ensnared many souls. Smart contracts add very little to their already efficient operation. But the people who can't afford to hire lawyers, bribe the police, and build robot armies will benefit a lot from enforceable contracts that don't require huge resources to enforce for purely digital transactions.
Even day traders have their capital in some instrument, be it cash, real estate, stocks etc. It's not unlikely that in addition to day trading, this trader invested a large amount of money into the stock market (and indeed, we have been told he did not invest in real estate).
Most day traders close all their positions at the end of each trading day, so as not to be exposed to overnight gap risk. (A "gap" is when the opening price in the morning is not equal to the previous day's closing price. These occur when relevant news arrives after the 4pm ET close.) So if he was really a day trader, he would have been in cash.
It's possible he had been spoiled by the big bull market and didn't know how to trade successfully in a bear market. Different tactics are required.
I disagree here, because the inevitable result of a highly visible series of failed reproductions is a big media hit. The scientific community may be able to poke holes in the reproduction attempt and sort through the damage, but the media and the court of public opinion certainly can't. Not until long after the reputations of possibly faultless scientists have been ruined irreparably.
So it's important for the reproduction attempts to be as high quality and rigorous as we would hope the original studies were. And it behooves scientists to make sure that these attempts are legitimate, unbiased and equitable, and to investigate any experimental flaws and biases of the experimenters before the results are publicized.
Misgivings about a reproduction attempt don't indicate denial of the validity of the scientific process, but recognition of the volatility of scientific news media. The unfortunate reality is that both sides of this effort, both original researchers and reproduction attempts, are subject to a great number of biases and restrictions. Subjective opinion does deeply affect the lives of scientists, and it's not possible for even the best scientists to live in a bubble of scientific purity and assume things will work out.
>I disagree here, because the inevitable result of a highly visible series of failed reproductions is a big media hit.
If a result can't be replicated after many 'highly visible' attempts then the result should be called into serious question.
>The scientific community may be able to poke holes in the reproduction attempt and sort through the damage, but the media and the court of public opinion certainly can't. Not until long after the reputations of possibly faultless scientists have been ruined irreparably.
This sounds like unwarranted fatalism to me. If the result was not reproduced because the experiment was not actually reproduced...I don't see what the issue is here.
>So it's important for the reproduction attempts to be as high quality and rigorous as we would hope the original studies were. And it behooves scientists to make sure that these attempts are legitimate, unbiased and equitable, and to investigate any experimental flaws and biases of the experimenters before the results are publicized.
This is why it is critical for any researcher that desires credibility (and more importantly: explanatory power) to detail their work accurately enough that someone else can exactly replicate their experiments in order to provide independent verification of their claims.
The best way to ensure that replication attempts are 'legitimate, unbiased and equitable' is to ensure your work is good enough that someone can actually (as opposed to merely attempting to) reproduce it.
>Misgivings about a reproduction attempt don't indicate denial of the validity of the scientific process, but recognition of the volatility of scientific news media.
>The unfortunate reality is that both sides of this effort, both original researchers and reproduction attempts, are subject to a great number of biases and restrictions. Subjective opinion does deeply affect the lives of scientists, and it's not possible for even the best scientists to live in a bubble of scientific purity and assume things will work out.
This sounds like something scientists need to work towards resolving.
I don't think these things are all that different, in the very early stages.
The very first part of computational thinking is understanding that you can make a very specific and precise procedure to accomplish a task. If a student is at an age where reading and writing is easy, then learning the syntax of a language is a fine way to accomplish this. The student will spend a lot of time with each finicky word and symbol to make the computer behave, and while they may not recognize that they are defining an abstract procedure, the result is hopefully some intuition that the computer is a very predictable and reliable machine that does exactly what the code says, even if it's not what you meant. With exposure to more languages and by writing more programs, hopefully a student begins to recognize patterns and abstractions in their code, and that's the point at which they become real computational thinkers.
If a student isn't ready for that, there are still fun things to try. One cute one I've seen is a "program your parent" exercise at a workshop. The child can make their parent move one step forward or back, turn left or right, pick up and put down an object, and put one thing inside another. Can they make their parent pour a glass of juice? Or put a lego back in the box?
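As a sketch (the command names and the one-dimensional "world" are invented here), the exercise amounts to writing a tiny interpreter for a fixed command set; the parent, like a computer, executes each command literally:

```python
def run(commands, pos=0, facing=1, holding=None):
    """Execute a command list on a 'parent' standing on a number line.
    facing is +1 or -1; 'step' moves one unit in the facing direction."""
    log = []
    for cmd in commands:
        if cmd == "step":
            pos += facing
        elif cmd == "turn":
            facing = -facing
        elif cmd.startswith("pick up "):
            holding = cmd[len("pick up "):]
        elif cmd == "put down":
            holding = None
        log.append((cmd, pos, facing, holding))
    return pos, facing, holding, log

# The parent does exactly what the program says, even if it's not what
# the child meant: here they end up back at position 1, facing the
# wrong way, still holding the juice.
pos, facing, holding, _ = run(["step", "step", "pick up juice", "turn", "step"])
```

The lesson is the same one the syntax-first route teaches: a precise procedure, executed by something with no common sense, does only what it is told.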
I don't think there is a chicken and egg problem here, because learning to make a dumb machine perform a task by following a procedure is the essence of computational thinking. Learning the basic syntax of a language is probably the most efficacious way to experience this for many students of many ages, even if the explicit goal is "do well on an AP test" or something mundane.
It's a good software development principle. Make things that are secure look secure. Make things that are insecure look insecure. This is going to be insecure no matter what precautions are taken, because the source is open and the key is part of the binary, so it should look exactly as insecure as it is so no one assumes anything untrue about this code.