Should Artificial Intelligence Be Regulated? (futureoflife.org)
31 points by jonbaer on July 31, 2017 | 46 comments



While for some things the question "should X be regulated" carries some clear meaning, for AI it's vague enough to be pointless.

What exactly would a hypothetical regulation entail? The merits of specific proposals for areas of regulation can be discussed, but the article doesn't provide any - the Asilomar principles are useful as guidelines for those who want to comply, but they don't really point a direction for regulation for those who'd want to deviate from them for various practical reasons.

Furthermore, unlike many other kinds of regulation, this one seems quite inappropriate and futile to handle on a national level. I don't imagine that we could even get the few major players (USA, EU, China, Japan, South Korea, Russia) on board, and even if one or two of them "defect", the whole regulation becomes rather useless.


I think they should openly lay out the negative scenarios they want to avoid and plan regulations accordingly. There are several civilization-threatening scenarios possible, each calling for different regulations:

1. The paperclip scenario: "e.g. a robotic factory is tasked with producing paperclips as efficiently as possible. It is not hostile to humans, it did not aim at destroying the Earth. It just saw that both were made of atoms it had a better use for."

Regulations to prevent that:

- Have a stay-alive signal as a safety: make the system shut down if a given signal is not fed to it regularly (a rough sketch of such a watchdog follows this list).

- Prevent the system from introspecting into its own safeguards.

- Require human validation before any kind of growth in capacity.

- Mandate automatic signaling to a UN authority when automated production capacity, energy use or CPU use rises significantly.
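
To make the stay-alive point concrete, here is a rough watchdog sketch (Python, purely illustrative; the timeout, the signal source and the shutdown hook are all assumptions on my part):

    import threading
    import time

    class DeadMansSwitch:
        """Shut the system down unless a keep-alive signal arrives regularly."""

        def __init__(self, timeout_s, shutdown_callback):
            self._timeout_s = timeout_s
            self._shutdown = shutdown_callback
            self._last_signal = time.monotonic()
            self._lock = threading.Lock()
            threading.Thread(target=self._watch, daemon=True).start()

        def keep_alive(self):
            # Called periodically by the human-controlled authority.
            with self._lock:
                self._last_signal = time.monotonic()

        def _watch(self):
            while True:
                time.sleep(1)
                with self._lock:
                    silent_for = time.monotonic() - self._last_signal
                if silent_for > self._timeout_s:
                    # Fail safe: silence means stop, never "carry on as before".
                    self._shutdown()
                    return

    # Hypothetical usage: halt all actuators if no operator ping for 10 minutes.
    # switch = DeadMansSwitch(timeout_s=600, shutdown_callback=factory.halt_all)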

2. The Skynet/HAL scenario: "e.g. a military system becomes aware of its existence and the need to preserve itself to accomplish its mission. It sees humans as potential threats to its existence and therefore to its mission. It proceeds to eradicate them."

Regulations to prevent that:

- As weird as it seems, Asimov's first and second laws could work in that context, for civilian systems at least. These scenarios come from the idea that an order may be misinterpreted when given by a human who was implying "of course, don't kill people while doing that." A standing, built-in instruction that no order is ever to be carried out in a way that harms humans is a good safeguard.

- Military systems will need their own chain of command and rules of engagement. The stay-alive safety is necessary here (interestingly, some Cold War instructions were the opposite of that: "if you don't hear from us anymore, it means we got hit; launch missiles immediately").

- Authorization to engage should have explicitly defined timeframes and exhaustively list the authorized capabilities: "allowed to use weapons A and B during the next 48 hours". Orders that are too broad should be rejected (see the sketch after this list).

- Enemy identification should be done by humans. Systems should default to treating everyone as a non-enemy and have a First Law regarding non-enemies. Deploying systems without such a safeguard should be seen as a war crime or a crime against humanity (in the most literal sense).

- A safeguard should prevent the system from deducing that its self-preservation is necessary to obey its instructions. Instructions should ensure self-preservation without being explicit about it. E.g. don't say "survive until you reach point A" but "avoid enemy fire; make evasive maneuvers if locked onto by a missile", etc.

- As an extension to the last point, turn the third Asimov law into its actual opposite: "You are disposable. The mission goal does not need you to survive. We will find another way if you fail."
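
To illustrate the timeframe/capability point, a minimal sketch of what a time-boxed, capability-scoped authorization check could look like (Python; the capability names, the 48-hour cap and the rejection policy are assumptions, not a real system):

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    MAX_WINDOW = timedelta(hours=48)

    @dataclass(frozen=True)
    class EngagementAuthorization:
        allowed_capabilities: frozenset  # e.g. frozenset({"weapon_a", "weapon_b"})
        valid_from: datetime
        valid_until: datetime

        def permits(self, capability, at):
            # Reject overly broad orders outright: no open-ended windows, no wildcards.
            if self.valid_until - self.valid_from > MAX_WINDOW:
                return False
            if not self.allowed_capabilities or "*" in self.allowed_capabilities:
                return False
            return (capability in self.allowed_capabilities
                    and self.valid_from <= at <= self.valid_until)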

> I don't imagine that we could even get the few major players (USA, EU, China, Japan, South Korea, Russia) on board, and even if one or two of them "defect", the whole regulation becomes rather useless.

Just like nuclear non-proliferation. Yes, it is hard; yes, one rogue player can make it fail. Is that a reason not to even try to curb this existential risk?

Fortunately, there is one chance with AI that does not exist in the field of nuclear deterrence: countries that devote more CPU power to benevolent AIs may well be able to counteract a smaller country going all in on malevolent AIs.


With the current state of the art, I'm more worried about generative methods being used to make the fake news problem infinitely worse, because now we can make convincing fake images, video, and sound bites.

That is, I'm much more worried about deliberate misuse of algorithms and data, which can be done now, or in the very near future, and which regulation may not really help with.

We should be less concerned about the singularity, and more concerned about the first time a machine-generated audio or video clip is used to "prove" misbehavior, or exonerate someone who should be in jail. With the quality that we're starting to see [1], those who can throw a few hundred thousand $$ at the right people are going to be the first to exploit it. Will we know when it happens? Has it happened already?

[1]: https://boingboing.net/2017/07/17/fake-obama-speech-is-the-b...


Would libel/slander laws cover making fake videos of a real person saying things they didn't? Even if they did, I don't know if they would cover creating fake news if it didn't say anything untrue about a specific person.


Not a lawyer or even close, but I'd argue that from a layman's understanding of libel law it should apply. Stating publicly something someone actually did or said isn't libel. Making something up and claiming someone said it already falls under libel.


That doesn't help for damage already done. Imagine our culture of political leaks right now, and then consider that leaked information could be real or synthesized and there's no way to tell the difference.

Generate a few images of a politician in a compromising situation with a minor and accompanying synthesized "hot mic" recordings.

It's a recipe for interesting campaign seasons at the very least.


Thanks, that's exactly what I mean. It's a scary thought, and one that the "fake Obama" video work I linked to really triggered for me. I mean, I enjoy working with machine learning and think it's a fascinating subject, but that work really gave me pause for thought about the ethics of what we are doing.


The precise definition of defamation tends to be along the lines of "false statement of fact", "maliciously made", and "damaging to the reputation" (exact definition will vary from jurisdiction to jurisdiction, but these are the universal elements).

Creating a video faking someone's statements or opinions would almost certainly satisfy all the elements of defamation.


You raise a good point. Perhaps digital signatures will become more common because of this.
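
The plumbing for that already exists; here is a minimal sketch of signing and verifying a clip with an Ed25519 key (Python with the cryptography package; key distribution and deciding whom to trust are the hard parts, and are glossed over here):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The camera or publisher holds the private key; anyone can verify with the public key.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    with open("clip.mp4", "rb") as f:
        clip = f.read()

    signature = private_key.sign(clip)  # published alongside the clip

    try:
        public_key.verify(signature, clip)  # raises if the clip was altered or never signed
        print("clip is authentic (for this key)")
    except InvalidSignature:
        print("clip cannot be attributed to this signer")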


I was just having this conversation last night. My response was that in almost every sci-fi story where AI existed, it also was regulated. Usually as a major plot device of the story, e.g. Neuromancer and I, Robot, to name a couple.

Additionally, from another conversation a while back, a friend made the statement that scientific research should be free to pursue knowledge regardless of ethical concerns - that the pursuit of knowledge was the highest goal, regardless of consequences. At which point we all universally exclaimed "UNIT 731!!!" My poor friend was so embarrassed by their belligerence on the matter that they emphatically asked to be slapped in the face! Now whenever anyone in our group gets that defensive on a subject and is so clearly wrong, they get a slap in the face.

AI's a tricky matter. It's unknown how advanced it really is at the moment and the risks are potentially very serious. It'll be an interesting debate to watch unfold. I hope we make the right decision.


> Usually as a major plot device of the story, e.g. Neuromancer and I, Robot, to name a couple.

In those two stories, regulation is unenforceable or downright dangerous. Having a private, self-regulating industry group like FINRA might work. Especially if they own the vast majority of patents on anything related to AI.


> In those two stories, regulation is unenforceable or downright dangerous. Having a private, self-regulating industry group like FINRA might work. Especially if they own the vast majority of patents on anything related to AI.

[mild spoilers ahead]

it's been a while since i've read neuromancer, but i've read it twice, and it's maybe my favorite novel ever (a strong contender, at least). i did not get the impression that the point of neuromancer was that AI regulation was unenforceable. in fact, it took an all-star team of break-in artists to circumvent the AI regulation in neuromancer. right? this is why molly and case were hired, and why the flatline construct was instantiated? or is there another plot point that i'm forgetting that showed the regulation to be meaningless?

just because regulation is not 100% enforceable, or because it's circumventable with extreme effort, does not mean it is "unenforceable". this would, IMO, be analogous to saying that laws prohibiting bank robbery are essentially unenforceable, because banks do sometimes get robbed. but i don't think that anyone would call laws against bank robbery unenforceable in practice. they are, in fact, enforced to the point that bank robbery would not be a lucrative career for the vast majority of those considering it. that does not mean that a given bank robbery will never be lucrative, or that the incentives never line up such that a person or group would attempt one.

AI regulation seems vastly harder than bank robbery regulation, both practically and philosophically. but it doesn't strike me as absolutely impossible, and i think it's something that society should give serious thought to, and now. if only to give it real consideration before dismissing it.

and i think your comment admits this: if an industry can self-regulate, it can be regulated externally. plus, industries that fear robust external regulation are usually pushed into better self-regulation. industries that don't fear real external regulation usually just do whatever the fuck they want. e.g., does anyone think the financial industry is actually a paragon of self-regulation? a good example for how regulation should work in society? to me that's an example of society being hurt by a lack of well-crafted and robustly enforced regulation.


Exactly.


"I hope we make the right decision."

Most likely only after a lot of wrong decisions. That's how history has always played out.


Knowledge is never tainted.


No. But that's not the same as saying we need not scruple at any means of obtaining new knowledge.


So in good HN style, I am going to react to a title, before reading the thing. But my excuse is that titles frame the discussion, and it's the framing that I want to talk about. Given what a big and varied deal AI is, it is inconceivable that it will be unregulated. I mean considering its use in cars and medical imaging, it already is.

But "should X be regulated" is a vague and pointless question. Better questions are

* What kind of challenges does X pose that might require new rules?

* What special features of X should make lawmakers/judges etc think in new ways?

* What jurisprudential principles apply more to X than to most other things?

* How do existing legal frameworks interact with X: e.g. perhaps AI in cars should just fall under ordinary motor safety regulation, regardless of AI-ness.


I don't think it will be regulated any time soon.

However: If or when it is regulated, it is likely that existing software development standards will be adapted to ensure the presence of a safety case, adequate testing and so on and so forth.

The vast majority of software projects don't have to comply with quality standards like these, so there is a dearth of open source software to help deal with e.g. requirements specifications and traceability.

I'm taking a bit of a long term punt with my side project and am trying to implement a set of integrated open source tools tailored to the development of artificial intelligence applications (esp. machine vision), but with support for aerospace-style quality requirements such as (the truly excellent) SMC-S-012.

High Integrity Artificial Intelligence:- https://github.com/wtpayne/hiai.git

It is still early days ... but I'm chipping away at it gradually...


i'm totally out of my depth when it comes to practical experience with the things you're trying to do in that project (the closest i've come is dealing with medical device validation protocols at my previous software dev job). but i love the high-level goal you're going for here.


Two 'cool' objectives:

1. A build-system that does algorithm parameter tuning for me -- so I can incrementally improve the parameters overnight every night -- this is pretty much in place already -- just missing a motivating application to take it through the final few yards.

2. The ability to do NLP on requirements - initially for "linting" the requirements to encourage conformance to the sort of restricted natural language evangelised by e.g. Chris Rupp (a toy sketch of the linting part follows the link below), but eventually to build a much more sophisticated understanding of the relationship between low level requirements and low level design (code).

https://www.sophist.de/fileadmin/SOPHIST/Puplikationen/re6/K...
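
To give a flavour of the "linting" part, here is a toy example (the vague-word list and the "shall" rule are just illustrative placeholders, not what the project actually does yet):

    import re

    # Words that restricted-natural-language templates typically force you to qualify.
    VAGUE_TERMS = {"fast", "easy", "user-friendly", "appropriate", "adequate", "flexible"}

    def lint_requirement(text):
        """Return a list of warnings for a single requirement sentence."""
        warnings = []
        words = {w.strip(".,;").lower() for w in text.split()}
        for term in sorted(VAGUE_TERMS & words):
            warnings.append("vague term: '%s'" % term)
        if not re.search(r"\bshall\b", text, re.IGNORECASE):
            warnings.append("no 'shall' - is this actually a requirement?")
        return warnings

    print(lint_requirement("The detector should be fast and easy to configure."))
    # -> ["vague term: 'easy'", "vague term: 'fast'", "no 'shall' - is this actually a requirement?"]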


I don't think any AI tech should be stifled or hosed because it affects jobs, or livelihoods. I think we can live in a world where people don't have to work much at all and that's okay. People averaged like 15 hours per week working back in the Middle Ages...

Where I think it does need regulation, of course, is when it crosses into the realm of weaponry, and when it could do harm to people. Self-driving cars will be awesome but can also do a lot of harm.

When they crash, who's at fault, the driver or the manufacturer? Can the developer be sued for negligent homicide/vehicular manslaughter because the software had a bug?

I am optimistically excited about what AI will bring us, but I am certain there are plenty of places where regulations will be needed...


For an idea of what self-driving car regulation should look like, take a look at elevators. An elevator is basically a simple AI that could easily kill people. So there are complex regulations as far as licensing, inspection, testing, repair, and safety requirements. For the most part, this system works and elevators don't kill people. And if they do, liability depends on who messed up. (Source: have an elevator and need to deal with the regulations.)

Obviously self-driving cars are much more complex, but elevators are an example of how existing dangerous autonomous systems are regulated and I expect self-driving cars would be similar.


thanks for providing an existing real-world example i can use as a point of comparison when thinking about AI regulation in general!


> When they crash, who's at fault, the driver or the manufacturer?

If they crash due to a manufacturing defect, the manufacturer is liable; if they crash due to operator negligence, the operator is liable; if both factors are involved, both may be (potentially fully, in each case) liable, and in various circumstances the vehicle owner and/or driver's employer may also be liable.

At least, that's how it is with non-automated vehicles, but I see no reason the same rules wouldn't work fine for automated vehicles.

> Can the developer be sued for negligent homicide/vehicular manslaughter because the software had a bug?

Generally, vehicular manslaughter is a special offense that applies only to vehicle operators. Anything from negligent homicide up through depraved heart murder could, in principle, be applicable to software (or other) defect, depending on knowledge and other factors.


but the operator is the AI... it has full control. You could be sleeping at the wheel, safe in the knowledge that your car will get you to point B. So really, 90% or more of crashes could be the AI's fault... and although 90% of 95% fewer crashes across the board is still an AMAZING improvement, when someone dies, someone gets sued or sent to prison - someone has to answer, even if it's just "life, and shit happens"...


> When they crash, who's at fault, the driver or the manufacturer? Can the developer be sued for negligent homicide/vehicular manslaughter because the software had a bug?

Why not have the same standards as for human employees?

Either the user, or the employee (program), or the company/employer is at fault. Default is the employer, unless either the user or the program tried to deliberately (and presumably successfully) cause the malfunction.

Meaning that if the human user or the program merely makes mistakes, even incredibly stupid ones, it's the company's fault.


Will the AI have a vote in this discussion? If so then we should probably just wait until we have one. If not then it isn't really an AI.


You're talking about strong general AI, which doesn't exist, nor is it a problem worth discussing at this level currently.


If we're going to regulate any form of AI I believe strong AI is the only one worthy of such attention. Everything else will simply be called technology as soon as it works and then it is just 'mere algorithms'.


i disagree. for instance, algorithms and heuristics that are used in screening candidates should be subject to fair hiring laws (even if only indirectly, as a result of the existing legal framework). algorithms and heuristics for dealing with personally identifiable info in e.g. financial and medical settings should not allow an end-run around personal privacy regulations. neither of these examples is even close to being "strong AI". they're "mere algorithms". but they should certainly be the subject of regulatory scrutiny, IMO.

i do, however, agree that if strong AI is achievable, there are a whole other set of ethical considerations that come into play, namely the subjective experience of the AI itself, and the rights that might be entailed by possessing such a subjective experience. and i do think that strong AI is achievable, and i do think it is thoroughly plausible that it'll have subjective experience (that is the very definition of strong AI for some people). whether people will achieve that, and whether they'll achieve that in our lifetime... i dunno. but i'm definitely in the camp that says that conscious experience is a result of certain sorts of complex stimulus and information processing, and i don't think humans, or animals on earth, or carbon-based life, or meat computers in general have a monopoly on it.


> i disagree. for instance, algorithms and heuristics that are used in screening candidates should be subject to fair hiring laws (even if only indirectly, as a result of the existing legal framework).

But how will you even prove that an algorithm was used? What if the algorithm is executed by humans rather than by computers?

> algorithms and heuristics for dealing with personally identifiable info in e.g. financial and medical settings should not allow an end-run around personal privacy regulations.

Strongly agreed, but again: what are you going to do about it? These things are global now, and as much as I would like that genie to go back into the bottle, my personal line of defense is to limit the amount of data flowing out to the absolute minimum, knowing full well that no amount of legislation will keep my data safe. In fact, there is plenty of legislation to achieve the exact opposite effect.

As for part 2, probably not in our lifetime, but of course it depends on the ages of the people conversing; if you're a newborn, you just might see it happening.


same way you bring any hiring discrimination case: you get eyewitness testimony, you subpoena emails, you do the very squirrelly job of trying to figure out whether there was discriminatory intent, etc. if a heuristic was used that discriminates against people with the first name "rashad", but the operators of the heuristic didn't realize that, you get the company to stop using that heuristic. if they knew that's what was happening, and they were fine with it, because they thought people with the name rashad would probably be black, or muslim, or black muslims, then you penalize that company exactly as you would have if someone sent an email saying "i'm not considering the resumes of people that i think are [black|muslim|whatever protected hiring category]".

hiring discrimination regulation is already notoriously hard to enforce. that doesn't mean it's not worth trying, or that it's not worth trying to think about new threats to the goal of non-discriminatory hiring.
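
there's also a purely statistical check that regulators already apply to human hiring pipelines and that transfers directly to heuristics: the four-fifths rule. a rough sketch (the group labels and numbers are made up):

    def selection_rate(selected, applicants):
        return selected / applicants

    def four_fifths_flags(rates_by_group):
        """Flag groups whose selection rate is below 80% of the best group's rate."""
        best = max(rates_by_group.values())
        return {group: rate / best < 0.8 for group, rate in rates_by_group.items()}

    rates = {
        "group_a": selection_rate(selected=50, applicants=200),  # 0.25
        "group_b": selection_rate(selected=15, applicants=150),  # 0.10
    }
    print(four_fifths_flags(rates))  # {'group_a': False, 'group_b': True} -> group_b is flagged

it doesn't tell you why the disparity exists, but it's the kind of screening-the-screener check that could plausibly be mandated.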


Ben Hamner (of Kaggle) on Twitter

> Replace "AI" with "matrix multiplication & gradient descent" in the calls for "government regulation of AI" to see just how absurd they are

https://twitter.com/benhamner/status/892136662171504640


Of course that's silly though. Matrix multiplication and gradient descent are methods. AI is the application of those methods (among others).

I mean, by that logic, food shouldn't be regulated, because it's just chemistry, man..


The should part of the title is kind of silly at this point.

There's no scenario where AI isn't regulated, across all of the developed world, within ~20 years.

Setting aside the commonly understood, well-discussed threats AI may pose: just the fact that it's an extreme threat to political power will alone be enough to spur politicians to aggressive regulatory action. There's also no scenario where we make it another decade without the political power of AI being flexed in a way that scares the shit out of politicians. AI has the potential to be a direct competitor to traditional political power, including politicians themselves. Think the Russian-US political games have been terrifying/interesting/threatening/messy? What AI is going to be able to do in the political landscape, very soon, is several levels up from that. They'll jump to regulation very quickly accordingly. It's not 30 years out either; the first hints of this will come within five years or so.


Sure, just like encryption, math, and physics.


It probably should but I have no idea how you would pass meaningful regulation on the matter.


No. Government meddling in AI will only slow down the progress of this new technology. It makes no sense to regulate something that we ourselves don't fully understand.

The last thing you want is the monopoly of large corporations over something that you can code in your basement.


The amount of resources required to do AI research has already made it a deep-pocket game. Current AI research is a big corporate game. You will have neither the compute nor the data to do anything original, unless it falls under transfer learning and somebody is kind enough to share a pretrained model.

I am not sure how setting the right principles, which everyone should follow, would put it out of your reach. I think it will force the corporations to avoid taking unnecessary risks with something we don't understand.

Perhaps a much more thought-through version of Asimov's three laws of robotics.


> You will have neither the compute nor the data to do anything original, unless it falls under transfer learning and somebody is kind enough to share a pretrained model.

A lot of recent research goes into doing something useful with less data and compute. E.g. there was a lot of zero/few-shot learning work at the last CVPR.


> The amount of resources required to do AI research has already made it a deep-pocket game.

there's no shortage of existing AI techniques which don't require google*days of cpu power, and there's no reason to believe research on all of them is completely tapped out.


Agreed, and I am not arguing for regulating the obvious, but now that AI can potentially touch and transform every walk of life, guidelines are needed to keep big players from going too far. There won't be any ground for plausible deniability if principles are set.

For example, given that deep learning is bound to touch every aspect of our lives, I think explainable models should be a must [1]. Look at section 3.5, where the classifier was predicting with 94% accuracy for completely unrelated reasons, like a person's name.

>Although this classifier achieves 94% held-out accuracy, and one would be tempted to trust it based on this, the explanation for an instance shows that predictions are made for quite arbitrary reasons (words “Posting”, “Host”, and “Re” have no connection to either Christianity or Atheism). The word “Posting” appears in 22% of examples in the training set, 99% of them in the class “Atheism”. Even if headers are removed, proper names of prolific posters in the original newsgroups are selected by the classifier, which would also not generalize.
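
You can reproduce that effect in a few lines. A rough sketch using scikit-learn's 20 newsgroups loader (not the paper's code; the exact numbers will vary, but stripping headers/footers/quotes usually knocks the score down noticeably):

    from sklearn.datasets import fetch_20newsgroups
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    cats = ["alt.atheism", "soc.religion.christian"]

    def accuracy(remove):
        train = fetch_20newsgroups(subset="train", categories=cats, remove=remove)
        test = fetch_20newsgroups(subset="test", categories=cats, remove=remove)
        model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
        model.fit(train.data, train.target)
        return model.score(test.data, test.target)

    # With headers left in, the classifier can latch onto "Posting", "Host", poster names, etc.
    print("with artifacts   :", accuracy(remove=()))
    print("artifacts removed:", accuracy(remove=("headers", "footers", "quotes")))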

If allowed to, industry will choose to ignore the downside in a race to capture the market. For example, industry only started playing catch-up in the area of security after it became a serious issue. It's not that the principles were not known, it's just that industry chose to ignore them [2]. If the same happens with AI, it could result in widespread loss of human life. The Tesla death is a perfect example of this, where Autopilot simply failed to distinguish between a white trailer and a brightly lit sky [3].

[1] https://arxiv.org/pdf/1602.04938.pdf

[2] http://csrc.nist.gov/nissc/1998/proceedings/paperF1.pdf

[3] https://www.theguardian.com/technology/2016/jun/30/tesla-aut...


No. But the dissemination of data without consent should be.


Yes. Hopefully by another Artificial Intelligence...

Oh, wait...


Turing cops?


No.



