
The Commerce Department is considering national security restrictions on AI - tzury
https://www.nytimes.com/2019/01/01/technology/artificial-intelligence-export-restrictions.html
======
logicchains
I'm curious how export restrictions would affect open source projects like
Tensorflow and PyTorch. Would they be forced to become closed source? Could
the license just include a disclaimer: "You're not allowed to use this if
you're in one of the following countries: ..."? Would sites like Gitlab and
Github be forced to implement per-repo geoblocking? Could they somehow be
moved to ownership by a non-American entity that wasn't subject to such
controls?
Does a US citizen contributing to a non-US open source ML project constitute a
breach of export controls?
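
Purely to make the per-repo geoblocking question concrete, here's a
minimal sketch of what such a request filter might look like. The country
codes, repo name, and GeoIP table below are hypothetical placeholders,
not anything GitHub or GitLab actually implements:

    # Hypothetical sketch of per-repo geoblocking: deny access to
    # export-controlled repos from embargoed countries. All names and
    # values below are illustrative placeholders.
    EMBARGOED = {"XX", "YY"}                     # placeholder country codes
    CONTROLLED_REPOS = {"example/ml-framework"}  # placeholder repo

    # Stub standing in for a real GeoIP database lookup.
    GEOIP_TABLE = {"203.0.113.7": "XX", "198.51.100.2": "US"}

    def geoip_lookup(client_ip: str) -> str:
        return GEOIP_TABLE.get(client_ip, "unknown")

    def allow_request(repo: str, client_ip: str) -> bool:
        if repo not in CONTROLLED_REPOS:
            return True  # uncontrolled repos stay world-readable
        return geoip_lookup(client_ip) not in EMBARGOED

    assert allow_request("example/ml-framework", "198.51.100.2")     # allowed
    assert not allow_request("example/ml-framework", "203.0.113.7")  # blocked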

~~~
gumby
In the 90s it meant open source software (we didn’t use this term back then)
like Kerberos was considered a munition, and it was illegal to provide it to
non-US persons without a license.

At Cygnus we generated a version with no cryptography at all and sent it to
Switzerland, where someone wrote crypto routines (available in the open
literature) that interoperated, and then everyone on the planet had access to
it (but American folks had a different version). All so that the Swiss stock
exchange could use Kerberos for single sign-on and end-to-end encryption.

~~~
logicchains
Was publishing cryptography papers in open-access journals also illegal?
Without such a ban it seems it would be very hard to stop the export of AI, as
it's not particularly difficult to implement an AI algorithm from spec, just
tedious; certainly much easier than implementing one's own crypto.

> illegal to provide it to non-US persons without a license

I wonder if in this case Git{hub|lab} would need the license, or the person
uploading the code?

~~~
magoghm
You couldn't export software implementing cryptography, but there wasn't any
similar restriction on books about cryptography, even if the book contained
detailed descriptions of the algorithms and the mathematics!

~~~
justincormack
The PGP source code was published as a book to avoid this
[https://philzimmermann.com/EN/essays/BookPreface.html](https://philzimmermann.com/EN/essays/BookPreface.html)

~~~
tclancy
Wasn’t it also published as a T-shirt?

~~~
wbl
You are thinking of the RSA Perl t-shirt.

------
3pt14159
I understand why they want to—even to the point of thinking it would be the
right move if only it would actually work—but it won't work.

True, software is dual use[0] but it's too slippery. Even things like nuclear
weapons designs[1] are getting pretty easy for people to get their hands on.
This stuff is just text! How are you going to stop this from spreading? It
won't work.

Besides, top-shelf AI tech isn't much better than mid-shelf stuff. Take
self-driving car technology: we're talking about the difference between
1,000 km with no intervention by a human driver and 5,000 km. A terrorist
weaponizing self-driving tech for, say, a mobile weapons platform (AKs
connected to remote triggers) doesn't need the top-shelf stuff. They need a
vehicle that can go maybe 10 km.

Thing is, unlike most people I talk to in tech, I actually support the
majority of the objectives intelligence agencies in the west have. I just
don't know how you solve this one. We need free speech to be a liberal
democracy but information itself is becoming dangerous. How do you square
that?

Further, if the power of individuals keeps going up due to technological
advancement how do you maintain security and freedom at the same time? The
only way I can see it working is for individuals to continue to get more
peaceful.

[0] [https://en.wikipedia.org/wiki/Dual-use_technology](https://en.wikipedia.org/wiki/Dual-use_technology)

[1] Thank goodness weapons grade fissile material is hard to make.

~~~
vinceguidry
> Thank goodness weapons grade fissile material is hard to make.

The material itself is easy to make. It's the delivery systems that are hard
to engineer. Missile technology is literally rocket science.

~~~
3pt14159
I agree that delivery systems are no piece of cake, but if I'm worried about
nukes I'm more worried about terrorism than I am about nation states. From
that lens, fissile material is indeed hard to make. The Iranians were pulling
their hair out when seemingly fine centrifuges mistimed due to malware. If it
were _easy_ then they could have tried a different approach.

------
Veedrac
The publication mentioned:

[https://www.federalregister.gov/documents/2018/11/19/2018-25...](https://www.federalregister.gov/documents/2018/11/19/2018-25221/review-of-controls-for-certain-emerging-technologies)

Relevant excerpt:

The representative general categories of technology for which Commerce
currently seeks to determine whether there are specific emerging technologies
that are essential to the national security of the United States include:

(2) Artificial intelligence (AI) and machine learning technology, such as:

(i) Neural networks and deep learning (e.g., brain modelling, time series
prediction, classification);

(ii) Evolution and genetic computation (e.g., genetic algorithms, genetic
programming);

(iii) Reinforcement learning;

(iv) Computer vision (e.g., object recognition, image understanding);

(v) Expert systems (e.g., decision support systems, teaching systems);

(vi) Speech and audio processing (e.g., speech recognition and production);

(vii) Natural language processing (e.g., machine translation);

(viii) Planning (e.g., scheduling, game playing);

(ix) Audio and video manipulation technologies (e.g., voice cloning,
deepfakes);

(x) AI cloud technologies; or

(xi) AI chipsets.

~~~
philipkglass
It also names "visualization" and "molecular robotics" as (non-AI)
representative technology categories. It includes the specific, the absurdly
general, and the so-far-imaginary among its categories. The Federal Register
publication is not very enlightening and I can't tell if the NYT story is
based on anything more than that publication.

~~~
ggggtez
I'm assuming "molecular robotics" is intended to cover CRISPR-like
technologies.

~~~
philipkglass
There's a separate section for biotechnology. Molecular robotics is listed
under robotics along with smart dust and swarming technology.

------
gumby
What a bunch of maroons.

This stupid approach was such a disaster in the days of crypto controls.
Lifting them protected Americans and unleashed a huge number of new
capabilities.

~~~
bsaul
« Lifting them protected americans and unleashed... »

Source?

~~~
gumby
End-to-end encryption is now common, which protects everyone. SSL/TLS is now
ubiquitous, as are secure messaging apps. Crypto research looks for
vulnerabilities in all sorts of software. This protects everyone, including
Americans.

------
kolikotime
Wide-ranging restrictions such as these are being tactically applied
throughout the world; Germany and France enacted similar policies last year.
As Ian Hogarth sketched out last year, we're entering an era in which AI
becomes part and parcel of a country's geopolitics, an era of AI Nationalism,
so to speak.

Hogarth expressed the opinion that Google's purchase of DeepMind (initially a
UK-based company) will increasingly be seen as an amazing coup: a unit with
some of the world's greatest minds in deep learning allowed to be sold to a
foreign conglomerate. I have to say, as time goes by and governments realize
how strategic their AI communities are, I agree.

~~~
blattimwind
> Germany and France enacted similar policies last year.

Source?

~~~
Semaphor
Searched and couldn't find anything for Germany.

------
demarq
I swear, America is always at the height of a cold war that ended decades ago.

~~~
walshemj
Ended? The great game continues to this day.

------
nv-vn
This is what we get for all the stupid fear-mongering done by people with no
understanding of "AI". When pop scientists like Neil deGrasse Tyson and Elon
Musk weigh in on a field with zero overlap with their expertise, the media
treats it as some kind of prophecy sent from the heavens. Don't get me wrong,
there's scary shit happening with AI that we can legitimately talk about, but
we are so far from a world with sentient computers that the problem being
posed by media figures is no more appropriate to discuss today than it was at
the advent of neural networks.

~~~
timonovici
[https://youtu.be/GAXLHM-1Psk?t=945](https://youtu.be/GAXLHM-1Psk?t=945) - I
think this commentary by Maciej Ceglowski rings true here.

~~~
timonovici
Ah, it comes a bit later, at 19:00 -

"At that point people who are angry, mistrustful, and may not understand a
thing about computers will regulate your industry into the ground. You'll be
left like those poor saps who work in the nuclear plants, who have to fill out
a form in triplicate anytime they want to sharpen a pencil."

------
throwaway343213
How is this even going to work?

A large fraction of the advances in AI come either from China proper, from
Chinese grad students in the US, or from ethnic Chinese raised in the
Americas. I don't think a large fraction of this demographic shares the cold
war mentality of the war hawks in the Pentagon.

What the USG will probably resort to is restrictions on the export of Nvidia
GPUs/TPUs (similar to those on Intel Xeons). Hopefully this'll have the
unintended effect of breaking Nvidia's monopoly on the DL market.

~~~
ngcc_hk
Maybe it is not yesterday's cold war, but it is a different kind of cold war
today. Your point is right on the mark. Imagine Russians working, living, and
sharing all the goodies here while not following the same ethics or market
dynamics (the gene-edited babies are just what we know about; no free access
to the Chinese market while the other way round is open, etc.). What would
any sensible American say these days?

Let us open a free, no-restriction software border between China and America,
not a one-way one. Let us do Facebook and Twitter there ... no.

Open source is great in a free world and within a free world. Here the free
rider doesn't even pay; the benefit just goes to their country even while
they live here, study here, and suck you dry.

------
heyjudy
Do I really need to dust off my DeCSS Perl "this shirt is a munition"
ThinkGeek shirt from way back?

That something is new or "scary" doesn't mean knee-jerk legislation will do
anything other than make a country miss out while innovation strengthens
other countries.

~~~
schoen
You're thinking of two different code-on-shirts episodes here (Federal export
controls on cryptographic software and entertainment industry litigation over
DVD decryption software), although both of the shirts may have involved Perl
code. "This shirt is a munition" is from the cryptography export controls
issue and DeCSS is from the entertainment industry litigation issue.

------
segmondy
Uh oh, let's look at the crypto wars and see how that turned out. AI
restriction is garbage; I think this is a way for the govt to steal AI tech
from private corps. If you refuse to share, how can they assess its strength?
With crypto, we could easily say it's restricted to N bits of key. With AI,
they will make up some rubbish and claim your AI is too strong because you
refused to share. If you share and they find it good and think it gives the
govt an advantage, they will restrict you too. Either way you lose. In the
long run, though, the govt will lose.

~~~
jsmith99
The crypto wars worked out well from the NSA's point of view. Obviously,
determined individuals, foreign governments, terrorists, etc. were not
deterred from obtaining strong crypto. But mass adoption of strong crypto was
delayed by many years, enabling mass surveillance.

~~~
thatcat
So you think by creating a negative association with AI, adoption will be
delayed?

------
killjoywashere
Do you restrict the distribution of sets of trained weights? Do you restrict
models for training? Do you restrict the datasets? Do you prevent journal
publication of new developments? The fundamental algorithms are pretty well
developed, much as Bernoulli's principle was well understood before the first
airplane.

If you want to restrict something, you would restrict the tooling, like we
restrict the tooling to make advanced aircraft wings. In the case of ML that
means you restrict ... the migration of certain groups of engineers? The sale
of GPUs?

~~~
omeid2
While I don't advocate any restrictions on knowledge, if only for the very
practical reason that such restrictions fail at their premise of keeping it
away from the _bad guys_, and while I find National Security to have been
well and thoroughly abused to justify overreaching and draconian powers,
there is something to be said about the power of the information that a
well-trained model holds.

I believe it is fair to at least start with the premise that restrictions
that would normally apply to some data would reasonably extend to a model
trained on it; how much, obviously, depends on the context and sensitivity of
the data.
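
To illustrate why a model can be as sensitive as its training data, consider
the degenerate case of a 1-nearest-neighbor "model": distributing it is
literally distributing the training records. (A deliberately extreme sketch
with made-up records; real models leak less directly, but the point stands.)

    # A 1-nearest-neighbor "model" is nothing but the stored training data
    # plus a lookup rule: shipping the model ships the records themselves.
    import math

    training_data = [((1.0, 2.0), "A"), ((5.0, 5.0), "B")]  # placeholder records

    def predict(x):
        # return the label of the stored training point closest to x
        nearest = min(training_data, key=lambda rec: math.dist(rec[0], x))
        return nearest[1]

    print(predict((1.2, 1.9)))  # "A" -- the raw records travel with the model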

------
FilterSweep
Regulation is certainly needed but we can be certain it won’t be done
properly.

More worrying is the fact that for every “benefit” we receive from AI (self-
driving cars, open source home automation), there are numerous, globally
negative consequences, such as discriminatory practices based on biased
training data. The Chinese executive mistaken for a jaywalker is a clear
example of this.

It feels like this is a train that cannot be stopped and that will lead to
disastrous circumstances.

~~~
creaghpatr
The only thing that can stop a bad guy with an algorithm is a good guy with an
algorithm.

~~~
FilterSweep
Genuinely wondering, is that sarcasm?

------
megiddo
This will work about as well as suing Napster did to prevent the spread of
music sharing.

~~~
bdcravens
The same is true of most convictions and crimes.

~~~
AnthonyMouse
Not really. Most good laws prohibit things most good people wouldn't actually
do, and are mostly effective even if a few people don't follow them. So you
start off with 80% compliance before you even pass the law, then you get
another 15% because the people who might have done it accept that there is a
law against it and don't consider it worth breaking. So you get >95%
compliance, which is generally good enough to be effective even if there is
the occasional scofflaw.

Restricting information sharing on the internet is the complete opposite. If
there was no law then everybody would do it, many people will fight you on
principle to the point of overt civil disobedience, and if even one source is
available the whole thing falls apart -- including from other countries you
have no jurisdiction over.

------
corndoge
Only Americans are allowed to do nonlinear function approximation! Fuck off
China!

~~~
PurpleBoxDragon
Isn't this a bit over-reductionist? It's like the people who claim that since
all electronic images are represented by 0s and 1s, images are just numbers,
and thus it is silly to ban an illegal image since that is just banning a
number.

~~~
hippich
This is interesting... If an algorithm decodes, let's say, a JPG image in
such a way that it produces child porn out of, let's say, the Google logo,
does that make the Google logo an illegal image? Does conveying an illegal
image in other ways, let's say describing the detailed attributes of such an
image in text, make that text illegal? Not from a law standpoint, as I am
sure the law is quite specific, but from the spirit of the law, which in
theory should represent the cultural norms of the country using such laws?

~~~
IshKebab
The law doesn't really have a problem with such technical hacks. It basically
applies common sense: if you distribute a decoder that produces CP from the
Google logo, in legal terms that is no different from distributing CP.

I think someone wrote an article about this many years ago called "the colour
of bits" or something like that. Basically, the idea that you can trick your
way out of the law with "it's just a number" type arguments is nonsense.
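
A toy illustration of why the bits alone can't carry the illegality, assuming
a XOR-based "decoder" for the sake of argument: for any innocent input and
any target output of the same length, there is a key that "decodes" one into
the other, so the decoder/key pair is doing all the work.

    # With a XOR "decoder", ANY input can be "decoded" into ANY target of
    # the same length, given the right key; the key, not the input,
    # carries the information.
    def make_key(innocent: bytes, target: bytes) -> bytes:
        assert len(innocent) == len(target)
        return bytes(a ^ b for a, b in zip(innocent, target))

    def decode(data: bytes, key: bytes) -> bytes:
        return bytes(a ^ b for a, b in zip(data, key))

    innocent = b"totally innocent data"  # stand-in for any public file
    target = b"any other file at all"    # any content, same length
    key = make_key(innocent, target)
    assert decode(innocent, key) == target  # the "decoder" did all the work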

~~~
hippich
To clarify: what if such a decoder indeed decodes images, but it happens that
if you feed it a specific input, like Google's logo, the result is a child
porn picture on the screen? I.e., I am not talking about intentionally making
an algorithm to generate such a picture, but rather about such a result being
a side effect. What would be CP in this case: Google's logo, the algorithm,
both, or neither?

~~~
PurpleBoxDragon
What's likely to happen? Given how low the probability of such an event
happening by chance is, if the algorithm is traced back to you, then the
court will find you guilty of creating the algorithm specifically to produce
the image. In short, you would be extremely unlucky.

~~~
hippich
Why in this case should the maker of the decoder go to jail and not the
creator of the Google logo? To really make it difficult, let's assume both
the image and the decoder were made at the same time.

~~~
PurpleBoxDragon
If both were created at the same time, it would be a far different court
case. I was assuming the case where the logo clearly predates the algorithm,
because in that case you would get enough expert witnesses to testify that
the logo was itself clean, and thus it is extremely likely the algorithm was
developed specifically to produce that image given that input.

If both were provably created independently at the same time, you would
probably instead have an uproar among scientists about the probability of
such an event happening. It might still cause issues, because the probability
of such an event is so small that the proof of independent creation being
wrong is by far the more likely event.

------
menzoic
Dr. Geoffrey Hinton, a pioneer of deep learning, left the United States and
moved to Canada in part as a personal protest against military funding of
research.

------
zxcvvcxz
We're gonna need government permits to do gradient descent.

~~~
illumin8
Great point: where do you draw the line? Will linear regression be considered
"AI"?

~~~
adityab
The line will be drawn between class boundaries.

------
simplecomplex
There is no such thing as AI. No professional, legal, or formal definition of
such a thing exists. Even self-described AI experts have put forth no
definition for AI, let alone agreed on one. Elon Musk was telling people on
Joe Rogan that AI was the biggest threat to humanity without even defining
what AI is.

------
raverbashing
Yes, because that worked fine with encryption restrictions back in the 90s.

------
JoeDaDude
I appreciate the members of HN bringing this to our attention; however, the
comment period on the proposed rulemaking ended on December 19th, 2018.

[https://www.federalregister.gov/documents/2018/11/19/2018-25...](https://www.federalregister.gov/documents/2018/11/19/2018-25221/review-of-controls-for-certain-emerging-technologies)

~~~
JoeDaDude
Let me correct myself. Seems the comment period has been extended until
January 10th.

[https://www.regulations.gov/document?D=BIS-2018-0024-0042](https://www.regulations.gov/document?D=BIS-2018-0024-0042)

------
justcorrect
The problem I see here is logistical. AI is often a blank slate, dependent on
the data that's fed into it to be useful. It's that data, and the way the
model is tuned, that are important. We do control exports of certain grades
of encryption, but how do we control the export of a tuned machine model, or
of the input data for an ML model?

------
Havoc
Good luck putting that genie back in the bottle

------
gammateam
Tl;dr: when you got your people into power out of self-interest, but now
they've outlived their usefulness.

