
Why I stopped working on the Bongard Problems - bfrs
http://www.foundalis.com/soc/why_no_more_Bongard.html
======
bithive123
This is either satire or early-stage schizophrenia. When you see phrases like
"Although we can’t predict the technology of the future on the basis of what
we know at present" and "I do not want to give the impression I know how we
can deal with the nuctroid threat" without a shred of irony, you know
something (beyond the simple logical errors) is awry.

Sadly it's not unheard of for scientists and mathematicians to dabble in
quackery later in their careers.

Edit: I'm not trying to dismiss his claim that technology has moral
implications but he's trying to turn a well-worn social issue that's been
around since pointy sticks into a technological one by waxing paranoid about
the implications of the (by-definition nebulous) idea of "strong AI".

~~~
astrofinch
It sounds to me that you interpret taking what might happen in the far future
seriously as early-stage schizophrenia. Does that sound about right?

If so, that's disappointing news for me and others who try to take what might
happen in the far future seriously.

I'm not sure what to make of your edit. It looks as though you wish to
compartmentalize objections to technological development in such a way that
they can't actually prevent any technology from getting developed. If we're
just going to ignore the outcomes of such discussions, I don't see any point
in having them; we might as well just charge blindly forward.

~~~
bithive123
Emphatically, no. I'm just highlighting, as others have pointed out, that the
mere possibility of future humans using a "nuctroid" to smuggle a weapon is
not philosophically interesting in the face of the inevitability that _humans_
will conspire to do harm (possibly by using robots to smuggle weapons, or jet
planes, or other humans).

The connection between his research and the hypothetical scenario is not
demonstrated and in fact reeks of sloppy thinking more characteristic of
conspiracy enthusiasts.

~~~
astrofinch
Glad I misinterpreted you!

Sounds as though we don't have any significant disagreements. I agree that he
is assigning too much credence to this particular, very specific scenario.

------
busted
If I read it right, he has stopped working on a form of artificial
intelligence because it could potentially (or inevitably) be used to create
androids indistinguishable from humans that are carrying nuclear or biological
payloads inside of them, presumably to be detonated in a densely populated
area.

Taking as a given, like he does, that the advancement and spread of technology
are inevitable, wouldn't it still be many times more likely that people would
just detonate suitcase nukes themselves before they decide to hide them in
expensive and potentially problematic robots? There's surely no shortage
of people willing to die to do that, and even if there were it's unlikely that
setting a bomb on a half hour timer and getting out of dodge will affect the
success rate.

That frankly ridiculous scenario aside, I can imagine much more likely
military applications for computers capable of solving Bongard problems
(which sound pretty cool), like automated drones that are able to
independently identify targets.

~~~
lysol
Humans don't carry nukes because fissile material is _heavy_. Add to that the
shielding needed so the carrier can even transport it without becoming
gravely ill very shortly into the delivery, and the fact that a suitcase nuke
or dirty bomb has never been personally delivered is not very surprising.

The ridiculous part is that making an android carry this payload doesn't
change the nature of the payload. Heavy, fissile material will still give off
signatures that will trip all manner of alarms.

~~~
arethuza
Fat Man, the plutonium implosion bomb that was detonated over Nagasaki, used
only 6.2kg of Pu. Also, the Pu-239 used in weapons doesn't require a lot of
shielding. There are plenty of accounts of people handling nuclear weapon
cores that had only a light plating of other metals, wearing only thick gloves.

What probably is quite heavy is all of the associated components that you need
to make a bomb - the chemical explosives, tamper etc.

The W54, one of the smallest nuclear warheads anyone admits to making, was
about 23kg - although it did have a very small yield.

------
nohat
> “So where does the air vehicle called the Predator [i.e., a flying robot]
> fit? It is unmanned, and impressive. In 2002, in Yemen, one run by the CIA
> came up behind an SUV full of al-Qaeda leaders and successfully fired a
> Hellfire missile, leaving a large smoking crater where the vehicle used to
> be.”

> Yes, just as you read it: a number of human beings were turned to smoke and
> smithereens, and this pathetic journalist, whoever he is, speaking with the
> mentality of a 10-year-old who blows up his toy soldiers, reports in cold
> blood how people were turned to ashes by his favorite (“impressive”, yeah)
> military toys. Of course, for overgrown pre-teens like him, the SUV was not
> full of human beings, but of “al-Qaeda leaders” (as if he knew their ranks),
> of terrorists, sub-humans who aren’t worthy of living, who don’t have
> mothers to be devastated by their loss. Thinking of the enemy as subhuman
> scum to be obliterated without second thoughts was a typical attitude
> displayed by Nazis against Jews (and others) in World War II.

That's... quite a string of logic. He seems to know an awful lot about the
mental process of that journalist.

As a critique of his general point: good general AI is dangerous (and useful)
in so many ways I don't see why he focuses so narrowly on humanoid carriers of
weapons of mass destruction - hell we already have those.

~~~
frankydp
You are very right to question the interpretation of the subhuman argument. If
someone has not been shot at or genuinely afraid for their life at the hands of
another human being, it is very idealistic to say that the conversion of
humans to subhumans by combatants is petty. As someone who has been shot at
and shot back, the reduction of an unquestionably hostile enemy to subhuman is
very normal if not necessary for most members of a military, on both sides of
a conflict. People that judge the hatred of religiously motivated enemies are
both naive and living in walled gardens.

The fact that the OP can morally object to participating in the research is
the perfect definition of ideology inside a protected environment. If he had
ever needed a gun, for example, to save his life, he would not question the
morality of the creator, until he was once again safe from those that
threatened him. I say until because people that question the need for violence
have never experienced true hatred of violence. IMEO.

------
victork2
Pardon me but ... I think that there are way worse dangers than "humanoid
bombs"... One of the main reasons is that to achieve a nuclear explosion you
need a critical mass, and that's hard to conceal for a lot of reasons
(radiation etc...).

What's the difference with a car that could have a bomb in its trunk? Or a
bag? A lot of scientists have wondered about these ethical questions, but I
believe that the benefits of high-performance AI outweigh the downsides of
its research.

BUT I definitely agree with this:

"Americans should grow up and abandon their juvenile-minded treatment of
weapons, high technology, and the value of “non-American human life” (which,
sadly, to many of them is synonymous with “lowlife”). This is the hardest part
of my proposal."

*edit: And what about an android to dismantle atomic bombs instead of humans? Sounds good to me!

~~~
Estragon

> I think that there are way worse dangers than "humanoid bombs"

Yes, I am much more concerned about the scope for ubiquitous surveillance and
systematic domination that even fairly modest gains in AI will allow.
Something along the lines of the Emergency society in _A Deepness in the Sky_.

~~~
psykotic
It's been forever since I read the book, but wasn't the whole point of the
Emergency culture they didn't use AI and relied entirely on hyper-focused
humans with enslavement implants?

~~~
arethuza
Didn't Sherkaner Underhill initially think that the Emergent zipheads were
actually an AI?

~~~
psykotic
Wasn't that Pham Nuwen, after the initial attack?

------
ars
Um, remote controlled robot?

Why would I waste time making an AI robot to carry my bomb when for a lot less
money and complexity I could just control it remotely.

Does he realize how crazy he sounds? Some people become obsessed with an
idea, and start thinking that everything in the world is about them.

Have you ever been approached by someone on the street with a super important
message to tell you, and they are utterly obsessed with it? That's how he
sounds - only more articulate.

I don't intend to be insulting when I say he should see a mental health
professional.

------
glimcat
"They’re in the remote possibility of building intelligent machines that act,
and even appear, as humans. If this is achieved, eventually intelligent
weapons of mass destruction will be built, without doubt."

Worrying about this strikes me as a bit daft when you can already convince
actual humans to be your weapons delivery system.

It also shows some significant shortsightedness regarding scaling laws which
an AI researcher ought to have more experience with. A more legitimate worry
would be basement-grade Predator drones. Grenade-bearing quadcopters which use
computer vision to track and target dense crowds are something which
technology can do _now_ , rather than something which might optimistically
happen in a few hundred years.

~~~
coopdog
Definitely this. A terrorist with an engineering/chemistry/biology degree or
equivalent knowledge could do a lot of damage in today's society. It's not
hard to imagine if you let your mind wander.

(An explosive homemade UAV flown into a stadium would be pretty bad; it could
fly in from anywhere.)

I don't think he gets that security is probability-based (risk = consequence
* _likelihood_); you then concentrate on the factors you can control, like
monitoring for people with intent, looking for known patterns, developing
response plans, etc.

Limiting the technology available is an exercise in futility, and has negative
impacts on society to boot.
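The consequence × likelihood framing above can be sketched in a few lines of
Python; the threat names and all the numbers here are invented purely for
illustration, not real estimates:

```python
# Toy expected-risk ranking: risk = consequence * likelihood.
# A spectacular but vanishingly unlikely scenario can rank below a
# mundane one. All figures below are made up for the example.
threats = {
    "android-smuggled nuke": {"consequence": 1_000_000, "likelihood": 1e-9},
    "conventional car bomb": {"consequence": 1_000, "likelihood": 1e-4},
}

def expected_risk(threat):
    """Expected loss: severity of the outcome times its probability."""
    return threat["consequence"] * threat["likelihood"]

# Rank threats by expected risk, highest first.
ranked = sorted(threats, key=lambda name: expected_risk(threats[name]),
                reverse=True)
print(ranked[0])  # the mundane scenario dominates under these numbers
```

Under these (made-up) numbers the car bomb's expected risk (0.1) dwarfs the
android scenario's (0.001), which is the point: attention should follow the
product, not the vividness of the consequence alone.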

~~~
GoodIntentions
>> It's not hard to imagine if you let your mind wander

Ahem: <http://www.imdb.com/title/tt0075765/>

------
ibarrac
What I find really ridiculous about this article is that the author is worried
about just a single possible use of a world-changing technology. He is
concerned that creating real artificial intelligence will allow for the
possibility of someone building androids with nuclear bombs inside
masquerading as humans, a very specific and frankly ridiculous idea, taken
straight out of the movie Impostor or from Philip K. Dick's story of the same
name.

In reality, the effects of building truly intelligent machines would be so
vast, so utterly unpredictable, that worrying about one single possible use of
the technology is absurd. Nothing has prepared us to deal with another
fundamentally different intelligence on this planet, especially one that would
soon outstrip our own. We don't know if we can keep the AIs as our slaves, or
whether we would become their slaves, or merge with them, or we would become
extinct like the dinosaurs and they would represent a new phase in human
evolution.

For more about the risks related to the rise of true AI read this:
<http://yudkowsky.net/singularity/ai-risk>

------
jerf
Excessively specific adjective: The average _human_ has no particular regard
for the life of the Other. An open-eyed view of both history and the world
around you reveals that in spades. Calling out what we usually call the
civilized world for not caring about the life of the Other is a major, major
lamppost argument. The idea that one should care about someone else 10,000
miles away of another color and completely different culture is a striking and
unusual attitude in human affairs.

(Since we ourselves are human it can be easy to blip over the historical
manifestations of these facts as just part of the natural order of things.
So, as one exercise, if you have trouble understanding what I mean
on a gut level, consider the stocks [1]. Consider what it means that in the
middle of what was at the time the height of civilization and the genesis of
our own in the western world, these things not only existed, but were in
public places. And _used_. I cannot truly internalize this, only observe it.
And consider how often you've seen these and never thought about what they
actually _mean_ about the culture they appear in, if you never have before.
For those not of western civilizational descent you can find your own
examples; they are abundant in all cultures.)

Of course, actual examination and comprehension of this state of affairs won't
necessarily leave you _more_ confident about the likely outcomes.... but it
may make you reconsider the validity of letting someone else beat you to the
research anyhow. Your influence towards humane usage is maximized by being on
the cutting edge, not just being some guy over there yelling.

[1]: <http://en.wikipedia.org/wiki/Stocks>

------
cageface
Like many other posters, I find his specific worries a bit misplaced. However,
I have had some reluctance to continue working on some of my own machine
learning projects because I'm worried about the potential abuses of the
technology.

I'm sure the field will get along just fine without me, of course, but I just
felt like I was very likely to be asked to use ML skills to do things I felt
weren't entirely ethical.

------
domwood
I think that we're largely missing the point here. He's worried that his
fundamentally harmless research will end up powering horrific weapons of mass
destruction, enabling them to attack even more precisely and with more
devastation. And quite frankly, I share his concerns that if those weapons
were developed, we would use them without thought or care. And apologies to my
fellow American Hackers, but America's got the rep for it, what with that one
time they dropped a couple of nukes on hundreds of thousands of unarmed men,
women and children, levelling a couple of cities.

But, I digress, he's talking about androids sneezing us to death. I'm not
going near a shop mannequin ever again.

------
jcoder
The author's attitude that very few Americans are "intelligent, mature," and
"[respect] life deeply" impeaches his opinions on both logical and
geopolitical topics as far as I'm concerned:

> It is typically Americans who display this attitude regarding hi-tech
> weapons. (If you are an American and are reading this, what I wrote doesn’t
> imply that you necessarily display this attitude; note the word “typically”,
> please.) The American culture has an eerily childish approach toward
> weapons, and also some outlandish (but also child-like) disregard for human
> life. (Once again, you might be an intelligent, mature American, respecting
> life deeply; it is your average compatriot I am talking about.)

------
sambeau
Woah. I would have liked a warning about the picture of a kid with his arms
blown off.

I realise the internet is full of this but I try my best to avoid it. I don't
want to become immune to the shock.

The thought of this little guy's pain and suffering and the idea that he was
casually being used to back up an online essay is really sad.

~~~
j-b
Thanks for the heads up. I had not read the article yet and now I definitely
won't.

------
astrofinch
As others have mentioned, this specific concern may not be much of a problem.
It might be that it's easier to deliver a nuclear bomb the old-fashioned way
than putting it in a fake person.

However, I agree that development of AI should be done with caution. The work
of the Singularity Institute is worth looking into; see
<http://commonsenseatheism.com/wp-content/uploads/2012/02/Muehlhauser-Salamon-Intelligence-Explosion-Evidence-and-Import.pdf>
for a more academic summary and
<http://facingthesingularity.com/> for a longer popular summary of their
positions.

------
ardillamorris
Another "this is why I quit" + name_of_company doomsday letter. Instead of a
company, he's quitting his research and university. We know why this starts:
seeking fame. We know how this ends: forgotten.

------
dbecker
A lot of people get tired of their dissertation research, and I've heard
others contemplate contrived reasons not to finish their PhD.

This happens to be especially far-fetched... but it takes a "big" reason to
justify to yourself leaving behind so much work.

I hope the author realizes that this particular scenario isn't one of the
1,000,000 biggest concerns for humankind... that he continues his research
program, and that he finds an application of his research that has a positive
impact in a much more likely scenario.

------
rkaplan
I think the most credible concern this post mentions is the general disregard
in the United States (especially among those in charge of the military) for
the long term implications of the indiscriminate use of A.I.-based warfare.
Drones seem great for the U.S. now: they make it easier to kill enemies and
don't directly endanger American lives. But in a decade or two when "enemy"
nations start to develop them too, things get a whole lot more complicated.

Nonetheless, I think the general stance of the article is severely flawed. We
cannot halt research in computer cognition because it has the potential to be
weaponized (and dangerously so). As the author himself mentions, it would be
akin to halting the development of the knife because people can use it to stab
each other, or the development of the Internet because it makes it easier for
criminals to communicate and organize.

Avoiding a potential advance in technology by doing things like cutting
funding to it, and hoping it will go away as a result, is never the solution
to potentially dangerous development. One cannot stop the inexorable march of
progress by "making a statement." The approach with greater value is to call
out the dangers that the potential advance poses (as the post has done), and
then work to develop an ethical framework within which the new technology can
more safely exist.

The Singularity Institute has raised awareness of this broader issue in the
past, as have several others, and is promoting the creation of "Friendly A.I."
[1] to help address the problem.

[1]: <http://en.wikipedia.org/wiki/Friendly_AI>

See also this recent article: <http://www.economist.com/node/21556234>

~~~
ars
> of the indiscriminate use of A.I.-based warfare

There is no A.I. based warfare - the drones are controlled by human pilots.

~~~
rkaplan
From wikipedia: "An unmanned aerial vehicle (UAV), commonly known as a drone,
is an aircraft without a human pilot onboard. Its flight is either controlled
autonomously by computers in the vehicle, or under the remote control of a
navigator, or pilot (in military UAVs called a Combat Systems Officer on
UCAVs) on the ground or in another vehicle."

They can be controlled by pilots remotely, but are also able to function on
their own.

~~~
micaeked
They are able to function on their own about as much as an autopilot is able
to function on its own.

------
ruethewhirled
Quick note: There's a not quite safe for work image near the bottom of the
article (Topless tribal woman)

~~~
dsr_
If you're upset about the topless happy healthy woman and not about the scenes
of disfigured war victims above it, there's something wrong with you as a
human.

~~~
eswangren
Yes, if only we could pay the bills with naive idealism.

~~~
ktizo
You can rather easily, if it is written well and marketed properly.

------
anigbrowl
_They’re in the remote possibility of building intelligent machines that act,
and even appear, as humans. If this is achieved, eventually intelligent
weapons of mass destruction will be built, without doubt._

We already have those. There are plenty of people willing to blow themselves
up and take a bunch of others with them:
<http://en.wikipedia.org/wiki/Explosive_belt>

As a non-American from a constitutionally neutral country, I think this is the
equivalent of having people traveling in front of trains with red flags. There
are any number of ways to disguise a devastating weapon or deliver it
undisguised, and evil is not a mere by-product of technical incapacity.

------
fchollet
Tinfoil hat and nonsense. Since when are the Bongard problems even remotely
connected to actual _human_ cognition? Is this guy straight out of the 60s?

------
niels_olson
> the nuclear bombs that Pakistan possesses would fall into the hands of
> terrorists.

This exact scenario was discussed today on NPR
(<http://www.npr.org/books/titles/154283427/confront-and-conceal-obamas-secret-wars-and-surprising-use-of-american-power>)

------
sandycheeks
Made me think of this... Is the Concept of an Ethical Governor Philosophically
Sound? By Andreas Matthias <http://www.shufang.net/matthias/governor.pdf>

Perhaps he should work on these kinds of algorithms instead of ones that solve
Bongard problems.

------
javert
The author's characterization of any Americans who disagree with his politics
as morons is disgusting.

~~~
mieubrisse
I strongly felt this way from about the halfway point on down. He seems to
have a particular hatred for Americans, and he is nothing if not vocal about
it.

> But Americans can sense that this is not a case like those they’re familiar
> with, if they realize that the “reign of terror” was a cheap trick employed
> for years by their post-9/11 administrations in order to reduce civil
> liberties and pass antidemocratic policies with no resistance. I am not a
> member of their administration, not even an American. I am speaking as a
> person concerned about fellow people and the future of humanity as a whole.

smelled particularly strongly of a conspiracy theorist's thinking. While I'll
not dispute that certain acts passed during our author's so-called "reign of
terror" overstepped their bounds (the Patriot Act is, of course, first to
mind), passing off all regulation meant to fix clearly- and recently-exposed
flaws in U.S. security as maniacal scheming that we, the public, are docilely
accepting is inaccurate and downright insulting.

Worse, he tries to couch his blatherings in the mantle of a just, altruistic
benefactor who is merely "concerned about fellow people and the future of
humanity as a whole." The article was interesting for the issues it drew
attention to which DO merit consideration, but ultimately was spoiled by
diatribe and paranoia.

------
gee_totes
While I respect the author's decision to leave his research, I am surprised
that the reason was robot suicide bombers.

We have plenty of humans who are ready to go into a crowded place and detonate
an explosive. Some, I'm sure, would like that explosive to be a nuclear
weapon.

------
madethemcry
This is ridiculous. I only read half of the story; after the full story I
would probably say insane.

------
codgercoder
two words: "Dark Star"

~~~
sp332
To add a bit more to the conversation: the movie "Dark Star", which came out
in 1974, predated and anticipated many of the sci-fi tropes later popularized
by Star Wars, Star Trek, etc. The characters have a conversation with a bomb
and try to rationalize that it shouldn't detonate.
<https://en.wikipedia.org/wiki/Dark_Star_%28film%29>

~~~
emmelaich
Excellent movie.

Two of the creators in Dark Star (Dan O'Bannon and Ron Cobb) worked on Star
Wars, Alien, others. Also George Lucas was of course aware of Dark Star. So
it's a bit different from mere predating/anticipating.

(Though you are probably aware of that)

~~~
sp332
Yeah, anticipate probably isn't the right word. I just think it's funny that
this movie seems to parody those others before they were even made :)

------
rsanchez1
Wow, this guy has a bone to pick with Americans.

Why worry about an AI humanoid delivering weapons, when we already have so
many humans who do that already? The groups sending people on suicide missions
certainly won't spend money on androids, and suicide missions are much more
common than just 9/11. Hint: for the most part, it's not Americans sending
people out to deliberately commit suicide by delivering weapons to targets.

It's just as naive to be so one-sided about the issue.

------
its_so_on
This has to be one of the biggest leaps of logic I've ever seen in my life.

It's like, "Why I stopped working on cryptography." Sentences 1-5: author
introduces the theory behind cryptography (interesting). Sentence 6: he says
he stopped working on it for ethical reasons (um, okay). Sentence 7: because
cryptography would prevent Batman doing his detective work (batshit insane).

------
ktizo
Atomic dielectric resonance scanning obsoletes nukes anyway. It also obsoletes
most concepts of privacy and most existing biological, chemical and geological
analysis technologies.

<http://en.wikipedia.org/wiki/Atomic_dielectric_resonance>

<http://adrokgroup.com/>

------
maeon3
I cannot think of a worse argument to stop building revolutionary technology
than: "It might all blow up in our faces".

It's going to get built, one way or another. The only way for it not to
destroy us is either to make sure perfect angels design it perfectly, or to
proceed cautiously and make things as safe as possible. Like airplanes and
spaceships.

If he's worried about androids rising up against their former rulers with
their delicate flesh, his worries are about 60 years premature. I will
continue to build and improve on the neural networks I build. And when they
are intelligent enough to ponder their own existence and defend themselves as
humans do, I will fight for their rights as citizens.

------
aneth3
When humanoid robots become passable as humans, I would expect us to have
technology capable of distinguishing between warm-blooded humans filled with
water and robots filled with artificial compounds, and of detecting bombs
embedded in anything mobile.

I wonder why it did not occur to him that the same AI could also be used to
aid in this detection of humanoid nuclear bombs, which if they are going to be
built, will certainly be built with or without him.

------
koglerjs
It's really too late to be concerned about frightening effects of technology
now that drones are allowed in US airspace.

------
voodoochilo
great article! loved it. in the 80's i coined the phrase "never write
software for cruise missiles!". that was harder to live by than i imagined
then. today it's the same with AI, ML and even data mining. ethically very
tough stuff for responsible software developers. anyway, thnx for the
article.

