
Researchers propose a new interdisciplinary subfield called “Machine Behaviour” - benhsu75
https://www.nature.com/articles/s41586-019-1138-y
======
high_derivative
My highly cynical take as someone researching in ML is that all these
interdisciplinary AI studies, AI policy efforts, etc. are mostly populated by
grifters trying to get in on the gold rush.

What has AI policy produced so far, really? Algorithm analysis and safety
research is done on the algorithmic side, not on the policy side. Every time I
have read an AI policy paper, it has been a listing of desirable properties
with absolutely no input on how to take any real steps towards them. In the
meantime, the reality of algorithm-driven mass surveillance and censorship is
moving ahead at a rapid pace.

Possible conclusion 1: We just need way more AI policy research.

Possible conclusion 2: AI policy is done by the wrong people and deep
algorithmic understanding is a pre-requisite.

Possible conclusion 3: Current AI policy is just used as a fig-leaf by tech
companies who hire a few policy essay-writers without substance. The lack of
progress in that area is a feature, not a bug.

Not mutually exclusive, not exhaustive. Feel free to point to valuable policy
research.

~~~
currymj
a common pattern I see in papers written by people on the technical side is:

"The correct way of ensuring algorithmic fairness is X" where X is something
they already happened to work on.

In various papers X could be robust optimization, differential privacy, causal
inference, adversarial training, etc. The papers are mostly good, actually,
but it's sort of putting the cart before the horse.

I think there might be legitimate value in having a bunch of lawyers,
sociologists, and philosophers set a target more-or-less in ignorance, and let
the people on the technical side try to hit it.

Of course even better would be interdisciplinary collaboration, but that's
hard: there aren't many incentives in its favor (where do you publish? any
given venue is worthless to half of the authors), and it requires humility on
the part of everyone involved.

~~~
notbob
_> The papers are mostly good, actually_

Not surprising. Computer scientists have a _bit_ of experience with
computation, after all. It would actually be _more_ concerning if a bunch of
new stuff was being invented out of whole cloth.

 _> but it's sort of putting the cart before the horse._

Not at all. ML algorithms are _just fucking algorithms_ and computer
scientists have been thinking about what it means for an algorithm to be
correct since... Turing.

And have been proving various theorems about _ML algorithms in particular_
since at least the 60s.

Complaints that "AI safety looks a lot like previous CS research" are
basically equivalent to observations that "neural nets have been around for a
lot longer than AlexNet".

 _> I think there might be legitimate value in having a bunch of lawyers,
sociologists, and philosophers set a target more-or-less in ignorance, and let
the people on the technical side try to hit it._

I disagree. This is how you end up with endless navel gazing about trolley
problems while actual vehicles kill people by accelerating without control
because redundant parts are too expensive and engineers don't have enough
voice. Philosophers are rarely interested in honest-to-god engineering ethics,
which almost always boils down to "pay well enough to hire good people, and
then listen to the good people you're paying good money to have around".

~~~
radarsat1
Ah yes those money grubbing philosophers.

~~~
notbob
I don't think it's money grubbing. I think it's a genuine attempt to connect
with the zeitgeist and be relevant, paired with a fundamental misunderstanding
about what's actually happening inside self-driving groups.

But the intention doesn't really matter.

What matters is the utility of the output!

In fact, somewhat ironically, I think a lot of the _good_ work on ethics for
AI is coming out of engineering, business, statistics, and economics
departments. And those academic departments do tend to be a bit more "money
grubbing" relative to philosophy :-)

------
randcraw
It's not clear to me how this "field" of study is any different from existing
academic analyses of automation or computing. AI doesn't change what computers
do; it affects only _how_ it's done. The problems inherent in automation are
as old as windmills, falling water, and certainly electricity.

Sure, a few modern fields are affected more than others, but improving a
voice recognition or visual object identification component by 20% doesn't
fundamentally change the system using such code to a degree that warrants a
new academic discipline of study.

A new _legal_ field of study on automation makes more sense to me. Algorithmic
bias, product or service liability, baseline accountability, the standards of
due diligence and safety -- these are rising concerns exacerbated by the
recent increase in automation, AI-based or not. But these problems are hardly
unique to AI any more than the misdirection of elections was unique to
manipulation of digital social media.

I'm also doubtful that the byproducts of automation by corporations and
nations can be meaningfully addressed by a bunch of academic computer
scientists.

~~~
fromthestart
>AI doesn't change what computers do; it affects only how it's done.

Except AI is very much changing how things are done _as well as_ what
computers do. Image recognition, natural language processing, self-piloted
vehicles: these are all novel applications for machines which carry
significant real-world risks to life and property.

Moreover, AI in its current state is a complex black box with unpredictable
outputs for given inputs, which may be chaotic - see, for example, adversarial
attacks. If we want a better handle on, say, regularizing outputs in a
predictable way, or at least a clearer window into the occluded complexity of
neural network decision making, a whole new field may very well be warranted.
There's a reason we are calling these program outputs _behaviors_ now; it's a
new type of loosely deterministic computing until we develop a deeper
understanding, which will not be trivial.
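
To make the adversarial-attack example concrete, here is a minimal sketch of
the classic FGSM perturbation (assuming a PyTorch classifier; `model`, `x`,
and `y` are placeholder names, not anything from the paper). A nudge to the
input that is invisible to a human is routinely enough to flip the predicted
label:

    import torch

    def fgsm_perturb(model, x, y, eps=0.01):
        # Fast Gradient Sign Method: push each input value by eps in the
        # direction that most increases the classification loss.
        x = x.clone().detach().requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x), y)
        loss.backward()
        return (x + eps * x.grad.sign()).detach()

    # x: inputs the model currently classifies correctly, y: true labels.
    # x_adv = fgsm_perturb(model, x, y) often gets a different label even
    # though it looks essentially identical to x.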

~~~
drb91
> Image recognition, natural language processing, self piloted vehicles

Obviously none of these are AI and I don't see how you're helping anything by
insinuating they are.

~~~
IanCal
These absolutely are AI as far as the term has been used for a significant
amount of time. There is no value in trying to redefine it after, what, 70
years?

~~~
drb91
> There is no value in trying to redefine it after, what, 70 years?

That implies there was any meaningful definition to begin with. That is
entirely an illusion, and people are using it to mislead investors (not that
I'll ever shed a tear for them).

~~~
IanCal
People absolutely use it usefully; that it can be used as marketing fluff is
a far more recent thing. If it's never been used usefully and isn't now,
there's no point arguing about it because that war has been lost; and if it
has been used usefully, redefining it makes no sense.

Arguing about definitions is easily one of the least relevant parts of any
discussion so I'll leave it here. I just wish AI topics didn't always have
_someone_ say it wasn't "real AI".

------
killjoywashere
If you lead, or aspire to lead, major projects, I submit a reasonable
definition of 'major' includes sitting at the table in national capitals with
senior policy makers. They will expect you to be as invested in understanding
their concerns, and the concerns of their constituents, as they are in trying
to understand your project and your objectives. Reading articles like this,
and I read and annotated the whole thing, is the homework you are going to
have to do. Yes, they should all learn category theory and linear algebra, and
they haven't. That doesn't mean you suddenly have insight into the evolution
of the problem spaces they've spent their entire, obviously successful careers
navigating. So maybe pay the penance for your genius and invest the time to
understand something about their efforts as they fumble around in the dark. At
least they had the decency to put it in writing for you to read, or ridicule
as the case may be.

Also, feel free to look up some of the authors on this paper. They don't suck.

------
okintheory
I get worried when I see something like this.

I'm inclined to believe a substantial number of researchers are currently
being deliberately fuzzy about what "AI" can and cannot do. Why not call it
algorithms and statistics? I think lay-people have a very skewed understanding
of what has already been achieved through AI. They may also not understand the
word "algorithms", but at least it doesn't make them think of Skynet. For
example, if you asked a person on the street whether there exists an
"artificially intelligent supercomputer" somewhere that could help you plan
_all aspects_ of a small but entertaining dinner party, they would probably
say yes. They imagine that you could just ask IBM Watson to help out, and he'd
tell you what to do. This is completely false. "AI" systems are very fragile,
and yes, we could build something that plans dinner parties, but we'd have to
start over if we needed to plan a kid's birthday party instead. It's very far
from strong AI.

Ten years ago, when it was mostly IBM telling lies, the end result was a
couple of billion dollars wasted by hapless healthcare conglomerates. That's
already bad. But now we have people from MIT, Stanford, Harvard, Yale, and
more embracing the term AI and relying on unfounded hype to push for funding.
It would be much less sexy if we called it "Facebook/Google/etc enable
unfair/discriminatory advertising by combining intensive data collection with
logistic regression, sometimes in multiple layers, and with some graph
algorithms thrown in". But it would be a much better starting point for a
well-informed debate.

I'm not trying to minimize the importance of algorithms in our world, but a
healthy discussion should be based on a sound understanding of the facts on
the ground, and AI hype is not helping with that. I strongly prefer the less
hyperbolic terminology adopted by someone like Aaron Roth at UPenn, e.g. see
the blurb for "The Ethical Algorithm: The Science of Socially Aware Algorithm
Design".
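
For what it's worth, "logistic regression, sometimes in multiple layers" is
not a caricature: a small feed-forward classifier really is just stacked
logistic-regression-style units. A minimal sketch (the library choice, names,
and sizes are illustrative assumptions, not anyone's production system):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def layer(x, W, b):
        # each output dimension is its own little logistic regression
        return sigmoid(x @ W + b)

    # Two stacked layers form a tiny neural network. The weights here are
    # random placeholders; in practice they would be fit to collected user
    # data.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(1, 20))                         # e.g. 20 user features
    h = layer(x, rng.normal(size=(20, 8)), np.zeros(8))  # hidden layer
    p = layer(h, rng.normal(size=(8, 1)), np.zeros(1))   # e.g. P(click on ad)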

Bonus AI rant: For the celebration of the new MIT Schwarzman College of
Computing, which is a huge expansion of arguably the most important computer
science department in the world, there was a discussion panel on AI consisting
of MIT President Reif, Henry Kissinger (war criminal?, Theranos board member,
wannabe AI expert), Tom Friedman (columnist of limited substance), and Stephen
Schwarzman (businessman, coined the "increased taxes on carried interest are
like Hitler's invasion of Poland" analogy, brought the dough). How the heck is
that the inaugural panel!?!

~~~
currymj
Most actual researchers are entirely sick of the hype and are desperately
trying to correct it, to no avail. I think laypeople just have a strong desire
to get overexcited about AI, and nothing is going to stop them.

~~~
bonoboTP
Maybe researchers would like that, but PR is important even for academic
researchers. Sexy and hyped topics get more media coverage and more funding,
and lead to more impressive CVs and careers.

------
aristus
"Means and motive matter as much as ends. AIs don’t operate in isolation.
Somebody designs them, somebody gathers the data to train them, somebody
decides how to use the answers they give. Those human-scale decisions are –or
should be– documented and understandable, especially for AIs operating in
larger domains for higher stakes. It’s natural to want to ask a programmer how
you can trust an AI. More revealing is to ask why _they_ do."

https://www.ribbonfarm.com/2018/03/13/justifiable-ai/

------
hobs
Paging Dr. Susan Calvin.

~~~
BWStearns
"Robopsychology!" was my first thought haha.

------
TimTheTinker
Is there a rigorous definition of “machine behavior” as a research field?

I can see how a good-enough marriage of ML/NN to classic symbolic AI could
yield a system capable of higher-order, intention-based “behavior” in complex
circumstances. But we’re still a long way from achieving that, as far as I
know.

------
return1
There is no way to stop the hype. You can just enjoy watching the waves.

------
barbecue_sauce
Human-Computer Interaction?

~~~
dane-pgp
Human-Cyborg Relations?

------
dqpb
> _Machines powered by artificial intelligence increasingly mediate our
> social, cultural, economic and political interactions._

What about non-AI powered machines that mediate our social, cultural, economic
and political interactions? For example, science journal paywalls.

~~~
SolaceQuantum
In my experience most researchers are more than happy to share with you the
paper upon askance via e-mail.

~~~
DiseasedBadger
If I asked for a paper and they looked at me askance, I'd wonder why I emailed
them instead of just asking - since we're in viewing range obviously.

~~~
taneq
Damn autocarrot.

