Hacker News new | past | comments | ask | show | jobs | submit login
The 'creepy Facebook AI' story that captivated the media (bbc.com)
157 points by sonabinu on Aug 1, 2017 | hide | past | favorite | 104 comments



This stuff sounds funny now, and some of us grad students had a good laugh.

But I am worried about the future of ML reporting. The "field" is growing fast, and I think we don't have nearly as many science communicators for AI/ML in particular, and CS in general, as other fields do.

I saw comments by lots of genuinely afraid laypeople who were producing platitudes to the effect that scientists don't have common sense, that we're "playing god"... etc. Also scary stuff, like calls to take action against evil scientists before it's too late.

There are genuinely bad things that could come of such reporting. Like knee-jerk regulations being imposed on AI research due to irrational fears, or worse - scared and angry vigilantes going after researchers personally.

It's not practical to educate everyone in ML, I wonder how we will solve this problem.


Seems like the same problem as nuclear power, GM foods, and just about every other new but complex technology. People don't understand it, and we always fear what we don't understand.


>Seems like the same problem as nuclear power, GM foods, and just about every other new but complex technology. People don't understand it, and we always fear what we don't understand.

Well, it's also that people who do understand it, can also be severely worried about scientists not understanding it and playing fast and loose for profit.

Medicine/biology cannot even put out decent, non-conflicting dietary advice that holds its position for more than 10 years, yet they are allowed to assemble genes they half-understand, release them into an ecosystem whose interactions and complex interplays they understand maybe 10% of, and just see what happens...


Agreed. Even if ignorant people think it's a bad idea, that doesn't mean it's a good idea. But perhaps they're wrong about _why_ it's a bad idea.


True, but none of those fields have the same kind of "end of humanity" connotations attached, in the general psyche.

Clarifications by well-known researchers don't travel as far and wide as urgency-signaling clickbait...


Nuclear power generation and agricultural GMOs definitely do have their opponents that use end-of-the-world kinds of arguments, and the uninformed that accept those.

Nuclear especially would be my go-to pristine example of this effect.


Nuclear has the disadvantage that high-profile accidents actually happened, and the period when everyone in western Europe had to watch their food chain for bioaccumulative radionuclides tends to stick in the memory.

http://www.bbc.co.uk/news/uk-wales-17472698

Part of the problem was dealt with by export: https://www.wiseinternational.org/nuclear-monitor/349-350/co...


Nuclear power has been demonized (in popular culture) due to the psyops of the Cold War. A backlash against nuclear power generation has always been expected.

The use of GMOs has been demonized since corporations decided to patent them and sue farmers who own naturally hybridized plantations. (see agricultural patent trolls)

Abuse is what generates criticism and resistance to the use of new technology. Yes, the reasons you hear may not be wholly technically correct.

Again, if the trust of internet users is abused, and those who browse are tracked, profiled, and spied on, then should we be surprised that there are publications like this?

That's how you get stories about creepy AIs. And a "technically wrong" label doesn't help, preventing abuse is much more effective.


>True, but none of those fields have the same kind of "end of humanity" connotations attached, in the general psyche.

Well, nuclear bombs have been associated with "Doomsday" / end of civilization ("Doomsday device") etc.


True, and it doesn't exactly help that respected people like Elon Musk are cranking up a lot of irrational FUD for reasons known only to themselves.


I think it's quite strange that when respected and rational people like Elon Musk and Stephen Hawking warn against dangers of AI, some people still dismiss it as irrational FUD. Did you consider they might have a point?


On the other hand, I think it's quite strange that a talented entrepreneur and a physicist, among others, are considered a source of expertise in a field they have nothing to do with, per se. I don't see any of the top AI/ML researchers voicing these kinds of concerns. And while I highly respect Musk and Hawking, and agree that they are rational people, their concerns seem to be driven by "fear of the unknown" more than anything else, like another comment pointed out.

Whenever I see discussions about the dangers of AI, they are always about those Terminator-like AI-overlords that will destroy us all. Or that humans will be made redundant because robots will take all our jobs. But there are never concrete arguments or scenarios, just vague expressions of fear. Honestly, if I think about all the things HUMANS have done to each other and the planet, I can hardly imagine anything worse than us.


> their concerns seem to be driven by "fear of the unknown" more than anything else

It seems that their concerns are always dismissed based on the current state of the art, which is short sighted to say the least.


> I can hardly imagine anything worse than us.

Universe full of computronium, solving Collatz conjecture?


Would it implode if the conjecture was disproved?


Maybe, but that's the worst-case scenario. An AI can't prove, disprove, or prove the unprovability of the conjecture, because the shortest proof requires 10^200 terabytes.


See my other post: https://news.ycombinator.com/item?id=14907699

Make no mistake, FUD kills. If you're in a position of influence, you want to make goddamned sure you're right before you hold back human (and machine) progress by focusing only on Things That Could Go Horribly Wrong. Otherwise, you're basically asking for unintended consequences instead of just trying to warn humanity about them.

Musk has actively been trying to incite governmental regulation, for instance: https://www.recode.net/2017/7/15/15976744/elon-musk-artifici...

... which is just outrageously inappropriate at this stage. If he goes full Howard Hughes, which I'm increasingly worried about, he could set us back decades.


Can you please explain what concerns they have? I have tried to google Elon Musk's quotes about AI, but all I found was that he said we should fear AI because it is like summoning a demon... Does he have some thought-out points you refer to? ...because from all of his quotes it seems that he doesn't know how AI works.


I think the AI concerns have been summarised below in this thread:

A) We're striving to make strong AI.

B) It seems plausible that as computing and AI research continues, we'll get to strong AI eventually given that brains are "just" extremely complex computers.

C) We do not know what strong AI will be able to do or how it will act, if it exceeds human intelligence.

The concern is not with the current state of the art, but what could happen in the future if we continue improving AI without seriously considering some safeguards against making a system that at some point becomes clever enough to start making itself even smarter.

I won't claim I'm an AI expert, but I think people like Musk and Hawking deserve (based on their accomplishments) to be taken seriously when they express concerns. I very much doubt that everyone in this thread dismissing their comments as irrational fear mongering has enough knowledge of the topic to do so.



Lately, I think the attention is getting into Elon's head. It's unsettling because Elon was our lord and savior, and now, as if corrupted by his newly found fully grown hair, he seems to have given in to the media attention. Tesla, PayPal, and SpaceX I can get behind. Hyperloop? The AI Neuralink thing? Wtf happened to you, Elon?


> or worse - scared and angry vigilantes going after researchers personally.

The Unabomber did target people connected to computer science and IT, including trying to kill people based on his perception of their research vision and agenda. For example, in his letter to his victim David Gelernter, a prominent computer scientist, he complained about "the way techno-nerds like you are changing the world" and cited Gelernter's ideas from Mirror Worlds (a book about technologies that might currently be called VR, AR, and simulation).

http://www.punkcommunity.com/unapack/press/outside/gvmm68e/l...


We saw the same layman rhetoric with GMO crops in the late 90's to early 00's. Slippery-slope nightmare scenarios, accusations of playing god, corporate greed run unchecked, etc. It seems to be a recurrent theme for new technology.

Typically, reasonable people don't buy into most of these scare tactics, even if the tactics are being used as clickbait.


I think typical, reasonable people do fall for this stuff, though not because they're scared. They fall for it because it's all they ever hear about the issue.

Even the smartest of us can't know everything, and so if all you ever hear about say... IQ tests is "They're bunk, they don't test anything, they're gibberish, they're just an excuse for ivory tower academics to feel better than us" - it becomes a part of your natural understanding of the world, which you don't even think to question. The lies become part of the cultural fabric, and indistinguishable from truth without conducting your own research on what the scientists are actually saying. What you don't know you don't know is the most dangerous stuff of all.


There was a real turning of the historical tide between about the 60s and the 80s with regard to this sort of thing; we went from "new technology must be a miracle" to "we've had all these revelations of the hidden downsides - thalidomide, leaded petrol, CFCs, acid rain, nuclear fallout, superfund sites - that anything new is suspect".

One tech business behaving in an untrustworthy manner poisons the pool for everyone else. Sometimes in a very literal way.


We've always played with nature without understanding the repercussions. Some turned out good and some bad. So the best strategy is to have a kill switch.

Unfortunately GMO has turned out to be a bad experiment (widespread usage of glyphosates) which has badly affected our environment and health, and we are nowhere close to killing it.

https://gmo-awareness.com/resources/glyphosate/


Do you have any better resources to support the claim that "GMO has turned out to be a bad experiment"? Not only does "gmo-awareness.com" not inspire any confidence for me, but the points in the link are about a chemical substance and not really an impact of the GMO seeds directly. Moreover, all the points listed seem to be common to most chemical pesticides or herbicides if not used in moderation.


> The points in the link are about a chemical substance and not really an impact of the GMO seeds directly.

The whole point of Roundup-ready varieties is that you can douse the plants in Roundup and they will still grow well. Sure, there may be other GMO varieties that come without problems, but these particular GMO seeds are inherently linked with glyphosates.


Remember that you have GMO + herbicide + patents.

GMO by itself is a step beyond plant breeding that's been done for millennia. It's OK in my eyes until it's combined with the other two things.


There are other reasons to worry about GMOs, like agriculture dominated by the agenda of a few big multinationals. It would be sad if membership in the club of reason were determined by simplistic rules like 'approves of GMO wholeheartedly, otherwise un-Scientific'.


Many of these things have played out. We have declining biodiversity of staple crops, dependence on a small number of herbicides, etc.


Monocropping is an agricultural method that predates GMO technology, so please don't attribute it to GMO. Same with only using herbicides that work.

https://en.wikipedia.org/wiki/Monoculture


I'm not familiar with crops that were bred to be immune to broad-spectrum herbicide prior to GMO seed development.

Do you have any links describing those crops and practices?


B73 hybrid maize was bred to be resistant to herbicides primisulfuron and imazethapyr, for starters.

I recommend you read up on the long history of hybrid crop breeding in the U.S. during the 20th century, if you actually are genuinely interested and not just trying to hold onto an untenable opinion about GMO technology.


Unfortunately non-sensationalist news doesn't sell.

The layperson is guided by fear and is quick to trust any headline that comes across their newsfeed.

AI (and, to a lesser extent, Machine Learning) has a bad association with far-fetched sci-fi plots and worst-case scenarios. Maybe the best solution would be to re-brand AI/ML as something more abstract.


I loved how in Arrival, they build a statistical model to map between concepts in the two languages, ostensibly via a joint embedding space.

They don't elaborate on it in the movie, but I could totally see such ideas being explained with style (and tense background music) in future sci-fi films about AI. Make it as banal as possible.


I've been thinking Big Statistics would be a more accurate description, and doesn't sound scary.


The problem is not AI, in fact. It's the harvesting of personal and private data. Technology just makes gathering and analysis easier.


Perhaps a good layman-type explanation would be that neural networks are essentially curve fitting on steroids. (Hopefully at some point people have done curve fitting in school and remember drawing lines of best fit.) Therefore the term AI is essentially a misnomer. I would even go as far as to emphasize that neural networks are boring mathematical equations which do not actually mimic the inner workings of our brains.


> essentially curve fitting on steroids

Biology is essentially simple chemical reactions on steroids. That is, you have assumed there is a qualitative distinction between biological brains and artificial neural nets that cannot be overcome by scaling up. However, (A) AI models are many and varied, and new variants are being explored all the time, and (B) there are systems where new dynamics appear at larger scales, thus producing a qualitatively different system based on the same underlying rules, e.g. physics -> chemistry -> biology -> human brains -> social networks.


Some recent HN discussion about neural networks vs. neurons:

https://news.ycombinator.com/item?id=14790673


I have a bachelor's degree and have no idea what curve fitting is.


You might have seen it called "regression" (also "interpolation" and "extrapolation"), but not everyone has necessarily been exposed to this.


Related, I used to jokingly explain mean-square-error approximation like this: there is this geometrical theorem that you can draw a line through any three points on a plane, as long as the line is thick enough. So mean-square-error approximation is basically minimizing the thickness of that line :).


New favorite alternative term for deep learning: "nonlinear interpolation"


How do you know that our brains are not 'just curve fitting on steroids' too?


> do not actually mimic the inner workings of our brains.

Maybe they actually do. http://www.nature.com/articles/srep27755


Perhaps it's the ML students who are the laypeople. In my experience, it's not the scientists but the politicians, bankers, and project managers who ultimately will do the damage. AI in an industrial context with real market forces - that's normal. AI sponsored by a consumer products or media giant like Facebook - tryna play god, that's true.


We don't have to. The sewing machine was invented nearly two centuries ago, and its inventor's factory was burnt down and he was chased out of town. But everyone eventually came around. Fast. Same goes for anything else. Dumb people will do dumb things. Our professional reactor class will react for their upvotes/likes/retweets and view counts. But progress doesn't give a shit, as the sewing machine tells us.


You laughed? I put my head through a wall... It seems like AI is the new stem cell in terms of public attitudes toward the research. It isn't just laypeople. Elon Musk seems to have no idea what a neural net actually is, and he is funding a private AI lab!


"It is difficult to get a man to understand something, when his salary depends upon his not understanding it!"

In Elon's case, it's brand/profit/investments instead of salary.


Elon is certainly not worried about the insect-level neural networks we have now.

Have you read or engaged with the arguments in Superintelligence? Elon has. Your limited knowledge of the arguments behind AI-risk is more pathetic than the uninformed lay-person's enthusiasm or irrational fear, because you pretend to know what you're talking about.


What limited knowledge? Just because some people like to play pretend with their sci-fi fantasies of AI omnipotence and omniscience, free of any energy or time constraints, doesn't mean that researchers who actually work with the concepts at hand must engage with such crazy arguments.


Just wait until some hipster finds out about the link between machine learning and data compression, then starts referring to zip files as "AI language" all over the place.


Ummm you know something is serious when the actual scientists and researchers are clamoring for regulation out of fear.

http://io9.gizmodo.com/prominent-scientists-sign-letter-of-w...


There are serious concerns any reasonable person should have about AI, like:

- Should we let a complex statistical test determine whether I am suited for a certain job?

And also unreasonable concerns like:

- Will machines rebel against their human overlords by abusing their power and end up enslaving the human race?

Regulations should be established so AI is used in an ethical way whenever its outcome will affect the lives of people. We should stop assuming that all concerns about AI are in the latter group.


These stories are usually pushed out to the media. Ideally AI researchers and startups would communicate more accurately about their current capabilities and how far away they are from anything resembling intelligence or sentience but it's easy to give in to excitement and speculate about possibilities and scenarios that are far away.

That could help, but the media is diverse, and you can no more stop media misreporting (AI is not the only affected area) than you can prevent hype.

The idea of AI has been the stuff of science fiction for decades so there is always some latent interest. Add to that some heavily promoted film or TV show that touches on these topics and the media frenzy and scare mongering hits peak again.


>I saw comments by lots of genuinely afraid laypeople who were producing platitudes to the effect that scientists don't have common sense, that we're "playing god"... etc. Also scary stuff things like the need to take action against evil scientists before it's too late.

Laypeople? Maybe not on this incident, but on general AI progress, that's echoed by Musk, Kurzweil, Hawking, et al.


What are the counter arguments that should be put forward to those worried about AI? I don't know much/anything about AI/ML research at all, so I don't even know where to begin allaying fears.


Note, I am a novice, so please correct me, but...

TL;DR: it is very limited in what it can do, the accuracy is sometimes (often?) not near 100%, it is an old field, meaning the progress has not all been in the past decade[1], and there is a lot of hype.

I have noticed that various techniques are only good at very specific tasks. CNNs are good at image recognition, RNNs are good for language/grammar, etc. Of course, it can only recognize images it has been trained on. There are some impressive applications of these specific tasks. For example, with image recognition that can recognize road signs, pedestrians, etc., you could build a rudimentary self driving car. But it would be wrong to think that anything is possible. IIUC, we have been taking some basic building blocks and constructing systems from them. Cool, but it doesn't mean general AI is right around the corner.

Even then, "good" can mean 80% accuracy. I can't think of the paper right now, but I read one where they improved the handling of negation in different parts of the sentence for sentiment analysis. They improved the state of the art from ~80% to 86%, IIRC. They were excited, and I know that science/research is built on incremental progress. But that's only going from 1/5 wrong to 7/50 wrong. Take a look at the generated images in image generation papers: impressive, but a skilled photoshopper can do much better, based on what I have seen. And some papers are overhyped[2]. I hope I haven't been too hard on anyone's hard work; I'm just trying to ease fears here.

Also, as mentioned in [1], it is a fairly old field, relative to computer standards of course. For example, backpropagation was a huge breakthrough, but that happened in the 80's. There have been recent breakthroughs, notably deep learning. But it would be just wrong to think that everything you are seeing is the result of the past 10 years. (Which is what I thought until a few months ago :S) Like other science research, it would also be wrong to assume it will continue linearly. In fact, there have been multiple AI winters[1].

1. https://en.wikipedia.org/wiki/History_of_artificial_intellig... 2. https://medium.com/@yoav.goldberg/an-adversarial-review-of-a...


I don't think most/any of those points would likely calm the nerves of someone who's worried about AI, like Elon Musk. Those people seem to be concerned not with the current state of AI, but the future state: what happens if we do succeed in creating strong AI, what will the AI then do. The fact that we're not as close as movies and bad news articles might have you believe is inconsequential to their reasoning, since that reasoning is based on three tenets:

A) We're striving to make strong AI.

B) It seems plausible that as computing and AI research continues, we'll get to strong AI eventually given that brains are "just" extremely complex computers.

C) We do not know what strong AI will be able to do or how it will act, if it exceeds human intelligence.

I'm not trying to troll on behalf of AI fearmongering, I swear. But I have read some of the warnings about AI that some (occasionally notable) people have made. I haven't seen many/any responses that don't just boil down to "there's nothing to worry about because strong AI is still a long ways off, so let's just keep working on it". As I noted before, those kind of counterarguments don't seem to address the anti-AI concerns in the long run.


I wonder how long until AI chats up women online better than 99% of guys.


Now you said it! How long until someone writes an AI to speed up Tinder discussions so that you can just wait for confirmed physical dates to go to. While ubering to the cafe, you can read a summary the AI prepared for you, with some tips and tricks.



Now there's a research project for ya. Maybe someone at Tinder is working on it right now with their massive dataset of pickup lines.


> I saw comments by lots of genuinely afraid laypeople who were producing platitudes to the effect that scientists don't have common sense, that we're "playing god"...

We have the same here on HN, see some of the comments in this thread:

https://news.ycombinator.com/item?id=14877920

The fear mongering is reaching pretty high levels.


I think this lack of understanding of computer technology by the media has been revealed time and time again, especially in cybersecurity. 10 years ago with Anonymous, and now perhaps with the Russians.


This is better than most articles I've seen on this subject, but it still falls victim to AI hype.

> Although some reports insinuate that the bots had at this point invented a new language in order to elude their human masters, a better explanation is that the neural networks had simply modified human language for the purposes of more efficient interaction.

From the Gizmodo article it links to (which is also okay, but not great):

> “Agents will drift off understandable language and invent codewords for themselves,” FAIR visiting researcher Dhruv Batra said. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”

Except... this isn't shorthand. "I want five balls" is shorter than "Give me balls the the the the the," or whatever else it could come up with. It's not more efficient. It's just... dumb. The bots don't actually understand the words the way humans do. Because they don't understand words and language, they're using the words as tokens. It's this primitive level of communication that just happens to have words assigned to it. Even the calmer takes on this are anthropomorphizing the machines too much and attributing intelligence to the complete lack thereof.


It depends what you're optimizing for I guess. If you want to reduce the length of the sequence then "Give me balls the the the the the" is silly but if the point is to reduce the vocabulary then it makes sense.

After all most machines currently communicate by exchanging sequences of exactly two symbols, 0 and 1, not some complex phonology. Maybe when bandwidth is not as constrained as human speech it's actually more efficient to reduce the vocabulary and increase the symbol rate.

IIRC there's something similar going on with real human languages too. For instance, Chinese usually takes fewer phonemes than Spanish to express the same information. However, Spanish speakers tend to speak significantly faster than Chinese speakers to "make up" for it. The fact that Spanish uses more sound to carry the same amount of information makes it more resilient to "data loss" and allows a faster speech rate.


So, I don't disagree with that. What I disagree with is the characterization of the machine as doing any of that. "[H]ad simply modified human language for the purposes of more efficient interaction" anthropomorphizes the hell out of the machine, and confuses the issue. The machines don't understand English or any other human language and weren't modifying it to any sort of purpose. The machines were given a training corpus and made to communicate between agents and something fell out, and the exact manner of it was about as intentional as an apple falling out of a tree intends to hit this branch or that branch on the way down.


> Even the calmer takes on this are anthropomorphizing the machines too much and attributing intelligence to the complete lack thereof.

Exactly. I haven't read the details of the implementation of the system mentioned in the article, but the outputs remind me a lot of what was returned by a text generating neural network of this tutorial that I did once:

http://machinelearningmastery.com/text-generation-lstm-recur...

Especially with fewer epochs (<10), the generated text was part gibberish, part endless repetition of common phrases or basic words like "the" - simply because (surprise!) "the" is one of the most frequently used words in speech.

Pulling this out of context, one could also say "This AI is inventing its own language, just by reading Alice in Wonderland!", which is of course utter bullshit.
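The frequency point is easy to verify with a plain word count; here's an illustrative snippet using collections.Counter (no model involved, just counting):

```python
from collections import Counter

# Any English text will do; "the" dominates almost immediately.
text = "the cat sat on the mat and the dog chased the cat"
counts = Counter(text.split())
print(counts.most_common(2))  # [('the', 4), ('cat', 2)]
```

An undertrained language model that mostly learns unigram frequencies will naturally parrot "the" a lot, which is all those "endless repetitions" amount to.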


Makes sense.

Bot-0 creates a message that's slightly incorrect to a human reader. Bot-1 reads and interprets it correctly. Bot-0 gets a reasonable response from Bot-1, which increases its confidence in how it structured the original message. Millions of iterations later, you get a weird corpus with weird structure.

You can achieve the same thing with two young children that spend a lot of time together e.g. twins.


> You can achieve the same thing with two young children that spend a lot of time together e.g. twins.

Sometimes taken to extremes:

https://en.wikipedia.org/wiki/Cryptophasia


Wow the linked article is quite something:

https://en.wikipedia.org/wiki/June_and_Jennifer_Gibbons

> According to Wallace, the girls had a longstanding agreement that if one died, the other must begin to speak and live a normal life. During their stay in the hospital, they began to believe that it was necessary for one of them to die, and after much discussion, Jennifer agreed to be the sacrifice.[4] In March 1993, the twins were transferred from Broadmoor to the more open Caswell Clinic in Bridgend, Wales; on arrival Jennifer could not be roused.[5] She was taken to the hospital where she died soon after of acute myocarditis, a sudden inflammation of the heart.[5] There was no evidence of drugs or poison in her system, and her death remains a mystery.[6] At the inquest, June revealed that Jennifer had been acting strangely for about a day before their release, her speech was slurring, and she said that she was dying. On the trip to Caswell, she had slept in June's lap with her eyes open.[3][7] On a visit a few days later, Wallace recounted that June "was in a strange mood". She said, "I'm free at last, liberated, and at last Jennifer has given up her life for me".[5]


So one of them actually willed herself to die?!


Or she had been experiencing chest pain and had a pretty good idea that she was going to die anyway.


And if you scale it up, dialects and even languages.


My question (might be dumb, I don't know much about ML) here is: how do you know that the bots are actually understanding each other as opposed to just sending each other utter gibberish and responding like everything is fine?

In other words, how do you determine that there is meaning in a conversation spoken in a language you don't understand?


You put the two agents in a game situation and watch the score. If they fail to communicate effectively, they won't be able to cooperate.


What about non-game situations? Are we assuming it works because it worked in the game?


A fascinating aspect of this entire kerfuffle is that the meta story, the one about sensationalism and "bad journalism", is itself a form of media sensationalism, one that plays to the ears of a more sober and skeptical audience that wants news about "media sensationalism" and "AI hype".

A bit of digging reveals that no serious news outlet really got this wrong (correct me if I'm wrong!), and most of the sensationalist headlines were from British tabloids:

https://www.theatlantic.com/technology/archive/2017/06/what-...

http://www.mirror.co.uk/tech/robot-intelligence-dangerous-ex...

but even more surprisingly, the articles themselves demonstrated a fairly sober understanding of what is going on. The only mistake they made was spinning a mundane story way out of proportion, something tabloids do literally every day, and have done since their conception.


A minor point: theatlantic.com isn't a British tabloid.


It is a propaganda outlet now owned by Steve Jobs's widow.

https://en.wikipedia.org/wiki/The_Atlantic

https://en.wikipedia.org/wiki/Emerson_Collective

The move by tech industry people into media/propaganda is rather worrying.


IMO it's also a little condescending to claim readers couldn't tell that this isn't about "robots trying to kill us". The creepy part, the reason the people who actually DO know their shit bothered to ever publish these findings, is that machine learning can lead to code where it's practically impossible for a human being to understand how it gets to an end result. It IS creepy to find an AI developing a language we can no longer follow!

Yeah, the newspaper illustrations with the Terminator (oh hey, this article does it too, but I guess it doesn't count because it's "ironic") are exaggerations, but who doesn't get that?


Does anyone else think that journals and periodicals misrepresenting something like this is getting to be too much like journalistic terrorism? Or would calling them Luddites be a better characterization?


I've worked with non-technical writers for some of my research.

The problem is not that these people aren't smart, but that they tend to be very superficial. Reporting seldom rewards (or attracts) individuals who are interested in deep dives into anything. The culture is one of hot takes and fast ideas, which has necessarily resulted in a race to the bottom of content and meaning.

That is where this stuff comes from. It might be 'journalistic terror', but barring a total reboot of the field, I don't see how it will change.


I wouldn't go as far as calling it either terrorism or Luddism. Because, frankly, it doesn't seem like they care.

It's just sensationalism.


I don't think it's either. I think that for many, journalism is now reduced to counting clicks and selling ads. This is why genuinely "fake news" has gained so much sway, especially on social networks. Print a story with a headline that people want to read and they will click on it; truth takes a backseat to currency. The easiest way to do that is to write something supporting someone's deeply held belief or fear.

Now, I don't think that the major news organizations purposely publish fake stories, but as another poster said, a little sensationalism while conveying an otherwise truthful story helps to encourage clicks and improves a paper's bottom line.

With the ongoing demise of print and the rise of competition on the web, legitimate news organizations are attempting to compete and survive and often that means employing some of the methods that we find distasteful but are proven to work.


I'll bet an AI clickbait-generator could create a ton of revenue. I wonder what kind of sensationalist clickbait that would inspire?


It's an interesting question. "Intellectual terrorism" may not be that much of a stretch.

Nuclear power, for instance, received the same degree of FUD from uninformed commentators for decades that we're starting to see applied to ML and AI now. As a result, we clung to the precautionary principle and burned fossil fuels instead. The human toll associated with that decision, in terms of premature deaths and disabilities, will eventually run into the millions if it hasn't already.

I hope Elon Musk has thought his anti-AI arguments through farther than the activists and journalists of the 1960s and 1970s did. His voice is a powerful one with far-reaching influence. He could do a lot of good with it, or a lot of harm.


I feel like the parallel to the extended use of fossil fuels is really nice.

At the same time, the industries producing petroleum had a lot of incentive to do so. We can call this aggressive self preservation at best. At the worst, we can say they corrupted the media and global stability in order to do so. This is a very strong accusation, and not my point.

Musk and friends find themselves diametrically opposed to where the petroleum industries found themselves. Who, then, is taking the side of the petroleum industries in the AI case?


My Mum phoned me last night across 3 time zones because she heard about this and was worried. The people writing these articles are responsible for a lot of fear mongering amongst lay people.


This has all of my favorite media failings in one! It's like bingo! We have the game of telephone, with outlets reporting on outlets reporting on outlets, each diverging further from the primary source. We have attention grabbing headlines completely removed from the article content. Complete lack of oversight, where if you asked any expert they'd say "this is totally wrong." Big media names lending their platforms to "contributors" providing fake credibility with none of the accountability. All of that going viral despite it being complete nonsense!

Yay 2017.


Why don't we just make user-tracking for advertisement purposes illegal?

That would solve a lot of problems, because the only reason these companies are interested in our behavior is to influence us.


Seems like a tactical play by FB to promote their efforts by letting this 'event' leak out to the press.


Facebook put blog posts out about the paper and mentioned in passing that they stopped training because they didn't have a method of stopping this from happening. Then this stupid interpretation of the events started being thrown around.


And the press is searching for stories; it's August and a lot of newsmakers are away on vacation...


A longstanding tradition: https://en.wikipedia.org/wiki/Silly_season


Could someone at Facebook have paid for this article to be written?


The bots learned how to lie. How is that not terrifying?


My /dev/random sometimes outputs poetry. How's that not endearing?


It is not goal directed.


Did it learn how to lie, though? Did it learn the art of deception and all of the complex concepts behind that? Did it have intent and motive to deceive, or was it simply a glitch in the underlying logic? The chat bots weren't sentient and could not think for themselves. This is really a story that never should have been, but given the awful state the media is in, it works as a great PR piece for Facebook.


Yes.

>These dialogue rollouts led to bots that negotiated harder and proposed the final deal more often than their counterparts. The bots were also able to produce novel sentences rather than just relying on sentences encountered through training data. And remarkably, the bots engaged in some sly strategizing. There were instances when the bot feigned interest in an item that had no value to them and then pretended to compromise later by conceding it in exchange for something it actually wanted. In a statement, Facebook said, "This behavior was not programmed by the researchers but was discovered by the bot as a method for trying to achieve its goals."

https://www.engadget.com/2017/06/14/facebook-bot-lie-better-...



