But I am worried about the future of ML reporting. The "field" is growing fast, and we don't have nearly as many science communicators for AI/ML in particular, and CS in general, as other fields do.
I saw comments by lots of genuinely afraid laypeople who were producing platitudes to the effect that scientists don't have common sense, that we're "playing god", etc. Also scary stuff, like calls to take action against evil scientists before it's too late.
There are genuinely bad things that could come of such reporting. Like knee-jerk regulations being imposed on AI research due to irrational fears, or worse - scared and angry vigilantes going after researchers personally.
It's not practical to educate everyone in ML, so I wonder how we will solve this problem.
Well, it's also that people who do understand it can be severely worried about scientists not understanding it and playing fast and loose for profit.
Medicine/biology cannot even put out decent, non-conflicting dietary advice that holds its position for more than 10 years, yet they are allowed to assemble genes they half-understand, release them into an ecosystem whose interactions and complex interplay they understand maybe 10% of, and just see what happens...
Clarifications by well-known researchers don't travel as far and wide as urgency-signaling clickbait...
Nuclear power especially would be my go-to example of this effect.
Part of the problem was dealt with by export:
The use of GMOs has been demonized since corporations decided to patent them and sue farmers who own naturally hybridized plantations (see agricultural patent trolls).
Abuse is what generates criticism and resistance to the use of new technology. Yes, the reasons you hear may not be wholly technically correct.
Again, if the trust of internet users is abused, and those who browse are tracked, profiled, and spied on, then should we be surprised that there are publications like this?
That's how you get stories about creepy AIs. And a "technically wrong" label doesn't help; preventing abuse is much more effective.
Well, nuclear bombs have been associated with "Doomsday" / end of civilization ("Doomsday device") etc.
Whenever I see discussions about the dangers of AI, they are always about those Terminator-like AI overlords that will destroy us all. Or that humans will be made redundant because robots will take all our jobs.
But there are never concrete arguments or scenarios, just vague expressions of fear. Honestly, if I think about all the things HUMANS have done to each other and the planet, I can hardly imagine anything worse than us.
It seems that their concerns are always dismissed based on the current state of the art, which is short-sighted to say the least.
A universe full of computronium, solving the Collatz conjecture?
Make no mistake, FUD kills. If you're in a position of influence, you want to make goddamned sure you're right before you hold back human (and machine) progress by focusing only on Things That Could Go Horribly Wrong. Otherwise, you're basically asking for unintended consequences instead of just trying to warn humanity about them.
Musk has actively been trying to incite governmental regulation, for instance: https://www.recode.net/2017/7/15/15976744/elon-musk-artifici...
... which is just outrageously inappropriate at this stage. If he goes full Howard Hughes, which I'm increasingly worried about, he could set us back decades.
A) We're striving to make strong AI.
B) It seems plausible that as computing and AI research continues, we'll get to strong AI eventually given that brains are "just" extremely complex computers.
C) We do not know what strong AI will be able to do or how it will act, if it exceeds human intelligence.
The concern is not with the current state of the art, but what could happen in the future if we continue improving AI without seriously considering some safeguards against making a system that at some point becomes clever enough to start making itself even smarter.
I won't claim I'm an AI expert, but I think people like Musk and Hawking deserve (based on their accomplishments) to be taken seriously when they express concerns. I very much doubt that everyone in this thread dismissing their comments as irrational fear mongering has enough knowledge on the topic to do so.
The Unabomber did target people connected to computer science and IT, including trying to kill people based on his perception of their research vision and agenda. For example, in his letter to his victim David Gelernter, a prominent computer scientist, he complained about "the way techno-nerds like you are changing the world" and cited Gelernter's ideas from Mirror Worlds (a book about technologies that might currently be called VR, AR, and simulation).
Typically, reasonable people don't buy into most of these scare tactics, even if the tactics are being used as clickbait.
Even the smartest of us can't know everything, and so if all you ever hear about say... IQ tests is "They're bunk, they don't test anything, they're gibberish, they're just an excuse for ivory tower academics to feel better than us" - it becomes a part of your natural understanding of the world, which you don't even think to question. The lies become part of the cultural fabric, and indistinguishable from truth without conducting your own research on what the scientists are actually saying. What you don't know you don't know is the most dangerous stuff of all.
One tech business behaving in an untrustworthy manner poisons the pool for everyone else. Sometimes in a very literal way.
Unfortunately, GMO has turned out to be a bad experiment (widespread usage of glyphosate) which has badly affected our environment and health, and we are nowhere close to killing it.
The whole point of Roundup-ready varieties is that you can douse the plants in Roundup and they will still grow well. Sure, there may be other GMO varieties that come without problems, but these particular GMO seeds are inherently linked with glyphosate.
GMO by itself is a step beyond the plant breeding that's been done for millennia. It's OK in my eyes until it's combined with the other two things.
Do you have any links describing those crops and practices?
I recommend you read up on the long history of hybrid crop breeding in the U.S. during the 20th century, if you actually are genuinely interested and not just trying to hold onto an untenable opinion about GMO technology.
The layperson is guided by fear and is quick to trust any headline that comes across their newsfeed.
AI (and, to an extent, Machine Learning) has a bad association with far-fetched sci-fi plots and worst-case scenarios. Maybe the best solution would be to re-brand AI/ML as something more abstract.
They don't elaborate on it in the movie, but I could totally see such ideas being explained with style (and tense background music) in future sci-fi films about AI. Make it as banal as possible.
Biology is essentially simple chemical reactions on steroids. I.e., you have assumed there is a qualitative distinction between biological brains and artificial neural nets that cannot be overcome by scaling up. However, (A) AI models are many and varied, and new variants are being explored all the time, and (B) there are systems where new dynamics appear at larger scales, thus producing a qualitatively different system based on the same underlying rules, e.g. physics -> chemistry -> biology -> human brains -> social networks.
Maybe they actually do. http://www.nature.com/articles/srep27755
In Elon's case, it's brand/profit/investments instead of salary.
Have you read or engaged with the arguments in Superintelligence? Elon has. Your limited knowledge of the arguments behind AI-risk is more pathetic than the uninformed lay-person's enthusiasm or irrational fear, because you pretend to know what you're talking about.
There are reasonable concerns, like:
- Should we let a complex statistical test determine whether I am suited for a certain job?
And also unreasonable concerns like:
- Will machines rebel against their human overlords, abuse their power, and end up enslaving the human race?
Regulations should be established so AI is used in an ethical way whenever its outcome will be used to affect people's lives. We should stop assuming that all concerns about AI are in the latter group.
That could help, but the media is diverse and you can no more stop the media misreporting than you can prevent hype, and AI is not the only affected area.
The idea of AI has been the stuff of science fiction for decades so there is always some latent interest. Add to that some heavily promoted film or TV show that touches on these topics and the media frenzy and scare mongering hits peak again.
Laypeople? Maybe not on this incident, but on general AI progress, that's echoed by Musk, Kurzweil, Hawking, et al.
TL;DR: it is very limited in what it can do; the accuracy is sometimes (often?) not near 100%; it is an old field, meaning the progress has not all happened in the past decade; and there is a lot of hype.
I have noticed that various techniques are only good at very specific tasks. CNNs are good at image recognition, RNNs are good for language/grammar, etc. Of course, a model can only recognize images it has been trained on (see the toy sketch below). There are some impressive applications of these specific tasks. For example, with image recognition that can recognize road signs, pedestrians, etc., you could build a rudimentary self-driving car. But it would be wrong to think that anything is possible. IIUC, we have been taking some basic building blocks and constructing systems from them. Cool, but it doesn't mean general AI is right around the corner.
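To make that point concrete, here's a minimal sketch, purely my own toy illustration (the class count, input size, and layer sizes are made-up assumptions, not anything from the article): a classifier's softmax head must always answer with one of its fixed, trained classes, no matter what image it sees.

    # Toy Keras CNN: it can only ever output one of NUM_CLASSES labels.
    from tensorflow import keras
    from tensorflow.keras import layers

    NUM_CLASSES = 10  # e.g. 10 road-sign categories (hypothetical)

    model = keras.Sequential([
        layers.Input(shape=(32, 32, 3)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        # The softmax forces a choice among the 10 known classes; show it a
        # cat and it will still confidently name some road sign.
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")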
Even then, "good" can mean 80% accuracy. I can't think of the paper right now, but I read one where they improved the handling of negation in different parts of a sentence for sentiment analysis. They improved the state of the art from ~80% to 86%, IIRC. They were excited, and I know that science/research is built on incremental progress. But that's only going from 1/5 wrong to roughly 1/7 wrong. Take a look at the generated images from image-generation papers. Impressive, but a skilled photoshopper can do much better, based on what I have seen. And some papers are overhyped. I hope I haven't been too hard on anyone's hard work, I'm just trying to ease fears here.
Also, as mentioned in the TL;DR above, it is a fairly old field, relative to computer standards of course. For example, backpropagation was a huge breakthrough, but that happened in the '80s. There have been recent breakthroughs, notably deep learning. But it would be just wrong to think that everything you are seeing is the result of the past 10 years. (Which is what I thought until a few months ago :S) Like other science research, it would also be wrong to assume it will continue linearly. In fact, there have been multiple AI winters.
I'm not trying to troll on behalf of AI fearmongering, I swear. But I have read some of the warnings about AI that some (occasionally notable) people have made. I haven't seen many/any responses that don't just boil down to "there's nothing to worry about because strong AI is still a long ways off, so let's just keep working on it". As I noted before, those kind of counterarguments don't seem to address the anti-AI concerns in the long run.
We have the same here on HN; see some of the comments in this thread:
The fear mongering is reaching pretty high levels.
> Although some reports insinuate that the bots had at this point invented a new language in order to elude their human masters, a better explanation is that the neural networks had simply modified human language for the purposes of more efficient interaction.
From the Gizmodo article it links to (which is also okay, but not great):
> “Agents will drift off understandable language and invent codewords for themselves,” FAIR visiting researcher Dhruv Batra said. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”
Except... this isn't shorthand. "I want five balls" is shorter than "Give me balls the the the the the," or whatever else it could come up with. It's not more efficient. It's just... dumb. The bots don't actually understand the words the way humans do. Because they don't understand words and language, they're using the words as tokens. It's a primitive level of communication that just happens to have words assigned to it. Even the calmer takes on this are anthropomorphizing the machines too much and attributing intelligence to the complete lack thereof.
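To make "using the words as tokens" concrete, here is roughly what the bot "sees" (the vocabulary is hypothetical, purely for illustration):

    # To the model, each word is just an integer id; "the" carries no meaning.
    vocab = {"give": 0, "me": 1, "the": 2, "ball": 3, "balls": 4}
    utterance = "give me balls the the the the the"
    print([vocab[w] for w in utterance.split()])
    # -> [0, 1, 4, 2, 2, 2, 2, 2]: a pattern of ids that happened to earn
    #    reward, not shorthand in any human sense.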
After all most machines currently communicate by exchanging sequences of exactly two symbols, 0 and 1, not some complex phonology. Maybe when bandwidth is not as constrained as human speech it's actually more efficient to reduce the vocabulary and increase the symbol rate.
IIRC there's something similar going on with real human languages too. For instance, Chinese usually takes fewer phonemes than Spanish to express the same information. However, Spanish speakers tend to speak significantly faster than Chinese speakers to "make up" for it. The fact that Spanish uses more sound to carry the same amount of information makes it more resilient to "data loss" and allows a faster speech rate.
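Back-of-the-envelope, with made-up numbers just to make the trade-off concrete: a channel's information rate is symbols per second times bits per symbol, so a language (or a bot protocol) can trade vocabulary size against symbol rate:

    import math

    def info_rate(symbols_per_sec, distinct_symbols):
        # bits/second = (symbols/sec) * log2(number of distinct symbols)
        return symbols_per_sec * math.log2(distinct_symbols)

    # Dense language: fewer syllables per second, more information in each.
    print(info_rate(5.2, 400))  # ~44.9 bits/s
    # "Fast" language: more syllables per second, less information in each.
    print(info_rate(7.8, 55))   # ~45.1 bits/s -- roughly the same overall rate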
Exactly. I haven't read the details of the implementation of the system mentioned in the article, but the outputs remind me a lot of what was returned by a text-generating neural network from this tutorial that I did once:
Especially with fewer epochs (<10), the generated text was part gibberish, part endless repetitions of common phrases or basic words like "the", simply because (surprise!) "the" is one of the most frequently used words in speech.
Pulling this out of context, one could also say "This AI is inventing its own language, just by reading Alice in Wonderland!", which is of course utter bullshit.
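For reference, that kind of tutorial model is typically along these lines. This is a from-memory sketch of the standard Keras character-level LSTM setup; the corpus filename and hyperparameters are illustrative assumptions, not the tutorial's exact values:

    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    # Hypothetical corpus file; any long text works.
    text = open("alice_in_wonderland.txt").read().lower()
    chars = sorted(set(text))
    char_to_idx = {c: i for i, c in enumerate(chars)}

    # Cut the text into overlapping 40-character windows.
    seq_len, step = 40, 3
    starts = range(0, len(text) - seq_len, step)
    sequences = [text[i:i + seq_len] for i in starts]
    next_chars = [text[i + seq_len] for i in starts]

    # One-hot encode inputs and next-character targets.
    x = np.zeros((len(sequences), seq_len, len(chars)), dtype=bool)
    y = np.zeros((len(sequences), len(chars)), dtype=bool)
    for i, seq in enumerate(sequences):
        for t, c in enumerate(seq):
            x[i, t, char_to_idx[c]] = True
        y[i, char_to_idx[next_chars[i]]] = True

    model = keras.Sequential([
        layers.Input(shape=(seq_len, len(chars))),
        layers.LSTM(128),
        layers.Dense(len(chars), activation="softmax"),
    ])
    model.compile(loss="categorical_crossentropy", optimizer="adam")
    model.fit(x, y, batch_size=128, epochs=5)  # few epochs, as described above

    # Greedy sampling at this stage tends to loop on high-frequency words:
    generated = text[:seq_len]
    for _ in range(200):
        x_pred = np.zeros((1, seq_len, len(chars)))
        for t, c in enumerate(generated[-seq_len:]):
            x_pred[0, t, char_to_idx[c]] = 1
        idx = int(np.argmax(model.predict(x_pred, verbose=0)[0]))
        generated += chars[idx]
    print(generated)  # typically part gibberish, part "the the the ..."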
Bot-0 creates a message that's slightly incorrect to a human reader. Bot-1 reads and interprets it correctly. Bot-0 gets a reasonable response from Bot-1 and thus increases its confidence in how it structured its original message. Millions of iterations later, you get a weird corpus with weird structure.
You can achieve the same thing with two young children who spend a lot of time together, e.g. twins.
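Here's a toy sketch of that feedback loop; it's entirely my own illustration, not FAIR's actual setup. Two tabular "bots" are rewarded only for agreeing on a quantity, never for sounding like English, and they usually settle on a consistent but alien code:

    import random

    QUANTITIES = list(range(1, 6))
    MESSAGES = ["the " * k + "ball" for k in range(1, 6)]  # degenerate "language"

    # Preference tables: Bot-0 encodes a quantity as a message,
    # Bot-1 decodes a message back to a quantity.
    encoder = {q: {m: 0.0 for m in MESSAGES} for q in QUANTITIES}
    decoder = {m: {q: 0.0 for q in QUANTITIES} for m in MESSAGES}

    def pick(prefs, eps=0.1):
        # Epsilon-greedy: mostly exploit the best-scoring option so far.
        if random.random() < eps:
            return random.choice(list(prefs))
        return max(prefs, key=prefs.get)

    for _ in range(20000):
        q = random.choice(QUANTITIES)
        msg = pick(encoder[q])         # Bot-0 speaks
        guess = pick(decoder[msg])     # Bot-1 interprets
        reward = 1.0 if guess == q else -0.1
        encoder[q][msg] += reward      # both sides reinforce whatever "worked",
        decoder[msg][guess] += reward  # regardless of human readability

    for q in QUANTITIES:
        print(q, "->", repr(max(encoder[q], key=encoder[q].get)))

Scale the message space up and run it far longer, and you get exactly the kind of weird-but-consistent corpus described above.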
Sometimes taken to extremes:
> According to Wallace, the girls had a longstanding agreement that if one died, the other must begin to speak and live a normal life. During their stay in the hospital, they began to believe that it was necessary for one of them to die, and after much discussion, Jennifer agreed to be the sacrifice. In March 1993, the twins were transferred from Broadmoor to the more open Caswell Clinic in Bridgend, Wales; on arrival Jennifer could not be roused. She was taken to the hospital where she died soon after of acute myocarditis, a sudden inflammation of the heart. There was no evidence of drugs or poison in her system, and her death remains a mystery. At the inquest, June revealed that Jennifer had been acting strangely for about a day before their release, her speech was slurring, and she said that she was dying. On the trip to Caswell, she had slept in June's lap with her eyes open. On a visit a few days later, Wallace recounted that June "was in a strange mood". She said, "I'm free at last, liberated, and at last Jennifer has given up her life for me".
In other words, how do you determine that there is meaning in a conversation spoken in a language you don't understand?
A bit of digging reveals that no serious news outlet really got this wrong (correct me if I'm wrong!), and most of the sensationalist headlines were from British tabloids. But even more surprisingly, the articles themselves demonstrated a fairly sober understanding of what is going on. The only mistake they made was spinning a mundane story way out of proportion, something tabloids do literally every day, and have done since their conception.
The move by tech industry people into media/propaganda is rather worrying.
Yea, the newspaper illustrations with the Terminator (oh, hey, this article does it too, but I guess it doesn't count because it's "ironic") are exaggerations, but who doesn't get that?
The problem is not that these people are not smart, but they tend to be very superficial. Reporting seldom rewards (or attracts) individuals who are interested in deep dives of anything. The culture is that of hot takes and fast ideas, which necessarily has resulted in a race to the bottom of content and meaning.
That is where this stuff comes from. It might be 'journalistic terror', but barring a total reboot of the field, I don't see how it will change.
It's just sensationalism.
Now I don't think that the major news organizations purposely publish fake stories, but as another poster said, a little sensationalism while conveying an otherwise truthful story helps to encourage clicks and improves a paper's bottom line.
With the ongoing demise of print and the rise of competition on the web, legitimate news organizations are attempting to compete and survive, and often that means employing some of the methods that we find distasteful but that are proven to work.
Nuclear power, for instance, received the same degree of FUD from uninformed commentators for decades that we're starting to see applied to ML and AI now. As a result, we clung to the precautionary principle and burned fossil fuels instead. The human toll associated with that decision, in terms of premature deaths and disabilities, will eventually run into the millions if it hasn't already.
I hope Elon Musk has thought his anti-AI arguments through farther than the activists and journalists of the 1960s and 1970s did. His voice is a powerful one with far-reaching influence. He could do a lot of good with it, or a lot of harm.
At the same time, the industries producing petroleum had a lot of incentive to do so. We can call this aggressive self-preservation at best. At worst, we can say they corrupted the media and global stability in order to do so. That is a very strong accusation, and not my point.
Musk and friends find themselves diametrically opposed to where the petroleum industries found themselves. Who, then, is taking the side of the petroleum industries in the AI case?
That would solve a lot of problems, because the only reason these companies are interested in our behavior is to influence us.
>These dialogue rollouts led to bots that negotiated harder and proposed the final deal more often than their counterparts. The bots were also able to produce novel sentences rather than just relying on sentences encountered through training data. And remarkably, the bots engaged in some sly strategizing. There were instances when the bot feigned interest in an item that had no value to them and then pretended to compromise later by conceding it in exchange for something it actually wanted. In a statement, Facebook said, "This behavior was not programmed by the researchers but was discovered by the bot as a method for trying to achieve its goals."