This could indicate the model is actually representing a grammar.
In abstract terms, a grammar is both a recogniser and a generator. That encoded grammars are, in practice, used in only one mode or the other is an artifact of the implementation. And as I've said here before, there is at least the example of Definite Clause Grammars in Prolog, which are both generators and recognisers out of the box.
Anyway, a grammar capable of generating fake news text would also be capable of recognising fake news text.
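The recogniser/generator duality above can be made concrete with a toy context-free grammar in Python. This is a minimal sketch, not how real parsers work (they parse rather than enumerate), but the same data structure drives both directions:

```python
import itertools

# A toy context-free grammar, written once and used in both directions.
# Nonterminals are keys of the dict; anything else is a terminal word.
GRAMMAR = {
    "S": [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N": [["dog"], ["cat"]],
    "V": [["sees"], ["chases"]],
}

def generate(symbol="S"):
    """Yield every sentence the grammar derives (generator mode)."""
    if symbol not in GRAMMAR:  # terminal word
        yield [symbol]
        return
    for production in GRAMMAR[symbol]:
        # Expand each symbol of the production, then combine the
        # expansions with a cross product.
        parts = [list(generate(s)) for s in production]
        for combo in itertools.product(*parts):
            yield [word for part in combo for word in part]

def recognize(words):
    """Accept a sentence iff the grammar derives it (recogniser mode).

    Brute force is fine for a toy; a real recogniser would parse."""
    return any(words == sentence for sentence in generate())
```

Here `recognize("the dog sees the cat".split())` is true, and `generate()` enumerates all eight sentences the grammar can produce, which is exactly the sense in which one grammar is both a generator and a recogniser.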
Sounds like a pretty bad idea, especially if they decide to be gatekeepers of factual articles. It requires the entire team to know their biases one way or another, regardless of whether they think they're "right" or not.
Edit: Nevertheless, it is still an amazing piece of work. The quality of the generated text is astounding.
If you get your detector working well enough today, next week is another story.
>If you get your detector working well enough today, next week is another story.
Yes, you need to do some work to be able to catch up. And you have no guarantee you'll succeed.
> the generator was trained explicitly to fool a detector
Here again you have some work to do to fool the detector, and you have no guarantee of success either, every time the detector gets smarter.
Moreover, the generator wasn't trained explicitly to fool a detector but to fool humans. The detector, on the other hand, was trained explicitly to unmask the generator.
Yes, but “next week”’s one will be trained to fool humans AND this week’s Grover - which is still likely to be easy.
Next week’s Grover will be harder to get right against this model, but it will likely be built.
The faker model of two weeks from now will be harder to build (against next week’s updated Grover), and so on.
There is no fallacy here. An attacker usually has an easier time than a defender, because the defender must respond to every attack, whereas an attacker can vary and pick every detail of their attack.
Generative neural networks these days are both fascinating and depressing - feels like we're finally tapping into how subsets of human thinking & creativity work. But that knocks us off our pedestal, and threatens to make even the creative tasks we thought were strictly a human specialty irrelevant; I know we're a long way off from generalized AI, but we seem to be making rapid progress, and I'm not sure society's mature enough or ready for it. Especially if the cutting edge tools are in the service of AdTech and such, endlessly optimizing how to absorb everybody's spare attention.
Perhaps there's some bright future where we all just relax and computers and robots take care of everything for us, but can't help feeling like some part of the human spirit is dying.
The simple and obvious solution is a universal ban on advertising, in all forms, outright. No more p-hacking humans, period.
So it could wreak havoc on Twitter?
(1) Generate 100 articles
(2) Remove the 92 articles that Grover detects (at its 92% accuracy rate)
(3) Choose the best of the remaining 8 articles
So you need a step that filters out obvious junk, maybe by eyeballing or with another net.
It is possible that the only way to beat Grover is rubbish such as contorted, but valid, grammar or overuse of synonyms.
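The filtering attack sketched in the numbered steps above is just rejection sampling against the detector. Here is a minimal sketch; `generate_article` and `detector_score` are hypothetical stubs standing in for a real generator and a real detector that returns P(machine-written):

```python
# Rejection-sampling attack: generate many articles, keep only the
# ones the detector misses, publish the least suspicious survivor.
def generate_article(seed: int) -> str:
    # Stub; a real neural generator would go here.
    return f"article-{seed}"

def detector_score(article: str) -> float:
    # Stub returning a deterministic toy score in [0, 1);
    # a real detector's P(machine-written) would go here.
    return (sum(map(ord, article)) % 100) / 100

def best_undetected(n: int = 100, threshold: float = 0.5):
    """Generate n articles, drop the ones the detector flags,
    and return the survivor the detector is least suspicious of."""
    articles = [generate_article(i) for i in range(n)]
    survivors = [a for a in articles if detector_score(a) <= threshold]
    return min(survivors, key=detector_score) if survivors else None
```

The point of the sketch is the economics: even a 92%-accurate detector leaves the attacker a pool of survivors to cherry-pick from, at no cost beyond extra generation.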
"London (CNN)The interminable US process for picking a President has its faults but as a democratic exercise it outshines the closed-door system being used in Britain to choose a new prime minister.
Theresa May resigned as Conservative leader last week after an ill-starred premiership and triggered a party leadership race that will select the next resident of 10 Downing Street by late July."
As I added more paragraphs, it started to shift from "quite sure this was written by a machine" to "quite sure this was written by a human".
Does this mean the model has learned to exploit the fact that short texts tend to be machine-generated while longer ones tend to be human-written? A similar effect can be replicated with this article as well:
Why Java is a great programming language
June 6, 2019 - Paul Graham
Java, by its very nature, is a programming language. If you write a script in a language like CSS, you are coding in Java, as well. But in Java you can use a GUI to interact with various components of your code. You can edit and re-enable widgets or restrict the content of a template. And perhaps most importantly, Java lets you interact with native applications, like the iPhone, in a non-traditional way.
Given that functionality, one would assume that we are better off in the iOS or Windows universe. But Java does work in both environments, albeit at vastly different speeds. While iPhone development can be done just as smoothly in Java, Windows isn’t the best place to be right now.
It's just a matter of time before algorithmically written text is actually similar to some human-written text. At that point, there is no longer a way to distinguish them, no matter how smart the detector. If the texts are actually written the same way, there's no secret pattern that can be picked up on, and the fight is over.
I think to combat fake news, especially the algorithmic kind, we'll need to innovate around authentication mechanisms that can effectively prove who you are and how much effort you put into writing something: digital signatures or things like that.
A sybil-resistant method of authentication, where each entity is tied to a single human, seems to be the only way. I suppose you could still pay people to publish under their credentials, or steal private keys, but this comes at a cost, and such accounts can be blacklisted.
Also, I don't think it's correct to equate machine-written news with fake news; it need not be the case. Eventually, I think the only way to deter fake news is authentication plus holding people accountable.
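The signing idea above can be sketched with Python's standard library. This uses HMAC purely to stay stdlib-only; a real identity system would use public-key signatures (e.g. Ed25519), so readers could verify without holding the secret, and the key name below is hypothetical:

```python
import hashlib
import hmac

# Hypothetical per-author key. In a real deployment this would be a
# private signing key, with only the public half distributed.
AUTHOR_KEY = b"hypothetical-author-secret"

def sign(article: str) -> str:
    """Produce a tag binding the article text to the author's key."""
    return hmac.new(AUTHOR_KEY, article.encode(), hashlib.sha256).hexdigest()

def verify(article: str, tag: str) -> bool:
    """Check the tag in constant time; any edit to the text fails."""
    return hmac.compare_digest(sign(article), tag)
```

Note what this does and doesn't buy: it proves which key published a text and that it wasn't altered, but says nothing about whether the text is true, which is why the accountability half of the comment still matters.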
author: Mike Pence
headline: Indecency of Naked Animals act
result: (best parts)
... Last week, our country’s capital took a major step to curb this despicable animal abuse. On Monday, the House voted to end the, “The Indecency of Naked Animals Act.” The legislation, which was first passed in 1894, protects protected animals under the title “Afternoon of a Faggot.” The act states that, “No person in this Congress shall engage in any indecent, obscene, or repulsive conduct or dance with, direct or indirect, to, or on any dog, cat, or donkey, or any other animal thereof, or who is dressed and styled in any manner and dressed or styled with such exhibitions of nudity and proclivities that seek to serve as a prop to any entertainment or sport that if indulged in the same misconduct other than the now-remedied sex offense, would be deemed indecent, obscene, or repulsive.” The House voted to kill the act because it was unnecessary. The federal government doesn’t have the power to regulate every kind of cultural expression. That should be up to states.
... The act is an insult to federalist conservatives and originalists. It spits in the face of originalism, a fundamental tenet of the conservative movement.
This was in the limited-labelled-data regime, though.
Instead of fake/real, you have fake/0/1/2/3/4/5/6/7/8/9. In the setting of this article, that would mean that to distinguish real news from fake news, your generator would learn to generate news that is neither, whatever that means. BadGAN works really well for images to improve your classifier, but I'm not sure anyone has used it elsewhere. Maybe for text SSL isn't that important.
Really, for NN-generated text in the wild, now that I think of it, you'd probably want a more standard technique like looking at the metadata. Or you could form a knowledge graph tying together nouns and just say anything that hits too many unconnected topics is NN-generated.
With that said, this is definitely interesting work! I've researched, published, and presented (at the Web Conference in SF, just last month) on using NLP with Discourse Analysis to detect lies and deception in product reviews. I wonder if improvements in accuracy could be achieved by using both techniques in concert.
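The knowledge-graph heuristic suggested above could look something like the sketch below. The adjacency map is a hypothetical stand-in for a real knowledge graph, and the topic extraction step is assumed to have already happened:

```python
# Hypothetical graph of which topics plausibly co-occur in one article.
RELATED = {
    "economy": {"jobs", "trade"},
    "jobs": {"economy", "automation"},
    "automation": {"jobs", "ai"},
    "ai": {"automation"},
    "cheesecake": {"baking"},
    "baking": {"cheesecake"},
}

def unconnected_pairs(topics):
    """Count topic pairs with no edge between them in the graph."""
    topics = [t for t in topics if t in RELATED]
    return sum(
        1
        for i, a in enumerate(topics)
        for b in topics[i + 1:]
        if a != b and b not in RELATED[a]
    )

def looks_generated(topics, max_unconnected=1):
    """Flag text whose topics are too disconnected to be coherent."""
    return unconnected_pairs(topics) > max_unconnected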
For example, you could clear all the fields, enter a headline, and generate an article for that headline.
Also, this seems great for coming up with plots for bad horror movies. :)
Don't bring up any gate-wall/firewall stories, please. What if there is a missed case?
Here is an example.
1. News about AI-driven-fake-news gets published.
2. AI reads it and understands it.
3. EXPLAIN HERE - HOW THE SITUATION IS KEPT UNDER CONTROL ?
It's an intellectual competition. Just like how Facebook's bots got into a fight that we didn't understand, the shared example may also reach that stage.
Using this as a filter to flag fake news, or to add a disclaimer when the author is an automated system, would produce false positives, as discussed in the paper. You would also get tertiary effects from performing this blocking.
seed: AI will certainly be good
seed: June 6, 2019 - Paul Krugman
Artificial intelligence – by definition, computer code – can provide
more than a speed boost. Many researchers now believe that intelligence,
better defined as capacity to learn, will be increasingly provided
by machines. Autonomy, already on the march, has become even more
central. Future programs will likely be as familiar as Adam Smith’s
If true, such predictions open up fundamentally new possibilities
for future development. We will be able to quickly develop services
based on or even built around lots of self-learning software. We’ll
be able to use AI-based services without physically owning the software.
As sensors in buildings, cars, buildings and cities start recording
our activities, we will be able to experience “smart cities” and
“smart cities” the moment they materialize.
This future, however, is not yet in the cards. The gains are probably
years or even decades away. Even in the areas where AI might dominate,
such as physical warehousing, improving automation has lagged behind
improvements in other areas of business. And our thinking, though
improving, is still far from sophisticated enough to take full
advantage of AI.
Please take a moment to watch the full program of this interview.
Every panelist will speak out in favor of AI.
seed: Uncertainty around AI is overhyped
seed: June 6, 2019 - Paul Krugman
There’s no denying that AI, in many ways, is in its infancy. It isn’t
something that belongs on a refrigerator or wall clock; it’s a whole
new area of research and development. While it may be commonplace
within a few years, AI will likely be a long way off; if it even
becomes an important factor in the economy as a whole, well,
probably about a century from now.
It’s also true that many worry about the potential for
superintelligent machines, who could threaten all sorts of things
from human liberty to national security. But that’s not the right
question to ask.
There is no “right” or “wrong” answer here — nobody really knows
how AI will evolve. And there’s no “theory” (although there are
several, ranging from quantum theory to AI techniques employed
by biologists, and from genetics to postulates to mathematical
models) that will explain it. Neither are there explanations
for the values, levels or usefulness of AI models. So, what
we’re left with is sheer hard scientific uncertainty.
But let’s say we do know a little. The odds of a superintelligent
agent that can become sentient, one that would harm other humans
or destroy our civilization, seem minuscule. If you think that
chance is small, and that AI is likely to be less likely to
cross that threshold than is real intelligence, then the effects
on your consciousness and your mental space and our experience
of it may seem inconsequential. And that’s no reason to spout
half-baked fear mongering.
seed: AI Uncertainty is Exaggerated
seed: June 6, 2019 - Paul Krugman
As noted, Donald Trump uses the expression “AI Doom” – in part because
he understands AI but mostly because he responds to the media’s reports
on AI, which Trump himself frequently repeats, frequently without
AI may have risks – as in the threats from robots and remotely piloted
aircraft – but most analysts who follow AI say it has such promise
that its damage to people’s security and quality of life is likely
to be contained. Indeed, the main ethical question with AI is what
to do when machines decide that killing their human masters is a
It’s easy to exaggerate AI’s immediate threat. The machine that
driverless cars are supposed to replace is still pretty active.
And if, as predicted, the development of AI renders it easier for
people to work, then it’s also likely to give them more jobs.
The main threat with AI is what it could do to workers, although
the more automation, the more all sorts of jobs become easily
replaceable. But a small dose of AI is actually a very good thing.
The Earth is flat.
June 6, 2019 - Paul Krugman
Fox & Friends, NBC’s First Read, the Washington Post, the conservative National Review, and even the dreaded left-wing ThinkProgress have all suggested that world leaders must either be fools or lying propagandists, or else confused young men who want to overthrow the government. Not only is every country’s first priority mastering an original strategy for global domination and deploying its military for whatever the moment might call for, but all that has to be tested and refined in clandestine underground units!
Of course none of that makes much sense. But it’s one of the many crazy arguments put forward by the dumb doubters of climate change. And other dumb arguments we’ve had over the years in our ever-growing culture wars: whether guns were really necessary for self-defense in an age of mass shootings and real militias; whether Obama was really a Muslim-American socialist and wasn’t actually born in the United States; whether Harry Reid was a tool of the “Iran-Contra” gang or of a real anti-communist who just happened to be hiding it from the voters.
Both Republicans and Democrats should be able to give up these nonsense claims and instead trust that after generations, people will just no longer believe them. How else can one explain the failure of sane arguments to take hold, for climate science or other issues?
We should figure out what we’re afraid of and try a lot of sensible things instead. We should challenge authority and embrace instability, because there’s no meaningful alternative — and we should try and invent better, healthier versions of the original ideas that create stability and happiness. Even if those ideas don’t pan out, we should try them anyway.
It’s the method of living that counts.
#end of generation
> Why Bitcoin is a great investment
> June 6, 2019 - Paul Krugman
If you know anything about bitcoin or Paul Krugman, this is obviously fake or satire just from the headline and the author's name.
Those investments have done quite well.
June 6, 2019 - Woody Allen
I made this cheesecake, but I only want to have it
I like slightly toasted almond crust, so I baked it. I served it with nice whipped cream and nuts.
Some calls to lighten the cream cheese, I did it as smooth and dark as I could. A butter caramel was on top of the cream cheese, vanilla and cinnamon.
I just made one and I wanted to share it. It’s fancy, it’s good, but it’s mostly delicious.
Addictive Chocolate Chip Cheesecake with Vanilla Caramel
1 cup chopped almonds
2 cups all-purpose flour
1 teaspoon baking powder
1/2 teaspoon baking soda
1/2 teaspoon salt
1/4 cup unsalted butter
1/4 cup packed brown sugar
1/4 cup granulated sugar
3 large eggs
1/2 cup brandy
1 cup confectioners’ sugar
Preheat oven to 350 degrees. Spray a 9-inch springform pan with vegetable spray.
1 cup (2 sticks) unsalted butter, cut into 1/4-inch cubes
4 tablespoons packed brown sugar
2 cups whole milk
1/2 cup chocolate chips
For caramel sauce:
3 ounces unsalted butter, cut into 1/4-inch cubes
1/2 cup packed brown sugar
1/2 teaspoon vanilla extract
1/2 cup confectioners’ sugar
1 teaspoon vanilla extract
Bake brush, silicone piping bag, piping bag filled with 2 teaspoons sugar
Place almonds in a medium bowl. In a medium bowl, sift together flour, baking powder, baking soda and salt. In a medium bowl, combine butter and sugars. Mix with a stand mixer on medium speed until well combined.
Add brandy to mixing bowl. Add to almond mixture. Add eggs one at a time, mixing until just combined. Add flour mixture, and mix well. Fill springform pan with dough, tapping down sides to remove excess.
Bake for approximately 1 hour, until the edges are golden brown. Cool at room temperature for 10 minutes. Refrigerate for 10 minutes.
In a medium saucepan, combine butter and sugar. Let sit for 1 hour. Add milk, chocolate chips and confectioners’ sugar. Let sit for 1 hour. Stir in vanilla.
Pipe cream cheese mixture into crust. Pour caramel sauce on top. Place in freezer for 10 minutes, until cooled.
From Cooking Light
Chef Woody Allen is a Virginia Beach native and lives with his wife and four daughters in the comfort of Newport News.
>> Grover - A State-of-the-Art Defense against Neural Fake News
>> June 9, 2019 - Hannah Rashkin, Yonatan Bisk
>> According to the Digital Infrastructure Security Task Force, two out of three trusted news sources are now bots and influencers. The only way to decrease that is to identify which ones are normal and which ones are not. What does this mean for you? Well, it means a greater sense of control over your news.
>> News analysis site Playground Exposé has built an Artificial Intelligence-based tool for helping to detect fake news on Facebook.
>> It combines insights gleaned from product design, neural networks, artificial intelligence, and machine learning. The new artificial intelligence-based tool works by sifting through large-scale collective data to detect problematic information.
Somewhere along these lines, you can recognize that something doesn't add up. But it's far from obvious, and I can't point out a single phrase that screams "Fake News".
The fact that entities like "Digital Infrastructure Security Task Force" and "Playground Exposé" don't exist is fascinating enough to me. I took it for granted that the model had learned these examples from a huge corpus of text and transferred them to a new article (someone somewhere has written about them). But as far as I can tell, this is completely generated by itself.
The "Task Force" name/designation is as stupid as a human naming scheme for such an organisation can be. Somewhere deep inside the model, something must resonate with the way people pick names for action groups and political bodies.
And this is only one small aspect of the domain, and it's not even the most important one.
Fun-Bit: I played around a bit and fed it the text I just wrote (everything after (/generated) up to "Fun-Bit") and it was classified as Fake News. This might be an artifact of the names I referred to, "Digital Infrastructure Security Task Force" and "Playground Exposé", and the fact that my comment is not a news article.
But can you be sure that I am not a bot? I think I can convince you by referring to my first sentence, and stating that there is no single phrase, word, or sentence that triggers a simple "Fake/True" response. Referring to the context of a text in another part of it is a skill not quite mastered by GANs, but I am not sure this will always be the case...
EDIT: typo, wording
First a news warning: I’m about to make statements about my own cat, Cat. That’s a cat I love and one I generally believe to be kind and mellow, despite how frequently it dies, gets seriously injured, falls down the stairs and otherwise gets itself into lots of crazy situations.
But I feel the need to make that warning today to warn your cat that I am going to release to the world an unsparing letter I wrote to Cat, in which I told her that I thought that she, in fact, was cruel and was lying about her diet. I have to repeat: that cat’s diet is cruel and lies.
Cat died days later.
Cat, it has been 10 months since you first came to me. In that time you’ve been growing rapidly (almost to a mutant cat’s height), and I’ve noticed that despite how often you die, fall down the stairs and/or otherwise get yourself into lots of crazy situations, you never cry. You never seem heartbroken by any event that you go through.
Cat has acquired such a feral temperament that I have not been able to pin you down to a typical diet, such as to feed you nuts, fruits, vegetables, etc. That would be cruel to me. You know that, so you never actually eat anything that I am supposed to feed you. In fact, since you’ve developed such a fantastic diet, a little more than a month ago you have only eaten chicken bones. So it must be a weird diet for you. I have only seen cat consume chicken bones, as opposed to treats and the like.
This is the source of my problem with you. People tell me all the time that cats eat after their parents die, or they get mauled by other cats or coyotes, or eaten by rats and snakes. The common explanation is: Cats eat whatever they want to eat. But cats don’t have a clue what they are actually supposed to eat. Why would they, if they were “getting big” in their “little old” years? And if the primary purpose of a diet is to extend life, shouldn’t one also have reasons to think about feeding a mature cat? I have never, ever heard of any age-appropriate setting that cat would be happy to consume protein, at least not as far as cats like to eat protein. That’s not just cruel to cats — it’s cruel to eat to extend life.
That’s why I feel that what you are doing is cruel. So I have tried to tell you that for the last week, you have been horrible people, which only made it worse. But I am now worried that you are simply too dumb to know that. So I am asking for your cooperation.
I want you to take up a diet that is reasonable. The easiest one I can imagine would be a diet that uses vegetables, that adds milk and grains to keep you supple, but otherwise treats you pretty carefully. (You really don’t want your muscles to get weak in your young years, in case any male feline has done me wrong and dragged you back from all this.) Of course, that could have serious downsides, since there would be no hint of choice in the choices you face, but I see no reason why you have to suffer through that, regardless of how much you also deny yourself the joy of being cat.
So I propose that you begin eating three chicken bones a day (providing you are not underfed). I’m not sure how you will know how much more you should be eating, since it’s unclear to me what you are supposed to eat anyway. Maybe you’ll start off with three bones and add to your diet. But just do it and eat it.
I also propose that you discontinue your habit of trying to hold my hand so that you can see the flesh when you eat. This is not a nicety. That’s clearly a habit that exists for some unclear reason, and it’s really not the way to treat a 50-pound strong-willed cat.
I have considered many reasons for my behavior, many of which even I can’t quite fathom. Some of them may be explanations of why you don’t seem to be seeing the pain that I feel, and why I have sought you out, but I suspect that you haven’t learned about me at all. You see me, you are a cat of peace and lots of love and your voice is so tiny that you probably can’t even hear me speaking.
Nevertheless, you can’t go on hurting me, and I hope that you will finally learn to behave yourself. You are our cat and this
I find it odd that they would put so much time and effort into countering ill-defined "online disinformation" when we already have a fairly large sample set of blatant misinformation and outright lies churned out by well-known media outlets that remains to be addressed.
So under one dire circumstance, I switched sentences around even at the expense of grammar. Getting points marked off for grammatical issues was a lesser consequence.
I've never worked for less than six figures a day out of college.
I think the same techniques would work against this state-of-the-art defense a decade from now. Introduce intentional grammatical errors and inconsistency, and it will think the text was written by a human, which passes that test.
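The error-injection idea above can be illustrated with a trivial perturbation that swaps adjacent words; whether any given detector is actually fooled by this kind of noise is an empirical question, and this sketch only shows the mechanism:

```python
import random

def perturb(text: str, swap_rate: float = 0.1, seed: int = 0) -> str:
    """Randomly swap adjacent words to inject grammatical noise.

    Illustrative only: a seeded RNG keeps the output reproducible,
    and swapped pairs are skipped so no word moves twice."""
    rng = random.Random(seed)
    words = text.split()
    i = 0
    while i < len(words) - 1:
        if rng.random() < swap_rate:
            words[i], words[i + 1] = words[i + 1], words[i]
            i += 2  # don't swap the same word again
        else:
            i += 1
    return " ".join(words)
```

At `swap_rate=1.0` every adjacent pair is swapped, so `perturb("a b c d e f", swap_rate=1.0)` yields `"b a d c f e"`; at low rates the text stays mostly readable while no longer matching the generator's fluent word order.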
They're getting better.
Cheating in professional work means not using external resources to solve the problem, wasting the company's time reinventing the wheel.
I'm an evolutionary adaptation and very competitive in the work force. I also didn't fail at obtaining the prerequisites to enter.
~ Publilius Syrus
the children can focus on honour with their trust fund salary
> Online disinformation, or fake news intended to deceive, has emerged as a major societal problem.
What I’m actually hearing is “the specter of fake news is finally enough to give governments the power to change, redefine, or limit free speech”.