New Knowledge is the organization that set up a fake Russian botnet and then tried to push a narrative that the Republican candidate in an Alabama Senate race was being assisted by this "Russian election interference"... anybody involved with that organization is a scumbag - it has zero redeeming qualities. Renee has been making the rounds lately on YouTube, informing everyone about how much of a threat these operations are (not her organization's fabricated ops, the totally real ones). I haven't yet found the prime mover in this, but her activities are well aligned with those of the DoD ratcheting up the scaremongering about the (according to them) active Chinese operations against the US population. So there is a pretty strong push for further internet lockdown measures being made right now by these people - and Mozilla is associated. At this point I would not be at all surprised to hear Mozilla announce RealID browser integration.
I hadn't heard of this before, so I just dug up the original NYT story.
At least one person from New Knowledge was involved in a small experiment designed to explore how the sorts of tactics used by the Russians worked. It attempted to convince Republicans that Mr. Moore was receiving Russian help, but it was designed to be too small to actually affect the outcome of the election (the goal was to explore how the tactics worked, not to produce any effect). This is a little shady, but as long as it didn't actually affect the outcome, I see no harm in a group of people trying to better understand how the Russian social media tactics work.
First of all, the story has actually changed a couple of times, so if you really want to understand the timeline you are going to need to refer to archives of those articles. Second, the "small experiment" had a budget of at least $100k that they are willing to admit to - traceable money that flowed through people related to the USDS and DoJ. You know that Obama repealed the ban on domestic propaganda before he left office, right? I couldn't tell you the number of times a counter-intel op leaked into domestic media (PopSci was really bad about it) - and months of work would instantly go up in smoke as the operation got scuttled. It makes me sick thinking that is no longer the case. New Knowledge's objective was to deceive voters. That is damage that can't be undone, and they even tricked the media (a disappointingly easy task) into spreading the lie.
If you can't see something very wrong with this, well, you'll be just fine in cold-war 2.0 - we can pick up where we left off in government experimentation on an unwitting public. MKULTRA 2: Electric Boogaloo. I'm sure it's been a while since we updated our nuclear/biological/chemical weapons models... so long as it doesn't affect the public by a statistically significant amount, we should be fine to resume the 1970s practice of releasing airborne pathogens over major American population centers, doubling the number of deaths in the elderly.
> New Knowledge's objective was to deceive voters.
From reading the article, the group's¹ objective wasn't to deceive voters, it was to research how these tactics worked. Are you suggesting that a single $100k research project was sufficient to alter the course of an election with a $51M advertising budget? As near as I can tell, that's just how the right-wing media is trying to spin it. Certainly if I were to actually try and alter the outcome of an election like this, I'd expect to be spending a lot more than $100k to do so.
That said, I find it hard to believe you're arguing in good faith when you're drawing parallels between a limited spread of misinformation centered on a single event and literally murdering people.
¹Which seems to have involved at least one New Knowledge member but it seems wasn't actually run by New Knowledge.
> ...the group's¹ objective wasn't to deceive voters...
"The report does not say whether the project purchased the Russian bot Twitter accounts that suddenly began to follow Mr. Moore. But it takes credit for “radicalizing Democrats with a Russian bot scandal” and points to stories on the phenomenon in the mainstream media. “Roy Moore flooded with fake Russian Twitter followers,” reported The New York Post."
Deception.
> Which seems to have involved at least one New Knowledge member but it seems wasn't actually run by New Knowledge.
Reid Hoffman, the billionaire funding AET, wrote an apology. What does he have to apologize for? Well, he paid AET $750k, and AET paid New Knowledge at least $100k of that to run this disinformation campaign. So you can knock it off with the "at least one member... seems wasn't run by New Knowledge..." Obviously my patience has run thin on this - it has been proven that Morgan is a liar and that New Knowledge was deeply involved.
As I said, they've changed their story more than once. When Morgan was pressed on the leaked internal report's clearly political goals, he said that "it didn't ring a bell".
Oh, and go check out their release of the report they provided the Senate Select Committee on Intelligence in December. What, you didn't know that this politically motivated organization with an agenda was asked to inform the Senate about Russian interference in the US election? Yeah, they were - and did, in December. Check out the timestamp on that pdf - not December... weird...
> That said, I find it hard to believe you're arguing in good faith when you're drawing parallels between a limited spread of misinformation centered around a single event with literally murdering people.
When you say "limited spread" do you actually mean "completely unrestrained"? And no. I find it hard to believe that you don't see the parallels between the justifications for unethical experimentation conducted on an unwitting population during the cold war and the rationalization you've provided in this thread. It has nothing to do with deaths; it has everything to do with the ethics and the non-zero cost to individuals. In the cold war, a statistically insignificant portion of the population involuntarily paid a heavy price; in this "experiment" (it wasn't - the leak shows it was a political action), a statistically insignificant portion of the population was convinced that their president was a traitorous Russian agent and driven mad with impotent rage.
They were researching a tactic already used in the wild that involves deception, yes. But the end goal of the experiment wasn't to deceive voters, which is what you claimed. The goal was to learn about how this tactic, a tactic that involves deception, works, presumably to help identify and combat it in the future.
> What, you didn't know that this politically motivated organization with an agenda was asked to inform the Senate about Russian interference in US election?
So what you're saying is an organization doing research into Russian interference was asked to inform the Senate about Russian interference? When you put it like that, it sounds like they have an excellent reason to conduct this sort of research.
> When you say "limited spread" do you actually mean "completely unrestrained"?
No I don't, which is why I didn't say that.
In any case, it sounds like this discussion has run its course.
If the experiment was designed to be small enough to not affect the outcome of the election, then they're not experimenting with affecting the outcome, they're investigating and learning about tactics that, on a larger scale, could be used to affect the outcome of the election.
I did say it was a bit shady, but I'm not really sure how to do this sort of research in a way that doesn't potentially affect the real world, because the whole point is to see how this sort of thing affects real people. Doing it in simulation doesn't help because that only tests your simulation.
I'm taking it as a given that this kind of research is important to be able to identify and combat actual interference of this kind from malicious entities in the future.