Storing the bcrypt password in the entry would make a dump of the cache almost as good as a dump of the password database. At least this way a dump of the cache makes the key opaque and requires you to guess both the username/id and password together, assuming they're not repeated in the cache value.
According to the security advisory this cache was for AD/LDAP delegated authentication, so they don't have their own password database with a version field or similar for sensible invalidation.
I guess the requirements could be something like:
- different username/password combinations must have separately cached results
- mitigate a potential data leak by putting all the entropy we have available together with the password material and using a slow password hashing function
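A minimal sketch of what that could look like, assuming a simple in-process cache and Python's standard-library PBKDF2 standing in for whatever slow hash is actually used (the server-side pepper, TTL, and function names here are illustrative, not from the advisory):

```python
import hashlib
import time
from typing import Optional

# Illustrative sketch only: cache delegated-auth (e.g. AD/LDAP) results under an
# opaque key derived from username + password with a deliberately slow KDF, so a
# dump of the cache alone doesn't hand an attacker usable credential material.

CACHE_TTL_SECONDS = 300
PBKDF2_ITERATIONS = 600_000        # slow on purpose; tune for your hardware

_cache: dict[bytes, tuple[float, bool]] = {}   # derived key -> (expiry, auth result)

def _cache_key(username: str, password: str, server_pepper: bytes) -> bytes:
    # Fold all the entropy we have (username, password, server-side pepper) into
    # the input of a slow hash; different username/password combinations
    # therefore get separately cached results.
    salt = hashlib.sha256(server_pepper + b"\x00" + username.encode("utf-8")).digest()
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, PBKDF2_ITERATIONS)

def cached_auth_result(username: str, password: str, server_pepper: bytes) -> Optional[bool]:
    key = _cache_key(username, password, server_pepper)
    entry = _cache.get(key)
    if entry is None:
        return None                 # miss: caller falls through to a real LDAP bind
    expiry, result = entry
    if time.monotonic() > expiry:
        del _cache[key]
        return None
    return result

def store_auth_result(username: str, password: str, server_pepper: bytes, result: bool) -> None:
    _cache[_cache_key(username, password, server_pepper)] = (
        time.monotonic() + CACHE_TTL_SECONDS,
        result,
    )
```

The cache value holds only the boolean outcome, so even a full dump forces an attacker to guess username and password together before any derived key tells them anything.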
The 18-hour video debate
The Rootclaim debate was structured over 3 days with 3 thematic blocks:
* the first block was about the geographic location and the evidence for the Huanan market versus the Wuhan Institute of Virology being where the virus came from
* the second block was about the SARS-CoV-2 genome and whether its genetic features more likely arose in nature versus gain-of-function research
* the third block was about probability; how can the evidence be grouped and what probabilistic assumptions should be made to accurately reflect the odds of the evidence occurring
Each side first got 90 minutes to lay out their cases, then another 90 minutes together to respond to questions.
No matter how you slice it, this is a serious effort and time investment; the preparation of materials and background research alone probably consumed hundreds of hours from speakers and judges alike.
There are 40 setup cards with 4 possible rotations that specify agent placements, so it's theoretically possible to do some kind of memorization.
Personally I'd find that kind of play style very unfun, and would rather switch to fully randomized boards if I played enough that it became a problem.
Not going to post this link to every post where it's relevant, but:
Trump returned to that theme in November 2022, when he officially launched his 2024 presidential campaign. "We're going to be asking everyone who sells drugs, gets caught selling drugs, to receive the death penalty for their heinous acts," he said.
You'll note there are comments here saying that he generally keeps his campaign promises. On the bright side, I don't agree, but on the other hand I think he does often enough, especially for the "well of course he didn't literally mean that" ones.
His sentence was severe in part because he fell under the "kingpin statute". This is based on the amount of drug trade he facilitated, the amount of money he made, and the actions he took as an "organizer". The hits didn't help.
> For conviction under the statute, the offender must have been an organizer, manager, or supervisor of the continuing operation and have obtained substantial income or resources from the drug violations
There was plenty of evidence that he ordered the hits, and the defense had the opportunity to address the evidence in court. The chat logs go far beyond "insinuation"
It's ridiculous that people are pretending there is any doubt about his guilt because they like crypto and/or drugs.
Do you not think the optics are a bit weird when you sentence someone to life for something relatively small, but the reason is another crime you’re very sure he did but you didn’t bother to charge him with?
Prosecutors often choose not to pursue additional charges against someone already serving a life sentence. This approach helps avoid wasting court time and resources on cases that are unlikely to change the individual’s circumstances or contribute meaningfully to justice (none of the murders for hire resulted in victims).
I actually wonder if those charges may still be on the table now that a pardon has been granted.
If I understand correctly, only one of the "murder-for-hire" allegations was dismissed with prejudice[0]. However, he was suspected of orchestrating a total of six "murder-for-hire" plots.
Comically (horrifically sadly?) they were dismissed that way because he was already in prison for life with no possibility of getting out, so the court did not want to waste time on it.
Being a drug kingpin is not considered "something relatively small" under US law, as you can see from the sentencing. Being the leader of a large drug operation and ordering hits to protect your business would be considered worse than trying to take out a hit for whatever "personal reasons".
Obviously the hits are a lot messier to prosecute as well with the misconduct of the FBI agents, maybe you could hammer that enough to confuse a jury. But people are commenting like the evidence outright didn't exist - I can only think they have either heard it told second-hand, or are employing motivated reasoning.
Of course it is. Throwing in potential evidence of unrelated crimes to sway other people's (specifically the jury's) opinion about the defendant without formally charging him is exactly what the word "insinuation" means[0]:
the action of suggesting, without being direct, that something unpleasant is true
> There was plenty of evidence that he ordered the hits, and the defense had the opportunity to address the evidence in court
Clearly not that much evidence if the state didn't bother to prosecute those charges. And why would they? The judge sentenced him as though he had been found guilty of them.
> the usage of their algorithms allows the app to influence how Americans think about different issues.
Yes, speech does that. "The algorithm" is curation but the content we are talking about is mostly Americans talking to other Americans. The goal is to control and suppress that communication.
There are perfectly valid arguments to ban TikTok. And hey, there are arguments to be made that free speech shouldn't be absolute. But that doesn't fit into the American self image, so the argument must be obscured to reduce the dissonance. Your argument in particular maps perfectly to "Some kinds of speech are dangerous and people must be protected from them."
My argument (mostly) is that TikTok is just a random company that we permitted to do business in the United States and we can revoke that permission at any time as it suits our needs and laws. TikTok sells advertisements and people talk about those advertisements. That's all it does.
Whether people share memes or communicate American to American isn't material. We know this is true because a company with the same features and products can be shut down if it's discovered that the company engaged in other illegal activities (say, money laundering or human trafficking, to make it clear), and nobody would argue that someone's First Amendment rights were abridged by the shutdown of that company.
TikTok is just in the same scenario and has now found itself afoul of US laws and regulations and has to adjust by selling itself or it can exit the market.
The other thing here is that you'd have to convincingly argue that people who have never used TikTok (me for example) have now had their First Amendment rights violated, but there has been no change in my First Amendment rights. I can still use my freedom of speech as I could before.
According to the Federal Bureau of Investigation, TikTok can access “any data” stored in a consenting user’s “contact list”—including names, photos, and other personal information about unconsenting third parties. Ibid. (emphasis added). And because the record shows that the People’s Republic of China (PRC) can require TikTok’s parent company “to cooperate with [its] efforts to obtain personal data,” there is little to stop all that information from ending up in the hands of a designated foreign adversary. Id., at 696; see id., at 673–676; ante, at 3. The PRC may then use that information to “build dossiers . . . for blackmail,” “conduct corporate espionage,” or advance intelligence operations.
It basically just says that the app asks for the user's contact list, and that if the user grants it, the phone OS overshares information. That's really thin as evidence of wrong-doing. It doesn't even say that this capability is currently coded into the app. This sounds more like an Android/iOS problem - why is the contact sharing all or nothing? Would the ban still be OK if the app didn't have read contact permissions?
The USA should ban the air, because the air is breathed by Chinese communists; the air is tainted with communism. The air could influence how American people think, could infect them with disease, could cause people to get sick or die. This is a national security threat, so the US government must ban the air.
"But before seeking to impose that remedy, the coordinate branches spent years in negotiations with TikTok exploring alternatives and ultimately found them wanting. Ante, at 4. And from what I can glean from the record, that judgment was well founded."
Maybe that was one of the alternatives. I wasn’t on the task force, but if I had been asked to, I would have gone one on one with their tech leads and asked them to stop collecting this.
But it seems it is greater than that. How you interact with it, your likes and dislikes can be used as a fingerprint and against you.
This fingerprint can then be used against firesteelrain some time in the prophetic future.
Gorsuch says
“To be sure, assessing exactly what a foreign adversary may do in the future implicates 'delicate' and 'complex' judgments about foreign affairs and requires 'large elements of prophecy.' Chicago & Southern Air Lines, Inc. v. Waterman S. S. Corp., 333 U. S. 103, 111 (1948) (Jackson, J., for the Court). But the record the government has amassed in these cases after years of study supplies compelling reason“
Then he says this.
“ Consider some of the alternatives. Start with our usual and preferred remedy under the First Amendment: more speech. Supra, at 2. However helpful that might be, the record shows that warning users of the risks associated with giving their data to a foreign-adversary-controlled application would do nothing to protect nonusers’ data. 2 App. 659–660; supra, at 3. Forbidding TikTok’s domestic operations from sending sensitive data abroad might seem another option. But even if Congress were to impose serious criminal penalties on domestic TikTok employees who violate a data-sharing ban, the record suggests that would do little to deter the PRC from exploiting TikTok to steal Americans’ data. See 1 App. 214 (noting threats from “malicious code, backdoor vulnerabilities, surreptitious surveillance, and other problematic activities tied to source code development” in the PRC); 2 App. 702 (“[A]gents of the PRC would not fear monetary or criminal penalties in the United States”). The record also indicates that the “size” and “complexity” of TikTok’s “underlying software” may make it impossible for law enforcement to detect violations. Id., at 688–689; see also id., at 662. Even setting all these challenges aside, any new compliance regime could raise separate constitutional concerns—for instance, by requiring the government to surveil Americans’ data to ensure that it isn’t illicitly flowing overseas. Id., at 687 (suggesting that effective enforcement of a data-export ban might involve).”
And the nail in the coffin is this
“All I can say is that, at this time and under these constraints, the problem appears real and the response to it not unconstitutional. As persuaded as I am of the wisdom of Justice Brandeis in Whitney and Justice Holmes in Abrams, their cases are not ours. See supra, at 2. Speaking with and in favor of a foreign adversary is one thing. Allowing a foreign adversary to spy on Americans is another.”
Of course, if the app had done anything seriously illegal, it would not have been necessary to bring in this law to ban it, because existing laws would have sufficed.
Perhaps this law was made because the US government wanted to ban TikTok despite it not breaking any serious provision of law.
It feels like a sleight of hand from the government to ban something that has broken no (serious) law (yet).
Did SCOTUS go into the necessity of having this law to achieve what the government wanted, if existing laws would have sufficed, provided that the government met the standards of evidence/proof that those laws demanded?
If not, it is as if the government wanted a 'short-cut' to a TikTok ban and SCOTUS approved it, rather than asking the government to go the long way to it.
I suggest you read the full 27-page ruling and what I quoted again. The Supreme Court doesn’t weigh in on the wisdom of the law. But it found enough evidence, which TikTok did not refute, showing that they were engaging in the conduct that Congress alleged and that the law is not unconstitutional.
The question is whether the new law was necessary if there is a case to be made that TikTok has violated other existing law, and the government merely has to prove it.
Was the government trying to take a shortcut to a TikTok ban that could have been achieved through current law, but which would require a greater burden of proof/evidence from the government?
Did SCOTUS go into the question of the need for such a law, considering all other laws which might apply in the situation, just so that the government could achieve the same ban without having to prove that TikTok has broken an applicable law?
Trump tried to ban it first by executive power, but the Supreme Court decided he didn’t have the authority to do that. Then congress passed a law, and the Supreme Court is saying that the law is valid and Trump has no grounds to ask for a pause for him to “work a deal.” Congress could just repeal their law, but other than that it stands.
All spyware should be illegal. A law to rein in the ubiquitous data collection on everyone's computing devices would be great. Maybe start by requiring all data collection to be opt-in and for a specific purpose. Make it illegal to deny functionality that doesn't strictly require the requested data. "Paying with your privacy" shouldn't be a thing. Crush every data broker.
This law is different to that, it's all about specific actors, not about behavior and actions. A "people we don't like" list. CNN could be on a similar list soon. All constitutional of course - the law will specifically mention how this is all for national security. And no one's speech is being suppressed, the journalists can always write for a different news channel.