The AIM-9X is infrared-imaging based rather than heat-signature based. The pilot resolves the contrast between the object and the sky before firing, and the missile then targets the object based on that image.
Do you know what sets it off? Seems like it wouldn't be from impact, as balloon materials aren't comparable to, say, hitting another plane, right?
All modern missiles use a proximity fuse that selects the optimum position relative to the target to detonate; it isn't based on contact. One thing to consider is that the warhead is not in the nose -- that is where the sensors are -- but further back on the missile. These warheads often use an annular (ring-like) blast pattern, which is why positioning and orientation relative to the target at detonation time are important to maximize damage.
Adding to janderwog's response, air intercept missiles don't really deal damage with the explosion itself. The missiles are built with shrapnel payloads propelled by the explosion, and these are what actually damage the aircraft on the receiving end.
A little research on the Canadian balloon shootdown a number of years back shows that isn't exactly the case. A larger balloon was shot with a cannon (20mm, I believe) and continued to float a long distance.
‘Duds’ is an odd way of putting it. Sidewinders usually have an ‘exploding ring of metal rod’ warhead, I believe - and that was replaced by something else better suited to destroying the balloon while leaving the payload intact enough for technical analysis.
Cloudless sky is cold, and at 30,000 feet extremely cold. Anything not-sky is going to contrast against the sky, even if it has no internal heat source (and remember: if it has electric things in it, it produces heat).
Yes, warm bodies put off infrared, but so does the sun, and the sun's IR rays are reflected by objects just as they are in the visible range. Those reflections are used for imaging.
Working without javascript requires development time. Running old versions of HTTP doesn't. Version 1.1 is going to be fully supported for a long time.
Running old versions of HTTP requires nothing, because people will need to support legacy devices for a long time and it's obviously already implemented in all major components (servers, CDNs, clients, etc.).
You (will) run HTTP/3 because it is more efficient, meaning that your servers will be able to service more requests.
You run HTTP/2 and HTTP/1, because a lot of people are still using that, and you don't want to lose them. This especially applies to mobile devices, many of which are stuck with software that cannot be updated for various reasons.
There's no threat of the majority of websites going HTTP/3-only anytime soon. By the time that might be a possibility, Tor will catch up.
Unless they lose 99% of their users, there's enough demand. Your level of pessimism on this specific detail is ridiculous. Tor might not last forever, but it won't be lack of HTTP support that kills it.
If you wait a few years for HTTP/3 to settle, proxies will be available that could be glued into tor inside a weekend hackathon.
> If you wait a few years for HTTP/3 to settle, proxies will be available that could be glued into tor inside a weekend hackathon.
In 16 years they have not managed to support UDP, and now you say of a project (which you are not familiar with...) that they can get it up and running over a weekend hackathon.
Do you have a contact address? I could mark my calendar for 2025. "some proxy with HTTP >= 3 and HTTP < 3 will exist" is a very basic prediction. It wouldn't have to be integrated into the tor codebase either, just spawned by the tor process.
Tor doesn't have UDP support right now because it doesn't need it for anything. The last 16 years are not equal to the next 16 years, surprisingly enough.
Still not getting it. Every site is hosted. The hosting company needs to spend money on HTTP/2, and be allowed to use it. The networks need to allow TCP through. All the steps down to the transport layer now require legacy maintenance.
Lots of things need to happen for HTTP/2 to 'stay alive'.
Your use of the future tense does not convince me.
It's impossible to defend the lack of generics in golang in the face of the fact that two of the larger open source projects both invented their own generics implementations.
It’s weird to see complaints about code generation bloating the code base at compile time, when code generation is exactly what a compiler will do behind the scenes for generics.
Not necessarily; specialisation is only arguably required for unboxed types. Java actually implements "Generics" without any specialisation at all: it boxes all the primitive types (rather controversially).
The C#/.NET solution, while very good, has some problems too. By implementing generics so deeply into the runtime, it becomes very difficult to enhance the support in the future, for example adding higher-kinded types (which the F# folks have long been asking for).
I actually quite like the Haskell approach of having a "Specialisation" annotation. I believe Scala has this too.
As someone who works on gVisor, I disagree with this w.r.t. our project; i.e., the existence of our go_generics tool is not an obvious indication that we need generics in the language.
A bit of history: this tool was originally created specifically for two packages, a linked list package, and a tree-based data-structure package. Both of these used an interface for the entry types, but this had two main drawbacks:
1. Interface assertions and conversions are (often) not free. These packages are used heavily in our critical system call handling path and this was costing on the order of a couple hundred nanoseconds in the critical path (total cost of the critical path is ~1500-4000ns depending on the syscall).
2. Escape analysis does not work across interface methods, requiring heap allocation of pointer arguments. This was less of an immediate problem for the initial creation of go_generics, but annoying to work around in the critical code where it mattered.
The go_generics tool solves both of these problems by generating versions of these data structures with concrete types. However, neither of these problems requires generics to solve; general compiler and toolchain improvements could solve both. In fact, I imagine that (1) is greatly improved already (we were solving this problem in the Go 1.5 timeframe).
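To make the trade-off concrete, here is a rough sketch of the pattern (hypothetical types, not gVisor's actual code): an interface-based list entry pays a type assertion on every access and escape analysis generally cannot see through the interface method call, while a concrete-typed copy -- roughly what a template-expansion tool like go_generics emits -- avoids both.

    package main

    import "fmt"

    // Interface-based version: entries are stored behind an interface, so
    // every access pays an assertion back to the concrete type.
    type Entry interface{ Next() Entry }

    type node struct {
        next  Entry
        value int
    }

    func (n *node) Next() Entry { return n.next }

    func sumViaInterface(head Entry) int {
        total := 0
        for e := head; e != nil; e = e.Next() {
            total += e.(*node).value // type assertion on every iteration
        }
        return total
    }

    // "Generated" version: the same structure with the element type baked in,
    // roughly what per-type code generation produces.
    type intNode struct {
        next  *intNode
        value int
    }

    func sumConcrete(head *intNode) int {
        total := 0
        for n := head; n != nil; n = n.next {
            total += n.value // no assertion needed
        }
        return total
    }

    func main() {
        a := &intNode{value: 1, next: &intNode{value: 2}}
        fmt.Println(sumConcrete(a)) // 3

        b := &node{value: 1, next: &node{value: 2}}
        fmt.Println(sumViaInterface(b)) // 3
    }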
To this day, our code base has a grand total of 5 generic templates:
segment [1], ilist [2]: These are the two packages referenced above.
seqatomic [3]: Synchronization primitive for sequence count protected values. Not safe to use interfaces.
bits [4]: This is the typical "multiple integer types" problem (see the sketch just after this list). IMO, this use is overkill; there are only ~20 lines of code to copy.
pagetables Walker [5]: Used in critical //go:nosplit //go:noescape context, where most runtime calls are unsafe.
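For illustration, the kind of per-width duplication the bits template avoids looks roughly like this (hypothetical helpers, not the actual package):

    package bits

    // IsOn32 reports whether every bit in mask is also set in val (32-bit copy).
    func IsOn32(val, mask uint32) bool { return val&mask == mask }

    // IsOn64 is the same helper, duplicated for 64-bit values.
    func IsOn64(val, mask uint64) bool { return val&mask == mask }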
Given our experience with, and very narrow use of, our go_generics tool, I'd actually rather Go not add generics and continue to use specialized code generation in the places we really need it.
I do recognize that there are a lot of other cases where people legitimately want generics, and I am not fundamentally opposed to the draft proposal, but this was my long-winded way of saying that our go_generics tool is not an obvious argument in favor.
Exactly. Like all ideological arguments it completely ignores actual real-world evidence. Somebody should sit down and analyze real codebases and understand and quantify how much these projects would be improved by real generics and proper exception handling. My (anecdotal) experience and speculation suggests that you'd see massive reduction in complexity and lines of code (especially when you consider code-gen).
But then this is part of a larger problem of the community which isn't motivated by evidence or basic pragmatism but rather strict adherence to a philosophy.
Changes to the language specifically require experience reports. If that's not real-world evidence, I'm not sure what is.
The problem with the wider Go community is that they are usually not driven by evidence but by adherence to a philosophy, e.g. "it's not possible to write large programs without generics."
> Somebody should sit down and analyze real codebases and understand and quantify how much these projects would be improved by real generics and proper exception handling
If the legal authority can't be challenged in public, how are you so sure this legal authority hasn't been skirted plenty? In an opaque system, what they can and can't do is only theoretical. Only sometimes are things disclosed well after the fact and often only in aggregate (such as numbers about how many NSLs are greenlit). This is what the author means by no trust required. They can't be asked to subvert it, even via extralegal means.
The Core Secrets leak said the FBI "compels" U.S. companies to "SIGINT-enable" their products if they don't take money. SIGINT-enable means installing backdoors. So, yeah, they can. They also do this with classification orders mandating secrecy, coming from organizations and people that are immune to prosecution. In the Lavabit case, they wanted a device attached to the network to do whatever they wanted, with the company ordered to lie to customers that their secrets were still safe behind the encryption keys. That's always worth remembering in these discussions. Plus, most companies or individuals won't shut down their operation to stop a hypothetical threat.
So, you have to assume they'll always get more power and surveillance over time via secret orders if there are no consequences for them demanding it, while people on the other side can be massively fined or do time for refusing. Organizations focused on privacy protection simply shouldn't operate in police states like the U.S.
If for some reason those methods fail, they can use BULLRUN, which has a much larger budget[1] and is specifically tasked with "defeat[ing] the encryption used in specific network communication technologies"[2].
[1] "The funding allocated for Bullrun in top-secret budgets dwarfs the money set aside for programs like PRISM and XKeyscore. PRISM operates on about $20 million a year, according to Snowden, while Bullrun cost $254.9 million in 2013 alone. Since 2011, Bullrun has cost more than $800 million." ( https://www.ibtimes.com/edward-snowden-reveals-secret-decryp... )
It's not at odds if you know how they work. The U.S. LEOs are multiple organizations with different focuses, legal authority, and so on. They also regularly lie to protect illegal methods and activities. Let's look at some data.
Now, the first indication this isn't true was Alexander and Clapper saying they didn't collect massive data on Americans. If they did, they could've solved a lot of cases, by your logic of action vs. capability being contradictory, right? Yet the Snowden leaks showed they were collecting everything they could: not just metadata, not just on terrorism, and they were sharing it with various LEOs. So they were already lying at that point to hide massive collection, even if it meant crooks walking.
Next, we have the umbrella program called Core Secrets. See Sentry Owl or "relationships with industry." It says Top Secret, Compartmented Programs are doing "SIGINT-enabling programs with U.S. companies." In the same document, even those with TS clearance aren't allowed to know the ECI-classified fact that specific companies are weakening products to facilitate attacks.
For the Lavabit trial, see Exhibits 15 and 16 for the defense against the pen register. Exhibit 17 makes clear the device they attach records data live, and claims constitutional authority to order that. They claim only metadata, but they've lied about that before. Exhibit 18 upholds that the government is entitled to the information, that Lavabit has to install the backdoor, that the court trusts the FBI not to abuse it, and that they'll all lie to Lavabit customers that nobody has access to their messages (aka the secrecy order about the keys).
That the judge asked for a specific alternative was hopeful, though. I came up with a high-assurance, lawful-intercept concept as a backup option for the event where there was no avoiding an intercept but you wanted provable limitation of what they were doing.
So, you now have that backdrop where they're collecting everything, can fine companies out of existence, can jail their executives for contempt, are willing to let defendants walk to protect their secret methods, and constantly push for more power through overt methods. In the iPhone case, even Richard Clarke said he and everyone he knows believed the NSA could've cracked it. Even he, previously an ardent defender of the intelligence community, says the FBI was trying to establish a precedent to let them bypass the crypto by legal means in regular courts.
So ask yourself: (a) can they already do that legally or technically, using methods like attaching hardware and software to vendors' networks/apps, as in the Lavabit trial?
(b) can the NSA or third parties bypass the security on iPhones publicly or in secret? Or did Apple truly make bulletproof security?
(c) did all this change just because the FBI said in a press release that they were an honest, powerless agency hampered by unbreakable security?
I didn't think anything changed. I predicted they'd crack that iPhone the second they were blocked in court. They did. They knew they could the whole time. They lied the whole time. They wanted a precedent to expand their power like they did in the past. That simple.
It's complicated, but that's kinda what happened to the Lavabit "secure" webmail service. When the owner wouldn't install a backdoor, the FBI sought the private encryption keys so they could MITM the whole site.
The big Apple vs. FBI high-profile lawsuit, for one. Granted, that was a telegraphed precedent-seeking exercise: "accidentally" losing access to the work phone after the terrorists destroyed their personal phones before the attack.
True - they are less publicly tested, but it is suggestive in its own way. If they could just NSL their way to access, why bother with precedents? It is wild speculation, but that is what the lack of transparency has wrought.
Edit: I may be thinking of a FISA order as opposed to an NSL. Doesn’t matter though, obviously the concern is that they would be served with whichever does allow that.
In a cryptocurrency based on a BFT protocol, an attacker who acquired 2/3rds of the tokens in the network would be able to double spend at will and censor transactions to gain control of more of the network.
They would not be able to rewrite past history.
The only response to such an attack would be a hard fork.
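As a rough sketch of why the 2/3 figure matters (a generic BFT-style commit rule, not any particular project's code): a block counts as committed once validators holding more than 2/3 of the voting power sign it, so a single actor with that much stake can sign two conflicting blocks at the same height and get both accepted.

    package main

    import "fmt"

    type Vote struct {
        Validator string
        Power     int64
    }

    // committed reports whether the signed voting power strictly exceeds
    // two thirds of the total voting power.
    func committed(votes []Vote, totalPower int64) bool {
        var signed int64
        for _, v := range votes {
            signed += v.Power
        }
        return signed*3 > totalPower*2
    }

    func main() {
        total := int64(100)
        attacker := []Vote{{Validator: "attacker", Power: 67}}
        // With 2/3+ of the power, the attacker alone clears the commit
        // threshold for any block it likes, including two conflicting ones.
        fmt.Println(committed(attacker, total)) // true
    }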
If I want to bootstrap my node and I join a network where 2/3 of the nodes present a newly-created (false) transaction history, and the remaining nodes present the original transaction history, how would I be able to tell which chain to follow without trusted/special nodes?
In other words, the plan of attack wouldn’t be altering the current chain (which indeed would require owning coins on it, because of proof-of-stake), but rather presenting a completely different chain to new nodes with a 2/3 node majority.
> You need to authenticate first against a trusted source.
If the system depends on a trusted source, why not just have this trusted source sign blocks, thus solving the double spend problem without further ado?
It's not some single centralized trusted source. It's a local trusted source. Like a friend or a shop or a website you make payments on that uses the network and has been keeping up to date. Ideally folks should check multiple sources to ensure they agree.
In the same way folks need to figure out which software to download when they join the Bitcoin network.
Also, two other comments regarding weak subjectivity:
When you join the bitcoin blockchain, you need some trusted source to tell you the hash of the correct genesis block.
Also, if you want to follow a shorter fork chain like Ethereum Classic, you also need weak subjectivity to tell you the first block immediately after the fork, otherwise you might be tricked onto the longer malicious fork, Ethereum :P
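A rough sketch of what that "check multiple local sources" bootstrap could look like (all names here are made up, not taken from a real client): the new node asks several independently chosen contacts for a recent checkpoint hash and refuses to sync unless they all agree.

    package main

    import (
        "errors"
        "fmt"
    )

    // fetchCheckpoint stands in for asking a friend's node, a shop, or a
    // website you already trust for the hash of a recent block.
    func fetchCheckpoint(source string) (string, error) {
        known := map[string]string{
            "friend-node":  "0xabc123",
            "shop-node":    "0xabc123",
            "website-node": "0xabc123",
        }
        h, ok := known[source]
        if !ok {
            return "", errors.New("unknown source")
        }
        return h, nil
    }

    // agreedCheckpoint returns a checkpoint hash only if every source
    // reports the same one; any disagreement aborts the bootstrap.
    func agreedCheckpoint(sources []string) (string, error) {
        agreed := ""
        for _, s := range sources {
            h, err := fetchCheckpoint(s)
            if err != nil {
                return "", err
            }
            if agreed == "" {
                agreed = h
            } else if h != agreed {
                return "", fmt.Errorf("sources disagree: %s vs %s", agreed, h)
            }
        }
        return agreed, nil
    }

    func main() {
        h, err := agreedCheckpoint([]string{"friend-node", "shop-node", "website-node"})
        fmt.Println(h, err) // 0xabc123 <nil>
    }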
> When you join the bitcoin blockchain, you need some trusted source to tell you the hash of the correct genesis block.
No, you don’t. There’s nothing special about the Bitcoin genesis block; it’s a block like any other. Whether you follow a chain that builds on top of this block or some other block has no bearing on the security of the system. It contains no keys that get to decide anything later on.
Because someone needs to show you a valid set of transitions in the PKI from the original keys in the genesis block to the current, attacker-controlled PKI.
If the corresponding private key of a public key in the genesis block defines the correct transaction history then this system is not decentralized, but controlled by whoever owns this private key. In which case this entity might as well just sign blocks to avoid the double spend problem in a much simpler (albeit centralized) way.
I prefer if people differentiate between systems with a PKI and systems without a PKI.
Systems without a PKI like PoW or PoET can be rather centralized like Bitcoin today or decentralized like Bitcoin before the emergence of mining pools.
Systems with a PKI can have an on-chain PKI like Cosmos. One of the challenges in an on-chain PKI system is that you need some kind of social pre-consensus at launch. The crowdfunding established part of an initial pre-consensus, but there are more moving pieces coming.
> One of the challenges in an on-chain PKI system is that you need some kind of social pre-consensus at launch.
The greatest challenge, in my opinion, is that there’s a central point of failure: just compromise whichever device stores the private key for the genesis block and you get to redefine history as you please.
Compromising a Bitcoin mining pool cannot be compared to this, since it would just result in this pool’s miners losing their revenue and moving somewhere else (it can never enable rewriting the entire chain history).
If this system depends on trusting 2/3 of the keys in the genesis block, why not simply sign all individual transactions with keys in the genesis block, and unlock even higher transaction throughput?
I mean if you gotta trust a centralized authority anyway, ditch the blockchain and make it simple.
What centralized authority? The keys in the genesis block aren't owned by a single actor, but by multiple.
You can keep the validator set static all the time (use the ones from the genesis block), this might be useful for private chains. But Tendermint also allows for dynamic validator sets. This is useful for Proof of Stake, where validators can come and go, and their voting power can change in accordance with changes in their stake deposits.
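As a rough illustration of the dynamic-validator-set idea (this is not Tendermint's actual API, just a sketch): validators get added, removed, or re-weighted as their stake deposits change, and any 2/3 threshold is computed against whatever the current total power is.

    package main

    import "fmt"

    // ValidatorSet maps validator addresses to voting power.
    type ValidatorSet struct {
        power map[string]int64
    }

    // Update sets a validator's power; a power of 0 removes it from the set.
    func (s *ValidatorSet) Update(addr string, power int64) {
        if power == 0 {
            delete(s.power, addr)
            return
        }
        s.power[addr] = power
    }

    // TotalPower returns the sum of voting power in the current set, which
    // is what a 2/3 commit threshold would be measured against.
    func (s *ValidatorSet) TotalPower() int64 {
        var total int64
        for _, p := range s.power {
            total += p
        }
        return total
    }

    func main() {
        set := &ValidatorSet{power: map[string]int64{"v1": 40, "v2": 35, "v3": 25}}
        set.Update("v2", 10) // v2 withdrew most of its stake
        set.Update("v4", 30) // a new validator bonded stake
        fmt.Println(set.TotalPower()) // 105
    }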
You would have to compromise the devices of validators who have at least 2/3 of the voting power.
In Bitcoin, just compromise the devices of mining pools that have at least 50% of the hashing power, and reconstruct a chain that is longer than the current canonical chain, thus rewriting the chain history.
> In Bitcoin, just compromise the devices of mining pools that have at least 50% of the hashing power, and reconstruct a chain that is longer than the current canonical chain, thus rewriting the chain history.
Rewriting the Bitcoin blockchain using 100% of the current hashing power would take an entire year. Thus, with 50% of it, it would take two years, assuming the network hashrate doesn't increase (which it does).
Don’t you think someone would notice — over the course of two years — that their mining pool has been compromised and no longer extends the current best chain?
Compromising Bitcoin mining pools lets you move hashing power somewhere else, which is noticeable since the extension of the current best chain would slow down. Compromising the genesis keys in a PoS system lets you create as many valid chains as you want in little to no time.
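To spell out the proportionality behind the two-year figure (the one-year baseline is the parent's estimate, not something I've verified): the time to redo the chain's accumulated work scales inversely with the fraction of the network hashrate you control, and that ignores the honest chain growing in the meantime.

    package main

    import (
        "fmt"
        "time"
    )

    // timeToRewrite scales the time needed at 100% of the hashrate by the
    // fraction of the hashrate actually controlled.
    func timeToRewrite(atFullHashrate time.Duration, fraction float64) time.Duration {
        return time.Duration(float64(atFullHashrate) / fraction)
    }

    func main() {
        year := 365 * 24 * time.Hour
        fmt.Println(timeToRewrite(year, 1.0)) // ~1 year at 100% of the hashrate
        fmt.Println(timeToRewrite(year, 0.5)) // ~2 years at 50%
    }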
It's not a single private key. There are a lot of validators, even in the genesis block, and we assume that at least 2/3 of these validators are honest.
It still puts an upper bound on the value of the underlying token, as the pressure for these validators to collude increases with the market value of the token in question. An outside attacker compromising keys isn’t the only way for the system to fail.
Would you trust 100 validators with securing $10M? What about $100M or $1bn? The financial incentive to collude keeps increasing as the value increases.
The point of this format is to push data over the wire in a form that is both semantically richer and authenticatable using techniques like object-hash. That gets us to one true, unambiguous representation of the data, which you need for redactable signatures and rich credentials.
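A simplified sketch of the "one unambiguous representation" idea (this is not the actual object-hash algorithm, just the flavor of it): keys are sorted and length-prefixed before hashing, so two semantically equal objects always hash the same.

    package main

    import (
        "crypto/sha256"
        "fmt"
        "sort"
    )

    // canonicalHash hashes a map after putting it into one fixed form:
    // sorted keys, with every key and value length-prefixed.
    func canonicalHash(m map[string]string) [32]byte {
        keys := make([]string, 0, len(m))
        for k := range m {
            keys = append(keys, k)
        }
        sort.Strings(keys) // fixed key order removes one source of ambiguity

        h := sha256.New()
        for _, k := range keys {
            // Length prefixes keep "ab"+"c" distinct from "a"+"bc".
            fmt.Fprintf(h, "%d:%s%d:%s", len(k), k, len(m[k]), m[k])
        }
        var out [32]byte
        copy(out[:], h.Sum(nil))
        return out
    }

    func main() {
        a := map[string]string{"name": "alice", "role": "admin"}
        b := map[string]string{"role": "admin", "name": "alice"}
        fmt.Println(canonicalHash(a) == canonicalHash(b)) // true: same data, same hash
    }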