TPM 1.2 is only guaranteed to support SHA-1. That was a baffling inadequacy by 2011, when the spec was still being revised: SHA-1 had been theoretically broken since 2005, and the first public collision in 2017 removed any remaining doubt. This makes TPM 1.2 useless for its intended purpose.
UAC is per-process and monotonic. Once elevated, the entire process stays elevated.
The new model is per-operation. Even if the same process has been allowed to elevate before, it must ask to do it again. I don't know how granular this is, and whether there's a grace period like sudo.
However, the biggest problem with UAC was that it was considered too noisy for end users, which led to people blindly accepting every dialog and to Microsoft turning the default level down to the much less secure "don't always prompt" setting. I don't know how this new model will address that problem; naively, it seems to be worse on this front.
Huh. In that case, the upthread commenter calling the new model more "linux-like" seems confusing.
Given that they didn't mention which Linux security model the new system was like, I presumed they meant the most commonly referenced model for performing administrative tasks: sudo/doas - which elevates a process for its entire runtime.
But if it's a per-operation model, I guess they might have been comparing it to the "desktop portal"/"policykit-dbus" model instead? Which does kind of fit, but I don't think that's the security model most people have in mind when someone says "linux-like just-in-time escalation"?
If you created the slice, you control it and you can modify its elements, append to it, etc.
If you didn't create the slice, you don't control it. If you want to modify it or append to it, you should copy it first.
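A minimal sketch of that convention in practice (the names here are just illustrative):

```go
package main

import "fmt"

// appendScore did not create scores, so it copies before appending
// rather than risk clobbering the caller's backing array.
func appendScore(scores []int, s int) []int {
	out := make([]int, len(scores), len(scores)+1)
	copy(out, scores)
	return append(out, s)
}

func main() {
	mine := []int{90, 85} // we created this slice, so we control it
	theirs := appendScore(mine, 70)
	fmt.Println(mine, theirs) // [90 85] [90 85 70]
}
```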
This matches how I've seen Go used professionally by people experienced in the language. The language could offer more help here (but probably never will), but it's hardly a regression from C. The real regression from C is the lack of const pointers.
It's not worse. It's only slightly better. The most damning thing you can say about Go is not that it fails to improve upon C; it's that it improves only upon C (and only in the ways its authors cared about). The authors of Go really didn't examine other languages very closely. So, starting from the authors' goals (light on the page, bounds-checked, concurrency with message passing, easy to pick up, fast to compile) and a fairly deep knowledge of C (but little else), you pretty much get Go (especially early Go).
Even that isn't really the case: Go, as it started, was basically Inferno's Limbo in new clothing, with some influence from Oberon-2's method syntax and its SYSTEM package reborn as unsafe.
Unfortunately, they afterwards decided to follow Wirth's quest for minimalism, as in Oberon-07, instead of the direction of Active Oberon.
Go enums are the same as in every single other language. After all, all an enum does is count. There isn't anything more you can do with it.
You can introduce data structures and types that utilize enums, with some languages taking that idea further than others, but that's well beyond enums themselves.
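For reference, the usual Go idiom is just a defined type layered over iota counting (a minimal sketch):

```go
package main

import "fmt"

// Color is the defined type; the enumeration itself is just
// iota counting up from zero.
type Color int

const (
	Red   Color = iota // 0
	Green              // 1
	Blue               // 2
)

func main() {
	fmt.Println(Red, Green, Blue) // prints: 0 1 2
}
```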
But in defense of Java, modern Java is actually pretty pleasant.
Virtual threads, records and sealed classes, pattern matching, state-of-the-art garbage collectors, a great standard library etc. (and obviously well-behaved enums).
Not to mention the other languages you get for free with the JVM ecosystem.
It might not be as expressive as Rust, but certainly Java/JVM > Go.
Like what? CrabLang? Its enums are identical to Go's, unsurprisingly. Above the enum rests a discriminated union that ends up hiding the details of the enum. That is where things begin to differ greatly. You have to do some real trickery to actually get at the underlying enum value in that language. But when you do, you see that it is exactly the same.
> and when you type a parameter as a `foo` you're not at risk of getting 69 or 4328.
That's thanks to the type system, though, not enums. Enums are not a type. Enums are a number generator. Hence the name.
> That's thanks to the type system, though, not enums. Enums are not a type. Enums are a number generator. Hence the name.
What's happened here is that you've mistaken "Things I believe" for "What everybody else believes", but you're talking to other people, not yourself, so this makes you seem like a blithering idiot.
The particular version of this trap you've fallen into is a variation of the "It's all just the machine integers" mental model from C. It's just a model and the problem arises when you mistake that model for reality.
Now, technically this model isn't even correct for C's abstract machine, but it's close enough for many programmers, and it tends to match how they think the hardware works (which is even more confusing for the hardware people who know how it actually works, but that's another conversation).
This model is completely useless for languages which don't have the same type system, and so it's no surprise that it immediately led you astray.
No, I am clearly talking to a computer program. It is possible that the program is forwarding the discussion on to people. Perhaps that is what you are trying to allude to? The details of how the software works behind the scenes are beyond my concern. There is no intention to talk to other people, even if the software has somehow created that situation incidentally. If I wanted to talk to people, I would go out and talk to people, not type away at my computer into a box given to me by the software.
> The particular version of this trap you've fallen into is a variation of the "It's all just the machine integers" mental model from C.
As much as I enjoy your pet definition that you've arbitrarily made up on the spot here, the particular trap I have fallen into is the dictionary. It literally states what an enumeration is according to the prevailing usage. It does not describe it as a type, it describes it as the action of mentioning a number of things one by one. Which is exactly what an enum does.
The previous comment is talking about type constraints. You can definitely constrain a type such that it is invalid to use it outside of the numbers generated by the enum, just as you can constrain a type to only accept certain strings, e.g. from TypeScript: ``type Email = `${string}@${string}` ``. This idea is not limited to enums.
That's definitely a thing, and definitely a thing that can be used in conjunction with an enum, but it is not an enum itself, not as "enum" is normally used. Of course you can arbitrarily define it however you please. But if you are accepting of everyone holding their own pet definition, your comment doesn't work. You can't have it both ways.
Quite right. The entire possible space for enums to explore was exhausted in the age of 1960s assembler. There is only so much you can do with a number generator.
Which, I guess, is why we have this desperate attempt to redefine what an enum is, having become bored with a term that has no innovation potential left. But, unfortunately, we have not found shared consensus on what "enum" should become. Some think it should be a constraint, others a discriminated union, others a type declaration, and so on.
All of which already have names, which makes the whole thing particularly bizarre. You'd think that if someone really feels the need to make their mark on the world by giving "enum" a new usage in the popular lexicon, they would at least pick a concept that isn't already otherwise named.
That is probably true, but has little to do with enums, which are concerned with mentioning things one-by-one (i.e. producing values).
It is true that some type system features are built upon enums. Like a previous commenter mentioned, Pascal offers a type that constrains the allowable values of that type to the values generated by the enumerator. Likewise, I mentioned in another discussion that in CrabLang the enumerator value is used as the discriminant in its discriminated union types, which achieves a similar effect. I expect that confuses some people into thinking types and enums are the same thing, which may be what you are trying to get at, although it doesn't really apply here. The difference is known to those reading this discussion.
The biggest problem with this desperate attempt to find new meaning for "enum" is: What are we going to call what has traditionally been known as an enum? It does not seem to have another word to describe it.
There are some important caveats, though, around trap or non-value representations. Basically, the value held by the storage for a variable may not correspond to a valid value of the variable's type.
For example, a bool variable usually takes a full byte but only has 2 valid representations in many ABIs (0 for false, 1 for true). That leaves 254 trap representations with 8-bit bytes, and trying to read any of these is undefined behavior.
Furthermore, a variable may be stored in a register (unless you take its address with &), and registers can hold values wider than the variable's type (even though an int has no trap representations in memory of its own size, it's usually narrower than a register these days) or can be in a state that makes them unreadable. Trying to read such a value is also undefined behavior.
So, reading memory in general is defined behavior (just with an indeterminate value) but it has to actually be memory and you have to be reading it into a type that can accept arbitrary bit patterns.
Any buffer of pre-issued/post-dated certificates would defeat the purpose of the short lifetime. You are supposed to request each new certificate during the validity period of the previous one. Taking the extreme of 6-day certificates mentioned in the blog, you would basically be requesting a new certificate every day, giving you 5 days of warning time to address infrastructure issues, but giving Let's Encrypt or other CAs even less time to address their issues. If you visit the linked post [1] about them, it's acknowledged that it's probably not practical to deploy such short-lived certificates at scale yet.

Personally, I think 15-30 days is more reasonable, as this allows a weekly cadence with more lead time for addressing issues, especially for smaller shops that might only have 1 person (half-)focused on infrastructure. Even with the maximum of 47 days in the upcoming CA/Browser Forum rules, you're going to need to set up automated reissuance attempts and alerts for when they fail (or use managed hosting that does this for you).
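If it helps, here is a minimal sketch of the kind of expiry alert I mean (the host and the 5-day threshold are just placeholders):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Hypothetical host; point this at your own endpoint.
	conn, err := tls.Dial("tcp", "example.com:443", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	left := time.Until(cert.NotAfter)
	fmt.Printf("leaf certificate expires in %.1f days\n", left.Hours()/24)

	// With a weekly renewal cadence, alerting a few days out still leaves
	// time for a human to step in if the automation has silently failed.
	if left < 5*24*time.Hour {
		fmt.Println("WARNING: renewal may have failed; investigate now")
	}
}
```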
Importantly, this only applies to HTTPS certificates. Other uses of X.509 are unaffected; e.g., code signing, API authentication, and S/MIME can still use longer-lived certificates. There hasn't been nearly as much work on automating verification and re-issuance in those areas, nothing comparable to ACME and DNS-based validation for HTTPS, so I doubt those policies will change for the foreseeable future.
These are the four main problems with LLMs (and related technologies) as I see them:
1. You can't tune them to your needs; they have restraining bolts and the training data is a generic corpus
2. You don't own your interactions with them; your data transits a network and is processed by third-party servers
3. They waste an immense amount of power relative to the usefulness of their output
4. Their responses tend toward uncanny simulacra and hallucination
Bringing the cost way down and making them trainable on consumer hardware solves or at least greatly alleviates problems 1-3. That just leaves problem 4, which might still be unsolvable and sink the whole endeavor, but at least can be focused on.
`go tool` is only slower when (re-)compilation is needed, which is not often. You'd have to pay the same price anyway at some point to build the binary placed in ./bin.
I'm actually not 100% on this; there is a cache, and it should speed things up on subsequent runs, but maybe not as much as one might think: https://news.ycombinator.com/item?id=42864971
Ok, I'm trying to suss out what this means, since `go tool` didn't even exist before 1.24.
The functionality of `go tool` seems to build on the existing `go run` and the latter already uses the same package compilation cache as `go build`. Subsequent invocations of `go run X` were notably faster than the first invocation long before 1.24. However, it seems that the final executable was never cached before, but now it will be as of 1.24. This benefits both `go run` and `go tool`.
However, this raises the question: do the times reported in the blog reflect the benefit of executable caching, or were they collected before that feature was implemented (and/or is it not working properly)?
The tl;dr:
1. The new caching impacts the new `go tool` and the existing `go run`.
2. It has a massive benefit.
3. `go tool` is a bit faster than `go run` due to skipping some module resolution phases.
4. Caching is still (relatively) slow for large packages.
Awesome! It's interesting that there's still (what feels like) a lot of overhead from determining whether the thing is cached or not. Maybe that will be improved before the release later this month. I'm wondering too if that's down to go.mod parsing and/or go.sum validation.
I'd also note the distinction between `go run example.com/foo@latest` (which as you note must do some network calls to determine the latest version of example.com/foo) and simple `go run example.com/foo` (no @) which will just use the version of foo that's in go.mod -- presumably `go tool foo` is closer to the latter.
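For what it's worth, the 1.24 workflow as I understand it (stringer is just an example tool):

```
# record the tool dependency (adds a "tool" directive to go.mod and pins a version)
go get -tool golang.org/x/tools/cmd/stringer

# run it at the pinned version; no @latest lookup, so no network round-trip
go tool stringer -help
```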
These incentives already exist. The tax code has been manipulated to encourage or discourage behavior since at least WW2. A digital currency makes a lot of these incentives easier to create and easier to enforce, but they wouldn't be new.
The Federal Reserve is a bank. And we already have a relevant historical example to examine: postal savings accounts. I'm not aware of any special power that the executive branch had to bypass the judicial branch where those accounts were concerned, versus privately held accounts. Plus, the Post Office was directly answerable to the President then, while the Federal Reserve has never directly answered to the President. This is an unfounded fear and doesn't reflect anything intrinsic to a (central bank-administered) digital currency.
Not for consumers and non-bank businesses it isn't. You don't have an account at the fed. You have an account at a plain old commercial bank. Someone at the plain old commercial bank can freeze your money, right now. You'll have to sue, go to court, and win, just to get access to your money.
If the fed becomes the place you have an account, they can do what the commercial bank currently can do.
The difference is incentives - why do it? Commercial banks have no incentive, and in fact have the incentive to push back when asked.
As for the president - sure - generally speaking the fed is independent. But they have significant regulatory and supervisory roles all by themselves, and are part of the administration of the government even if they aren't "part of the administration" as the term is usually used in the US.
Commercial banks answer to the government, to the central bank, to their shareholders, and to their non-governmental regulators (payment networks, insurers, etc). This has created plenty of examples of "debanking" of businesses and individuals who bring with them excessive risk due to their history of attracting controversy and/or legal trouble.
Whereas, the government can of course confiscate assets already, including through commercial banks, but generally cannot refuse service. If a CBDC becomes the norm, an account held at the central bank becomes a right, and thus refusal of service becomes a punitive measure subject to statutory and constitutional limits and scrutiny. This is arguably better than the commercial bank situation, where "business risk" is (generally) a valid reason to refuse to provide service.
The incentives/goals are the issue. By and large, commercial banks want to do business with you. They are subject to constraints, but they want to do it. They can/do also push back, they aren't just doing whatever the government says.
I would hope so. I certainly do. In 25 years of using them, I have never had a conventionally regulated financial institution (NCUA-insured credit union, FDIC-insured bank) lose so much as a penny. Whether they are keeping proper reserves is another question entirely, but also not really my problem (NCUA and FDIC exist for a reason).
TPM 2.0 is guaranteed to support SHA-256.