There's an implicit assumption embedded in this comment that the Chromium project is indispensable, whereas I'm unconvinced it's even a net positive at all.
Anyone who follows standards discourse would probably welcome the prospect of this open source codebase having independent stewards far more than they'd fear for its maintenance resources.
Having seen the views expressed here about Meta & TikTok for years, I think at least some of this must come down to a gap in understanding of web technologies.
Meta & TikTok decidedly don't have monopolies, yet still come under fierce scrutiny for their pervasive handling of consumer behaviour & data. What seems to be less evident to people is that Google's monopolies give them far greater reach in these areas than either of the other two. The majority of that reach is entirely invisible to most: I think if this negative impact were more visible it might drive home the downside of these particular monopolies.
> the current GitHub requirement is an explicitly temporary restriction
It seems reasonable to suggest that advertising a solution for public use at a point in time when fewer than two systems are supported is not an ideal way to encourage an open ecosystem.
It’s an eminently ideal way, given that the overwhelming majority of Python packages come from GitHub. It would be unreasonable to withhold an optional feature just because that optional feature is not universal yet.
Again, I need people to let this sink in: Trusted Publishing is not tied to GitHub. You can use Trusted Publishing with GitLab, and other providers too. You are not required to produce attestations, even if you use Trusted Publishing. Existing GitLab workflows that do Trusted Publishing are not changed or broken in any way by this feature, and will be given attestation support in the near future. This is all documented.
Let this sink in: a "security" feature that depends on Trusted Publishing providers puts the developer at the mercy of a small set of Trusted Publishing providers, and for most people none of them are acceptable feudal lords.
Let this sink in: if it is possible to check attestations, attestations will be checked and required by users, and PyPI packages without them will be used less. Whether PyPI requires attestations is unimportant.
> PyPI packages without them will be used less. Whether PyPI requires attestations is unimportant.
This points to a basic contradiction in how people are approaching open source: if you want your project to be popular on a massive scale, then you should expect those people to have opinions about how you're producing that project. That should not come as a surprise.
If, on the other hand, you want to run a project that isn't seeking popularity, then you have a reasonable expectation that people won't ask you for these things, and you shouldn't want your packages downloaded from PyPI as much as possible. When people do bug you for those things, explicitly rejecting them is (1) acceptable, and (2) should reduce the relative popularity of your project.
The combination of these things ("no social expectations and a high degree of implicit social trust/criticality") is incoherent and, more importantly, doesn't reflect observed behavior (people who do OSS as a hobby - like myself - do try and do the more secure things because there's a common acknowledgement of responsibility for important projects).
I don't use PyPI and only skimmed the docs. I think what you're saying here makes sense, but I also think others posting have valid concerns.
As a package consumer, I agree with what you've said. I would have a preference for packages that are built by a large, trusted provider. However, if I'm a package developer, the idea worries me a bit. I think the concerns others are raising are pragmatic because once a majority of developers start taking the easy path by choosing (e.g.) GitHub Actions, that becomes the de facto standard and your options as a developer are to participate or be left out.
The problem for me is that I've seen the same scenario play out many times. No one is "forced" to use the options controlled by corporate interests, but that's where all the development effort is allocated and, as time goes on, the open source and independent options will simply disappear due to the waning popularity that's caused by being more complex than the easier, corporate-backed options.
At that point, you're picking platform winners because distribution by any other means becomes untenable or, even worse, forbidden if you decide that only attested packages are trustworthy and drop support for other means of publishing. Those platforms will end up with enormous control over what type of development is allowed. We have good examples of how it's bad for both developers and consumers too. Apple's App Store is the obvious one, but uBlock Origin is even better. In my opinion, Google changed their platform (Chrome) to break ad blockers.
I worry that future maintainers aren't guaranteed to share your ideals. How open is OpenSolaris these days? MySQL? OpenOffice?
I think the development community would end up in a much stronger position if all of these systems started with an option for self-hosted, domain-based attestations. What's more trustworthy in your mind: 1) this package was built and published by ublockorigin.com, or 2) this package was built and published by GitHub Actions?
Can an impersonator gain trust by publishing via GitHub actions? What do the uninformed masses trust more? 1) an un-attested package from gorhill/uBlock, which is a user without a verified URL, etc. or 2) an attested package from ublockofficial/ublockofficial, which could be set up as an organization with ublockofficial.com as a verified URL?
I know uBlock Origin has nothing to do with PyPI, but it's the best example to make my point. The point being that attesting to a build tool-chain that happens to be run by a non-verifying identity provider doesn't solve all the problems related to identity, impersonation, etc. At worst, it provides a false sense of trust because an attested package sounds like it's trustworthy, but it doesn't do anything to verify the trustworthiness of the source, does it?
I guess I think the term "Trusted Publisher" is wrong. Who's the publisher of uBlock Origin? Is it GitHub Actions or gorhill or Raymond Hill or ublockorigin.com? As a user, I would prefer to see an attestation from ublockorigin.com if I'm concerned about trustworthiness and only get to see one attestation. I know who that is, I trust them, and I don't care as much about the technology they're using behind the scenes to deliver the product because they have a proven track record of being trustworthy.
That said, I do agree with your point about gaining popularity and compromises that developers without an existing reputation may need to make. In those cases, I like the idea of having the option of getting a platform attestation since it adds some trustworthiness to the supply chain, but I don't think it should be labelled as more than that and I think it works better as one of many attestations where additional attestations could be used to provide better guarantees around identity.
Skimming the provenance link [1] in the docs, it says:
> It’s the verifiable information about software artifacts describing where, when and how something was produced.
Isn't who is responsible for an artifact the most important thing? Bad actors can use the same platforms and tooling as everyone else, so, while I agree that platform attestations are useful, I don't understand how they're much more than a verified "Built using GitHub" stamp, for example.
To be clear, I think it's useful, but I hope it doesn't get mistakenly used as a way of automatically assuming project owners are trustworthy. It's also possible I've completely misunderstood the goals since I usually do better at evaluating things if I can try them and I don't publish anything to PyPI.
The gp was pointing out that the docs heavily emphasise (& therefore encourage) GHA usage & suggested language changes.
If people are confused about what they need to use Trusted Publishing & you're suggesting (begging) a re-read as the solution, that seems evidence enough that the gp is correct about the docs needing a reword.
It could just as easily imply that people aren't paying attention when they read it. Inability to understand a text is not always on the author, plenty of times it's on the reader.
No. Centralization isn't needed to scale, it's just needed to exploit scale. Exploitation has become so commonplace it's replaced everything as the primary "desirable metric". I'm writing this comment on a platform founded on the idea of exploitation being a desirable metric.
Try giving everyone housing without centralized production of things like nails, screws, siding, electrical wire...
Decentralization is fine sometimes, but a single huge nail factory is going to beat thousands of backyard forges in (average/95th percentile) quality and especially in price.
This isn't true, though, neither in theory nor in practice. The individual quality of a broad range of products has consistently declined throughout the industrial era - not least, infamously, one of your own examples: nails.
Many products have increased in quality due to factors unrelated to centralised production: new scientific research, intensive R&D, internal standardisation efforts - but centralised production tends instead to produce demonstrable reductions in individual quality. We're living in an age of unprecedented consumer waste due to this phenomenon. Planned obsolescence is a relatively recent term (& while it may not be intentional, the increase in obsolescence rates is certainly real).
Are you suggesting there is no provider of high quality nails at industrial scale?
That’s ridiculous. Many people just prefer to pay for cheap nails because they are good enough for the application, and that’s a good thing because this option did not previously exist. But the high end still exists and is higher quality than ever.
Capitalism has problems, but providing low complexity commodity improvements at scale and quality is not one of them.
> Are you suggesting there is no provider of high quality nails at industrial scale?
Nails was one of the examples cited by the gp, which I found funny since there is one famous example of nails being lower quality by definition as a result of centralisation. That example isn't the case for all nails (nor for all the gp's examples), but it is still an excellent, if isolated, example: cut nails. Most carpentry nails you buy today are wire nails. I don't know if it's possible to buy cut nails today from any large factories, but broadly speaking, manufacture of cut nails does not scale. The switch to wire was a detail of the manufacturing processes, not of consumer demand (nor quality assurance). Cut nails are still considered superior today.
If you look at antique diagrams of nails available for purchase, you'll see 20 or 30 varieties, all with very distinctly different designs. Those designs have converged on a single design for all applications, not because it's a better design, but purely because it's a design that scales for manufacture. Flaws in the design are compensated for via material choice & worker skill.
There are likely thousands of such cases in industry.
This is an empirical argument about nails. Are you just saying that to emphasize your confidence?
> cut nails
I just googled and there are literally dozens of brands of this product.
Even so, this preference appears to be a niche opinion of artisan woodworkers, not a failure mode of furniture and construction.
> The switch to wire was a detail of the manufacturing processes, not of consumer demand
Yes it is. They wanted to pay less for nails and the cheaper manufacturing served that need. If it mattered they would pay more.
> but purely because it's a design that scales for manufacture.
In other words, people like to reminisce about hand made goods. But when it comes to paying for them, they usually prefer the ones that work well at a fraction of the cost.
You can’t claim that intricate items with fine details are not manufacturable at scale (see the iPhone?). So your cause and effect are backwards. Manufacturing did not dictate that old methods became less prevalent. Consumers did.
And once again, your claim that higher quality ones don’t exist is wrong too. I guarantee we can find a supplier of many varieties of decorative nails. They are just more expensive.
> Are you just saying that to emphasize your confidence? [...] people like to reminisce about hand made goods. [...] I guarantee we can find a supplier of many varieties of decorative nails
A nested HN thread is likely not the best place to go into great detail on the precise practicalities of trade work, & I know I can't expect any given commenter to become an expert in a possibly niche topic just to engage in discussion, but this level of dismissiveness is a little difficult to reckon with. Nobody is buying nails for their "decorative" qualities - they get embedded in materials. They're not a visual feature.
Decorative was my word to summarize your “catalog of 30 styles of nails”
> but this level of dismissiveness is a little difficult to reckon with
Let me summarize the key economic points that I feel have been dismissed:
- Any kind of nail you want, you can buy right now.
- You can also buy nails of higher quality than anything that existed in preindustrial times.
- Which kind of nail is popular is driven by demand.
Whether it's a conscious strategy or an implicit outcome, planned obsolescence wouldn't be viable without centralisation. Centralisation is a tool of capitalism to achieve its goals.
Sure, it's not required to build _some_ houses, but it's required for scale.
Backyard forges are OK if you have a few hundred million people, most of them in farming, with most families living in a house with a single room. But you cannot build New York without large factories.
This seems like it would skew the data significantly for certain use-cases.
Unless you're feature flagging to test infra backing an expensive feature (in which case, in a load-balancer / containerised world, bucketing is going to be a much better approach than anything at application level), then you most likely want to collect data on acceptance of a feature. By skewing it toward a more accepting audience, you're getting less data on the userbase that you're more likely to lose. It's like avoiding polling swing states in an election.
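To make the bucketing/sampling point concrete, here's a minimal sketch in TypeScript (using Node's crypto module; the function names, experiment key, and 10% fraction are purely illustrative assumptions on my part, not anything from a real system): assignment depends only on a stable user ID, so the exposed group is a uniform slice of the whole user base rather than a self-selecting audience.

    import { createHash } from "crypto";

    // Map a stable user ID to a number in [0, 1) derived from a hash,
    // not from the user's own choice to opt in.
    function bucketForUser(userId: string, experiment: string): number {
      const digest = createHash("sha256").update(`${experiment}:${userId}`).digest();
      return digest.readUInt32BE(0) / 0x100000000; // first 4 bytes, normalised
    }

    // Expose the feature to a fixed fraction of *all* users (e.g. 10%),
    // so acceptance data covers reluctant users too, not just the eager opt-ins.
    function isInRollout(userId: string, experiment: string, fraction: number): boolean {
      return bucketForUser(userId, experiment) < fraction;
    }

    const enabled = isInRollout("user-123", "new-checkout-flow", 0.1);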
That list makes for a nice slidedeck but the separation (like many things in tech) isn't as clear cut as the metaphor.
"Something you know" (password) becomes "something you have" as soon as you store/autogenerate/rotate those passwords in a manager (which is highly recommended).
"Something you have" in the form of a hw key is still that device generating a key (password) that device/browser APIs convey to the service in the same way as any other password.
"Something you are" is a bit different due to the algorithms used to match biometric IDs but given that matching is less secure than cryptographic hash functions - this factor is only included in the list for convenience reasons.
The breakdown of this metaphor is one of the reasons passkeys are seen as a good thing.
Aegis is no more secure than storing your TOTPs in your password manager - second factors primarily protect against remote attackers, who don't have direct access, in which case the app your 2nd factor lives in is moot. If your threat model involves direct access you need dedicated hardware for your 2nd factor. Most people are fine with TOTP in a pw manager.
(I do use Aegis as I like the UX but that's a separate topic)
> JS of the era was a pain to use; CoffeeScript made writing and reading things much easier
It didn't. Anything you learn / are familiar with will be easier to write - CoffeeScript was easier to write for people who learned CoffeeScript. Which in hindsight wasn't time well spent, as they would've eventually had to bite the bullet anyway & just learn JavaScript like everyone else.
JavaScript is much, much easier to write and read for a person who has chosen to learn JavaScript & has not had the occasion to learn CoffeeScript (i.e. most people), so you were also doing others a disservice if readability was one of your goals.
> which is the reason it took off
The reason it took off was that Ruby was going through a popularity trend & Rails devs wanted to work in a front-end language that felt syntactically familiar. It was purely aesthetic.
Around 2012, I had sunk several years into becoming extremely familiar with JavaScript when I switched onto a team which was using CoffeeScript. I immediately liked it: it addressed some of the things I'd always found irritating about JavaScript, and I found it very easy to pick up. I was not a Ruby or even Python user at that time, so that didn't influence my decision either.
But you're right that you did have to know enough JS to understand what CoffeeScript was doing under the hood when things went wrong.
> I don't think it's that contrived to have a service that runs an expensive operation at a fixed rate, controlled by the user
Maybe not contrived, but definitely insecure by definition. Allowing user control of rates is useful & a power devs will need to grant, but it should never be direct control.
Can you elaborate on what indirect control would look like in your opinion?
No matter how many layers of abstraction you put in between, you're still eventually going to be passing a value to the setTimeout function that was computed based on something the user inputted, right?
If you're not aware of these caveats about extremely high timeout values, how do any layers of abstraction in between help you prevent this? As far as I can see, the only prevention is knowing about the caveats and specifically adding validation for them.
Or comes from a set of known values. This stuff isn't that difficult.
This doesn't require prescient knowledge of high timeout edge cases. It's generally accepted good security practice to limit business logic execution based on user input parameters. This goes beyond input validation & bounds on user input (both also good practice but most likely to just involve a !NaN check here), but more broadly user input is data & timeout values are code. Data should be treated differently by your app than code.
To generalise further, another common case of a user-submitted config value being used in logic is string labels for categories. You could validate against a known list of categories (good but potentially expensive), but whether you do or not it's still good hygiene to key the user-submitted string against a category hashmap or enum - this cleanly avoids using user input directly in your executing business logic.
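As a sketch of what that looks like for the timeout case (the schedule names, intervals, and function here are hypothetical, just to illustrate the pattern), the user's string selects an entry from a map of known intervals, so only our own constants ever reach setTimeout:

    // User input selects from a known set of schedules; only our constants reach setTimeout.
    const SCHEDULE_INTERVALS_MS = {
      hourly: 60 * 60 * 1000,
      daily: 24 * 60 * 60 * 1000,
      weekly: 7 * 24 * 60 * 60 * 1000,
    } as const;

    type Schedule = keyof typeof SCHEDULE_INTERVALS_MS;

    function scheduleExpensiveOperation(userInput: string, run: () => void): void {
      // Key the user-submitted string against the map; reject anything unknown.
      if (!(userInput in SCHEDULE_INTERVALS_MS)) {
        throw new Error(`Unknown schedule: ${userInput}`);
      }
      // Keeping delays to our own constants also sidesteps the setTimeout caveat:
      // delays are 32-bit signed ints, so anything above 2147483647 ms (~24.8 days)
      // overflows and fires almost immediately.
      setTimeout(run, SCHEDULE_INTERVALS_MS[userInput as Schedule]);
    }

    scheduleExpensiveOperation("daily", () => console.log("running expensive job"));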