
I have one - absolutely love it. It was Intel 12th gen, which was merely quite good (all great except battery life, really), but upgrading to the AMD has been fantastic - much better battery and a notable perf jump overall, and it was eye-opening and awesome to just jump to the next gen of the product by only replacing the changed parts so easily.

I also set up the leftover old Intel mainboard as a standalone server, which actually works remarkably well (peak power usage isn't great, but the idle is honestly pretty reasonable, so for a homelab it's perfect) and it's basically a free server.


> Every single Australian's ID will have to be verified (in order to confirm their age).
>
> Depending on the degree of cooperation (/coercion) the Australian government has with social media companies, the Aus Govt will be able to access citizen social media data with relative ease. So no more pseudo anonymous accounts (or, at least, they'll be made more difficult, especially for non-technical folk).

This isn't a given. It is quite possible to build a reasonably anonymous system to verify age at signup.

As a simplified model: the government creates a website where, with your government ID/login, they will give you an age-verification token valid for 5 minutes - basically just "holder is 16+" plus the current time, signed with their signature. Websites request a new valid token at signup. The end result is that the government only knows you're _maybe_ doing _something_ 16+, and the website doesn't know who you are, just that you're old enough (this is clearly improvable, it's just a basic example).
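To make that concrete, here's a minimal sketch in Node.js (Ed25519 via the built-in crypto module; the claim name and the 5-minute window are just taken from the example above, and a real scheme would need blinding or similar so the issuer can't correlate tokens with sites):

    // Stand-in for the government's long-term signing key.
    const { generateKeyPairSync, sign, verify } = require('node:crypto');
    const { publicKey, privateKey } = generateKeyPairSync('ed25519');

    // Issuer side: after the user logs in with their government ID,
    // hand back a short-lived, signed "over 16" claim - nothing else.
    function issueAgeToken() {
      const payload = JSON.stringify({ claim: 'holder-is-16+', issuedAt: Date.now() });
      const signature = sign(null, Buffer.from(payload), privateKey).toString('base64');
      return { payload, signature };
    }

    // Website side: check the signature and freshness; never learn who the holder is.
    function verifyAgeToken({ payload, signature }, maxAgeMs = 5 * 60 * 1000) {
      const ok = verify(null, Buffer.from(payload), publicKey, Buffer.from(signature, 'base64'));
      const { claim, issuedAt } = JSON.parse(payload);
      return ok && claim === 'holder-is-16+' && (Date.now() - issuedAt) < maxAgeMs;
    }

    console.log(verifyAgeToken(issueAgeToken())); // true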

Whether anything like this will be implemented is a hard question of course. The current alternatives I've seen seem to be a fully privatised version of this, where a private company has a video call where you hold up your ID - that eliminates the government, but seems like a whole bunch of privacy concerns in itself too (not to mention being wildly inefficient & probably not very reliable).


This comes up on every single HN thread about the topic, but I don’t understand how people aren’t seeing the obvious abuse angle:

Create a market for anonymous age verification tokens. People pay $5 to someone to create an age authorization for them. 17 year old kid (who is old enough under this law) spends all day creating anonymous age auth tokens to sell to people who want them.

Entire system subverted with profit motive.

The next phase of the argument is to argue for rate limiting or extra logging, but the more you force that the more you degrade privacy or introduce unreasonable restrictions. “Sorry, I can’t sign up for the wiki today because I already used my quota of 2 government age checks today”. Still leaves plenty of room for 17 year old kids to earn $10 a day farming out their age checks.

The entire argument that anonymous crypto primitive will solve this problem is tiresome.


The same applies to effectively all possible solutions for age verification, no?

Even if you have a perfect mechanism, 17-year-olds can create real age-verified accounts and then sell the username and password afterwards. Selling age-verification tokens directly would likely be harder than just swapping those login details, since it's very easy to make the tokens time-limited (in practice normal use would probably be some kind of oauth-style redirect flow, so they'd really only have to be valid for a few seconds).

This same argument applies to adults buying alcohol for teenagers too. The determined teenager with money can definitely find a way to get alcohol, but it doesn't mean the age restrictions on purchases are pointless.

Imo it's a bit pointless to worry about high-speed black markets trading in signed tokens when the current most common alternative is a popup with an "I promise I am over 18" button. If society agrees some things should be difficult to access if you're underage, then we can definitely do better than that as a solution.


I can buy booze without an ID or token. You have to match that. And yes, I look perfectly youthful...

Germany has a system in their ID cards that allows anonymous age verification. No one uses it but it’s a technical marvel in my opinion.

The site asks for specific read permissions and the user can decide if he wants to grant them.

One of these permissions is age verification.

You put the phone on the ID card and there is a cryptographic proof that the user connecting to the site is in possession of an ID of a person above 16 (which he of course could have stolen).

So it is technically totally feasible to have good data privacy AND age verification.


It isn't a technical marvel, it is technical bureaucracy. There is a reason people don't want to use it.

Also once implemented and widely adopted, the state would obviously increase demands on usage. This isn't rocket science.

I understand the cryptographic principle. That isn't the problem here.


yeah, “technical” is not the right adjective, rather the marvel (to me) is the fact that a government managed to deploy a privacy friendly electronic ID system based on sound cryptographic principles.

it’s a marvel because, well, as you put it, there’s all this bureaucracy, and when I first discovered it was implemented and that every single new electronic ID has had this capability for a couple of years now, my jaw dropped.

But fully agree the process and the backend itself are not very usable at the moment.

Maybe my expectations around government digitization are too low though.


It's hardly an age verification if it just requires the bearer to have an adult's ID card.

You borrow your friend's card, or you "borrow" your parent's card, or you pay someone who sees this market opportunity.

I think it's ridiculous how the lawgivers are telling the companies to just nerd harder, but they're definitely going to have to nerd harder than that.


They know, of course, that they're giving the nerds an impossible challenge.

Then when they can't achieve it they can say, "well we tried the non-authoritarian way and the nerds didn't oblige, now we're forced to bring out the brownshirts. Papers please!"


I think it’s better than transmitting everything including the biometric picture and requiring the camera to be on.

How would you implement age verification?


this is the same argument as "why have government id cards, someone could just use a fake beard and use their older classmate's id". Any system allows for some gaps, similar to how credit card transactions make transactions safer but on either side of that transaction there is some "insurance" and some leeway if someone really wanted to.

The difference is that I can’t mint infinite ID cards, and it is much harder to get a skeptical person to accept a photo ID of the wrong person.

in person? sure, that's harder, but we're talking about online services, right?

many times verification is simply uploading a photo, GenAI can make a nice fake ID.

are these ID verification sites linked to government databases? for usual KYC it's enough to save the photo and do a minimal sanity check, no need to phone home and ask Big Brother.


> why have government id cards...

...for the internet is a perfectly sane question. There are good reasons we don't have those as well, and those reasons vastly outweigh ineffective user protections.


This is one of the main motivating examples for attribute-based credentials, which provably only reveal the selected attribute to verifiers.

Why not lock devices/accounts as belonging to a minor and put the onus on schools and parents to ensure devices are appropriately tagged? At least for pre-teens I strongly think it would work.

Because it will take about a month until there is some service the parents will want the kids to use that won't be available on such a device (a kids' show, a kids' game, a page necessary for homework). So, they will have strong motivation not to label them as such.

At that point, what if parents just let their kids borrow their driver's licenses to use social media? There's no technical solution to bad parenting.

The only reasonable solution that doesn't infringe on privacy is to give parents the tools to limit their children's internet use, and presume, outside those bounds, that people are adults.


of course there's no perfectly privacy preserving solution for this, but ... zero-knowledge proofs have come a pretty long way.

if I understand correctly it's possible to give 16+ people tokens and then they can make the signups (transactions with these tokens) and then check that the transaction is valid (that it came from some valid token without knowing which token), while also making sure that folks can't just fake spend someone's tokens -- this is how the new Monero version is going to work after all.

https://www.getmonero.org/2024/04/27/fcmps.html

Of course, as others mentioned, trading identities (tokens) is trivial. (As I expect not-yet-16-year-olds will start stealing identities/logins of older people.)


Yeah, as you mentioned, token-sharing breaks this. I think any solution ultimately has to put the onus on the parents. And if the parents aren't responsible enough to pay attention to what their kid is doing online, then it's probably for the best that the kid have access to an online peer group over social media

I'd never accept this disgrace.

Sorry, what's the implication here, what is the disgrace? Why are parental controls bad? (Or was a /s tag implied? :))

Government-controlled access to the internet is a disgrace in any form. I can control my child without the government.

> I can control my child

Lies every parent tells themselves. Either they will watch porn at age 11 at school or at a friend's, or you isolate them from society and they resent you forever.

You can't control every aspect of your child's real life or online activities, that's naive and I don't believe you actually have children, let alone teenagers.


Whether GP can control their kids or not is beside the point, which I think lies with:

> Government-controlled access to the internet is a disgrace in any form.

And in fact it's not a "disgrace", it's outright dangerous, a ready half-step to totalitarian control. Regardless of whether one trusts their current government or not, it is a threat to democracy and freedom that can be activated by any later regime.


You shouldn't want to control every aspect of a child's life. You control what you can and should control, and the kid is aware of that, and you let them decide the rest. They will do things you don't want them to do, and that's fine. That's part of growing up.

What you don't want is to say "I can't control every part of my kid's life, so I need the government to come in and control the remainder."


Indeed, but I don't find that amount of control reasonable. I can exercise enough control to be completely fine.

You’re right that it’s possible, absolutely. The problem is the government would first have to want to do that. If they’re planning to hoover up social media usage data then they probably won’t.

So what's the most likely outcome here? Savvy 15 year olds "buying" accounts of older people? IRC and email making a comeback?

Too complicated and no benefit.

Theoretically these double-blind systems could be secure; in practice I would never trust any of their systems and will opt out of signing up.

Also this fails to account for the obviously visible political motivation and further development. Nope, bad idea.


I broadly agree that I'd like standalone separated bike lanes, but I think this is dubious:

> Bikes hitting pedestrians (ex: children wandering out on to the bikelane) is a much larger safety concern than bikes being hit by cars

As far as I'm aware, more or less everywhere, both the frequency & severity of bicycle vs pedestrian crashes are much lower than for bicycle vs car crashes. Do you have any statistics that say otherwise?


I only have my personal experience. Biking on the sidewalk lanes in Taipei creates a lot of scary close calls esp with children and dogs. On the road I only rarely have some issues with buses. Everyone is moving in the same direction so it's generally less scary.

I think in terms of deaths, the most dangerous issue is getting t-boned by a car going fast through an intersection. I'm not sure how either setup really addresses that. You need to decrease overall traffic speed somehow. China does this with speed cameras everywhere and with electric scooters being much slower than gas-powered ones (which are now illegal in most places).


And car vs pedestrian is a much larger problem than bike vs pedestrian, and in most places higher than car vs bike too.


Very very cool, but only makes it more disappointing that you can't actually use this for anything innovative, except in the Apple-approved format & use cases.

Can't upgrade any of the internals, doesn't run Linux easily, no way to use any of the internal components separately, or rebuild them into a different form factor. Imagine being able to directly mount these to VESA behind a dashboard. I have an old M1 Mac Mini I'd love to use as a NAS, but the disk is slightly too small and you can't upgrade it, so it's just useless to me instead.

Impressive to see Apple match the Pi for idle power & efficiency, but deeply frustrating to see them take the exact opposite design philosophy.


I'm very interested in this, I do a lot of protocol debugging. Kaitai looks very neat - is that the most popular format for this kind of thing, or are there other popular options I should look at too?


Just recently somebody posted about the ImHex tool's DSL: https://xy2i.blogspot.com/2024/11/using-imhexs-pattern-langu...


I recently wanted to reverse engineer some Websocket packets for a game I was playing. I used BurpSuite as a proxy to bypass the SSL encryption. It also has a pretty handy tool that will monitor all websocket traffic.

After that I used ImHex, pretty much exactly like in that blog, to reverse engineer the websocket packets. The DSL is a little finicky, but once you wrap your head around it, it's very nice and powerful.


Thanks for sharing your experience, it is valuable as I have been considering using ImHex and its DSL lately.


This is actually now built into HTTP Toolkit, so it's easier than it sounds - if you connect a rooted device, there's an "Android app with Frida" interception option that installs Frida and runs the scripts above for you against any given app on the device automatically. Funded by the EU! https://nlnet.nl/project/AppInterception/


FMLA has a 50 employee minimum limit, as they noted above. If there aren't 50 employees within a 75-mile radius, you aren't FMLA eligible (there's quite a few other conditions too, and it's still unpaid regardless): https://en.wikipedia.org/wiki/Family_and_Medical_Leave_Act_o...


> In the age of subscriptions, being able to see all your recurring payments on a single page and cancellable with two tabs without questions asked, is a feature worth paying for.

This is a service the banking system should clearly provide natively though - there's no good reason Apple is the only one capable of this, nor any good reason why they're best placed for this (there's plenty of non-Apple subscriptions where this would be useful).

Your card provider is well aware of what recurring payments are currently authorized, and should be perfectly capable of providing tools to cancel those authorizations (and inform the merchant of this when doing so).

The fact that this doesn't work is a failure on the part of financial firms (who could provide it) and regulators (who imo should oblige card providers to offer this, and oblige companies to treat a cancellation notification like this as equivalent to written notice). Recurring payments are an increasingly fundamental part of consumer banking, but banks provide effectively zero tools for consumers to manage them.

The argument against is that some of these payments might have ongoing obligations you can't just cancel without consequence, and you'd effectively just be refusing to pay your bills - but you could equally well have no balance available or something so the payment fails in existing cases anyway, so this seems like an entirely solvable issue (if a business that you _must_ pay receives a notification that you've cancelled the card billing authorization, they're going to need to get in touch with you about it, just as if your monthly charge had failed).


Yeah they should get together and create a standard protocol for managing, transferring, and paying subscriptions.

Imagine if you could manage subs through your bank, but also transfer the sub to a different bank as needed.

Could even have protocols for payees to verify payer sub status. Maybe there could be two "ends" to a sub; payer and payee end. Like a money wormhole.


> this domain was caught injecting malware on mobile devices via any site that embeds cdn.polyfill.io

I've said it before, and I'll say it again: https://httptoolkit.com/blog/public-cdn-risks/

You can reduce issues like this using subresource integrity (SRI) but there are still tradeoffs (around privacy & reliability - see article above) and there is a better solution: self-host your dependencies behind a CDN service you control (just bunny/cloudflare/akamai/whatever is fine and cheap).
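For reference, SRI is just an integrity attribute whose hash the browser checks before running the file - a minimal sketch (the hash below is a placeholder; you'd generate the real one with something like `openssl dgst -sha384 -binary library.min.js | openssl base64 -A`):

    <!-- Self-hosted (preferred): same origin, nothing extra to trust -->
    <script src="/assets/library.min.js"></script>

    <!-- Third-party CDN with SRI: the browser refuses to execute the file
         if it no longer matches the hash baked into your page -->
    <script
      src="https://cdn.example.com/library.min.js"
      integrity="sha384-REPLACE_WITH_REAL_BASE64_HASH"
      crossorigin="anonymous"></script>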

In a tiny prototyping project, a public CDN is convenient to get started fast, sure, but if you're deploying major websites I would really strongly recommend not using public CDNs, never ever ever ever (the World Economic Forum website is affected here, for example! Absolutely ridiculous).


I always prefer to self-host my dependencies, but as a developer who prefers to avoid an npm-based webpack/whatever build pipeline it's often WAY harder to do that than I'd like.

If you are the developer of an open source JavaScript library, please take the time to offer a downloadable version of it that works without needing to run an "npm install" and then fish the right pieces out of the node_modules folder.

jQuery still offers a single minified file that I can download and use. I wish other interesting libraries would do the same!

(I actually want to use ES Modules these days which makes things harder due to the way they load dependencies. I'm still trying to figure out the best way to use import maps to solve this.)
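The rough shape of an import map solution looks something like this (a sketch only - "some-lib" and its export are hypothetical, and every transitive dependency still needs its own entry, which is exactly the awkward part):

    <!-- Map bare module specifiers to files you host yourself -->
    <script type="importmap">
    {
      "imports": {
        "some-lib": "/vendor/some-lib/index.js",
        "some-lib/": "/vendor/some-lib/"
      }
    }
    </script>

    <script type="module">
      // The import map above lets the bare specifier resolve without a bundler.
      import { something } from "some-lib";
      console.log(something);
    </script>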


The assumption of many npm packages is that you have a bundler and I think rightly so because that leaves all options open regarding polyfilling, minification and actual bundling.


polyfilling and minification both belong on the ash heap of js development technologies.


I would agree with you if minification delivered marginal gains, but it will generally roughly halve the size of a large bundle or major JS library (compared to just gzip'ing it alone), and this is leaving aside further benefits you can get from advanced minification with dead code removal and tree-shaking. That means less network transfer time and less parse time. At least for my use-cases, this will always justify the extra build step.
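As a rough illustration of the kind of build step I mean - a sketch using esbuild, with placeholder file names; the exact savings obviously depend on your code:

    // build.js - bundle, minify and tree-shake a hypothetical entry point
    const esbuild = require('esbuild');

    esbuild.build({
      entryPoints: ['src/app.js'],
      bundle: true,        // inline dependencies into a single file
      minify: true,        // shorten identifiers, strip whitespace and comments
      treeShaking: true,   // drop exports that nothing actually imports
      outfile: 'dist/app.min.js',
    }).catch(() => process.exit(1));

Comparing the gzipped sizes of the minified and unminified outputs for your own bundle is the quickest way to see whether the extra step pays for itself.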


I really miss the days of minimal/no use of JS in websites (not that I want Java applets and Flash LOL). Kind of depressing that so much of current web design is walled behind JavaScript.


I don’t. There's always Craigslist and Hacker News to give you that 2004 UX.


Cool, I can download 20 MB of JavaScript instead of 40. Everyone uses minification, and "web apps" still spin up my laptop fans. Maybe we've lost the plot.


I wish. When our bundles are being deployed globally and regularly opened on out of date phones and desktops, it can't be avoided yet.


There might be a negative incentive in play: you may be compressing packages, but having your dependencies available at the tip of *pm install bloats overall size and complexity beyond what lack of bundling would give you.


The assumption shouldn't be that you have a bundler, but that your tools and runtimes support standard semantics, so you can bundle if you want to, or not bundle if you don't want to.


> I always prefer to self-host my dependencies

Ime this has always been standard practice for production code at all the companies I've worked at and with as a SWE or PM - store dependencies within your own internal Artifactory, have it checked by a vuln scanner, and then called and deployed.

That said, I came out of the Enterprise SaaS and Infra space so maybe workflows are different in B2C, but I didn't see a difference in the customer calls I've been on.

I guess my question is why your employer or any other org would not follow the model above?


> I guess my question is why your employer or any other org would not follow the model above?

Frankly, it's because many real-world products are pieced together by some ragtag group of bright people who have been made responsible for things they don't really know all that much about.

The same thing that makes software engineering inviting to autodidacts and outsiders (no guild or license, pragmatic 'can you deliver' hiring) means that quite a lot of it isn't "engineered" at all. There are embarrassing gaps in practice everywhere you might look.


Yep. The philosophy most software seems to be written with is “poke it until it works locally, then ship it!”. Bugs are things you react to when your users complain. Not things you engineer out of your software, or proactively solve.

This works surprisingly well. It certainly makes it easier to get started in software. Well, so long as you don’t mind that most modern software performs terribly compared to what the computer is capable of. And suffers from reliability and security issues.


Counterpoint: it's not about being an autodidact or an outsider.

I was unlikely to meet any bad coders at work, due to how likely it is they were filtered by the hiring process, and thus I never met anyone writing truly cringe-worthy code in a professional setting.

That was until I decided to go to university for a bit[1]. This is where, for the first time, I met people writing bad code professionally: professors[2]. "Bad" as in best-practices, the code usually worked. I've also seen research projects that managed to turn less than 1k LOC of python into a barely-maintainable mess[3].

I'll put my faith in an autodidact who had to prove themselves with skills and accomplishments alone over someone who got through the door with a university degree.

An autodidact who doesn't care about their craft is not going to make the cut, or shouldn't. If your hiring process doesn't filter those people, why are you wasting your time at a company that probably doesn't know your value?

[1] Free in my country, so not a big deal to attend some lectures besides work. Well, actually I'm paying for it with my taxes, so I might as well use it.

[2] To be fair, the professors teaching in actual CS subjects were alright. Most fields include a few lectures on basic coding though, which were usually beyond disappointing. The non-CS subject that had the most competent coders was mathematics. Worst was economics. Yes, I meandered through a few subjects.

[3] If you do well on some test you'd usually get job offers from professors, asking you to join their research projects. I showed up to interviews out of interest in the subject matter and professors are usually happy to tell you all about it, but wages for students are fixed at the legal minimum wage, so it couldn't ever be a serious consideration for someone already working on the free market.


Would an unwisely-configured site template or generator explain the scale here?

Or, a malicious site template or generator purposefully sprinkling potential backdoors for later?


But wouldn't some sort of SCA/SAST/DAST catch that?

Like if I'm importing a site template, ideally I'd be verifying either its source or its source code as well.

(Not being facetious btw - genuinely curious)


I was hoping ongoing coverage would answer that; it sounds like a perfect example. I heard that the tampered code redirects traffic to a sports betting site.


> I guess my question is why your employer or any other org would not follow the model above?

When you look at Artifactory pricing you ask yourself 'why should I pay them a metric truckload of money again?'

And then dockerhub goes down. Or npm. Or pypi. Or github... or, worst case, this thread happens.


There are cheaper or free alternatives to Artifactory. Yes they may not have all of the features but we are talking about a company that is fine with using a random CDN instead.

Or, in the case of javascript, you could just vendor your dependencies or do a nice "git add node_modules".


I just gave Artifactory as an example. What about GHE, self-hosted GitLab, or your own in-house Git?

Edit: I was thinking it would be a pain in the butt to manage. That tracks, but every org I know has some corporate versioning system that also has an upsell for source scanning.

(Not being facetious btw - genuinely curious)


I've been a part of a team which had to manage a set of geodistributed Artifactory clusters and it was a pain in the butt to manage, too - but these were self-hosted. At a certain scale you have to pick the least worst solution though, Artifactory seems to be that.


> have it checked by a vuln scanner

This is kinda sad. For introducing new dependencies, a vuln scanner makes sense (don't download viruses just because they came from a source checkout!), but we could have kept CDNs if we'd used signatures.

EDIT: Never mind, been out of the game for a bit! I see there is SRI now...

https://developer.mozilla.org/en-US/docs/Web/Security/Subres...


This supply chain attack had nothing to do with npm afaict.

The dependency in question seems to be (or claim to be) a lazy loader that determines browser support for various capabilities and selectively pulls in just the necessary polyfills; in theory this should make the frontend assets leaner.

But the CDN used for the polyfills was injecting malicious code.


Sounds like a bad idea to me.

I would expect latency (network round trip time) to make this entire exercise worthless. Most polyfills are 1kb or less. Splitting polyfill code amongst a bunch of small subresources that are loaded from a 3rd party domain sounds like it would be a net loss to performance. Especially since your page won’t be interactive until those resources have downloaded.

Your page will almost certainly load faster if you just put those polyfills in your main js bundle. It’ll be simpler and more reliable too.


In practice when this wasn't a Chinese adware service, it proved to be faster to use the CDN.

You are not loading a "bunch" of polyfill script files, you selected what you needed in the URL via a query parameter, and the service took that plus user agent of the request to determine which were needed and returned a minified file of just the necessary polyfills.

As this request was to a separate domain it did not run into the head-of-line / max connections per domain issue of HTTP/1.1, which was still the more common protocol at the time this service came out.


yes, but the NPM packaging ecosystem leads to a reliance on externally-hosted dependencies for those who don't want to bundle


> I always prefer to self-host my dependencies

Js dependencies should be pretty small compared to images or other resources. Http pipelining should make it fast to load them from your server with the rest

The only advantage to using one of those cdn-hosted versions is that it might help with browser caching


> Http pipelining should make it fast to load them from your server with the rest

That's true, but it should be emphasized that it's only fast if you bundle your dependencies, too.

Browsers and web developers haven't been able to find a way to eliminate a ~1ms/request penalty for each JS file, even if the files are coming out of the local cache.

If you're making five requests, that's fine, but if you're making even 100 requests for 10 dependencies and their dependencies, there's a 100ms incentive to do at least a bundle that concatenates your JS.

And once you've added a bundle step, you're a few minutes away from adding a bundler that minifies, which often saves 30% or more, which is usually way more than you probably saved from just concatenating.

> The only advantage to using one of those cdn-hosted versions is that it might help with browser caching

And that is not true. Browsers have separate caches for separate sites for privacy reasons. (Before that, sites could track you from site to site by seeing how long it took to load certain files from your cache, even if you'd disabled cookies and other tracking.)


nope, browsers silo cache to prevent tracking via cached resources


There is still a caching effect of the CDN for your servers, even if there isn't for the end user: if the CDN serves the file then your server does not have to.

Large CDNs with endpoints in multiple locations internationally also give the advantage of reducing latency: if your static content comes from the PoP closest to me (likely London, <20ms away where I'm currently sat, ~13 on FTTC at home⁰, ~10 at work) that could be quite a saving if your server is otherwise hundreds of ms away (~300ms for Tokyo, 150 for LA, 80 for New York). Unless you have caching set to be very aggressive dynamic content still needs to come from your server, but even then a high-tech CDN can² reduce the latency of the TCP connection handshake and¹ TLS handshake by reusing an already open connection between the CDN and the backing server(s) to pipeline new requests.

This may not be at all important for many well-designed sites, or sites where latency otherwise matters little enough that a few hundred ms a couple of times here or there isn't really going to particularly bother the user, but could be a significant benefit to many bad setups and even a few well-designed ones.

--------

[0] York. The real one. The best one. The one with history and culture. None of that “New” York rebranded New Amsterdam nonsense!

[1] if using HTTPS and you trust the CDN to re-encrypt, or HTTP and have the CDN add HTTPS, neither of which I would recommend as it is exactly a MitM situation, but both are often done

[2] assuming the CDN also manages your DNS for the whole site, or just a subdomain for the static resources, so the end user sees the benefit of the CDNs anycast DNS arrangement.


> prefer to avoid an npm-based webpack/whatever build pipeline

What kind of build pipeline do you prefer, or are you saying that you don't want any build pipeline at all?


I don't want a build pipeline. I want to write some HTML with a script type=module tag in it with some JavaScript, and I want that JavaScript to load the ES modules it depends on using import statements (or dynamic import function calls for lazy loading).
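Something like this, as a minimal sketch (the file paths and function names are hypothetical):

    <!doctype html>
    <script type="module">
      // Static import of a self-hosted ES module - no build step involved.
      import { renderApp } from "/js/app.js";
      renderApp(document.body);

      // Lazy-load an optional feature only when it's actually needed.
      document.querySelector("#settings")?.addEventListener("click", async () => {
        const { openSettings } = await import("/js/settings.js");
        openSettings();
      });
    </script>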


Do you not use CSS preprocessors or remote map files or anything like that... or do you just deal with all of that stuff manually instead of automating it?


That's still allowed! :)


I suspect this is more relevant for people who aren't normally JavaScript developers. (Let's say you use Go or Python normally.) It's a way of getting the benefits of multi-language development while still being mostly in your favorite language's ecosystem.

On the Node.js side, it's not uncommon to have npm modules that are really written in another language. For example, the esbuild npm downloads executables written in Go. (And then there's WebAssembly.)

In this way, popular single-language ecosystems evolve towards becoming more like multi-language ecosystems. Another example was Python getting 'wheels' straightened out.

So the equivalent for bringing JavaScript into the Python ecosystem might be having Python modules that adapt particular npm packages. Such a module would automatically generate JavaScript based on a particular npm, handling the toolchain issue for you.

A place to start might be a Python API for the npm command itself, which takes care of downloading the appropriate executable and running it. (Or maybe the equivalent for Bun or Deno?)

This is adding still more dependencies to your supply chain, although unlike a CDN, at least it's not a live dependency.

Sooner or later, we'll all depend on left-pad. :-)


I always prefer to self-host my dependencies,

Wouldn't this just be called hosting?


As you might know, Lit offers a single bundled file for the core library.


Yes! Love that about Lit. The problem is when I want to add other things that have their own dependency graph.


This is why I don't think it's very workable to avoid npm. It's the package manager of the ecosystem, and performs the job of downloading dependencies well.

I personally never want to go back to the pre-package-manager days for any language.


One argument is that JavaScript-in-the-browser has advanced a lot and there's less need for a build system (e.g. ESM modules in the browser).

I have some side projects that are mainly HTMX-based with some usage of libraries like D3.js and a small amount of hand-written Javascript. I don't feel that bad about using unpkg because I include signatures for my dependencies.


Before ESM I wasn't nearly as sold on skipping the build step, but now it feels like there's a much nicer browser native way of handling dependencies, if only I can get the files in the right shape!

The Rails community are leaning into this heavily now: https://github.com/rails/importmap-rails


npm is a package manager though, not a build system. If you use a library that has a dependency on another library, npm downloads the right version for you.


Yep. And so does unpkg. If you’re using JavaScript code through unpkg, you’re still using npm and your code is still bundled. You’re just getting someone else to do it, at a cost of introducing a 3rd party dependency.

I guess if your problem with npm and bundlers is you don’t want to run those programs, fine? I just don’t really understand what you gain from avoiding running bundlers on your local computer.


Oh lol yeah, I recently gave up and just made npm build part of my build for a hobby project I was really trying to keep super simple, because of this. It was too much of a hassle to link in stuff otherwise, even very minor small things

You shouldn't need to fish stuff out of node_modules though, just actually get it linked and bundled into one JS file so that it automatically grabs exactly what you need and its deps.

If this process sketches you out as it does me, one way to address that, as I do, is to have the bundle emitted with minification disabled so it's easy to review


That was my thought too but polyfill.io does do a bit more than a traditional library CDN, their server dispatches a different file depending on the requesting user agent, so only the polyfills needed by that browser are delivered and newer ones don't need to download and parse a bunch of useless code. If you check the source code they deliver from a sufficiently modern browser then it doesn't contain any code at all (well, unless they decide to serve you the backdoored version...)

https://polyfill.io/v3/polyfill.min.js

OTOH doing it that way means you can't use subresource integrity, so you really have to trust whoever is running the CDN even more than usual. As mentioned in the OP, Cloudflare and Fastly both host their own mirrors of this service if you still need to care about old browsers.


The shared CDN model might have made sense back when browsers used a shared cache, but they don't even do that anymore.

Static files are cheap to serve. Unless your site is getting hundreds of millions of page views, just plop the JS file on your webserver. With HTTP/2 it will probably be almost the same speed if not faster than a CDN in practice.


If you have hundreds of millions of pageviews, go with a trusted party - someone you actually pay money to - like Cloudflare, Akamai, or any major hosting / cloud party. But not to increase cache hit rate (what CDNs were originally intended for), but to reduce latency and move resources to the edge.


Does it even reduce latency that much (unless you have already squeezed latency out of everything else that you can)?

Presumably your backend at this point is not ultra-optimized. If you send a Link header and are using HTTP/2, the browser will download the JS file while your backend is doing its thing. I'm doubtful that moving JS to the edge would help that much in such a situation unless the client is on the literal other side of the world.

There of course comes a point where it does matter, I just think the crossover point is way later than people expect.


> Does it even reduce latency that much

Absolutely:

https://wondernetwork.com/pings/

Stockholm <-> Tokyo is at least 400ms here, anytime you have multi-national sites having a CDN is important. For your local city, not so much (and of course you won't even see it locally).


I understand that ping times are different when geolocated. My point was that in fairly typical scenarios (worst cases are going to be worse) it would be hidden by backend latency, since the fetch could be concurrent via Link headers or HTTP 103 Early Hints. The devil is in the details, of course.


I'm so glad to find some sane voices here! I mean, sure, if you're really serving a lot of traffic to Mombasa, Akamai will reduce latency. You could also try to avoid multi-megabyte downloads for a simple page.


Content: 50KB

Images: 1MB

Javascript: 35MB

Fonts: 200KB

Someone who is good at the internet please help me budget this. My bounce rate is dying.


While there are lots of bad examples out there - keep in mind it's not quite that straightforward, as it can make a big difference whether those resources are on the critical path that blocks first paint or not.


What's all that JavaScript for?


Cookie banner


It’s not an either or thing. Do both. Good sites are small and download locally. The CDN will work better (and be cheaper to use!) if you slim down your assets as well.


> But not to increase cache hit rate (what CDNs were originally intended for)

Was it really cache hit rate of the client or cache hit rate against the backend?


Both.


Even when it "made sense" from a page load performance perspective, plenty of us knew it was a security and privacy vulnerability just waiting to be exploited.

There was never really a compelling reason to use shared CDNs for most of the people I worked with, even among those obsessed with page load speeds.


In my experience, it was more about beating metrics in PageSpeed Insights and Pingdom, rather than actually thinking about the cost/risk ratio for end users. Often the people that were pushing for CDN usage were SEO/marketing people believing their website would rank higher for taking steps like these (rather than working with devs and having an open conversation about trade-offs, but maybe that's just my perspective from working in digital marketing agencies, rather than companies that took time to investigate all options).


I don’t think it ever even improved page load speeds, because it introduces another dns request, another tls handshake, and several network round trips just to what? Save a few kb on your js bundle size? That’s not a good deal! Just bundle small polyfills directly. At these sizes, network latency dominates download time for almost all users.


> I don’t think it ever even improved page load speeds, because it introduces another dns request, another tls handshake, and several network round trips just to what?

I think the original use case was when every site on the internet was using jQuery, and on a JS-based site this blocked display (this was also pre fancy things like HTTP/2 and TLS 0-RTT). Before cache partitioning you could reuse jQuery JS requested from a totally different site if it was currently in cache, as long as the JS file had the same URL, which almost all clients already had since jQuery was so popular.

So it made sense at one point but that was long ago and the world is different now.


I believe you could download from multiple domains at the same time, before HTTP/2 became more common, so even with the latency you'd still be ahead while your other resources were downloading. Then it became more difficult when you had things like plugins that depended on order of download.


You can download from multiple domains at once. But think about the order here:

1. The initial page load happens, which requires a DNS request, TLS handshake and finally HTML is downloaded. The TCP connection is kept alive for subsequent requests.

2. The HTML references javascript files - some of these are local URLs (locally hosted / bundled JS) and some are from 3rd party domains, like polyfill.

3a. Local JS is requested by having the browser send subsequent HTTP requests over the existing HTTP connection

3b. Content loaded from 3rd party domains (like this polyfill code) needs a new TCP connection handshake, a TLS handshake, and then finally the polyfills can be loaded. This requires several new round-trips to a different IP address.

4. The page is finally interactive - but only after all JS has been downloaded.

Your browser can do steps 3a and 3b in parallel. But I think it'll almost always be faster to just bundle the polyfill code in your existing JS bundle. Internet connections have very high bandwidth these days, but latency hasn't gotten better. The additional time to download (let's say) 10kb of JS is trivial. The extra time to do a DNS lookup, a TCP then TLS handshake and then send an HTTP request and get the response can be significant.

And you won't even notice when developing locally, because so much of this stuff will be cached on your local machine while you're working. You have to look at the performance profile to understand where the page load time is spent. Most web devs seem much more interested in chasing some new, shiny tech than learning how performance profiling works and how to make good websites with "old" (well loved, battle tested) techniques.


Aren't we also moving toward giving cross-origin scripts very little access to information about the page? I read some stuff a couple of years ago that gave me a very strong impression that running 3rd party scripts was quickly becoming an evolutionary dead end.


Definitely for browser extensions. It's become more difficult with needing to set up CORS, but like with most things that are difficult, you end up with developers that "open the floodgates" and allow as much as possible to get the job done without understanding the implications.


CORS is not required to run third party scripts. CORS is about reading data from third parties, not executing scripts from third parties.

(Unless you set a Cross-Origin Resource Policy header, but that is fairly obscure)


The same concept should be applied to container based build pipelines too. Instead of pulling dependencies from a CDN or a pull through cache, build them into a container and use that until you're ready to upgrade dependencies.

It's harder, but creates a clear boundary for updating dependencies. It also makes builds faster and makes old builds more reproducible since building an old version of your code becomes as simple as using the builder image from that point in time.

Here's a nice example [1] using Java.

1. https://phauer.com/2019/no-fat-jar-in-docker-image/


> The same concept should be applied to container based build pipelines too. Instead of pulling dependencies from a CDN or a pull through cache, build them into a container and use that until you're ready to upgrade dependencies.

Everything around your container wants to automatically update itself as well, and some of the changelogs are half emoji.


I get the impression this is a goal of Nix, but I haven't quite digested how their stuff works yet.


> self-host your dependencies

I can kind of understand why people went away from this, but this is how we did it for years/decades and it just worked. Yes, doing this does require more work for you, but that's just part of the job.


For performance reasons alone, you definitely want to host as much as possible on the same domain.

In my experience from inside companies, we went from self-hosting with largely ssh access to complex deployment automation and CI/CD that made it hard to include any new resource in the build process. I get the temptation: resources linked from external domains / cdns gave the frontend teams quick access to the libraries, fonts, tools, etc. they needed.

Thankfully things have changed for the better and it's much easier to include these things directly inside your project.


There was a brief period when the frontend dev world believed the most performant way to have everyone load, say, jquery, would be for every site to load it from the same CDN URL. From a trustworthy provider like Google, of course.

It turned out the browser domain sandboxing wasn’t as good as we thought, so this opened up side channel attacks, which led to browsers getting rid of cross-domain cache sharing; and of course it turns out that there’s really no such thing as a ‘trustworthy provider’ so the web dev community memory-holed that little side adventure and pivoted to npm.

Which is going GREAT by the way.

The advice is still out there, of course. W3schools says:

> One big advantage of using the hosted jQuery from Google:

> Many users already have downloaded jQuery from Google when visiting another site. As a result, it will be loaded from cache when they visit your site

https://www.w3schools.com/jquery/jquery_get_started.asp

Which hasn’t been true for years, but hey.


The only thing I’d trust w3schools to teach me is SEO. How do they stay on top of Google search results with such bad, out of date articles?


Be good at a time when Google manually ranks domains, then pivot to crap when Google stops updating the ranking. Same as the site formerly known as Wikia.


> For performance reasons alone, you definitely want to host as much as possible on the same domain.

It used to be the opposite. Browsers limit the number of concurrent requests to a domain. A way to circumvent that was to load your resources from a.example.com, b.example.com, c.example.com etc. You paid some time for extra DNS resolves I guess, but could then load many more resources at the same time.

Not as relevant anymore, with HTTP/2 allowing connections to be shared, and bundling files being more common.


Years ago I had terrible DNS service from my ISP, enough to make my DSL sometimes underperform dialup. About 1 in 20 DNS lookups would hang for many seconds so it was inevitable that any web site that pulled content from multiple domains would hang up for a long time when loading. Minimizing DNS lookups was necessary to get decent performance for me back then.


Using external tools can make it quite a lot harder to do differential analysis to triage the source of a bug.

The psychology of debugging is more important than most allow. Known unknowns introduce the possibility that an Other is responsible for our current predicament instead of one of the three people who touched the code since the problem happened (though I've also seen this when the number of people is exactly 1)

The judge and jury in your head will refuse to look at painful truths as long as there is reasonable doubt, and so being able to scapegoat a third party is a depressingly common gambit. People will attempt to put off paying the piper even if doing so means pissing off the piper in the process. That bill can come due multiple times.


Maybe people have been serving those megabytes of JS frameworks from some single-threaded python webserver (in dev/debug mode to boot) and wondered why they could only hit 30req/s or something like that.


Own your process – at best that CDN is spying on your users.


> and it just worked

Just to add... that is unlike the CDN thing, which will send developers to Stack Overflow looking up how to set up CORS.


I don't think SRI would have ever worked in this case because not only do they dynamically generate the polyfill based on URL parameters and user agent, but they were updating the polyfill implementations over time.


>self-host your dependencies behind a CDN service you control (just bunny/cloudflare/akamai/whatever is fine and cheap).

This is not always possible, and some dependencies will even disallow it (think: third-party suppliers). Anyways, then that CDN service's BGP routes are hijacked. Then what? See "BGP Routes" on https://joshua.hu/how-I-backdoored-your-supply-chain

But in general, I agree: websites pointing to random js files on the internet with questionable domain independence and security is a minefield that is already exploding in some places.


I strongly believe that Browser Dev Tools should have an extra column in the network tab that highlights JS from third party domains that don't have SRI. Likewise in the Security tab and against the JS in the Application Tab.


I've seen people reference CDNs for internal sites. I hate that because it is not only a security risk but it also means we depend on the CDN being reachable for the internal site to work.

It's especially annoying because the projects I've seen it on were using NPM anyway so they could have easily pulled the dependency in through there. Hell, even without NPM it's not hard to serve these JS libraries internally since they tend to get packed into one file (+ maybe a CSS file).


Also the folks who spec'ed ES6 modules didn't think it was a required feature to ship SRI from the start, so it's still not broadly and easily supported across browsers. I requested the `with` style import attributes 8 years ago and it's still not available. :/


Another downside of SRI is that it defeats streaming. The browser can't verify the checksum until the whole resource is downloaded so you don't get progressive decoding of images or streaming parsing of JS or HTML.


I can see CDNs like CF / Akamai soon becoming like an internet 1.2 - with the legitimate stuff in, and everything else considered the gray/dark/1.0 web.


I agree with this take, but it sounds like Funnull acquired the entirety of the project, so they could have published the malware through NPM as well.


> the World Economic Forum website is affected here, for example! Absolutely ridiculous

Dammit Jim, we’re economists, not dream weavers!


World Economic Forum website is affected here, for example!

What did they say about ownership? How ironic.


Meanwhile ...

"High security against CDN, WAF, CC, DDoS protection and SSL protects website owners and their visitors from all types of online threats"

... says the involved CDN's page (FUNNULL CDN).-

(Sure. Except the ones they themselves generate. Or the CCP.)


Another alternative is not to use dependencies that you or your company are not paying for.


The suntrain mentioned here is completely nuts, but a really cool solution: https://www.suntrain.co/


Search: