
Some are examples of good design ideas.


From my perspective, the web is fine from a technological standpoint.

Anybody can easily put text, images, video, audio online.

Finding something that is interesting is the hard part for me.

I would like to see examples of individuals putting out interesting content.

I mean apart from entertainment. Not stuff that is just funny or beautiful or thrilling.

Is there any blogger, tweeter, activitypubber, blueskyer, nostrerer or whatever out there, where you guys think "Damn, what they are doing is cool and interesting. I learn something from it for my life. I can't wait for the next update!"?


I'm not sure that content exists any more in any reasonable quantity. Occasionally an interesting video from someone will pop up, but generally it's all below average. The attention dynamics have focused all content on retaining viewers and the reinforcement metrics around that. This has normalised everything into mediocrity: longer content with no meat.

The only thing notable left in my mind which I await with interest is Ken Shirriff's blog: http://www.righto.com ... oh and Retraction Watch: https://retractionwatch.com


just look for people who don't care about viewer retention?

I've been having good luck with RSS and https://search.marginalia.nu


I also make heavy use of RSS. That's how I find basically everything I read and view.


I feel a lot of content creators moved to videos or podcasts, I think because that's easier to monetize, but there seem to be _a lot_ of cool and interesting things being put on the web these days.

E.g. I recently listened to a really interesting podcast about the Indo-European "invasion" of Europe.

I agree it's somewhat harder to find good stuff among the chaff, but it's not like it's not out there.


Do you have a link to that podcast?



It's a bit lazy of me, but I let others submit links to each update to a website, let another group of users upvote the content, and then use that website to find such content. I could curate such content myself, but I would still need some sort of an algorithm to surface the truly quality content from the morass of updates.


I have been following the web components discussion for years now and just don't see what I can do with them that makes my life as a fullstack developer better.

All the examples I have seen use them to template some data into html. I can do that with handlebars already.

Am I missing something?


There are two models of the "web", where HTML is a document and where HTML is the scenegraph of a more complicated app.

If you're using HTML as a document, you can use web components to include fancier interactive real-time features:

* A terminal emulator web component that attaches to a websocket
* A date picker web component, add features like checking if a date is already taken
* Custom form elements in general, a search-box that takes a URL for auto-completion suggestions
* A map, but not a full mapping application
* A data table, like the jquery plugin of old
* Lightweight interactivity like tab widgets
* Basically any of the custom components that jquery-ui provided

Yes you can do all of these without webcomponents, but the HTML is a lot cleaner and a lot more document like if it's a custom component. Mixing the model and scenegraph views of the web is not my favorite. It sure would be nice if there was a consistent library of web components available.

You can actually do pretty decent live-chat with something like HTMX and server-sent-events, I think. But it's sort of a progressive-enhancement view of HTML as a document model.


Full native isolation. HTML, CSS, JS, the whole thing, no tricks, the browser isolates it for you. It's really nice to be able to make a web component, clearly define the JS, HTML and even CSS API in terms of variables, and then throw it into any environment without it breaking, or without creating a complex maze of CSS name spaces and framework dependencies.


Is there really ever a use case for that?

When do you want part of your page to have different fonts, colors, everything from the rest of the page?


We ran into this too, and ended up not using the Shadow DOM at all. We want our stuff to automatically use the page's fonts, sizes, colors etc. Also we want customers to be able to customize stuff with CSS without having to use JS hacks to inject CSS into the shadow DOM (this gets especially cumbersome when you're nesting web components). Personally I feel like the shadow DOM is the most oversold part of web components, in that it's so all-or-nothing that it often creates more problems than it solves.


Which is another thing I love about web components - if you don’t want shadow dom, then don’t use it - you can build using just custom elements.


True, but I wish there was a way to say "don't inherit anything here except fonts and colors".

Being able to make a new root for rems would be nice too.


The Shadow DOM does exactly what you are asking for here. Just use CSS variables, they pierce right through:

https://open-wc.org/guides/knowledge/styling/styles-piercing...


Yeah, so this means that if I want to use a page's font family but not its font size, the user has to put in extra effort, and set not just `font-family: "Comic Sans"`, but also `--some-component-font-family: "Comic Sans"`. I'd love it if I could just selectively inherit some stuff and not other stuff, without the user having to learn which CSS variables my thing supports. Of course you can't do this with domain-specific stuff, but you could make a thing fit kinda sorta well into its environment by default, and right now using a shadow DOM doesn't let you do that.


It is not a choice in many situations. For a large company that has many different teams that own many different parts of a product, even though teams adhere to the same "UI standards", things get complicated quickly. For example, CSS classes with name conflicts can cause the UI to break (which happens more often than you think, and strictly adhering to naming rules is just hard for humans). That's just one example. Custom elements with shadow DOM are a simple and straightforward solution to this, and it makes sense -- JavaScript code is scoped to modules and uses imports/exports to define interfaces, and it is just natural to do that for UI instead of putting every class that other people don't care about in the global CSS space.


The use-case for having isolated objects with parameters, much like classes in Java, is to be able to a) share code, b) hide internal details, and c) have object behavior be governed solely by a constrained set of inputs.

So the point isn't to have your web component be different from the rest of the page. The point is that you can pass in parameters to make an off-the-shelf component look how you want. However, exactly how much freedom you want to give users is up to the component author. It is possible for there to be too little freedom, true.

See here [1] for a concrete example of someone writing a reusable web component, and figuring out how to let users customize the styling.

[1]: https://nolanlawson.com/2021/01/03/options-for-styling-web-c...


In terms of style isolation, the answer is very much “it depends”. And it depends as much on the nature of what the component does, as the kind of isolation you want to achieve.

- Namespace isolation. Example: you have different components in the same codebase or otherwise meant to work together; you may want assurance that a locally defined style attached to class “foo” doesn’t have unexpected effects on other components which happen to use the same class a different way. This is commonly achieved with build tooling, eg by mangling class names.

- Cascade isolation. Example: you have an embeddable widget that you want to look consistent in any context, regardless of the styles of its parent DOM. This is achievable to some extent without custom elements, but they are a way to achieve it with confidence in a relatively straightforward way (at the expense of other limitations and complexity).


This 100%. Web components get praised for this isolation - and it’s like the exact thing I do not want if I’m building an application. Like try to have a global CSS theme, or use bootstrap, etc. (Please don’t suggest I embed a link to global CSS in every component.)

Like I get it if you’re sharing a component on different sites, like an embedded component or ad banner, etc. But it just gets in the way if you’re trying to do normal things that the majority of web apps need.


There are ways to apply global styles to your components other than importing a global sheet. There's just not a standard way defined in the spec. Isolation by default is the correct path for them to take compared to the alternatives. That doesn't make it useless just because you don't know how to do it in a good way.

See https://developer.mozilla.org/en-US/docs/Web/API/Web_compone...


My main point is that it gets in the way, unlike most other web frameworks, when building normal applications. It's a headwind that always comes up and hurts adoption imho.


From reading the parent site the use cases seem to be large enterprise and cross site components, say a twitter embed or advertising bar.


Web components don't do any JS isolation, right? And when you say HTML isolation, I think it's probably important to say that any custom element registered is global, so if Web-Component-A uses Web-Component-B version 1 then Web-Component-C can't use Web-Component-B version 2 (unless Web-Component-B includes the version in its name).

If you want actual isolation of the whole web stack (HTML/CSS/JS) I don't think there are any alternatives to iframes.


It is my personal nightmare. I want to write custom CSS to customize my Homeassistant. Only to find out every single thing is in a shadow root and I cannot write CSS to address what I want to change. WHICH WOULD BE REALLY SIMPLE IF I COULD WRITE PLAIN OLD CSS.

I can't believe how extremely mad and frustrated I became when I found out - writing this out fills me with rage.


They are very good for progressive enhancement, for example you could have a web component that wraps a <table> to add fancy features like filtering or drag-and-drop reordering. If JS is disabled or fails to load, the user just gets a plain table, but they still get the content. When this stuff was new, that was much better for the user than what would happen with a front-end framework (they would get a white page), but now server-side rendering is widely available in those frameworks it’s less of a selling point.

They are also good for style encapsulation, i.e. you could drop someone else’s component in your page and not worry about it affecting or being affected by your CSS. Anecdotally I feel like that is less of a common desire now than it was ~10 years ago, with the rise of both “headless” UI libraries (behaviour without dictating appearance) and the prevalence of scoped styles in front-end frameworks.

What does annoy me about the standard is that to use slots you must opt into the shadow DOM, which means that if you’re trying to create reusable components for your own stuff, you can’t style them just by dropping a stylesheet into the page. I’m sure there’s a technical reason why this is the case, but annoying nonetheless.


Without the shadow dom, your component can still have children.

If you need several slots, there's an example duplicating that functionality with javascript in the second comment of this blog post: https://frontendmasters.com/blog/light-dom-only/


the examples probably avoid talking about the dynamic APIs because they are super ugly and very stateful. It's hard to say that Web Components are the future when your demo shows 200 lines of manual DOM reconciliation.


I've come to the same conclusion while using them pretty extensively. The idea is nice, but they are not there yet.


Sure, there is some pain, but after years of several Angular migrations and a Vue migration, I'd say that pain is much less than the pain of framework migration. People often overlook what it's like to live with a framework long term.


This could summarize every interaction I’ve had with Web Components since the beginning. Web Components are becoming the Nuclear Fusion of Web standards.


Only 10 more years or so, pinky promise ;)


Web Components let you use a UI component made in a different framework than yours. That's it. For most other purposes they're pretty awful.

Also they let you publish a UI component that works in every framework, without having to build 7 versions of it (or just exclude everyone who's not on React, or something like that)


That’s also the conclusion I’ve drawn. This seems to be the reason Web Components exists.


IMO the slots that allow a component to have children are the difference. You can compose independent components that way. Also, the styles are scoped to the component by default, and you can only break the scope with custom vars (CSS vars).


Web components are a standard and built into the browser, Handlebars is a separate library. Web components are a way to encapsulate and isolate structure (html), appearance (css) and behaviour (js) into components, extending HTML.

It's like jquery widgets but without dependencies.


Are you genuinely asking as a professional? Seems like a big ask for someone to go over all you are misunderstanding if you think they are equivalent to a template language.


The standard, like PWA, is designed for maximum feasible misunderstanding. To the average dev it seems like a random bunch of features that don’t hang together. Five developers could look at it and be like the blind men discussing the elephant, each hung up on individual parts and not seeing the whole, if there is a whole.

There are a lot of candidates for “what’s wrong with modern web standards” but this fragmentation, which comes from a rather mathematical view of programming, is one of them. Thing is, a lot of web devs never studied computer science (even CS 101) and less than 5% live in San Francisco.


> and less than 5% live in San Francisco

How will they ever understand web components.


On the other hand, if they're completely different from a template language it seems like it should be just a moments work to demonstrate why, and help a fellow professional understand what they're missing.


Well um

1) You don't have to load an external library

2) Shadow DOM

3) Dynamic slots

That’s about it, honestly LOL.

I guess the main point of most browser APIs was to let apps use browser features.

This one actually tried to make a standard way for apps to use other apps. But they already had their own libraries so nyeh, thank you very much! LOL


Now I'm curious, what template engines don't have dynamic slots? Is that actually a rare enough feature to declare webcomponents special for having it? Does webcomponents have an advanced version of it?


Removing the GIL sounds like it will make typical Python programs slower and will introduce a lot of complexity?

What is the real world benefit we will get in return?

In the rare case where I need to max out more than one CPU core, I usually implement that by having the OS run multiple instances of my program and put a bit of parallelization logic into the program itself. Like in the mandelbrot example the author gives, I would simply tell each instance of the program which part of the image it will calculate.
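
A minimal sketch of what I mean (the file name and the band-splitting scheme are made up for illustration):

    import sys

    def mandelbrot_row(y, width, height, max_iter=100):
        # Escape-time iteration for one row of pixels.
        row = []
        for x in range(width):
            c = complex(3.5 * x / width - 2.5, 2.0 * y / height - 1.0)
            z, n = 0j, 0
            while abs(z) <= 2 and n < max_iter:
                z = z * z + c
                n += 1
            row.append(n)
        return row

    if __name__ == "__main__":
        # e.g. launch `python mandel.py 0 4` ... `python mandel.py 3 4`
        # and let the OS schedule one process per core
        part, total = int(sys.argv[1]), int(sys.argv[2])
        width = height = 800
        for y in range(part, height, total):
            print(y, *mandelbrot_row(y, width, height))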


There is an argument that if you need in-process multithreading you should use a different language. But a lot of people need to use Python because everything else they're doing is in Python.

There are quite a few common cases where in-process multithreading is useful. The main ones are where you have large inputs or large outputs to the work units. In-process is nice because you can share the input or output state with the work units instead of having to copy it.

One very common case is almost all gui applications. Where you want to be able to do all work on background threads and just move data back and forth from the coordinating ui thread. JavaScript’s lack of support here, outside of a native language compiled into emscripten, is one reason web apps are so hard to make jankless. The copies of data across web workers or python processes are quite expensive as far as things go.

Once a week or so, I run into a high-compute Python scenario where the existing forms of multiprocessing fail me: large shared inputs, and/or not wanting the multiprocess overhead; but the GIL slows everything down.


> Where you want to be able to do all work on background threads and just move data back and forth from the coordinating ui thread. JavaScript’s lack of support here, outside of a native language compiled into emscripten, is one reason web apps are so hard to make jankless

I thought transferring array buffers through web workers didn't involve any copies if you actually transferred ownership:

    worker.postMessage(view.buffer, [view.buffer]);
I can understand that web workers might be more annoying to orchestrate than native threads and the like but I’m not sure that it lacks the primitives to make it possible. More likely it’s really hard to have a pauseless GC for JS (Python predominantly relies on reference counting and uses gc just to catch cycles).


This is true, but when do you really work with array buffers in Javascript? The default choice for whatever it is that you're doing is almost always something else, save for a few edge cases, and then you're stuck trying to bend your business logic to a different data type.


That’s a choice you get to make and probably depends on your problem domain and other things. For example when I was writing R2 it was all ArrayBuffers up and down the stack. And you could use something like capnproto or flat buffers for managing your object graph within an array buffer. But yeah, being able to transfer custom object graphs would be more powerful.


There is this assumption in these discussions that anything consuming significant CPU must necessarily have a simple interface that’s easy to reduce to a C-level ABI, like calling an ML library on an image, a numpy function on two big matrices or some encryption function. Therefore it is trivial to just move these to native code with an easy, narrow interface.

This assumption is incorrect. There are plenty of problems that consist entirely of business logic manipulating large and complex object graphs. “Just rewrite the hot function in rust, bro” and “just import multiprocessing, bro” are functionally identical to rewriting most of the application for these.

The performance work of the last few years, free threading and the JIT, is very valuable for these. All the rest is already written in C.


It's a good assumption though, because it keeps (in this case, kept) the door closed to the absolutely nightmarish landscape of "multithreading to the masses". Those who made it open probably see it better, but, imo and ime, it should remain closed. Maybe they'll manage to handle it this time, but I'm 95% sure it's gonna be yet another round of ass pain for the world of Python.


otoh, if all of your time is spent in Python code and you have performance issues, it's time to rewrite in a different language. Correct multithreaded code is quite hard, and Python is inherently really slow. The inherent complexity of multithreaded code is enough that you should just write the single-threaded version in a language that's 10x faster (of which there are many).


> “Just rewrite the hot function in rust, bro” and “just import multiprocessing, bro” are functionally identical to rewriting most of the application for these.

Isn't "just use threads, bro" likely to be equally difficult?


No, it's much harder.


Is this some internal cloudflare feature flag or can everybody pass ArrayBuffers zero-copy via service bindings?

(random question, totally understand if you're not the right person to ask)



As always, it depends a lot on what you're doing, and a lot of people are using Python for AI.

One of the drawbacks of multi-processing versus multi-threading is that you cannot share memory (easily, cheaply) between processes. During model training, and even during inference, this becomes a problem.

For example, imagine a high volume, low latency, synchronous computer vision inference service. If you're handling each request in a different process, then you're going to have to jump through a bunch of hoops to make this performant. For example, you'll need to use shared memory to move data around, because images are large, and sockets are slow. Another issue is that each process will need a different copy of the model in GPU memory, which is a problem in a world where GPU memory is at a premium. You could of course have a single process for the GPU processing part of your model, and then automatically batch inputs into this process, etc. etc. (and people do) but all this is just to work around the lack of proper threading support in Python.
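
To make the shared-memory hoops concrete, here is a rough sketch using just the standard library (the shapes and names are made up; a real service would reuse the block across requests):

    import numpy as np
    from multiprocessing import Process, shared_memory

    def infer(shm_name, shape, dtype):
        # Attach to the existing block; the pixels are not copied or pickled.
        shm = shared_memory.SharedMemory(name=shm_name)
        image = np.ndarray(shape, dtype=dtype, buffer=shm.buf)
        print("mean pixel:", image.mean())  # stand-in for real inference
        shm.close()

    if __name__ == "__main__":
        frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # one "large image"
        shm = shared_memory.SharedMemory(create=True, size=frame.nbytes)
        view = np.ndarray(frame.shape, dtype=frame.dtype, buffer=shm.buf)
        view[:] = frame  # one copy in, instead of pushing it through a socket
        p = Process(target=infer, args=(shm.name, frame.shape, frame.dtype))
        p.start()
        p.join()
        shm.close()
        shm.unlink()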

By the way, if anyone is struggling with these challenges today, I recommend taking a peek at Nvidia's Triton inference server (https://github.com/triton-inference-server/server), which handles a lot of these details for you. It supports things like zero-copy sharing of tensors between parts of your model running in different processes/threads and does auto-batching between requests as well. Especially auto-batching gave us a big throughput increase with a minor latency penalty!


> For example, imagine a high volume, low latency, synchronous computer vision inference service.

I'm not in this space and this is probably too simplistic, but I would think pairing asyncio to do all IO (reading / decoding requests and preparing them for inference) coupled with asyncio.to_thread'd calls to do_inference_in_C_with_the_GIL_released(my_prepared_request), would get you nearly all of the performance benefit using current Python.
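
Roughly this shape, I imagine (the inference function is the hypothetical one above, stubbed out so the sketch runs):

    import asyncio

    def do_inference_in_C_with_the_GIL_released(prepared):
        # stub standing in for a C extension call that drops the GIL
        return prepared * 2

    async def handle(raw_request):
        prepared = raw_request  # stand-in for decoding/preparing the request
        # to_thread moves the call off the event loop; while the C code
        # holds no GIL, other requests keep being decoded and encoded
        return await asyncio.to_thread(
            do_inference_in_C_with_the_GIL_released, prepared)

    if __name__ == "__main__":
        print(asyncio.run(handle(21)))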


Machine learning people not call their thing Triton challenge (IMPOSSIBLE)


This (Nvidia’s) Triton predates OpenAI’s by a few years.


The biggest use case (that I am aware of) for GIL-less Python is feeding data into ML model training in parallel.

* PyTorch currently uses `multiprocessing` for that, but it is fraught with bugs and with less than ideal performance, which is sorely needed for ML training (it can starve the GPU).

* Tensorflow just discards Python for data loading. Its data loaders are actually in C++ so it has no performance problems. But it is so inflexible that it is always painful for me to load data in TF.

Given how hot ML is, and how Python is currently the major language for ML, it makes sense for them to optimize for this.


> Removing the GIL sounds like it will make typical Python programs slower and will introduce a lot of complexity?

This was the original reason for CPython to retain the GIL for a very long time, and it was probably true for most of that time. That's why the eventual GIL removal had to be paired with other important performance improvements like the JIT, which was only implemented after some feasible paths were found and explicitly funded by a big sponsor.


That is the official story. None of it has materialized so far.


Python development is done in public so you can just benchmark against the development version to see its improvement. In fact, daily benchmarks are already posted to [1]; it indicates around 20% improvement (corresponding to 1.25x in the table) since 3.10. The only thing you can't easily verify is that whether GIL was indeed historically necessary in the past.

[1] https://github.com/faster-cpython/benchmarking-public


My hunch is that in just a few years time single core computers will be almost extinct. Removing the GIL now feels to me like good strategic preparation for the near future.


It depends what you mean by extinct.

I can't think of any actual computer outside of embedded that has been single core for at least a decade. The Core Duo and Athlon X2 were released almost 20 years ago now and within a few years basically everything was multicore.

(When did we get old?)

If you mean that single core workloads will be extinct, well, that's a harder sell.


Yeah, I just checked and even a RaspberryPi has four cores these days. So I guess they went extinct a long time ago!


Yes, but:

* Most of the programs I write are not (trivially) parallelizable, and the bottleneck is still single-core performance

* There is more than one process at any time, especially on servers. Other cores are also busy and have their own work to do.


Yes, but:

1. Other people with different needs exist.

2. That's why we have schedulers.


Even many microcontrollers have multiple cores nowadays. It’s not the norm just yet, though.


Single-core computers, yes. Single-core containers though...


Single core containers are also a terrible idea. Life got much less deadlocked as soon as there were 2+ processors everywhere.

(Huh, people like hard OS design problems for marginal behavior? OSes had trouble adopting SMP but we also got to jettison a lot of deadlock discussions as soon as there was CPU 2. It only takes a few people not prioritizing 1 CPU testing at any layer to make your 1 CPU container much worse than a 2 VCPU container limited to a 1 CPU average.)


It's actually quite difficult to get a "single core" container (ie: a container with access to only one logical processor).

When you set "request: 1" in Kubernetes or another container manager, you're saying "give me 1 CPU worth of CPU time" but if the underlying Linux host has 16 logical cores your container will still see them.

Your container is free to use 1/16th of each of them, 100% of one of them, or anything in-between.

You might think this doesn't matter in the end, but it can if you have a lot of workloads on that node and those cores are busy. Your single-threaded throughput can become quite compromised as a result.


It's easy, though?

On Docker, --cpuset-cpus=0 will pin the container to the first core.

K8s: https://kubernetes.io/docs/tasks/administer-cluster/cpu-mana...

CPU affinity and pinning is something I think you should be able to achieve without too much hassle.


I think the point was this isn’t the norm though. If you know you need to be pinned to a core you CAN configure kubernetes to do so but it’s not the default and therefore you are unknowingly leaving performance on the floor


I'm quite certain you'd leave more performance on the table by pinning in general and on average.

Just let the CPU scheduler do its job. Unless you know better, in which case, by all means go ahead and allocate computational resources manually. I don't see a way to make that a sensible default, though.


> It's easy though

Neat, I didn't know it was a single flag in Docker.

The k8s method you linked definitely has some caveats, as it doesn't allow this scheduling at the pod level, and requires quite a bit of fiddling to get working (at least on GKE). This isn't even available if you use a fully managed setup like Autopilot.

Maybe my expectations just aren't realistic but "easy" to me would mean I put the affinity right next to my CPU request in the podSpec :/


> if you have a lot of workloads on that node and those cores are busy. Your single threaded throughout can become quite compromised as a result.

While yes this can cause a slowdown, wouldn't it still happen if each container thought it had a single core?


Only if you have more containers than cores.


That depends on what your scheduler does. Having one virtual core doesn't necessarily mean you always get the same physical core.

Also you said "a lot of workloads" so yes probably more containers than cores.


Most of my pods have a CPU request >= 1 so more containers than cores is rare. But obviously that really depends on your workload(s).

I don't think the scheduler picking a different core matters much unless your workload is super cache sensitive. My point is more about access to single threaded performance. If you have a single threaded workload (ex: an ffmpeg audio encode) and you want it to be able to access as many cycles from a single core as possible, it isn't always as simple as request: 1


> My hunch is that in just a few years time single core computers will be almost extinct.

Single core computers are already functionally extinct, but single-threaded programs are not.


Depends on the OS; on Windows or Android, even single processes have multiple threads under the hood.


> What is the real world benefit we will get in return?

If you have many CPU cores and an embarrassingly parallel algorithm, multi-threaded Python can now approach the performance of a single-threaded compiled language.
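
A toy sketch of the kind of workload I mean (the work function is made up); on a free-threaded build the threads can use all cores, while with the GIL they take turns:

    from concurrent.futures import ThreadPoolExecutor

    def count_primes(limit):
        # Pure-Python CPU work; with the GIL this cannot run in parallel.
        count = 0
        for n in range(2, limit):
            if all(n % d for d in range(2, int(n ** 0.5) + 1)):
                count += 1
        return count

    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=8) as pool:
            print(sum(pool.map(count_primes, [100_000] * 8)))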


The question really is whether one couldn't make multiprocessing better instead of going multithreaded. I did a ton of MPI work with Python ten years ago already.

What's more I am now seeing in Julia that multithreading doesn't scale to larger core counts (like 128) due to the garbage collector. I had to revert to multithreaded again.


I assume you meant you had to revert to multiprocess?


Yes exactly. Thanks.


You could already easily parallelize with the multiprocessing module.

The real difference is the lower communication overhead between threads vs. processes thanks to a shared address space.
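
A rough way to see that difference (a toy benchmark, exact numbers will vary): the process pool has to pickle `big` over a pipe, while the thread pool just shares a reference.

    import time
    from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

    def total(xs):
        return sum(xs)

    if __name__ == "__main__":
        big = list(range(5_000_000))  # a large shared input
        for pool_cls in (ThreadPoolExecutor, ProcessPoolExecutor):
            start = time.perf_counter()
            with pool_cls(max_workers=2) as pool:
                # the process pool serializes `big` to the worker;
                # the thread pool hands over a reference, zero copies
                pool.submit(total, big).result()
            print(pool_cls.__name__, f"{time.perf_counter() - start:.2f}s")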


Easily is an overstatement. Multiprocessing is fraught with quirks.


Well I once had an analytics/statistics tool that regularly chewed through a couple GBs of CSV files. After enough features had been added it took almost 5 minutes per run which got really annoying.

It took me less than an hour to add multiprocessing to analyze each file in its own process and merge the results together at the end. The runtime dropped to a couple seconds on my 24 thread machine.

It really was much easier than expected. Rewriting it in C++ would have probably taken a week.


In F#, it would just be

    let results = files |> Array.Parallel.map processFile
Literally that easy.

Earlier this week, I used a ProcessPoolExecutor to run some things in their own process. I needed a bare minimum of synchronization, so I needed a queue. Well, multiprocessing has its own queue. But that queue is not joinable. So I chose the multiprocessing JoinableQueue. Well, it turns out that that queue can't be used across processes. For that, you need to get a queue from the launching process' manager. That Queue is the regular Python queue.

It is a gigantic mess. And yes, asyncio also has its own queue class. So in Python, you literally have a half a dozen or so queue classes that are all incompatible, have different interfaces, and have different limitations that are rarely documented.

That's just one highlight of the mess between threading, asyncio, and multiprocessing.
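
For reference, the combination that ended up working looks roughly like this (a sketch, details simplified):

    from concurrent.futures import ProcessPoolExecutor
    from multiprocessing import Manager

    def worker(q, item):
        # A plain multiprocessing.Queue can't be pickled into pool
        # workers; a manager's Queue proxy can.
        q.put(item * item)

    if __name__ == "__main__":
        with Manager() as mgr, ProcessPoolExecutor(max_workers=4) as pool:
            q = mgr.Queue()
            futures = [pool.submit(worker, q, i) for i in range(8)]
            for f in futures:
                f.result()
            while not q.empty():
                print(q.get())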


Well I'm not here to debate the API cleanliness, I just wanted to point out to OP that Python can utilize multicore processors without threads ;)

Here is the part of multiprocessing I used:

  with Pool() as p:
      results = p.map(calc_func, file_paths)
So, pretty easy too IMO.


Fraught with quirks sounds quite ominous. Quuuiiirkkksss.

I agree though.


That's not really correct. Python is by far the slowest mainstream language. It is embarrassingly slow. Furthermore, several mainstream compiled languages are already multicore-compatible and have been for decades. So comparing against a single-threaded language or program doesn't make sense.

All this really means is that Python catches up on decades old language design.

However, it simply adds yet another design input. Python's threading, multiprocessing, and asyncio paradigms were all developed to get around the limitations of Python's performance issues and the lack of support for multicore. So my question is, how does this change affect the decision tree for selecting which paradigm(s) to use?


> Python's threading, multiprocessing, and asyncio paradigms were all developed to get around the limitations of Python's performance issues and the lack of support for multicore.

Threading is literally just Python's multithreading support, using standard OS threads, and async exists for the same reason it exists in a bunch of languages without even a GIL: OS threads have overhead, multiplexing IO-bound work over OS threads is useful.

Only multiprocessing can be construed as having been developed to get around the GIL.


No, asyncio's implementation exists because threading in Python has huge overhead for switching between threads and because threads don't use more than one core. So asyncio was introduced as a single threaded solution specifically for only network-based IO.

In any other language, async is implemented on top of the threading model, both because the threading model is more efficient than Python's and because it actually supports multiple cores.

Multiprocessing isn't needed in other languages because, again, their threading models support multiple cores.

So the three, relatively incompatible paradigms of asyncio, threading, and multiprocessing specifically in Python are indeed separate attempts to account for Python's poor design. Other languages do not have this embedded complexity.


> In any other language, async is implemented on top of the threading model

There are a lot of other languages. JavaScript, for example, is a pretty popular language where async on a single-threaded event loop has been the model since the beginning.

Async is useful even if you don't have an interpreter that introduces contention on a single "global interpreter lock." Just look at all the languages without this constraint that still work to implement async more naturally than just using callbacks.

Threads in Python are very useful even without removing the gil (performance critical sections have been written as extension modules for a long time, and often release the gil).

> are indeed separate attempts to account for Python's poor design

They all have tradeoffs. There are warts, but as designed it fits a particular use case very well.

Calling Python's design "poor" is hubris.

> So my question is, how does this change affect the decision tree for selecting which paradigm(s) to use?

The only effect I can see is that it reduces the chances that you'll reach for multiprocessing, unless you're using it with a process pool spread across multiple machines (so they can't share address space anyway)


> Calling Python's design "poor" is hubris.

Not in the least. Python is a poorly designed language by many accounts. Despite being the most popular language in the world, what language has it significantly influenced? None of note.


> Python is a poorly designed language by many accounts

Hubris isn't rare.

> what language has it significantly influenced?

I can think of at least 1 language designer[1] who doesn't think it's "poorly designed," based on its significant impact on what they're currently working on[2]

1. https://en.m.wikipedia.org/wiki/Chris_Lattner

2. https://www.modular.com/mojo


Who cares about how many other languages a language has influenced? If that was a metric of any consideration we all would write Algol or something. Programming languages are tools, tools to help you perform a task.


>Python is by far the slowest mainstream language. It is embarrassingly slow.

Oh? It is by far the fastest language for me. No language comes close on the time from starting to write to having code that runs. For me, that time far outweighs the execution time, so it is a lot more important.


May I ask what field you specialize in? Because any modern language I can think of is one "project init" command away from going from "nothing" to "running".


What you’re describing is basically using MPI in some way, shape or form. This works, but also can introduce a lot of complexity. If your program doesn’t need to communicate, then it’s easy. But that’s not the case for all programs. Especially once we’re talking about simulations and other applications running on HPC systems.

Sometimes it’s also easier to split work using multiple threads. Other programming languages let you do that and actually use multiple threads efficiently. In Python, the benefit was just too limited due to the GIL.


> Removing the GIL sounds like it will make typical Python programs slower and will introduce a lot of complexity?

There is a lot of Python code that explicitly (or implicitly) relies on the GIL for correctness in multithreaded programs.

I myself have even written such code, explicitly relying on the GIL as a synchronization primitive.

Removing the GIL will break that code in subtle and difficult-to-track-down ways.
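
The classic shape of such code, as an illustrative toy (not anything from a real codebase):

    import threading

    counter = 0

    def bump(n):
        global counter
        for _ in range(n):
            # read-modify-write with no lock: under the GIL the window for
            # a lost update is tiny, so this often "works" for years; with
            # free threading, lost updates show up almost every run
            counter += 1

    if __name__ == "__main__":
        threads = [threading.Thread(target=bump, args=(100_000,))
                   for _ in range(4)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print(counter)  # 400000 only if no update was lost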

The good news is that a large percentage of this code will stay running on older versions of python (2.7 even) and so will always have a GIL around.

Some of it, however, will end up running on no-GIL Python, and I don't envy the developers who will be tasked with tracking down the bugs - but probably those will run on modern versions of Python using --with-gil or whatever other flag is provided to enable the GIL.

The benefit to the rest of the world then is that future programs will be able to take advantage of multiple cores with shared memory, without needing to jump through the hoops of multi-process Python.

Python has been feeling the pain of the GIL in this area for many years already, and removing the GIL will make Python more viable for a whole host of applications.


> What is the real world benefit we will get in return?

None. I've been using Python "in anger" for twenty years and the GIL has been a problem zero times. It seems to me that removing the GIL will only make for more difficulty in debugging.


There will be consumer chips with 64 cores before long


Do FireFox, Chrome and Safari still use unencrypted channels for DNS queries?

What is the state of DNS over HTTPS?


`sudo tcpdump port 53` says yes, they do use unencrypted DNS.

AFAIK Chrome has a hardcoded list of DNS servers which offer encrypted DNS, i.e. if your DHCP server tells your PC to use 8.8.8.8, 1.1.1.1, 9.9.9.9 (or the IPv6 equivalents), it will instead connect to the equivalent DNS-over-HTTPS endpoint for that DNS provider. This is a compromise to avoid breaking network-level DNS overrides such as filtering or split-horizon DNS. It's not limited to public DNS providers either; ISP DNS servers are in there. (I've seen Chrome connect to Comcast's DNS-over-HTTPS service when Comcast's DNS was advertised via DHCP.)

Of course, this is pretty limited. Chrome obviously can't hardcode every DNS server, and tons of networks use private IPs for DNS even though they don't do any sort of filtering / split-horizon at all. (My Eero router has a local DNS cache, so even if my ISP's DNS servers were in Google's hardcoded list, it wouldn't use DNS-over-HTTPS, because all Chrome can see is that my DNS server is 192.168.4.1)


> Do FireFox, Chrome and Safari still use unencrypted channels for DNS queries?

Firefox for sure has a "corporate" setting which guarantees that DNS queries are unencrypted, using port 53 (virtually always UDP; TCP over port 53 is technically possible, but a firewall that only ever allows UDP over port 53 for a browser works flawlessly).

AFAIK Chrome/Chromium also has such a setting and making sure that setting is on bypasses DoH.

I force all of our browsers (mine, my wife's, my kid's) to use my own DNS resolver over UDP port 53 (my own DNS resolver is on my LAN, but it could be on a server if I wanted to).

That DNS resolver can then, if you want, only use DoH.

To me it's the best of both worlds: "corporate" DNS setting to force UDP port 53 and then DoH from your own DNS resolver.

The benefit compared to directly using DoH from your browser is that you get to resolve to 0.0.0.0 or NX_DOMAIN a shitload of ads/telemetry/malware/porn domains.

You can also, from all your machines (but not from your DNS resolver), blocklist all the known DoH servers IPs.


I don't want my browser ignoring my DNS settings. I went through a lot of effort to set up Pihole in front of a local BIND server with split-horizon DNS for my VPS subdomains and my local subdomains, with caching and control over upstream resolvers, routed through Wireguard to avoid ISP snooping/hijacking.

It's bad enough that so many devices and applications already ignore DNS settings or hard-code IPs. I want everything going through my DNS.


Block all outgoing traffic to port 53 (and port 853, which DoT uses) in your router. This catches everything using plaintext DNS or DoT.


This does nothing to stop anything intentionally circumventing your DNS settings. There's no reason DNS traffic has to be on port 53, and DoH is undetectable.


>This does nothing to stop anything intentionally circumventing your DNS settings.

It makes it substantially more difficult. My firewall statistics are proof of that. On a production network you'd have everything blocked.


    the average developer spends 42% of their
    work week on maintenance
Indeed, I see that happening all around me when I watch how my friends build their startups. The first few months they are productive, and then they sink deeper and deeper into the quicksand of catching up with changes in their stack.

So far, I have done a somewhat good job of avoiding that. And I have a keen eye on avoiding it for the future.

I think a good stack to use is:

    OS: Debian
    DB: SQLite
    Webserver: Apache
    Backend Language: Python
    Backend Library: Django
    Frontend: HTML + CSS + Javascript
    Frontend library: Handlebars
And no other dependencies.

I called Django a "library" instead of a framework, because I do not create projects via "django-admin startproject myproject" but rather just do "import django" in the files that make use of Django functionality.
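
In case it's unclear what that looks like, a minimal sketch (the settings values here are just an example):

    import django
    from django.conf import settings
    from django.template import Context, Engine

    # Configure settings in code instead of via a generated project.
    settings.configure(DEBUG=False)
    django.setup()

    # From here on, templates, the ORM, etc. can be used directly.
    template = Engine().from_string("Hello {{ name }}!")
    print(template.render(Context({"name": "world"})))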

In the frontend, the only thing I use a library for is templating content that is dynamically updated on the client. Handlebars does it in a sane way.

This way, I expect that my stack is stable enough for me to keep all projects functional for decades to come. With less than a week of maintenance per year.


> the average developer spends 42% of their work week on maintenance

Apart from the big question of where that number is from - How is that even a "bad" thing?

I've been on projects where about 100% of the work was maintenance. The app was done. It provided value. Things broke, bugs were uncovered, but the cost of maintaining the app was far eclipsed by the value it provided.

It's one of the things I hate about software development - every other industry accepts that things require maintenance and that spending on maintenance is a hard requirement to keep the thing you invested tons of money in running. Only software development seems to believe that anything, anywhere reaches a magic state of "done" and will never need to be touched again. And that the only thing that adds value is adding new features.


>Only software development seems to believe that anything, anywhere reaches a magic state of "done" and will never need to be touched again.

The problem is the mental model. There is a state of "done" for a lot of software. But it subsequently becomes "undone" due to outside factors.

There are no bearings, gaskets or lubricants that need maintenance. The software stays the same. So why would the software need maintenance.

Maintenance in software is because software runs on hardware. Hardware gets updates, and then the software needs updates to match.

Maintenance in software is because software runs on a changing OS.

Maintenance in software is because software interacts with other software that changes.

I think it's this lack of mental model of _why_ software needs maintenance, that it's not planned for.


I wonder how much of "tech debt" would be easier to explain to certain business processes if better reframed under a "bearings, gaskets, and lubricants" model? We don't always have the engineering tools to predict strong "wear and tear" models of such things in software like civil and mechanical engineers have for common bearings/gaskets/lubricants, but on the other side we are also maybe closer than ever to having some baselines: most modern OSes now experience major changes once every 6 months, often with alternating (1 year) "LTS" releases; most modern tools and frameworks follow similar 6 month patterns. The big unfortunate bit is that most of those timelines don't align, but in general "needs to be 're-oiled' once ever 6 months" is starting to be the regular view of things.

(The other thing that this framing calls to mind is that a company doesn't usually send a civil or mechanical engineer to do the general maintenance checks, they first send a cheaper mechanic or other equivalent laborer. This is both where the analogy partly falls apart because not all software maintenance tasks in these 6 month timeframes are "easy"/"trivial" and sometimes need massive software engineering work, and also where we again maybe see hints that in some ways the software industry still maybe needs more of a better defined engineer/mechanic split than it has today.)


>We don't always have the engineering tools to predict strong "wear and tear" models of such things in software like civil and mechanical engineers have for common bearings/gaskets/lubricants

Because it's not wear. Wear can be measured. Wear is predictable. This again comes back to the mental model.

>most modern OSes now experience major changes once every 6 months, often with alternating (1 year) "LTS" releases

For the OS, this is a fair point. Maybe it should be mandatory to plan for OS updates, which often cause a host of issues.

>some ways the software industry still maybe needs more of a better defined engineer/mechanic split than it has today

I think the reason we don't have a proper split here, is because a split here isn't really possible. Maintenance work _is_ engineering work, you need good judgment to decide whether a dependency update is ok.

A proper test suite can help make the maintenance easier. But there's no mechanic equivalent, since there's not necessarily modular parts to be replaced when interfaces change.

I think all in all we need to move away from engineering analogies. SW engineering is different enough that these analogies do more harm than good. And in the end we should have the self-confidence to ask our managers to understand software as its own discipline.

If a manager doesn't understand what they're managing without heavy-handed analogies, they are in the wrong position.


I probably spend well over 50% of my time testing for (and finding) issues. In my experience, finding issues is 99% of the time involved in fixing them. Once I find the cause of a problem, the fix is almost immediate.

> Thank the dependency hell we’ve put ourselves in.

This was something that we knew was coming, ten years ago. Modular programming is an old, well-established discipline, but that generally means modules of code that we write, or that we control. I write most of my stuff, using a modular architecture, and I write almost every single one of my reusable modules.

Things are very, very different, when we import stuff from outside our sphere. Heck, even system modules can be a problem. Remember "DLL Hell"?

When I first started seeing people excitedly talking about great frameworks and libraries they use, on development boards, I was, like "I don't think this will end well."

I just hope that we don't throw out the modular programming baby with the dependency hell bathwater.


Great stack, I use a very similar stack and for the same reasons. I imagine you’re also in your late 30’s.

Honestly the best UI I’ve seen is the terminal-based one at libraries in the 80’s and 90’s that allowed you to find books. Lightning fast and allowed the user to become an expert quickly, especially because the UI essentially never changed.

If you design things with Occam’s razor in mind, a full page reload doesn’t feel like one.

Nowadays I build software to last as long as possible without future investment of time and effort spent on maintenance. Meanwhile the industry seems to have developed some need to mess with their programs all the time. It’s almost like a tic.


> I think a good stack to use is:

These things are relative, though; future-proofing your stack will run into issues later if you need to scale to N requests, etc.

When you are thinking forward for optimizations, you may re-evaluate your thoughts on the right stack.


> When you are thinking forward for optimizations, you may re-evaluate your thoughts on the right stack.

Most projects never get there and even the ones that survive would be well served by a single EC2 instance.

Nothing about that stack jumps out as that problematic, maybe go with a PostgreSQL instance running in a container if need be, but the rest can scale pretty well both horizontally and vertically.

If it becomes a problem then just throw some more money at resources to fix it and if at some point the sums get so substantial that it feels bad to do that, congratulations, you’ve made it far enough to rework your architecture.


And in the mean time, enjoy life for 20 years.

I am perfectly glad for all the things I did not do when they were not needed.

Designing and building amphibian hovercraft monorail dumptruck racecars for all those projects that only ever needed a wheelbarrow is just a different form of technical debt. It's not investment because it never pays off. It's just work that does not produce output, that you pay before instead of after.

It only takes a little bit of thought to avoid the normal idea of technical debt where past thoughtlessness costs you work today. Plain old modularity, separation of concerns, avoiding tight coupling, and not even those religiously but just as a preference or guiding direction, pretty much takes care of the future.


Their reasoning is that some platforms like Heroku do not support SQLite.

Why use those then and not a platform that supports it, like Glitch?

I have used Postgres, MySql etc, but having the project storage in a single file is making things so much easier, I would never ever want to lose that again.


So when you bookmark the Hacker News frontpage, that would be a hash of its current content, and then you will visit that stale version forever and never see any new stories?


A version of it lives forever. You can associate siblings/changesets based on hash linking.

https://sentido-labs.com/en/library/201904240732/Xanadu%20Hy...

Something like Xanadu tumblers.


I dislike the bundling of hardware and software altogether.

In the world of laptops, we have a wonderful situation. I can select the hardware that I like, then wipe the whole disk and install the software I like. Which in my case is a Linux distribution and then exactly the applications I prefer.

I wish the world of phones was like that.


You can kinda do that with Pixels and Sony Xperias only, because last I recall, they implement Android Verified Boot correctly (or non-draconianly), specifically avb_custom_key.

From a security and freedom perspective, I actually like the restrictions of the Android platform if implemented as Google intended, which means allowing you to roll your own ROM and relock the bootloader with your own keys. Android itself has among the strongest security models for a consumer platform, again if implemented as Google intended (which is why GrapheneOS only supports Pixels). You're actually not supposed to root your phone because that opens up a large attack surface.

It's inconvenient for customization, sure, but you can still wipe the phone and roll your own system. It's a matter of the workflow to do it.


The desktop OSes are all pretty bad, so I wouldn't want to use them as an inspiration for anything.


I'm very happy with my Linux desktop.

I mostly live in the Plasma desktop, Konsole and Firefox. And life is good.

What are you missing?


Bad in what way? Most people who aren’t power users barely leave the browser.


I mean, define bad.

Certainly they're much more complex. More susceptible to malware, too.

But they're also much more powerful. I can get tasks done 10x as fast. They're much more open, too. They have well-defined interfaces for most things, which means I can automate tasks. Can't do that on a phone.


Talking about open hardware:

Is there a tablet out there which runs Debian or Ubuntu?

I don't mean it has to come with Linux in the first place. I can wipe Windows or whatever it comes with and install Debian myself.

When I google around, I see people use some tablets with Linux based on special kernels they download from somewhere around the web. I would not want that. But a standard Debian or Ubuntu on a tablet would be great.


I think the starlite 5 from https://starlabs.systems/ is what you are looking for.


Hmm... 1.98 pounds for a 12.5 inch tablet. I was hoping for something more lightweight.

The new iPads are 1.3 pounds at 13 inches.


It's an x86 laptop without the keyboard (clip-on keyboard is an optional extra). On the plus side, that means it's just as easy to install a different OS as any laptop.

1.98 lb ≈ 0.898 kg


But the new iPads aren't running Linux ¯\_(ツ)_/¯ You asked, I gave you the only option I knew besides the pinetabs.



I have not seen any reports about plain Debian running on the PineTab. Only Mobian, which is not what I am looking for.


Isn't Mobian Debian with additional packages to have a mobile interface?

The mobian installation instructions are hosted on debian's wiki: https://wiki.debian.org/InstallingDebianOn/PINE64/PineTab2


Mobian is Debian with specific tweaks that aren't upstreamed to Debian to support mobile.

That being said, neither the Pinetab nor the Pinetab2 is very well supported; I would not recommend them.

