
As another reply points out, it has a useful and well-formed meaning, even if you don't like it.

UDP is at-most-once; TCP is exactly-once-in-order.
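To make the two guarantees concrete, here is a minimal loopback sketch with plain Python sockets (my own illustration, not from the thread):

    import socket

    # UDP: fire-and-forget datagrams. each is delivered at most once;
    # on a real network it may be lost or reordered, and nothing below
    # will retransmit it.
    udp_rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp_rx.bind(("127.0.0.1", 0))
    udp_tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp_tx.sendto(b"ping", udp_rx.getsockname())  # no ack, no retry
    print(udp_rx.recvfrom(16)[0])  # arrives on loopback; no guarantee elsewhere

    # TCP: a connected byte stream. the kernel acks, retransmits, and
    # reorders segments, so the application sees each byte exactly once,
    # in order.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(srv.getsockname())
    conn, _ = srv.accept()
    cli.sendall(b"ping")
    print(conn.recv(4))  # b"ping", exactly once, in order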


interestingly, the exokernel approach is suited to particular cases: for instance, a single application (regardless of whether it handles multiple sessions, a diversity of RTTs, etc.). after all, "get the packets into userspace with as little fuss as possible" would be the right goal there.
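a rough stand-in for that goal on a stock Linux kernel (not an actual exokernel; my own sketch, needs root) is a packet socket, which hands raw link-layer frames straight to userspace with minimal protocol processing:

    import socket

    ETH_P_ALL = 0x0003  # from linux/if_ether.h: capture every protocol

    # AF_PACKET delivers whole ethernet frames to this process; the kernel
    # does no TCP/IP processing on them on our behalf.
    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
    frame, meta = s.recvfrom(65535)
    print(len(frame), meta)  # raw frame plus (interface, proto, ...) metadata

an exokernel pushes the same idea further: the application links its own TCP/IP stack as an ordinary library instead of calling into the kernel's.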

the unix model is different: it's basically premised on a minicomputer server which would be running a diverse set of independent services where isolation is desired, and where it makes sense for a privileged entity to provide standardized services, services whose APIs have been both stable and efficient for more than a couple of decades.

I think it's kind of like cloud: outsourcing that makes sense at the lower end of the hosting scale. but once you reach a certain scale, you can and should take everything into your own hands, and can expect to gain some efficiency, agility, and autonomy.


> the unix model is different: it's basically premised on a minicomputer server which would be running a diverse set of independent services where isolation is desired, [...]

Exokernels provide isolation. Secure multiplexing is actually the only thing they do.

> and where it makes sense for a privileged entity to provide standardized services, services whose APIs have been both stable and efficient for more than a couple of decades.

Yes, standardisation is great. Libraries can do that standardisation. Why do you need standardisation at the kernel level?

That kind of standardisation is, for example, what we already do with libc: memcpy has a stable interface, but how it's implemented depends on the underlying hardware; the kernel does not impose an abstraction.
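A toy illustration of that split, sketched in Python rather than C (the names are mine): one stable entry point, the implementation picked per machine at load time, the kernel nowhere in the picture. glibc's real memcpy does the analogous thing via IFUNC dispatch on CPU features.

    import platform

    def _copy_generic(dst, src):
        # portable fallback path
        dst[:len(src)] = src

    def _copy_vectorized(dst, src):
        # stand-in for a SIMD-tuned path (think an AVX memcpy on x86-64)
        dst[:len(src)] = src

    # resolve once, at load time, based on the hardware we find ourselves on
    copy = _copy_vectorized if platform.machine() in ("x86_64", "AMD64") else _copy_generic

    buf = bytearray(4)
    copy(buf, b"abcd")
    print(bytes(buf))  # b'abcd' either way; callers never see the dispatch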


maybe the lesson is that "composable" shouldn't mean "just nested".

for most people, the browser is the platform. for a somewhat different "most", the phone is the platform.

none of those people care whether it's apache or nginx or something else handling the socket, or what kind of socket it is, or how app routing works, or how that app accesses storage, allocates RAM, etc.


No, NT was a re-engineering of the kernel, mostly with Mach-microkernel influence, but also with input from VMS.

I think the point is that people who whinge (like here) about Unix are actually complaining about Unix filesystems and user-space. I'm not so sure they care about how exactly privileged execution is partitioned (or not).


Not sure if it's true, but I've heard that WNT is VMS + 1, so to speak, with each letter "incremented" (V→W, M→N, S→T). I believe the team behind WNT had previously worked on VMS.

PLEASE start saying "D-Link firmware", since you can avoid this sort of silliness by just installing OpenWrt.

How about another conversation: "Vendors are never trustworthy, so what to do?"


you mean because that's how you get to be a meme stock? yep. stock markets are casinos filled with know-nothing high-rollers and pension sheep.


doing business is hard these days. you can't just be a rich a*hole while working with people; you have to care about your image. and this is one of the ways of doing it: handing out free stuff, sort of selfless donations. but in fact this raises the bar and makes competitors' lives much harder. of course you can be rich and narrow-minded, like intel, but then it's hard to attract external developers and make them believe in your future. nvidia's stock rise is based on the vision; investors believe in it.

meanwhile other giants are dominated by career managers, who know the procedures but are absolutely blind when it comes to evaluating technology. if someone comes to them with a great idea, they first evaluate how it fits their plans. sometimes they have their own primitive vision, like at facebook, which proved to be... not that good. so all these managers can do is look at what is _already_ successful and try to replicate it by throwing a lot of money at it. that may not be enough; intel still lags behind in GPUs.


How many meme stocks are TSMC customers?


dunno. it's clear they have no effective moat. they still try hard to build one, making themselves quite customer-hostile.


CUDA is a moat. What's ineffective about it?


it's not open source and can only be used with nvidia gpus (by license).


gosh, pythonistas must have called you really bad names.

JVM wanted to be Self, but Sun was on its last legs by then. now it's just a weird historical artifact, still casting a shadow, or at least giving off an unpleasant smell.

I'm curious about the masters-have-left claim: do you really think current Python development is somehow foolish?


I don't know about "foolish", but the current efforts to add complex threading and a JIT that promises a 30% speedup over 3.10 look misguided.

Before, the selling point was simplicity and C extensions.

Just go to GitHub: apart from the people who are paid to integrate the above features, development has all but stalled. The corporate takeover of the "free" software Python (which, unlike C++, has no standard or viable alternative implementations) is complete!


Python was never a good language. It was always a joke. But jokes have a short shelf life. The reason to use it was to stand in opposition to the pompous way the mainstream was trying to do things.

It's like with academic drawing. A student would spend countless hours or even days trying to draw a model, and would still come out with broken anatomy, a figure that looks made of plaster and not even a bit like the model itself. And they would use a huge arsenal of tools, probably some expensive ones. While a master could scoop some ash from an ashtray and, in a few lines drawn with a finger, accomplish a lot more than that student's day of work, and have the model look alive, with plausible anatomy, etc.

Python was the ash from an ashtray. It didn't need to be good as a tool to accomplish its goal. Actually, the opposite. But now, with the masters gone, the students don't know why the masters were so good at it, and they try to copy whatever they did, including finger drawing with ash. It works even worse than if they used their huge arsenal of tools, but they no longer have anyone to guide them.

You might think this is a ridiculous story, but this actually happened before my eyes in academic drawing in a particular art academy. But it's a story for a different time.

So, to answer your question, with some examples:

Yes, Python today is a thousand times worse than it was 15 years ago. In general, adding to a programming language makes it worse. Python not only adds, it adds useless garbage. Stuff like f-strings, match-case, type annotations, type classes, async... all of this made the language a lot more complex and added nothing of value.
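For concreteness, here is the kind of code those features produce, in one place (a minimal sketch of my own; async and the rest omitted):

    from dataclasses import dataclass

    @dataclass
    class Point:
        x: int  # type annotations
        y: int

    def describe(p: Point) -> str:
        match p:  # match-case, added in 3.10
            case Point(x=0, y=0):
                return "origin"
            case Point(x=x, y=y):
                return f"point at ({x}, {y})"  # f-strings, added in 3.6

    print(describe(Point(3, 4)))  # point at (3, 4)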

Lots of Python libraries were burnt in the name of "progress" (broken backwards compatibility). Making tools for Python became prohibitively expensive due to the effort it takes to work with the language.

But it didn't stop there. The community itself changed shape dramatically. Python went from usually being a second language, learnt by someone intelligent enough to see the problems with the mainstream and go looking for alternatives, to being the first language programmers learn on their way into the trade. Python more and more started to bend to meet the demands of its new audience: amateurs unwilling to learn. And that's where Java was good! Java was made for exactly this purpose. Java came packed with guard-rails and walking sticks.

