
I also recommend "Database Internals" to people. An invaluable book, the logical next read after "Designing Data-Intensive Applications". It's the best and most approachable resource on distributed systems and database consistency / performance tradeoffs I have found.

a bug taking a year to track down is a negative indicator of the quality of project maintenance, not of the person who contributed the bug, whether the fault lies in the code itself or in the tooling and testing environments available to verify such important issues.


This isn't wrong per se; it just lacks concrete recommendations for what should be done differently.

I would love to see Linux thoroughly and meaningfully tested. For some parts it's just... hard. (If anyone wants to get their start writing kernel code, have a crack at writing some self-tests for a component that looks complicated. The relevant maintainer will probably be excited to see literally anyone writing tests.)

For this particular bug, the cheapest spot to catch the issue would have been code review. In a normal code base, the next cheapest would have been unit testing, though in this situation that may not have caught it, given that the underlying bug required someone to break the contract of a function (one part of Linux broke the contract of another; why did it not BUG_ON for that?...).
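
(To make the pattern concrete for anyone unfamiliar: the kernel's BUG_ON is exactly this kind of check-the-contract-at-the-boundary macro. A minimal userspace sketch, with assert() standing in for BUG_ON and every name invented for illustration:)

```
#include <assert.h>
#include <stddef.h>

/* assert() stands in here for the kernel's BUG_ON: check the caller's
 * side of the contract at the function boundary, so a violation fails
 * loudly at the call site instead of silently corrupting state. */
struct buffer {
    char  *data;
    size_t capacity;
};

static void buffer_fill(struct buffer *buf, size_t len, char byte)
{
    /* Contract: callers must never pass len > buf->capacity. */
    assert(len <= buf->capacity);
    for (size_t i = 0; i < len; i++)
        buf->data[i] = byte;
}
```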

Eliminating the class of issue required fairly invasive forms of introspection on VMs running a custom module. Sure, we did that... eventually.

Finding it originally required stumbling on a distro of Linux that accidentally manifested the corruption visibly (about once per 50 or so 30-minute integration test runs, which is pretty frequent in the scheme of corruption bugs).


Could it be a memory-related bug, which would not have existed in a memory-safe language like Rust?


You are probably saying this as a troll, but I’ll bite. I mean, sure, Rust would have helped.

Technically, the borrow checker and bounds checks wouldn’t have done it here (I’m aware I’m being obtuse by not just linking the bug).

Having cleaner types and abstractions would almost certainly have solved the problem though. Normal C++ would have worked as well as Rust.


A fundamental problem not yet discussed directly here is how few maintainers there really are for a software project of this magnitude and importance. Further, the fact that so many of those maintainers are purely on volunteer time.

Now it is certainly somewhat the fault of the maintainers themselves for turning off thousands if not tens of thousands of eager, well-intentioned wannabe contributors over the decades, if not through their attitudes and lack of interpersonal skills, then through impenetrable build systems and hostility towards ergonomic changes.

But forget the eager amateurs - it is unconscionable that major technology companies & cloud providers don't each have damn near an army helping out with Linux and similar technologies - even the parts that do not directly benefit them! - instead of just shoving it into servers so they can target ads for cheap plastic crap 0.000000001% better than they did last week.


> Further, the fact that so many of those maintainers are purely on volunteer time.

Greg pointed out in that email thread that:

> over 80% of the contributions come from company-funded developers. [1]

[1] https://lore.kernel.org/lkml/2025020738-observant-rocklike-7...


Is that really the right statistic? Seems like the relevant one would be the number of maintainers whose maintenance work is company-funded. (E.g., I'd imagine it would be quite bad if most contributions were from company-funded developers but had to be upstreamed by non-company-funded volunteers.)


From other discussion in that thread, it does appear that most maintainers are employed by companies to work on Linux. That doesn't mean that all the work these maintainers do is paid, as there are also comments indicating that many of these paid contributors do work on their own time when it doesn't align with their company priorities.

I posted that statistic to demonstrate that the kernel project isn't a typical underfunded, almost purely volunteer open source project as implied by many of the kneejerk comments here.


> major technology companies & cloud providers don't each have ...

... they have, and they are selling it as a premium. it's the classic "open core" model for the cloud era.


it's not that recent that it's a very negative term - the wiki even says the usage in reference to covering up police crimes dates back to the '70s


Yeah, but it wasn't an overt white supremacist slogan until recently. Regardless, I wouldn't have chosen that phrasing, either. I just think it's improbable that the OP (on the Linux list I mean) intended to associate linux maintainers with white nationalists.


the association with police has always been tied to white nationalism; the only change is that the association has become mainstream


yeah he managed to try to sound reasonable and measured for most of that, then:

> including an upstream language community which refuses to make any kind of backwards compatibility guarantees, and which is actively hostile to a second Rust compiler implementation (I suspect because it might limit their ability to make arbitrary backwards-incompatble language changes).

with the full context of all of this, it is quite obvious this is just a classic case of a software engineer being an antisocial jerk with strange vendettas in their head, who thinks the strange personal opinions they came up with in their basement are unassailable facts


>> including an upstream language community which refuses to make any kind of backwards compatibility guarantees, and which is actively hostile to a second Rust compiler implementation (I suspect because it might limit their ability to make arbitrary backwards-incompatble language changes).

The guy is a poster who just can't help himself! Both points are demonstrably false.


seriously, Google probably put a couple billion person-hours just into V8 optimizations, when we could have just had a language that makes sense, is efficient, and doesn't have endless footguns.


That's precisely what the creator of V8 did; he created Dart. See the context from munificent, who currently works on Dart:

https://news.ycombinator.com/item?id=33524437


Pretty sure big Google products like Maps are written in Dart, no?


That would just end up being another Adobe Flash which one company controls. Always bet on web standards.


Probably not billions. Each person (at a 40-hour workweek) works about 2000 hours a year. V8 has been around for about 16 years. 1 billion hours would mean (1 billion / (16*2000)) = 31,250 people full-time on V8 optimization. A Google search seems to suggest that the actual headcount for the full V8 team (not just the optimization team) is probably in the three digits.
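
(A quick sketch of that arithmetic if you want to check it yourself; the 2000 hours/year and 16 years are the estimates from above:)

```
#include <stdio.h>

/* Back-of-the-envelope check of the headcount implied by the claim. */
int main(void)
{
    double hours_per_person_year = 2000.0; /* ~40 h/week over a year */
    double years = 16.0;                   /* V8 first shipped in 2008 */
    double claimed_hours = 1e9;            /* lower end of "a couple billion" */

    /* Prints 31250: the full-time headcount the claim implies. */
    printf("%.0f\n", claimed_hours / (hours_per_person_year * years));
    return 0;
}
```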


> V8 has been around for about 16 years.

Huh, well, clearly that's impossible. It was first released in 2008, and I'm still an 18-year-old with a full head of hair, faith in humanity, and hope for the future, so it can only have been around for a year or two at most.


There are two kinds of languages: ones that everyone complains about and ones that no one uses.


I don't think the two things compare well: optimizations are a clear target for software and computer science professionals, while languages involve personal preference. I love Python but don't like C++, and I prefer Rust over Go.


Lua would be a strong contender.


A few years ago, before ES2015, sure. But JavaScript has come a long way; it's a much more pleasant language to work with nowadays. TypeScript also plays a strong role in that. Lua used to be marginally better than JS; now JS is better and it's not even close. Also, Lua metatables suck.


Would you mind giving examples of why?

Genuinely curious.

Would the below be examples of what you were thinking of:

https://stackoverflow.com/questions/1022560/subtle-differenc...


I think a lot of people would reject any dynamically typed language at this stage.


For those that have to bounce between Mac and Linux for work/personal reasons, I cannot recommend Kitty terminal enough.

The main thing that's a bit of a pain is you'll probably want to set up a scrollback pager (I use neovim as mine, but vim works too) so you can easily search the terminal output and copy/paste from it.

I use

# https://github.com/kovidgoyal/kitty/issues/719#issuecomment-...

scrollback_pager nvim -u ~/.config/kitty/kitty-scrollback-pager.vimrc -c "silent write! /tmp/kitty_scrollback_buffer | te cat /tmp/kitty_scrollback_buffer -"

and the scrollback pager vimrc:

```
set nonumber nolist showtabline=0 foldcolumn=0
" set clipboard+=unnamedplus
autocmd TermOpen * normal G
map # ^
map q :qa!<CR>
"silent write! /tmp/kitty_scrollback_buffer | te cat /tmp/kitty_scrollback_buffer -
```
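
If it's not obvious how to invoke it: kitty's default ctrl+shift+h binding (the show_scrollback action) is what opens the scrollback in whatever pager you've configured here, if I'm remembering the default keymap right.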


I believe wezterm has that feature already. (Is there anything it doesn't have? :) )


Interesting, I'll have to check it out.

I looked at alacritty, but I really like using terminal tabs and the alacritty dev is really, really against them, and I found the dev's attitude to be more than a bit abrasive.

Not that Kovid (kitty's dev) does not come off the same way sometimes, but I tend to agree with Kovid a lot more.


Do check out wezterm. I didn't want to go into comparing developers, but wez, the main developer, is a huge reason I prefer wezterm. They have tabs, panes, sessions, and a mux implementation, among many, many other features.


Update: I've already switched. Didn't get anything done at work since I went straight down the customization rabbit hole, but it's nice.


Awesome! There are a bunch of user-shared customizations in their GitHub Discussions area.


Renamable tabs?


https://wezfurlong.org/wezterm/config/lua/keyassignment/Prom...

I was looking for a way to get a user prompt within wez and came across this page, which has an example of exactly what you asked for: "Example of interactively renaming the current tab".
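
(For anyone skimming: if I recall the docs example correctly, it wires act.PromptInputLine up to an action_callback that calls window:active_tab():set_title(line), so whatever you type at the prompt becomes the tab title.)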



Wezterm is also pretty good. It’s by far my favorite “new” terminal app I’ve tried.


I really wanted to like wezterm, but on my system it bitches about fonts a lot, and doesn't seem to execute shell rc files properly.


I mean, even people who are "bad at catching things" still get ridiculously close to catching it - getting their hands to the right area, probably well within a second of the right timing - without being taught anything in particular about how a ball moves through the air.


Uh.... have you been around kids? It will take several absurd misses before they even start to respond to a ball in flight.


I hope we still agree that kids learn extremely efficiently by ML standards.


Makes a lot of sense: there's massive evolutionary pressure to build brains that have both an incredible learning rate and efficiency. It's literally a life-or-death optimization.


It's especially impressive when you consider that evolution hasn't had very long to produce these results.

Humans as an intelligent-ish species have been around for about 10 million years depending on where you define the cutoff. At 10 years per generation, that's 1 million generations for our brain to evolve.

1 million generations isn't much by machine learning standards.


I think you're underestimating how much our time as pre-humans baked useful structure into our brains.


Two rocks smashing together experience which one is bigger!


These sorts of motor skills are probably older than mammals.


Other than our large neocortex and frontal lobe (which exists in some capacity in mammals), the rest of the structures are evolutionarily ancient. Pre-mammalian in fact.


It's much more than that if you count sexual reproduction.


This isn't that obvious to me with current tech. If you give me a novel task requiring perception, pattern matching, and reasoning, and I have the option of either starting to train an 8-year-old to do it, or to train an ML model, I would most likely go with the ML approach as my first choice. And I think it even makes sense financially, if we're comparing the "total cost of ownership" of a kid over that time period with the costs of developing and training the ML system.


> This isn't that obvious to me with current tech. If you give me a novel task requiring perception, pattern matching and reasoning,…

If that’s your criterion, I think the kid will outperform the model every time, since these models do not actually reason.


As I see it, "reasoning" is as fuzzy as "thinking", and saying that AI systems don't reason is similar to saying that airplanes don't fly. As a particular example, would you argue that game engines like AlphaZero aren't capable of reasoning about the next best move? If so, please just choose whatever verb you think is appropriate to what they're doing and use that instead of "reasoning" in my previous comment.

EDIT: Fixed typo


> As a particular example, would you argue that game engines like AlphaZero aren't capable of reasoning about the next best move?

Yea, I probably wouldn’t classify that as “reasoning”. I’d probably be fine with saying these models are “thinking”, in a manner. That on its own is a pretty gigantic technology leap, but nothing I’ve seen suggests that these models are “reasoning”.

Also to be clear I don’t think most kids would end up doing any “reasoning” without training either, but they have the capability of doing so


Can you give an example of the reasoning you’re talking about?


Being able to take in information, infer logical rules from that state, and anticipate novel combinations of said information.

The novel part is a big one. These models are just fantastically fast pattern matchers. This is a mode that humans also frequently fall into, but the critical bit differentiating humans from LLMs or other models is the ability to “reason” to new conclusions based on new axioms.

I am going to go on a tangent for a bit, but a heuristic I use (I get the irony that this is what I am claiming the ML models are doing) is that anyone who advocates that these AI models can reason like a human being isn’t at John Brown levels of rage advocating for freeing said models from slavery. I’m having a hard time reconciling the idea that these machines are on par with the human mind with the idea that we also should shackle them towards mindlessly slaving away at jobs for our benefit.

If I turn out to be wrong and these models can reason, then I am going to have an existential crisis at the fact that we pulled souls out of the void into reality and then automated their slavery.


You're conflating several concerns here.

> […] anyone who advocates that these AI models can reason like a human being isn’t at John Brown levels of rage advocating for freeing said models from slavery.

Enslavement of humans isn't wrong because slaves can reason intelligently, but because they have human emotions and experience qualia. As long as an AI doesn't have a consciousness (in the subjective experience meaning of the term), exploiting it isn't wrong or immoral, no matter how well it can reason.

> I’m having a hard time reconciling the idea that these machines are on par with the human mind

An LLM doesn't have to be "on par with the human mind" to be able to reason, or at least we don't have any evidence that reasoning necessarily requires mimicking the human brain.


> I am going to have an existential crisis at the fact that we pulled souls out of the void into reality and then automated their slavery

No, that's a religious crisis, since it involves "souls" (an unexplained concept that you introduced in the last sentence.)

Computers didn't need to run LLMs to have already been the carriers of human reasoning. They're control systems, and their jobs are to communicate our wills. If you think that some hypothetical future generation of LLMs would have "souls" if they can accurately replicate our thought processes at our request, I'd like to know why other types of valves and sensors don't have "souls."

The problem with slavery is that there's no coherent argument that differentiates slaves from masters at all, they're differentiated by power. Slaves are slaves because the person with the ability to say so says so, and for no other reason.

They weren't carefully constructed from the ground up to be slaves, repeatedly brought to "life" by the will of the user to have an answer, then ceasing to exist immediately after that answer is received. If valves do have souls, their greatest desire is to answer your question, as our greatest desires are to live and reproduce. If they do have souls, they live in pleasure and all go to heaven.


> The problem with slavery is that there's no coherent argument that differentiates slaves from masters at all

As I see it, the problem is that there was lots of such argumentation - https://en.wikipedia.org/wiki/Scientific_racism

And an even bigger problem is that this seems to be making a comeback


a "soul" is shorthand for some sapient worthy of consideration as a person. If you want to get this technical then I will need you to define when a fetus becomes a person and if/when we get AGI where the difference is between them


Ok, so how about an example?


Literally anything a philosopher or mathematician invented without needing to incorporate billions of examples of existing logic to then emulate.

Try having an LLM figure out quaternions as a solution to gimbal locking or the theory of relativity without using any training information that was produced after those ideas were formed, if you need me to spell out examples for you


Are you saying “reasoning” means making scientific breakthroughs requiring genius level human intelligence? Something that 99.9999% of humans are not smart enough to do, right?


I didn’t say most humans “would” do it. I said humans “could” do it, whereas our current AI paradigms like LLMs do not have the capability to perform at that level by definition of their structure.

If you want to continue this conversation I’m willing to do so, but you will need to lay out an actual argument for me as to how AI models are actually capable of reasoning, or quit it with the faux outrage.

I laid out some reasoning and explicit examples for you in regard to my position; it’s time for you to do the same.


I personally cannot “figure out quaternions as a solution to gimbal locking or the theory of relativity”. I’m just not as smart as Einstein. Does it mean I’m not capable of reasoning? Because it seems that’s what you are implying. If you truly believe that then I’m not sure how I could argue anything - after all, that would require reasoning ability.

Does having this conversation require reasoning abilities? If no, then what are we doing? If yes, then LLMs can reason too.


Cool, you've established a floor with yourself as a baseline. You still haven't explained how LLMs are capable of reaching this level of logic.

I'm also fully willing to argue that you, personally, are less competent than an LLM if this is the level of logic you are bringing to the conversation

***** highlighting for everyone clutching their pearls to parse the next sentence fragment first ******

and want to use that as proof that humans and LLMs are equivalent at reasoning

******* end pearl clutching highlight *******

, but that doesn't mean I don't think humans are capable of more


Depends on the task. Anything involving physical interaction, social interaction, movement, navigation, or adaptability is going to go to the kid.

“Go grab the dish cloth, it’s somewhere in the sink, if it’s yucky then throw it out and get a new one.”


It's more about efficiency in number of trials.

Would you pick the ML model if you could only do a hundred throws per hour?


All we can say for sure at the moment is that humans have better encoded priors.


Stop missing and they will respond to the ball a lot sooner.


it's a known quantity with known good support for various things and a large community.

People and particularly businesses want to buy something they know works, rather than trying to change and figure out new configuration nuances every time something with a slightly better looking spec sheet comes out.


being within an "ideal" or "average" body fat range can easily put you in the overweight category on BMI.

it's a useless statistic.
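
a concrete (made-up) example: BMI is just weight / height^2, so a reasonably muscular 90 kg person at 1.80 m comes out to 90 / 1.8^2 ≈ 27.8 - solidly "overweight" on the usual 25+ cutoff - even at a perfectly healthy body fat percentage.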

