Hacker News | joaohaas's comments

Rust uses Zulip for lang-related discussions. The 't-lang/effects' channel is still somewhat active.

The specific use case the GNU maintainer listed followed this exact pattern.

> the article says "The Rust rewrite has shipped zero of these [memory safety bugs], over a comparable window of activity." However, this is not true

That bug got fixed before the Ubuntu release, and is from way before Canonical was even involved with the project.


The list of GNU CVEs in the original article included a buffer overrun in tail from 2021, so for a fair comparison 2021 is part of the "window of activity" (the year the uu_od CVE was published).

Most (if not all) of these issues do not matter at all outside the scope GNU utils run in.

For example, using filepaths instead of FDs does not matter in most cases in controlled server environments, or in processes that will never run with elevated privilege (most apps).


> Most (if not all) of these issues do not matter at all outside the scope GNU utils run in.

I suspect that attitude is how we got ourselves into this mess.

You have to assume you ultimately don't control what scope your software runs in. Obviously you do, 99.999% of the time. The other 0.001% is when someone has found another vulnerability that lets them run your program with elevated privileges in an environment you didn't expect, and then they can use it to exploit one of these bugs. Almost all exploits use a chain of vulnerabilities, each one seemingly harmless on its own - your "no one can ever exploit this weakness in my program because I control the environment" will be just one step in the chain.

That sounds far fetched. It is far fetched, in the sense that it almost never happens. But nonetheless, systems were and are exploited because of it. Once the solution was added in 2006 (openat() and friends), it should never have happened again. And indeed, in the GNU utils it can't.

The people who built Rust's std::fs should have been aware of the problem and its solution, because std::fs was written in 2015. std::path was written at the same time, and that is where the change has to be made. It's not a big change either: std::path has to translate the path into an OS descriptor and use that instead of the path - but only if openat() is available. I suspect the real issue was they had the same attitude as you: they thought it affected such a small percentage of programs it didn't really matter. That, and it's a little bit of extra work.
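For the curious, the fd-relative pattern is exposed even in Python's stdlib. A minimal POSIX-only sketch of the idea (not of what std::fs/std::path would actually look like): pin the directory once, then do every later operation relative to that fd, so swapping the directory for a symlink afterwards changes nothing.

```python
import os
import tempfile

# Create a scratch directory and pin it with a directory fd.
scratch = tempfile.mkdtemp()
dirfd = os.open(scratch, os.O_RDONLY | os.O_DIRECTORY)

# Create and write "note.txt" relative to the pinned fd. Even if an
# attacker replaces `scratch` with a symlink after the open() above,
# these calls still target the original directory.
fd = os.open("note.txt", os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600,
             dir_fd=dirfd)
os.write(fd, b"hi\n")
os.close(fd)

# Re-open for reading, again relative to dirfd, refusing symlinks
# in the final component.
fd = os.open("note.txt", os.O_RDONLY | os.O_NOFOLLOW, dir_fd=dirfd)
data = os.read(fd, 16)
os.close(fd)
print(data)  # b'hi\n'

# Clean up, still fd-relative.
os.unlink("note.txt", dir_fd=dirfd)
os.close(dirfd)
os.rmdir(scratch)
```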

It was a pity they had that attitude, because the extra work would have avoided this mess.


The grass most cows eat also needs to be planted. The point of this post is that we could be planting stuff we can eat, so we don't have to 'pay' the conversion cost.


That depends entirely on where you are.

In India, for instance, dairy cattle are fed almost exclusively on crop residues and by-products. Crop residues being what's left over in the field after you harvest and by-products being what would otherwise be waste left over after you process for human use.

Elsewhere, in addition to crop residues/by-products, you also have natural grasslands that aren't planted or irrigated, legume feed grown between major crop seasons when you can't grow anything else that also replenishes the soil and feed grown on otherwise marginal land or barely managed land.

Certainly some crops grown for cows would be edible by humans or the land repurposed for growing crops edible for people, but there's often a cost involved like heavier fertilizer requirements, pesticide use, water requirements, added infrastructure and/or labor.


The grass cattle can eat (which doesn't need to be planted; most people around me don't regularly plant their pasture) is not stuff people can eat, and can often grow in conditions that can't grow people-food.

Specifically, they can eat stuff that doesn't require constant fertiliser inputs, whereas people-food generally does need a lot of fertiliser inputs and needs more intensive herbicide/pesticide application.

A balanced approach is to go, "Hmm, it's probably a good idea to raise cattle, chickens, and other animals, and also to grow all kinds of produce and staple crops as well."


No, not all land is the same. It is far more profitable to grow a high-value crop vs plain grass, but some land just isn't good for anything other than grass. You have large cattle farms in Australia where you can't grow anything other than grass and other wild plants.


I think most people outside the area don't care or know about who's on top, and the negative perception is much more related to how the tech will enable users to misuse it (replacing phone lines/support, AI art, things losing quality, etc.) than to the companies themselves.


Yes I believe we're quickly approaching crypto territory, where distributed ledgers certainly have their valuable use cases, but the overwhelming _mindshare_ is active scamming and/or monkey jpegs.

There needs to be a concerted focus on real value for end users and less "yeah the terminator will take your job and raise your kids in your absence"


I think there is a lot of truth to what you say, particularly when it comes to caring rather than parroting. However, as part of my personal and civil life I interact with a lot of non-tech people in non-tech capacities, and a surprising number of them raise unprompted complaints about people like Sam Altman and Elon Musk. Musk I understand everyone knowing about; between Tesla, SpaceX, the Thai boys' football team, a very public inclination to raise his hand, and a position in the US government, he is meaningfully famous. But how Sam Altman managed to get his name out there in the wrong way, so quickly, with a bunch of Brits, I don't know.


I can't think of a single big provider that does not provide a status page.

Not a lot of them provide uptime in % values, but Anthropic doesn't either.


I’m not saying others don’t provide a status page, just that they’re often misleading.

I’m thinking, for example, of Apple having a multiple-hour outage a couple weeks ago, preventing anyone from installing debug apps on devices. Yet everything was green the entire time.


With the recent barrage of AI-slop 'speedup' posts, the first thing I always do to see if a post is worth a read is Ctrl+F "benchmark" and check whether the benchmark makes any fucking sense.

99% of the time (such as in this article), it doesn't. What do you mean 'cloneBare + findCommit + checkout: ~10x win'? Does that mean running those commands back to back results in a 10x win over the original? Does that mean there's a specific function that calls these 3 operations, and that's the improvement of the overall function? What's the baseline we're talking about, and is it relevant at all?

Those questions are partially answered on the much better benchmark page[1], but for some reason they're using the CLI instead of the git library for comparisons.

[1] https://github.com/hdresearch/ziggit/blob/5d3deb361f03d4aefe...


The reason being that bun actually tested both: the git CLI as well as libgit2. Across the board, the C library was 3x slower than just spawning calls to the git CLI.

Under the hood, bun calls these operations when doing a `bun install`, and these are the places where integration gives the most boost. When more and more git deps are included in a project, these gains pile up.

However, the results are closer to 1x parity when accounting for network time (i.e. the round trip to GitHub).
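For the curious, the "spawn the CLI once per step" shape is trivial to reproduce. This is not bun's actual code, just a sketch against a throwaway local repo standing in for a git dependency, with each of cloneBare / findCommit / checkout as its own `git` process:

```python
import os
import subprocess
import tempfile

def run(*args, cwd=None):
    """Spawn a git CLI call, raising on non-zero exit."""
    return subprocess.run(args, cwd=cwd, check=True,
                          capture_output=True, text=True)

# Build a tiny local "upstream" repo to stand in for a git dependency.
src = tempfile.mkdtemp()
run("git", "init", "-q", cwd=src)
with open(os.path.join(src, "f.txt"), "w") as f:
    f.write("v1\n")
run("git", "-C", src, "add", "f.txt")
run("git", "-C", src, "-c", "user.email=a@b", "-c", "user.name=a",
    "commit", "-q", "-m", "init")
commit = run("git", "-C", src, "rev-parse", "HEAD").stdout.strip()

# cloneBare + findCommit + checkout, one short-lived process each.
bare = tempfile.mkdtemp()
work = tempfile.mkdtemp()
run("git", "clone", "-q", "--bare", src, bare)            # cloneBare
run("git", "-C", bare, "cat-file", "-e", commit)          # findCommit
run("git", f"--git-dir={bare}", f"--work-tree={work}",    # checkout
    "checkout", "-q", "-f", commit)
```

The last call is the classic bare-repo deploy trick: point `--work-tree` at a separate directory so the bare clone never needs its own working copy.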


> This is not just product simplification. It is a distribution and deployment strategy.

iykyk


Are you suggesting this was written by AI?


You are absolutely right.


It’s not just a suggestion. It’s a demonstration.


And you are so bold for identifying it. You’re not just passively commenting. You’re creating something bigger, larger than yourself.


It's the demonstration layer


agentic demonstration layer


It's a structure very frequently used by LLMs, especially ones writing for LinkedIn.


"It's not X, it's Y."

A linguistic construction commonly referred to as contrastive negation.
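A toy regex sketch of the tell, if you ever want to grep your feed for it (matches "It's not X. It's Y." / "It is not X, it is Y." style phrasings only; the pattern and examples are made up):

```python
import re

# Contrastive negation: a negated clause, a comma or the LLM-favoured
# full stop, then a second "it's ..." clause.
PATTERN = re.compile(
    r"\b(?:it|this|that)['’]?s?\s+(?:is\s+)?not?[^.,;]{1,60}"
    r"[.,]\s+(?:it|this|that)['’]?s?\s+(?:is\s+)?",
    re.IGNORECASE,
)

print(bool(PATTERN.search("It's not just a product. It's a strategy.")))
```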


Humans may use commas here, but LLMs always use a full stop, always.


I wonder if there's a single trainer in Kenya who is responsible for some of the conventions we see so often. Maybe (s)he just really likes full stops and used them in all of the training examples.


Was looking for a precise term for that. Thank you!

Also, AI-LinkedIn-bullshit likes to add "just", and it's mostly along the lines of Y being something much more impactful than X.


Why would they not use their own AI?


As other people mentioned this is obviously not something I would want in my notebook... but I can still appreciate the cool tech!

I can also definitely see this kind of thing being used in things like budget outdoor displays, especially if the UI is made to accommodate the lack of accuracy and the camera is positioned on the side (since these displays are usually vertical).


It's difficult to capture reflections across a large screen while also dealing with outdoor lighting, glare, and moisture. And the touchscreen part isn't usually what makes outdoor signage expensive, compared to IP65 rating, temperature control, and a secure housing, all of which would still apply here.

This looks like a neat option for retrofitting, and I suspect it'd work for some non-screen glass applications too. A combined IR/visible-light solution would be interesting, since I suspect the two are complementary (IR touch has issues with radiant light, while this wouldn't; this would have issues with low/no light, while IR wouldn't).

