I'd love to see the benchmarks for this with SMT on: 96x2x4 = 768 CPUs in one system, along with 512GB of HBM that has 6900 GB/s memory bandwidth, and then DDR5.

There was a study recently showing that the use of LLMs for coding assistance made people feel more productive but actually made them less productive.

EDIT: Added links.

https://www.cio.com/article/3540579/devs-gaining-little-if-a...

https://web.archive.org/web/20241205204237/https://llmreport...

(Archive link because the llmreporter site seems to have an expired TLS certificate at the moment.)

No improvement to PR throughput or merge time, 41% more bugs, worse work-life balance...


I recently pasted 3 different 3-page SQL statements, along with their obscure Redshift errors (no line numbers or context references), into Claude, and it went 3 for 3 on telling me where in my query I was messing up. Saved me probably 5 minutes each time, but it really saved me from moving to a different task and coming back. So around $100 in value right there. I was impressed by it. I wish the query UI I was using would just auto-run this when I got an error. I should code that up as an extension.

$100 to save 15 minutes works out to $400/hour, which implies that you net at least $800,000 a year. Well done if so!

When forecasting employee cost for a company I double a developer's pay, but I'm not going to say what I make or whether I applied that here. I also like to think that developers should be working on work that has many multiples of leverage over their pay to be effective. But thanks.

> but really saved me from moving to a different task and coming back

You missed this part. Being able to quickly fix things without deep thought while in flow saves you from the slowdowns of context switching.


That $100 of value likely cost them more like $0.10 to $1 in API costs.

It didn't cost me anything; my employer paid for it. The math for my employer is odd because our use of LLMs is also R&D (you can look at my profile to see why). But it was definitely worth $1 in API costs. I can see justifying spending $200/month for devs actively using a tool like this.

I am in a similar boat. It's way more correct than not for the tasks I give it. For simple queries about, say, CLI tools I don't use that often, or regex formulations, I find it handy, since when it gives an answer it's easy to test whether it's right or not. If it gets it wrong, I work with Claude to get to the right answer.

First of all, that's moving the goalposts to the next state over, relative to what I replied to.

Secondly, the "No improvement to PR throughput or merge time, 41% more bugs, worse work-life balance" result you quote came, per the article, from a "study from Uplevel", which seems to[0] have been testing for change "among developers utilizing Copilot". That may or may not be surprising, but again it's hardly relevant to a discussion about SOTA LLMs - it's like evaluating the performance of an excavator by giving 1:10 toy excavator models to children and observing whether they dig holes in the sandbox faster than their shovel-equipped friends.

The best LLMs are still too slow and/or expensive to use in Copilot fashion. I'm not sure it's even a good idea - Copilot-like use breaks flow. Instead, the biggest wins from LLMs come from discussing problems, generating blocks of code, refactoring, unstructured-to-structured data conversion, identifying issues from build or debugger output, etc. All of those uses require qualitatively more "intelligence" than Copilot-style completion, and LLMs like GPT-4o and Claude 3.5 Sonnet deliver (hell, anything past GPT-3.5 delivered).

Thirdly, I have some doubts about the very metrics used. I'll refrain from assuming the study is plain wrong until I read it (see [0]), but anecdotally, I can tell you that at my last workplace you likely wouldn't have been able to tell whether using LLMs the right way (much less Copilot) helped by looking solely at those metrics - almost all PRs were approved by reviewers with minor or tangential commentary (thanks to a culture of testing locally first, and not writing shit code in the first place), but then they would spend days waiting to be merged due to a shit CI system (overloaded to the point of breakage - apparently all the "developer time is more expensive than hardware" talk ends when it comes to adding compute to CI bots).

--

[0] - Per the article you linked; I have yet to find and read the actual study itself.


Do you have a link? I'm not finding it by searching.

I really need the source of this.

This is, very directly, a supply-and-demand problem. (That is not to say that it's a simple one, or that naive economics trivially applies here or will necessarily give correct answers, or especially that either the supply or demand are straightforward to change, just that it's a reasonable reasoning framework to start with.) The more people willing to do a job, the lower the standards of the lowest bidder. Solving this problem requires one or more of:

1) Raising the minimum standards of the lowest bidders on the worker side. This could be done through collective bargaining or regulation, making it so nobody is willing to "defect" (in the prisoner's dilemma sense) and work for conditions below a certain standard, which means there's little to no supply of such workers.

2) Raising the standards on the demand side. This could theoretically happen if consumers are willing to preferentially purchase from places that provide higher standards for workers; effectively, coordinate and collectively bargain on the purchasing side. This seems unlikely to happen as consumers are even more likely to "defect" and purchase from the least expensive company. This is one case where a simplistic model breaks down: consumers' ability to collectively demand higher standards for how companies treat their workers is limited by the fact that consumers are getting their income and ability to afford higher standards from the work they're doing.

3) Lowering the supply of labor across the board. This would happen if fewer people are willing to do the job, such as if people didn't have to work in order to survive (e.g. UBI). If there isn't an endless supply of workers who have to tolerate whatever conditions get them paid enough to survive, satisfying demand for labor requires substantially higher standards for pay and working conditions. (Conversely, if everyone in a workplace wants to be there, it's easier to get quality output.)

4) Raising the demand for labor across the board. This isn't going to happen, as it'd run counter to some of the primary defining qualities of an improving society; even if it did, it would be likely to ultimately result in similar stratification between groups of workers.

5) Raising the mobility from one category of labor to another. This is constantly being worked on in many different ways, but it will inherently never be able to fully solve the problem, because not enough people can take advantage of this option to avoid stratification.

The feasible alternatives here feed back both positively and negatively into each other.

Personally, I think implementing (3) via UBI is the most feasible of these. (Not politically, in terms of passing it, but practically, in terms of how monumentally effective it would be compared to the rest.) (3) is also the option here most immune to the prisoner's dilemma defection problem.


"running out" in practice means "getting more expensive", as many entities who own (unnecessarily) large swaths of the IPv4 space are reselling them.

In practice it also means literally running out, since there are fewer IPv4 addresses than there are people (and far fewer than there are computers that might want to connect to the Internet).

The future never arrives in an evenly distributed fashion, but it reaches everyone eventually.

1) There's no danger of overpopulation. People have a natural tendency to reproduce slower when they feel safer.

2) Trivial argument: if people already lived indefinitely would you advocate murdering them to "make room"? Telling people they shouldn't be able to pursue a longer life is equivalent. Making that decision for yourself is perfectly fine; making it for others is not.

3) 150k people die every day, nearly 2 people per second. If fixing that tragedy creates new problems, bring them on; we'll solve those problems too.


1) Is not clearly true. Yes, 'feeling safer' pushes down on reproduction rates. But the _total_ effect on population growth could still be positive if the total death-rate drops enough -- we don't know enough to say for sure. And frankly, I think the most likely outcome is that people would be more likely to have kids if they didn't have to worry about missing out on their chance at XYZ dream.

2) Not true from most moral perspectives, including 'common sense morality'. In a pure utilitarian sense, sure, but most people don't subscribe to that. For example, choosing to not save someone from a burning fire is not the same as choosing to burn them to death. Both the actor and their intention matter.

3) I don't disagree with the first half of your point (that this is a tragedy) but I cannot share your optimism about us solving the consequent problems. If there's anything that the last fifty years of modernity have shown, it's that we're actually quite bad at solving broader social problems, with new and even worse problems often arising well after we thought the original problem settled. Consider global warming (to which the 'solution' looks to be the further impoverishment of the third world, and probably mass deaths due to famine/drought/heat waves), or how we in the US 'solved' mobility by destroying main streets and replacing established public transportation with cars and mazes of concrete. Now we've "solved" loneliness by giving everyone a phone and -- well, I'm sure you know how that went.


1) We already have a growing population, and I don't think it's inherent that curing mortality must make it grow faster. The net effect would certainly be an ongoing upwards growth (since I would hope that population never goes down), but I'm arguing that the net effect does not inherently have to be unchecked exponential growth. Immortality doesn't solve resource constraints, and resource constraints do influence people's choices. That said, I also believe that even if it did result in faster growth, that isn't a reason to not solve the problem.

2) The equivalence here isn't "choosing to not save". Choosing to push someone back into a burning building, or preventing them from trying to escape, is equivalent to choosing to burn them to death.

3) I am an incorrigible optimist and don't intend to ever stop being one. Humanity is incredible and it's amazing what we can solve over time. I don't believe that any potential solution we might come up with is worse than doing nothing and letting 150k people die every day.


Which case was that?

If you make buyout offers to 10000 startups, how many of those say "no" to those offers? Not everybody is willing to send an "our incredible journey" post to all their users.

> If it isn't in you it means you are carefully suppressing it.

This is false and excessively cynical. There are genuinely good people who have values and live up to those values, without hypocrisy, and that doesn't mean they have "carefully suppressed" desires to do otherwise.


They have desires all the time. Most of them will admit it. Their actions prove their success at suppressing them.

Of course people have desires. My point is that some people have their desires and their values aligned, rather than having desires that run contrary to their values. No "suppression" required.

Sometimes that happens. They also have other desires that conflict with their values at times. This is just a normal part of being a complex human.

The Path of Love begins with exercising our free will to choose not to be a selfish ahole. Suppression of our selfish urges is the first step to fully overcoming them, which is the process of purifying one's soul of even the potential for vice.

"Blessed are the pure of heart, for they shall see God."

That is the highest point of human self-evolution, though precious few even attempt it, much less recognize a person who has done so. And so few believe it to be possible that even those of us who try are in a small minority.


We all have conflicting desires. You might desire a healthy physique while simultaneously desiring a hamburger.

But do you want to want the hamburger? Or do you want to want a salad? Which do you, the person (not the animal), really want?

With a bit of reflection, it's not so difficult to figure out what you as a person really want, and it's often different from your lower-level primal drives.

Those who choose to eat the salad haven't suppressed anything - they've simply aligned their actions with their higher order desires.


What does your wishlist for Rust look like? (Besides "simpler C/Rust interoperability", of course.) Has QEMU run into things that Rust-for-Linux hasn't, that feel missing from the Rust language?

Right now the only language-level thing I would like is const operator overloading. Even supporting an MSRV as old as 1.63 was not a big deal; the worst thing was dependencies using let...else, which we will vendor and patch.

Pin is what it is, but it is mostly okay since I haven't needed projection so far. Initialization using Linux's "impl PinInit<Self>" approach seems to be working very well in my early experiments; I contributed changes to use the crate without unstable features.

In the FFI area: bindgen support for TOML configuration (https://github.com/rust-lang/rust-bindgen/pull/2917 but it could also be response files on the command line), and easier passing of closures from Rust to C (though I found a very nice way to do it for ZSTs that implement Fn, which is by far the most common case).
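
Roughly, the ZST trick looks something like this (a simplified sketch rather than the actual code; the i32 signature and helper names are just for illustration). Because a capture-less closure is zero-sized, a monomorphized extern "C" trampoline can conjure an instance of it, so no void* context parameter is needed:

    use std::mem;

    // Trampoline monomorphized per closure type F. Since F is a ZST,
    // an instance can be materialized out of thin air.
    unsafe extern "C" fn trampoline<F: Fn(i32) -> i32>(arg: i32) -> i32 {
        // SAFETY: as_c_fn() checks that F is zero-sized, so a zeroed
        // value is a valid (and the only) instance of F.
        let f: F = unsafe { mem::zeroed() };
        f(arg)
    }

    // Turns a capture-less closure into a plain C function pointer.
    fn as_c_fn<F: Fn(i32) -> i32>(_f: F) -> unsafe extern "C" fn(i32) -> i32 {
        assert_eq!(mem::size_of::<F>(), 0, "closure must not capture state");
        trampoline::<F>
    }

Anything C expects as a bare int (*cb)(int) can then be fed as_c_fn(|x| x + 1); as soon as the closure captures state it stops being a ZST and you are back to the usual context-pointer dance.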

The "data structure interoperability" part of the roadmap is something I should present to someone in the Rust community for impressions. Some kind of standardization of core FFI traits would be nice.

Outside the Rust core proper, Meson support needs to mature a bit for ease of use, but it is getting there. Linux obviously doesn't need that.

BTW, I saw your comment in the dead thread; you're too nice. I have been curious about Rust for some time, and with Rust-for-Linux maturing and Linaro doing the first contribution of build system integration + sample device code, it was time to give it a try.


> Right now the only language-level thing I would like is const operator overloading.

As far as I know, const traits are still on track.

> easier passing of closures from Rust to C

As in, turning a Rust closure into a C function-pointer-plus-context-parameter?

> The "data structure interoperability" part of the roadmap is something I should present to someone in the Rust community for impressions. Some kind of standardization of core FFI traits would be nice.

Would be happy to work with you to get something on the calendar.


> As far as I know, const traits are still on track.

Yes, they are. My use case is something like the bitflags crate; there are lots of bit flags in emulated devices, of course. In the meantime I guess it would be possible to use macros to turn something like "bit_const!(Type: A|B)" into "Type(A.0|B.0)" or something like that.
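
To make that concrete, here is a rough sketch of such a macro (assuming a hand-rolled tuple-struct flags type rather than the bitflags crate; Status, READY and ERROR are invented for the example). The OR happens on the underlying integer, so it already works in const context without const operator overloading:

    #[derive(Clone, Copy, PartialEq, Eq)]
    struct Status(u32);

    impl Status {
        const READY: Status = Status(1 << 0);
        const ERROR: Status = Status(1 << 1);
    }

    // bit_const!(Status: READY | ERROR) expands to
    // Status(Status::READY.0 | Status::ERROR.0).
    macro_rules! bit_const {
        ($t:ident : $($flag:ident)|+) => {
            $t($($t::$flag.0)|+)
        };
    }

    const DEFAULT_FLAGS: Status = bit_const!(Status: READY | ERROR);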

> As in, turning a Rust closure into a C function-pointer-plus-context-parameter?

Yes; more generally, everything related to callbacks is doable but very verbose. We might do (procedural?) macro magic later on to avoid the verbosity, but for now I prefer to stick to pure Rust until there's an idea of which patterns recur.
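
For reference, the verbose pattern in question is something like the following sketch (a generic illustration, not our actual code; the i32 argument and names are made up): box the closure, hand C an opaque context pointer, and pair it with a monomorphized trampoline that casts the pointer back.

    use std::ffi::c_void;

    // Trampoline matching a typical C callback signature such as
    //     void (*cb)(void *ctx, int arg);
    unsafe extern "C" fn trampoline<F: FnMut(i32)>(ctx: *mut c_void, arg: i32) {
        // SAFETY: ctx must be the pointer returned by into_context::<F>().
        let f = unsafe { &mut *ctx.cast::<F>() };
        f(arg);
    }

    // Returns the (function pointer, context) pair to register with C.
    // The Box is leaked here; the teardown path must reclaim it with
    // Box::from_raw once C guarantees the callback will not fire again.
    fn into_context<F: FnMut(i32) + 'static>(
        f: F,
    ) -> (unsafe extern "C" fn(*mut c_void, i32), *mut c_void) {
        (trampoline::<F>, Box::into_raw(Box::new(f)).cast::<c_void>())
    }

Multiply that by every distinct callback signature and the boilerplate adds up quickly, which is where the macro magic mentioned above would come in.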

Let me know by email about any occasions to present what I have.


I made this handy function to pass callbacks to C: https://github.com/andoriyu/uclicious/blob/master/src/traits...
