Hacker News | PathOfEclipse's comments

My experience profiling is that I/O wait is never the problem. However, the app may actually be spending most of its CPU time interacting with the database. In general, networks have gotten so fast relative to CPUs that the CPU cost of marshalling or serializing data across a protocol ends up being the limiting factor. I once got a major speedup just by updating the JSON serialization library an app used.

I think it was always a mistake to pretend hyperthreading doubles your core count. I always assumed it was just due to laziness: the operating system treats a hyperthreaded core as two "virtual cores" and schedules it as two cores, so every other piece of tooling sees double the number of actual cores. There's no good reason I know of that a CPU utilization tool shouldn't use real cores when calculating percentages. But maybe that's hard to do given how the OS implements hyperthreading.
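For what it's worth, the standard APIs pass the doubled number straight through: in Java, for example, `Runtime.availableProcessors()` returns logical processors, so on a 2-way SMT machine it is typically twice the physical core count (a minimal sketch):

```java
public class CpuCount {
    public static void main(String[] args) {
        // Reports *logical* processors: with SMT enabled, the OS exposes each
        // hardware thread as a separate "virtual core", and the JVM passes
        // that number through unchanged.
        int logical = Runtime.getRuntime().availableProcessors();
        System.out.println("Logical processors visible to the JVM: " + logical);
        // There is no portable JDK API for the physical core count; tools that
        // want it must consult OS-specific sources (e.g. /proc/cpuinfo on Linux).
    }
}
```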


>There's no good reason I know of that a CPU utilization tool shouldn't use real cores when calculating percentages

On AMD, threads may as well be cores. If you take a Ryzen and disable SMT, you're basically halving its parallelism, at least for some tasks. On Intel you're just turning off an extra 10-20%.


Can you provide some links for this? A quick web search turns this up near the top, from 2024:

https://www.techpowerup.com/review/amd-ryzen-9-9700x-perform...

The benchmarks show a 10% drop in "application" performance when SMT is disabled, but an overall 1-3% increase in performance for games.

From a hardware perspective, I can't imagine how it could be physically possible to double performance by enabling SMT.


I don't. It's based on my own testing: not by disabling SMT, but by running either <core_count> or <thread_count> parallel threads. It was my own code, so it's possible that code which uses SIMD more heavily will see a less significant speedup. It's also possible I just measured wrong; for what it's worth, running Cargo on a directory with -j16 and -j32 takes 58 and 48 seconds respectively.

>From a hardware perspective, I can't imagine how it could be physically possible to double performance by enabling SMT.

It depends on which parts of the processor your code uses. SMT works by duplicating some, but not all, of the components of each core, so a single core can work on multiple independent uops simultaneously. I don't know the specifics, but I can imagine that branchy, ALU-type code (jumps, calls, movs, etc.) benefits more from SMT than very math-heavy code. That would explain why rustc saw a greater speedup than Cinebench, since compiler code is very twisty with not a lot of math.
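The core-count-vs-thread-count methodology described above can be sketched like this (a hypothetical Java micro-benchmark, not the commenter's actual code; timings are machine-dependent, and `physicalGuess` assumes 2-way SMT):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SmtScaling {
    // Branchy integer work (multiplies, adds, data-dependent branches) of the
    // kind SMT tends to overlap well, as opposed to code saturating FP/SIMD units.
    static long churn(long seed, long iters) {
        long x = seed;
        for (long i = 0; i < iters; i++) {
            x = x * 6364136223846793005L + 1442695040888963407L; // LCG step
            if ((x & 1) == 0) x ^= x >>> 17; else x += i;        // unpredictable branch
        }
        return x;
    }

    // Run `threads` workers, each with a fixed amount of work, and return wall
    // time in ms. If 2x threads finish in the same wall time as 1x threads,
    // SMT effectively doubled throughput for this workload.
    static long runWith(int threads, long itersPerThread) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        long t0 = System.nanoTime();
        List<Future<Long>> futures = new ArrayList<>();
        for (int i = 0; i < threads; i++) {
            final long seed = i;
            futures.add(pool.submit(() -> churn(seed, itersPerThread)));
        }
        for (Future<Long> f : futures) f.get(); // wait for all workers
        pool.shutdown();
        return (System.nanoTime() - t0) / 1_000_000;
    }

    public static void main(String[] args) throws Exception {
        int logical = Runtime.getRuntime().availableProcessors();
        int physicalGuess = Math.max(1, logical / 2); // assumption: 2-way SMT
        long iters = 20_000_000L;
        System.out.println(physicalGuess + " threads: " + runWith(physicalGuess, iters) + " ms");
        System.out.println(logical + " threads: " + runWith(logical, iters) + " ms");
    }
}
```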


.NET perf can be great in the sense that it provides more tools to write C-like code while still keeping safe memory management, compared to Java. On the downside, both its JIT and its GC seem to be far less sophisticated than the JVM's.

Why, for instance, does the CLR GC not have something like TLABs? The result is that .NET devs seem to have to worry a lot more about the expense of small, short-lived allocations, while in Java these are much cheaper.
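For background, a TLAB (thread-local allocation buffer) is a per-thread chunk of the eden space that HotSpot threads bump-allocate from without synchronization, which is what keeps small short-lived allocations cheap in Java. A minimal sketch of the allocation pattern in question (a hypothetical micro-benchmark; a rigorous measurement would need JMH):

```java
public class SmallAllocs {
    // A tiny short-lived object of the kind TLABs make cheap: allocation is a
    // pointer bump in the thread's buffer, and the object usually dies young
    // (or is scalar-replaced entirely by escape analysis).
    record Point(int x, int y) {}

    static long sumOfFreshObjects(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            Point p = new Point(i, i + 1); // fresh short-lived allocation per iteration
            sum += p.x() + p.y();
        }
        return sum;
    }

    public static void main(String[] args) {
        long t0 = System.nanoTime();
        long sum = sumOfFreshObjects(10_000_000);
        long ms = (System.nanoTime() - t0) / 1_000_000;
        System.out.println("sum=" + sum + " in " + ms + " ms");
        // TLAB behavior can be inspected with -Xlog:gc+tlab=trace (JDK 9+),
        // or disabled for comparison with -XX:-UseTLAB.
    }
}
```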

Overall, I think it's easier to program idiomatically in Java and get decent performance out-of-the-box, while C# may provide more opportunities for optimization without having to rely on something equivalent to sun.misc.Unsafe and offheap allocations.


And it does. How I wish for Valhalla to become a reality, finally giving Java structs, spans, and all the other C++-like capabilities C# enjoys and Java lacks.


> On the downside, both their JIT and their GC seem to be far less sophisticated than the JVM

I've always heard this, but does it actually manifest in noticeably worse performance than the JVM? Go has a much simpler GC, but it's not terribly slower than Java (I do know they are optimized for different things).

It seems like C# is within bounds of Java while also having great performance escape hatches.


I've been working in .NET/C# for the past few years, and while I'm happy with it, I still think the JVM/Java are the best ecosystem overall I've worked in. It's amazing how many things the Java ecosystem gets right that .NET gets wrong.

For instance, Java introduced the fork/join pool for work stealing and recommended it for short-lived tasks that decompose into smaller tasks. .NET decided to simply add work stealing to its global thread pool. The result: sync-over-async code, which is the only way to fold an asynchronous library into a synchronous codebase, frequently causes whole-application deadlocks on .NET. This issue is well-documented: https://blog.stephencleary.com/2012/07/dont-block-on-async-c...
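For reference, the Java fork/join design mentioned here looks roughly like this: work stealing lives in a dedicated ForkJoinPool that callers opt into for decomposable tasks, instead of being folded into the global pool (a minimal, self-contained sketch):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Classic fork/join usage: a task recursively splits itself into subtasks,
// and idle worker threads steal the queued halves.
public class ArraySum extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000;
    private final long[] data;
    private final int lo, hi;

    ArraySum(long[] data, int lo, int hi) {
        this.data = data; this.lo = lo; this.hi = hi;
    }

    @Override protected Long compute() {
        if (hi - lo <= THRESHOLD) {            // small enough: compute directly
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) >>> 1;
        ArraySum left = new ArraySum(data, lo, mid);
        left.fork();                           // queue left half for stealing
        long right = new ArraySum(data, mid, hi).compute();
        return right + left.join();
    }

    public static void main(String[] args) {
        long[] data = new long[100_000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        long sum = new ForkJoinPool().invoke(new ArraySum(data, 0, data.length));
        System.out.println(sum);               // prints 4999950000
    }
}
```

Blocking inside a ForkJoinPool can additionally be wrapped in ForkJoinPool.managedBlock, which lets the pool compensate with an extra worker rather than starve.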

Notice the solution in this blog is "convert all your sync code to async", which can be infeasible for a large existing codebase.

There are so many other cases like this that I run into. While there have been many mistakes in the Java ecosystem they've mostly been in the library/framework level so it's easier to move on when people finally realize the dead end. However, when you mess up in the standard library, the runtime, or language, it's very hard to fix, and Java seems to have gotten it more right here than anywhere else.


The thread pool starvation problem in .NET is annoying when you encounter it. Personally, I have not bumped into it since the .NET Framework days.

The thread pool implementation has been tweaked over the years to reduce the impact of this problem. The latest tweak will be in .NET 10:

https://github.com/dotnet/runtime/pull/112796

I’m not sure a thread pool implementation can be immune to misuse (many tasks that synchronously block on the completion of other tasks in the pool). All you can do is add more threads or try to be smarter about the order in which tasks are run. I’m not a thread pool expert, so I might have no idea what I’m talking about.


Interesting. I'm not a .NET programmer, but I always thought .NET takes the winning approach from the Java ecosystem and adopts it. Java approaches/frameworks are kinda pioneering and competing, while .NET follows and grabs the best. So instead of competing approaches/frameworks (like ORMs, for example), .NET has only one: the best one, well adopted and used by everyone there.

But reading your message, it doesn't sound like that's the case.


[flagged]


We've banned this account for repeatedly abusing the site guidelines and ignoring our many requests to stop.

https://news.ycombinator.com/item?id=43009383 (Feb 2025)

https://news.ycombinator.com/item?id=43009374 (Feb 2025)

https://news.ycombinator.com/item?id=41121266 (July 2024)

https://news.ycombinator.com/item?id=40979059 (July 2024)

https://news.ycombinator.com/item?id=38623345 (Dec 2023)


HN only works because people are generally respectful and assume good faith. Your post is rather the opposite. I hope that you'll maybe reflect on that.


That's a very harsh reply with zero evidence behind it. Based on your response, I am willing to bet I understand the platform better than you do. And the deadlocks I'm referring to are happening in apps written by other people who've been in the .NET ecosystem exclusively for more than a decade, or even two decades.

Here's an article from 5 years ago:

https://medium.com/criteo-engineering/net-threadpool-starvat...

But does citing a more recent article matter to you? Probably not. A source being 13 years old only matters if something relevant has changed since then, and you certainly couldn't be bothered to point out any relevant change to support your otherwise fallacious and misleading comment.

What actually amazes me most about this is that people in .NET seem to want to blame the person writing sync-over-async code like they are doing something wrong, even going so far as to call it an "anti-pattern", when in reality it is the fault of poor decision-making by the .NET team in folding work stealing into the global thread pool. The red-blue function coloring problem is real, and you can't make it go away by pretending everyone can just rewrite all their existing synchronous code and no other solution is needed.

If all you know is one ecosystem, then it seems you are susceptible to a form of Stockholm syndrome when that ecosystem abuses you.


[flagged]


> For example, starting with .NET 6 there is a pure C# threadpool implementation that acts differently under problematic scenarios.

We're seeing this issue entirely in .NET core. We started on .NET 6, are currently on .NET 8, and will likely migrate to 10 soon after it is released. It's again worth mentioning that you provide zero evidence that .NET 6 solves this problem in any way. Although, as we will see below, it seems like you don't even understand the problem!

> I'm certain you're basing this off of your personal experience from more than a decade ago of some forsaken codebase written in an especially sloppy way.

No, I'm referring to code written recently, at the job I work at now, at which I've been involved in discussions about, and implementations of, workarounds for the issue.

> Moreover, there isn't a single mention that the real way to get into actual deadlock situation is when dealing with applications enriched with synchronization context.

100% false. This deadlock issue has nothing to do with synchronization contexts. Please actually read the 2020 article I linked as it explains the issue much better.

> Pathetic attempt at strawman.

I realize responding to this is to just fight pettiness with more pettiness, but I can't resist. You should probably look up the definition of a strawman argument since you are using the word incorrectly.


To me, the claim that "the Java ecosystem gets so many things right that .NET gets wrong" borders on insanity if we consider: having to interact with Maven or even Gradle daily after .NET's CLI and NuGet; dealing with type erasure in generics; the weird shape of the Stream API; the lack of common slice and sequence types that everything unifies under, because primitives cannot be generalized; not being able to author properties and extension methods, leading to dozens upon dozens of type copies or boilerplate accessors; having to tolerate Hibernate after EF Core; and so on and so forth.

As for async and tasks: have you ever considered just not writing code so bad that it manages to bypass cooperative blocking detection and starvation mitigations? It's certainly an impressive achievement if you managed to pull this off while starting on .NET 6.

Edit: I agree with the subsequent reply and you are right. Concurrency primitives are always a contentious topic.


I have no problem with you preferring .NET to Java, and I apologize that my first-cited article was not the best one to share to describe the problem (I should have read it more carefully first), but if you had responded with something like:

"Your deadlock scenario is related to synchronization contexts and can be avoided by ..."

rather than:

"You clearly don't know what you're talking about (but I won't bother telling you why)"

Then we could have had a much more productive and pleasant conversation. I would have responded with:

"Sorry, that article wasn't the right one to share. Here is a better one. The issue I am talking about isn't synchronization context-related at all. It's actually much more insidious."


> having to tolerate Hibernate after EF Core,

Enjoying or tolerating Hibernate is undiagnosed Stockholm Syndrome :D


I don't know what the "right" answer is, but I worked at a company that built a fairly unwieldy monolith, and it dragged everyone down as the company matured into a mid-sized one. And once you're successfully operating at scale, it becomes much more difficult to make architectural changes. Is there a middle ground? Is there a way to build a monolith while making it easier to factor apart services earlier rather than later? I don't know, and I don't think the article addresses that either.

The article does mention "invest in modularity", but to be honest, if you're in frantic startup mode dumping code into a monolith, you're probably not caring about modularity either.

Lastly, I would imagine it's easier to start with microservices, or multiple mid-sized services if you're relying on advanced cloud infra like AWS, but that has its own costs and downsides.


> An argument, though, is an exchange of ideas that ought to surface insight and lead to a conclusion.

That's one definition, I suppose, but it's not the definition you'll find in any dictionary I've seen. The author here seems to be assuming that the only valid reason to argue is to learn. People argue for many reasons other than that.

> If you’re regularly having arguments with well-informed people of goodwill, you will probably ‘lose’ half of them–changing your mind based on what you’ve learned

Again, the author's unspoken presupposition begs to be questioned. Why do most people actually argue in the public sphere? For instance, why do we have presidential debates? The candidates certainly aren't there to learn. They are not even trying to persuade their debate partner. They are arguing to convince or persuade their viewers of something. These could be undecided viewers, or they could be viewers who have already made up their mind but may either feel strengthened about their beliefs or weakened after listening.

Similarly, if I'm debating someone online, it's often less to convince that person and more to convince anyone else who might be reading. I have heard from people in real life who have read debates I've engaged in and expressed both gratitude for my willingness to do so and that they were strengthened in their beliefs on the subject.


I like how you claim data doesn't support this being a problem but at the same time can't be bothered to cite any data. I'll do it for you: https://5666503.fs1.hubspotusercontent-na1.net/hubfs/5666503...

"Alarming proportions of students self-censor, report worry or discomfort about expressing their ideas in a variety of contexts, find controversial ideas hard to discuss, show intolerance for controversial speakers, find their administrations unclear or worse regarding support for free speech, and even report that disruption of events or violence are, to some degree, acceptable tactics for shutting down the speech of others."

"Less than one-in-four students (22%) reported that they felt “very comfortable” expressing their views on a controversial political topic in a discussion with other students in a common campus space. Even fewer (20%) reported feeling “very comfortable” expressing disagreement with one of their professors about a controversial topic in a written assignment; 17% said the same about expressing their views on a controversial political topic during an in-class discussion; 14%, about expressing an unpopular opinion to their peers on a social media account tied to their name; and 13%, about publicly disagreeing with a professor about a controversial political topic."

And as for examples: the sitting NIH director, Jay Bhattacharya, who in hindsight was far more correct on everything COVID-related than the CDC, had this to say about his experience at Stanford: https://stanfordreview.org/stanfords-censorship-an-interview...

" I presented the results in a seminar in the medical school, and I was viciously attacked. ... It was really nasty: allegations of research misconduct, undeclared conflicts of interest… In reality, the whole study was funded by small-dollar donations."

"It was very stressful. I had to hire lawyers. I've been at Stanford for 38 years and I felt it was really, really out of character. At one point, the Chair of Medicine ordered me to stop going on media and to stop giving interviews about COVID policy. They were trying to totally silence me."


> Jay Bhattacharya, who in hindsight was far more correct on everything COVID-related than the CDC was

Bhattacharya who signed the Great Barrington Declaration, advocating for herd immunity and "focused protection" for the elderly? Just imagine how much larger the death toll would have been.

This page has a good list of concerns about Bhattacharya, including how the study mentioned in your link was flawed and one of the co-authors went on to admit the results were wrong: https://www.zmescience.com/medicine/jay-bhattacharya-has-a-h...


An honest seeker of truth wouldn't just say Jay's estimate was off, but would compare it to other estimates of the time. Bhattacharya's IFR estimate was 0.2%. The WHO's IFR estimate was 3.0%. Which of the two had the more accurate estimate? The WHO, with billions in funding, or Jay operating by himself on a shoestring budget, all while the CDC in its bureaucratic incompetence couldn't be bothered to do any real studies? In fact, a positive outcome of Jay's study was to help show just how bad the initial estimates were!

And as far as the Great Barrington Declaration is concerned, it is now widely accepted that the lockdown strategy failed, and that focused protection would have saved far more lives while causing far less economic and educational harm (which, by the way, correlate with loss of life and loss of years of life). Even far-left news outlets admit this now: https://nymag.com/intelligencer/article/covid-lockdowns-big-...


> it is widely accepted now that the lockdown strategy failed

Is it?

https://royalsociety.org/news-resources/projects/impact-non-...


setAccessible is also used to access private fields, not just to write to final fields. Most libraries shouldn't need to set final fields, and I say this as someone who was very much against it when they deprecated sun.misc.Unsafe. I've only had to set a final field once in my career, and that was a workaround for an obscure MySQL/JDBC driver bug. This particular deprecation seems very sensible to me.
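A minimal sketch of the common private-field case described above (reading, not mutating final state; this assumes the target class is in the caller's own module, otherwise its package must be opened, e.g. with --add-opens):

```java
import java.lang.reflect.Field;

public class PrivateRead {
    static class Config {
        private String secret = "hunter2"; // private, but not final
    }

    public static void main(String[] args) throws Exception {
        Config c = new Config();
        Field f = Config.class.getDeclaredField("secret");
        f.setAccessible(true);        // succeeds within one's own module;
                                      // foreign modules must be opened first
        System.out.println(f.get(c)); // prints "hunter2"
        // Mutating a *final* field this way is what the JEP restricts;
        // plain private access like this remains the common library use.
    }
}
```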


So how should GSON initialize an object?

The theory is, go through the constructor. However, some objects are designed to go through several steps before reaching the desired state.

If GSON must deserialize {…, state: "CONFIRMED"}, does it need to call new Transaction(account1, account2, amount), then .setState(STARTED), then .setState(PENDING), then .setState(PAID), then .setState(CONFIRMED)? That's the point of having the constructor and mutation methods guard the state: so that it is physically impossible to reach a wrong state.

There is a convention that deserialization is an exception to this theory: it should be able to restore the object as-is, after, for example, a transfer over the wire. So deserializers were conventionally allowed to set final fields of the object, but only at initialization and only for the object's own good. It was assumed that, even though GSON could reach a state unachievable through normal means, it was, after all, the programmer's role to add the right annotations to avoid this.

So how do we do it now?


> So how do we do it now?

The JEP says:

> the developers of serialization libraries should serialize and deserialize objects using the sun.reflect.ReflectionFactory class, which is supported for this purpose. Its deserialization methods can mutate final fields even if called from code in modules that are not enabled for final field mutation.

I don't know enough about the details here to say if that's sufficient, but I imagine that it at least should be, or if it's not, it will be improved to the point where it can be.


> The JEP says: [...]

The JEP also says:

> The sun.reflect.ReflectionFactory class only supports deserialization of objects whose classes implement java.io.Serializable.

In my experience, most classes being deserialized by libraries like GSON do not implement Serializable. Implementing Serializable is mostly done by classes which want to be serialized and deserialized through Java's native serialization format (which is used by nothing outside Java, unlike cross-platform formats like JSON or CBOR).


Why would you use GSON for objects that go through steps of state? Why would you mark a field like state as final when it is actually mutable? This just sounds like poorly designed code.

Maybe I don't know your use case, but GSON/Jackson-style JSON classes should strictly be data that represents what comes over the wire. If you need to further manipulate that data, it sounds like the classes have too much responsibility.


all state is immutable :) a change creates new state - which is immutable


:) no it's not.


if you change the state, it is not the same state; it is a new state


It strikes me that we could have a way to reflectively create an object from values for all its fields in a single step, similar to what a record constructor does, but for any class (it could even be Class::getCanonicalConstructor, returning a java.lang.reflect.Constructor). It would be equivalent to creating an uninitialised instance and then setting its fields one by one, but the partly-initialised object would never be visible. This should probably be restricted, because it bypasses any invariants the constructor enforces, but as you say, serialisation libraries ultimately do need to do that.
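Records already come close to this: the canonical constructor is discoverable reflectively from the record components, so the one-shot "create from all field values" step exists today for records. A sketch of the idea (Class::getCanonicalConstructor is the comment's hypothetical name; the helper below is illustrative, limited to records):

```java
import java.lang.reflect.Constructor;
import java.lang.reflect.RecordComponent;

public class OneShot {
    record Transaction(String from, String to, long amount, String state) {}

    // Build a record in a single step from its field values via its canonical
    // constructor: no partially-initialised instance is ever visible.
    static <T> T instantiate(Class<T> recordClass, Object... values) throws Exception {
        RecordComponent[] comps = recordClass.getRecordComponents();
        Class<?>[] types = new Class<?>[comps.length];
        for (int i = 0; i < comps.length; i++) types[i] = comps[i].getType();
        Constructor<T> canonical = recordClass.getDeclaredConstructor(types);
        return canonical.newInstance(values);
    }

    public static void main(String[] args) throws Exception {
        Transaction t = instantiate(Transaction.class, "acct1", "acct2", 100L, "CONFIRMED");
        System.out.println(t); // all fields set in one step, constructor invariants aside
    }
}
```

For ordinary classes there is no such guaranteed all-fields constructor, which is exactly the gap the comment is pointing at.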


I don't know if Java serialization supports this kind of thing, but if object A has a pointer to object B and vice versa, there's no order in which to deserialize them without passing through a partially-initialized state that preserves the object-identity relationship. I suppose you can't construct this kind of loopy reference graph with final fields without reflection in the first place, so it's kind of a chicken-and-egg problem. For the very common case of DAG-shaped data, or formats that don't support references, I think the one-shot internal constructor works, though.


Just do it like Rust


The downvotes for your comment are quite telling of the demographic of Hacker News. But even leftwing media are admitting the Democrat party has a problem:

https://newrepublic.com/article/192078/democrats-become-work...

"We cannot solve this problem without an honest assessment of who we are. How we see ourselves as the Democratic Party—the party of the people, the party of the working class and the middle class—no longer matches up with what most voters think."

https://www.realclearpolitics.com/video/2025/01/21/eric_adam...

The Democratic Party "Left Me And It Left Working Class People"

https://www.thirdway.org/report/renewing-the-democratic-part...

"For the first time since the mid-20th century, the central fault line of American politics is neither race and ethnicity nor gender but rather class, determined by educational attainment. But in the intervening half century, the parties have switched places. Republicans once commanded a majority among college-educated voters while Democrats were the party of the working class. Now the majority of college educated voters support Democrats"

But I suppose there's too many well-off, well-educated, white collar elites on this site who aren't willing to face the reality that their politics have been ruining blue collar lives for decades, and the political backlash has finally hit a crescendo powerful enough to wash over Washington. So I guess we blame it all on disinformation campaigns?


You should continue to be cautious, as the Guardian article is pretty weak. Its first point is that Russia wasn't mentioned in a talk about cyber threats, which by itself means very little. Its second and final point comes from an "anonymous source". I'll believe it when the source is de-anonymized; too many deep-state "anonymous tips" have turned out to be lies. It should be noted that CISA has a history of strong left-wing activism:

https://www.dailywire.com/news/u-s-cybersecurity-defense-age...

https://www.dailywire.com/news/biden-administration-colluded...

Trump is not far right and never has been. He's actually moderate on most issues. The current far right position is the Tucker Carlson "isolation at all costs" and "Putin maybe isn't a bad guy" garbage. Trump holds neither of those views.


If it turns out that CISA analysts were truly told verbally to no longer focus or report on Russian threats, and if you assume this was done under Trump's direction or to align with his intent, what would your thoughts be?

I'm not (yet) asserting that either of those two conditions are or aren't true, I'm just curious what your thoughts would be IF they were.


It's a good question, and, in the worst case, I would disagree with the Trump administration and lose some trust in them. There is also the possibility that missing context would provide reasonable justification for the order. For instance, I think "Russian election interference" investigations have mostly been a partisan operation with little evidence that such interference has had any meaningful impact on U.S. elections. These investigations are mostly used to make the political right side look bad by tying them to Putin. It would be reasonable for the Trump admin to defund these investigations as we have much greater problems to be worried about.

