In 2015 there was no ZGC. Today ZGC (an optional garbage collector optimized for latency) guarantees that there will be no GC pauses longer than a millisecond.
I would check your answer. These are pauses due to time spent writing to diagnostic outputs, not traditional collection pauses. This affects both jstat and GC log writes (i.e. GC log writes will block the app in just the same way).
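Worth noting: since JDK 17 the unified logging framework can buffer those writes off the critical path (a sketch; the gc.log path and app.jar are illustrative):

    # -Xlog:async flushes log output from a background thread instead of
    # blocking application threads on a slow disk.
    java -Xlog:async -Xlog:gc*:file=gc.log -jar app.jar

It doesn't help jstat, but it should take GC log writes mostly out of the pause picture.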
These modern garbage collectors are not simply free though. I got bored last year and went on a deep dive with GC params for Minecraft. For my needs I ended up with:
    -XX:+UseParallelGC -XX:MaxGCPauseMillis=300 -Xmx2G -Xms768M
When flying around in spectator mode, you'd see 3 to 4 processes using 100% CPU. Changing to more modern collectors just added more load to the system. ZGC was the worst, with 16+ processes all using 100% CPU. With ParallelGC, yes, you'll get the occasional pause, but at least my laptop is not burning hot fire.
Yes, no GC is free (well, perhaps Epsilon comes close :)
It’s a low pause GC so latencies, particularly tail latencies, can be more predictable and bounded. The tradeoff you make is that it uses more CPU time and memory in order to operate.
Minecraft really needs generational ZGC (totally brand new) because Minecraft generates garbage at prodigious rates and non-generational GC collects less garbage per unit time.
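On JDK 21 that means one extra flag on top of ZGC itself (a sketch; server.jar is illustrative):

    # Plain -XX:+UseZGC still selects the single-generation collector on JDK 21;
    # the generational mode is opt-in.
    java -XX:+UseZGC -XX:+ZGenerational -jar server.jar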
Yes, this is why GCs work so badly for 3D games: you are usually limited by memory bandwidth and latency, especially on systems with unified RAM (no separate GPU RAM).
Sadly in many cases no; it's not magic. This nirvana is restricted to cases where there is CPU bandwidth available (e.g. some cores idle) and plenty of free RAM. When either CPU or RAM is less plentiful... hello pauses, my old friend.
This is why memory-bound services generally use languages without mandatory GC. Tail latency is a killer.
Rust's memory management does have some issues in practice (large synchronous drops) but they're relatively minor and easily addressed compared to mandatory GC.
In cases where Java is unavoidable and you're working with large blocks, it is possible to sort of skirt around the GC with certain kinds of large buffers that live outside the heap.
I've used these to great success when I had multiple long-lived gigabyte+ arrays. Without off-heap memory, these tended to really slow the GC down (to be fair, I didn't have top-of-the-line GC algorithms, because the OpenJ9 JVM had been mandated).
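For anyone curious, the classic way to do this is a direct ByteBuffer; a minimal sketch (class name and sizes illustrative, and newer JDKs also offer the java.lang.foreign API for the same job):

    import java.nio.ByteBuffer;

    public class OffHeapSketch {
        public static void main(String[] args) {
            // 1 GiB allocated outside the Java heap: the GC tracks only the
            // small ByteBuffer object, never the gigabyte behind it.
            ByteBuffer buf = ByteBuffer.allocateDirect(1 << 30);

            // Absolute-index access, roughly how you'd use a huge long[].
            buf.putLong(0, 42L);
            System.out.println(buf.getLong(0));
        }
    }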
Managing off-heap memory in Java is a pain even worse than manual memory management in C. Unlike C++ and Rust, Java offers no tools for manual memory management, and its idioms, like the frequent use of exceptions, make writing such code extremely error-prone.
But it is a pain, and only really useful if you have a big, long-lived object. In my case it was loading massive arrays into memory for access by the API server frontend. They needed to be completely overwritten once an hour, and it turns out that allocating 40% of system memory and then immediately releasing another 40% back to the GC at once is a good recipe for long pauses or high CPU use.
A GC implementation that avoids ineffective GC activity is less affected by the cost of statistics gathering and telemetry (no news is good news), but it is still affected.
There is a difference between memorization and rote memorization. In chess, rote memorization of master games or chess positions is not a recognized training method. Chess memory improves as a byproduct of analyzing many positions.
Rote memorization is absolutely a mainstay of learning both openings and endgames.
It’s usually a part of tactics training as well, although not as purely; the Polgar sisters, for instance, were drilled on the same chess positions day in, day out in a spaced repetition system. This is going away a bit because chess puzzle databases have so many unique positions that there’s less need for repetition.
Regarding openings, there's a trade-off between chess training and chess results. Rote memorization can improve your results (if you already have good skills), but it won't improve your skills.
Learning endgames is not about blindly memorizing moves in specific positions. You learn tricks that can be used in a large number of positions. Even the seemingly very specific positions can be mirrored left-right (not to mention black-white).
That seems reasonable, but at the same time my understanding is that there’s enormous value for novice and intermediate players in memorizing openings. I wonder if that effect is significant enough to categorize chess as another high-rote-memorization-affinity task.
Learning openings beyond a very basic level is not going to help the club player very much, and it’s generally a good way for them to waste their time, at least from an improving-your-Elo perspective.
Being the best out of the opening will typically put you a “quarter pawn” ahead, maybe putting you ahead as white or equalizing as black. Then if you’re a novice you will immediately hang a knight and end up 2.75 pawns behind. Then your opponent will hang a bishop and you’ll be a quarter pawn ahead again.
The other problem with learning opening theory against novices is you will learn 30 moves a side of Ruy Lopez opening theory and your opponent won’t get 10 moves without leaving theory rendering your study moot.
There’s far more emphasis on memorizing openings at the grandmaster level because people are playing a tight enough game elsewhere for that slight advantage to really matter, and because of all the pre-game preparation, where teams of grandmasters and chess engines come up with novel moves to throw an opponent off balance while the star player memorizes the lines. To the point that grandmasters like Bobby Fischer complained it ruined the game and invented variants like Chess960. All super grandmasters have outlier memorization abilities.
Generally club players just need to rote memorize not too deeply and understand the broad sweeping ideas and key moves of the openings (when white does that, counter them with this). That should allow them to come up with reasonable moves on the fly which might be the best or third best moves. Memorizing fewer openings at first is probably better. At the more casual level memory is much less important.
> Being the best out of the opening will typically put you a “quarter pawn” ahead, maybe putting you ahead as white or equalizing as black. Then if you’re a novice you will immediately hang a knight and end up 2.75 pawns behind. Then your opponent will hang a bishop and you’ll be a quarter pawn ahead again.
While this is true if you know openings, many openings have a trap or two that make up a very tricky line that puts you 3-5 points ahead. Knowing the traps and how to punish them is a huge material advantage in some games. So while knowing your opening well is "worth" only a quarter pawn in a typical game, it is worth a several-percent increase in win rate from knowing these lines.
Openings like the Jobava London system have 10-20 different trap lines like this, and if you want to play them, you must know the lines.
It is very common for players with your mindset to plateau around 1400-1600, at which point it's time to sit down and start memorizing openings and endgames. Just being good at searching the game tree gets you to that point, but now you need to know the times when the game tree collapses 30 moves later.
There was a guy, Michael De La Maza, who literally just drilled tactics, broke 2000 USCF, and then quit chess; if you look at his games, yes, he really, really did not understand openings. So 1400-1600 is well before the point where you’re going to plateau without knowing openings.
At 1400, yes, learning a trap line can improve your results, so if you subscribe to the Eric Rosen school of opening theory you can benefit from openings. I’ve just never thought it’s worth learning much about conventional openings until about 1600.
> When all is said and done, I can’t recommend Rapid Chess Improvement (a book that, in my view, offers a philosophically bankrupt vision of what chess is). It smacks of "the blind leading the blind.” But, as I said earlier, his book might prove useful for some.
Also, a rating improvement from a 1300 start after a long spell of no rated games often means a lot of skill improvement in that gap, and then a corresponding adjustment in rating. Perhaps the guy was a bridge or Magic: the Gathering player and already had a decent intuition for games and needed to transfer that to chess. Disregarding that drilling 1000 tactical problems sounds a lot like a memorization plan to me, he also clearly knows the e4 opening given the game analysis quoted in Silman's review.
> Like many adults, he assumed that he needed to augment his natural skills and intelligence by compiling chess knowledge: he studied openings, endgames, and other "chess knowledge" information. Despite all that accumulation of knowledge, he was getting nowhere.
Huh... did someone study some openings and endgames? His tactical game was likely the weakest part of his game so he remedied that error and got rapid improvement. Not in spite of failing to study openings and endgames, but because he did study them, just out of order.
Sure he didn't know the quarter-pawn-advantage grandmaster lines (which you don't need to know as a 1600), but he knew the traps and how to avoid them.
This couldn't be more wrong. It's absolutely a training method, and the importance of recognizing certain openings is even more pronounced in professional play.
Your link is only for value types, but this one is more general: it's for any type. Obviously the two are related, but not the same. Nullability for value types has performance implications as well.
It is free to try, but it may not be free for commercial use. Some random blog post says to check with Oracle if you want to sell your products/services with it.
"The GFTC is the license for GraalVM for JDK 21, GraalVM for JDK 17, GraalVM for JDK 20 and later releases. Subject to the conditions of the license, it permits free use for all users – even commercial and production use."
"What is the new “GraalVM Free Terms and Conditions (GFTC) including License for Early Adopter Versions” License?"
Diffing the licenses shows that the only change was to add clarity around graalvm as opposed to a generic program:
"For clarity, those portions of a Program included in otherwise unmodified software that is produced as output resulting from running the unmodified Program (such as may be produced by use of GraalVM Native Image) shall be deemed to be an unmodified Program for the purposes of this license."
Other than that, it appears the same. The thing I'm wondering about is how this matches with everyone saying it's "free to use" when the following is in both versions of the license (underscore _emphasis_ mine):
"(b) redistribute the unmodified Program and Program Documentation, under the terms of this License, provided that __You do not charge Your licensees any fees associated with such distribution or use of the Program__, including, without limitation, fees for products that include or are bundled with a copy of the Program or for services that involve the use of the distributed Program."
This to me reads as if:
- You cannot charge licensing fees for GraalVM-built, distributed pieces of software (licensing for a desktop application, CLI program, game, etc.)
- You cannot charge licensing fees for services that involve the use of GraalVM-built software (SaaS software that uses the GraalVM-built binary in a k8s container, lambda, etc. as part of the service topology)
My gut reaction (since this is Oracle) is that this was an ambiguity oversight in the language that is now being corrected to lock down this "free" usage of the premium version, but I may be overly pessimistic.
Is anyone using this in prod based on it being "free" and has any info from their legal team or otherwise?
Mostly looking to be corrected in my likely misinterpretation.
I did not, but I have my doubts as to whether he was made to do it or truly meant it. I tend to believe the former, because it's always accompanied by "stepping back for a while" and "getting help on how to behave better". Seems like a standard procedure after somebody manages to overpower you (in whatever hierarchy they have there).
Were cursing and expletives necessary? Absolutely not. They don't drive any point forward.
But: is showing people the door when they are entitled or unprofessional necessary? Very, very much yes.
Feel free to read into the article as your beliefs incline you to. I've known many people like Linus, and they don't have "changes of heart". They simply get sick and tired of being misunderstood and remove themselves from the situations that cause it.
There's more to the Linus-style jerk phenomenon than just telling entitled people to piss off (I would be reluctant to call that being a jerk if that's all it was). See https://news.ycombinator.com/item?id=33058906 for an example thread and associated comments. If you're just ranting or passing off subjective POVs as truth and obvious and those disagreeing with you as doing so out of incompetence or malfeasance, that isn't being direct, honest, or straight to the point. It's being a dick.
I've seen brilliant colleagues for whom I have the utmost technical admiration completely fail to improve bad designs implemented by others, because the brilliant person was so dickish about how they communicated to others.
> And the reality is that there are no absolute guarantees. Ever. The "Rust is safe" is not some kind of absolute guarantee of code safety. Never has been. Anybody who believes that should probably re-take their kindergarten year, and stop believing in the Easter bunny and Santa Claus.
It's an exaggeration. It means that he disagrees with people who blindly repeat something that, on the level Linus operates at, is objectively not true.
I have no context on the broader discussion, but it seems the two sides aren't arguing on the same plane. In that context I'll agree Linus was a dick.
But, consider what was said upthread: many high-profile open source contributors hear the same crap every day. No matter how gracious you start off, you'll start rolling your eyes and ultimately resort to sarcasm. And some go even further: start insulting. Ask anyone who works in retail.
So again, to me Linus' statement basically uses an exaggeration to illustrate a point: "Stop repeating something as if it's unequivocally true. It's true only in your context (userland application development). It's not true in kernel context."
That people get offended by that says more about them than about Linus.
Finally, I'll agree it can and should be toned down. Not disputing that. But it's also not so difficult to extract out the true point in such a rant and move on.
We probably won't get much further going back and forth on this. For what it's worth, you seem very reasonable, I've appreciated your comments for a long time, and I'm sure we'd get along fine if we were to work together.
I think you could let Linus off the hook by trying to find the kernel of truth as you suggest, and that seems to be the way key Rust team members work. There's been plenty of needless rancor in HN comments about Rust and you can see people like @pcwalton just not engage with the emotional content while still continuing to engage with the technical points. I'm personally impressed by this, but wouldn't be surprised if it contributed to the burnout.
Should we all aspire to be like that? Doing so seems like the human communication equivalent of Postel's Robustness Principle, which sounds great but in practice leads to shitty implementations getting away with being shitty because of the "robustness" on the other side. Maybe the better play here, especially with asynchronous communication, is to expect people to come back to their message draft when they are not so pissed/emotionally triggered and then snip out the tangential emotional crap. Especially the ragey, condescending stuff.
> I'm personally impressed by this, but wouldn't be surprised if it contributed to the burnout.
I think that people who contribute to languages are of a certain psychological type. They are generous, nice, kind, they want to contribute and are not interested in the social and people drama. They are a special breed of people whom I admire.
At the same time, and as you point out, that makes them more vulnerable to burnout because the social / people drama always creeps in, and they seem less well equipped to deal with it (though I've known of very impressive exceptions).
Personally I found out that bottling up negative emotions is futile; they inevitably erupt and the longer you have bottled them up, the more violent the explosion and the worse the ramifications for your mental health.
That's my main reason for not mincing words anymore. I prioritize my mental health. I am OK if that means I part ways with people and companies. I was a victim of FOMO for most of my conscious life; I want to live my remaining years being more at peace.
> Maybe the better play here, especially with asynchronous communication, is to expect people come back to their message draft when they are not so pissed/emotionally triggered and then snip out tangential emotional crap.
Obviously that's best but I can bet each and every one of us has been in a situation where they knew they had to do that and still didn't. :D Myself included, not proud of some of my comments on HN during Corona time...
But expecting most people to be like that? Super tall order, turns out. :(
--
> We probably won't get much further going back and forth on this.
Likely not, but I am grateful that you entertained it as much as you did. :)
> For what it's worth, you seem very reasonable, I've appreciated your comments for a long time, and I'm sure we'd get along fine if we were to work together.
That's an extremely surprising and very warm message that I couldn't predict if I lived for 1000 years. Thank you! Still very surprised and your message is definitely the highlight of my day now.
That means nothing if one uses an inefficient language for both inference and training; the slower language will still bottleneck the ONNX Runtime FFI calls, on top of Java's own garbage collector.
Those who care about performance know that the worst case has to be taken into account regardless, which is why this kind of thing is written in C++ for both memory and runtime performance.
I often do a Google search, and then go directly to the Wikipedia result. My reasoning is that during the initial search, I don't know if there's a Wikipedia page about that topic, and I might need a fallback option.
Unless it's something related to medicine; then you have to explicitly add "wiki" to the query. Some public health thing to discourage hypochondriacs I guess, but it's very annoying.
EDIT: I see now that you already know jpackage, but you don't want an installer. In that case you can use launch4j, which just wraps a jar in an exe: https://launch4j.sourceforge.net/
okay, Launch4j is new to me - but this seems to only work for Windows
I find the whole situation a bit silly, because obviously it's creating an executable under the hood somewhere. The official tools just don't expose it to you.
I think what I could do is to run the installers on all target systems and then pull out the executables manually. I just think that'd be a bit difficult/annoying to make as part of a CI system
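One thing that might help (a sketch, assuming jpackage from JDK 16+; names and paths are illustrative): jpackage can skip the installer entirely with --type app-image, which emits the native launcher plus bundled runtime as a plain directory, and that is much friendlier to CI:

    # Build an application directory (launcher + bundled runtime);
    # no installer is produced or required.
    jpackage --type app-image \
             --name MyApp \
             --input target/libs \
             --main-jar app.jar \
             --main-class com.example.Main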
I only used Launch4j on Windows, but in the downloads you can find Linux and MacOS versions as well. At the bottom of the webpage it explains that it can be built on even more platforms, if you have MinGW binutils 2.22 (windres and ld only) on those platforms.
If I remember correctly, you can't just pull out the exe created by jpackage, because it doesn't contain the runtime. The installer installs both the runtime and the exe. The exe created by Launch4j also doesn't include the runtime, but Launch4j is better at finding a system-wide runtime, and can direct the user to a specific download site (such as adoptium.net) instead of java.com. If you want to have JUST a single exe, then I think GraalVM's native image is the only option.
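For reference, the single-binary route looks roughly like this (assuming a GraalVM install with the native-image tool on the PATH; names illustrative):

    # Ahead-of-time compile the jar into one self-contained executable;
    # no JVM needed at run time.
    native-image -jar app.jar myapp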
Hmmm, yeah, I should really just try GraalVM. JavaFX finally supports it. I just remember it was a bit unclear how to hook it up with Clojure/deps.edn, but I'm sure the tooling has evolved since I last looked.
Some day they'll integrate cosmopolitan builds with Graal native and we'll come full circle to cross-platform executables haha
I actually did that, and I asked for the first paragraph from "Great Expectations", and the answer was correct.
Of course, the first paragraph of any famous novel is in itself famous, so I also asked for the first paragraph of Chapter XL of the same novel. Interestingly, the first 12 words were correct, but the rest was hallucination.
Original: It was fortunate for me that I had to take precautions to ensure (so far as I could) the safety of my dreaded visitor; for, this thought pressing on me when I awoke, held other thoughts in a confused concourse at a distance.
ChatGPT 3.5: "It was fortunate for me that I had to take precautions to ensure the safety of my treasure, for with such a boy as Compeyson on the loose, I was not likely to find much mercy at his hands. I had not told Herbert about the money, and I was glad to have the chance to count it before he came back. I found that I had still twenty-one sovereigns, six shillings, and sixpence left."
The latest Mockito works with Java 21; you'll just see a warning about the use of the byte-buddy agent. In the future, you'll need to explicitly approve the loading of agents (you can call it "magic"), which will improve the security of your programs.
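If I understand the mechanism correctly (JEP 451), you can already opt in explicitly today with a JVM flag (app.jar is illustrative):

    # Explicitly allow dynamic agent loading, which also silences
    # the JDK 21 warning.
    java -XX:+EnableDynamicAgentLoading -jar app.jar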
That was not the case a couple of days ago when I last looked; I see now that they released Mockito 5.7.0 two days ago, which probably fixes this issue. But that still shows my point: for every new Java LTS, you have to either fix some breakage, or update some library you depend on to a new version (which might then require further changes to your code, or even dropping compatibility with some older Java LTS).
Edit: I just tried with the latest Mockito (5.7.0), and it still gave me the same "Java 21 (65) is not supported by the current version" error, when run without the magic system property. It seems something else earlier in the dependencies had a dependency on an older release of byte-buddy, so I will have to manually upgrade byte-buddy, and hope that it doesn't break that earlier dependency.
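In case it helps, the usual fix for a stale transitive byte-buddy (assuming a Maven build; the version shown is illustrative, any release that understands class file 65 should do) is to pin it in dependencyManagement:

    <!-- Force every transitive byte-buddy to a Java-21-capable release. -->
    <dependencyManagement>
      <dependencies>
        <dependency>
          <groupId>net.bytebuddy</groupId>
          <artifactId>byte-buddy</artifactId>
          <version>1.14.9</version>
        </dependency>
      </dependencies>
    </dependencyManagement>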
You have to update dependencies that work with bytecode, like ASM, byte-buddy, or AspectJ.
Those dependency upgrades in 99% of cases don't break the library that uses them.
I upgraded to JDK 21 on the day of release and haven't had issues with Mockito (or Spring, Hibernate, JUnit, etc.).