What are the top 3 languages you have used that could serve as Go alternatives, ranked by fun, in your opinion (I am guessing Clojure and Rust are two of them)?
You’re right! I think overall I have most fun with Clojure.
I also really like Julia; it has a channel mechanism and a very nice macro system. It is also obscenely fast at number-crunching tasks, and I find the language pretty pleasant to use, with a nice syntax. I use Julia whenever I need to do anything involving CPU-bound number crunching (which admittedly isn’t too often).
All that said, I have been mostly favoring Rust because the memory footprint is so much smaller and I still have a fair amount of fun with it :).
Also, depending on the task, I get a fair bit of mileage out of ZeroMQ, which can bolt on channel-like semantics in a pinch. It’s not as nice as having them built directly into the language, but it does support more flexible patterns. Whenever I have to use Java, I almost always end up importing JeroMQ the moment the built-in BlockingQueues stop being sufficient.
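As an illustration of the baseline being described, here is a minimal sketch (my own, not from the comment) of how far Java's built-in BlockingQueues get you as a stand-in for buffered channels:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// A bounded BlockingQueue behaves much like a buffered channel:
// put() blocks when the buffer is full, take() blocks when it is empty.
public class ChannelSketch {
    static int run() throws InterruptedException {
        BlockingQueue<Integer> chan = new ArrayBlockingQueue<>(4); // buffer of 4

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 8; i++) chan.put(i); // blocks when full
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        int sum = 0;
        for (int i = 0; i < 8; i++) sum += chan.take(); // blocks until a value arrives
        producer.join();
        return sum; // 0 + 1 + ... + 7 = 28
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("sum = " + run()); // prints: sum = 28
    }
}
```

What this lacks, and where a ZeroMQ binding like JeroMQ comes in, is select-style multiplexing over several queues and patterns beyond point-to-point (pub/sub, fan-out, and so on).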
It’s a combination of the problem appearing to be low-hanging fruit in hindsight and the fact that almost nobody actually seems to have checked whether it was low-hanging in the first place. We know it’s the latter because the problem went essentially uncited for the last three decades, and it didn’t seem to have spread by word of mouth (which is how many interesting problems circulate in math).
This is different from the situation you are talking about, where a problem genuinely appears difficult, attracts sustained attention, and is cited repeatedly as many people attempt partial results or variations. Then eventually someone discovers a surprisingly simple solution to the original problem, which makes all the previous papers look ridiculous in hindsight.
In those cases, the problem only looks “easy” in hindsight, and the solution is rightly celebrated because there is clear evidence that many competent mathematicians tried and failed before.
Is there any evidence that this problem was ever attempted by a serious mathematician?
You can make a lot of money in swindles and bubbles if you time your exit well. There were a fair number of opportunistic investors who did well in the NFT craze, speculating while knowing full well that NFTs were a fad that would go to zero.
Everything will eventually go to zero. We look at some of these things and laugh because we're pretty sure they're going to go to zero within weeks or months rather than years, but by the end of all of our lifetimes, most of the companies on the stock market will be replaced. The few that won't are probably investment banks like Goldman Sachs.
This title is a bit ironic when you consider that one of the motivations for inventing category theory was to provide a foundation for many branches of mathematics.
Can you elaborate on what's ironic (and what "this" is - higher CT)?
A note on the motivations - CT was not originally intended as a foundation. This is clear from both the name (General Theory of Natural Equivalences) and the construction (based on set theory, which was and still is the foundation for most of mathematics). There was indeed work in the foundational direction, and there are relevant aspects, but I don't think that is, even today, the core aspect of it.
Yes, I should point out that I am a noob in this area, so you might be right in calling me out. My understanding is that CT was invented in part to provide a robust foundation for algebraic geometry, so it is quite ironic that people are now trying to rework the foundation of the foundation.
Not really. For many years mathematics rested on traditional first-order logic and naive set theory. That was revisited at the beginning of the twentieth century.
I am used to having "Emacs key-bindings" on both gnome and Mac (so that for example ctrl-a will always go to the beginning of a textbox no matter the application, such as chrome).
For some strange reason this seems to be a very hard thing to set up on KDE - or am I missing something?
> so that for example ctrl-a will always go to the beginning of a textbox no matter the application
And Control-W will always erase word no matter the application?
This is actually a major reason I use KDE: I can, with some effort, change keyboard shortcuts to avoid conflicting with terminal Control keys. It doesn't solve the textbox problem, though.
(I don't use Emacs bindings, but Control-W erase word came in the ‘new’ TTY driver in BSD2 in 1983 — predating Windows 1.0, incidentally — likely copying TOPS-20.)
There are projects like kinto that achieve very good results at making Linux behave like macOS on shortcuts (cmd instead of ctrl only when it makes sense, etc).
I’m not sure how they do it; I suspect it’s mostly a manual grind of configuring the most common shortcuts and apps, but there might be some ideas there that can be reused for the Emacs setup.
On paper KDE's system is more elegant and practical than GNOME's. In practice the keyboard shortcut management via exporting and importing is rather unwieldy. Then again, if you want to go deep into setting up your shortcuts to something properly usable, it's a fair bit more convenient with KDE, where all your shortcuts are in a single place, than with GNOME, where you have to look in all the gconf categories and it's a right pain to find conflicts.
I'm not sure this is the reason or a big reason, but I think this is very difficult to do in Linux, sadly.
What makes Linux great is also its biggest handicap, in my opinion, when it comes to User Experience: the fragmentation of UI frameworks and libraries.
I imagine having this control across Qt, GTK, other UI libraries, and Electron-type apps is difficult if not impossible.
It works perfectly on GNOME (Ubuntu) though. There is a simple toggle you have to flip in one of the standard control panels as well.
I am surprised this issue is not gaining traction with the KDE crowd, as I imagine a substantial part of the userbase are Emacs users who are used to Emacs keybindings.
omg I found my people. Ctrl+A and Ctrl+E are so engraved into my muscle memory that it always takes a moment to readjust when using non-Mac OSes. As a KDE fan, I didn't know GNOME also uses those.
As I thought: you can set a different shortcut for "Beginning of line" and it will work in Qt input fields but, sadly, not others. I don't know if this is the step you're on.
This seemed like something relatively easy for ChatGPT to handle. The response is a bit complicated compared to what it sounds like you are looking for (it's not done from the KDE settings GUI), but still a two-minute "fix".
It was easy enough, and I have enough experience with KDE and Linux to know it would work. I have no interest in doing this myself, although I do similar things with "Input Remapper". My point, which I tried to state kindly, was that it is not that hard and not worth complaining about - basically a 2025 "let me Google that for you".
For the same reason I would use gpt to solve something basic like this: It is the simplest and fastest way to get it done right now. I was not trying to be inflammatory, although apparently I did not understand it would be a big deal to people.
> It is the simplest and fastest way to get it done right now.
or the fastest way to get a confidently wrong word salad. This stuff is niche AND indexed in search engines, which makes LLMs generally not the best-suited tool (as proven by Gemini's answer being inferior to the search results lower down on the same search page).
> I was not trying to be inflammatory
You were not. I was just holding you to "use the right tool for the job", which an LLM often is not. On a tech venue such as this, I would indeed expect this to come across as non-controversial.
Actually, things did happen in 2001 (9/11), 2008 (GFC), 2012-2015 (Eurozone crisis), 2020 (Covid) and 2022 (Russia-NATO war).
Each of these came very close to a major market depression (except 9/11).
The fact that none did has only taught lots of people to take on an incremental bit more insurance in the form of gold (if you were to replay Covid or the GFC today, what odds would you put on it not leading to a great depression?).
You can also argue that the gradual rise tracks almost exactly the gradual geopolitical paradigm shift from unipolarity to multipolarity.
I always wonder why gold going this high is seen as bearish for the dollar, as the US currently holds 8,133.46 metric tons of gold, more than the next three top holders (Germany, France and Italy) combined and almost twice as much as China and Russia combined (although no one really knows how much China holds).
So I would be far more concerned about the impact of high gold prices on other currencies that are not backed by as much gold before worrying about the US.
The risk to the dollar is a flight to gold causing a decrease in buyer interest in Treasuries, resulting in higher rates paid to buyers of that debt. To give a sense of scale, the US issues something like $1-2 trillion in new Treasuries a year, which is roughly the market value of all 8,133.46 tons at the current price.
For example, a 2% rate increase on $1T of 30-year Treasury issuance amounts to $600B over the 30 years.
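The back-of-the-envelope arithmetic checks out; a quick sketch using the comment's figures (the simple-interest framing, ignoring discounting and compounding, is my own simplification):

```java
// Simple-interest sketch: extra coupon cost of a 2-point rate increase
// on $1T of 30-year issuance, ignoring discounting and compounding.
public class TreasuryMath {
    static double extraInterest(double issuance, double rateIncrease, double years) {
        return issuance * rateIncrease * years;
    }

    public static void main(String[] args) {
        double extra = extraInterest(1e12, 0.02, 30); // $1T, +2%, 30 years
        if (Math.abs(extra - 600e9) > 1e6) throw new AssertionError();
        System.out.println("extra interest ~= $" + extra / 1e9 + "B"); // ~$600B
    }
}
```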
I know; my point is that the dollar is not directly backed by gold, but indirectly you can make the case that the astronomical gold reserve comes into play if gold prices go insanely high.
US GDP in 2024 was about $29 trillion. A single trillion in gold (8,133 t) is not an astronomical amount, even though it's more than any other holder has. In fact, all the gold ever mined is only ~220,000 tonnes, so US GDP is higher than the value of all the gold above ground at $4,000 per troy ounce.
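For the curious, the tonnage-to-dollars conversion behind those figures (my own sketch; the factor of ~32,150.7 troy ounces per metric tonne is the standard conversion, and the $4,000/ozt price is the one assumed in this thread):

```java
// Convert gold tonnage to market value at an assumed $4,000 per troy ounce.
public class GoldMath {
    static final double OZT_PER_TONNE = 32150.7; // troy ounces per metric tonne

    static double valueUsd(double tonnes, double pricePerOzt) {
        return tonnes * OZT_PER_TONNE * pricePerOzt;
    }

    public static void main(String[] args) {
        double us = valueUsd(8133.46, 4000);  // US reserves: roughly $1.05T
        double all = valueUsd(220000, 4000);  // all gold ever mined: roughly $28T
        if (!(us > 1.0e12 && us < 1.1e12)) throw new AssertionError();
        if (!(all > 2.7e13 && all < 2.9e13)) throw new AssertionError();
        System.out.printf("US: $%.2fT, all mined: $%.1fT%n", us / 1e12, all / 1e12);
    }
}
```

So the US hoard at this price is a bit over $1T, while all above-ground gold comes to roughly $28T, just under a year of US GDP.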
I asked chatgpt to write a short fable about the phenomena of vibe coding in the style of Aesop:
The Owl and the Fireflies
One twilight, deep in the woods where logic rarely reached, an Owl began building a nest from strands of moonlight and whispers of wind.
"Why measure twigs," she mused, "when I can feel which ones belong?"
She called it vibe nesting, and declared it the future.
Soon, Fireflies gathered, drawn to her radiant nonsense. They, too, began to build — nests of smoke and echoes, stitched with instinct and pulse. "Structure is a cage," they chirped. "Flow is freedom."
But when the storm came, as storms do, their nests dissolved like riddles in the rain.
Only the Ants, who had stacked leaves with reason and braced walls with pattern, slept dry that night. They watched the Owl flutter in soaked confusion, a nestless prophet in a world that demanded substance.
Moral: A good feeling may guide your flight, but only structure will hold your sky.
It might not lead to the singularity, but for people who work in academia, AI has had an enormous impact, for good or bad, on setting and marking assignments and on lecture notes.
You might argue that LLMs have simply exposed some systematic defects instead of improving anything, but the impact is there. Dozens of lecturing workflows that were pretty standard two years ago are no longer viable. This includes the entirety of online and remote education, which, ironically, dozens of universities started investing in after Covid, right around when ChatGPT launched. To put this impact in context, we are talking about the tertiary and secondary education sectors globally.
> This includes the entirety of online and remote education
I don't get this. Either you do graded home assignments, which the person takes without any examiner and which you could always cheat on, or you do live exams, and then people can't rely on AI. LLMs make it easier to cheat, but it's not a categorical difference.
I feel like my experience of university (90% of the classes had in-person exams, some had home projects for a portion of the final marks) is fundamentally different from what other people experienced and this is very confusing for me.