This frames escalation as if it’s an inevitable byproduct of “not cooperating,” but that’s a choice. Sanctuary policies generally limit voluntary local participation (e.g., detainers without judicial warrants), they don’t “block” federal enforcement.
“If you don’t want door-to-door, cooperate” is basically saying federal agencies get to punish jurisdictions for lawful policy choices by switching to more coercive tactics. That’s not normal enforcement; it’s politicized leverage. And once you normalize that logic, it won’t stay confined to immigration.
> “If you don’t want door-to-door, cooperate” is basically saying federal agencies get to punish jurisdictions for lawful policy choices by switching to more coercive tactics.
If you make noncooperation with federal law enforcement the law at any level, you are effectively requiring that, for the feds to enforce the law, the federal government has to change its tactics. That's not punishment; it's just the effect of the decision you made.
> That’s not normal enforcement; it’s politicized leverage.
It’s not normal enforcement, but neither is noncooperation. Sanctuary cities are not the norm. It’s a form of political leverage too.
> And once you normalize that logic, it won’t stay confined to immigration.
Right…and you could also say the same thing about sanctuary policies too. So what if a city or area decided that they were going to be a sanctuary for people who violate the civil rights act? Would the federal government be justified in using different tactics in its enforcement of that law?
There are threads here you don’t want to accidentally pull because they will unravel whole sections of cloth that you want to keep intact.
China is still far more repressive as a system (and Xinjiang is in its own category). The point isn’t equivalence; it’s convergence. Democracies don’t have to become ‘China’ to become unrecognizable fast. What matters is whether coercive tools are being politicized, whether oversight still bites, and whether abuses have consequences.
Personally I don't feel it's constructive to discuss who's worse, because there are many axes they could be compared on. But when it comes to internal human rights violations, China has infrastructure in place for industrial control of dissent. The US is not there, but is currently on a crash course towards authoritarianism.
This isn’t really about Greenland’s strategic value; it’s about the category error. You can trade goods, sign treaties, and negotiate basing rights. You can’t “buy” a people or their sovereignty, especially when they don’t consent. That’s why Europe responds with process and principle: normalize coercion-as-bargaining among allies, and you’re reviving a pre-1945 model of politics that Europe built institutions to prevent.
It’s also lose-lose for the US. There isn’t a positive outcome. If it’s dropped, the damage is “just” reputational and partly repairable. If it’s pursued: tariffs, threats, coercion. It burns trust inside NATO, accelerates European strategic decoupling, and hands a propaganda gift to every US adversary. A forced takeover would be a catastrophic own-goal: legitimacy crisis, sanctions/retaliation, and a long-term security headache the US doesn’t need.
And the deeper issue is credibility. The dollar’s reserve status and US financial leverage rest on the assumption that the US is broadly predictable and rule-bound. When you start treating allies like extractive targets, you’re not “winning”; you’re encouraging everyone to build workarounds. Part of the postwar setup was that Europe outsourced a lot of hard security while the US underwrote the system; if the US turns that security guarantee into leverage against allies, you should expect Europe to reprice the relationship and invest accordingly.
The least-bad outcome is a face-saving off-ramp and dropping the whole line of inquiry. Nothing good comes from keeping it on the table.
Yes. Ian Bremmer keeps pointing out that if the "law of the jungle" becomes the norm for relations between countries, the USA will not benefit as much as autocracies like China and Russia.
Autocracy isn't a switch you can flick. To establish one, you first have to win a protracted civil war, likely between loyalist paramilitary groups like ICE, the standing US Army and regional defense paramilitaries that would spring up. The likely result of this is a stalemate that leads to secession into separate countries.
Why? Russia didn't have a protracted civil war between 2000-ish and now?
Isn't Trump busy replacing US Army leadership with those loyal to him? Why would Army and ICE be on opposite sides?
Seems MAGA just has to continue the present course and apply just enough pressure to the election system to keep "winning" half-credibly, and autocracy is there in not too many years.
I mean, they are already past pardoning those who attacked Congress for not accepting the election result.
It is just a gradual process which is well underway, at what point would California and Washington suddenly prop up a militia?
Warren Buffett once said: "You can't make a good deal with a bad person"
Which is exactly the case as long as Trump is POTUS. There's no good deal to be made for Denmark, Greenland, or Europe in general. Trump is a bad person, and cannot be trusted.
Any deal that is made will either be altered or voided. And he'll continue to move the goalposts.
There are two outcomes with Trump:
1) He tries to bully someone into submission, and keeps coming back for more if successful.
2) He is slapped so hard that he gives up entirely.
Unfortunately (2) is a bit shaky these days, as he views the US military as his personal muscle.
Yes, people expect SCOTUS to rebuff Trump on the tariffs. [0]
Lately SCOTUS has been providing stricter textual interpretations of Constitutional questions. Many of these have aligned with Trump administration arguments based on the power of the executive as outlined in Article II. The text says, "The executive Power shall be vested in a President of the United States of America," and, "he shall take Care that the Laws be faithfully executed." One of the key arguments is that Congress can't take that power away from him. For example, Congress can't tell him that he can't fire executive-branch staff, because the executive power rests with him, not with Congress.
One thing the Constitution is very clear on, though, is that only Congress can impose tariffs ("The Congress shall have Power To lay and collect Taxes, Duties, Imposts and Excises"). Furthermore, recent rulings of this Court have established the major questions doctrine, which says that even if Congress delegates the specifics of implementing its powers to the Executive branch, that delegation cannot be interpreted broadly. It can't be used to create new broad policies that Congress didn't authorize.
Therefore, because the text of the Constitution explicitly grants the power to impose tariffs to Congress /and/ Trump's imposition of tariffs is both very broad and very substantial, many people believe that SCOTUS will strike down Trump's tariffs.
The case as argued is about Trump's right to issue tariffs under the IEEPA (a law Congress passed to give the President some ability to take economic actions during international emergencies, which do not explicitly include tariffs), and there is some debate about what a negative ruling would mean for the return of tariffs to merchants who have paid them. Both of those points require careful consideration in the decision. Will the ruling limit itself to tariffs issued under the IEEPA, or extend to the President's ability to establish tariffs under other laws? If the Court rules against the tariffs, will the government be required to pay people back, and if so, to what extent? It's not surprising that the decision is taking some time to be released. There are a lot of considerations, and every one is a possible point of disagreement among the justices.
> One of the key arguments is that Congress can't take that power away from him. For example, Congress can't tell him that he can't fire executive-branch staff, because the executive power rests with him, not with Congress.
Just want to comment what an incredibly piss poor argument that is, because if you take it to its conclusion, it means all of the power rests with the Executive and none with the Legislature. That is, by definition, the Executive branch has all the people that actually "do stuff". If the executive has full, 100% control over the structure and rules of the branch, why bother even having a legislature in the first place if all the laws can be conveniently ignored or "reinterpreted".
You could argue Congress still has the power of impeachment if they believe laws aren't being faithfully executed, but I'd argue that is far too blunt an instrument to be the only way laws can constrain what a President does within the executive branch.
Picking better next time won't be enough unless a lot of work is done to put in place safeguards to make it impossible for a future government to act the same way.
I think people should realize that, in a democracy, it is virtually impossible to put these safeguards in place if people at large don't want them.
The reason Trump is able to get away with so much right now is because Congress is letting him. They could easily constrain his tariff powers, or his warmongering powers (they actually were close to doing that WRT Venezuela before some Republican Senators caved like a bunch of wet blankets), but they don't, because this is what people voted for. Trump is so much more powerful in his second term because at this point everyone knew he was a convicted felon, they knew he fomented the attack on the Capitol, and still a majority of voters voted for him.
Safeguards only work if someone is willing to enforce them.
It may not be possible to do perfectly, but there are many things that can be done to make it harder.
E.g.:
- No direct election of a president with such broad powers.
- Separation of the head of state and head of government, with their powers split.
- Proportional representation, to reduce the chance of the largest party obtaining so much power alone.
- No presidential appointment of Supreme Court justices.
- No presidential pardons; this removes the chance of escaping legal consequences after leaving office, and removes one of the strongest means of protecting loyalists.
The US isn't uniquely vulnerable, but it is a whole lot more vulnerable than governments in countries where the head of government is easier to replace and has fewer powers vested in their own personal mandate.
A direct election of a single powerful leader is also fundamentally creating a less democratic system - it reduces the influence of a huge minority of the electorate far below what their numbers justify.
Indeed, but it might be many decades - once this lesson is first learned, it will take a long time to unlearn because it tends to become self-reinforcing.
To give an illustration of how long institutional memory over things like this can be:
When I went to primary school in Norway in the 1980s, we were still taught at length about the British blockade of Norway during the Napoleonic wars, a result of Denmark-Norway's entry into the war on Napoleon's side, and its impact on Norway (an enduring memory for many Norwegian school-children is having to learn the Norwegian epic poem "Terje Vigen", about a man evading the blockade).
Norwegian agricultural policy to this day enjoys costly cross-party support for subsidies intended to provide at least a minimum of food independence, a consequence of learning the hard way, first during the Napoleonic wars and again (though less seriously) during WW2, how important it can be.
A large part of the Norwegian negotiations for EEA entry, and Norway's rejection of EU membership, was centered around agricultural policy in part because of this history.
The importance of regional development and of keeping agriculture alive even in regions that are really not suited to it is "baked in" to Norwegian politics, in part because the subsidies mean that on top of those who care about food independence, a lot of people benefit financially from the continuation of those policies, or have lived lives shaped by them (e.g. local communities that would likely not exist if the farms had not been financially viable thanks to subsidies). Structures have been created around the policies that have a life of their own.
Conversely, a lot of support for the US in Europe rests on institutional memory of the Marshall Plan, with most of the generations with first hand experience of the impact now dead.
Create a replacement memory of the US becoming a hostile force, and that can easily embed itself for the same 3+ generations after the situation itself has been resolved.
Interesting; as a British person myself, we don't get taught any of that about Norway or Denmark, not even knowing that they were once joint together in a union.
I'm not surprised. From a British POV it was a relatively minor part of a much larger conflict that Britain was done with when Napoleon defeated, and Denmark-Norway was for most practical purposes treated as "just" Denmark, since Denmark was the more powerful part of the union by far.
From the Danish and Norwegian side, Britain annihilated or captured most of the Danish-Norwegian fleet because Britain expected Denmark-Norway to enter the war on Napoleon's side (as a consequence, Denmark-Norway of course entered, but severely weakened), and Norway was blockaded and faced famine from 1808-1814.
After the war ended, the Norwegian mainland was handed over to Sweden (Iceland and Greenland were also Norwegian at that point, but stayed with Denmark), but Norway took advantage of the process and passed a constitution and briefly went to war against Sweden to force a better settlement, resulting in a relatively loose union. So this whole affair had a very significant effect on the formation of the Norwegian state.
Trump's passing and his admin getting tossed won't erase the memory that a good third of America was always happy with him and wanted what he actually did. America is now branded with MAGA in a way that will take generations to fade.
At this point, I'd say terms rather than generations.
I mean, I'm old enough to remember people saying "Never Forget" about 9/11, but it's barely in any discourse at this point, and that was a single generation ago, with two major wars, a bunch of PoW scandals, war-crime scandals that led to Manning, and domestic surveillance that led to Snowden. And yet, despite all that, I've only heard 9/11 mentioned exactly once since visiting NYC in 2017, and that was Steve Bannon and Giuliani refusing to believe that Mamdani was legitimate.
So, yeah, if Trump fades away this could be forgotten in 8 years or so; if this escalates to a war (I'm not confident, but if I had to guess I'd say 10% or so?), then I see it rising to the level of generations.
It's different. 9/11 was an outside foe, which was dismantled by US forces, and its leader was killed. America "won" against the perpetrators of 9/11 in the conventional sense.
You cannot defeat MAGA the same way: the "enemies" are among us, and they aren't going anywhere.
From my point of view as a European asking myself if or when I will be able to trust the USA in the future, the Taliban is to Afghanistan as MAGA is to the USA.
You're the outsider, to me. The pre-9/11 Taliban were seen as "kinda weird but we can do deals, oh dear aren't they awful, never mind", the post-9/11 were not even worthy of talking to. The USA is currently in a similar "pre" state, an invasion would make it a "post" state.
There's how the people in general remember, and then there's how the politicians and the institutions remember. If nothing else, the changes in institutions will have effects reverberating for decades, with the most obvious institution being the military in each country that expected to fight a war under a NATO umbrella with an American general in charge.
If I'm a German or French or Swedish officer, especially if I'm suddenly in Greenland, I'm going to be thinking hard about the changes to come in the next few years so that they're not all dependent upon a friendly America. If nothing else, they're all getting ready now to operate without any Americans in the loop, since it might be Americans they're fighting. That means the entire NATO command structure, which presumes American dominance of it, is now an obstacle to avoid rather than a resource to share. Every PM is asking the head of their air force if they can fly their F-35s without the Americans knowing about it and possibly shutting them down remotely.
There's a story going around today in French newspapers about how French and Ukrainian intelligence fed US intelligence some false strategic info to see if it ended up in Russian hands, which it did within days. Now Ukraine is consciously breaking its relationship with US intelligence because it can't be trusted, while getting closer to French and German intelligence. I suspect that the UK is also carefully looking at what's shared via the Five Eyes and decided what it can/needs to withhold.
I'm saying "never forget" fades. That's a human condition we all share.
I mean, I live in Germany these days, and this country absolutely got the multi-generational thing, and I'm from the UK whose empire ditto, but… the UK doesn't spend much time thinking about the Falklands War and even less about the Cod Wars.
Nobody disagreed that it eventually fades; they were all saying it is going to take decades. The consequences of 9/11 were mostly the TSA, following the USA into wars, and the erosion of privacy at the mention of terrorism. The first and the last are still ongoing, and I think the current US admin is still using the latter as a narrative; the second one may be coming to an end currently, because the USA is trying to use it against its (former) allies.
What you describe is called "to historicize an event". WW1 has been historicized by WW2 (some argue it's the same war). But not even WW2 has been historicized yet (at least in Europe), and it ended 80 years ago, so I doubt an Atlantic conflict is going to be forgotten in the next few decades.
Edit: I originally linked to https://en.wikipedia.org/wiki/Historicization, but this does not describe what I mean. It is weird, because the supposed German equivalent does: the German article is about a concept from the science of history, while the English article is about a literary concept.
Aye, and thanks for the link, will read the German version as per your edit.
> so I doubt an atlantic conflict is going to be forgotten in the next few decades.
If it gets to one, yes. Was writing late at night, so sloppily, sorry about that.
Right now, I think we're not that far gone yet. Absolutely agree it becomes as you say if it becomes hot war. Not sure about which step between will be the drop that overflows the bucket.
If we don't reduce conflict to mean military conflict, then I think there is definitely some diplomatic issue ongoing.
> Not sure about which step between will be the drop that overflows the bucket.
True, this is kind of the open question, because the EU both needs to be the adult in the room and de-escalate, but also can't compromise on territorial integrity, otherwise it has already lost. This will of course have an impact on the "time to forget".
But I don't think that if there were an uprising in the US today, Trump and the whole admin were gone next week, and they improved their constitution, the whole issue would just be forgotten. The whole pro-, neutral-, or even contra-USA debate has been ongoing for decades now. For example, the trade deals aren't exactly concordant with EU law (https://en.wikipedia.org/wiki/Max_Schrems#Schrems_I), and the USA has been boycotting multilateral institutions that the EU wants to have authority. I mean, it is new that they openly sabotage the ICJ, but the capability to do that is not.
Yeah, one thing the EU could do that wouldn't hurt them/us (much) would be to stop bringing up fake replacements for the data sharing agreements that get shot down.
The damage would mostly hit the top performers of the US stock market (amongst others) while not damaging the EU as much.
It'll probably be tariffs first though, followed by the ACI if things get really bad.
You have to be incredibly naive to give that much credibility to the US system. A lot more than just a switch of parties would be needed.
Personally I highly doubt a possible Democratic administration would return a conquered Greenland. And even if it did, it would have to ensure that kind of derailment doesn't happen again. The opposition so far seems about as ineffectual at dealing with the far right as centrist parties across Europe.
Sort of. Those of us outside the US are aware his support hasn’t cratered. There’s going to be the concern the US will just swap him out for someone similar.
If history is anything to go by, the US will elect the current opposition, who won't be nearly strong enough to enact the reforms that would prevent an extremist party from returning to power in 4 years' time.
For Americans, many foreigners use the word “government” where we would say “administration”. So, a “new government” or “the government falls”, would be a “new administration” or “the administration’s party loses the next election”.
Exactly. The fixes that would go some way toward restoring my trust are changes to the mechanisms surrounding the democratic process. Things like: no more gerrymandering; stop letting corporations influence voting by flooding the system with money; somehow fix social media so every ad is seen by everyone, rather than allowing personalised lies to be shown to specific voters; fix your electronic voting systems so a maintenance man with a screwdriver can't make new votes pop into existence (as happened once); stop disenfranchising voters, even to the extent of implementing compulsory voting. The distortions the USA now allows in the democratic process are beyond belief.
Oh, and a system that allows a politician to incite a mob to attack the sitting parliament, get away without punishment, and then pardon the perps is a joke.
And the opposition party has proven itself to be unable to take actions necessary to prevent this sort of thing. The democrats could have used the Biden administration as an opportunity to try Trump for his crimes and establish new boundaries on the power of the president. Instead they just hoped he would vanish into the night and left space for his return.
If the dems win in 2026 and 2028, what is there to stop a return to fascism and further collapse in 2032?
> and hands a propaganda gift to every US adversary.
This demonstrates, again, that Trump is the prime domestic enemy of the US. Where are the agencies that are sworn to protect the US against enemies foreign and domestic?
But it's not the US that is in charge of the US, unfortunately. It's Project 2025 that is in charge, and it has vastly different win and lose criteria. For Project 2025, dissolving NATO, the UN, the WTO and whatever else is a win. For Project 2025, weakening the dollar is a win. For Project 2025, isolation in the Americas is a win. And the US is no longer in charge: Congress has voluntarily surrendered its power, and others are following the lead. Project 2025 may or may not become the future US; we'll see how it goes this year.
Greenland already has the right to independence from Denmark, via chapter 8 of the law for the self-governing of Greenland, that was enacted in 2009 [1]:
> The decision on Greenland's independence is made by the Greenlandic people.
Technically, the Danish government has to OK the decision, but this is largely viewed as a formality by Danish politicians, should Greenland choose to move forward with independence.
If it truly is a pure formality, why is the condition written into law? The legislative branch (the branch that writes and changes laws) can simply remove the condition of Danish acceptance, instead of proudly proclaiming that the condition of Danish acceptance is a pure formality.
“Optics” is the wrong frame: this is about legitimacy and consent. A referendum demanded by outsiders under pressure is just coercion with a procedural costume. Imagine Cuba proposing a referendum on Puerto Rico joining Cuba and calling it “bad optics” if people won’t play along; the absurdity is the premise, not the lack of voting.
Maybe that's the answer - the USA needs to hold a referendum on becoming a British colony again. It's 250 years since they declared independence, maybe they've changed their mind on having a king? (/s)
I fail to see what is damning here. What would you even hold a referendum on? Independence? Replacing the arrangement with Denmark with whatever unclear arrangement the US is proposing?
If you trust independent polls, you can get a pretty clear picture of where Greenlanders stood as of Feb. 2025:
Danish citizenship or independence is overwhelmingly favored over US citizenship in these polls. And independence only really if it does not hurt the standard of living too badly. And there, it's hard to imagine the US being able to match Denmark's social security system...
I believe you write in good faith, that you sincerely and non-aggressively hold your opinion, and that you believe you aren't missing a well-known piece of information.
But first let me quote a short piece of text, and later in the comment I will reveal where it comes from.
"After World War II, colonial power was increasingly frowned upon on the international stage. To ease pressure from the United Nations, Denmark decided to reclassify Greenland, not as a colony, but as a region. A new Status that required Denmark to guarantee EQUAL LIVING STANDARDS for both Greenlanders and Danes."
Hold on to this "EQUAL LIVING STANDARDS".
So now on towards the poll.
Back when I was studying physics, one of the courses was statistics. Statistics in physics or mathematics courses is very different from "statistics" in applied / social / political sciences, where students are merely required to execute a procedure: linear regression to fit a line, the steps to calculate a mean and variance, and so on. Those are fixed formulas, without rearranging terms or applying mathematical deduction to statistical statements. One can't fully grok statistics in this light form; it needs more rigorous foundations. Only then can students learn to derive their own original conclusions in a correct manner and see through the honest mistakes or manipulations of statistical results by others. The professor recommended a booklet called "How to Lie with Statistics". Of course the goal of the book is NOT to breed dishonest statisticians, but to show the myriad ways statistical results are depicted and phrased to intentionally convey an incorrect impression or conclusion, so that we can detect and see through it.
One of the classic tricks is splitting the top class. Consider mortality rates for different afflictions, and let's pretend we buy into the mono-causal paradigm (tree-like, not DAG-like). If some entity is embarrassed about the top entry, it can just split that entry into similarly balanced subcases: instead of one category "cancer", splitting it into all the different kinds of cancer might result in, say, cardiovascular disease becoming the top category, simply by splitting up the top class. (My example is arbitrary; I care naught about top mortality, personally.)
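A toy sketch of that splitting trick, with entirely invented numbers, purely to illustrate the mechanism:

```python
# Same underlying deaths, tallied two ways (numbers are made up).

# Aggregated: "cancer" is the top category.
aggregated = {"cancer": 90, "cardiovascular": 70, "accidents": 40}
print(max(aggregated, key=aggregated.get))  # cancer

# Split "cancer" into subtypes; the total is unchanged (30 + 35 + 25 == 90),
# yet cardiovascular disease now tops the table.
split = {
    "lung cancer": 30,
    "breast cancer": 35,
    "other cancers": 25,
    "cardiovascular": 70,
    "accidents": 40,
}
print(max(split, key=split.get))  # cardiovascular
```

No datum changed; only the granularity of the categories did, and with it the headline "leading cause".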
A false dilemma (false trilemma, etc.) is when all the options combined don't form the universe of possibilities, like "would you prefer pestilence or cholera?"
Please take a careful look at the actual poll options [0]:
1. I want independence unconditionally, regardless of the impact on the standard of living
2. I want independence, even if it would have a major negative impact on the standard of living
3. I want independence, even if it would have a small negative impact on my standard of living
4. I only want independence if it doesn't have a negative impact on my standard of living
5. I don't want independence
6. Don't know
It's almost like some Dane made up the vote-able categories and decided to troll the Greenlanders with a reference to the broken promise: LIVING STANDARDS?!? Some good old forced contraception foisted off as the required EQUAL LIVING STANDARDS between Danes and Greenlanders?!!
So we can classify already: Don't Know (option 6: 9%) vs Know (presumably options 1 through 5: 91% claim to know what they want). So far so good, since we have a mutually exclusive and exhaustive split.
Now consider the universe of possibilities for those who Know:
Those who know they want independence (from Denmark; options 1 through 4: 84% of all respondents) and those who know they don't want independence (from Denmark; option 5: 9% of all respondents)
So far so good.
Those who want independence (from Denmark) unconditionally (option 1: 18% of all respondents) and those who want independence (from Denmark) conditionally (options 2 through 4: 66%)
Here it gets vague, because the boundaries one is asked to be classified into (divide-and-conquer style) are subjective: on condition there is no "major", "small", or plain "negative" impact on the standard of living.
Is "negative impact" more or less negative than "small negative impact"? I want to see HN commenters discuss if "negative impact" is better or worse than "small negative impact".
This is just non-quantitative gerrymandering.
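To make that concrete: the same shares quoted above can be headlined in opposite directions depending on where you draw the cut. The percentages below are the poll figures cited in this comment; the two "headlines" are invented for illustration:

```python
# Poll shares quoted above, as percent of all respondents.
options = {
    "unconditional": 18,   # option 1
    "conditional": 66,     # options 2-4 combined
    "no independence": 9,  # option 5
    "don't know": 9,       # option 6
}

# Headline A: lump options 1-4 together.
pro_independence = options["unconditional"] + options["conditional"]
print(f"{pro_independence}% want independence")  # 84% want independence

# Headline B: count only the unconditional share.
print(f"only {options['unconditional']}% want it regardless of cost")
```

Same respondents, same answers; only the grouping of the subjective "impact" categories changes the story.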
But let's ignore the gerrymandering: the phrasing is not neutral, as if it is a given that there will be a negative impact on the standard of living!
Imagine the poll stated not the above but:
1. "I want independence unconditionally, regardless of whether the Danes perform a new round of population control as a goodbye present for old times' sake"
2. "I want independence, even if the Danes perform a new major round of population control as a goodbye present for old times' sake"
3. "I want independence, even if the Danes perform a new small round of population control as a goodbye present for old times' sake"
4. "I only want independence if the Danes don't perform a new round of population control as a goodbye present for old times' sake"
It would be the exact same logical fallacy, but probably with different results, thousands of women (and men) would keelhaul their nearest Danish officials under the nearest ice shelf.
It's just insulting for an (unverifiable) poll to pull these tricks, especially if the poll was co-organized by a Danish newspaper.
> The poll, which was carried out by Verian on behalf of Danish newspaper Berlingske ...
Something else that is insulting: I saw pictures of immense crowds protesting Trump's comments, and read the number of protesters involved: practically the population count of the whole of Greenland... until I saw the fine print: the numbers were for a protest in Denmark, not Greenland!
Let people speak for themselves, and don't gerrymander polls; it's just doubly insulting, and shows that the colonial mentality is still present, sigh!
That power goes both ways. What happens if the Greenland population demands the full list of doctors involved (what type of doctors: military or civilian?), their extradition for legal proceedings on Greenlandic soil, the confiscation of their pension funds, the whole shebang, or else? Who knows, they might become a state joining a union of states, perhaps the EU, perhaps the US. The US has a similar history, from a similar time frame, but the Danish government took remarkably longer to even acknowledge what happened.
Check this documentary (about 30 minutes): horrendous crimes. And then "apologizing". Apologizing comes when all forms of help have been exhausted. Instead of apologizing, reveal the lists of doctors so the Greenlanders can question them: who they got commands from, and where those people got their instructions; extraditions; confiscation of their pension funds. (Think about it: having been raped by the doctor, or sedated for non-medical reasons, which is another crime.) The normal order is: acknowledge, then help, help, help, and only when all forms of help have been exhausted, apologize.
And Europe is angry at how Trump plays the realpolitik game, but by not insisting on a Greenland-run referendum, and instead backing Denmark, they are playing the realpolitik game just as well; you know, "maintaining good relations".
Recommended viewing (30 minutes); it's where the quote comes from:
It doesn't seem to discuss Trump's "offer". Voting independence from Denmark is different from being given the option to join the United States.
As Chomsky would say, "whenever there are multiple pictures, the darkest one tends to be closer to the truth". What if natural resources were more expensive (for both the US and the EU) to buy from an independent Greenland than from a Greenland that is still a half-colony of Denmark? Then the EU and the US would have a common interest in manipulating the referendum you referenced (on independence) in the same direction. Both might have cheaper access to natural resources if the population votes no to independence. Good cop/bad cop stuff, to scare the population into staying subjugated (and enjoying imaginary protection from the EU against an imaginary threat from the US).
His comment was not specific to Western nations; it would apply equally well to Asian, African, South American, Russian, northern, southern, ... nations. But you are right, he wouldn't treat Western nations as an exception, and that always makes the relevant population feel addressed. That subjectively feels different, like being picked on with precision, but it's just what happens when a population feels addressed.
The referendum is on independence, which they might want if they weren't under the threat of annexation. When given a choice between the US and Denmark, they chose Denmark, and they might choose to go all in and rejoin the EU.
To the people here just a week ago saying it was just insane joking and even MAGA didn't support it (something I pointed out didn't matter): we have moved from "it's meme'ing" to "here's why it's good", "here's why it's needed", "it's 4D chess" in less than a week. Please NEVER give an inch to this trash that will justify anything. Don't accept "meme'ing" from an American president by saying "it's Trump being Trump". Push back on everything, every time.
As the Nazis rose, everyone was waiting for the "one big thing" that was too big a line to cross. We have waited until the point where the US is using its power to take land, and everyone is still waiting for that "big thing" or some line (even though we've passed countless lines already). MAGA freaked out over Epstein for, what, a decade? And suddenly, when it was almost released, they stopped caring. If MAGA dropped that almost instantly, MAGA is NEVER going to care about anything.
Staff-level frontend/system engineer based in Sweden. 20+ years experience shipping performance-critical, real-time, and composable interfaces in production. Previous work includes:
• Custom poker clients and in-house game engines (Gamesys, OpenBet)
• Embedded media platforms (Xperi/TiVo smart TVs)
• Large-scale e-commerce widgets (Stylitics, tens of millions of daily hits)
• LLM-assisted pipelines and tooling (CMS ingestion, visual diffing, media automation)
Strengths include UI architecture, latency-aware design, composable systems, fast iteration, and taking projects from prototype to production with minimal overhead.
Stack/tools:
React, TypeScript, Web Components, Canvas/WebGL, ClojureScript
Tauri, SQLite, Payload CMS, ffmpeg, Cloudflare Workers
GitHub Actions, Playwright, Codex-based AI workflows
• Kolibri Club : cultural site with CMS, ingestion pipeline, and live video workflows
• grabfilm : Filmarkivet + Archive.org media archiver with idempotent metadata handling
Looking for small, focused teams working on tools, infrastructure, creative software, or low-latency systems. Open to consulting, contracting, or embedded long-term roles. Prefer high agency, low management overhead, and no hype-driven development.
I’ve ended up with a workflow that lines up pretty closely with the guidance/oversight framing in the article, but with one extra separation that’s been critical for me.
I’m working on a fairly messy ingestion pipeline (Instagram exports → thumbnails → grouped “posts” → frontend rendering). The data is inconsistent, partially undocumented, and correctness is only visible once you actually look at the rendered output. That makes it a bad fit for naïve one-shotting.
What’s worked is splitting responsibility very explicitly:
• Human (me): judge correctness against reality. I look at the data, the UI, and say things like “these six media files must collapse into one post”, “stories should not appear in this mode”, “timestamps are wrong”. This part is non-negotiably human.
• LLM as planner/architect: translate those judgments into invariants and constraints (“group by export container, never flatten before grouping”, “IG mode must only consider media/posts/*”, “fallback must never yield empty output”). This model is reasoning about structure, not typing code.
• LLM as implementor (Codex-style): receives a very boring, very explicit prompt derived from the plan. Exact files, exact functions, no interpretation, no design freedom. Its job is mechanical execution.
Crucially, I don’t ask the same model to both decide what should change and how to change it. When I do, rework explodes, especially in pipelines where the ground truth lives outside the code (real data + rendered output).
This also mirrors something the article hints at but doesn’t fully spell out: the codebase isn’t just context, it’s a contract. Once the planner layer encodes the rules, the implementor can one-shot surprisingly large changes because it’s no longer guessing intent.
The challenges are mostly around discipline:
• You have to resist letting the implementor improvise.
• You have to keep plans small and concrete.
• You still need guardrails (build-time checks, sanity logs) because mistakes are silent otherwise.
But when it works, it scales much better than long conversational prompts. It feels less like “pair programming with an AI” and more like supervising a very fast, very literal junior engineer who never gets tired, which, in practice, is exactly what these tools are good at.
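A guardrail in that spirit can be as small as a grouping step that fails loudly instead of rendering nothing. The sketch below is purely illustrative; the names and the grouping rule are made up, not my actual pipeline:

```python
# Hypothetical guardrail in the spirit of "fallback must never yield
# empty output": fail loudly at build time instead of silently
# rendering nothing.
def group_posts(media_items):
    # Toy rule: group media by the export container they came from,
    # never flattening before grouping.
    groups = {}
    for item in media_items:
        groups.setdefault(item["container"], []).append(item)
    return list(groups.values())

def group_with_guard(media_items):
    posts = group_posts(media_items)
    if media_items and not posts:
        raise RuntimeError("sanity check failed: grouping yielded empty output")
    return posts

posts = group_with_guard([
    {"container": "media/posts/2021", "file": "a.jpg"},
    {"container": "media/posts/2021", "file": "b.jpg"},
    {"container": "media/posts/2022", "file": "c.jpg"},
])
assert len(posts) == 2  # two containers -> two grouped posts
```

The point isn't the grouping logic; it's that the check runs on every build, so a silent regression becomes a hard failure.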
I’ve been using GPT-4o and now 5.2 pretty much daily, mostly for creative and technical work. What helped me get more out of it was to stop thinking of it as a chatbot or knowledge engine, and instead try to model how it actually works on a structural level.
The closest parallel I’ve found is Peter Gärdenfors’ work on conceptual spaces, where meaning isn’t symbolic but geometric. Fedorenko’s research on predictive sequencing in the brain fits too. In both cases, the idea is that language follows a trajectory through a shaped mental space, and that’s basically what GPT is doing. It doesn’t know anything, but it generates plausible paths through a statistical terrain built from our own language use.
So when it “hallucinates”, that’s not a bug so much as a result of the system not being grounded. It’s doing what it was designed to do: complete the next step in a pattern. Sometimes that’s wildly useful. Sometimes it’s nonsense. The trick is knowing which is which.
What’s weird is that once you internalise this, you can work with it as a kind of improvisational system. If you stay in the loop, challenge it, steer it, it feels more like a collaborator than a tool.
That’s how I use it anyway. Not as a source of truth, but as a way of moving through ideas faster.
Once you drop the idea that it's a knowledge oracle and start treating it as a system that navigates a probability landscape, a lot of the confusion just evaporates
I think of it like improvising with a very skilled but slightly alien musician.
If you just hand it a chord chart, it’ll follow the structure. But if you understand the kinds of patterns it tends to favour, the statistical shapes it moves through, you can start composing with it, not just prompting it.
That’s where Gärdenfors helped me reframe things. The model isn’t retrieving facts. It’s traversing a conceptual space. Once you stop expecting grounded truth and start tracking coherence, internal consistency, narrative stability, you get a much better sense of where it’s likely to go off course.
It reminds me of salespeople who speak fluently without being aligned with the underlying subject. Everything sounds plausible, but something’s off. LLMs do that too. You can learn to spot the mismatch, but it takes practice, a bit like learning to jam. You stop reading notes and start listening for shape.
I’m with the people pushing back on the “confidence scores” framing, but I think the deeper issue is that we’re still stuck in the wrong mental model.
It’s tempting to think of a language model as a shallow search engine that happens to output text, but that metaphor doesn’t actually match what’s happening under the hood. A model doesn’t “know” facts or measure uncertainty in a Bayesian sense. All it really does is traverse a high‑dimensional statistical manifold of language usage, trying to produce the most plausible continuation.
That’s why a confidence number that looks sensible can still be as made up as the underlying output, because both are just sequences of tokens tied to trained patterns, not anchored truth values. If you want truth, you want something that couples probability distributions to real world evidence sources and flags when it doesn’t have enough grounding to answer, ideally with explicit uncertainty, not hand‑waviness.
People talk about hallucination like it’s a bug that can be patched at the surface level. I think it’s actually a feature of the architecture we’re using: generating plausible continuations by design. You have to change the shape of the model or augment it with tooling that directly references verified knowledge sources before you get reliability that matters.
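A toy sketch of what "plausible continuation by design" means. A bigram counter stands in for a real model here; it is purely illustrative, but the failure mode is the same shape: it always emits the statistically most likely next token, with no notion of truth anywhere in the loop.

```python
from collections import Counter, defaultdict

# Count bigrams from a tiny corpus, then greedily emit the most
# plausible continuation. Nothing here checks whether the output
# is *true*; it only checks whether it is *likely*.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

bigrams = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a][b] += 1

def continue_from(token, steps):
    out = [token]
    for _ in range(steps):
        nxt, _count = bigrams[out[-1]].most_common(1)[0]
        out.append(nxt)
    return " ".join(out)

print(continue_from("the", 3))  # a plausible path, not a verified fact
```

Scale the corpus up to the internet and the continuations get impressively fluent, but the objective never changes.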
Solid agree. Hallucination for me IS the LLM use case. What I am looking for are ideas that may or may not be true that I have not considered and then I go try to find out which I can use and why.
This technology (which I had a small part in inventing) was not based on intelligently navigating the information space; it's fundamentally based on forecasting your own thoughts by weighting your pre-linguistic vectors and feeding them back to you. Attention layers, in conjunction with RLHF later, allowed that to be grouped at a higher order and to scan a wider beam space to reward higher-complexity answers.
When trained on chatting (a reflection system on your own thoughts), it mostly just uses a false mental model to pretend to be a separate intelligence.
Thus the term stochastic parrot (which for many of us is actually pretty useful).
Thanks for your input - great to hear from someone involved that this is the direction of travel.
I remain highly skeptical of the idea that it will replace anyone. The biggest danger I see is people falling for the illusion that the thing is intrinsically smart when it's not. It can be highly useful in the hands of disciplined people who know a particular area well, augmenting their productivity, no doubt, because the way we humans come up with ideas is highly complex. Personally, my ideas come out of nowhere and mostly derive from intuition that can only be expressed in logical statements ex post.
Is intuition really that different from an LLM having little knowledge about something? It's just responding with the most likely sequence of tokens using the information most adjacent to the topic... just like your intuition.
With all due respect I’m not even going to give a proper response to this… intuition that yields great ideas is based on deep understanding. LLM’s exhibit no such thing.
These comparisons are becoming really annoying to read.
>A model doesn’t “know” facts or measure uncertainty in a Bayesian sense. All it really does is traverse a high‑dimensional statistical manifold of language usage, trying to produce the most plausible continuation.
And is that so different from what we do behind the scenes? Is there a difference between an actual fact and some false information stored in our brain? Or do both have the same representation in some kind of high-dimensional statistical manifold in our brains, while we also "try to produce the most plausible continuation" using them?
There might be one major difference, at a different level: what we're fed (read, see, hear, etc.) we also evaluate before storing. Does LLM training do that, beyond some kind of crude, manually assigned "confidence tiers" applied to input material during training (e.g. trusting Wikipedia more than Reddit threads)?
I would say it's very different to what we do. Go to a friend and ask them a very niche question. Rather than lie to you, they'll tell you "I don't know the answer to that". Even if a human absorbed every single bit of information a language model has, their brain probably could not store and process it all. Unless they were a liar, they'd tell you they don't know the answer either! So I personally reject the framing that it's just like how a human behaves, because most of the people I know don't lie when they lack information.
>Go to a friend and ask them a very niche question. Rather than lie to you, they'll tell you "I don't know the answer to that"
Don't know about that, bullshitting is a thing. Especially online, where everybody pretends to be an expert on everything, and many even believe it.
But even if so, is that because of some fundamental difference between how a human and an LLM store/encode/retrieve information, or more because it has been instilled into a human through negative reinforcement (other people calling them out, shame of correction, even punishment, etc) not to make things up?
Hallucinations are a feature of reality that LLMs have inherited.
It’s amazing that experts like yourself, with a good grasp of the manifold/MoE configuration, don’t get that.
LLMs much like humans weight high dimensionality across the entire model and its manifold, then string together the best-weighted attentive answer.
Just like your doctor occasionally giving you wrong advice too quickly, this sometimes gets confused, either by lighting up too much of the manifold or by having insufficient expertise.
I asked Gemini the other day to research and summarise the pinout configuration for CANbus outputs on a list of hardware products, and to provide references for each. It came back with a table summarising the pinouts for each of the eight products, and a URL reference for each.
Of the eight, three were wrong, and the references contained no information about pinouts whatsoever.
That kind of hallucination is, to me, entirely different from anything a human researcher would do. They would say "for these three I couldn't find pinouts", or perhaps misread a document and mix up the pinouts of one model with another's. They wouldn't make up pinouts and reference a document that contained no such information.
Of course humans also imagine things, misremember etc, but what the LLMs are doing is something entirely different, is it not?
Humans are also not rewarded for making pronouncements all the time. Experts have a reputation to maintain and are likely more reluctant to give opinions they are not reasonably sure of. LLMs trained on the typical written narratives found in books, articles, etc. can be forgiven for thinking that they should have an opinion on anything and everything. Point being: while you may be able to tune one to behave some other way, you may find the new behavior less helpful.
> Hallucinations are a feature of reality that LLMs have inherited.
Huh? Are you arguing that we still live in a pre-scientific era where there’s no way to measure truth?
As a simple example, I asked Google about houseplant biology recently. The answer was very confidently wrong telling me that spider plants have a particular metabolic pathway because it confused them with jade plants and the two are often mentioned together. Humans wouldn’t make this mistake because they’d either know the answer or say that they don’t. LLMs do that constantly because they lack understanding and metacognitive abilities.
>Huh? Are you arguing that we still live in a pre-scientific era where there’s no way to measure truth?
No. A strange way to interpet their statement! Almost as if you ...hallucinated their intend!
They are arguing that humans also hallucinate: "LLMs much like humans" (...) "Just like your doctor occasionally giving you wrong advice too quickly".
As an aside, there was never a "pre-scientific era where there [was] no way to measure truth". Prior to the rise of modern science fields, there have still always been objective ways to judge truth in all kinds of domains.
Yes, that’s basically the point: what are termed hallucinations in LLMs are different from what we see in humans. Even the confabulations of people with severe mental disorders tend to have some underlying order or structure. People detect inconsistencies in their own behavior and that of others, which is why even the rushed doctor in the original comment won’t suggest something wildly off the way LLMs routinely do. They might make a mistake or have incomplete information, but they will suggest things that fit a theory based on their reasoning and understanding, which yields errors at a lower rate and of a different class.
When you ask humans however there are all kinds of made-up "facts" they will tell you. Which is the point the parent makes (in the context of comparing to LLM), not whether some legal database has wrong cases.
Since your example comes from the legal field, you'll probably know very well that even well-intentioned witnesses who aren't actively trying to lie can still hallucinate all kinds of bullshit, and even be certain of it. Even with eyewitnesses, you can ask 5 people and get several different, incompatible descriptions of a scene or an attacker.
>When you ask humans however there are all kinds of made-up "facts" they will tell you. Which is the point the parent makes (in the context of comparing to LLM), not whether some legal database has wrong cases.
Context matters. This is the context LLMs are being commercially pushed to me in. Legal databases also inherit from reality as they consist entirely of things from the real world.
That’s deliberate. “Correct” implies anchoring to a truth function the model doesn’t have. “Plausible” is what it’s actually optimising for, and the disconnect between the two is where most of the surprises (and pitfalls) show up.
As someone else put it well: what an LLM does is confabulate stories. Some of them just happen to be true.
Do you have a better word that describes "things that look correct without definitely being so"? I think "plausible" is the perfect word for that. It's not a sleight of hand to use a word that is exactly defined as the intention.
I mean... That is exactly how our memory works. So in a sense, the factually incorrect information coming from LLM is as reliable as someone telling you things from memory.
But not really? If you ask me a question about Thai grammar or how to build a jet turbine, I'm going to tell you that I don't have a clue. I have more of a meta-cognitive map of my own manifold of knowledge than an LLM does.
Try it out. Ask "Do you know who Emplabert Kloopermberg is?" and ChatGPT/Gemini literally responded with "I don't know".
You, on the other hand, truly have never encountered any information about Thai grammar or (surprisingly) how to build a jet turbine. (I can explain in general terms how to build one just from watching the Discovery Channel.)
The difference is that the models actually have some information on those topics.
System developer with 20+ years of experience building performant, component-based UIs and real-time systems. I enjoy functional programming and use functional techniques daily for clarity and composability. Recent work includes the TiVo smart TV platform at Xperi/Vewd, a JavaScript game engine at Gamesys, and Clojure/ClojureScript applications at Stylitics.
After a 2-year Clojure stint, I find it very hard to explain the clarity that comes with immutability to programmers who are used to triggering effects with a mutation.
I think it may be one of those things you have to see in order to understand.
I think the explanation is: when you mutate variables, it implicitly creates an ordering dependency, where later uses of the variable rely on previous mutations. However, this is an implicit dependency that isn't modeled by the language, so reordering won't cause any errors.
With a very basic concrete example:
x = 7
x = x + 3
x = x / 2
Vs
x = 7
x1 = x + 3
x2 = x1 / 2
Reordering the first will have no error, but you'll get the wrong result. The second will produce an error if you try to reorder the statements.
Another way to look at it is that in the first example, the 3rd calculation doesn't have "x" as a dependency but rather "x in the state where addition has already been completed" (i.e. it's 3 different x's that all share the same name). Doing single assignment is just making this explicit.
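A runnable Python version of the two sketches above. Both compute the same value, but only the second encodes the ordering dependency in names, so a bad reordering becomes a NameError instead of a silently wrong answer:

```python
# Mutating form: statement order matters, but nothing enforces it.
x = 7
x = x + 3
x = x / 2
assert x == 5.0

# Single-assignment form: each name depends explicitly on the previous
# one, so swapping the two calculation lines would raise NameError.
x0 = 7
x1 = x0 + 3   # depends explicitly on x0
x2 = x1 / 2   # depends explicitly on x1
assert x2 == 5.0
```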
The immutable approach doesn't conflate the concepts of place, time, and abstract identity, like in-place mutation does.
In mutating models, typically abstract (mathematical / conceptual) objects are modeled as memory locations. Which means that object identity implies pointer identity. But that's a problem when different versions of the same object need to be maintained.
It's much easier when we represent object identity by something other than pointer identity, such as (string) names or 32-bit integer keys. Such a representation allows us to materialize different versions (or even the same version) of an object in multiple places at the same time. This allows us to concurrently read or write different versions of the same abstract object. It's also an enabler for serialization/deserialization: not requiring an object to be materialized in one particular place allows saving objects to disk or sending them around.
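A minimal sketch of identity-by-key, with all names hypothetical:

```python
# Versions of the "same" abstract object coexist, keyed by
# (object_id, version) instead of by memory address.
store = {}

def put(obj_id, version, snapshot):
    store[(obj_id, version)] = snapshot

put("user:42", 1, {"name": "Ada"})
put("user:42", 2, {"name": "Ada Lovelace"})

# Both versions are readable at once; the key, not a pointer, is what
# makes them "the same" object. Serialization falls out naturally,
# since the key travels with the data.
assert store[("user:42", 1)]["name"] == "Ada"
assert store[("user:42", 2)]["name"] == "Ada Lovelace"
```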
DRAM is linear memory. Caches, less so. Register files really aren't. CPUs spend rather a lot of transistors and power to reconcile the reality of how they manipulate data within the core against the external model of RAM in a flat linear address space.
Modern CPUs do out-of-order execution, which means they need to identify and resolve register sharing dependencies between instructions. This turns the notional linear model of random-access registers into a DAG in practice, where different instructions that might be in flight at once actually read from or write to different "versions" of a named register. Additionally, pretty much every modern CPU uses a register renaming scheme, where the register file at microarchitecture level is larger than that described in the software-level architecture reference, i.e. one instruction's "r7" has no relationship at all to another's "r7".
Caches aren't quite as mix-and-match, but they can still internally manage different temporal versions of a cache line, as well as (hopefully) mask the fact that a write to DRAM from one core isn't an atomic operation instantly visible to all other cores.
Realistically, the compiler is building a DAG called SSA; and then the CPU builds a DAG to do out of order execution, so at a fine grain -- the basic block -- it seems to me that the immutable way of thinking about things is actually closer to the hardware.
That doesn't affect what I said though. Register renaming and pipelining does not make mutation go away and doesn't allow you to work on multiple things "at once" through the same pointer.
It's still logically the same thing with these optimizations, obviously -- since they aren't supposed to change the logic.
I agree that the explicit timeline you get with immutability is certainly helpful, but I also think it's much easier to understand the total state of a program. When an imperative program runs, you almost always have to reproduce a bug in order to understand the state that caused it; fairly often in Clojure you can actually deduce what's happening.
That's right: immutability enables equational reasoning, where it becomes possible to reason through a program just by inspection and evaluation in one's head, since the only context one needs to load is contained within the function itself. Without it you need the entire trace, where anything along the thread of execution could factor into your function's output, since anybody can mutate anybody else's memory willy-nilly.
People jump ahead, using AI to improve their reading comprehension of source code, when there are still basic practices of style, writing, and composition that for some reason are yet to become widespread throughout the industry, despite a long-standing tradition in practice and pretty firm grounding in academia.
In theory it’s certainly right that imperative programs are harder to reason about. In practice programmers tend to avoid writing the kind of program where anything can happen.
> In practice programmers tend to avoid writing the kind of program where anything can happen.
My faith in this presumption dwindles every year. I expect AI to only exacerbate the problem.
Since we are on the topic of Carmack, "everything that is syntactically legal that the compiler will accept will eventually wind up in your codebase." [0]
That example is too simple for me to grasp. How would you code a function that iterates over an array to compute its sum? No cheating with a built-in sum function. If you had to code each addition, how would that work? Curious to learn (I could probably google this or ask Claude to write me the code).
Carmack gives updating in a loop as the one exception:
> You should strive to never reassign or update a variable outside of true iterative calculations in loops.
If you want a completely immutable setup for this, you'd likely have to use a recursive function. This pattern is well supported and optimized in immutable languages like the ML family, but is not super practical in a standard imperative language. Something like
def sum(l):
    if not l: return 0
    return l[0] + sum(l[1:])
Of course this is also mostly insensitive to ordering guarantees (the compiler would be fine with the last line being `return l[-1] + sum(l[:-1])`), but immutability can remain useful in cases like this to ensure no concurrent mutation of a given object, for instance.
You don't have to use recursion, that is, you don't need language support for it. Having first class (named) functions is enough.
For example, you can modify sum so that it doesn't depend on itself but on a function, which it receives as an argument (and which will be itself).
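In Python, that self-passing trick looks something like this (a sketch, not the only way to do it):

```python
# sum_step never names itself; the function to recurse on arrives as an
# argument, so no language-level recursion support is required.
def sum_step(self, l):
    if not l:
        return 0
    return l[0] + self(self, l[1:])

def sum_list(l):
    # Tie the knot by passing sum_step to itself.
    return sum_step(sum_step, l)

assert sum_list([1, 2, 3, 4]) == 10
```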
While your example of `sum` is a nice, pure function, it'll unfortunately blow up in Python on even moderately sized inputs (we're talking thousands of elements, not millions) due to the lack of tail calls in Python (currently) and the restriction on recursion depth. The CPython interpreter as of 3.14 [0] is now capable of using tail calls in the interpreter itself, but that's not yet tail calls in Python proper.
Yeah, to actually use tail-recursive patterns in Python (or at least CPython), except for known-to-be-sharply-constrained problems, you need a library like `tco` because of the implementation limits. Of course, many common recursive patterns can be cast as map, filter, or reduce operations, and all three are available as functions in Python's core (the first two) or stdlib (reduce).
Updating one or more variables in a loop naturally maps to reduce with the updated variable(s) being (in the case of more than one being fields of) the accumulator object.
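Sketched in Python, assuming a toy loop that tracks a running total and a count:

```python
from functools import reduce

# The loop updates two variables per iteration...
xs = [3, 1, 4, 1, 5]
total, count = 0, 0
for x in xs:
    total += x
    count += 1

# ...which maps onto reduce, with the pair (total, count) as the
# accumulator and each loop body becoming one pure step function.
def step(acc, x):
    t, c = acc
    return (t + x, c + 1)

r_total, r_count = reduce(step, xs, (0, 0))
assert (r_total, r_count) == (total, count) == (14, 5)
```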
Yet even Rust allows you to shadow a variable with another one of the same name. Yes, they are two different variables, but to a human reader they have the same name.
I think that Rust made this decision because the x1, x2, x3 style of code is really a pain in the ass to write.
In idiomatic Rust you usually shadow variables with another one of the same name when the type is the only thing meaningfully changing. For example
let x = "29";
let x = x.parse::<i32>();
let x = x.unwrap();
These all use the same name, but you still have the same explicit ordering dependency because they are typed differently. The first is a &str, the second a Result<i32, ParseIntError>, the third an i32, and any reordering of the lines would produce a compiler error. And if you add another line `let y = process(x)`, you would expect it to do something similar no matter where you introduce it among these statements, provided it accepts the current type of x, because the values represent the "same" data.
Once you actually "change" the value, for example by dividing by 3, I would consider it unidiomatic to shadow under the same name. Either mark it as mutable or, preferably, make a new variable with a name that represents what the new value now expresses.
In a Clojure binding this is perfectly idiomatic, but symbolically shared bindings are not shadowed; they are immutably replaced. Mutability is certainly available, but it is explicit. And the type dynamism of Clojure is a breath of fresh air for many applications, despite the evangelism of junior developers steeped in laboratory Haskell projects at university. That said, I have a Clojure project where dynamic typing is thoroughly exploited at a high level: it allows flexible mixing of Clojure's rational math with floating point (or one or the other entirely), while for optimization deeper within the architecture a Rust implementation via JVM JNI provides native performance, ensuring homogeneous unboxed types keep the overall computation tractable. Have your cake and eat it too. Types have their virtues, but not without their excesses.
In Rust, one way I use shadowing is to gather a bunch of examples into one function, but you can copy and paste any single example and it would work.
fn do_demo() {
    let qr = QrCode::encode_text("foobar", Ecc::LOW);
    print_qr(qr);
    let qr = QrCode::encode_text("1234", Ecc::LOW);
    print_qr(qr);
    let qr = QrCode::encode_text("the quick brown fox", Ecc::LOW);
    print_qr(qr);
}
In other languages that don't allow shadowing (e.g. C, Java), the first example would declare the variable and be syntactically correct to copy out, but the subsequent examples would cause a syntax error when copied out.
The values of x are the same. It was just an oversight on my part, but I wondered if I could set my linter to highlight multiple uses of the same variable name in the same function. Does anyone have any suggestions?
Or they got inspired by how this is done in OCaml, which was the host language for the earliest versions of Rust.
Actually, this behaviour is found in many FP languages.
Regarding OCaml, there was even an experimental version of the REPL where one could access the different variables carrying the same name using an ad-hoc syntax.
Sometimes keeping a fixed shape for the variable context across the computation can make it easier to reason about invariants, though.
Like, if you have a constraint is_even(x), that's really easy to check in your head with some informal Floyd-Hoare logic.
And it scales to extracting code into helper functions and multiple variables. If you must track which set of variables forms one context (x1+y1, x2+y2, etc.), I find it much harder to check the invariants in my head.
These 'fixed state shape' situations are where I'd grab a state monad in Haskell and start thinking top-down in terms of actions+invariants.
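A rough Python analogue of that actions+invariants style, with the state shape fixed and the is_even(x) invariant checked after every step (Haskell's State monad expresses this more directly; the steps here are made up):

```python
# One fixed-shape state record threaded through pure step functions,
# with the invariant asserted between steps.
def is_even(n):
    return n % 2 == 0

def double_x(state):
    return {"x": state["x"] * 2, "y": state["y"]}

def bump_both(state):
    # Adds 2 to x (preserving evenness) and 1 to y.
    return {"x": state["x"] + 2, "y": state["y"] + 1}

state = {"x": 4, "y": 0}
for action in (double_x, bump_both):
    state = action(state)
    assert is_even(state["x"]), "invariant is_even(x) broken"

assert state == {"x": 10, "y": 1}
```

Because the shape never changes, each action is checkable against the invariant in isolation, which is the top-down reasoning the comment describes.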
Guys, guys, I don't think we're on the same page here.
The conversation I'm trying to have is "stop mutating all the dynamic self-modifying code, it's jamming things up". The concept of non-mutating code, only mutating variables, strikes me as extremely OCD and overly bureaucratic. Baby steps. Eventually I'll transition from my dynamic recompilation self-modifying code to just regular code with modifying variables. Only then can we talk about higher level transcendental OOP things such as singleton factory model-view-controller-singleton-const-factories and facade messenger const variable type design patterns. Surely those people are well reasoned and not fanatics like me
It's funny that converting the first example to the second is a common thing a compiler does, Static single assignment [0], to make various optimizations easier to reason about.
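A toy illustration of that conversion, with the SSA-style renaming written out by hand:

```javascript
// Before: one mutable variable, assigned twice.
let x = 1;
x = x + 2;

// After SSA-style renaming: every assignment gets a fresh name,
// which is exactly the shadowing style discussed upthread.
const x1 = 1;
const x2 = x1 + 2; // same final value as x above
```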
"Constant" is ambiguous. Depending on who you ask, it can mean either:
1. A property known at compile time.
2. A property that can't change after being initially computed.
Many of the benefits of immutability accrue to properties whose values are only known at runtime but which are still known not to change after that point.
I think you’re after something other than immutability then.
You’re allowed to rebind a var defined within a loop, it doesn’t mean that you can’t hang on to the old value if you need to.
With mutability, you actively can’t hang on to the old value, it’ll change under your feet.
Maybe it makes more sense if you think about it like tail recursion: you call a function and do some calculations, and then you call the same function again, but with new args.
This is allowed, and not the same as hammering a variable in place.
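A small sketch of the difference: per-iteration bindings let each closure hang on to "its" value, where one mutated variable would not.

```javascript
const closures = [];
for (let i = 0; i < 3; i++) {
  const label = `step ${i}`;  // a fresh binding every iteration
  closures.push(() => label); // each closure keeps its own value
}
const results = closures.map(f => f()); // ["step 0", "step 1", "step 2"]

// Contrast: with a single function-scoped `var label` mutated in
// place, every closure would observe the final value instead.
```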
for (0..5) |i| {
i = i + 1;
std.debug.print("foo {}\n", .{i});
}
In this loop in Zig, the reassignment to i fails, because i is a constant. However, i is a new constant bound to a different value each iteration.
To potentially make it clearer that this is not mutation of a constant between iterations, technically &i could change between iterations, and the program would still be correct. This is not true with a c-style for loop using explicit mutation.
I argue in your example there are 6 constants, not 1 constant with 6 different values, though this could be semantics ie we could both be right in some way
Immutable and constant are the same. rendaw didn't use the word mutable. One reason someone might use the word "mutable" is that it's a succinct way of expressing an idea. Alternative ways of expressing the same idea are longer words (changeable, non-constant).
In languages like JavaScript, immutable and constant may be theoretically the same thing, but in practice "const" means a variable cannot be reassigned, while "immutable" means a value cannot be mutated in place.
They are very, very different semantically, because const is always local. Declaring something const has no effect on what happens with the value bound to a const variable anywhere else in the program. Whereas, immutability is a global property: An immutable array, for example, can be passed around and it will always be immutable.
JS has always had 'freeze' as a kind of runtime immutability, and tooling like TS can provide readonly types that give immutability guarantees at compile time.
I think JavaScript has a language / terminology problem here. It has to be explained to newcomers constantly that `const a = []` does not imply you cannot say `a.push( x )` (mutation), it just keeps you from being able to say `a = x` further down (re-binding). Since in JavaScript objects always start life as mutable things, but primitives are inherently immutable, `const a = 4` does guarantee `a` will be `4` down the line, though. The same is true of `const a = Object.freeze( [] )` (`a` will always be the empty list), but, lo and behold, you can still add elements to `a` even after `const a = Object.freeze( new Set() )`, which is, shall we say, unfortunate.
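The Set behaviour is easy to check: `Object.freeze` seals an object's own properties, but a Set's entries live in internal state that freeze never reaches.

```javascript
"use strict";

// Frozen array: push tries to define an index property and throws.
const arr = Object.freeze([]);
let pushThrew = false;
try { arr.push(1); } catch (e) { pushThrew = true; } // TypeError

// Frozen Set: add() mutates internal data, untouched by freeze.
const s = Object.freeze(new Set());
s.add("x"); // succeeds silently; s.size is now 1
```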
The vagaries don't end there. NodeJS' `assert` namespace has methods like `equal()`, `strictEqual()`, `deepEqual()`, `deepStrictEqual()`, and `partialDeepStrictEqual()`, which is both excessive and badly named (although there's good justification for what `partialDeepStrictEqual()` does); ideally, `equal()` should be both `strict` and `deep`. That this is also a terminology problem is borne out by explanations that oftentimes do not clearly differentiate between object value and object identity.
In a language with inherent immutability, object value and object identity may (conceptually at least) be conflated, like they are for JavaScript's primitive values. You can always assume that an `'abc'` over here has the same object identity (memory location) as that `'abc'` over there, because it couldn't possibly make a difference were it not the case. The same should be true of an immutable list: for all we know, and all we have to know, two immutable lists could be stored in the same memory when they share the same elements in the same order.
There is no exception for ANY data structure that includes references to other data structures or primitives. Not only can you add or remove elements from an array, you can change them in place.
A const variable that refers to an array is a const variable. The array is still mutable. That's not an exception, it's also how a plain-old JavaScript object works: You can add and remove properties at will. You can change its prototype to point to something else and completely change its inheritance chain. And it could be a const variable to an unfrozen POJO all along.
That is not an exception to how things work, it's how every reference works.
I know, and I do agree it's consistent, but then it doesn't make any sense to me as a keyword in a language where non-primitives are always by-reference.
You can't mutate the reference, but you _can_ copy the values from one array into the data under an immutable reference, so const doesn't prevent basically any of the things you'd want to prevent.
The distinction makes way more sense to me in languages that let you pass by value. Passing a const array says don't change the data, passing a const reference says change the data but keep the reference the same.
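What "copy the values into the data under an immutable reference" looks like in practice (a minimal sketch):

```javascript
const ref = [1, 2, 3];
// ref = [];           // forbidden: rebinding a const throws
ref.length = 0;        // but the contents can be replaced wholesale
ref.push(9, 9);        // same reference, entirely different data
// ref is now [9, 9]
```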
The beauty of `const` in JS is that it's almost completely irrelevant. Not only does it have nothing to do with immutability, it's also local. Which means, if I were to write `let` instead of `const`, I could still see whether my code reassigned that variable at a glance. The keyword provides very little in the way of a guarantee I could not otherwise observe for myself.
Immutability is completely different. Determining whether a data structure is mutated without an actual immutable type to enforce is impractical, error-prone, and in any event impossible to prove for the general case.
That's because in many languages there is a difference between a stored reference being immutable and the contents of the thing the reference points to being immutable.
Oh, good point. I misunderstood your previous question.
Is there a name that refers to the broader group that includes both constants and variables? In practice, and in e.g. C++, "variable" is used to refer to both constants and actual variables, due to there not being a different common name that can be used to refer to both.
In plenty of languages, there's not really a difference. In Rust, there is a difference between a `let var_name = 10;` and `const var_name: u64 = 10;` in that the latter must have its value known at compile-time (it's a true constant).
> why are you calling it mutable?
Mostly just convention. Rust is immutable by default and you have to mark variables specifically with `mut` (so `let mut var_name = 10;`). Other languages distinguish between variables and values, so var and val, or something like that. Or they might do var and const (JS does this, I think) to be more distinct.
If you call a variable tau in production code then you're being overly cute. I know what it means, because I watch math YouTube for fun, but $future_maintainer in all likelihood won't.
Agree that in real life you can come up with meaningful names (and should, when the names are used far away from the point of assignment), but it doesn't make sense for the GP's example, where the whole point was to talk about assignments in the abstract.
I had a similar experience with Scheme. I could tell people whatever I wanted, they wouldn't really realize how much cleaner and easier to test things could be if we just used functions instead of mutating things around. And since I was the only one who had done projects in an FP language, and they had only used non-FP languages like Java, Python, JavaScript and TypeScript before, they would continue to write things based on needless mutation. The issue was also that in Python it can be hard to write functional-style code in a readable way; even JS seems to lend itself better to that. What's more, one will probably be hard pressed to find the functional data structures one might want to use, and one needs to work around recursion due to the limitations of those languages.
I think it's simply the difference between the curious mind, who explores stuff like Clojure off the job (or is very lucky to get a Clojure job) and the 9 to 5 worker, who doesn't know any better and has never experienced writing a FP codebase.
I'm really afraid that the weak point of the argument is really Scheme having a Lisp syntax. One might say syntax is the most superficial thing about a language but as a matter of fact it's the mud pool in front of the property where everybody's wheels get stuck and they feel their only option is to go into reverse and maybe try another day, or never. The same happens with APL; sure it's a genius who invented it and tic-tac-toe in a single short line of code is cool—doesn't mean many people get over the syntax.
FWIW I believe that JS for one would greatly benefit from much better support for immutable data, including time- and space-efficient ways to produce modified copies of structured data (like you don't think twice when you do `string.replace(...)`, where you do in fact produce a copy; `list.push(...)` could conceivably operate similarly).
Doesn't even have to be true copies. Structural sharing is a thing that enables many or most functional data structures and avoids excessive memory usage. I agree with your point, and it would put JS higher in my list of liked languages.
JS is much more of a functional language than it was given credit for a long time. It had first-class functions and closures from day one if I'm not mistaken.
The way I like to think about is that with immutable data as default and pure functions, you get to treat the pure functions as black boxes. You don't need to know what's going on inside, and the function doesn't need to know what's going on in the outside world. The data shape becomes the contract.
As such, localized context, everywhere, is perhaps the best way to explain it from the point of view of a mutable world. At no point do you ever need to know about the state of the entire program, you just need to know the data and the function. I don't need the entire program up and running in order to test or debug this function. I just need the data that was sent in, which CANNOT be changed by any other part of the program.
Sure modularity, encapsulation etc are great tools for making components understandable and maintainable.
However, don't you still need to understand the entire program as ultimately that's what you are trying to build.
And if the state of the entire programme doesn't change - then nothing has happened. ie there still has to be mutable state somewhere - so where is it moved to?
In functional programs, you very explicitly _do not_ need to understand an entire program. You just need to know that a function does a thing. When you're implementing a function-- sure, you need to know what it does. But you're defining it in such a way that the user should not know _how_ it works, only _what_ it does. This is a major distinction between programs written with mutable state and those written without. The latter is _much_ easier to think about.
I often hear from programmers that "oh, functional programming must be hard." It's actually the opposite. Imperative programming is hard. I choose to be a functional programmer because I am dumb, and the language gives me superpowers.
I think you missed the point. I understand that if you're writing a simple function with an expected interface/behaviour then that's all you need to understand. Note this isn't something unique to a functional approach.
However, somebody needs to know how the entire program works - so my question was: where does that application state live in a purely functional world of immutables?
It didn't disappear; there's just less of it. Only the stateful things need to remain stateful. Everything else becomes single-use.
Declaring something as a constant gives you license to only need to understand it once. You don't have to trace through the rest of the code finding out new ways it was reassigned. This frees up your mind to move on to the next thing.
> Only the stateful things need to remain stateful.
And I think it is worth noting that there is effectively no difference between “stateful” and “not stateful” in a purely functional programming environment. You are mostly talking about what a thing is and how you would like to transform it. Eg, this variable stores a set of A and I would like to compute a set of B and then C is their set difference. And so on.
Unless you have hybrid applications with mutable state (which is admittedly not uncommon, especially when using high performance libraries) you really don’t have to think about state, even at a global application level. A functional program is simply a sequence of transformations of data, often a recursive sequence of transformations. But even when working with mutable state, you can find ways to abstract away some of the mutable statefulness. Eg, a good, high performance dynamic programming solution or graph algorithm often needs to be stateful; but at some point you can “package it up” as a function and then the caller does not need to think about that part at all.
And what about that state that needs to exist? - like application state ( for example this text box has state in terms of keeping track of text entered, cursor position etc ).
Where does that go?
Are you creating a new immutable object at every keystroke that represents the addition of the latest event to the current state?
Even then you need to store a pointer to that current state somewhere right?
It's moved toward the edges of your program. In a lot of functional languages, places that can perform these effects are marked explicitly.
For example, in Haskell, any function that can perform IO has "IO" in the return type, so the "printLine" equivalent is: "putStrLn :: String -> IO ()". (I'm simplifying a bit here). The result is that you know that a function like "getUserComments :: User -> [CommentId]" is only going to do what it says on the tin - it won't go fetch data from a database, print anything to a log, spawn new threads, etc.
It gives similar organizational/clarity benefits as something like "hexagonal architecture," or a capabilities system. By limiting the scope of what it's possible for a given unit of code to do, it's faster to understand the system and you can iterate more confidently with code you can trust.
You are very right in that things need to change. If they don't, nothing interesting happens and we as programmers don't get paid :p. State changes are typically moved to the edges of a program. Functional Core, Imperative Shell is the name for that particular architecture style.
FCIS can be summed up as: R->L->W where R are all your reads, L is where all the logic happens and is done in the FP paradigm, and W are all your writes. Do all the Reads at the start, handle the Logic in the middle, Write at the end when all the results have been computed. Teasing these things apart can be a real pain to do, but the payoff can be quite significant. You can test all your logic without needing database or other services up and running. The logic in the middle becomes less brittle and allows for easier refactoring as there is a clear separation between R, L and W.
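A minimal FCIS sketch (hypothetical names like `db.readOrders` are assumptions, not a real API): the pure middle can be tested with no database or services running.

```javascript
// Logic (pure core): no IO, trivially testable, returns new data.
function applyDiscount(orders, rate) {
  return orders.map(o => ({ ...o, total: o.total * (1 - rate) }));
}

// Shell (imperative): Reads first, Logic in the middle, Writes last.
async function run(db) {
  const orders = await db.readOrders();          // R
  const discounted = applyDiscount(orders, 0.1); // L
  await db.writeOrders(discounted);              // W
}
```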
For your first question: yes, and I might misunderstand the question, so give me some rope to hang myself with, will ya ;). I would argue that what you really need to care about is the data that you are working with. That's the real program. Data comes in, you do some type of transformation of that data, and you write it somewhere in order to produce an effect (the interesting part). The part where FP becomes really powerful is when you have data that always has a certain shape, and all your functions understand and can work with the shape of that data. When that happens, the functions start to behave more like lego blocks. The data shape is the contract between the functions, and as long as they keep to that contract, you can switch out functions as needed. And so, to answer the question: yes, you do need to understand the entire program, but only as the programmer. The function doesn't, and that's the point. When the code that resides in the function doesn't need to worry about the state of the rest of the program, you as the programmer can reason about the logic inside, without having to worry about some other part of the program doing something at the same time that will mess up the code inside the function.
Debugging in FP typically involves knowing the data and the function that was called. You rarely need to know the entire state of the program.
I'm trying to work out in my head if it helps the true challenge of programming - not writing the program in the first place, but maintaining it as requirements evolve.
The examples for functional programming benefits always seem to boil down to composable functions operating on lists of stuff where the shape has to be the same or you convert between shapes as you go.
It's very useful, but it's not a whole programme - unless you have some simple server-side data processing pipeline - and I'd argue those aren't difficult programs.
Programming gets difficult when you have to manage state - so I accept that the parts that don't have to do that are therefore much simpler, however you have just moved the problem, not solved it.
And you say you've moved it to the edge of the program - that's fine with a simple in->function->out, but in the case of a GUI, isn't state at the core of the program?
In that case isn't something with a central model that receives and emits events, easier to reason over and mutate?
Even the GUI can follow the FCIS architecture. It helps immensely with testing and moving things around.
For a bigger program that handles lots of things, you can still build it around the FCIS architecture, you just end up with more in->chains of functions->out. The things at the edges might grow, but at a much slower pace than the core.
My experience with both sides is what's driven me to FP+immutability.
For your last question: I believe it's a false belief. I believed the same when I started with FP+immutability. I just did not understand where I should put my changes, because I was so used to mutating a variable. Turned out that I only really need to mutate when I store in a db of some sort (frontend or backend), send it over the wire (socket, websocket, http response, gRPC, pub/sub, etc) or act on an object hiding inherent complexity (hardware state like push button, mouse, keyboard, etc). Graphics would also qualify, but that's one area where I think FP+immutability is ill suited.
> However, don't you still need to understand the entire program as ultimately that's what you are trying to build.
Of course not, that's impossible. Modern programs are way too large to keep in your head and reason about.
So you need to be able to isolate certain parts of the program and just reason about those pieces while you debug or modify the code.
Once you identify the part of the program that needs to change, you don't have to worry about all the other parts of the program while you're making that change as long as you keep the contracts of all the functions in place.
> Once you identify the part of the program that needs to change,
And how do you do that without understanding how the program works at a high level?
I understand the value of clean interfaces and encapsulation - that's not unique to functional approaches - I'm just wondering in the world of pure immutability where the application state goes.
What happens if the change you need to make is at a level higher than a single function?
Yes, obviously a program with no mutability only heats up the CPU.
The point is to determine the points in your program where mutation happens, and the rest is immutable data and pure functions.
In the case of interacting services, for example, mutation should happen in some kind of persistent store like a database. Think of POST and PUT vs GET calls. Then a higher level service can orchestrate the component services.
Other times you can go a long way with piping the output of one function or process into another.
In a GUI application, the contents of text fields and other controls can go through a function and the output used to update another text field.
The point is to think carefully about where to place mutability into your architecture and not arbitrarily scatter it everywhere.
A pretty basic example: I write a lot of data pipelines in Julia. Most of the functions don't mutate their arguments, they receive some data and return some data. There are a handful of exceptions, e.g. the functions that write data to a db or file somewhere, or a few performance-sensitive functions that mutate their inputs to avoid allocations. These functions are clearly marked.
That means that 90% of the time, there's a big class of behavior I just don't need to look for when reading/debugging code. And if it's a bug related to state, I can pretty quickly zoom in on a few possible places where it might have happened.
> However, don't you still need to understand the entire program as ultimately that's what you are trying to build.
Depends on what I'm trying to do. If what I'm trying to handle is local to the code, then possibly not. If the issue is what's going into the function, or what the return value is doing, then I likely do need that wider context.
What pure-functional functions do allow is certainty the only things that can change the behaviour of that function are the inputs to that function.
I would say it's more than immutability - it's the "feel" of working with values. I've worked with at least 6 languages professionally and likely more for personal projects over last 20 years. I can say that Clojure was the most impactful language I learned.
I tried to learn Haskell before, but I just got bogged down in the type system and formalization - that never sat well with me (ironically, in retrospect, Monads are a trivial concept that the community obfuscated to oblivion; "yet another Monad tutorial" was a meme at the time).
I used F# as well but it is too multi paradigm and pragmatic, I literally wrote C# in F# syntax when I hit a wall and I didn't learn as much about FP when I played with it.
Clojure had the lisp weirdness to get over, but its homoiconicity combined with the powerful semantics of the core data structures - it was the first time the concept of working with values vs objects 'clicked' for me. I would still never use it professionally, but I would recommend it to everyone who does not have a background in FP and/or lisp experience.
I have dreams of being at a “Clojure shop” but I fear daily professional use might dull my love for the language. Having to realize that not everyone on my team wants to learn lisp (or FP) just to work with my code (something I find amazing and would love to be paid to do) was hard.
On a positive note I have taken those lessons from clojure (using values, just use maps, Rich’s simplicity, functional programming without excessive type system abstraction, etc) and applied them to the rest of my programming when I can and I think it makes my code much better.
I think the advantage is often oversold and people often miss how things actually exist on a continuum and just plainly opposing mutable and immutable is sidestepping a lot of complexity.
For example, it's endlessly amusing to me to see all the efforts the Haskell community makes to basically reinvent mutability in a way that is somehow palatable to their type system. Sometimes they even fail to realise that that's what they are doing.
In the end, the goal is always the same: better control and guarantees about the impact of side effects, with minimum fuss. Carmack's approach here is sensible. You want practices which make things easy to debug and reason about while maintaining flexibility where it makes sense, like iterative calculations.
If you read through the Big Red Book¹ or its counterpart for Kotlin², it's quite explicit about the goals with these techniques for managing effects, and goes over rewriting imperative code to manage state in a "pure" way.
I think the authors are quite aware of the relationship between these techniques and mutable state! I imagine it's similar for other canonical functional programming texts.
Besides the "pure" functional languages like Haskell, there are languages that are sort of immutability-first (and support sophisticated effects libraries), or at least have good immutable collections libraries in the stdlib, but are flexible about mutation as well, so you can pick your poison: Scala, Clojure, Rust, Nim (and probably lots of others).
All of these go further and are more comfortable than just throwing `const` or `.freeze` around in languages that weren't designed with this style in mind. If you haven't tried them, you should! They're really pleasant to work with.
For me, well-written books are an enjoyable way to learn, and I'll admit I'm partial to that!
But of course you can learn in whatever way you like. Books are just a convenient example to point to as an indicator of how implementers, enthusiasts, and educators working with these techniques make sense of them and compare them to mutating variables. They're easy to refer to because they're notable public artifacts.
Fwiw, there's also an audiobook of the Red Book. To really follow the important parts, you'll want to be reading and writing and running code, but you can definitely get a sense of the more basic philosophical orientation just listening along while doing chores or whatever. :)
Lenses are mutation by another name. You are basically recreating state on top of an immutable system. Sure, it's all immutable actually, but conceptually it doesn't really change anything. That's what makes it hilarious.
In the end, the world is stateful and even the purest abstractions have to hit the road at some point. But the authors of Haskell were fully aware of that. The monadic type system was conceived as a way to easily track side effects after all, not banish them.
It’s a clear-minded and deliberate approach to reconciling principle with pragmatic utility. We can debate whether it’s the best approach, but it isn’t like… logically inconsistent, surprising, or lacking in self awareness.
You might also think of it a bit like poetry: creativity emerging from the process of working within formal constraints. By asking how you can represent something familiar in a specially structured way, you can learn both about that structure and the thing you're trying to unite with it. Occasionally, you'll even create something beautiful or powerful, as well.
Maybe in that sense there's an "artificial" challenge involved, but it's artificial in the sense of being deliberate rather than merely arbitrary or absurd.
You don’t see what’s hilarious about recreating what you are pretending to remove only one abstraction level removed?
Anyway, I have great hopes for effect systems as a way to approach this in a principled way. I really like what OCaml is currently doing with concurrency. It's clear to me that there is great value to unlock here.
I don’t agree with your characterization that anyone is “pretending”. The whole point of abstraction is convenience of reasoning. No one is fooling themselves or anyone else, nor trying to. It’s a conscious choice, for clear purposes. That’s precisely as hilarious as using another abstraction you might favor more, such as an effect system.
>For exemple, it's endlessly amusing to me to see all the efforts the Haskell community does to basically reinvent mutability in a way which is somehow palatable to their type system.
That's because Haskell is predominantly a research language, originally intended for experimenting with new programming language ideas.
It should not be surprising that people use it to come up with or iterate on existing features.
Clojure also makes it very easy, it'd require too much discipline to do such a thing in Python. Even Carmack, who I think still does python mostly by himself instead of a team, is having issues there.
> it'd require too much discipline to do such a thing in Python
Is Python that different from JavaScript? Because it's easy in JavaScript. Just stop typing var and let, and start typing const. When that causes a problem, figure out how to deal with it. If all else fails: "Dear AI, how can I do this thing while continuing to use const? I can't figure it out."
I agree that Python is not too different and in general I treat my Python variables as const. One thing, however, where I resort to mutating variables more often than I'd like is when building lists & dictionaries. Lambdas in Python have horrible DX (no multi-line, no type annotations, bad type checker support even in obvious cases), which is why the functional approach to build your list, using map() and filter() is much more cumbersome than in JS. As a result, whenever a list comprehension becomes too long, you end up building your list the old-fashioned way, using a for loop and the_list.append().
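For comparison, the JS version of that list-building stays non-mutating without any ceremony, which is roughly what the comment means by the better ergonomics:

```javascript
const nums = [1, 2, 3, 4, 5];
// filter + map with arrow functions (which, unlike Python lambdas,
// can span multiple lines and carry type annotations in TS)
const evensDoubled = nums
  .filter(n => n % 2 === 0)
  .map(n => n * 2);
// evensDoubled → [4, 8]
```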
JavaScript only enforces that const bindings aren't reassigned. So this,
const arr = []
arr.push("grape nuts")
is just peachy in JS and requires the programmer to avoid using it.
More importantly, because working immutably in JS is not enforced, trying to use it consistently either limits which libraries you can use and/or requires you to wrap them to isolate their side effects. ImmerJS can help a lot here, since immutability is its whole jam. I’d rather work in a language where I get these basic benefits by default, though.
Python doesn't have constants at the language level. You can create classes without setter properties, only getter properties, to have constant objects. This is rare, usually people just write the name in SCREAMING_SNAKE_CASE to document it's supposed to be a constant but Python will still allow mutating it.
The concept is actually pretty simple: instead of changing existing values, you create new values.
The classic example is a list or array. You don't add a value to an existing list. You create a new list which consists of the old list plus the new value. [1]
This is a subtle but important difference. It means any part of your program with a reference to the original list will not have it change unexpectedly. This eliminates a large class of subtle bugs you no longer have to worry about.
[1] Whether the new list holds a completely new copy of the existing data, or references it from the old list, is an important optimization detail, but either way the guarantee is the same. Getting these optimizations right is important for the language to be practically efficient, but while using the data structure you don't have to worry about those details.
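In JS terms, the non-mutating append is just a spread (a sketch; persistent-data-structure libraries achieve the same guarantee with structural sharing instead of a full copy):

```javascript
const xs = [1, 2, 3];
const ys = [...xs, 4]; // a new array; xs is untouched
// xs → [1, 2, 3], ys → [1, 2, 3, 4]
```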
> The classic example is a list or array. You don't add a value to an existing list. You create a new list which consists of the old list plus the new value. [1]
Getting back to this, though - where would this be useful? What would do this?
I'm not getting why having a new list that's different from the old list, with some code working off the old list and some working off the new list, is anything you'd ever want.
Can you give a practical example of something that uses this?
> It means any part of your program with a reference to the original list will not have it change unexpectedly.
I don't get why that would be useful. The old array of floats is incorrect. Nothing should be using it.
That's the bit I don't really understand. If I have a list and I do something to it that gives me another updated list, why would I ever want anything to have the old incorrect list?
There’s a mismatch between your assumptions coming from C and GP’s assumptions coming from a language where arrays are not fixed-length. Having a garbage collector manage memory for you is pretty fundamental to immutable-first languages.
Rich Hickey asked once in a talk, “who here misses working with mutable strings?” If you would answer “I do,” or if you haven’t worked much in languages where strings are always immutable and treated as values, it makes describing the benefits of immutability more challenging.
Von Neumann famously thought Assembly and higher-level language compilers were a waste of time. How much that opinion was based on his facility with machine code I don’t know, but compilers certainly helped other programmers to write more closely to the problem they want to solve instead of tracking registers in their heads. Immutable state is a similar offloading-of-incidental-complexity to the machine.
I must admit I do regard assembly language with some suspicion, because the assembler can make some quite surprising choices. Ultra-high-level languages like C are worse, though, because they can often end up doing things like allocating really wacky bits of memory for variables and then having to get up to all sorts of stunts to index into your array.
State exists in time: a variable is usually valid at the point it's created, but it might not be valid in the future. So if part of your program accesses a variable expecting it to reflect one point in time when it actually reflects another (it was mutated), that can cause issues.
> If you want to do an operation on fooA, you don't mutate fooA. You call fooB = MyFunc(fooA) and use fooB.
This is the bit I don't get.
Why would I do that? I will never want a fooA and a fooB. I can't see any circumstances where having a correct fooB and an incorrect fooA kicking around would be useful.
It is about being able to think clearly about your code logic. If your code has many places where a variable can change, then it is hard to go back and understand exactly where it changed if you have unexpected behavior. If the variable can never change then the logical backtrace is much shorter.
Because the account owner withdrew money. The player scored a goal, the month ticked over, the rain started, the car accelerated, a new comment was added to the thread.
> If you need new values you just make new things.
> If you want to do an operation on fooA, you don't mutate fooA. You call fooB = MyFunc(fooA) and use fooB.
The beautiful thing about this is you can stop naming things generically, and can start naming them specifically what they are. Comprehension goes through the roof.
Okay, so for example I might set something like "this bunch of parameters" immutable, but "this 16kB or so of floats" are just ordinary variables which change all the time?
Or then would the block of floats be "immutable but not from this bit"? So the code that processes a block of samples can write to it, the code that fills the sample buffer can write to it, but nothing else should?
The methods don't mutate the array, they return a new array with the change.
The trick is: How do you make this fast without copying a whole array?
Clojure includes a variety of collection classes that "magically" make these operations fast, for a variety of data types (lists, sets, maps, queues, etc). Also on the JVM there's Vavr; if you dig around you might find equivalents for other platforms.
No, it won't be quite as fast as mutating a raw buffer, but it's usually plenty fast, and you can always special-case performance-sensitive spots.
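A toy sketch of the trick those libraries use, structural sharing, written in Python for illustration: "adding" reuses the old cells instead of copying them. Real persistent collections (Clojure's, Vavr's) use wide trees so updates anywhere are cheap; this singly-linked list only shows the core idea.

```python
# A persistent cons-style list: prepending is O(1) and never touches
# the old list, because the new cell just points at it.
class Cell:
    __slots__ = ("head", "tail")
    def __init__(self, head, tail=None):
        self.head = head
        self.tail = tail

def prepend(value, lst):
    return Cell(value, lst)         # no copying; old list untouched

def to_list(lst):                   # helper to inspect the result
    out = []
    while lst is not None:
        out.append(lst.head)
        lst = lst.tail
    return out

xs = prepend(2, prepend(3, None))   # the list (2 3)
ys = prepend(1, xs)                 # the list (1 2 3), sharing xs's cells

print(to_list(xs))                  # [2, 3]
print(to_list(ys))                  # [1, 2, 3]
print(ys.tail is xs)                # True: structure is shared, not copied
```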
Even if you never write a line of production Clojure, it's worth experimenting with just to get into the mindset. I don't use it, but I apply the principles I learned from Clojure in all the other languages I do use.
It ends up being quite the opposite: many, many bugs come from unexpected side effects of mutation. You pass that array to a function, and it turns out that ten layers deeper in the call stack, in code written by somebody else, some function decided to mutate it.
Immutability gives you solid contracts. A function takes X as input and returns Y as output. This is predictable, testable, and thread safe by default.
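A small sketch of that failure mode and the immutable contract, with illustrative names:

```python
# The aliasing bug: a helper deep in the call stack mutates a list
# its caller still depends on.
def helper(xs):
    xs.sort()                # "innocent" mutation, visible to every holder

prices = [3.0, 1.0, 2.0]
first_seen = prices[0]       # caller assumes prices keeps its order
helper(prices)
print(prices[0] == first_seen)   # False -- changed at a distance

# The immutable contract: take X, return a new Y, leave X alone.
def helper_pure(xs):
    return sorted(xs)        # builds a new list; the argument is untouched

prices = [3.0, 1.0, 2.0]
ordered = helper_pure(prices)
print(prices)                # [3.0, 1.0, 2.0]
print(ordered)               # [1.0, 2.0, 3.0]
```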
If you have a bunch of stuff pointing at an object and all that stuff needs to change when the inner object changes, then you "raise up" the immutability to a higher level.
If you keep going with this philosophy you end up with something roughly like "software transactional memory" where the state of the world changes at each step, and you can go back and look at old states of the world if you want.
Old states don't hang around if you don't keep references to them. They get garbage collected.
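A rough sketch of "raising up" the mutability: one reference points at an immutable world value, each step builds a new world, and old snapshots survive only as long as something references them. The `world`/`step` names are illustrative.

```python
# One mutable reference, many immutable snapshots.
world = {"score": 0, "level": 1}     # treated as immutable by convention

def step(w, **changes):
    return {**w, **changes}          # new dict; the old world is untouched

history = [world]                    # keep references -> old states live on
world = step(world, score=10)
history.append(world)
world = step(world, level=2)
history.append(world)

print(history[0])    # {'score': 0, 'level': 1} -- still inspectable
print(history[-1])   # {'score': 10, 'level': 2}
# Drop `history` and the old snapshots become garbage-collectible.
```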
Okay, so this sounds like it's a method of programming that is entirely incompatible with anything I work on.
What sort of thing would it be useful for?
The kind of things I do tend to have maybe several hundred thousand floating point values that exist for maybe a couple of hundred thousandths of a second, get processed, get dealt with, and then are immediately overwritten with the next batch.
I can't think of any reason why I'd ever need to know what they were a few iterations back. That's gone, maybe as much as a ten-thousandth of a second ago, which may as well be last year.
It is useful for the vast majority of business processing. And, if John Carmack is to be believed, video game development.
Carmack's post explains it: if you make a series of immutable "variables" instead of reassigning one, it is much easier to debug. This is a microcosm of time-travel debugging; it lets you look at the state of those variables several steps back.
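The style in miniature (a contrived pipeline; the names are illustrative):

```python
# Each intermediate result gets its own name, so a debugger (or a
# print) can still see every step after the fact.
raw = "  42  "
trimmed = raw.strip()
parsed = int(trimmed)
doubled = parsed * 2

# Versus:  x = "  42  "; x = x.strip(); x = int(x); x = x * 2
# where the earlier values are gone by the time you inspect x.
print(raw, trimmed, parsed, doubled)
```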
I don't know anything about your specific field, but I am confident that getting to the point where you deeply understand this perspective will improve your programming, even if you don't always use it.
I spent some time in another thread discussing why the foreach loop is so bad in many languages. Most of the bugs I write come from managing state, yet if I want to do much more than go start to end of a collection, I have to either use methods that are slower than a proper loop or manage all the state myself.
In Common Lisp you have the loop macro (or better: iterate); in Racket you have the for loops. I wrote a thing for Guile Scheme [0]. Other than that I don't know of many nice looping facilities. In many languages you can achieve all that with combinators and whatnot, but always at the cost of performance.
I think this is an opportunity for languages to become safer and easier to use without changing performance.
One big problem with mutation is that it makes it too easy to violate many good design principles, e.g. modularity, encapsulation and separation of concerns.
Because any piece of code that holds a reference to a mutable variable is able to, at a distance, modify the behavior of a piece of code that uses this mutable variable.
Conversely, a piece of code that only uses immutable variables, and takes as argument the values that may need to vary between executions, is isolated against having its behavior changed at a distance at any time.
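A sketch of that "action at a distance," with illustrative names (`config`, `tax_rate` are not from the original):

```python
# A shared mutable variable lets distant code change this function's
# behavior without touching it.
config = {"tax_rate": 0.25}

def total_coupled(price):
    return price * (1 + config["tax_rate"])   # depends on distant state

def total_isolated(price, tax_rate):
    return price * (1 + tax_rate)             # everything is in the signature

print(total_coupled(100))            # 125.0 -- for now
config["tax_rate"] = 0.5             # some other module changes it...
print(total_coupled(100))            # 150.0 -- same call, different answer

print(total_isolated(100, 0.25))     # 125.0 -- always, no matter who ran first
```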
> I think it may be one of those things you have to see in order to understand.
Or the person doesn't understand, then declares the language to be too difficult to use. This probably happens more than the former, sadly.
ex. I've heard people argue for rewriting perfectly working Erlang services in C++ or Java, because they find Erlang "too difficult". Despite it being a simpler language than either of those.
The `assoc` on the second binding is returning a new object; you're just shadowing the previous binding name.
This is different than mutation, because if you were to introduce an intermediate binding here, or break this into two `let`s, you could be holding references to both objects {:a 1} and {:a 1 :b 2} at any time in a consistent way - including in a future/promise dereferenced later.
It's more nuanced, because the shadowing is block-local, so when the lexical scope exits the prior bindings are restored.
I think in practice this is the ideal middle ground: convenient (putting version numbers at the end of variable names is annoying) while retaining mostly sane semantics and the ability to reuse prior intermediate results.
I think a lot of this kind of stuff should have language support (like he mentions), even if it is not that functional and is just as a hint.
That said, utopias are not always a great idea. Making all your code functional might be philosophically satisfying, but sometimes there are good reasons to break the rules.
The flash of enlightenment I had when I understood the incredible power the rules of functional programming give you as a coder is probably the biggest one I've had in my career so far. Idempotence, immutability and statelessness on their own let you build a thing once, in a disciplined way, and then use it willy-nilly anywhere you want without having to think about anything other than "things go into process, other things come out," and it's so nice.
I’ve been using git for remote music-production collaboration for 5 years. We sometimes use branches as well, when we are working on, say, two ideas for a bass line. We’ve not really had any issues other than that we need Git LFS.
The workflow is based on having the same software; Ableton and plugins are largely mirrored.
We communicate over FaceTime, which is good enough to assess ideas. We then record track by track and build the songs that way.