Indeed. And not only that, all this capex has wrecked their balance sheet. They used to be in the printing-money business of software licensing; now they're in the business of burning huge amounts of money building data centers and hoping someone will use them. Even if people did use them, the margins aren't great anyway - especially compared to their former software business.
The English class system is anchored in this event - the ruling/upper class was replaced in its entirety by the Norman invaders, leaving two very distinct identities.
A reporter for the Financial Times once asked Gerald Cavendish Grosvenor, Duke of Westminster, what advice he would give to a young entrepreneur wanting to succeed. He joked: "Make sure they have an ancestor who was a very close friend of William the Conqueror."
Didn't an existing class hierarchy (at least in part) enable the Normans to do this? When the aristocratic army was defeated, the entire country was defenceless and they could replace the existing aristocracy.
Indeed, and having replaced the aristocracy they let the 'lower classes' carry on much as they had before, continuing with their existing customs and (lower level) forms of governance - just with new 'top bosses' if you will.
So in comparison to other places that did not have such a wholesale aristocracy replacement, this really cemented the class divide. No longer was the aristocracy 'like you but richer/more powerful', but quite different - different language, customs etc.
1066 was the last successful invasion of the British mainland, so, aside from the odd civil war, no sweeping 'cataclysm' occurred to shake things up. We didn't even have a revolution like the French, instead a gradual (over centuries!) transition to our current democratic system, with a constitutional monarchy (itself a remnant of the old ruling system).
That odd civil war was more than a tiny bit like revolutions elsewhere though (violent beheadings, paranoid totalitarianism, bourgeois ascendancy) - it just happened a little earlier than others. British history is all gradual and continuous, except for the big abrupt cataclysm in the middle of it.
However a random [but well established] package will have been used many many times, thus will have been verified in the wild, and likely will have a bug tracker, updates, and perhaps even a community of people who care about that particular code. No comparison really.
Humans have a 'world model' beyond the syntax - for code, an idea of what the code should do and how it does it. Of course, some humans are better than others at this; they are the ones recognized as good programmers.
Could you please cite these papers? If by AI you mean LLMs, that is not supported by what I know. If you mean a theoretical world-model-based AI, that's just a tautological statement.
One conference paper and one preprint, about LLMs encoding either relative geometric information of objects or simple 2D paths.
One of the papers calls this "programming language semantics", but it is using a 2D grid navigation DSL. The semantics of that language are nothing like actual programming language semantics.
These are not the same as the concept being discussed here, a human "world model" of a computer system, through which to interpret the semantics of a program.
Well, I didn't find any papers off the bat for code world models, but if they can create a world model for a given task, such as geometric manipulation, I don't see why they couldn't for code.
Their world model is entirely a byproduct of language, though, not experience. Furthermore, by deliberate design they do not maintain any form of self-recognition or narrative tracking, which is the necessary substrate for developing validating experience. The world model of an LLM is still a map, not the territory. Even though ours arguably shares some of the same qualities, the identity we carry with us and our self-narrative are incredibly powerful in allowing us to stay aligned with the world as she is, without munging it up quite as badly as LLMs seem prone to.
How do you know ours is any different, that we are not in a simulation or a solipsistic scenario? The truth is that one cannot know, it's a philosophical quandary that's been debated for millennia.
It wasn't an argument. There isn't much point in going to a lot of trouble to make an argument to someone so clearly determined to ignore the truth. It is nevertheless true.
Just saying something is true doesn't make it so. Truth requires justification, and if you can't provide that, then there's no reason to believe it's true. For someone making a claim, the onus is on them to provide evidence.
Otherwise I'll just say I'm right and you're wrong, after all, that's what you're saying.
Simple. I have two sets of data I can pull from to validate a claim an LLM makes. I have the linguistic corpora we produce (artificial memory, analogous to the latent space built by an LLM). You are correct that this modality is shared. I also, however, have an internal self-narrative and experiential state that is non-linguistic, driven by sensation and perception.

An LLM can try to convince me that a bunch of mathematicians would come up with a system that requires making many copies of the same bitwise representation of a block for loading by the execution framework, due to munging of the latent space via quantization. However, I have recollections of my time among mathematicians and theorists. I can replay my lived perceptions of those times, and analyze and extract new meaning from them as my neural hardware evolves.

So when that claim is made, my validation of the world as she is comes to a screeching halt, to the tune of a recollection of a calculus class whose entire point is to pound into you the utility of the fungibility of mathematical representations (substitution), and a further connection to optimization (replace a whole cluster of an equation with a letter, process other things first, and deal with the internal details later). That also synthesizes into the principle that mathematicians are both lazy and clever: alias that bitch, and move right along.

LLMs don't have that without you deliberately injecting the mechanism into their context. They'll in fact just run off the rails.
Now, could an equivalent process be modelled at some point? Probably. It'd be a conscious decision to do so on our part, and given fears over the AI Alignment quandary, it seems a rather fraught direction to carelessly proceed.
It could also be because we don't want a theocratic regime with nuclear weapons and the delivery systems to put them on European capitals.
It could also be that the whole region is simply tired of their bullshit, and would like to normalize their relations with Israel and generally get on with life without nonsense like Hamas and Hezbollah interfering. Note how Lebanon is seeing this as an opportunity to get rid of Hezbollah once and for all.
> It could also be because we don't want a theocratic regime with nuclear weapons and the delivery systems to put them on European capitals.
This is laughable. The current US administration would likely jizz their collective pants if Iran did a nuclear strike on a European capital. They can barely hide their hatred of the EU and what it represents.
No, fuck this noise. Iran is being bombed by the US and Israel for their own evil reasons. Europe has many failures, but this Iran bullshit is not on us. Go pin this on someone else.
Probably a continuation of the 'mowing the lawn' strategy (as used against the Palestinians). Every now and again use massive military force to set back Iran's capabilities, time and effort they spend rebuilding is time and effort not spent causing problems elsewhere.
I'd suggest that a measure like 'density[1]/parameter' as you put it will asymptotically rise to a hard theoretical limit (that probably isn't much higher than what we have already). So quite unlike Moore's Law.
> Instead of only allowing programs that are proven to be typed correctly, you might want to allow all programs that cannot be proven wrong. Lean into gradual typing: everything goes at first, and the typing becomes as strict as the programmer decides, based on how much type information they add.
Yes, this is the way. And if you ensure that the type system is never abused to control dispatch - i.e. can be fully erased at runtime - then a proposed language supplied with some basic types and an annotation syntax can be retrofitted with progressively tighter type-checking algorithms without having to rewire that underlying language every time.
Developers could even choose whether they wanted super-strict checking (which rejects some legal programs) or more lenient checking (which admits some illegal ones), all in the same language!
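Python's optional annotations are an existing example of roughly this idea, and a minimal sketch of the point about erasure: annotations are recorded as metadata but never consulted at runtime, so they cannot affect dispatch, and an external checker (e.g. mypy) can be dialed from lenient to strict without touching the language itself. The function names here are illustrative, not from any particular library.

```python
# Stage 1: untyped - "everything goes at first".
def area(w, h):
    return w * h

# Stage 2: the programmer tightens the contract by adding annotations.
# The runtime behavior is identical; only a static checker cares.
def area_typed(w: float, h: float) -> float:
    return w * h

# The annotations are pure metadata: stored on the function object,
# never used for dispatch, so they could be fully erased without
# changing what the program does.
assert area_typed(3, 4) == area(3, 4) == 12
assert area_typed.__annotations__ == {"w": float, "h": float, "return": float}
```

A checker can then be retrofitted with progressively tighter rules (mypy's per-module strictness options work this way) while the underlying language stays untouched.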