Just like the industrial revolution impacted barrel makers (coopers).
Except we aren't yet reaping the full rewards or seeing the skills realignment, so we've still to feel the car-making impact (which came after the revolution, as machines' abilities grew and their relative cost shrank, replacing manual labour).
I guess I get to be the one who brings this up this time. The Luddites were not strictly against technological change; they were a labor movement protesting how capital owners were using a new technology to dispossess workers who had no viable alternatives.
As this is also one of the major risks of AI, and one that has already come to pass, there's a lot we can take from their movement when we don't dismiss "Luddite" as shorthand for "being wrong about technology."
There is a strong correlation between 'Luddite' activity and the suppression of labor organization through strict enforcement of the draconian legislation of the time (labor organizers could be sentenced to death).
It was a last-resort action against oppressive laws and overzealous enforcement, not some ignorant response to technological progress.
Ask it to create a TypeScript server-side hello world.
It produces a JS example.
Telling it that's incorrect (but no more detail) results in it iterating all sorts of mistakes.
In 20 iterations it never once asked me what was incorrect.
In contrast, o4-mini asked me after 5, o4-mini-high asked me after 1, but narrowed the question to "is it incorrect due to choice of runtime?" rather than "what's incorrect?"
I told it to "ask the right question" based on my statement ("it is incorrect") and it correctly asked "what is wrong with it?" before I pointed out no Typescript types.
This is the critical thinking we need, not just (incorrect) reasoning.
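For reference, roughly what I was expecting looks like this (a sketch only, assuming Node.js with @types/node available; `greeting` is just an illustrative helper). The explicit type annotations are exactly what the JS answers lacked:

```typescript
// Minimal server-side "hello world" in TypeScript, not plain JS.
import * as http from "http";

// Typed helper: string in, string out.
function greeting(name: string): string {
  return `Hello, ${name}!`;
}

// The parameter and return annotations are what make this TypeScript.
const handler = (req: http.IncomingMessage, res: http.ServerResponse): void => {
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end(greeting("world"));
};

const server = http.createServer(handler);
// Port 0 asks the OS for any free port.
server.listen(0, () => {
  console.log("listening on", server.address());
});
```

Strip the annotations and you get valid JS, which is presumably why the model kept sliding back to it.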
> Ask it to create a TypeScript server-side hello world.
It produces a JS example.
Well TS is a strict superset of JS so it’s technically correct (which is the best kind of correct) to produce JS when asked for a TS version. So you’re the one that’s wrong.
> Well TS is a strict superset of JS so it’s technically correct (which is the best kind of correct) to produce JS when asked for a TS version. So you’re the one that’s wrong.
Try that one at your next standup and see how it goes over with the team
He's not wrong. If the model doesn't give you what you want, it's a worthless model. If the model is like the genie from the lamp, and gives you a shitty but technically correct answer, it's really bad.
> If the model doesn't give you what you want, it's a worthless model.
Yeah, if you’re into playing stupid mind games while not even being right.
If you stick to just voicing your needs, it’s fine. And I don’t think the TS/JS story shows a lack of reasoning that would be relevant for other use cases.
> Yeah, if you’re into playing stupid mind games while not even being right.
If I ask questions outside of the things I already know about (probably pretty common, right?), it's not playing mind games. It's only a 'gotcha' question with the added context, otherwise it's just someone asking a question and getting back a Monkey's Paw answer: "aha! See, it's technically a subset of TS.."
You might as well give it equal credit for code that doesn't compile correctly, since the author didn't explicitly ask.
As I mentioned, TS/JS was only one issue (a semantic vs technical definition); the other is that it didn't know to question me, making its reasoning a waste of time. I could have asked something else ambiguous based on the context, not a TS/JS example, and it likely still would not have questioned me.
In contrast if you question a fact, not a solution, I find LLMs are more accurate and will attempt to take you down a notch if you try to prove the fact wrong.
Well yes, but still the name should give it away and you'll be shot during PRs if you submit JS as TS :D
The fact is the training data has confused JS with TS, so the LLM can't "get its head" around the semantic, rather than technical, difference.
Also, the secondary point wasn't just that it was "incorrect"; it's that its reasoning was worthless unless it knew whom to ask and the right questions to ask.
If somebody tells you that something you know is right is actually wrong, the first thing you ask them is "why do you think that?", not "maybe I should think about this from a new angle, without any evidence of what is wrong".
It illustrates a lack of critical thinking, and also shows you missed the point of the question. :D
The square artifacts in the dithered image are caused by the error distribution never making a second pass over pixels that have already had error distributed to them; this is a byproduct of the "custom" approach the OP uses. They've traded off (greater) individual colour error for general picture cohesion.
A similar custom approach to prevent second-pass diffusion is in the code too; it's a slightly different implementation that processes the image in 8x8 pixel "attribute" blocks, where the error is not diffused outside the block's bounds. The same artifacts occur there too, but are more distinct as a consequence.
https://github.com/KodeMunkie/imagetozxspec/blob/3d41a99aa04...
Nb. 8x8 is not arbitrary: the ZX Spectrum computer this is used for only allowed 2 colours in every 8x8 block, so seeing the artifact on a real machine matters less, as the whole image potentially had 8x8 artifacts anyway.
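The idea can be sketched like this (TypeScript for brevity, not the linked Java code; `ditherBlocked` and the grayscale/1-bit simplification are mine). Standard Floyd-Steinberg weights, except a neighbour only receives error if it sits inside the same 8x8 "attribute" block:

```typescript
const BLOCK = 8; // ZX Spectrum attribute cell size

// Grayscale (0-255) in, 1-bit (0 or 255) out; error never crosses
// an 8x8 block boundary, which is what produces the square artifacts.
function ditherBlocked(pixels: number[], width: number, height: number): number[] {
  const buf = pixels.slice(); // working copy accumulates diffused error
  const out = new Array<number>(pixels.length);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const i = y * width + x;
      const oldVal = buf[i];
      const newVal = oldVal < 128 ? 0 : 255; // nearest of the two "colours"
      out[i] = newVal;
      const err = oldVal - newVal;
      // Diffuse error to a neighbour only if it's in the same 8x8 block.
      const spread = (dx: number, dy: number, w: number) => {
        const nx = x + dx, ny = y + dy;
        if (nx < 0 || nx >= width || ny >= height) return;
        const sameBlock =
          Math.floor(nx / BLOCK) === Math.floor(x / BLOCK) &&
          Math.floor(ny / BLOCK) === Math.floor(y / BLOCK);
        if (sameBlock) buf[ny * width + nx] += (err * w) / 16;
      };
      spread(1, 0, 7);  // right
      spread(-1, 1, 3); // below-left
      spread(0, 1, 5);  // below
      spread(1, 1, 1);  // below-right
    }
  }
  return out;
}
```

Pixels on a block's right and bottom edges simply drop their error, so each block dithers independently and the seams line up on the 8x8 grid.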
I'm 45 and I love it; no technology problem is unsolvable, and frustration is usually caused by non-technical people (e.g. feature changes halfway through a sprint whose implementation had been planned and refined for the previous month).
If you're a person who gave up adapting and learning - "it's a young man's game" - then perhaps the OP has a point for his case, I've seen it often enough.
The 90s saw COBOL programmers out of work, the 2000s saw VB6 programmers out of work, and my old bread and butter, Java, is being abandoned in AI in favour of Python and TS.
But I love the fact AI is coming for my job, in fact I'm retraining for it, I learnt TS about 10 years ago, I can write C, and my Python 3 is passable.
It keeps me on my toes, and imho as long as anybody, young or old, keeps training on the frontier/edge, they'll never be out of work. The minute you give up that edge... well.
Well, they have kind of known about this kind of problem for 14 years: https://issuetracker.google.com/issues/35889152?pli=1
Although this is not a domain issue, it is one where people can't unlink themselves from company project ownership.
My little project for the highly intricate, messy representation ;) https://github.com/KodeMunkie/shapesnap (it stands on the backs of giants, original was not mine). It's also available on npm.
I learnt how to write them. It's the modern equivalent of re-skilling. When my role as it is today is replaced, I'll already be on the ground floor for all things AI.
If you're in software dev and can't already reskill rapidly then you're probably in the wrong job.
We even have our own Luddites :D