If I have a complicated set of business-logic functions that all return booleans, the type-checker can tell me I got a boolean back, but not that in xyz scenario my result should be true and in abc scenario it should be false. You must write a test for that; the types are only marginally helpful for checking correctness. There are many more examples like this. If I have to write the tests anyway, why do I want to spend time writing all the types too, if my tests cover the things I care about and the types only cover details like "you returned a number from a function marked boolean"? That type of error is super easy to catch with a test, so I didn't need the types.
> the types only cover details like "you returned a number from a function marked boolean", that type of error is super easy to catch with a test, so I didn't need the types
Thanks for your reply; this is why I asked: the kind of type-checking you describe only scratches the surface of what a robust type system can do.
In practice, it can cover much more than "you returned a number from a function marked as boolean", and so it saves you from writing lots of unit tests, leaving you free to write only the few tests that make sense for your business logic.
Once you have a proper type system in place, you can do much more than:
> [...] a complicated set of business-logic functions that all return booleans [...]
You can encode more useful things in the type of the functions than just "it returns a boolean". Of course if every function you write only returns a boolean you won't get much from the type system! But that's circular reasoning...
It's not circular reasoning. I literally have piles of boolean-typed functions in business logic, and no amount of type-checking will validate that (defn should-go [light-is-green? intersection-is-clear?] (and light-is-green? intersection-is-clear?)) is correct. That requires an actual test. Obviously my example is trivial, but these functions quickly become non-trivial, and the type-checker will never save me if I replace "and" with "or".
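To make it concrete, here is the same example translated to TypeScript (my translation, names hypothetical): the correct version and the buggy one type-check identically, and only a test tells them apart.

    // Hypothetical TypeScript translation of the Clojure example above.
    function shouldGo(lightIsGreen: boolean, intersectionIsClear: boolean): boolean {
      return lightIsGreen && intersectionIsClear; // swap && for || and it still type-checks
    }

    // Only actual tests pin down the logic:
    console.assert(shouldGo(true, true) === true);
    console.assert(shouldGo(true, false) === false); // fails if && becomes ||
    console.assert(shouldGo(false, true) === false); // fails if && becomes ||
    console.assert(shouldGo(false, false) === false);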
> I literally have piles of boolean-typed functions in business logic
Yes, but this is working with legacy code that wasn't written with a mature type-checking system in mind. The boolean type carries very little meaning, so there's not much you can reason about, as you've noticed.
If your argument is "a modern type system cannot help me much with legacy code that didn't leverage it to begin with" I can understand you and even agree somewhat. But that's a different argument.
That's what I mean by circular reasoning: if you don't use the types, then yes, a type system won't be of much use. You'll spend time writing unit tests, which for legacy code seems like an adequate approach.
If your type-checker is "mature" enough to encode the full value-level semantics of the code it types (rendering the code itself superfluous, since you'd then just compile the type-level code to get your app), you then have the problem of validating the logic of the type-level code, and the natural solution to that problem is, again, testing.
I used neither the word legacy nor modern. I don't care how fancy your type checker is today or tomorrow: the arbitrarily complicated boolean-typed function I write tomorrow gains nearly zero value from type validation. It gains actual validation of correctness from real tests.
It's a legacy problem in the sense that your system is a badly designed mess of functions that only return booleans. You could encode additional semantics in the types. Of course, if you don't, the type system will be unhelpful.
This is a modeling problem at this point. Types can make your model better by encoding more things. It's often not easy or feasible to refactor the mess, so sometimes you're stuck with it.
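To sketch what I mean (TypeScript here, and the names are hypothetical): replace the bare booleans with types that carry domain meaning, and the compiler starts catching mistakes that would otherwise silently type-check.

    // Sketch: domain types instead of bare booleans (names are hypothetical).
    type Light = 'green' | 'yellow' | 'red';
    type Intersection = 'clear' | 'blocked';

    function shouldGo(light: Light, intersection: Intersection): boolean {
      return light === 'green' && intersection === 'clear';
    }

    shouldGo('green', 'clear');    // ok
    // shouldGo('clear', 'green'); // compile error: swapped arguments
    // shouldGo('gren', 'clear');  // compile error: typo caught statically

With two bare booleans, both of those call-site mistakes would pass the type-checker; the and-vs-or question still needs a test, but a whole class of plumbing tests disappears.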
Can you explain concretely how adding types would make a bunch of boolean-valued functions and some tests better? What types would you introduce to my example?
You need dependent types for this to fly. But the main argument is that there's an infinite number of ways your boolean functions can go wrong if they mistakenly take a wrong input type or return a non-boolean. Types reduce this universe of runtime errors by 99 percent. It's still infinite, but you have far fewer possible errors.
That's my point: the type errors being reduced isn't that interesting, because I usually find and fix those extremely quickly. What I don't find easily, but am most interested in, is the actual logic. So I must write tests. And when I write tests, I get the type errors covered too. So why should I spend my precious time modeling inside a type system when the ROI is minuscule and the cost is littering my codebase with types? Just write the tests.
You can write the test in the type system with dependent types. It's the same thing, but your language needs to support this. TypeScript supports it: the static analyzer in your IDE, or the compiler itself, will then run the test. You get a model and a test for free.
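A minimal sketch of the idea in TypeScript (hypothetical names; its conditional types are not full dependent types, but they are enough for this):

    // Type-level "test" machinery: checked by the compiler, never executed.
    type Equal<A, B> = [A] extends [B] ? ([B] extends [A] ? true : false) : false;
    type Expect<T extends true> = T;

    type Light = 'green' | 'yellow' | 'red';

    // The business rule, encoded at the type level.
    type ShouldGo<L extends Light, Clear extends boolean> =
      L extends 'green' ? Clear : false;

    // These lines fail to compile if the rule above is wrong:
    type T1 = Expect<Equal<ShouldGo<'green', true>, true>>;
    type T2 = Expect<Equal<ShouldGo<'green', false>, false>>;
    type T3 = Expect<Equal<ShouldGo<'red', true>, false>>;

Flip the rule's logic and T3 becomes Expect<false>, which the compiler rejects before anything runs.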
Depending on your language, some things may not be easy to encode in types. "Verify that the function handles gracefully the situation where the map it's passed doesn't contain that one magic entry." That won't be easy to encode in a type...
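To be fair, a type can get you partway there. A sketch (TypeScript, hypothetical names): the compiler can force you to handle the missing entry, but it cannot verify that your handling is actually graceful; that part still needs a test.

    // Sketch (hypothetical names): under strict mode, Map.get returns
    // number | undefined, so the compiler rejects using the result as a
    // number until the missing-entry case is handled.
    function lookupTimeout(config: Map<string, number>): number {
      const t = config.get('magicTimeout'); // inferred as number | undefined
      return t ?? 30; // but only a test can say whether 30 is the right fallback
    }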
Of course not everything can be encoded in types, and also YMMV regarding the specific language you're using.
But now you know why I asked the question: people who make claims like "types make only guarantees about the type of values" usually don't understand what modern type-checking can do or what can be encoded in types. People making these claims often believe type-checking is limited to validating that you only pass strings to a function expecting a string argument, or some minor variation of that.
Types are just tests. By removing all the mindless, repetitive tests you write, typechecking can free you up to write the actual meaningful tests that you truly need to write.
It's not "pick one", it's "use type checking to help you only write the unit tests that matter".
P-code was still offered as an option because some wanted the smaller output binary sizes, and the build process was faster[0].
Some incorrectly assume that the native option wasn't really fully compiled because the main supporting library (msvbvm60.dll) was still used[1], but this was for common library functions[3] and the interpreter portion was not touched.
There were unofficial tools that would statically link your exe with the relevant VB runtime (and certain other libraries) but the use of those was rare.
----
[0] Though I don't think the build-speed difference was actually significant for many, if any, workflows, even on really slow kit.
[1] Some stopped distributing it after a time, to reduce download sizes, since it was included with Windows so users already had it. Windows 7 (and maybe Vista?) included msvbvm60.dll and friends by default, and most XP and 98 installs[2] had it too, as it came with Internet Explorer updates.
[2] Though there was a compatibility break at one point that meant you needed to recompile with VB6sp6 if you hadn't included a local copy in your app's directory.
[3] Much like many C programs don't have glibc statically linked into them, but work because it is practically ubiquitous on the systems they target.
> She has also done work on religion,[57] and has argued that Stalin was a god of science, and an incarnation both of the god Thoth and the Christian God.
I took the time to follow the link and read her very short essay on Stalin the God, and it's very clearly parody and an attempt at absurdist humor. Russians and Kazakhs can do dry/straight-faced humor as well as the Brits.
Misrepresenting the essay as her honest beliefs is like arguing Monty Python really cannot tell a dead parrot from a live one.
(Read until the end; I'm merely commenting on the Stalin God thing. I do agree some things are weird.)
I just read it, and the excerpts, the random images she chooses to include in it, the straight-faced giant leaps in logic (Thoth was the Egyptian god of knowledge, the Christian god came from the Middle East, therefore they -- and Stalin -- are one and the same), all reek of something that could have been posted on Something Awful.
I mean, come on:
> Hence Stalin was not just an ancient pagan god - but he was that God christians believe in. In a critical moment for the country and for the whole humanity he came down to Earth, wearing a cool mustache avatar, and restored the order: he revived economics and created the powerful science. He created a communist paradise for righteous people, while bad people were sent to GULAG. He won the war against the evil forces of Hitler, and even restored Israel, as he promised to do long time ago.
"Wearing a cool mustache avatar"? You really believe she wrote this in earnest?
Russians have a warped sense of humor. Though I suppose in this day and age, it's impossible to tell when someone is being sarcastic on the Internet ;) I know I can't anymore!
Edit: I do admit her subjects of interest are all over the place though. I'll grant you that!
Having heard her before, I can assure you she's not sarcastic. This completely fits her character of a pure, devoted paladin of a totalitarian imperialistic communist (heh...) religion of her own making, with a bizarrely self-contradicting pantheon in her head. Putting it bluntly, she's batshit insane. Which is probably the only kind of person that can be good at this sort of work.
Doesn't Anna's Archive have Sci-Hub as one of its main sources? If so, claiming it's lucky that this exists as an alternative to Sci-Hub doesn't make a lot of sense.
But given that we know nothing about the person calling themselves "Anna", how can you be sure it's better than Sci-Hub? Maybe this Anna is also living in Russia.
What's wrong with Sci-Hub, anyway? (Other than it not being updated anymore, of course).
Why do people keep thinking they're intellectually superior when negatively evaluating something that is OBVIOUSLY working for a very large percentage of people?
I've been asking myself this since AI started to become useful.
Most people would guess it threatens their identity. Sensitive intellectuals who found a way to feel safe by acquiring deep domain-specific expertise suddenly feel vulnerable.
In addition, a programmer's job, on the whole, has always been something like modelling the world in a predictable way so as to minimise surprise.
When things change at this rate/scale, it also goes against deep-rooted feelings about the way things should work (they shouldn't change!).
Change forces all of us to continually adapt and to not rest on our laurels. Laziness is totally understandable, as is the resulting anger, but there's no running away from entropy :}
Hopefully they are not vibe-coding that crap, though. Do you want to make those apps even more unreliable than they already are, and encourage devs not to learn any lessons (as vibe coding prescribes)?
> Why do people keep thinking they're intellectually superior when negatively evaluating something that is OBVIOUSLY working for a very large percentage of people?
I'm not talking about LLMs, which I use and consider useful, I'm specifically talking about vibe coding, which involves purposefully not understanding any of it, just copying and pasting LLM responses and error codes back at it, without inspecting them. That's the description of vibe coding.
The analogy with "monkeys with knives" is apt. A sharp knife is a useful tool, but you wouldn't hand it to an inexperienced person (a monkey) incapable of understanding the implications of how knives cut.
Likewise, LLMs are useful tools, but "vibe coding" is the dumbest thing ever to be invented in tech.
> OBVIOUSLY working
"Obviously working" how? Do you mean prototypes and toy examples? How will these people put something robust and reliable in production, ever?
If you meant for fun & experimentation, I can agree. Though I'd say vibe coding is not even good for learning, because it actively encourages you not to understand any of it (or it stops being vibe coding and turns into something else). Is that what you're advocating as "obviously working"?
> The analogy with "monkeys with knives" is apt. A sharp knife is a useful tool, but you wouldn't hand it to an inexperienced person (a monkey) incapable of understanding the implications of how knives cut.
Could an experienced person/dev vibe code?
> "Obviously working" how? Do you mean prototypes and toy examples? How will these people put something robust and reliable in production, ever?
You really don't think AI could generate a working CRUD app which is the financial backbone of the web right now?
> If you meant for fun & experimentation, I can agree. Though I'd say vibe coding is not even good for learning, because it actively encourages you not to understand any of it (or it stops being vibe coding and turns into something else). Is that what you're advocating as "obviously working"?
I think you're purposefully reducing the scope of what vibe coding means to imply it's a fire and forget system.
Sure, but why? They already paid the price in time/effort of becoming experienced, why throw it all away?
> You really don't think AI could generate a working CRUD app which is the financial backbone of the web right now?
A CRUD? Maybe. With bugs and corner cases and scalability problems. A robust system in other conditions? Nope.
> I think you're purposefully reducing the scope of what vibe coding means to imply it's a fire and forget system.
It's been pretty much described like that. I'm using the standard definition. I'm not arguing against LLM-assisted coding, which is a different thing. The "vibe" of vibe coding is the key criticism.
> Sure, but why? They already paid the price in time/effort of becoming experienced, why throw it all away?
You spend 1/10 of the time doing something, and you have the other 9/10 of that time to yourself.
> A CRUD? Maybe. With bugs and corner cases and scalability problems. A robust system in other conditions? Nope.
Now you're just inventing stuff: "scalability problems" for a CRUD app? You obviously haven't used it. If you know how to prompt the AI, it's very good at building basic stuff, and more advanced stuff with a few back-and-forth messages.
> It's been pretty much described like that. I'm using the standard definition. I'm not arguing against LLM-assisted coding, which is a different thing. The "vibe" of vibe coding is the key criticism.
By whom? Wikipedia says:
> Vibe coding (or vibecoding) is an approach to producing software by depending on artificial intelligence (AI), where a person describes a problem in a few sentences as a prompt to a large language model (LLM) tuned for coding. The LLM generates software based on the description, shifting the programmer's role from manual coding to guiding, testing, and refining the AI-generated source code.[1][2][3] Vibe coding is claimed by its advocates to allow even amateur programmers to produce software without the extensive training and skills required for software engineering.[4] The term was introduced by Andrej Karpathy in February 2025[5][2][4][1] and listed in the Merriam-Webster Dictionary the following month as a "slang & trending" noun.[6]
Emphasis on "shifting the programmer's role from manual coding to guiding, testing, and refining the AI-generated source code" which means you don't blindly dump code into the world.
Doing something badly in 1/10 of the time isn't going to save you that much time, unless it's something you don't truly care about.
I have used AI/LLMs; in fact I use them daily and they've proven helpful. I'm talking specifically about vibe coding, which is dumb.
> By whom? [...] Emphasis on "shifting the programmer's role from manual coding to guiding, testing, and refining the AI-generated source code" which means you don't blindly dump code into the world.
By Andrej Karpathy, who popularized the term and describes it as mostly blindly dumping code into the world:
> There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It's possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like "decrease the padding on the sidebar by half" because I'm too lazy to find it. I "Accept All" always, I don't read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I'd have to really read through it for a while. Sometimes the LLMs can't fix a bug so I just work around it or ask for random changes until it goes away. It's not too bad for throwaway weekend projects, but still quite amusing. I'm building a project or webapp, but it's not really coding - I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.
He even claims "it's not too bad for throwaway weekend projects", not for actual production-ready and robust software... which was my point!
> Writing computer code in a somewhat careless fashion, with AI assistance
and
> In vibe coding the coder does not need to understand how or why the code works, and often will have to accept that a certain number of bugs and glitches will be present.
and, M-W quoting the NYT:
> You don’t have to know how to code to vibecode — just having an idea, and a little patience, is usually enough.
and, quoting from Ars Technica:
> Even so, the risk-reward calculation for vibe coding becomes far more complex in professional settings. While a solo developer might accept the trade-offs of vibe coding for personal projects, enterprise environments typically require code maintainability and reliability standards that vibe-coded solutions may struggle to meet.
I must point out this is more or less the claim I made and which you mocked with your CRUD remarks.
> Doing something badly in 1/10 of the time isn't going to save you that much time, unless it's something you don't truly care about.
You're adding "badly" like it's a fact when it is not. Again, in my experience, in the experience of people around me, and in many accounts from people online, AI is more than capable of doing "simpler" stuff on its own.
> By Andrej Karpathy, who popularized the term
Nowhere in your quoted definitions does it say you don't *ever* look at the code. M-W says non-programmers can vibe code, and that it's done in a "somewhat careless fashion"; none of that implies you CANNOT look at the code for it to be vibe coding. If Andrej didn't look at it, that doesn't mean the definition says you are not to look at it.
> which you mocked with your CRUD remarks
I mocked nothing; I just disagree with you, since as a dev with over 10 years of experience I've been using AI for both my job and personal projects with great success. People who complain about AI expect it to parse "Make an iOS app with stuff" successfully, and I am sure it will at some point, but for now it requires finer-grained instructions to ensure its success.
It's not obvious that it's "working" for a "very large" percentage of people, probably because this very large group of people keeps refusing to provide metrics.
I've vibe-coded completely functional mobile apps, and used a handful of LLMs to augment my development process in desktop applications.
From that experience, I understand why parsing metrics from this practice is difficult. Really, all I can say is that codegen LLMs are too slow and inefficient for my workflow.
Vibe coding, as explained by the popularizer of the term, involves no coding. You just paste error messages to the LLM, paste its response back, paste the new error messages in, paste the next response, and pray that after several iterations the thing converges to a result.
It involves NOT looking at either the LLM output or the error messages.
A case can be made that vibe coding presupposes an experienced coder, as the author of the term most definitely is, and I feel this context is at the very least being conveniently omitted at times. Whether he truly did nothing at all, or glanced at 1% of the generated code to check that the model wasn't getting lost, is important, as is being able to know what to ask the model for.
Horror stories from newbies launching businesses and getting their data stolen because they trust models are to be expected, but I would not call them vibe coding horror stories, since there is no coding involved, even by proxy; it's copy-pasting on steroids. Blind copy-pasting from Stack Overflow wasn't coding for me back then either. (A minute of silence for SO here. RIP.)
The problem with this discussion is that different interlocutors have different opinions of what vibe coding really means.
For example, another person in this thread argues:
> I'd rather give my green or clueless or junior or inexperienced devs said knives than having them throw spaghetti on a wall for days on end, only to have them still ask a senior to help or do the work for them anyways.
So they are clearly not talking about experienced coders. They are also completely disregarding the learning experience any junior coder must go through in order to become an experienced coder.
This is clearly not what you're arguing though. So which "vibe coding" are we discussing? I know which one I meant when I spoke of monkeys and sharp knives...
I mean it very literally, taking what he said together with who said it: an experienced professional sculpting a solution using a very complex set of tools, with a clear idea in his head, but with an unusual and slightly uncomfortable disinterest in the exact details of how the final product looks from the inside.
He seems to think it barely involves coding ("I don't read the diffs anymore, I Accept All [...] It's not really coding"), and that it's only good for goofing and throwaway code...
I'd rather give my green or clueless or junior or inexperienced devs said knives than having them throw spaghetti on a wall for days on end, only to have them still ask a senior to help or do the work for them anyways.
I'm not advocating for vibe coding; that's new-age hipster talk. But just using AI for help, assistance, and grunt work is where we have to go as an industry.
Something Awful is also missing from this history. Maybe too niche? Though for geeks and gamers it was well known, and (checks Wikipedia) it was launched in 1999...
It was certainly a notable part of the internet culture of the era.
I think it might not be well-known how much of current internet culture cascaded out from the hive that was the SA forums. 4chan was started by an SA goon, as was bellingcat, for example.
If you were around and paying for media at the time, you'd have seen it before films at the cinema and as a pre-menu sting (sometimes unskippable) on some DVDs. One of those examples where paying customers were the only ones irritated by such a measure.
It certainly had wide audience awareness in its intended form, which is why the many comedy interpretations (https://m.youtube.com/watch?v=ALZZx1xmAzg probably being the most famous) came into being and worked so well.
Some of these I had never heard of, and some of course are early internet history that happened when I was too young. It's crazy how some still seem very recent in my memory, like Homestar Runner. It still feels like yesterday.
Never heard of the helicopter game though. An early "Flappy Bird"!
I wish the series continued past 2007, since there are some interesting artifacts beyond that date.
This will achieve their goal of replacing half the staff with AI... at least I think so, I should check with is-half-ai.