1. Every generation of programmers typically only has a certain window of knowledge, and everything before their time is simply foreign to them. Most developers aren't programming history buffs; they don't read books from the past to learn how technologies work, how we got to now, and so on.
2. When you're starting off on your journey as a programmer you're using a lot of things without understanding their implementation, and for most people I've worked with, that understanding comes from having to debug issues.
> Every generation of programmers typically only has a certain window of knowledge
That's true, of course, but I think the disconnect he's pointing out is: a) deeper than that and b) new. I grew up a ways before he did, and my first serious computer was a Commodore 64. Like his 286, there was nothing you could do with it _except_ program it, and programming was hyper accessible. I immediately understood C's pointer syntax when I first came across it, because the only way to show graphics on a C64 was to write to a specific memory address and the only way to intercept keypresses was to read from one.
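To make that concrete for anyone who never touched that era, here's roughly what it looks like translated into C (the addresses are from the C64 memory map as I remember it - $D020 is the VIC-II border colour register, $0400 is screen RAM, $C5 is where the KERNAL stashes the currently pressed key - and you'd build something like this with cc65; take it as a sketch, not a tested program):

    #include <stdint.h>

    /* Memory-mapped C64 locations (as I recall them): writing to and
     * reading from these addresses is all there was. */
    #define BORDER_COLOR ((volatile uint8_t *)0xD020) /* VIC-II border colour */
    #define SCREEN_RAM   ((volatile uint8_t *)0x0400) /* 40x25 character matrix */
    #define CURRENT_KEY  ((volatile uint8_t *)0x00C5) /* KERNAL: key held down, 64 = none */

    int main(void) {
        *BORDER_COLOR = 0;             /* graphics: write to an address */
        SCREEN_RAM[0] = 1;             /* screen code 1 = 'A' in the top-left cell */
        while (*CURRENT_KEY == 64) { } /* keypresses: read from an address */
        *BORDER_COLOR = 2;             /* a key went down: make the border red */
        return 0;
    }

After you've done that a few dozen times, *p = x in C isn't mysterious at all.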
Fast forward to the 21st century. My son is a smart kid - graduated valedictorian of a competitive high school - but he grew up with "computers" that hide the programming interface so far down that most of his experience with actual programming has been through emulators, so the whole thing feels (and is!) artificial. He still struggles to differentiate between what's "real" and what's emulated, and I can tell from interacting with other programmers his age that he's not alone.
The GP's point is more general and stands. Technologists inevitably start off "in the middle" of a stack of technologies, and then choose to dig down and/or build up from there. It is inevitable that newcomers will not understand the technologies upon which their current work rests. A certain fraction of those will become fascinated and drawn to understand the underpinnings of their work, for an edge, or for the love of it, or both.
This process happens more naturally over time if you grow with the technology. For example, if you started programming with HTML in 1995 you not only have ~30 years of experience, but you've also seen the changes as they occurred, and have a great deal of context about how and why each of them happened. That kind of experience can only happen in real-time, unless some kind soul is willing to write a manual to help newcomers speed-run it. Such a manual is often called a "textbook".
Until maybe 15-ish years ago, it was common for programmers to “bottom out” at the machine-code level. In other words, they came to roughly understand the whole software stack in principle, down to the software-hardware interface (the CPU executing machine code). The topic of this thread is that this has (allegedly) changed: developers at large no longer dig down in their understanding until they hit hardware.
Okay, you might argue that in earlier decades programmers also tended to have an understanding of hardware circuit design and how a CPU, memory, etc. work in terms of logic gates and clocks. But arguably there is a harder abstraction boundary between hardware and software than at any given software-software abstraction boundary.
Dude/ette, pointers WRECKED people's minds (myself included) when we learned about them in CS101 (modified for computer engineers) in 2006. I don't know why either; they make so much sense.
That's actually not true at all. According to interviews, a lot of people miss it. I believe a German documentary was where I first heard that. I'm kind of obsessed with the place and even created a game around the premise of a walled city.
Point taken! I should have considered a subtitle for the post to make my main point clear. That being said, I tried to add nuance to the post and not make any bold predictions. I am happy to revisit the argument in five years and see where we are.
It's talking about how what we call "AI" today will be treated as bog standard - the same way this happened with recommender systems 3-4 years ago, neural networks 4-7 years ago, statistical learning (e.g. SVMs) 7-10 years ago, etc.
The title is a reference to a fairly prominent article about "Big Data" that was based on the same premise.
I read the article, but I still think the criticism of the title is valid. The claim is that the way we talk about AI will be different in 5 years, not that AI will be dead. Likewise, recommender systems, neural networks, and statistical learning are certainly not dead. It's an abuse of the term to grab clicks.
I strongly disagree - especially because the title is itself a reference to "Big Data Will Be Dead In 5 Years", which talked about this same phenomenon, albeit for data engineering.
Titles are not arguments. Some people may want them to be, but they are not.
Engaging with a title just distracts a discussion from the core thesis of a post.
Have these AI True Believers heard the phrase “any sufficiently advanced technology is indistinguishable from magic”? What’s magical about a recommendation system? Or statistical learning?
Well maybe they are? But they are all very specialized tools. And it’s not difficult to understand how they conceptually work. I guess...
Meanwhile a conversational partner in a box that can answer any of my questions? (If they eventually live up to the promise) Umm, how the hell would a normal person not call that magical and indeed “AI”?
I’m sorry, but the True Believers hadn’t made anything even remotely close to AI before contemporary LLMs. That’s why people didn’t call those earlier systems “AI”.
Dawg, LLMs cannot reason; they simply return a response based on statistical inference (voting on the correct answer). If you want one to do anything correctly, you need to do a thought experiment, solve the problem at a 9,000ft view, and hold its hand through the implementation. If you do that, there's nothing it cannot do.
However, if you're expecting it to write an entire OS from a single prompt, it's going to fail just as any human would. Complex software problems are solved incrementally, through planning. If you do all of that planning, it's not hard to get LLMs to do just about anything.
The problem with your example of analyzing something as complex and esoteric as a codebase is that LLMs cannot reason; they simply return a response based on statistical inference. So unless you followed a standard like PSR for PHP and implemented it to a 'T', it simply doesn't have the context to do what you're asking. If you want an LLM to be an effective programmer for a specific application, you'd probably need to fine-tune it and provide instructions on your coding standards.
Basically, the way I've become successful using LLMs is that I solve the problem at a 9,000ft view, instruct the LLM to play different personas, have the personas validate my solution, and then instruct the LLM step-by-step to do all of the monkey work. That doesn't necessarily save me time upfront, but it does in the long run because the LLM makes fewer mistakes implementing my thought experiment.
Fair enough, I might indeed be asking too much, and may not be able to come up with an idea of how LLMs can help me. For me, writing code is easy as soon as I understand the problem, and I sometimes spend a lot of time trying to figure out a solution that fits well within the context, so I thought I could ask an LLM what different things do and mean to help me understand the problem surface better. Again, I may not understand something, but at this point I don’t see the value of code generation after I already know how to solve a problem.
Do you happen to have a blog post or something showing a concrete problem that LLM helped you to solve?
What I've found is that the capabilities of LLMs depend on the problem-solving skills of the person writing the prompts. If you know exactly what needs to be done and can translate that into step-by-step prompts, it can do pretty much anything, but if you're expecting it to actually solve 100% of the problem, you're going to run into issues. Which is to say, you still need to find the solution; you just have the LLM do the monkey work.
I'm working on making a physical console for the Pico-8. It's pretty simple: it runs on Linux booted into kiosk mode, watches for when a 3.5" floppy is inserted or removed, and loads or closes the game via a shell script.
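For the curious, the watcher amounts to something like this (sketched in C here rather than the shell script I actually use, and the cart path and pico8 command line are placeholders rather than my real setup):

    #include <stdlib.h>
    #include <unistd.h>

    /* Placeholder path: wherever the kiosk mounts the floppy. */
    #define CART_PATH "/media/floppy/game.p8"

    int main(void) {
        int running = 0;
        for (;;) {
            int present = (access(CART_PATH, F_OK) == 0); /* is a disk in the drive? */
            if (present && !running) {
                system("pico8 -run " CART_PATH " &");      /* disk inserted: launch the cart */
                running = 1;
            } else if (!present && running) {
                system("pkill pico8");                     /* disk removed: close the game */
                running = 0;
            }
            sleep(1);                                      /* poll once a second */
        }
    }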
I want it to be a C64-style keyboard with all the guts inside it but connecting wirelessly to the display/TV, so what I might do is use an ESP32 to read the game floppy and wirelessly transfer and cache the file to a dongle that plugs into an open HDMI port. Not sure yet.
Amazing! I went through the same thing, but opted to implement the console itself on the ESP32[0]. It ended up being a bit too much, and it's mostly shelved now.
Babysitting LLMs is already my job and has been for a year. It's kind of boring, but honestly, after nearly 20 years in the game I felt like I was approaching the endgame for programming anyway.
Average salary? Salaries are determined by the size of the company, how much value software engineers add, the supply of software engineers, and the location of the office.