Ask HN: Would you consider starting to learn a programming language now?
24 points by nbzso 10 months ago | 44 comments
So my question to you is this:

Would you consider starting to learn a programming language now that ChatGPT is here? And what path would you recommend to young people who want to become programmers? Machine learning? Prompt engineering?

I would be grateful if you could share your thoughts and opinions on this matter. Thanks.




You could say that Google Translate etc. has made learning more than one spoken language obsolete. However, new languages also teach you new ways of thinking. Multiple studies have shown the benefits of being a polyglot.

I feel it's the same with programming languages. This advice from Peter Norvig [1] was very helpful for me personally:

  Learn at least a half dozen programming languages. Include one language that emphasizes class abstractions (like Java or C++), one that emphasizes functional abstraction (like Lisp or ML or Haskell), one that supports syntactic abstraction (like Lisp), one that supports declarative specifications (like Prolog or C++ templates), and one that emphasizes parallelism (like Clojure or Go).
[1] https://norvig.com/21-days.html#:~:text=Learn%20at%20least,C...).


Very few people remember this now, but when outsourcing was starting to become a thing circa 1998-2000, it was "common knowledge" that learning programming or getting a CompSci degree, etc., was going to be pointless because all those jobs were just going to be outsourced to India.

You can see that has proven to be 100% accurate.


In some cases it lasted even longer than that. In 2004 at a Canadian University I was told during the student orientation that CS was not a great choice due to outsourcing.


There are many more available processing cores than potential human workers.


That’s true. It’s just that the future hasn’t happened yet, so we cannot know for sure how it will play out. We can only make more or less educated guesses, and be prepared to sometimes be surprised, since there were developments going on that we simply overlooked.


Outsourcing always has a lot of challenges. AI is a completely different beast.


No matter how many frameworks, programming languages, cloud platforms, and tools we have invented, demand for programmers has stayed high.

An LLM is another tool. Until it can talk to stakeholders and build something it can be held accountable for, to a high standard, that actually solves the problem and does well on the thousand or so factors that make good software, I don’t worry about it right now.

AI is a threat to programmers precisely when it is a threat to civilisation. It may change the job substantially, but change is normal; the www, mobile, VC, the cloud, the JavaScript explosion, functional programming, and so on all changed it too.


It also has a lot of challenges and needs quality control, automated quality control even.


I think the current state of GPT-4 is overhyped when it comes to programming. It's novel and useful for steering you in the right direction, and it can reproduce simple hello-world-like examples (even Pong and other common programs), but in my experience it fails when things become even slightly nontrivial. Things might change, but it seems like GPT-4 has similar performance to GPT-3.5, so there might ultimately be some limit to the scaling. Regardless, learning a new skill, even an obsolete one, is almost invariably a good and enriching thing.


But the typical program decomposes into a lot of grunt functions/code, so it's a productivity multiplier, meaning you can have fewer developers and do more. You've got to know what you expect as output; it just saves time. It's not just "hello world", but it's not the complete app either.

I'd be curious where it failed for you.


I find it constantly fails and hallucinates when working with existing systems that weren’t designed in a standard way. For example, a database that isn’t normalized well and has odd one-to-one mappings tripped up its ability to problem-solve around Hibernate annotations.

So perhaps if it builds the system it can build a few new features just fine, but it struggles to problem-solve when the original design doesn’t follow its expected pattern.


I would try providing it the schema as the first message, just for its internal memory, and see if that works. But yeah, I usually ask it to write domain-independent stuff and then adapt that.


In one case I tried using it to help me find the right functions in `oauthlib` that can validate a two-legged OAuth 1.0 signed message. It couldn't really get it and made up information about how OAuth works, but it did get me on the right track so I'm thankful in that regard.
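
For reference, roughly the sort of thing I was after looks like this: validating the signature with oauthlib's SignatureOnlyEndpoint and a minimal RequestValidator. This is a sketch from memory with placeholder credentials, not verified code, so double-check the validator hooks against the oauthlib docs:

  # Rough sketch: validating a two-legged OAuth 1.0 request with oauthlib.
  # The consumer key/secret are placeholders; real code needs proper
  # nonce/timestamp replay checks and should keep enforce_ssl on.
  from oauthlib.oauth1 import RequestValidator, SignatureOnlyEndpoint

  CLIENT_KEY = "exampleconsumerkey12345"      # placeholder credentials
  CLIENT_SECRET = "exampleconsumersecret"

  class TwoLeggedValidator(RequestValidator):
      enforce_ssl = False                     # relax only for local testing

      def validate_client_key(self, client_key, request):
          return client_key == CLIENT_KEY

      def get_client_secret(self, client_key, request):
          return CLIENT_SECRET

      def validate_timestamp_and_nonce(self, client_key, timestamp, nonce,
                                       request, **kwargs):
          return True                         # real code should reject replays

      @property
      def dummy_client(self):
          return "dummyconsumerkey00000"

  endpoint = SignatureOnlyEndpoint(TwoLeggedValidator())
  valid, _ = endpoint.validate_request(
      "https://example.com/api/resource",
      http_method="GET",
      headers={"Authorization": "OAuth oauth_consumer_key=..."},  # pass the incoming request's real headers here
  )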

I've also been using it while writing Common Lisp, and I'm not sure if it's just because it's a more obscure language, but it commonly produces nonsensical code, recommends functions that don't exist or work differently than it thinks they do, or produces non-idiomatic weird stuff.

It also falls apart when writing Z80 assembly, again just generating nonsense. It really is a cool tool though, and I use it to get me steered in the correct direction. I've seen other people mention that it can identify unknown unknowns, and I concur it's an amazing tool for that. We'll see whether, with more specialized training on programming-related datasets, it can get better; I wouldn't write off the possibility. It won't be stealing your job in its current state, though.


>I've also been using it while writing Common Lisp, and I'm not sure if it's just because it's a more obscure language, but it commonly produces nonsensical code, recommends functions that don't exist or work differently than it thinks they do, or produces non-idiomatic weird stuff.

I've been using it for Common Lisp as well and have had the same experience. For example, when asking it to help me generate some simple CRUD operations using the cl-sqlite library, it just comes up with function names that simply do not exist in that library [0]: `sqlite3:execute-command`, `sqlite3:with-query-results`, `sqlite3:get-result`.

[0] https://cl-sqlite.common-lisp.dev/


It seems to really fall apart in anything niche, even if you're just asking information about a topic. Doesn't have enough training data on that topic.


At least in my experience, those lower level functions need to integrate well. From what I'm hearing, these AI systems don't play well with integration, especially outside of well defined API specs.


Right, it saves time. But you have to know enough to spot when it's wrong, and to be able to figure out what to do when it's wrong. To do that, you still have to know a language.


In my experience GPT-4 is much better at producing correct code than GPT-3.5. Can you provide an example where it’s not the case?


GPT-4 doesn't seem anywhere close to writing or intelligently modifying medium/large programs. This is what most professional programmers do.

LeCun at least believes this is a fundamental limitation of AR-LLMs that can't be overcome (e.g. his "Unpopular Opinion" slide -- https://drive.google.com/file/d/1BU5bV3X5w65DwSMapKcsr0ZvrMR...).


For young people, I would recommend learning languages they would probably never use in production: APL, Forth, Prolog, Scheme, and one or two variants of assembly. Unlike modern mainstream languages, these really expand your thinking vocabulary.

And the practical reason why I'd recommend learning these now: you'll never have time to play with them when you're not young anymore. The whole domain of knowledge they represent will forever remain a missed opportunity.

In my book Geometry for Programmers (https://www.manning.com/books/geometry-for-programmers), I also advocate investing in mathematical education and a computer algebra system. Any system. I propose SymPy but it's only because it's free and ridiculously simple to get started with.

The reason for this is also simple. Mathematical knowledge is non-perishable. ChatGPT can write boilerplate for you, and in any language too. But to solve a real-world problem with math, you need a computer algebra system to solve your equations, and your own head to compose these equations. That's something beyond the reach of LLMs.
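
To make that concrete, here's a toy example (not from the book) of the workflow I mean: you compose the equations, SymPy does the algebra, and the symbolic result becomes plain code.

  # Toy example: where does the line y = 2x + 1 intersect the parabola y = x**2?
  import sympy as sp

  x, y = sp.symbols("x y")
  solutions = sp.solve([sp.Eq(y, 2 * x + 1), sp.Eq(y, x**2)], [x, y])
  print(solutions)        # x = 1 - sqrt(2) and x = 1 + sqrt(2), with matching y values

  # Once the math is settled, lambdify turns the symbolic expression into an ordinary function.
  parabola = sp.lambdify(x, x**2)
  print(parabola(3.0))    # 9.0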


I bought the APL book (https://www.amazon.com/gp/product/0471430145) several years ago (though it wasn't >$150 then) because I had a professor in college who loved to talk about APL.

I've opened it to skim a few times, and here's what I've realized:

- purpose-built machines (and languages) are always better at their purpose than general-purpose languages/machines

- I don't know nearly enough math for APL to [fully] make sense

- a great deal of elegance and methods of thinking have been lost as languages and machines (virtual and physical) disappear

- flexibility is a wonderful thing in languages and hardware


> And what path would you recommend to young people who want to become programmers? Machine learning? Prompt engineering?

Become a software engineer and use the best tools available to you to learn and do your job. This means LLMs and everything that was available before them, like search, Stack Overflow, documentation, forums, books, courses, etc.

You don't need to work on machine learning in order to take advantage of it.

"Prompt engineer" is a bunch of bs, stop trying to make it happen. The language models are a useful tool, they're not your job. It's as stupid as someone claiming to be an "IDE engineer" or a "stackoverflow engineer".


>The language models are a useful tool, they're not your job. It's as stupid as someone claiming to be an "IDE engineer" or a "stackoverflow engineer".

Agree if you're using ChatGPT to learn/generate/debug code for you.

But if you use an LLM to generate, summarize, or classify text from within an app, then I think it is a skill, just like being able to work with the Stripe API. Since the API takes freeform text as input and the output varies based on how things are worded, there is some extra nuance needed to use it correctly, but I think that's basically it. Also, there's a chance the skill will grow stale quickly as LLMs improve.
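
As a sketch of what I mean by "LLM behind an API" work (the model name and prompt are placeholders, and the client interface changes between SDK versions), a text classifier inside an app looks roughly like this:

  # Rough sketch: using an LLM to classify freeform support tickets inside an app.
  # Assumes the OpenAI Python SDK with OPENAI_API_KEY set in the environment.
  from openai import OpenAI

  client = OpenAI()

  def classify_ticket(text: str) -> str:
      """Return one of: billing, bug, feature_request, other."""
      response = client.chat.completions.create(
          model="gpt-4",      # placeholder model name
          temperature=0,      # keep the output as stable as possible
          messages=[
              {"role": "system",
               "content": "Classify the support ticket as exactly one of: "
                          "billing, bug, feature_request, other. "
                          "Reply with the label only."},
              {"role": "user", "content": text},
          ],
      )
      return response.choices[0].message.content.strip()

  print(classify_ticket("I was charged twice for my subscription this month."))

Most of the nuance ends up in the system prompt and in handling replies that don't match one of the expected labels.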


“Prompt engineer” is the new “Google fu master”


ChatGPT is not here. It's certainly talked about, a lot, but for software engineering the proverbial answer to most questions beyond a beginner level is: It Depends. There are so many factors that go into building and maintaining a working, growing system beyond the text files that make up the code. The programming language is a tool. ChatGPT has potential to also be a tool. Humans still need to think through problems and constraints and come up with solutions.

If you're interested and programming is something you like, yes go learn a programming language and also learn software engineering topics.


It reminds me of a story about robots trying to take over acrobats' jobs. The acrobat did flips, and when the robot tried to flip, it damaged itself.

Has any ChatGPT-generated code passed quality control, or been patched to prevent exploits? If not, then code with quality and security built into your work.

Yes I am learning Python.


Just wait until Boston Dynamics robots can do flips without damage.


https://www.youtube.com/watch?v=fRj34o4hN4I The future is 5 years ago.


You're not going to be able to assess the outputs of the LLMs if you can't program on your own. LLMs are definitely going to make us all more productive and maybe a bit cheaper in aggregate (supply and demand and all) but even assuming the current wildly impressive improvement curve keeps up it seems unlikely that programming will stop being required. It'll just change.


I think the threat of LLMs (Large Language Models) is way overblown. They may even be helpful for beginners.

For young people, learning to code and studying how computers work are the best skills to learn.


I've recently been thinking of this as similar to data warehouses vs. a normal relational database. The data warehouse has a variety of use cases around analytics or marketing where it has replaced RDBs, but for other use cases – especially around your core application transactions – you still need your database.

Similarly, if you're doing something like generating medical/therapy notes or SAT questions, then GPT is great, but for transactions or cases where exact behavior is a requirement, you sort of need a discrete set of instructions that only a programming language can provide. You're not going to see payment systems built by GPT, in the short term at least.


You are still going to need to know how to code, to troubleshoot, and to know when the LLM is gaslighting you. You will also need to know enough to take advantage of all the new APIs that are coming out.


I don't even know what I want to do.

Maybe I'd recommend stuff like healthcare since that's going to still need individuals for a long time.


You don’t learn a programming language for the sake of it. Nobody cares about the language anymore.

You learn to be a software engineer. Language is just one of the obstacles to overcome on your way to delivering value or a product.

Nobody, not even most people (let alone AI), will do this job for you. And this is what you get paid for, not for typing compilable letters.


>You don’t learn a programming language for the sake of it

Some of us do

>Nobody cares about the language anymore

Only kinda. Yes, strictly speaking, the "language doesn't matter" (only the frameworks and APIs do). But practically speaking, if you're in a room of Python programmers, and you're the lone weirdo trying to write in VB6 (or vice versa) ... you're gonna run into issues :)


As a business owner, I would fire those who spent time arguing about VB6 and promote the VB6 guy who delivered some value.


fwiw ... I picked VB6 because it's the last version of VB I liked (I used a couple of editions prior, and tried to use VB .NET in '03 ... hated it) :)


Yes, of course. This is a silly question.


I know a little Python... I'm considering taking on a fairly large refactoring job in Python because it's my favorite tool, and there were breaking changes made to the wxWidgets library that it depends on. I'll be using Copilot to help.


Absolutely yes and I would not rely on ChatGPT for anything more than simple snippets that can be quickly verified.

I think ChatGPT could be helpful with things like: "please convert this function written in language X to language Y in an idiomatic way"


I don't think it makes sense to forgo an education just because someone (or something) is better than you, or because it might not make you any money.


I'm learning Rust.

ChatGPT is really good at explanations for code.


But oh my god is it slow. I would really just keep formulas for doing lifetime algebra, since that's basically all that Rust is about. Then copy-paste a bunch of stuff from a cheat sheet, find a reasonable IDE, and then maybe use ChatGPT for the dark corners.


It can explain snippets line by line!



