
The more I use GPT, the less I'm worried. It is a tool, and a good one, but not a replacement for the thought required to design an app that will function, scale, and have good UX to result in a marketable product. So use it and enjoy its benefits while letting it help you perform even better.

As far as everything else you've said... oof, you need a break. You seem focused on money and ego. Maybe it is time to simplify a bit, explore what else the world has to offer. Worry less about whether anyone else can do your job and more about whether or not you are enjoying your life. Make changes, have some fun. If you don't want a mortgage but have multiples of the deposit needed, buy a smaller, simpler place with cash. Then you don't have rent or a mortgage.




> The more I use GPT, the less I'm worried

I'm curious, as I see quite a few people saying this. You might not be worried about GPT4, but aren't you at least a little concerned about GPT8 or whatever?

Just having a post like this a mere 5 years ago would've been unthinkable yet here we are.


I think the main reason people become less worried about ChatGPT is its hallucinations and its lack of actual intelligence (there may be “sparks” of intelligence, but nothing crazy impressive). AI systems replacing engineers is also unlikely to happen for a while, until we reach something like AGI, because of all the nuances involved: the nature of the work we do requires a lot more than pulling data from a bunch of sources and outputting a response in a formatted way. People don’t really understand what is going on under the hood, so it makes sense that they’re worried; it seems very intelligent, but it still doesn’t have the intelligence to know how to apply what it “knows”. We’ve made a lot of progress so far, but I think we’re going to hit a wall very soon if we haven’t already. I don’t think people should be worried even about GPT8.


Ten years ago they were warning anyone who drove for a living that they'd soon be out of a job. I'm sure the day will come, but I can't help but feel that LLMs are in that same area where we can watch them do impressive things, but they are still a long way away from real autonomy.


I understand your point, but I wouldn't recommend that anyone who just finished high school choose driving as a professional path, except as a temporary 1-10 year gig. It's just unlikely anyone will still be a driver for the next 40 years. People who have been professional drivers for less than 10 years are also unlikely to keep doing it as a job for the next 30 years, at least the majority won't.

And I think the situation with self-driving cars and LLMs is different. A self-driving car needs to be at least 99.99% good to be useful, and the initial investment is high.

An LLM only has to be about 90% good, and it already scales to millions of inferences at the same time. The investment for the user is either free or $20 per month.


I had a similar moment of existential career crisis as OP.

I'm at the 20 year mark as well, in terms of developing software professionally. I've always felt like with new technology, I could grok at a high level how things worked. But LLMs like GPT seem like magic and I went through stages of initial astonishment -> despair realizing the potential impact it would have on the industry -> acceptance.

While I still feel uncertainty and fear about the future, as others have echoed, I'm realizing it's a tool for developers to use. We can either choose to accept it and learn how to work with it, or reject it. The things GPT can generate amaze me, but I'm finding that it's a good starting point or reference to build on... not a final solution. It will sometimes generate things that are completely wrong, and it's your own experience and judgement that has to determine that. GPT cannot do that... at least not yet.

I think back 20 years ago and remember reading through a lot of physical books, with occasional web searches landing on experts-exchange or random forums. Then came Stack Overflow, which became an invaluable tool, along with the ubiquity of free tutorials on YouTube and elsewhere. And now we have GPT, which I'll ask if I really get stuck on something, and it gives me new ideas to try. Perhaps in the near future, GPT is the tool that I'll use first.

I found this podcast episode helpful for me to process what I’ve felt: [Lex Fridman Podcast #376 – Stephen Wolfram: ChatGPT and the Nature of Truth, Reality & Computation][1].

It's an unsettling feeling (in general) to feel like a foundation you've built and live on could be quickly made irrelevant. I'd like to say I have words of wisdom to get rid of that feeling, but I don't. What has helped me is to acknowledge these feelings as valid, and then try to get clarity on what direction to move in. It's not the foundation itself that's important per se; it's the skills you've acquired in building that foundation that matter more.

[1]: https://lexfridman.com/stephen-wolfram-4/


>> I've always felt like with new technology, I could grok at a high level how things worked. But LLMs like GPT seem like magic

This is exactly how I feel. I felt so out of my depth looking at the ML architectures; I could not make any sense of them. I thought perhaps they took inspiration from neuroscience for the layers, etc.

But a friend who works on LLMs mentioned that the architectures of large ML models are mostly discovered experimentally, not designed. If that's the case, that's even worse... it means an entire field which could perhaps replace me in the future doesn't even have a knowledge foundation for its breakthroughs, but just goes by experiment. I thought it was only the weights inside the model that evolve, not the architecture itself.

Which body of knowledge do I study then, and is it even engineering anymore? It's something else, and I'm not sure my programming experience applies to it.

The amount of GPU time and capital it takes to evolve such architectures and run such experiments has to be prohibitively expensive.


Checking in with the same feeling. If I had to do an interview and they asked me to sketch out on a whiteboard a high level diagram of how anything from the last 20 years of computing worked, I could probably muddle my way through it. A 3D engine, a database, a word processor, a web site with a REST API, you name it. It might not be 100% right in the details, but I could at least describe it in the general sense and talk about the constraints of such a system.

If you held a gun to my head and asked me to tell you even at a sky-high architectural level (let alone in any detail) how ChatGPT worked, well... tell my family I love them. This is the first time in my 20+ year career I have felt like some computing thing is total unexplainable black magic.


It’s OK. From what I’ve read, nobody actually knows how it works. I mean we know we have layers and weights, but how those emit what looks like intelligence is not understood by anyone.
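
To make "layers and weights" a bit more concrete, a single attention layer is roughly this (a purely illustrative numpy sketch, nothing like how production models are actually implemented):

    import numpy as np

    def attention_layer(x, Wq, Wk, Wv):
        # x: (seq_len, d_model) token embeddings; Wq, Wk, Wv are learned weight matrices
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        scores = q @ k.T / np.sqrt(k.shape[-1])        # how strongly each token attends to the others
        scores -= scores.max(axis=-1, keepdims=True)   # numerical stability before the softmax
        weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
        return weights @ v                             # each output is a weighted mix of the value vectors

Stack dozens of blocks like that (plus feed-forward layers and normalization), train the weights on a mountain of text, and you get something ChatGPT-shaped. Every individual step is ordinary linear algebra; why the whole thing ends up looking intelligent is the part nobody can explain.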


I don’t know what the future will be and I can’t control it, so there’s no point in worrying the way OP is.

One thing I will add is that many people are taking it as a given that ChatGPT will keep progressing at the same rate.


I'm thinking along similar lines: the more I use it, the less it scares me. I'm simply seeing it as a tool to help with tasks and even products I build. It's hard to know how sentient GPT8 will appear and whether it will be able to do everything needed to completely replace developers. I'll have to keep an eye on it and make sure I change with the times; perhaps the role of developer will be drastically different by then. It's the same with any tech: keep up to ensure you're still relevant.


There are limits to what the current hardware can achieve. That said, a theoretical GPT8 that displays reasoning skills several orders of magnitude better than GPT4 still has to work within the tools and boundaries set by the existing frameworks, and it will still be using input from existing pieces of work. And a person who is not an expert at using those frameworks and not familiar with the existing state of the art will not be able to piece together a complex application that actually works.


Yes they will: a more capable ChatGPT will do more than give you code to copy/paste; it will directly interact with your infrastructure, git repositories, etc.

Even if it didn't, the capacity for a developer to learn all the frameworks just got much much greater, which is a bad thing for developer salaries.


For newer tools: maybe.

But 20 years of experience vs. some schmuck right out of a coding bootcamp vs. a guy with a GPT4 prompt.

<toughchoice.jpg>

(of course it depends on the situation, but the point stands)


> The more I use GPT, the less I'm worried

yep that's where I'm at too. If you hand my boss Chat GPT and Copilot and tell him "okay there you go, make a website" - you're gonna come back the next day to find a mess of completely disconnected chunks of code which maybe kinda sorta work on their own, but haven't been tied together at all into any kind of viable thing. You'd have better luck sitting him down with Squarespace.


This is correct to a degree, but consider what doors GPT is opening.

It's a slow erosion of our responsibilities. If AI can do some of your work with minimal supervision, then sooner or later managers will figure it out and reduce your job scope. Get an intern to do it.

We are certainly not at the level where you can talk to ChatGPT and give it requirements to generate code, but who knows where we will be in 5 or 10 years.

As a reason for my 'doomerism', consider the digital art industry right now.

I can imagine that if you are a concept artist at a game studio, you are probably seriously worried. AI will not replace all artists - you need specific and consistent art assets to be created, AI doesn't understand fingers, etc. - but some of the workload can now be done by pretty much anyone. Or artists can take AI-generated images and touch them up.


Was assembly a "slow erosion of our responsibilities" compared to coding in octal?

Was a compiled language a "slow erosion of our responsibilities" compared to writing assembler?

Was writing in a garbage-collected language a "slow erosion of our responsibilities" compared to manual memory management?

The better tools let us do more, faster. They let us waste less time on the trivial, and spend more time on figuring out how to actually build what we were trying to build. They didn't reduce the need for programmers - far from it.

GPT will probably be the same. It's a force multiplier. You can write more in less time. That will make people want software that they couldn't dream of before, because it was too expensive to build. Net programmer employment will probably go up, not down.


> It's a slow erosion of our responsibilities. If AI can do some of your work with minimal supervision, then sooner or later managers will figure it out and reduce your job scope. Get an intern to do it.

No.

Quite the reverse.

AI is a super intern.

Both super productive, and super clueless.

It's the interns (and possibly their managers) who are at risk.

AI is a liability in any area where you can't afford to slip up.


> It is a tool, and a good one

Not even a good one imo, I ask it to cite its sources and it makes up the URLs 95% of the time.


That makes sense - it is an LLM, not a reference library.

Part of using a tool well is understanding what it is good for and what it is not. If you are looking for citable references... or even full factual accuracy, it is the wrong tool.


It's the wrong tool at the moment. But full factual accuracy and citable references are something users will likely demand from these chatbots as a minimum requirement. I'd be surprised if it isn't attempted at some point in the not-too-distant future.


What is it good for?


In addition to the other comment, how many developers do you think are designing an app vs maintaining existing code or adding fairly basic CRUD features?



