Ask HN: GPT4 Broke Me
105 points by thaway_thaway34 on June 15, 2023 | 174 comments
I created this throwaway account to talk completely openly. I am an engineer with 20 years in, working mostly on web development. I have a pretty high salary, in the top 5% band for my area and sector.

I don't know where to start this. So let's just jump right into it.

I can't tell what will happen even next year. This has never happened to me before professionally. I was born a nerd, started programming early, and jobs just came to me.

AI scares me when it comes to job security. My entire pitch for reaching the top 5% of the salary band has been my many years of experience in my specialty, which has secured me senior roles in my field along with a high salary.

I have already been working at near burn-out levels for many years to reach this comp.

Rent is sky high. I don't feel comfortable taking on a mortgage despite having multiples of the needed deposit, because I am not sure how much longer I can maintain my comp with AI automating everything.

Many people share comforting stories that there will be other jobs for engineers/programmers.

How am I supposed to retain my TC if I have to switch to another field from web?

I am unable to enjoy any content, movie, or TV show anymore. Even sci-fi from last year feels outdated. Reddit comments feel like they're all AI-generated. I made an AI Reddit bot myself. No one noticed.

I use ChatGPT on a daily basis to build projects, and the more I use it, the more scared I get. It is just too powerful. The more I use it, the less proud I feel for my output. It just feels like anyone could do it.

Where are we really going? Can we just stop the optimistic techie talk and accept that UBI is not happening... They don't give you healthcare, do you think they will give you CASH like that???

I hope this doesn't get flagged, because I really need your inputs.




The more I use GPT, the less I'm worried. It is a tool, and a good one, but not a replacement for the thought required to design an app that will function, scale, and have good UX to result in a marketable product. So use it and enjoy its benefits while letting it help you perform even better.

As far as everything else you've said... oof, you need a break. You seem focused on money and ego. Maybe it is time to simplify a bit, explore what else the world has to offer. Worry less about whether anyone else can do your job and more about whether or not you are enjoying your life. Make changes, have some fun. If you don't want a mortgage but have multiples of the deposit needed, buy a smaller, simpler place with cash. Then you don't have rent or a mortgage.


> The more I use GPT, the less I'm worried

I'm curious, as I see quite a few people saying this. You might not be worried about GPT4, but aren't you at least a little concerned about GPT8 or whatever?

Just having a post like this a mere 5 years ago would've been unthinkable, yet here we are.


I think the main reason people become less worried about ChatGPT is its hallucinations and its lack of actual intelligence (there may be “sparks” of intelligence, but nothing crazy impressive). Also, AI systems replacing engineers is unlikely to happen until we reach AGI, because the nuances and the nature of the work we do require a lot more than pulling data from a bunch of sources and outputting a response in a formatted way. I think people don’t really understand what is going on under the hood, so it makes sense that they are so worried: it is seemingly very intelligent, but it still doesn’t have the intelligence to know how to apply what it “knows”. We’ve made a lot of progress so far, but I think we are going to hit a wall very soon if we haven’t already. I don’t think people should be worried even about GPT8.


Ten years ago they were warning anyone who drove for a living that they'd soon be out of a job. I'm sure the day will come, but I can't help but feel that LLMs are in that same area where we can watch them do impressive things, but they are still a long way away from real autonomy.


I understand your point, but I wouldn't recommend that anyone who just finished high school choose driving as a professional path, except as a temporary 1-10 year gig. It's just unlikely someone would still be a driver 40 years from now. People who have been professional drivers for less than 10 years are also unlikely to keep doing it as a job for the next 30 years - at least the majority won't.

And I think the situation with self-driving cars and LLMs is different. For a self-driving car, you need it to be at least 99.99% reliable to be useful, and the initial investment is high.

For an LLM, being just 90% good is enough, and it already scales to millions of inferences at the same time. The investment for the user is either free or $20 per month.


I had a similar moment of existential career crisis as OP.

I'm at the 20 year mark as well, in terms of developing software professionally. I've always felt like with new technology, I could grok at a high level how things worked. But LLMs like GPT seem like magic and I went through stages of initial astonishment -> despair realizing the potential impact it would have on the industry -> acceptance.

While I still feel uncertainty and fear about the future, as others have echoed, I'm realizing it's a tool for developers to use. We can either choose to accept it and understand how to work with it, or reject it. The things GPT can generate amazes me, but I'm finding that it's a good starting point or reference to build on... not a final solution. It will generate things that are sometimes completely wrong, and it's your own experience and judgement that has to be used to determine that. GPT cannot do that... at least not yet.

I think back 20 years ago and remember reading through a lot of physical books, with occasional web searches landing on experts-exchange or random forums. Then came Stack Overflow, and that became an invaluable tool, along with the ubiquity of free tutorials on YouTube and elsewhere. And now we have GPT, which I'll ask if I really get stuck on something, and it gives me new ideas to try. Perhaps in the near future, GPT is the tool that I'll use first.

I found this podcast episode helpful for me to process what I’ve felt: [Lex Fridman Podcast #376 – Stephen Wolfram: ChatGPT and the Nature of Truth, Reality & Computation][1].

It's an unsettling feeling (in general) to feel like a foundation you've built and live on could quickly be made irrelevant. I'd like to say I have words of wisdom to get rid of that feeling, but I don't. What has helped me is to acknowledge these feelings as valid, and then try to get clarity on what direction to move in. It's not the foundation itself that's important per se; it's the skills you've acquired in building it.

[1]: https://lexfridman.com/stephen-wolfram-4/


>> I've always felt like with new technology, I could grok at a high level how things worked. But LLMs like GPT seem like magic

This is exactly how I feel. I felt so out of my depth looking at the ML architectures, and I could not make any sense of them. I thought perhaps they were inspired by neuroscience for the layers, etc.

But a friend who works on LLMs mentioned that the architectures of large ML models are mostly discovered experimentally, not designed. If that's the case, that's even worse... it means an entire field which could perhaps replace me in the future doesn't even have a knowledge foundation for its breakthroughs, but just goes by experiment... I thought it was only the weights inside the model that evolve, not the architecture itself.

Which body of knowledge do I study then, and is it even engineering anymore? It's something else entirely, and I am not sure my programming experience applies.

The amount of GPU time/capital it takes to evolve such architectures and run such experiments has to be prohibitive.


Checking in with the same feeling. If I had to do an interview and they asked me to sketch out on a whiteboard a high level diagram of how anything from the last 20 years of computing worked, I could probably muddle my way through it. A 3D engine, a database, a word processor, a web site with a REST API, you name it. It might not be 100% right in the details, but I could at least describe it in the general sense and talk about the constraints of such a system.

If you held a gun to my head and asked me to tell you even at a sky-high architectural level (let alone in any detail) how ChatGPT worked, well... tell my family I love them. This is the first time in my 20+ year career I have felt like some computing thing is total unexplainable black magic.


It’s OK. From what I’ve read, nobody actually knows how it works. I mean we know we have layers and weights, but how those emit what looks like intelligence is not understood by anyone.


I don’t know what the future will be and I can’t control it, so there’s no point in worrying like OP is.

One thing I will add is that many people are taking it as a given that ChatGPT will keep progressing at the same rate.


I'm thinking along similar lines: the more I use it, the less it scares me. I'm simply seeing it as a tool to help enable tasks and even products I build. It's hard to know how sentient gpt8 will appear and whether it can do enough to completely replace developers. I'll have to keep an eye on it and ensure that I change with those times; perhaps the role of developer will be drastically different by then. It's the same with any tech: keep up to ensure you're still relevant.


There are limits to what the current hardware can achieve. That said, a theoretical gpt8 that displays reasoning skills several orders of magnitude better than gpt4 still has to work within the tools and boundaries set by the existing frameworks, and it will still be using input from existing pieces of work. And a person who is not an expert at using those frameworks and familiar with the existing state of the art will not be able to piece together a complex application that actually works.


Yes they will. A more capable ChatGPT will do more than give you code to copy/paste; it will directly interact with your infrastructure, git repositories, etc.

Even if it didn't, the capacity for a developer to learn all the frameworks just got much, much greater, which is a bad thing for developer salaries.


For newer tools: maybe.

But 20 years of experience, vs. some schmuck right out of a coding bootcamp, vs. a guy with a GPT-4 prompt.

<toughchoice.jpg>

(of course it depends on the situation, but the point stands)


> The more I use GPT, the less I'm worried

yep that's where I'm at too. If you hand my boss Chat GPT and Copilot and tell him "okay there you go, make a website" - you're gonna come back the next day to find a mess of completely disconnected chunks of code which maybe kinda sorta work on their own, but haven't been tied together at all into any kind of viable thing. You'd have better luck sitting him down with Squarespace.


This is correct to a degree, but consider what doors GPT is opening.

It's a slow erosion of our responsibilities. If AI can do some of your work with minimal supervision, then sooner or later managers will figure it out and reduce your job scope. Get an intern to do it.

We are certainly not at the level where you can talk to ChatGPT and give it requirements to generate code, but who knows where we will be in 5-10 years.

As a reason for my 'doomerism', consider the digital art industry right now.

I can imagine that if you are a concept artist at a game studio, you are probably seriously worried. AI will not replace all artists - you need specific and consistent art assets to be created, and AI doesn't understand fingers, etc. - but some of the workload can now be done by pretty much anyone. Or artists can take AI-generated images and touch them up.


Was assembly a "slow erosion of our responsibilities" compared to coding in octal?

Was a compiled language a "slow erosion of our responsibilities" compared to writing assembler?

Was writing in a garbage-collected language a "slow erosion of our responsibilities" compared to manual memory management?

The better tools let us do more, faster. They let us waste less time on the trivial, and spend more time on figuring out how to actually build what we were trying to build. They didn't reduce the need for programmers - far from it.

GPT will probably be the same. It's a force multiplier. You can write more in less time. That will make people want software that they couldn't dream of before, because it was too expensive to build. Net programmer employment will probably go up, not down.


> It's a slow erosion of our responsibilities. If AI can do some of your work with minimal supervision, then sooner or later managers will figure it out and reduce your job scope. Get an intern to do it.

No.

Quite the reverse.

AI is a super intern.

Both super productive, and super clueless.

It's the interns (and possibly their managers) who are at risk.

AI is a liability in any area where you can't afford to slip up.


> It is a tool, and a good one

Not even a good one, imo. I ask it to cite its sources and it makes up the URLs 95% of the time.


That makes sense - it is an LLM, not a reference library.

Part of using a tool well is understanding what it is good for and what it is not. If you are looking for citable references... or even full factual accuracy, it is the wrong tool.


It's the wrong tool at the moment. But full factual accuracy and citable references are something users will likely demand from these chatbots as a minimum requirement. I'd be surprised if someone doesn't attempt to incorporate them at some point, and not too distant a point either.


What is it good for?


In addition to the other comment, how many developers do you think are designing an app vs maintaining existing code or adding fairly basic CRUD features?


My dad ran a local dental lab for 30+ years, making crowns and bridges for local dentists.

Over the course of a few years, dentists began using 3D scanners to digitally scan patient impressions, which allowed them to email the scans to my dad's dental lab (previously he'd have to drive to each dentist and pick it up in person).

Then, 1-2 years later, they started emailing the 3D scan to a company in China who would do the same work as my dad, and then mail back the finished product directly to the dentist from China. And of course the Chinese dental labs did this at half the price my dad was charging.

He went out of business and ended up retiring early at 55. The "retiring early" explanation was a great one, but the real reason he retired was innovation hit his industry and made it easy for dentists to outsource crown/bridge manufacturing to China. He works for a property maintenance company now for $20/hr, mostly as a way to fill his time (he's doing fine financially).

Most of the comments here are telling you that you'll be ok, and I hope you will be. But the reality is innovation causes disruption in many fields/industries. If you're in the path of disruption, you have to be willing to quickly adapt and learn to live alongside it rather than fight it.

Also remember you're not alone. Technology has been displacing jobs for decades. Life is about learning and adapting. As long as you commit to adapting quickly, learning new skills as necessary and being open to different types of jobs, you should be fine.


> Technology has been displacing jobs for decades. Life is about learning and adapting. As long as you commit to adapting quickly, learning new skills as necessary and being open to different types of jobs, you should be fine.

That seems to contradict your story. Your dad was lucky to be able to retire. He did not adapt. He had to quit, and he got a low-end job.

It's easy to say adapt, re-skill, innovate, but for a lot of people it might not be an option.


Yea, my dad isn't the best example. He had a business partner who refused to buy the expensive scanners and 3D printers because he didn't want to invest in new technology. They stayed old school and went out of business.

It's probably more of an example of how not to handle this sort of situation. Instead of being afraid of or avoiding the technology coming for your job, embrace it and figure out how to thrive with/alongside it.


My dentist does this in-house now using machines they own.


I think there are some good takeaways or lessons from this situation.

My first takeaway is that the dentists didn’t give the dad an opportunity to compete on price and went straight to China. One day you’re getting orders and the next day you are not. Tough situation. A pivot would need to be made quickly. Once the dad caught wind of the China situation, maybe he could have outsourced his stuff to China as well and lowered his prices. Is there any margin left, though?

Same thing with GPT. One day you’re getting contracts for work and the next day you’re not. You later find out they are using GPT.

One day people are picking cotton and the next day a machine is doing it.

The OP's concern is real. One good aspect is that everyone is aware GPT is here. The time is now to pivot or adapt. Those that wait to find out what will happen are usually at a disadvantage to those that act more quickly.


Nothing I have seen from ChatGPT has made me worried for my job. If your job is producing 20-line snippets solving variations of common coding tasks, ChatGPT is still the least of the threats to your job security.

To be honest, the far greater threat to your job is an increase in the number of programmers plus an increase in individual productivity, in an industry declining in growth. There is your threat.


> To be honest, the far greater threat to your job is an increase in the number of programmers plus an increase in individual productivity

That order-of-magnitude productivity increase is coming from AI tools like ChatGPT. The field is gonna get a whole lot more competitive, and salaries will suffer as well, but maybe it will be for the better, routing jobs to fields other than programming.

What I'm worried about most is increasing inequality, and which class will wield the most power from AI.


The people who have the most to gain from an increase in productivity are those who already have lots of experience and are willing to embrace new technologies.


... or people who don't need that experience to now compete against you.


>... or people who don't need that experience to now compete against you.

I am convinced that ChatGPT works multiplicatively. You need to ask the right questions and be able to quickly understand the output. Those skills come from experienced developers.

ChatGPT, if anything, will greatly reduce the amount of banalities, as those are obviously easier to automate. The people writing banalities now are the ones with the least experience and skills.

The more senior you are, the more broadly you control the code, deciding on abstractions, interfaces, and features. This is obviously something ChatGPT is horrible at, and giving someone with 6 months of experience plus ChatGPT the ability to decide on that will lead to disaster.


why are people in such denial on this point? it makes me mad...

the same field that has the most imposter syndrome rants also has the most people who think they can't be replaced by someone with less experience equipped with AI.

Everyone is talking about the junior or mid-tier programmer who will be replaced, but never about themselves. It just sounds like willful ignorance.


>who think they can't be replaced by someone with less experience equipped with AI.

Because in a shrinking market, experience + AI is vastly superior to little experience + AI.

Experience is becoming more important, since experience is required for the tasks AI is very bad at. AI is drastically raising the barrier to entry, since it can do many of the things entry-level people are/were useful for, but it can't do any of the things experienced people get paid for.

ChatGPT cannot tell you which API changes to your 1M SLOC code base are helpful for the future interests of your corporation. It won't talk to management about technical challenges in demanded features, or about which hires are necessary to be on time with future projects.


The points described in your last paragraph seem to be well within reach of a GPT model.


Yep, at a much lower compensation for starters.


You're in denial and delusional. Most people in this thread are.


> I have already been working at near burn-out levels for many years to reach this comp.

I think you might want to give yourself a break or a change of scenery, because your post sounds like the burnout is doing all the talking for you and AI is just a convenient external factor to blame. Maybe find a coach or therapist to help lower the intensity and explore what you’d want in life beyond being a top earner.

You can only run at 100% capacity for so long before the exhaustion takes over and your thinking loses clarity. I’ve been there, more than once. Something has to make way for your recovery, ideally giving you space to enjoy what life brings you.


There are a lot of developers in denial here.

I write code for scientific applications. I used to hire/contract out some work to other devs. In the last 6 months I haven't needed to hire anybody else; doing so would have been a waste of money. This is 100% because of ChatGPT.

Many devs are hired by other devs and these people are aware enough to know they don't need you anymore.

Other developers have been spoiled by being the one-eyed person in a room full of blind people. Those developers maybe haven't quite realized what has happened; after all, the blind people still can't code.

The conversation around junior developers is particularly upsetting.


If you write code like a robot (read: junior developer level), you will be replaced by a robot. IMHO, lots of outsourcing work is manual work, outsourced to save time. Some of it is to save costs. But the output from outsourcing is often horrible and reads like GPT code. And people will figure - hey, if I have to pay for this level of code, why not GPT?


So what will happen to junior developers and those developers whose code is mostly just plumbing?


Exactly this. If I could, I would pin this comment.

It is not simply about job safety, it is about the career ladder.

When the career ladder is disrupted, how will even the banks evaluate someone’s ability to pay off a mortgage over the long term?


Thanks for sharing your thoughts. As you said, for the last 6 months you’ve used ChatGPT instead of outsourcing. What are the roles/positions that you don’t need anymore? In other words, what is the job that ChatGPT does for you now?


One very common task we have is writing and maintaining ETL scripts. This was never a particularly difficult task, just time consuming and normalized enough that I could outsource it.

It's not that chatgpt can do the whole task, it's just that I can do it myself much faster now. Paying somebody to set up a new source that is consistent with our protocol is just too expensive relative to the 1-2 hours it takes me to do it myself now.

I don't think ETL is special at all in this way.
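To give a flavor of the kind of task I mean, adding a new source usually boils down to extract/transform/load glue, something like the following minimal Python sketch (the source URL, field names, and `sales` schema here are all hypothetical, just for illustration):

    import csv
    import sqlite3
    import urllib.request

    def etl(source_url, db_path):
        # Extract: pull raw CSV rows from the (hypothetical) source
        with urllib.request.urlopen(source_url) as resp:
            rows = list(csv.DictReader(resp.read().decode().splitlines()))

        # Transform: normalize each row to our internal (id, name, amount) shape
        records = [(r["id"], r["name"].strip().lower(), float(r["amount"]))
                   for r in rows]

        # Load: upsert into the warehouse table
        con = sqlite3.connect(db_path)
        con.execute("CREATE TABLE IF NOT EXISTS sales "
                    "(id TEXT PRIMARY KEY, name TEXT, amount REAL)")
        con.executemany("INSERT OR REPLACE INTO sales VALUES (?, ?, ?)", records)
        con.commit()
        con.close()

ChatGPT writes 90% of that boilerplate from a description of the source; checking and adapting the remaining 10% is where my 1-2 hours go.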

It is also just much, much easier to learn things today. Previously, I might have hired a developer to handle the UI side of a new project. I have about 15 years of experience building ML apps, and while I have spent a good amount of time building UI, it just didn't make sense for me to do it myself after factoring in the relearning time.

With ChatGPT I can create a really nice, functional, deployed UI very quickly, even factoring in learning time, with much less of the frustration, miscommunication, and cost involved in working with other people.


I don’t know much about the ETL industry. Are these ETL jobs for all sorts of data? Accounting data? Log data? Getting the data from point A to point B? You mentioned UI. Is the UI so the customer can move the data around, or to visualize the data?


I would not get overly focused on this use-case. It is more difficult to think of use-cases that won't get disrupted than those that will.


Not the OP, but I think mainly digital-plumbing types of work are helped a lot by AI.


What area of science are these applications for?


This strikes me as a little absurd, but perhaps it is common for "developers" to misunderstand what value they actually add. AI won't change the value you add, any more than autocomplete or code generators and scaffolds do.

The vast majority of programming is about facilitating, automating and improving business processes. The program artifact only has value as a tool to enable some business processes.

A website facilitates customer communication.

A complete webshop/eCommerce site facilitates selling products; discovery, ordering, invoice generation, logistics, reporting, returns, feedback etc.

The problem is charging for (mostly) time spent implementing the system, rather than the value added by understanding the desired business processes and drawing up an architecture for the system.

It is similar to outsourcing development - if you can solve the hard problem of gathering requirements and designing the data model and system architecture - you might be able to successfully outsource the less valuable parts - and still end up with a decent solution.

Now, with LLMs, a single system architect/senior developer might be able to do the work of a five person consultancy alone. You might not be able to charge five times as much - but perhaps three times for the same or fewer hours worked?

Ed: The money that pays for the software system still comes out of the value added for the customer buying the system. It doesn't really matter how long it takes to build - you need some senior resources to do the "hard part" - and the value added for the customer is the same - they get to stay in/improve their business.


"Now, with LLMs, a single system architect/senior developer might be able to do the work of a five person consultancy alone. You might not be able to charge five times as much - but perhaps three times for the same or fewer hours worked?"

This is the point that underscores why the OP is stressing out, and it really is underselling the value GPT adds.

Ignore a consultancy. Consider a team of 5 you might be on. Esp. for greenfield work, consider that a team of 1 or 2 could probably be as productive or more.

I'm working on a startup... I was shocked at how much progress GPT afforded me, building out a solution in a day that likely would have taken over a week of research. It took like 30 iterations to get to a working solution. Unlike a mid-level consultant, I don't ask it to do a thing and wait for it to get back to me the next day... it gets back to me within a minute. Rinse/repeat, incredible progress. Better than pairing with another senior dev, which would still likely take 2-3 days.

Boom, fewer engineers needed to produce novel products or features, more engineers on the market, depressed wages.

What's absurd is not connecting those dots. It's pretty basic. A business is always looking to increase margins, and with tech, it's almost entirely in human capital. Maybe some will want to move at 2-3x the speed, that's fair. Probably only in good times though.

In short, it doesn't replace all developers... it needs humans guiding it to work, absolutely. But it can certainly replace teammates, and that's the issue if you're working for a company, esp. one that is publicly traded.

Oh yeah, I have 24 years of experience myself professionally, been programming since '82.


I am honestly baffled — what kind of program are you working on that can be AI-programmed?

In my experience most CRUD apps are ultra-trivial in a language with a sane ecosystem; you literally just glue things together. That’s one day of work either with ChatGPT or without - the real benefit is the ecosystem (as per Brooks); writing the code is not the bottleneck (besides the occasional “this doesn’t work together with that because..”, to which ChatGPT is just as susceptible, if not more so).


I don't want to be too descriptive, but it's not a CRUD app - it's a multi-proc app where each process acts as both a client and a server (obviously in separate threads), connected w/ gRPC bindings, and has an internal DSL that must be abided by. A communication framework of sorts, governed by some very specific rules.

Also, why is there some notion it can only do simple things? My understanding is it's trained on a large portion of existing open-source work beyond just SO posts and the like. It seems too many folks saw it produce code w/ some bug or hallucination. The correct response to this is not "welp, my job is safe". It's to feed that error back in and have it correct what is broken. If it builds the wrong thing, explain what is wrong and provide multi-shot examples.

You could conceivably create an agent that takes the generated output w/ tests, runs them locally, then feeds the error back in until there is fully working output. Right now the context window limit would be an issue compared to using the web interface. Perhaps OpenAI or GitHub will provide that capability as a feature.
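For what it's worth, the loop itself is almost trivial to sketch. Something like this (a rough sketch using the 2023 `openai` Python client's ChatCompletion API; the prompt handling is simplified, and real model output would need its markdown fences stripped before being written to a file):

    import subprocess
    import openai  # assumes OPENAI_API_KEY is set in the environment

    def generate_until_tests_pass(task, test_file, max_rounds=5):
        messages = [{"role": "user", "content":
                     "Write a Python module, code only, solving: " + task}]
        for _ in range(max_rounds):
            reply = openai.ChatCompletion.create(model="gpt-4", messages=messages)
            code = reply.choices[0].message.content
            with open("solution.py", "w") as f:
                f.write(code)
            # run the human-written tests against the generated module
            result = subprocess.run(["pytest", test_file],
                                    capture_output=True, text=True)
            if result.returncode == 0:
                return code  # tests pass, we're done
            # feed the failure back in and ask for a fix
            messages.append({"role": "assistant", "content": code})
            messages.append({"role": "user", "content":
                             "The tests failed:\n" + result.stdout +
                             "\nPlease fix the module."})
        return None  # gave up after max_rounds

The hard part isn't the loop; it's what I mentioned above, fitting a real codebase plus the test output into the context window.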


I don't think your last paragraph is actually conceivable. Consider what's happening when you give it a prompt. It receives some set of tokens through the prompt and then samples from a very accurately weighted distribution over its very large training set to generate the most likely next set of tokens. When you feed back an error in the code it generated, all it's doing is generating a new sample from a new set of input tokens. This time the tokens just include the error, which tells it that the output it generated the first time was wrong in some way and that it should weight its distribution differently. Usually the error message will contain some of the same tokens as the original lines of code that caused it, which has the effect of making those particular lines fall out of the distribution, and then the model will give you some slightly different code that maybe doesn't fail this time.

It's still going to be subject to all the constraints of all ML techniques. Namely, it's really only capable of interpolating across its training set, not extrapolating from it. And it's going to fall on its face whenever the distribution of the input data doesn't mirror the distribution of the training data. If it doesn't have lots of close-enough analogs to what you're trying to do in its training set, such that the "right" code is tangibly represented in the distribution, you can feed errors back into it for infinity and it will never give you working code.

This might be controversial, but my belief is that if you find ChatGPT to be really good at doing your work for you, your work likely closely mirrors a lot of code that exists publicly on the internet. Which raises the question: is what you're doing not easily handled by just importing an existing library or something? Your application sounds relatively complex/niche, so maybe you are actually still doing a whole lot of higher-level engineering and distilling your work down into trivial tasks that the model can handle.


Regarding the last paragraph: I saw it happen again and again. Sometimes it wouldn't even be a directly obvious error; an exception would be thrown seemingly unrelated to the actual issue, a proximate cause. Of course that could be related to some SO post that it referenced, but who knows. There is definitely debate about emergent behavior in GPT-4, and it's not quite clear how some aspects work internally.

While it did create quite a few bugs and a couple of random hallucinations... and sometimes it would update some piece of code without telling me other parts of the solution were updated (i.e. choosing to change a protobuf def without being asked to), it was able to fix everything, even if I needed to give it a bit of help.

I don't think there is anything I'm doing that is truly novel in isolation, and it's not a lot of code (just a few hundred lines), but I doubt there's anything quite like it in totality out there.


With that amount of experience I imagine GPT is explaining stuff you already know and is thus saving you time.

For something completely novel, or as someone with little experience, it’s going to be harder to get that kind of result in terms of understanding the requirements, writing the prompt, and comprehending the output.


It's something I have little experience with, or at least, it's not in the domain of skills I've used in years and certainly not with the type of tooling I'm using. At a high level I do have good knowledge of what I'm trying to build and how it should operate though. Like I said, it probably would take me about a week unassisted, mostly due to a fair amount of reading and experimentation.


> Boom, fewer engineers needed to produce novel products or features, more engineers on the market, depressed wages.

Possibly. But software is funny - increased software productivity tends to increase demand for software. Perhaps the job market will be saturated, but perhaps the market will simply grow.


Addressed in the following paragraph of that post. Certainly possible, but likely the result will be a ton of new startups rather than larger tech companies ballooning their ranks - as this gets better, those ranks will either stay the same or shrink. IOW, the fat big-tech comp the OP is stressing over won't really be a thing at scale anymore. Time will tell, but that's my prediction.


To add to this - don't undervalue your 20 years of experience!

I recently had ChatGPT "build" me a simple Ruby GraphQL hello-world app. That was useful to me since I could easily read and understand the 20 or so lines of Ruby, recognize the imported gems, etc.

It wouldn't have helped the people in my company that I build software for. Many of them can create complex spreadsheets to solve certain problems - but they can't go from a sample API to something connected to a database and a web front-end. And especially not a system with continuous delivery, continuous improvement (features/bugfixes), or anything remotely secure.


You're among the first people I've encountered with experience both in writing your own software and in prompting an LLM to do it. I have a few questions for you (maybe the answers will make you feel better, or maybe they'll make you feel worse):

- when the code the model spat out was wrong, how did you fix it? Did you identify it was wrong before you ran it?

- what level of complexity was there in the code, in terms of "business logic" or complexity of the requirements you fed into it?

I ask these two questions because I am not sure an AI will get to the level of experience you have in the near future in _generating_ complex applications, let alone being able to reflect on why its own creations are wrong, fix them, deploy some output, and then explain the changes.

A human is going to be in the loop in these cases for a long time, and I'm assuming part or all of your decades of experience has been spent understanding quite how poor people are at explaining their requirements. What if you thought of generative AI as a tool you can learn to utilise to do your job more effectively?


Here are 7 points to console a programmer worried about the imminent threat AI poses to their job:

1. AI is a tool, not a replacement: AI is designed to assist, not replace, human tasks and decisions.

2. AI needs human supervision: AI technologies require expert human supervision for their creation, maintenance, and evolution.

3. AI creates jobs: AI often creates more jobs than it eliminates by automating routine tasks.

4. Continual learning: By continuously learning and adapting, you can ensure your skills remain relevant in the AI era.

5. Human creativity is irreplaceable: AI can't replicate human creativity, which is vital for problem-solving and innovation in programming.

6. AI ethics: Ethical considerations in AI deployment require human judgement, which AI lacks.

7. Emotional Intelligence: AI can't replace the emotional intelligence humans bring to their work, including empathy, understanding, and interpersonal communication.

Hope this helps.


These are maybe true for GPT-4, but probably don't hold for GPT-10 or whatever the landscape looks like in another handful of iterations.


I think this is important for all fields who are worried to keep in mind. Thanks for this. Gonna put it in my notebook. :)


An artist friend the other day posted a meme to the tune of "Artists will be displaced when clients are able to express their desires clearly. I'm not worried for my job."

Programmers are really artists that way. Our job is not to program computers. Our job is to figure out the client's needs. The implementation on the computer is a matter of craft. That is, it's a skill we've practiced and honed, but it's not the essence of our job. It's not the artistry, any more than holding a paintbrush is the artistry.

It's why I've long discouraged programmers from thinking of themselves as computer jockeys. A lot of developers pat themselves on the back for skills that aren't really that important -- jobs that computers can do better than we can. That was true even before AI, such as the interminable worry about "optimizations" that are better solved by compilers, libraries, and hardware.

I don't doubt that some artists will be replaced by Canva, with worse but acceptable results, and that some developers will be replaced by AI. But there's still a lot of room for us human beings to do what we're actually good at -- being human ourselves and knowing what other humans need. The more you think of yourself as somebody who writes code and are not a "people person", the more you should rightly worry for your job.


Whenever you feel this kind of panic you should ask yourself how F’d you are relative to the general population. AI will shake things up for sure, but has it ever been the case that smart, educated, experienced people collectively got hurt because of innovation or disruption? No, never.

Your total comp might go down or it might go up. Every time you make a career move that’s the risk you take. But if you’re healthy and smart and diligent enough to be highly paid today you are in a better position than 95% of your countrymen and 99.9% of the rest of the world, no matter what happens.

Enjoy the time you have on this planet. Don’t let fear of the unknown get in the way of that.


Yeah, but how do we protect and grow expertise when it can be easily copied into GPT-(n+1)?


I think it’s more like truck drivers panicking in 1995 they will soon be obsolete after having seen the first self-driving demo.

Truck drivers will get automated away eventually, but not today and not next year. Self-driving systems get better every year. It’s been 30 years. We’re getting close but we’re still not there yet.

I expect the LLM revolution to be somewhat similar. The models will get better but they won’t have the kind of superintelligence that makes meat humans obsolete. Someday, perhaps. But GPT 5 will be an incremental improvement, and GPT 6 moreso.


You're right. I am definitely biased as a long-time techie. There is one downside for our own sector in that comparison, though: we are the quickest to hand over our data, and the environment we work in allows for it.


LLMs like (Chat)GPT won't take your job. Software engineering is a creative job, and language models are not creative. Imagine that instead of using SQL to query a database of knowledge, you use plain English. They can't create anything on their own. If they interpolate from their knowledge, the result is usually fine. If they extrapolate, the outcome is shit. For instance, ChatGPT has enough knowledge (training data) about Python and Java. It can generate unit tests for Python and Java functions. The other day I threw a less-known language, Common Lisp, at it. It failed miserably (calling functions which didn't exist, calling with the wrong number of parameters; it even imagined a mocking library which doesn't exist at all!).

The more I know about machine learning and artificial intelligence (I study that stuff in a Ph.D. programme), the less I am worried. AI isn't dangerous. The only "intelligence" in AI is in its name. Nowadays machine learning really boils down to matrix multiplications (and a set of (usually quite static) rules which change the numbers in the matrices). People who don't understand it and misuse it are dangerous. Does the vast majority of corporate managers understand AI? We know the answer.

There is a ChatGPT craze right now, but it will cool down eventually. It's not the first time this has happened. Google changed the way we search the internet; suddenly everything could be at your fingertips. Teachers didn't lose their jobs, and people still go to school, since you can't google things when you don't know the fundamentals of the topic. Chatbots changed the way we approach customer support. Support people didn't lose their jobs (at least not entirely), since chatbots are a subpar experience. If you are 20 years in, you can probably remember Visual Basic 6 - a tool that everybody could learn and use to write programs easily. We, software engineers, didn't lose our jobs.


Being absolutely serious, I think you may be having a mental health crisis. You describe burnout, feelings of unreality, deep seated anxiety about both the present and the future, as well as a lack of enjoyment in things which you imply you liked before.

I'm not a doctor, but I've suffered with mental health issues in the past. You strike me as someone in the midst of clinical depression and anxiety, and (relating to the idea that nothing is real in particular) like you might have early signs of psychosis.

Please get help. It is not the big deal you think it is: SSRIs, therapy, and lifestyle changes (relax a bit!) will help you see this. Things will be OK, but you're clearly struggling right now. Making a post here was a good idea, but now it is time to take stock and consult a professional or two.

Good luck, and look after yourself.


Welcome to real life. I've had similar thoughts to you but in the context of writing. Many coding tutorials I wrote last year can now be done with GPT-4. The same tutorials that I wrote last year got me a lot of exposure on various sites and directories, but that hasn't been the case since early this year.

I imagine it's like this for many people. And, personally, I'm trying not to think/dwell on it, and simply move on with life and projects and work. Whatever happens, I know for a fact I don't have control over it.

The seed of disruption has now been planted, so we have to wait and see in which direction it grows. But I think a big wave will eventually come that will displace a lot of jobs in one big swoop.


This is what I think as well. I don't know how to prepare for it, though. I'm currently trying to find an "escape hatch", but it's rough because I can't think of an alternate path that isn't at risk of the same fate. I have no answers, just empathy.


Enjoy the AI making you even more productive so that you have to work less. The day a human with 20 yrs of experience like you gets replaced in the office will not come as quickly as you think.

Self-help Singh has the right attitude for you: https://www.youtube.com/watch?v=YHxwY3Fz2gU


Don't wanna make you feel worse, but a friend of mine and nearly 20% of his company got fired a few days ago. The CEO's justification? "We don't need you anymore, we will compensate your loss with ChatGPT & co. It's nothing personal, just business". To this day I can't tell whether the CEO's reasons were just outright lies, or if he genuinely thinks he can replace a fifth of his workforce just like that.


Most likely outright lies. Lots of companies are laying people off to maintain profitability, this particular CEO probably doesn't want to admit that the company would have financial problems otherwise.


What I take from that is either the CEO doesn't take advice from a competent CTO, or he's lying and they used "AI" as a front to reduce the workforce for some other reason.


he also mentioned competitive pressure; basically competitors were also letting go of many folks due to AI and he had no choice but to do the same lol


We had a similar event a couple of months ago. After the layoffs, the remaining employees were told to increase productivity (do more with less) by utilizing GPT. Of course there will be no increase in compensation. The real cause of the layoffs was financial; GPT is just an excuse to fend off complaints from (now overworked) employees.


got a source for that company? Presumably it'll be semi-public knowledge. I'd be incredibly surprised if a business was already proficient enough to replace software teams. Call centre operatives or content-farmers maybe...


For the last 20 years in web development you've adapted to plenty of technology changes. Double that 20-year timeframe and you'll find developers working in assembler on a daily basis - just think about how far you are from there. AI is a very significant change (maybe even the most significant one in that 40-year timeframe) and a serious risk if your plan was to stop adapting, but it is essentially a tool to bring programming to a higher level of abstraction. Less work on form validations, more on building value.

We are still early adopters in using AI to increase productivity. Society as a whole is not yet aware of what is coming, and the results are essentially unpredictable. Your attachment to the top 5% salary band might be irrelevant in 10 or 20 years because we might live in some kind of eco-utopia by then. Or all wealth might move to AI-controlling entities, in which case your current standing is irrelevant and beyond your control.

Stop worrying. Focus on figuring out ways to increase your output with the help of the new tool or invest in AI to ride the wave you expect to drown in. You're way ahead of the 7.5b other people.


When google turned up (or before) we started to get more productive by being able to look up information and other people's code examples and so on. I couldn't have done my job properly in my whole career without search to help me.

Anyone could look up an internet reference and copy some lines of code without being special. Or could download a python library that does what was needed and write almost no code.

GPT is a kind of improvement of search which again makes the common knowledge base more usable. It obviously lets people do less work just as installing a module saves a mountain of work.

Yet jobs have not seemed to get more scarce since search engines appeared. We have built out a huge internet infrastructure and that might have skewed things because it needed such a lot of effort and people.

We've simply (IMO) done more and more amazing things because there has been an appetite for them. Perhaps there is a limit to software and there might be a time when there are enough people doing it that the salaries look less amazing but it need not fall off a cliff suddenly.

We tend to do the same things many times over anyhow.


I've had the opposite experience. Not just google, but StackOverflow. Did these tools make me more "productive" in the immediate sense of closing feature request tickets? Sure, absolutely.

But these are all crutches for the mind. I don't know about you, but my capability and skill as a developer has only ever been improved by meeting dead-ends, and doing the cognitively demanding work of actually understanding the system I'm working on at a deep level, and deriving solutions myself.

Google, StackOverflow and the like have definitely improved raw feature delivery rates, but they have indisputably made me dumber and less skilled.

As for LLMs, I'm on the fence. I am using GPT-4 in some code-generation utilities I've written to speed up mundane tasks, and I do not yet believe it impinges on my ability to learn, as these are just mundane tasks. But there's a good chance this will change if I ever get access to the 32K-token model, or when GPT-5, GPT-6, etc. are released and are much more capable.


I agree 100% that being given the answer is useless because then you haven't been forced to understand the problem.

I'm thinking of 2 special cases: 1) you can read manuals and get modules online. I lived in Africa in the days of 2400-baud modems, and information was very hard to get - always in books which I could not buy. Now anyone in the world can get information, so there's a way in which that allowed the software industry to expand, and fortunately there was huge demand.

2) Bugs which other people have solved before - a particular error message from Anypoint Studio today was because I had run brew and updated Maven without realising it, past the version that Anypoint supports. Searching saved hours of frustration. When this works I think it's a true saving.


"...I use ChatGPT on a daily basis to build projects..."

Really? What projects? I think it's telling that you haven't replied to any of the comments here - is this post just BS?

A take that really summed up how I feel about all this hysteria around LLMs and ChatGPT specifically: people are praying to it, treating it like an Oracle. Anything you can dream up, GPT can do. It's exhausting.


>is this post just BS?

I think it is, because the entire thing reads as someone with very little life experience. If they've truly been in the industry 20 years they would have experienced plenty of similar career anxieties - the .com boom/bust, outsourcing, and global financial crises.


I've been through all those (started working in '99 at a company that failed in Jan '01). Seriously, it was 80% y2k bug work during the first year of my career.

This has me far more concerned, as I went into detail in another comment in this thread.

Much of what I see in this thread feels like platitude-driven myopia, which brings to mind the famous Upton Sinclair quote "It is difficult to get a man to understand something, when his salary depends on his not understanding it"


"It is difficult to get a man to understand something, when his salary depends on his not understanding it"

So, it is "Your salary depends on your ability to convince others that you can not be replaced." basically.

Everyone is talking about "that other developer who has less experience, they will be the one to be replaced". The more I hear it, the more suspicious I get. Everyone is so sure of themselves.


I didn’t expect this thread to blow up while I was away for lunch, hence no replies until now.

I built 4 projects so far with it. Out of these, one is a clone of an app that has been charging 65 USD per year. Another was a rehash/improvement of an existing app that has been selling for 25 USD.

It took me under a day to build the prototype and a week to deploy it and build the installer.

The only thing left is to actually put them out on the market, really.

This was just too easy.


I thought trolling was banned on hn.


Hard to take this post and the people in it in good faith. OP feels like a troll, and everyone posting about how AI is replacing workers or is so amazing won’t post any of the code or talk in much detail about what it’s supposedly done.

Meanwhile, copilot x routinely spits out code with bugs, fumbles refactoring, misinterprets which variables are needed, and in general is a helpful but annoying autocomplete or template tool. Will it replace workers? Probably, but they weren’t really doing much to begin with then. You still need people who understand the business and domain, the systems in place, how they interact and connect, etc. I don’t believe a LLM is capable of that.

Also, people mentioning GPT10 or whatever might as well say it will be doing magic. There is no way to know how it will develop. How the models will be gatekept. How government policy will change or stifle it. And let’s not forget that progress isn’t exponential or linear forever, it’s generally logarithmic.


If you're in the top 5% of TC, you're not in danger for the foreseeable future; GPT-4 will make expertise more valuable, not less. It's the middle brackets that will have to fight to keep their jobs against the low-skilled workers who have now been elevated to average.


Yeah, I definitely won’t be quitting my current job with high TC until I am fired or laid off.


no offense man but -

> I have already been working at near burn-out levels for many years ... Rent is sky high ... I don't feel comfortable taking on a mortgage ... I am unable to enjoy any content, movie, or TV show anymore ...

you are struggling with yourself here, not GPT4.

You're clearly stressed. You say 'near burn-out', but being unable to enjoy things that you used to like is straight-up depression, you know?

Have you considered seeing a counselor?


I'm in a similar position. Top x% in my field, more than fair compensation, some financial security. I worked my butt off towards burnout to get there (currently slowly recovering).

I recently listened to a conversation between Lex Fridman and Stephen Wolfram [1]. Stephen said something that stuck with me for a while. Something along the lines of:

GPTs are statistical word-guessing machines. They regurgitate the average of the internet. You can get around that by playing with temperature and top-k, but that'll only make them less accurate and increase the chances of them hallucinating.
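(Aside, for the curious: temperature and top-k just reshape the next-token distribution before you sample from it. A minimal sketch of the idea in Python, with made-up logits rather than anything model-specific:

    import numpy as np

    def sample_next_token(logits, temperature=0.8, top_k=40):
        logits = np.asarray(logits, dtype=float)
        # keep only the top_k highest-scoring token ids
        top = np.argsort(logits)[-top_k:]
        # higher temperature flattens the distribution, lower sharpens it
        scaled = logits[top] / temperature
        probs = np.exp(scaled - scaled.max())  # numerically stable softmax
        probs /= probs.sum()
        return np.random.choice(top, p=probs)

Turning the temperature up buys you more surprising tokens, not more correct ones, which is exactly Wolfram's point.)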

You, however, are an expert. Better than the average of the internet. Years of experience, and likely with higher standards. You understand the requirements, architect maintainable systems, and align with the company goals. You learned how to read requirements from context and between the lines.

Yes, GPTs can do highly complex things, and sometimes it feels magical or even scary. But you'll have to prime them to do so, and you'll have to guide them. They are your junior programmer, and you are their lead. You understand context, and they don't (simply because they don't have any).

Start to accept that from now on you have your own personal junior dev. Learn to guide him in the right direction, and learn to hand him precisely specified tasks. I'd also suggest looking into GitHub Copilot Chat to see where this is going.

If we're lucky, they'll get better with time and lots of research, but that's (likely) not your concern.

Uneducated opinion:

From this point on, I think it's likely the corpus they consume gets worse because they are starting to consume their own generated, average-of-the-internet content. You built your own Reddit bot, and know how easy it is to get generated content out there. It's a problem that isn't solved yet, and with time, it's becoming even more difficult because the number of different public models (which suffer from the same challenges) is steadily increasing, and therefore detection of such content gets a lot harder. The technology might get exponentially better, but in the end it boils down to the corpus they consume and learn from. This will slow things down, and will give you some time to learn to use these tools to your advantage, not see them as competition.

[1] https://lexfridman.com/stephen-wolfram-4/


The coming LLM content ouroboros is a real problem. Garbage in, garbage out.


AI is a new kind of leverage, and you will need to figure out how to adapt and use it. I think you probably have more time to do this than the hype would like you to believe though.

If you've reached the top 5% in your field, I'm sure you will be able to do this.

If AI is giving you anxiety, maybe take a break from reading and thinking about it for a while.

It sounds like you may also need to find meaning outside of work. You mentioned you are burned out, and it sounds like a large part of your identity and self-worth are tied to your work. This has helped you be successful, but it can be unhealthy for you as a person, a human being who is more than a job title.


>I don't feel comfortable taking on a mortgage despite having multiples of the needed deposit

This is great. It means you have more financial security than most in the event of a worst-case scenario, and since the financial aspect appears to be underpinning a lot of your concerns, give yourself some credit for that.

The most valuable antidote to the prospect of AI replacing skilled workers is depth of experience. Before your job becomes obsolete, the jobs of every engineer with half your experience need to become obsolete first, and we're a very long way from that happening.


Some anecdotes:

A professional translator I’ve known for more than twenty-five years has, within the last few months, seen all of his sources of work dry up. He worked in a field that required specialized knowledge and experience and for which there was steady demand, but there was little human interaction: clients would e-mail him texts, and he would e-mail back the translations. He is angry and upset because he believes—probably correctly—that he can produce more accurate translations than GPT-4. But the price difference is so great between machine and human translation, and the average quality difference now is so small, that his clients have dropped all of their human translators. They have offered him some work checking machine translations, he says, but the pay is much lower. He is now, in his sixties, trying to start a new career.

I myself was a professional translator for twenty years, until 2005, when I took a university job. Most of the translation work I did—for which I earned a good income—can now be done much faster and more cheaply by LLMs. In the mid-1990s, I started also working as a lexicographer—writing and editing entries for bilingual dictionaries. That was difficult work, but I enjoyed it and felt proud to be able to do it. Before I retired from my academic position earlier this year, I had been thinking of returning to dictionary work part-time in my retirement. There’s no future in that now, though, because LLMs have pretty much eliminated the need for human-edited language dictionaries.

Both my daughter and the daughter of a friend of mine are freelance illustrators. While both of them are continuing to get work now, they are following developments in Midjourney etc. with growing worry.

In contrast, a couple of weeks ago I got together with an old friend about my age whom I hadn’t seen in a while. He worked for many years as a glazier, and since he retired on a good union pension he has kept busy repairing regulators for scuba-diving equipment. I tried telling him how excited and worried I was about AI, but he was dismissive. It took me a while to understand why: The work he has done—installing windows and mirrors, fixing precision devices on which people’s lives depend—is not likely to be done by machines any time soon. It’s those of us who have done most of our work in front of computers who need to worry.


My partner and I often discuss that if civilisation collapses there will still be work to be done. I don't think it's necessarily a bad thing for people to do more jobs offline. It might even be healthy in the end, though obviously there will be a lot of devastation. Sometimes you have to scrap things and start over with better foundations. I can only pray that some of these efforts will be put into the real-world problems that have been mounting of late, such as rewilding, etc., but being a dreamer keeps my sanity. :)


I've been using ChatGPT to produce output for very basic stuff, and it's 50/50 whether it works even for that. It fails 100% of the time on complex stuff.

It's super useful for finding out how things work. Asking it how to use an SDK to do something works really well; it even surfaces things I had searched the docs for repeatedly.

But for complex logic that I'd rather be too lazy to write, it literally just wastes my time. I spent an hour trying to get it to do something, modifying the code to make it work, and ended up writing my own version from scratch.


My experience to date is very much in line with yours. ChatGPT is basically returning a mashup of what it previously found on the internet.

Statistical correlation != innovative solutions to real world problems.


Back when personal technical blogging was a thing, I produced a lot of valuable content for myself and my employer. It was valuable in the sense that it was read and positioned us as experts in the field, which led directly to sales. Now, companies are expected to produce content on, say, LinkedIn in order to get clicks, just to be seen as an active company. What is produced now is mostly PR fluff written by people so distanced from the technology that it is essentially meaningless. We had a young 'social media content' person who quit after seeing ChatGPT. On reflection, they were right. Producing meaningless marketing content and 'organics', and liking stuff to get likes back, is something an AI can do. ChatGPT can produce '100 words on <company name>'s innovative AR product that is going to change business' as well as any meatbot can.

As per my anecdata above, if you are working in a field where the value you add is pretty low, you can be replaced by ChatGPT quite easily. In some ways cookie-cutter web development is like that. If your job can be outsourced to cheap and less experienced (in terms of domain knowledge) people, it can be 'outsourced' to ChatGPT. Do hard stuff. Not hard as in being able to build a web page with this week's new framework, but hard as in solving a problem that other people cannot. If other people can't solve the problem, it doesn't matter whether you're competing with an AIbot or a meatbot.


Have you actually used GPT-4? I just subscribed and asked it to write some boilerplate for a language I'm not familiar with (Swift; I work mostly in Java). I did this mostly to replicate what it's like for non-technical folks who believe AI can replace devs...

I was forced almost immediately to go onto YT and find a Swift crash course. I'm sure I could've continued prompting it to explain things, and I probably could've extrapolated how to implement the code, but with my experience a 30-60 min condensed video seems more helpful to me, idk.

I guess my point is that there will still be a need for those with technical knowledge. AI isn't at a point where it can generate fully functional apps, and I don't really believe this is coming anytime soon. GPTs are relatively new in the AI space, and there's already plenty of concern around their future viability.

I think a lot of the fear is just bad-faith marketing from OpenAI and all the wrapper startups that have sprung up like weeds recently, which all love to claim their wrapper is coming for SWE jobs. If I see one more "RIP software engineers," I might literally laugh myself out of my chair.

I saw one hilarious demo on Twitter from GPT-engineer. It claimed to be able to generate entire projects, and it did create a handful of project files with ~20 lines of boilerplate code. It's pretty comical that we're all afraid of this tool, myself included.

And to everyone saying "well what about GPT-8": GPTs' fundamental flaw is hallucination, meaning GPT-8 will still suffer from most, if not all, of the same issues as GPT-4. I don't really believe any version of a GPT will threaten SWE jobs en masse. I've also been seeing some interesting articles warning that training AIs on AI-generated output will effectively kill the models within a couple of generations, so let's see if "AI is the worst it will ever be today" actually holds true.

Regardless, I don't think you have much to worry about; it sounds like you're at senior/principal level already, and AI is the least threat to anyone at those higher levels. Rest easy, you'll be alright.


In 2019, I read "AI-Superpowers" by Kai-Fu Lee, where he went through most of the worries we have today. In the final chapters, however, he started talking more about how we, as humans, need to adapt, and he also mentioned UBI as a (possible) way forward.

At some point, AI indeed will "replace" what people like you (and me, who have pretty much the same background as you) do. We are just tools that know how to code.

Quite a big part of my time, as part of a large development team, consists of fixing bugs. I believe AI in the near future will be able to solve those, based on input and expectations. Someone still needs to verify the code, and take responsibility for the changes, before they're released to production. This is where I think developers will be needed. I don't think you'll ever find a PO so frisky as to just release it unverified, except maybe in very small domains.

But.. we'll all get hit. And the way I see it, developers are not in the direct line of fire.

I hope UBI never actually happens, even though it's being tested in several places around the world. I also hope a One World Government will never happen, even though it looks like we're heading for one. I also hope the WHO, UN, and NATO will not gain the power they seem to strive for, but I don't see much resistance.

Instead.. what I do personally, which might be a solution for you also, is to try to embrace it. These years are quite unique in the history of humankind. It's a mixture of 1984, Terminator 2, Ex Machina and more. I have a bad feeling about it all, but at the same time I'm curious, and I still have hope. But also, I'll try to resist the sinister parts of it whenever I get the chance.


Scifi just became real, and I don’t like it.

In many ways GPT-4 does a better impression of a human than any we have seen in Star Trek. Data couldn’t understand humor.

We were so sure of our own complexity that we projected weaknesses onto the AI characters. Then an AI comes along cracking jokes on any topic you throw at it, in any way you like.

Yet some still suggest I must be trolling with my post. I know my take is quite pessimistic, but it's far from exaggeration; I didn't even get as far as suggesting any sci-fi-level dangers in the longer term.


I was speaking to a lead dev today. He said he thought the biggest risk AI posed was to junior devs. He currently gives specs to junior devs, who do the work, and then he reviews it. He thinks it's nearly possible to give the specs to an AI instead: it does the work, he reviews it. So maybe no more need for junior devs. He thinks this is especially possible once AI has access to your whole framework and product.


Except, as we saw with the Google article today, companies will soon get wise to the fact that feeding their product source code into a cloud-based GPT model is a great way for that source code to leak out.

Sure, maybe OpenAI or Google will sell a "for-business" model for thousands upon thousands of dollars a year, but will people really trust that it's not phoning home? Will there be rate limits on requests that make it impractical for everyday tasks?


Glass can be half empty or half full:

1) You currently have a top 5% salary; not tomorrow, but today. You probably don't have to worry that much for at least the next 5 years, and will likely keep enjoying a similarly high salary.

2) You currently have 20 years of experience - that's a lot. Compare that to someone who is still studying IT or just starting their education.

3) After 20 years of working in the industry, you most likely have some savings and a war chest.

4) You are probably in a better situation than someone just starting their career - most people work around 40 years in their field before retiring, and you are already halfway there.

5) Worst case, maybe you won't enjoy a top 5% salary but a median tech salary - which is still way above the median salary in most countries. You managed to live on a lower salary before, so you will do fine.

6) With your experience, it is probably easier for you to find a remote job, if you haven't already. You don't seem to have a mortgage, so it would probably be easy to move to a cheaper city or even country, even if just for a couple of years.

7) Many expats live on retirement visas in Southeast Asia (Thailand, Malaysia, Indonesia, Vietnam) - life is so much cheaper there. That's assuming you are currently in a more developed country.

8) If salaries do get lower, you will have 3 choices:

a) live on a lower salary like you used to, but at least your job will be easier because more of it is automated by LLMs, so probably less stressful

b) it should be easier for you to learn and pick up new specializations with an LLM as your copilot: Rust, machine learning, VR/AR, robotics

c) embrace it and use it to develop your own product/SaaS/app as a side hustle/hobby - it's probably the best window of opportunity for indie devs right now.


You are correct. I took over a lead role at a small startup 6 months ago, and I have delayed hiring a UX person because ChatGPT is so helpful.

But at the same time, I think you are underestimating how slowly society changes. MP3s of popular music were available on the Internet in 1996, but CD sales didn't fall below the 1996 level until 2005.


I’m amazed at the number of people responding to this and answering the question they want to answer, not the question the OP is asking.

> It just feels like anyone could do it.

This.

Yes, you're 100% right. No, AI will not take away all the jobs; that's not going to happen, but no one thinks it will. Anyone calling that out and saying "don't worry! AI is just a tool…" is beating a strawman.

The reality is that doing difficult, highly paid work will become significantly easier, and a much wider pool of people will be able to apply for jobs that previously only people demanding significantly higher wages were actually able to do.

…and that will mean a surplus of people wanting jobs, to the delight of large businesses, which will use it as an (entirely reasonable) excuse to employ cheaper workers.

Flat fact: 20 years of experience and a college degree mean nothing if someone can do the same job for 60% of the pay, and 1) there are people who will accept that, and 2) the technology will increasingly enable this scenario.

Certainly, other roles will appear in other areas, but face the blunt reality: the days of scarcity-based high-wage roles for programmers, such as exist today, are numbered.

If you want to maintain your income and level, you probably need to start looking around for ways of doing that now, before there are 60 people applying to do your job on Fiverr.

If you stand still, you are correct, you’re basically screwed.

…but, you don’t have to stand still.

Remember, you are better than the majority of the people who will be able to do your current job in the future using AI. That’s why you have your job now.

Lean into it: use your professional development to make yourself better than you are now by using the tools.

That seems like the only really sensible advice I’ve seen recently.


Today, ChatGPT can do zero percent (0%) of the actual programming I do. I use ChatGPT to find a library function, or to find command-line options and other simple stuff.

But actual programming? ChatGPT can only write completely trivial scaffolding code. I don't understand how people get a significant productivity boost out of it when ChatGPT can't do design, can't do algorithms, can't do concurrency, can't do architecture. I like ChatGPT, but I don't see how it can even double my productivity.


You're probably not talking to it the right way. You need to build up the story. It doesn't look up information; it only maintains coherence. You need to bring up the key points of your design and have a chat, then start asking for code. Unless you have starting code; then give it that and chat on top of it.

I am getting at least a 5x boost in productivity.

Just last night, I solved 7-8 bugs consecutively.


My prompts are pretty basic, but I also have to spell everything out or the output will be wrong. I can poke and prod until I get something approximately correct, but I have to give step-by-step instructions for the simplest things, like you would when coaching a slow student.


Yep, that's exactly the thing. If a random person + GPT makes a programmer equal to me in skill, then me + GPT makes a programmer who's at least 150% of my skill (for some definition of skill), so it's all good.


I know I will be taking devil's advocate to another level, but…

Is there such a thing as a 150% developer? Is there a market for the 150% developer?

None of these questions have yet been answered or tested by the market.


Your fears will manifest in reality. Just stop being afraid.


Your statement can have multiple interpretations. I assume you mean this one: Your fear will be a self-fulfilling prophecy; to prevent that, stop being afraid?

Of course, that's like telling someone who is depressed to "just be happy".


I think he means that thoughts have an actual effect on the unfolding of the physical world.


Which is a "self-fulfilling prophecy", also known by a new-age term as "manifestation".


I hear your stress, and that sounds awful to experience.

Imagine experiencing the leap from assembly to Python almost overnight. That would feel terrifying as an assembly engineer, but people would realize that, at least for certain types of projects, they are now significantly more productive using Python. Moreover, that new speed quickly becomes the expectation. Assembly programmers have their niche, but there's significant demand for Python people. Demand grew to meet supply.

I think artists are experiencing a similar crisis. Even I, with no art skills, can make beautiful pictures. However, my friends who were already artists are making breathtaking things by embracing these tools. My view is that these tools are ultimately going to move artists from assembly to Python.


I'm optimistic partly because OpenAI still ships a lot of mistakes despite knowing full well what AI can do. They're rapidly hiring, they basically have a bottomless budget, and yet there are third parties doing what they don't have the manpower to do.


Draw salary as long as you can, save as much as possible and buy land outright somewhere cheap. No mortgage. I saw where it was all headed back in the early 2000s. I was in my mid 30s and feeling the grind back then.

Rent? Fuck that. Your life is on the line and this is for all the marbles. You find a way to go rent free. Maybe do like the bums are doing all over the place, live in a van, etc. Nobody seems to care. Save that money and then get out.

I did this, ending my career in IT back in 2006. Best move ever. When they fired everyone on the last contract I was on, I can't tell you how good it felt.

By the way, the house I lived in at the end... Remember the house in Fight Club? That would have been an improvement.


That's what I am thinking of doing if things take a turn for the worse. I am planning to do it in 4-5 years' time. I already have enough to buy a small house in the countryside back in my home country. I could be building my own projects for fun.

I think you nailed it. My current fear arises from the fact that I am in one of the highest cost-of-living cities in the world relative to wages. This makes the fear more imminent, as I would basically have to move if my salary went significantly lower.

That wouldn't be the case where I am originally from, in Scandinavia.


Many companies that sell applications realize that ChatGPT can easily replace their value prop. Even if it doesn't get 100% of the way there, people still much prefer to have fewer tools.

A very large, difficult-to-comprehend shakeup is happening at all the tech companies in the world. Stuff is falling apart in a big way, and it will be interesting to see how it comes together again. Just think about Stack Overflow, and then think carefully about every other service you use online.

I think some of you are in denial about where your salary comes from. When these companies can't compete they aren't going to continue paying you, and they will probably fire you long before then.


I think you are worrying too much, not least because you are destroying your enjoyment of life right now.

If the worst happens you will adapt, it will suck in some ways, but life will still be worth living. Maybe you will enjoy not working at burn out levels all the time.

The worst may not happen. You may find yourself in high demand as someone who can really use AI productively. Who can say?

Certainly it would be prudent to keep a close eye on things and make sure you are one of the people that can keep adding significant value with AI.

In the meantime, I implore you not to ruin what is good now because of fears for a future that may not materialize, or will do so in ways you don't anticipate.


>> They don't give you healthcare, do you think they will give you CASH like that???

Here is a tip: take no platitudes, and change your life in a way that works around your worst fears. The healthcare bit you can solve by emigrating to a country with free healthcare, for example. If you have money for a deposit in the place where "rent is sky high," maybe you should consider a different place to live, maybe even a rural location.

Nobody really knows how things are going to develop. But if our brains have gotten us this far, maybe our brains will get us out of a potential future crisis caused by AIs. Just hedge your risks.


> I have been already working nearly at a burn-out level for many years to reach this comp.

If you have the willpower to do that, I wouldn't worry that much. Maybe use the extra productivity ChatGPT gives you to learn some skills that LLMs will take a long, long time to learn, if ever: specialization. Industrial programming, scientific programming, etc. One of the reasons I'm not worried about AI is that I work with machines so specialized there are maybe half a dozen people on the planet with my skillset, and almost none of it exists on the internet at all.


I have experience in hardware/embedded and have done industrial stuff, monitoring 100 to 1,000 factories.

But the compensation is much lower in these fields in Europe/UK, compared to web :/


That sounds like depression & anxiety to me rather than GPT anything.


I am a bit concerned too. The current state of the tech is nowhere near being able to replace a senior engineer, but where it will be 5 years from now is a scary thought.

Yesterday I tried to get GPT4 to generate a Dovecot Sieve filter to move .ru email to Junk and mark it seen. It failed 3 times and hallucinated syntax that looked plausible but is not actually supported by Dovecot. My hope was that I wouldn't have to spend 15 minutes Googling and could get an answer faster from GPT4, but in this case it did not succeed.
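
For what it's worth, a hand-written filter for that is only a few lines of Sieve. A minimal sketch, assuming Pigeonhole's "fileinto" and "imap4flags" extensions are enabled and your junk folder is literally named "Junk":

    require ["fileinto", "imap4flags"];

    # Mark mail from any .ru sender as read and file it into Junk.
    if address :domain :matches "from" "*.ru" {
        setflag "\\Seen";
        fileinto "Junk";
        stop;
    }

Note the setflag has to come before fileinto, so the \Seen flag is attached when the message is stored.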


I do use Google in combination with GPT4 and give it tips to help. But it has cut down my Google searches by a huge factor.

As for 5 years from now: imagine all of us using ChatGPT over that time, along with all the enterprise integrations of it directly into company workflows through Jira etc. (if not ChatGPT, closed models could be trained in that time).

AI could even learn corporate politics from all the Slack, Jira, etc. conversations. If this hype survives 5 years and we keep putting data into it, it will only get more powerful from here on.


Programming is the reification of making decisions -- plus a lot of shiny fashionable chrome. That is, most of it is about thoroughly understanding the meaning of a process, and getting it right.

LLMs don't understand meanings at all. They are anti-compressors. If the information content of a message is how unpredictable it is (and therefore how incompressible), LLMs generate the most predictable, least information-dense output from your prompt.
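
To put that in the usual information-theory terms (my own gloss, so take it loosely): the surprisal of a token x is

    I(x) = -log2 P(x)    (bits)

and greedy decoding picks the token with the highest P(x), i.e. the lowest surprisal. Step by step, the model steers toward the least surprising, least information-dense continuation; sampling with temperature only loosens that slightly.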

So: focus on meaning and decision making. Set LLMs to do fluff and shiny bits.


Re: the mortgage, this will be a problem only if you cannot maintain your salary and the housing market collapses.

Otherwise, if you buy a house and later decide you don't want to (or cannot) pay the mortgage, you can just sell the house and buy a smaller one (or move back to renting).

Sure, the risk isn't zero, but if you keep renting and the housing bubble keeps going on you may also burn a lot of money in the end. Nothing is risk free, and you're in a better situation than most.


A couple of angles. First, can you elevate your skills to more of an architect role and delegate most of the specifics to AI? Wrangling AI to produce something fast, good, and cheap will be a highly sought-after skill until AI takes over completely. Second, can you moderate your expenses? Even here in Silicon Valley there are lower-cost places to live, for example modular houses (there are many large parks here), or possibly downsizing into an ADU in someone's backyard.


It's one of those things where it could go either way: worst-case scenario, your fears and worries become reality, and most web work gets outsourced to AI handlers/managers, with the actual AIs doing the work. Best-case scenario? AI ends up not replacing dev jobs but turns into a hyper-powerful toolkit for them.

Even if you take AI aside, it remains a fact that nothing will ever stay as it is, things will always move, the world around you will always change (we get older, everything gets re-evaluated constantly as we age). I thought we'd reached the ultimate end-goal with Turbo Pascal, then, Borland Delphi — why use anything else, ever? This is not a particular fault of the AI era, it's just one that hits you and your profession a bit closer. The exact same thing happened to SO MANY OTHER PROFESSIONS over the years.

There was an article a long while back about a US law firm that decided to try to outsource a huge case by shipping tons of boxes to a specifically trained "warehouse" of workers in the Philippines, knowing that their law (and their ability to speak/read/understand English) is similar enough to that of the US. And, from what I recall, it turned out that while this was a fantastic business decision, which saved them a lot of money by outsourcing the most expensive part of those huge cases, it also became clear that "being a lawyer" may have lost its long-held promise of prosperity (at the bottom end).

I say this because that article was written over a decade ago (if I remember correctly), and even today, law schools are still packed, lawyers are still making a lot of money, everything is still... sorta fine (for them).

You will NOT live forever. So the question is only whether you can sustain your career long enough to make it through and then enjoy your ACTUAL LIFE once you're reasonably safe financially. I am very, very confident that you will be able to do that, even with the rise of AI.

I look at all the mid-to-large-scale projects I've been involved with as a developer, and 80% of the real work is figuring out how to approach certain things and how everything should tie together. Writing the code itself was never the problem. Never. For AI to take over completely (aside from writing certain functions or boilerplate starters), I'd argue we've still got a very long way to go.


yes. anyone could do it. can anyone do it well? not yet. not for a while. possibly never.

a human is still very necessary in the loop. there is no loop without the human. currently, there should be no loop without one. cost prohibitive if nothing else.

the better the instructions, the better the output. something must be done with the output, and it has to be done safely and consistently.

there will be new challenges. there will be new careers. (lol no prompt engineering is not it.) there will be new and old jobs.

content not so compelling? take a walk. touch a hand tool, or a crochet needle. escape the screen. blame the content before yourself. was it ever that good? or just good enough? have you already read the good stuff? maybe nothing good as of late. maybe chatgpt is simply causing you to question what makes content… good.

spend a little less. save a little more. your compensation is likely “a little high,” and may suffer a little bit. keep solving problems and it isn’t yet an existential crisis.

where are we going? too many variables, someone will successfully guess, plenty will unsuccessfully. somewhere new yet familiar.

if any of this wasn’t useful, didn’t resonate, or didn’t even make sense, throw it out. just like one should do with language model output.


Honestly it sounds like you would benefit from working with a therapist.

Also, higher quality media would probably do some good too! Shows that are well-written, funny, and emotionally interesting weren't written by AI. Plenty of schlock was written before the advent of AI, so it's not exactly fair to blame AI for unenjoyable content. Check out Abbott Elementary, What We Do In the Shadows, The Lives of Others, or The Seventh Seal.


Have you used any of these AI tools? If you have, you should feel more confident, not anxious. It will make bad programmers worse and good programmers better.


Well, yeah, like I said I have been using it to build several projects.

My productivity definitely went up by at least a factor of 5, as a safe estimate.

This is amazing, all good, yeah. But doesn't that mean companies need 5x fewer people?


I don't think you should even be playing those mind games with yourself. Focus on what you can control, which is leveraging the tools available to you, and keep some grounded perspective. There's a lot of noise. There's also a bunch of coincidental events happening simultaneously (the economy, post-COVID recovery, company right-sizing, etc.) that are obscuring things.


Hmm, actually, thank you man. Perhaps you're right. I didn't see it as playing mind games with myself; I always thought of it as planning for the worst. But it is certainly a debilitating habit.


well put. it was already a headache trying to unblock devs who went down terrible stack overflow copy/paste rabbit holes ... gpt4 only 10x'd that mess.


The value developers add isn't writing code; it's solving problems. Even more so at the more experienced end of the spectrum, where you said you are.

If it were that easy for AI to step in and replace developers, then we'd have seen the same thing with "codeless" frameworks.

Maybe one day in the distant future it will be a different story. But I'd wager you'd be dead before that happens.


Yep. No-code/low-code tooling never really took off beyond some small use cases for a pretty simple reason that will also apply to non-engineers using GPTs:

There is always a tension between having a simple-to-use tool and getting a custom result. The simpler the tool is to use, the less likely you'll be to get a custom result. Tools that are great at generating custom results are rarely simple to use.

GPT models demonstrate this very well. If you attempt to prompt one without any knowledge of a field, you can get something back... but it won't be unique. To get a workable result for a more complicated problem, you need to know how to prompt it, how to review what it spits out, how to modify your prompt to correct it, and how to stitch the eventually working piece together with other pieces. And so on.


Just remember that these LLMs like ChatGPT hallucinate.

We still need experienced engineers to see these mistakes and fix them.

If anything, using ChatGPT will help with productivity and reduce the workload that is pushing us to burnout levels.

On the job security front, even if the LLMs get better at coding, they still need technical people that can articulate what is needed.

You will be fine, just take a deep breath and enjoy the present.


For all practical purposes, ChatGPT can only regurgitate (potentially in a new form) problems whose solutions were explicitly present in its training domain.

If you didn't fear "kids with Stack Overflow" taking away your job, then you really shouldn't fear ChatGPT either. As for it writing anything remotely complex... we don't even know how far away that is.


>with 20 years in

You need to be more concerned about boring, usual tech industry ageism doing you in than an AI taking your job.


I'm a mid-level developer. AI has made me a 1.5x developer. I see that most people don't care about using AI for their work, so I don't think I will be replaced. I might be the one automating other people's jobs in the future, just as I have done with programming. It is just another tool.


If you're a good engineer, you're good at getting inside the head of the customer, understanding their needs, and converting that into a technical problem which has a solution. None of that can yet be done by ChatGPT, and yet it's probably the most valuable skill you have.


I second this. Maybe it's time to try shifting your job slightly. Maybe you could get into something like consulting for your company, or being the "we brought a tech guy with us" person when sales goes to a customer. That's nothing that AI can do.


Creativity is the solution for your fears. GPT4 or 5 cannot be more creative than humans. Switch your career early to domains that require high creativity.


Some developers derive career stability by being the only person who understands a complex system at a company. This 'trick' has been undercut in a huge way.

It is much easier for another dev to understand your complex system now, which makes you replaceable.


Yes and no. There's a lot of latent or tacit knowledge about said system that ChatGPT can't ever guess but can bullshit about at length, sending you down nearly endless dead-end alleys.

But I agree in a sense: for certain scenarios, hiding behind complexity will become less and less of a cover.


If the 'tacit' knowledge can be discovered in code, docs, email, Slack, recorded conversations, etc., then LLMs can find it and integrate it into their responses. This makes you easier to replace.


Good luck with that and any hallucinatory tacit knowledge.


This is so weird to me. Have you been using the tool? Have you asked it to review code yet? Just because it gets stuff wrong sometimes doesn't mean it isn't extremely useful.


30-year mark here. These LLMs will change what is attainable. If we continue to just build the same things, those less skilled will be able to achieve them too. You can now do things that used to take a team. Embrace that.


Like you say, it's just going to make your job easier and faster. Programming is likely one of the last jobs to be replaced by robots.

Stop reading tech for a few weeks and you will get another perspective and you won't feel so neurotic.


I see the only solution as shifting your career path toward work with higher intelligence requirements.


Relax. ChatGPT, GPT-3, GPT-4, etc. are not at the point where they can replace a human programmer/engineer.

It's great as an assistant, but still very much needs a human.


Time for you to enter consulting!

Your services will be even more valuable in assisting others who are completely dependent on whatever form of AI is doing the programming.


With any luck, a couple of lawsuits over copyright will nip this ML/AI stuff in the bud and you won't have to worry as much.


LOL, I don't think so. If LLMs fail, they will have to fail on their own. Programmers will probably get more competition from non-programmer humans leveraging LLMs. But we have been saying everyone should learn to code for a while now; that job is on its way to being accomplished.


You don't think the law could basically make them infeasible? Right now they rely on leeching off content generated by real humans. Cut that off and they're going to have issues even training them.

Get something by Disney included in the training data by mistake and just watch what happens.


I don't think God can stop it now; not Disney, not the lawyers, not even the communists.

Your best hope is that LLMs in their current state aren't enough, and that we need to wait for another algorithmic leap or much greater computational power before we get to human level.


Litigation stops a whole lot of things in its tracks. Copyright stuff barely scratches the surface of the legal issues these things raise.

Other things are soon to come: trade secrets no longer being trade secrets because company staff are feeding internal documents and source code into a cloud model (a practice that doesn't really align with "reasonable efforts to keep something secret"). Or even just the basic question of who gets the blame when something generated by an LLM causes a significant problem. It doesn't take much in the way of judgments or settlements before an org realizes the risk of un- or under-reviewed output simply isn't worth it.


No offense but this sounds like it was written by GPT.

/s not /s

Just covering my bases. I really can't tell anymore.


Seems to me like you've become good at programming in (with?) ChatGPT. Nice skill.


Funny post, kind stranger. Start a farm and solve your problems one at a time.


This is an obvious troll; why do people even reply seriously to it?


That was my reaction too.

How could you be in the top 5% and be able to use ChatGPT to "automate" any significant portion of your work? It just doesn't compute!


Ok, I'll try, since I went through that stage myself in the recent past. I'll walk through a multi-step thinking process to outline the bigger picture; I hope I can communicate it well enough:

1. AI assistance helps senior people even more than junior people right now. For example, I can prompt ChatGPT far better about what I want it to generate, including key points to consider that juniors don't even know about. I can also start typing a solution with autocomplete in mind, knowing roughly how something like it will look, to get optimal autocompletes over multiple lines, and I can see right away if there are subtle problems that differ from my expectation. In total, one senior + AI can compensate for multiple missing juniors, but a junior cannot become that senior this way; GPT can be a good coach to get there over time, though.

2. This means, roughly, that companies will trim "fat" from the organization, but the remaining highly skilled people might earn even more (or at least not get fired), since they take on more and more of the responsibility previously held by those who were fired. It also means that bootstrapping something from scratch is now even more feasible if you're a junior: these new AI-assisted tools are quite good and can augment individual weaknesses (e.g., devs starting a company can use a marketing AI tool to build something good enough, instead of the typical abysmal UX/tech-blah-blah raving about their product).

3. Both directions in point 2 basically lead to every dev becoming a manager over time: not with humans as direct reports, but controlling powerful tools, up to autonomous agents that execute unsupervised on nontrivial stuff. As software creation itself becomes a commodity, STOP VIEWING IT AS A CRAFT. It's an industrialization moment, where especially the localized creativity/problem-solving part is automated away. Don't be proud of the code itself or similar; be proud of your ability to deliver products/projects in a fast and reliable way, so you can tackle more complicated problems within these products instead of always fiddling with the low-level details.

4. There is a paradox at play that is important to know: yes, we're in a race to the bottom when it comes to coding, so the price of getting software that does what someone needs goes toward (but never reaches) zero. Naively, this means that developers' time is also worth less, and they will earn less and less until they're obsolete. But the cheaper something becomes, the more demand increases, since suddenly a lot of stuff is now in the financial range where it is worth tackling with code. I think it's called "Jevons paradox".

5. For you specifically, it's simply moving up the abstraction ladder again, with prompting as a higher-level programming language (but being able to drop down into the lower-level messes if really needed, as always), plus a hard focus on the "shipping" part, including better/cheaper fully managed hosting. Over a year, expect to do more different projects/products in the same time period and/or tackle more sophisticated topics than before, since you'll be freed from the low-level code-monkey work rather soon.

6. We're at the _start_ of the curve and times are wild, and my previous points already include a healthy dose of future prediction (which nowadays means months instead of years). Still, it takes _many_ years for heavy changes to really propagate through the economy, since there are layers of people sitting on the brakes. Computers and the Internet took (and are still taking!) maaaaany years to shift businesses, especially conservative areas (of which there are A LOT).

Side note: while arguably there will always be some people left doing the really basic stuff everyone else builds on top of, who have to know their shit and go deep down the rabbit holes, there are convincing predictions that AI researchers will be jobless quickly, before many others, so I don't believe in that anymore. Better to ignore this path when starting today, and strictly focus on raw shipping skills, which probably means managing/orchestrating some flavors of product-building bots instead.

Bonus points: I'd buy the house, now, on fixed terms. Inflation is a thing that helps you here when you have a mortgage. Also, if you become unable to make the monthly payments on your house, how is that different from not being able to pay your rent? It's not that much worse in practice. The only thing I strongly suggest is moving close to a big/growing city instead of buying property in a maybe-beautiful village somewhere unknown; the curve is already swinging back, with people moving back to cities.


Here is what ChatGPT (3.5? the free public version) says about your post. For my two cents, it's rather annoyingly generic.

------

I understand that you're feeling a sense of uncertainty and anxiety about the future of your career in the face of advancing AI technology. It's normal to have concerns about job security and how automation may impact the field you've dedicated years of expertise to.

While it's true that AI and automation are transforming various industries, including web development, it's important to remember that technology also creates new opportunities and demands for skilled professionals. The key lies in adapting and continuously learning to stay relevant in the evolving job market.

Here are a few suggestions to help address your concerns:

Stay updated and embrace new technologies: Continuously learn and explore emerging technologies in your field. AI itself can be a powerful tool that enhances productivity and efficiency. By staying informed and incorporating new technologies into your skill set, you can remain valuable and adaptable.

Diversify your skills: Consider expanding your skill set beyond web development. Look for adjacent areas or emerging fields where your expertise can be applied, such as data science, machine learning, or cybersecurity. This diversification can make you more versatile and open up new opportunities.

Focus on problem-solving and creativity: While AI can automate certain tasks, it cannot replace human creativity, critical thinking, and problem-solving abilities. Emphasize these skills in your work and seek out projects that require innovative solutions.

Network and collaborate: Build a strong professional network within your industry. Engage with peers, attend conferences or meetups, and participate in online communities. Collaborating with others can lead to new opportunities and provide a support system during uncertain times.

Keep a growth mindset: Cultivate a mindset of continuous learning and adaptability. Embrace change and view it as an opportunity to grow and evolve professionally. Develop a willingness to explore new areas and acquire new skills as needed.

Regarding your concerns about UBI (Universal Basic Income) and healthcare, it's essential to stay informed about policy discussions and advocate for fair and equitable support systems for workers in the changing economy. It's a complex societal issue that requires ongoing dialogue and engagement.

Remember, you are not alone in your concerns. Many professionals are grappling with similar uncertainties. By proactively seeking ways to adapt and remaining open to change, you can navigate the evolving landscape and find new opportunities that align with your skills and interests.



