Ask HN: How do I thrive in an AI dominated future?
32 points by evanchisholm on Dec 16, 2022 | hide | past | favorite | 55 comments
I'm currently in high school, and up until recently my plan was to become a software engineer and eventually start a startup. Over the last several years I've spent a lot of time programming for fun in my spare time, and in my opinion I've gotten pretty good. I've built some fun and cool things like chess engines, video games, and 3D renderers from scratch, and competed in my national computing contest with a solid placement. However, as I've watched more and more generative ML models pop up seemingly every month for the past year or two, my future feels more uncertain. It seems like a large portion of jobs will be wiped out sooner than many people expected, much like how AlphaGo achieved superhuman performance in the game of Go nearly a decade before most people expected. I'm not even 100% set on tech as my career, but I can see these models outperforming humans in nearly any career. ChatGPT already writes better essays than I do, and Stable Diffusion generates better drawings than I do. What's next? Creating better lesson plans in school? Coming up with more convincing arguments in court? AI programming is not quite as good as I am yet, but it's definitely on the horizon. Right now it feels like the only two viable options are to leverage these new technologies for business or to become one of the researchers creating these large models.

Tldr: what would you do if you were an ambitious teenager interested in tech entering a world soon to be dominated by large language models and other generative machine learning algorithms?




If it turns out that AI, with respect to programming, is just a productivity amplifier for programmers (which is all it is currently), then it will be more like a rising tide lifting all boats. I think you'd rather be in the mix and super adept at any new techniques; you'll be more competitive than those who fail to adapt. Otherwise, look at what OpenAI suggests (mentioned in another comment): find the niche, create and train the AI for that niche, profit.


If you want to get into tech, get into the ML space. Teach yourself Stable Diffusion. Like, really dig into it and learn how it all works from start to finish. The model weights were released, so it's open source(ish), but most importantly, learning about that will let you look behind the curtain and recognize the pieces that go into, say, ChatGPT. It may seem like magic, but OpenAI has published papers on the underlying technologies. I'm not trying to say it's not impressive, because it is. I'm trying to say that decomposing AI into an understanding that it's really a series of ML models you can interact with, tweak, and tune is a skill that's not going away, and so will be useful while the machines take over. Obviously this isn't for everyone, but if I were in your shoes, that's the direction I would go to future-proof my career.


I think this is the wrong move. You're not going to compete with a PhD. Instead, focus on the long tail: keep learning frontend or whatever you like, and put a pretty face in front of the AI. Make awesome stuff the best way you can; don't worry about matrix math all of a sudden if it's not already your thing.


This person is a teenager. They can of course get a PhD.


It's so hard giving advice nowadays. Pursuing a PhD will take him around 10 years, and who the hell knows what's coming in 10 years. If he wants to maximize his earnings, it's quite possible that working for the government in whatever field is a much better route.


But why wouldn't AI be able to apply AI theory to itself and improve itself? I mean, if it can replace me as a software engineer, why can't it replace everyone who applies ML algorithms?

Or do you mean for him to become one of the few people who can create new AI algorithms? That's like the top 1% of intelligence, I think, and the barrier will only get higher.


I'm a software developer/engineer with 25 years experience. The coding aspect of software development is actually a very small part of the overall job. I don't expect AI to significantly change my job in the least.


I wonder, can you ask ChatGPT to write a compelling argument convincing a room full of senior software engineers that they need to switch to a new technology? Or to write a series of Jira tickets to complete a feature?


If ChatGPT can write JIRA tickets then I will accept the robot overlords. That is a burden worth more than my humanity.


Honestly I think I'll try this next week. I bet if you write "Turn this list of bullet points into a Jira ticket a React engineer would understand:" it'd get you closer than you'd think!
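
A minimal sketch of that experiment (the helper below and its example inputs are hypothetical, just assembling the prompt string you'd paste into ChatGPT or send via an API):

```python
def jira_prompt(title: str, bullets: list[str]) -> str:
    """Wrap a feature title and bullet points in a ticket-writing prompt."""
    body = "\n".join(f"- {b}" for b in bullets)
    return (
        "Turn this list of bullet points into a Jira ticket "
        "a React engineer would understand:\n\n"
        f"Feature: {title}\n{body}"
    )

# Example usage with made-up bullet points
prompt = jira_prompt(
    "Dark mode toggle",
    [
        "add a theme context",
        "persist the choice in localStorage",
        "respect the OS preference",
    ],
)
print(prompt)
```

From there it's just a matter of pasting the result into the chat box and seeing how close the output gets to a ticket you'd actually file.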


AI will create more engineer jobs than it will destroy. Learn math and computational complexity theory, and then use AI to augment your productivity as a researcher or engineer. Math knowledge will be indispensable.

Don't worry if you're not a math major. You can and should get a degree in mathematics alongside a software engineering degree.

AI is a major factor in applied computer science and engineering. However, there is more to compsci than just "AI". It will make you more productive, provided you're above the threshold below which engineers will indeed be automated away.


> AI will create more engineer jobs than it will destroy.

I second this, but humanity will soon hit (or has already hit) a wall, because there are simply not enough talented people to work on these systems.


Just thinking about ChatGPT: it

* doesn't "care" if its output is true or false

* is weak at humour

which (for the moment) suggests that there remains a niche for people who have integrity and can laugh at themselves.

(Should that niche disappear in the future, at least one won't have lived in a shameful manner before then:

  Losing one glove
  is certainly painful,
  but nothing 
     compared to the pain,
  of losing one,
  throwing away the other,
  and finding 
     the first one again. )


It really is terrible at humor. I tried asking ChatGPT some dumb joke questions like "if I melt dry ice, can I swim without getting wet?" Any human being would know that this isn't a serious question and is intended as a joke. But ChatGPT gave me a short lecture on the actual chemistry of dry ice and then clarified how dangerous it would be to swim in or with dry ice. The joke went way over its head.

I asked some other similar questions. ChatGPT has zero sense of humor.


I always thought those examples demonstrated that ChatGPT has a deadpan(1) sense of humor, not that it's bad at humor.

1: I had forgotten the term for this and Google didn't help, so, of course, I asked ChatGPT:

Q: "What is the term used for the type of humor where someone asks a joke question, and the other person replies in a serious and factual manner on purpose?"

A: 'The type of humor you are describing is often referred to as "deadpan" or "straight man" humor. It involves one person making a joke or humorous statement, and the other person responding in a serious, matter-of-fact manner, as if the statement were completely genuine. This creates a contrast between the humorous intent of the joke and the serious response, and can be very funny when done well. Deadpan humor is often used in comedy sketches and stand-up comedy routines, as well as in everyday conversations and interactions.'


Shoot I might be an AI cuz I don’t get it


AI will often lie confidently. It will need expert humans to correct these “hallucinations”. You’ll be safe, kiddo.


I really wish I had your optimism; I just don't. People like Sam Altman and Kai-Fu Lee don't hide that they are coming for our jobs. They don't even try to sugarcoat it: "cognition/intelligence as a commodity" is what Sam Altman wants, and he has said that everyone working only with a computer is probably the first to go.

I can only hope they are wrong by a decade or more like Musk was about self driving cars.


100% for the near term, but a couple of decades from now I'm not so sure. Chess computers beat humans every single time, even the absolute best of the best, and they keep getting better, ever since they first beat a world champion 25 years ago. Is it not likely that similar trends could happen in other areas?


AI will excel in areas with clear cut rules and game theoretical modes of operation. Computers lack a Theory of the World which would allow them to: 1) create said rules, or 2) put disparate pieces into place in novel ways to further said rules. Humans will always be needed as a result.

But I could be wrong, who knows?


How can you test whether a computer is lying? It basically needs to have a model of the world, by which point it becomes AGI, and then indeed no human is safe. Until then, we have nothing to worry about.


I got into software because it was a good way to solve a broad class of problems, not because I have some obsession with actually writing code. You can fight the tools and risk becoming obsolete, or you can embrace the tools and adapt to continue to solve problems and deliver value.

Just like business users cannot adequately convey requirements to off-shore development firms, they won't be able to get the most use out of ML-generated code. They don't know the details of the problems they are trying to solve, nor do they know how to implement the generated suggestions in a production-ready capacity. There is still a lot of technical work required to use these tools.

The "real" value technical folk provide is being able to decipher and translate business speak into implementation. The only people who should be really worried are those who want to hide in their basement and just program against tickets, implementing discrete tasks. The technical folk who actually understand business objectives and are effective at communicating with business folk should still have a long, lucrative career ahead of them.


Start a brand or company based around your work, and sell based on name recognition rather than price/quality/ease of use/timing/whatever.

AI can create art or program. It can write text. And in future, it'll be able to make videos and music too.

But it can't BE someone else, not legally at least. A random AI cannot replace a popular celebrity, because it's their name that matters most, not the objective quality of their work. It can't outcompete Disney or Marvel or DC or Nintendo at their own game, because their works are protected by IP laws and sell on their characters and brands as much as on the objective quality of the works.

So don't just draw or write or program in a way that you can be replaced. Do so under your own name/brand, so people pick you over the AI copycats based on your personality and image alone. If you become the next Stephen King or George RR Martin or Markiplier, people will check out your work because it's your work, and no amount of tech can ever compete on that front.


AI will make trivial stuff easier to work with, so we can focus on the harder stuff. This isn't going to hit critical mass for at least another decade, so you have time.

> built some pretty fun and cool things like chess engines, video games, 3d renderers from scratch and competed in my national computing contest with a solid placement

You're already ahead of 99% of the folks out there. Learn AI, learn ML, get into that now and you'll remain ahead of the curve.

I don't think AI is going to be quite that ground-breaking for at least another 20 years. It's impressive now, yes, but until it's on every device running under every OS as part of a commercial package, it's not going to disrupt anything.

If anything, it'll be like the transition to cloud computing. Jobs will change. The disrupting factor will be that a good portion of data-processing jobs will no longer need bodies to fill them. Middle management in IT will be next. Programmers and sysadmins will be safe for some time to come.


I agree with you, but I also fear that we are making the classic mistake of thinking linearly when the changes will happen on an exponential curve, which is something I think the OP hinted at with regard to AlphaGo.


Yes, and they all said that programmers would be obsolete by the 2000s, and sysadmins would be out of a job by the time cloud was mainstream.


Focus on something you expect AI not to exceed professional humans at in the next couple of decades. If you're convinced there's no such thing, then I wouldn't worry so much about career choice.

For a concrete answer in tech, have a look at AI development itself. Keep abreast of what OpenAI and their competitors identify as open problems and orient your academic career towards working in them. The object level questions will change while you're in school; the goal is an "intercept trajectory" where what you study just before you graduate is the state of the art.

Alternatively, highly regulated professions (medicine, law, civil engineering to name a few) are likely to continue to employ bright humans long after AI can do the job just because no-one will be allowed to use AI there.


> Focus on something you expect AI not to exceed professional humans at in the next couple of decades.

So I should focus on driving a car?


Yes, we should all become professional truck drivers. I hear they make six figures too.


Don't (completely) buy the hype.

I understand that you are being told that AI will make everything obsolete, but remember that you are getting this message through news outlets. These are businesses that make money by getting attention, and nothing gets attention more than making extreme, especially threatening, claims.

The maturity of machine learning will change things, but exactly how will be hard to predict.

Rather than try to optimize for 'success' based on guessing the future, I'd suggest that you focus on continuing to learn and ask yourself what you like.

People are still playing chess and go, and there will still be software.


It seems like the trap you're falling into goes back to what "a job" is. Not your fault, you're still young, and a lot of older folk fall into the same trap, believing a job is defined as "someone paying me to do some work."

A job is when someone pays you to add value.

(It's true all the way down to bullshit jobs, where the value you're adding is to meet an uneconomic need.)

It doesn't matter which kinds of work these models will eliminate. All that matters is building value in a reality where they exist. Just as a casual observation: climate change continues to rage on, energy supply is still a bottleneck, almost half the deaths in high-income countries are from cancer, etc., etc., etc.

There will always be value to add. The baseline may change, but no one yearns back to the days before the copying machine or Excel.

In my mind, the most sensible thing an ambitious teenager can do is get a proper academic education. (AI is not magic; it's just the product of research, and all the people doing that research did exactly that: got a good education.)

Also, and I realize this doesn't gel with the whole HN ethos, but whatever: don't waste your early working years starting a company. The only way to discover real problems in the world is to live in that world. Getting to understand a domain well, any domain, will serve you better than any 'Uber for X' ever will.


There are some great comments on this thread, all with pretty decent if not great advice. But I'll come at it from a slightly different angle, not because it's better or worse, but just to provide a more rounded group of responses.

My background is not in hardcore CS, ML, or AI, but I majored in engineering and have worked as a software engineer for about 6 years.

Both breadth and depth are important with knowledge and skills, and having diversified experiences at an early age (and throughout life) is valuable for honing in on what you want to do.

This is just to say that seeking out various computing "applications" wherever possible may help expose you to the different things out there. It could be neuroscience, medical imaging, nuclear fusion energy, or whatever interesting thing presents itself. Since you mentioned starting a business, understanding a variety of markets could also be valuable.

Best of luck, and keep hacking.

edit: formatting


Why not ask yourself "what is it I like to do" instead of "how do I thrive"? You "thrivers" are a boring bunch.


What I would like to do is ski race on the World Cup circuit, but since I'm not good enough and don't have infinite money, I'm going to need to find another option.


Underrated answer :-)


OpenAI expects ChatGPT to bring in $1B in revenue in 2024. That revenue is going to come from people building businesses around it (possibly you? this would require programming knowledge, obviously). Honestly, I think a person could make a living as a prompt engineer right now. You can't fight it; you have to lean into it.


What's a prompt engineer? Can you share some reading material?


Prompt engineer: someone who is quite good at providing the right prompts in order to guide AI toward the solution that is needed.


Play around with Midjourney and you'll see what I mean.


Is anyone actually paid to do this today?


Learn strong communication skills.

I come from an automation background and work as a tech lead, and I have absolutely no concerns about AI. Most people who worry about it don't fully understand what it is: ultimately just tools, albeit ones with internals that are more difficult to understand.

Nevertheless, strong communication skills are leverage on almost anything you do. That includes programming. They also can't be surpassed by AI in a non-apocalyptic world.

You can start by joining Toastmasters, and/or Rotary.

Communication skills can be further improved by understanding human psychology. If you're able to quickly detect personality disorders and other psychological minefields you'll encounter in the workplace, it'll bode well for you.

Again, no machine can understand a human better than another human, in anything short of an apocalyptic future.


Honestly, if this thing keeps progressing like it does now, then all bets are off, simple as that. Cognitive jobs will plummet in salary and then simply disappear. There may be jobs for the top 1% of IQ for 10-15 more years, but then that's probably gone too, as AI simply becomes better than us at everything.

What's left? Jobs like kindergarten teacher, nurse, and police officer aren't likely to be replaced so soon (would doing that bring you joy, though?).

You can also try catching some of the AI boom and starting your own business; there's probably a lot of opportunity to be had in the coming 5-10 years.


I suggest starting a company. It seems there is no better time to be an entrepreneur, as you will be able to amplify your talents so greatly with AI. If that isn't your thing, I suggest taking the academic route and getting a PhD in machine learning.


AI is good at generating simple examples in any given domain, including programming.

AI can't refactor code or debug (or at least not debug well, as far as I know). Folks here can correct me if I'm wrong; I stopped looking into AI once I realized it wasn't as far along as most non-technical people seemed to think.

Things like Stable Diffusion and OpenAI's models have made great strides, but they will only serve to enhance humans' skills and abilities. There are limitations to what these tools can do; I wouldn't worry too much about having to compete with them once you're in the workforce.


>Tldr: what would you do if you were an ambitious teenager interested in tech entering a world soon to be dominated by large language models and other generative machine learning algorithms?

An interesting topic for a science fiction novel, but that doesn't describe your experience today.

I studied ML at a PhD level. There are a lot of "neat tricks", but a whole lot of issues preventing it from taking over basic tasks like driving (let alone programming).

Just keep doing what you're doing and adapt to the AI revolution when it actually happens.

It's been "just 10 years away" since the 1960s. :)


Since you are thinking about this in high school, you are on the right track. There is no AI-dominated future that isn't driven by software engineers. You can either be an engineer creating AI/ML tech or an engineer using AI/ML tech.

There will also be a very long tail of "traditional" software engineering. But if you are still 4-5 years out from your career, you probably won't want to count on a traditional software engineering job.


Taking software as an example, there will be some practical application for code that's generated, which means there needs to be some analysis of the result. Somewhere along the way there will be a human who reads some output, regardless of how complex the systems are which produce that output. Much of the time, this person is going to have a significant level of technical knowledge. Identify what positions those people will be in and try to fill them.


Find an industry or line of work you are interested in and learn about problems in that area. Then build products to solve those problems. Use ChatGPT, stable diffusion, and others as force multipliers.

The funny thing with problems is that they never end. Even if AI is effective against the current slate of problems, new ones tend to show up quickly.


Don't worry, AI will cause a lot more problems than it will solve.


Yes you are absolutely right. You have to learn to leverage AI as a tool and use it the most to your benefit. This is the new way.


AI needs computation power and data processing. Focus on the tools, processes, and hardware.


Build systems that are not open to public access. AI is not a problem in places where it isn't permitted.


What does "not available to public access" mean? Please give me some examples.


So this is the thing I've been thinking about the most in this AI wave ... people like you in your position.

I can't begin to predict how fast AI will transform the economy, but presuming it moves with a speed that is unexpected by at least a good proportion of people, one of the things that will surely be disrupted the most is the idea of career planning for teenagers and high-schoolers.

Because surely, in line with the idea of the "singularity", one of the corollaries of AI disruption will be that the rate and unpredictability of disruption events increases. At some point, either there's no point in planning out a career, or the only plans that seem to make sense are vague, snake oil, or a focus on really basic fundamental things that entail a major shift in lifestyle and even philosophy, like farming.

If anything like that pans out, with sufficient speed, I'd predict some hefty social disruption, like groups of people going hard on Ludditism. But also, I'd predict some general polarisation not unlike what appears to have transpired in the west politically, which I think can be attributable to a growing sense of uncertainty and misunderstanding of an increasingly complex and incomprehensible world.

I know I'm not answering your question. That's partly because, though I'm older than you, I've not settled well on a career in my life and am in a not too dissimilar, perhaps worse, scenario than you. But also, as someone who didn't see this year's wave of AI coming, my response is that I don't have an answer, and I think the fact that I don't, and that you're even asking this question, is the actual problem.

...

All that being said, AI pessimism can go too far, and the tone of my post above is certainly guilty of this. Recent AI developments are still very problematic and there's plenty of scope in tech jobs that aren't going to be touched by AI.

Nonetheless, in the spirit of this more optimistic position (for tech at least), I would predict that the blind spot for many on here (and in tech in general) will be the experience of juniors as AI becomes more and more involved.

On one hand, I can see leveraging AI tools to be more productive and learn faster to become more common, especially amongst more adaptable younger people.

On the other hand, I can see AI making the work of juniors harder, because AI most easily supplants a junior's non-expert, error-prone work, and so it might raise the floor of bare-minimum expertise in the industry without providing accessible means for newcomers to acquire that expertise. It may quickly become the case, for instance, that being able to write code in a language and having some vague, basic awareness of tools and concepts is no longer sufficient for a junior; instead, you need to know the things an AI doesn't but which are hard to learn without experience, like the pitfalls of combining two particular tools, or what certain errors are likely to be caused by.


Classically, what you're looking for in a stable career is to be part of a generational cohort that is well-positioned to transition from something emerging, risky, and low-value, into a more established senior role. And then stay there forever, holding the expertise captive until retirement, at which point the economy now has to figure out what to do without your indispensable knowledge.

This is actually a process that has been a big part of recent news: the Boomer generation, being oversized and long-lived, sucked up a lot of the jobs. By the time their Millennial children arrived, a lot of career paths had no emerging prospects because the parents were still there, still in the same roles, and the world had become fantastically more competitive through globalization. But now they've finally begun to age out altogether, and that transition plays into the dynamic of high unrest, financial turmoil, and anxiety around tech that we're experiencing.

The thing is, tech is something societies decide to invent pragmatically, based on what available science allows. We invent "automobile tech" because policy and available resources supported its widespread use in the richest parts of the world. In the ones destroyed by WWII, auto tech still existed but was complemented with reinvestment in rail tech.

So with ML AI, the tech is something we're currently looking for models of application for. The science is cool, and there are certain things it's great at, but contra the "learn AI" replies, that doesn't mean the "AI industry" is something you'll gain the most leverage from by approaching it at its most fundamental level, getting a degree in, and then simply signing up for a research job. That is one of the most competitive routes you could take, as the "easy part" of the field is already behind us, and now everything is going to be about little nuances of improving the tech and integrating it better.

What we currently see in terms of AI users is relatively unsophisticated application: people who log on to one of the apps, send a few basic prompts, and then stop there, satisfied with the result. But I believe the way to think about this is rather to become sufficiently AI-literate to use it to storm the gates of some other field, as a combination threat; a layered synthesis of traditional know-how and new methods. This is why some artists are unconcerned, while others are panicked; one group sees a way to enhance what they're doing with another layer of tech, another sees a threat to their routine illustration and asset creation gigs.

But in "storming the gates" you have to expect to arrive at a surprisingly empty field, where nobody knows what you are doing or whether it's valuable. It might take several years to build up visibility, just trying things and publishing results, before you find a path to monetize on it.

If the basic idea you're building from is coherent - it doesn't contain contradictory elements, and you can use it as a philosophical framework to assess the value of your output - you will stay motivated and eventually succeed. Study enough philosophy to get what it means to be coherent and build such a framework; it'll pay off.



