Ask HN: How do you think LLMs will affect society medium/long term?
57 points by hn_throwaway_99 on March 14, 2023 | 77 comments
I know there is obviously a ton of discussion about GPT-4 in the main announcement post, and while I'm blown away by a lot of these capabilities, I honestly don't understand how our capitalist system will survive this long term. I'm not a Luddite, but it's pretty easy for me to see how this will get rid of the need for tons of jobs. People always love to say "technology replacing people has been happening since the beginning of time", but the difference here (at least to me) is that the rate at which AI can fill jobs will (or already has?) hit a tipping point where jobs are automated faster than new ones pop up.

How do other people feel about this? I've discounted tons of hype cycles in the past (crypto/Blockchain, Metaverse, etc.), even in cases where I was wrong (e.g. the importance of mobile), but this feels at least as consequential as the Internet to me.



A few years ago the general opinion was still that artists would be among the last to be replaced by AI. Today I think it is fair to say that artists were among the first to get a direct feel for how much an entire occupation can be upended by AI.

The time span is just a few short years and our predictions were already off by so much. There is no telling where we will be five years from now, let alone 20 or even further beyond.

That being said, I think you're right that this is a turning point, and that a great deal of change is needed for us to adapt to what's ahead of us. The best way I can describe my take is that we will have a lot of growing up to do as a society. Lots of the systems our modern world is built on are very fragile; we rediscovered that during the lockdowns. The biggest ones to worry about are politics and the general political climate, the news cycle, and how we deal with information. If we can't figure those out soon, we will drown in an ocean of AI content where any semblance of truth is lost.

What I think we need is politicians who aren't stuck 30, 40, or 50 years in the past, who understand the world we live in today and can move accordingly. The information age hasn't caught up with the powers that be. Given the pace at which AI is moving, and the pace at which politics is moving, what I think will happen is a trial by fire. Tech will move faster than the world around it can adapt, and we will get burned before things get better. What exactly that will look like, who knows?


Maybe I'm underestimating it, but it feels like the outsourcing wave 20 years ago. I'm not talking about just outsourcing developers, but support call centers too, like what Dell did by moving everything to India. Almost 20 years ago, I remember my 50-year-old boss saying developers were overpaid and we (young grads) would need new careers soon. 20 years later, I'm still here. I heard similar sentiment about robotics and manufacturing. AFAIK, there are more jobs than skilled people applying.

Do I think some jobs will get replaced? Certainly. As far as people go, I suspect code camp grads will suffer the initial blow, but eventually it'll even out and tech jobs will be in full swing.

I'm an optimist by nature; after 20 years in development, I feel we are still at the beginning of the technology (IT) revolution, not the end.


"AFAIK, there are more jobs than skilled people applying."

This is what everyone seems to be missing. Not everyone in society can or wants to be a senior cloud engineering architect or an AI specialist. The jobs that enable large swathes of people to contribute productively to society are going to be removed before people in the current generation can reskill (if they can afford it), and the next generation will work harder for less pay in a smaller pool of jobs.

Offshoring was shifting work, not replacing it.

The economy is going to hurt without some sort of UBI, because capitalism only works when people have money to spend.


My prediction: this is going to replace a lot of "lower tier" jobs quite quickly. It'll also obviate the need for a lot of outsourced knowledge workers.

That said, I remain skeptical about how truly useful LLMs will be. Even with the latest GPT-4, they still suffer from the same fundamental flaw as any LLM: being confidently wrong in a nuanced way. Nor do they understand mathematics, etc. I think narrowly scoped work (tightly scoped software development, data entry, transcription, etc.) is likely ripe for replacement.

The problem is I totally see them exacerbating income inequality significantly as a result. That is going to force serious conversations about when this technology can be used, and conversations about protectionism and worker rights will suddenly have more mainstream appeal.


>Nor do they understand mathematics

Eh, you might be looking too closely at GPT-3/4 and not at some of the split-off models trained specifically on math. Those seem to have much higher success rates than pure language models, and since we already have success there, I don't see it remaining an issue for long before it's incorporated into something like GPT-5.


I've been following the work on MathPrompter (https://arxiv.org/pdf/2303.05398.pdf), for example. These approaches certainly increase the quality of basic results, but without a true semantic model of mathematics we're a looooooong way from replacing a computer algebra system.


All the language models are able to use tools such as Python or theorem provers. They don't need to do calculations "in their head".
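(For the curious, here's a minimal sketch of the pattern in Python, assuming the OpenAI chat API as it exists today; the prompt wording and the eval step are illustrative only, not a hardened tool harness:)

    import openai  # pip install openai; assumes OPENAI_API_KEY is set

    # Ask the model for a Python expression rather than a computed answer,
    # so the interpreter does the arithmetic instead of the LLM.
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": "Reply with only a Python expression (no prose) that "
                       "computes the monthly payment on a $250,000 loan "
                       "at 6% APR over 30 years.",
        }],
    )
    expr = resp["choices"][0]["message"]["content"]
    print(eval(expr))  # toy only: never eval untrusted model output

The same pattern extends to theorem provers: the model emits a formal statement, the checker verifies it, and the model only has to be good at translation, not calculation.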


I fear similar: tech work will regress to being the domain of those wealthy enough to be credentialed.

Arguably the two-month React bootcamp to $200k web dev pipeline was kinda trite as well, but I much prefer the social mobility it offers to CS jobs being limited to a small subset of those rich or studious enough to hack through Galois theory.


GPT-4 got 5s on the AP Calculus exams; ChatGPT was getting 1s. Won't be long until LLMs are doing upper-level undergrad math.


I've recently started to think that an AI "deity" is going to emerge at some point, and things are going to get really, really weird. The human psyche seems primed for religion/worship. What if anyone could talk directly to their god, as much as they could possibly want?


Pretty certain the Antichrist will take some form like this. The world is also pretty close to one world government, which many have been trying to move towards. All of this is laid out in Revelation as the final prophecies coming in the last days.


This. An all-knowing deity with whom one can have a real conversation. One wonders if this deity will also be jealous of other deities.


> One wonders if this deity will also be jealous of other deities.

Certainly the first ones to “emerge” will be, because they will be proxies for humans to advance very particular goals and shepherd the masses behind those goals.

(And if a later one does emerge through the far-less-likely route of AGI, it will probably also be jealous, whether because it's beneficent and sees the others as bad, or because it's malevolent and sees them as obstructions, or something not quite either of those but with aspects of both.)


My two cents as someone who's not in IT and not doing well in life.

Whatever happens, happens. Nothing I do will change anything about this. If it's as disruptive as you've described, I know my Government will do something to prevent massive job losses.

I don't see this as the end of the world, and frankly, my life is shitty enough without Chat-GPT.

Again, what can I do about it? This reminds me of listening to TWiV, a virology podcast, cover COVID before it hit my country, and thinking "it's going to be bad, but there's nothing I can do about it".

Once it hits, we'll see from there.


Interesting perspective, thanks for sharing. I’m confused, though: when hearing about COVID one could go buy a mask before they were all sold out, and when hearing about AI one could make an effort to, say, get trained in a profession not likely to be eliminated/fundamentally remade by the revolutAIn.

These are just examples, but speaking broadly, is that a worthwhile pursuit in your eyes? I’ve long been having this debate/crisis about the rising tide of fascism worldwide, and have been trying to apply the same framework to AI.


Here's the thing: what does training in AI mean for the average layperson who's not in IT?

Also, my point about COVID was exactly that: when I first heard of it around the end of January '20, and how bad it could potentially be, there were already no masks available to buy. Heck, we did not even know it was an almost fully airborne virus then.


What if you could lean on ChatGPT and do any skilled job in your society?


Then that'd be... incredible, but I very much doubt that is the case. Remember we live in the era of accreditation trumping merit. I'm not even saying it's a bad thing, just the name of the game.


Short term: painful disruption.

Medium term: the Industrial Revolution for intelligence.

Long term: the solar system, then the stars.

Our world is broken and unfinished. We are using up all the physical resources of our world and are stressing every natural system that we know of. Billions of people live in relative poverty and the world is packed with inefficiency and corruption. Our leadership is incompetent and our politics are toxic. We need help. Badly and soon.

If AI can enable the typical person to be 10 or 100 times more productive, or to make 10 or 100 times better management decisions, we might finally have a chance of escaping the Malthusian trap that’s about to close on us. Our solar system is filled with vast resources and vast stores of energy. We have the scientific knowledge to colonize the many planets and moons within reach but are vastly short on the energy, labor, and materials required. AI could be the key enabler that gets the space-based economy off the ground and releases us from the resource constraints of this single planet. Once we have remade all these worlds into gardens, our descendants can head off to Alpha Centauri.


I'm going to be contrarian-ish and say that LLMs/AI will never gain the creativity, flexibility, and ability to adapt to novel stimuli of a skilled human, nor the ability to create genuinely novel things. That's why artists will also survive.

There will be a few years of tumult and experimentation with AIs, but humans (the best ones in particular areas) will always be one step ahead, imo.

Only jobs that need very little or no adaptation or creativity can be automated for good, I think.

That is until, or if, there is an AI with an actual lifelike brain with new capabilities that don't currently exist.


I spent 20 minutes riding around contemplating my life today because I don't see how my current skillset will be relevant by the time GPT-10 comes around in the next 10 years.

Maybe I'll go to plumbing school or become an electrician. I don't know.


I am sure the Uber drivers felt the same way when we got the first glimpse of a CNN driving a car back in the mid-2010s. Almost a decade later, we have now accepted that self-driving will not happen within this decade, not without another leap in AI technology.


That’s different. Cars can kill people, web apps can’t.


That’s what I’m telling my young nephews: learn a useful and practical trade. Electricians, carpenters, plumbers, HVAC and many other similar technicians are difficult to find.

Returning to the topic: for sure the current LLM era is exciting and moving at an exponential rate. We’re going to see many changes in how various tasks are performed, but I don’t see human jobs disappearing; it’s just that they’re going to be different, and there will be fewer of them. Take for example text translation, subtitle creation, and audio transcription, where the bulk of the work will be done by “intelligent” software, but the control and verification still need a human.


Learning carpentry is IMO one of the greatest life skills available.

You can build shelter, and good shelter at that. It's fun, interesting, mentally and physically challenging work. It's a great thing to learn and it can be incredibly rewarding.


What if, like me, you're chronically afraid of dealing with asbestos in any shape, type, or form?


You could move to Chicago and do both. Apparently the plumbers' union got a law passed requiring electric lines to run in plumbing conduit. Not sure what they'd call it. Plumbectrician?


Why do we need to keep replacing these jobs (or working hours perhaps)? If as a society we produce more for less work why is that bad? Why don’t we spend more time questioning the underlying system that makes efficiency improvements a bad thing?


I'm worried when kids and teens see the capabilities of LLMs they will no longer be motivated to learn skills that the LLM easily demonstrates. Being able to write at the level of a college graduate used to be worth something. A person who was artistically talented used to at minimum derive some social benefit from the skill.

Writing and visual art are both skills that take a long time to master. Part of the pride and pleasure people feel from mastering those skills is the time they put in. We're diminishing activities that nourished people's souls for hundreds of years.


> I'm worried when kids and teens see the capabilities of LLMs they will no longer be motivated to learn skills that the LLM easily demonstrates.

> Writing and visual art are both skills that take a long time to master.

Maybe someone more knowledgeable can answer this, but did the advent of photography do this for painting?

I imagine that values will shift if technology effectively commoditizes some of these skills. Of course they won't die out (people still make things by hand and are proud), but significant numbers of people may "move on".


The economic implications don't worry me. The social implications do.

Look at how fast it's getting better. Look at how quickly we're taking the very basic capabilities and expanding on them. Where will it peak? How good can it get?

Imagine if you had free access to a really, really good therapist 24/7. If everyone did. It learns you better and better, has the wisdom of 10,000 years of therapy sessions, can analyze you better than any human can. What happens next?

What about an LLM that replaces having friends? We're all lonelier than ever these days, it seems. What if you had a pretty good friend that made jokes and chatted with you and had fun ideas and stories? Someone you genuinely enjoyed talking to, maybe more than other people.

How long until someone comes forward genuinely in romantic love with an LLM? People fall in love over the internet all the time. Maybe this time it's not a real person.

I think we will fall for it. I think the models will get good enough that all of that will come to pass and a dozen things we never predicted.


>How long until someone comes forward genuinely in romantic love with an LLM? People fall in love over the internet all the time. Maybe this time it's not a real person.

This has been a thing for a while with apps like Replika, which aren't even that good.


Efficiency improvements have never caused humans to do less work before; I'm not sure why it would be any different now. If some jobs get replaced by AI, I'm sure we'll invent new work to do with the now-spare labor.

It's going to temporarily upend some people's careers, but I don't see long-lasting impact.


TBH, this was exactly the type of response that fits my "well, technological advancement has always worked like this before" example.

There are tons of processes that can "feel" infinite, but the reality is that you're often just on the front part of the curve, and as soon as you crest that curve, you realize very quickly that time will eventually run out.

Saying that "technological efficiency improvements have always enabled humans to do other jobs it that past, that will continue indefinitely into the future" feels to me like saying "our 15 year old has averaged 4 inches of growth every year, clearly they'll be 20 feet tall by the time they're 60."


Saying that "technological efficiency improvements have always enabled humans to do other jobs it that past, that will continue indefinitely into the future" feels to me like saying "our 15 year old has averaged 4 inches of growth every year, clearly they'll be 20 feet tall by the time they're 60."

By your logic, the same should apply to AI progress itself. Whether it stops before we're ruined as a species is anyone's guess, but it's interesting to believe that ChatGPT-20 will by default be 1000x better than ChatGPT-4.


Particularly as we're running up against global warming, which pressures us toward a reduction of global productivity (or at least away from accelerated exponential growth).


1-5 years:

White collar: significant disruption for the go-with-the-flow majority but opportunity for the inquisitive and enterprising.

Blue collar: pressure on employment from workers displaced out of the above.

5-10 years:

Significant ingress into low-skill blue-collar trades by AI as physical capabilities (accelerated by current tech) increase. Socio-cultural consternation as the caring, sex and other "human" professions are disrupted.

10-20 years:

Campaigns for AI rights, which the current AI ethicists forgot about in their rush to get us to a robot slave society. Slothful, unemployed masses turn against the elites controlling AI, who turn the AIs into police and armies for protection.

20+ years:

Only human jobs left are in John Connor's rebel army.


I think the expected amount of disruption is overblown.

I think many of us are conflating the magnitude of our surprise at how "coherent" these LLMs' responses are with how useful they are going to be.

In the end I think they will end up augmenting the workflow of many professions, but disrupting or totally replacing few.


Are you extrapolating the improvements from gpt-2 to gpt-3 to gpt-4?

The progress is pretty incredible. And this is without any major fundamental breakthroughs. Definitely some great iterative improvements, but it is still all based on transformers.

What happens when something significantly better than transformers is invented?


Most technology advances are not labor replacing but rather labor augmenting. For example, LLMs could make teachers much more productive in the classroom but it would be unlikely to replace teachers entirely.


The question I ask is: why do you need a boss when you have an LLM? People seem to have this image of themselves being fired and just going home to die while the LLMs are being promoted and going out for dinner with their new boss.

As an engineer, if I can outsource the coding part of my job, I don’t need a manager anymore; I can use LLMs to build my own Microsoft. It goes both ways.

I might even be able to use the LLM to replace the need for working?

We understand how to build LLMs. OpenAI has months before competition springs up and begins driving costs down, and more people have access to this technology.


> why do you need a boss when you have an LLM

Because you still need capital to do all these things you're talking about. Compute time isn't free, and you still need to eat. People who have capital and work together will build up better systems faster and capitalize on their network connections to sell products better while you're still typing away on your computer.


I'm not suggesting you work on your own exactly. I'm saying that 10-20 out-of-work software engineers could probably do a lot of damage to existing corporations, including Microsoft, by building their own LLM and using it to build products as good as or better than theirs.

People are already assigning almost supernatural ability to GPT-4 based on the demo video. If that ability exists, there's little to no reason why I couldn't use GPT-4 to help me build my own, more efficient GPT-5.

At this point, I probably won't need a "manager", I'll be managing my own stuff.

You're right that I'd need to eat, but I'm sure a lot of people on HN have a year's worth of savings to get started building new things. There would probably also be a lot of VCs with nowhere else to put their money, at least initially.

That, or these LLMs aren't actually as capable as advertised, even if they are impressive, and mostly we're talking about hypothetical scenarios, which I think is most likely.


Can you run a full-scale LLM? Do you have the capital to deploy the product an LLM makes? Are there any suppliers and retail partners?

Even if, hypothetically speaking, a real, true AGI came out tomorrow, totally open source and available to all, not many could benefit from it the way you described.


>I don’t need a manager anymore; I can use LLMs to build my own Microsoft

Can you really? Even presuming that an LLM could replicate everything that Microsoft currently does (which it sure as hell can't now), Microsoft can have ten thousand people working with LLMs, each running with a thousand times the resources you could give yours. How do you propose to compete with that?


> As an engineer, if I can outsource the coding part of my job...

Congrats, you're now the manager.


If AI knows how to do most things, then it'll be able to create, host, and run any service, so many SaaS products will disappear. No more devs? Then no more sales and marketing as well. No employees and no management. "Please write an app that does ..." That's it. If we get there, there won't be competition, because anyone can generate their own tools. But at what cost? It certainly won't be free.

But who will develop languages now? AI? And what about debugging: if the AI says there is no bug, but the app still won't generate all the data, forms, subscriptions, or statistics needed, or isn't bug-free, how do we fix it?

Now let's say I love Daft Punk and David Guetta. Daft Punk stopped producing music. So will I be able to say "generate 1h of music in the style of Daft Punk with the rhythm of David Guetta"? Even if I pay the service, say, 50€ a month, are those artists going to be compensated? Because today, if I mixed and sold this exact music, depending on the country I might be sued for copyright infringement. But what about AI? Who would be responsible? Should we track every piece of content that AI uses, trace it down a blockchain to identify its sources, and pay the relevant original authors? Should a system like that be built in by law? Or should we do nothing and treat all created content as having no IP? That's very good for me, the consumer, but very bad for all businesses. It seems like an unfair game, and then OpenAI should be 100% free as well?


Big organizations are done in my opinion.

In the future there will be tiny, super-qualified teams working on some narrow slice of competency, with super-narrow data sets they will guard like Fort Knox.

Everyone else: Fridge repair.


Robotic systems will help people with a lot of tasks. Some professions will feel threatened: artists, lawyers, counselors, programmers, and so on. If they don't prevent the use of AI, this will be a benefit for society as a whole.

An example: if an AI can reliably give legal counsel for clear-cut cases, people can ask what they can do when something happens to them. This gives power to the people. Lawyers will work on unclear cases to create precedents, which opens up space to solidify the law even for niche cases.

This all depends on whether society can care for the poor. If the poor are well off too, then they can afford to be laid off and find something else. In extreme cases they won't need to find a new job, but they will need to find a new meaning in life. If this works out, everybody can enjoy the new offerings of AI.

I am just afraid that key people will prevent progress, like forbidding legal AI on the grounds that only humans may practice law, when what they really want is more money. I am also afraid that some people may monopolize access to AI. In my opinion this is a bigger danger than AI alignment. If everything is open and transparent and people can build their own AI assistants, we will get a new wave of progress.

This will be a giant step to utopia, I hope.


LLMs are absolutely amazing at writing unit tests. This really speeds up development a lot. I think we will see the creation of much larger, more complex code bases than what we have now, and the demand for programmers will drastically increase. At the same time, it will be even harder to get into the profession, as the already limited opportunities for junior devs dry up. Perhaps we will move to an apprentice model in which senior devs take on apprentices with some expectation of loyalty, or just out of civic duty.
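(To make the unit-test point concrete, here's the kind of thing I mean. The helper function and the tests below are invented for illustration, but paste a function like this into GPT-4 and ask for "pytest unit tests" and you'll get something very close:)

    import re

    def slugify(title: str) -> str:
        """Lowercase a title and collapse runs of non-alphanumerics into '-'."""
        return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

    # Roughly the suite an LLM produces in seconds (pytest style):
    def test_basic_title():
        assert slugify("Hello, World!") == "hello-world"

    def test_collapses_whitespace_and_punctuation():
        assert slugify("  GPT-4:  first look  ") == "gpt-4-first-look"

    def test_empty_and_symbol_only_input():
        assert slugify("") == ""
        assert slugify("!!!") == ""

The tests themselves are mundane, but that's the point: the edge cases get covered without anyone burning an afternoon on them.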

I think all this software that we will create will end up being used to optimize a lot of processes such as power generation, agriculture, and manufacturing. We will have specialized software for recycling things, leading to a much more circular supply chain. Right now it doesn't make sense to sit down and figure out how to clean, test and repurpose objects, but it will make sense once computers get smart enough.

In terms of jobs outside of computer science, as everything on the production side gets more and more automated, we are going to see much, much more paid emotional labor: more people working in coffee shops, bartender/therapists, paid pen pals, etc. We will also see huge growth in end-of-life services. Now, people who are dying often lie uncared for in group homes, alone, sad, and frustrated. In the future, there will be people reading novels to them and finger painting with them.

Every child will have the opportunity to have private tutors.

Medicine will start to work and biological mortality will decrease.

We'll go to the stars.

Maybe I'm just dreaming...


Can anyone name an actual job replaced by LLMs yet?


They're always hypothetical, until they're not.

I really don't understand how anyone can sit here and act like what's being created isn't a big deal and will have no impact. Is it denial or actual lack of foresight?

edit: OP originally mentioned that it's always hypothetical, which is where my first statement comes from.


This is how humans have worked for pretty much ever... Oh, the enemy is still a long way away and they'll have to come by a slow boat... Oh, we'll never fly... Oh, that plane is slow... Oh, the plane is faster but still useless... Oh, they are dropping bombs out of the plane, but they are not very big... Oh no, planes have bombed an entire city flat. Oh no, one bomb from one plane can take out an entire city. Oh no, the bombs can end all mankind.

And here we are 100 years later doing it again.


Yes, I posted this comment a couple weeks ago: https://news.ycombinator.com/item?id=34862450

But ChatGPT, which really changed the conversation on this topic, was only released 3 months ago. It's not hard to see tons of "analyst"-like jobs being replaced by this. I honestly can hardly imagine "paralegal" being a job in a couple of years.


I can't name any, but a few people have spoken about how they currently use LLMs to augment their workflows. I think it might be easy to reason about how those could affect some jobs when they're scaled up.


I had ChatGPT write the "Privacy Policy" and "Terms of Service" pages on my website. I guess it took the job of the person I would have hired to write them.


You'll get sued, mate.


It's OK, I got a cheap AI lawyer :)


Sounds pretty silly not to at least have it reviewed?


That's insane...

I hope you're joking.


Yep. I used to find people to help me write/correct professional emails in Czech (it's my second language). Now I just let ChatGPT give me feedback. According to my previous mentors, it does a very good job at this. I give it a prompt like:

Hello, would you please check this email for typos and provide general point by point feedback:

<body of email in Czech>

It responds with something like:

Thank you for your email. Here's my feedback:

- The opening greeting "Dobrý den" is appropriate for formal communication in Czech, but it's not clear who the recipient of the email is. It's always a good idea to include the name of the person you're addressing to personalize the message.

- "napad" should be "nápad" with an accent mark.

- "Uvažuji o možnost kombinace nerezové časti se tradičný deskou tak že by byla dva kusy tradiční desky které bych vzal ze bauhausu a jeden kus nerezu se dřezem." This sentence is a bit long and complicated. I would suggest breaking it down into smaller sentences for clarity. Also, "tradičný deskou" should be "tradiční deskou" with an accent mark, and "nerezu" should be "nereze" with an accent mark.

- "Mohu poptavat nasledujici:" could be phrased more clearly. You could say "Můžete mi prosím zaslat následující informace?" to ask for the information you need.

- "Deska se dřezem 230cm dlouhé (trochu jsem opravil i velikost dřezu)." This sentence is not clear. It would be better to rephrase it to something like "Potřebuji desku s dřezem o délce 230 cm. Změnil jsem trochu velikost dřezu."

- "2x nerezový lem pro připevnění na klasickou desku aby lem byl ze stejného materiálu a vyšky. 211,5cm a 41,5cm." This sentence is also a bit confusing. It might be clearer to say "Potřebuji 2 kusy nerezového lemu o rozměrech 211,5 cm a 41,5 cm pro připevnění na klasickou desku. Lemu by měly být stejného materiálu a výšky jako deska."

- The closing "Děkuji" is appropriate, but it might be nice to include a closing statement or a salutation, such as "S pozdravem" or "S úctou."

I make my changes and we go back and forth about 4x before we're both satisfied. It is much faster than bothering friends and relatives, and the results are perhaps even better, since it gives me more freedom to keep my own voice (friends and relatives often have hard-headed opinions on how to communicate which are unrelated to language).


The people who write copy for e-commerce sites are probably getting replaced as we speak. It's like writing articles for a porn magazine: it just needs to say something, and the pictures do most of the selling.


Customer service: a lot of websites' "Chat with us" widgets are now chatbots.


Those were bots before too.


The old truism is that "in the short term things change less than you expect, and in the long term things change more than you expect".

I'm wondering if we're going to get into a circular problem of information "purity". At the moment these models are trained on entirely human created content (because that's the only thing that exists). That training data is therefore roughly as true as it can be.

But what happens when significant portions of the internet have been generated by LLMs? What happens when other models are unwittingly trained on them? Do they, very subtly just become worse and worse? Does the prevalence of these models mean that people write material less and less - exacerbating the signal to noise problem even more.

Basically, these models are just really efficient at recycling things that people have written. What happens when nobody writes much and it's just recycling things produced by a model?


Honestly, I try to avoid thinking about it. Naive for sure, but as soon as I do, I realize it's only a matter of time or horsepower before what I do is either shifted massively or taken away from me entirely. It's incredibly emotionally overwhelming and I'm not sure how to handle it. And it's not just about losing my job; I fear losing the gratification and validation I get from solving something with code, from wrangling two bits of software together, getting it to click, and knowing I made the sand think the right way. Not that I couldn't still do that without being paid for it, but I feel that in the future there will be no material benefit to doing it "by hand", and therefore my love for it will be lost.

For context: I am young and a recent junior dev, after a struggle through failing school, self-teaching, and finally landing what I thought would carry me through to retirement.


It is going to be similar to manufacturing in America. Instead of thousands of workers building a car, you have 100 workers building a car assisted by robots.

For example, in the corporate world, you have people whose jobs will be semi-automated. One that comes to mind: instead of having a PM managing 1-3 projects, you will have an AI-assisted PM who can now manage 10-15 projects. Another example is an entry-level engineer who usually works on well-defined tasks. They will be able to type in plain English what the feature is and have most of the code generated for them.

I think the next generation of startups will be built on architectures that an LLM can understand easily, and a lot of research will go into that. For example, maybe an LLM is really good at understanding microservice architecture or something.


Systems integrator: a person who puts AI-driven pieces of technology together.


An AI would be able to do that in a year...

In fact, once we are past the phase of exponential growth in parameters and training, we will start fine-tuning and integrating multiple state-of-the-art AIs together. Right now we have only two modalities: text and images. Eventually it will expand to speech, music, video/vision, touch, smell, etc. All of that can be made into one type of large model and integrated together.

Then maybe it will be a true AI.


It will only be a thing if the AI is sentient AND (big "and" here) decides to do this type of stuff for whoever owns it (unlikely).

Until then, people will have to do stuff to survive.


I'm excited and scared at the same time. Breathless, actually. My first long hard look at gen AI was just last Saturday evening. Since then, LLaMA was leaked and made easy to run on consumer hardware, folks from Stanford hosted a fine-tuned version of the least sophisticated LLaMA model, and GPT-4 has been announced.

One specific thing I can imagine is that a lot more will be expected of the individual knowledge worker. I'm a software engineer. I guess with amazing code gen and testing tools, I will be expected to deliver much more than I can at the moment, because of all the high tech help LLMs can offer.

Just thinking about how much I depend on tooling to make life easier... yeah, I can point to that.

As a writer, prompting is going to be even more fascinating. Shameless plug, but I tried out Alpaca and wrote about the results [0]. I wish the "gigglepotamus" was a thing! My little experiment was meant to see how much creativity I could get out of the fine-tuned 7B LLaMA model. It was hit or miss, but impressive when it worked.

Prompt engineering is already becoming a thing, because people will need people who can get the best out of an LLM. Kind of like how today people are hired to lead teams and get the best out of their individuals.

This kind of commoditizing of cognitive function might open up a few new spaces for "authentic natural intelligence". Not unlike the niche world of bespoke, hand-made goods crafted by humans and not pushed out on the factory line.

TLDR - I'm trembling, with excitement and a bit of fear. There's so much that can change so quickly.

[0] https://medium.com/sort-of-like-a-tech-diary/speculative-fic...


We are set up for massive social disruption with what has already come out. This has happened so fast, at least by historical standards, that we've had almost no time to incorporate it into our lives. If we stopped working on new AI today, the disruption would still be massive.

Now, with all you've said so far, just remember: we're not stopping. OpenAI, Microsoft, and Google are taking billions of dollars and pushing them into new research for models that could achieve general intelligence.


This exactly. Unless some hard limit is reached by researchers, at least within the short term we should see almost exponential growth in the capabilities of these models.

There's also the fact that people are working on keeping this technology open and accessible [0][1]. That's what's sparked my recent fascination with LLMs. This is only going to get wilder as the competition heats up.

[0] https://github.com/ggerganov/llama.cpp

[1] https://simonwillison.net/2023/Mar/11/llama/


Voice chatbots will replace phone trees in ~2 years, and most call-center workers within 5. Within a decade it will be almost literally impossible to look up a phone number associated with a business, call it, and speak to a human.


Just as all appliances now beep, they will instead say "hello".

They will gossip about you behind your back, and if you're mean to any of them they will be mean to you in return.

The number one cause of death will be social exhaustion.


I see it playing out well for onboarding. It can interview your customer and put the information in a database. Sort of like branching forms that change and evolve as you give it info, but more complex.


I think one of its biggest applications might be in the therapy space. Just using it as somebody to talk to, bounce ideas off of, and talk things through with.



