After ChatGPT disruption, Stack Overflow lays off 28 percent of staff (arstechnica.com)
42 points by adamwintle on Oct 18, 2023 | 34 comments



Well, I have all the sympathy for the SO people who are stressed out over losing their jobs... but at least an LLM doesn't chew me out when I ask a question that's remotely similar to one asked 7 years ago. And I like the low-latency, interactive rubber-ducking experience way better than the slower turnaround on an SO question.


So how will the LLM answer your questions in the future if SO is greatly diminished?

I guess LLMs can learn directly from documentation websites. But they would miss the human feedback on correct solutions and better ways to do things that SO provides.


It'll get better and will be able to answer more questions based on source code and documentation?


I've seen support forum sites that have integrated LLMs to automatically generate answers. I doubt that feeding that back into LLMs will improve quality in the long run.


Well, yes, and somewhat cynically also through, e.g., OpenAI's effort to pay programmers in low-wage locations to create high-quality training data related to programming in general (thus paying them to automate themselves out of a job).


LLMs don't answer; they kind of autocomplete.


Debatable, but even so, my point holds: it'll be able to autocomplete better even without direct hints or a dataset from SO, especially taking into account GitHub and GitHub issues.


Without the question text and the words explaining the answer?

I think those words provide a lot of context for the LLM to get to that specific response


Wait, Stack Overflow employees chewed you out?


Not the employees, the community


Exactly; I have nothing but sympathy for the affected people, but SO-the-community was on the way out anyway. I'm almost 40; I understand, and can tolerate to some extent, someone sometimes going into Linus Torvalds mode.

But it totally does not jive with the 20-somethings who are just starting their careers and need guidance the most.


> But it totally does not jive with the 20-somethings who are just starting their careers and need guidance the most.

Hostility towards new users is an issue across most communities; see Reddit and Discord.

Perhaps it has less to do with Linus Torvaldisms and more to do with a generational gap. After all, SO has been around for 15 years, and it replaced the god-awful Experts Exchange.


Seems poorly correlated.

It seems unlikely that any significant number of developers quit using SO in favor of blindly trusting ChatGPT.

More likely: advertising revenue declined in line with the reduced ad spending across the industry, largely due to ad buyers' expectations of an economic recession.


Why do you think it seems unlikely? I've no idea if my usage of SO is typical, but I generally just ended up using it for pretty simple questions, the type that GPT is absolutely phenomenal at answering. The handful of complex, domain-specific questions I tried asking on SO still linger unanswered even with the bounty system, or ended up being answered by me.

Another thing is that SO is, ironically, often relatively hostile to people asking questions. It seems like on every post there's some group of people racing, often recklessly, to mark things as duplicates when they're not, asking for code samples even when the question clearly doesn't require them, etc. And then there's the passive-aggressive stuff. It seems like 95% of the SO community is awesome, but that 5% sure is a turn-off from the site. GPT cuts through that layer of nonsense.


Most of SO’s traffic comes from organic search. While we don’t have data for SO traffic itself, we do know that search traffic is not declining.

While some users use ChatGPT to solve technical problems, it doesn’t seem likely that the search traffic they depend upon is an extreme outlier from search in general.

We do know that SO’s primary ad business is tech hiring, something that has cooled significantly in the last year.


My usual flow went from “do two or three searches with slightly modified wording, land on SO or the official docs of whatever I’m struggling with” to just “take a few minutes to explain the issue to ChatGPT, get actionable and mostly correct advice”.

For me, ChatGPT has unlocked an entire class of questions and investigations that would be almost impossible to handle with just search.


> It seems unlikely that any significant number of developers quit using SO in favor of blindly trusting ChatGPT.

Most people I know have done exactly that.


> It seems unlikely that any significant number of developers quit using SO in favor of blindly trusting ChatGPT.

I did to a large extent. If verification is easy (which in most cases it is), I first ask GPT-4 before searching SO. And to be honest, I have found the top SO answer to have a higher probability of being wrong than GPT-4. In most cases, if the code doesn't look wrong and passes the test cases, which I have GPT-4 generate and then verify manually, it is likely fine.

Also, there is a huge difference between GPT-3.5 and GPT-4, so if you have only tried GPT-3.5, your experience will differ wildly.


> It seems unlikely that any significant number of developers quit using SO in favor of blindly trusting ChatGPT.

Man, maybe it's just me, but I have switched almost exclusively from Google searches and Stack Overflow to just asking ChatGPT. I've been doing dev for 20+ years and GPT4 is like switching from regular tools to power tools. Granted, it's only useful for things that have been done before and are well documented, but most professional software development is exactly that.


Well, ChatGPT reduced my usage of SO by around 90%.

For the simpler questions it is quite OK, as in "write me x in language y", where those are the inputs.

SO is left only for the really hard or niche ones.


My SO usage is massively down since ChatGPT. Probably 95%+. This is not a surprise to me at all.


I'm unable to find it now, but there was a post showing that it was in decline before ChatGPT was released. It was discussed right here on HN.


Stack Overflow is laying off another 28% (https://news.ycombinator.com/item?id=37900651) (187 points | 2 days ago | 214 comments)

Stack Overflow announces 28% headcount reduction (https://news.ycombinator.com/item?id=37898199) (118 points | 2 days ago | 171 comments)


I'm thinking that coding is not going to be the most interesting application of current LLMs for a while. Writing code is simultaneously the least time-consuming part of product development and the most critical. So much of the work is done outside of the IDE: figuring out the problem, deciding on a solution, communicating the solution, etc. Once you do write the code, it's relatively quick, and there are enough tools to speed up the process. For AI to truly replace engineers, it needs to be an AGI. I think Stack Overflow should be focusing on the community component, because as of right now, LLMs are more likely to disrupt creative fields, customer service, and middle management.


PSA: "after" does not mean "due to".


Are you suggesting the layoffs and new "Overflow AI" product referenced in the article are happenstance?


I meant to say layoffs are not (necessarily) due to ChatGPT disruption, even though the title may suggest that. All sorts of companies are doing layoffs, SO happens to be one of them.


Consider the word “aftermath”. Or the sentence: “The company failed after the CEO was convicted of fraud.”


> While no chatbot is 100 percent reliable, code has the unique ability to be instantly verified by just testing it in your IDE, which makes it an ideal use case for chatbots.

Oh, this is so wrong. Code is not instantly verified by any IDE except for things like syntax errors and compiler warnings. The critique comments under each SO answer are so much more valuable in that sense, it's not even funny. Every time I give feedback on ChatGPT's proposed answers, it tends to go into incredible rabbit holes and endless recursion, reminiscent of WOPR playing tic-tac-toe.


Previously discussed, yesterday: https://news.ycombinator.com/item?id=37900651


I would have thought they could manage the workload with 50 percent of the staff, assisted by ChatGPT.


I used to relay on SO, and still do sometimes, but most problems seem dated. And with the quality of Google search declining, getting to the core of a problem takes too long.

I used to relay on GitHub issues, but github.com is way too slow and the quality of its search is abysmal.

So more and more, when in doubt, I just read the source code. Seems fastest.


*rely


Everyone is blaming GPT, but I have found Stack Overflow has become less useful recently. I blame the hostile mods and downvoters.



