Stack Overflow CEO announces 10% reduction in force (58 employees) (stackoverflow.blog)
69 points by brycewray on May 10, 2023 | hide | past | favorite | 49 comments



I'm sorry, you don't have enough reputation points to keep your job. This job is already performed in another thread (that's 8 years old but totally not out of date).

I also have an irrelevant pedantic objection to your job that I will couch in sneering dismissals of your question and fake brags about my credentials, but please make sure to upvote this answer on your way out of the building.


But the things you are complaining about are mostly problems with SO users and moderators, not employees?


This is a rather disrespectful way to comment on the announcement of people losing their jobs.


-1 please see guidelines and provide minimal reproducible example


I don't think so at all.


Yeah…this one makes a lot of sense. I don’t know about y’all but my visits to Stack Overflow have probably gone to about 1/10 of what they were before ChatGPT.


I've gone back to actually reading official documentation. It tends not to be full of pompous douchebags and infantile downvoters the way SO is.


I think the last time I consulted it was almost 10 years ago. I was fed up with it anyway.

Best thing for your career is read the documentation of the tech you are using. Most of the stuff is just right there.

Heck even on IRC 20 years ago first remark on Qs was RTFM.


I think back to the early '90s and how much we got done with the Microsoft TechNet CDs. I feel that I was much sharper then, with a better handle on compiling, linking, debugging...


For me it was installing Red Hat and finding the Linux HOWTOs. For example, the networking one was awesome and detailed, going deep into TCP/IP etc.

Back then, to do shit you had to learn it properly, not just copy and paste it from Stack Overflow...


I'm starting to notice a trend of mentioning AI in these layoff notices.


It's typical corporate doublespeak.

I guarantee you this is going to be a batch of older workers and PIP candidates.

AI is new and fresh. It is the domain of younger employees.


I'm older (old enough that I should be shot according to the movie Primer).

I'm using AI in my workflow with success. It's not that difficult.


My point was less "old people can't" and more "a new grad using AI can function at a senior level for a fraction of the cost."


Obviously the definition of “senior” is loosely agreed upon at best.

However, I consider the jump from mid level to senior less about programming in general, and more about architecture, collaboration, and requirement gathering in the context of the business.

I don’t think AI will make junior devs senior. That may fool some business folks, but senior, staff, and even fellow junior devs in the field (vs. newly graduated) will catch on.

Like any tool, it’s great to work with until it isn’t. That is to say, once the guardrails of what the tool is designed to do are broken, you must revert to fundamentals to solve the problem.

I agree that with AI the guardrails become far wider, for sure. But I think a tool like this is better at making a senior dev more efficient than at providing an actual skill bump from, say, entry level to junior or higher.


It slipped a lot during the pandemic; you got a lot of the following:

1. Team has an eng with 1 yoe at the company and 2 eng who just joined

2. Manager tells eng with 1 yoe at the company "lead this project" because they're the only person who's on-boarded

3. Project ships and eng with 1 yoe says "I led a multi-person project, I am senior"

4. Manager promotes them out of fear of a job hop

This happened to a lot of people during the pandemic, and as the bar has risen back up, many of these folks are struggling.


Consider that you may be overestimating some things and underestimating others there.


> (old enough that I should be shot according to the movie Primer)

I think you mean Looper! Both are great movies though


I think it's a reference to this scene.

https://www.youtube.com/watch?v=fN-VAdAnoZ4


Yep, that's the one.


"I guarantee you this is going to be a batch of older workers"

AKA expensive ones, so maybe.

"AI is new and fresh. It is the domain of younger employees."

Is that supposed to be sarcasm?


<crickets>


What’s a CEO supposed to do? Admit incompetence? A lack of skill in growing the company? Instead they blame [insert current media story arc].


This doesn't feel like it's about competence or lack thereof.

Reading that post, the cynic in me couldn't help but think that the CEO of Stack Overflow just wanted to do "the hip thing" these days, which is to lay off people for no reason and write a blog post taking "full responsibility". What better way to polish your CEO CV in 2023 than to be in the company of Satya Nadella, Sundar Pichai, et al.?

Of course, the fact that people are getting fired over performance is a tiny detail easily glossed over.


How about not laying people off to pad profits?


If there's a decrease in workload do you suggest paying people to sit on their hands?


Yes, if the company has the money then they shouldn't put people's lives at risk to benefit a few CEOs and stockholders, that's insane and it hurts society at large


I think the waste involved in having a chunk of the workforce sit around unproductive is worse for society. Locking in stasis to keep the paychecks flowing? How is that a good thing?

Yes, change happens, companies hire and lay off. Businesses start and businesses go bankrupt. Sometimes change is uncomfortable. But policy designed to prevent change in society is a bad thing in my opinion.


But the point is that these businesses aren't going bankrupt, they continually report record profits while laying people off


I was expecting something about competition from AI or potentially customers deciding to wait and see. This just mentions it is one of their own R&D priorities, so presumably no one laid off was working on that.


Here it comes, lots of human programmers deprecated and replaced. Both seniors and juniors affected.

Learn to adapt and migrate to Copilot, Ghostwriter and AI assisted programming with assisted code documentation and less of a need for programmers in general.


Or maybe Stack Overflow just has a bad business model and not-so-great leadership. Look at their decisions over the last few years: do those still look right?


Ah yes, let me tell my manager (who barely handles Excel formulas) that he can just whip out VS Code with Copilot and ask ChatGPT to set up an API with an SPA frontend in a local Docker environment. We're doomed.


Certainly only developers are using these tools


"sharing"... Please.


I hope my Stack Overflow keycap reward is still a go.


Reduction in Force (RIF)


Wouldn't it be more accurate to say they fired 58 people? Laid them off? Sacked them? Sent them off to greener pastures? I don't care what form of corporate speak they use, it's all just another word for "fired" lol

"Today we are embarking upon an Employee Enrichment Initiative - we are sending 58 people on a journey for a better future - by releasing them from their current obligations"


A RIF has a legal meaning under the WARN Act[1]. AFAIK, that's the origin of the term and it's (for once) not a comms team coining.

[1] https://en.wikipedia.org/wiki/Worker_Adjustment_and_Retraini...


Made that change. Thanks.


Here's an idea for those who want to achieve a fast high ranking on Stack Overflow:

1. Use their API 2.3 to get the latest questions for your area of interest (Javascript/React, whatever):

https://api.stackexchange.com/docs

2. Feed the questions into your LLM of choice such as Huggingface StarCoder:

https://huggingface.co/blog/starcoder

3. Review the answer manually to make sure it's legit and not a hallucination or just plain wrong. Ideally write some nice UI so you can see, say, the SO question on the left, the proposed answer on the right, and a terminal at the bottom where you can run the proposed solution code to verify. With enough self-verification loops you could cut out the wetware middleperson, but this manual step is crucial to avoid incorrect answers and to comply with SO policy:

https://meta.stackoverflow.com/questions/421831/temporary-po...

4. Parlay #3 to quickly raise your ranking to be one of the top users in your area of expertise:

https://stackoverflow.com/users

5. Submit a Y Combinator application to create a company doing 1-4 above to solve real world software problems posted online, like Mechanical Turk meets Stack Overflow meets Upwork.

https://www.ycombinator.com/apply/

6. Bonus points if you are the first "post-code" startup whose code for doing #5 above is actually written 100% by transformer agents:

https://huggingface.co/docs/transformers/transformers_agents
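Step 1 above can be sketched in a few lines. This is a minimal sketch against the public Stack Exchange API 2.3 `/questions` endpoint; the query parameters below match its documented query string, but the `sample` payload is invented here purely to illustrate the response shape:

```python
import urllib.parse

API_ROOT = "https://api.stackexchange.com/2.3"

def questions_url(tag, site="stackoverflow", pagesize=10):
    """Build the API 2.3 URL for the newest questions in a tag."""
    params = {
        "order": "desc",
        "sort": "creation",   # newest questions first
        "tagged": tag,
        "site": site,
        "pagesize": pagesize,
        "filter": "withbody",  # include question bodies for the LLM prompt
    }
    return f"{API_ROOT}/questions?{urllib.parse.urlencode(params)}"

def unanswered_titles(response_json):
    """Pick out questions with no accepted answer from an API response dict."""
    return [q["title"] for q in response_json.get("items", [])
            if not q.get("is_answered", False)]

# Invented sample payload mirroring the API's response structure:
sample = {"items": [
    {"title": "How to debounce a React input?", "is_answered": False},
    {"title": "Why is my useEffect firing twice?", "is_answered": True},
]}

print(questions_url("reactjs"))
print(unanswered_titles(sample))  # ['How to debounce a React input?']
```

Fetching the URL (with any HTTP client) and feeding the unanswered question bodies to the LLM of your choice covers steps 1–2; steps 3 onward are where the manual-review burden actually lives.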


I'm not sure if this comment is a joke or not, but please don't do this. Stack Overflow is trying to curate a resource of high-quality questions and answers. The rule is that content is created by humans, generally subject matter experts.

LLMs are banned for a reason. I flag dozens of LLM answers a week for removal. These answers are a serious problem for accuracy, and cleaning them up is a waste of time for moderators and normal users. Most of these answers are flat out incorrect. People using this technique or interested in rep farming generally don't know how to review the answer, or they'd write it themselves. If OP wanted an LLM answer, they'd just ask an LLM directly. They're there to ask human SMEs.

It seems to me one's time in life is better served by actually learning useful technical skills rather than trying to game systems to the detriment of the commons for fake internet points, although I guess in this day and age the latter gets you further (see: https://news.ycombinator.com/item?id=35885342 "My friends who cheated in interviews are getting promoted").

Anyway, parent comment has nothing to do with the linked topic and seems inconsiderate of the people laid off.


It’s kind of related because AI tools are for better or worse going to eat up a bunch of these question/answer responses from tools like Stack Overflow for developers, particularly for trivial things.

Like “how do I create a list of HTML Bootstrap cards for my whatever app” can get someone either the correct answer or 90% of the way there, much, much faster than Google -> Stack Overflow -> reading through mostly useless comments, as most of the answers are either outdated, better answered via docs, or just flat out wrong and not useful. It’s exceptionally rare to get a “human SME” to answer a question and solve a problem, and though such things definitely provide value, they don’t pay the bills.

I do agree in principle that getting some sort of high ranking on Stack Overflow by using these tools is pretty lame, but then again maybe hearing that this is possible stings because it really cuts through the veil.

I’m really excited for the future. I’ll have a lot of fond memories telling future generations that when we ran into problems we had this tool called Stack Overflow and other people would sometimes help and give you useful information. Something akin to Stack Overflow where you keep your org’s knowledge base can be useful, but then again maybe you just have an AI tool that does that instead?


> It's kind of related because AI tools are for better or worse going to eat up a bunch of these question/answer responses from tools like Stack Overflow for developers, particularly for trivial things.

I don't see the connection between 58 people being laid off and a rep farming howto.

If the parent commenter is trying to initiate a discussion about whether AI is a contributing factor in this round of layoffs at Stack Overflow, that's relevant and welcome, but I don't see any indication of that from the post.

Stack Overflow and LLMs have different use cases, with some overlap. They're complementary resources. SO is not a free code-writing service, although it's often been mistaken for and abused as such. "How do I create a list of HTML Bootstrap cards for my whatever app" would be closed as off-topic. It's basically a work request for a freelancer or an LLM. The goal of SO is to curate a repository of programming knowledge based on focused, specific, well-researched technical questions. It's served me very well in this purpose over the years and continues to post-LLM boom. Many questions deal with up-to-date issues that are beyond LLM training sets, and crowd-vetted answers generally don't hallucinate.

I'm happy to see LLMs take care of the trivial cases you mention. They're good for many drudge tasks. Typos, syntax errors and misunderstandings that would be resolved by a quick glance at the docs or an LLM are not really a good fit for SO, unless they're common. Prompting an AI should be among the expected prerequisite attempts before asking SO.

A recent personal example of SO at its best was a question[1] based on a regression bug caused by a couple-day-old deploy to the Puppeteer browser automation library, a tag I answer questions in daily. I reproduced the problem, forked the repo, fixed the issue, opened a PR and the patch was deployed to NPM the next morning. Given the same input, LLMs will pretend like they know what they're talking about and give you the run-around endlessly, providing random, plausible-looking nonsense for as long as you have patience to put up with it. Only humans can handle this type of problem at the moment. And this isn't an isolated example.

> hearing that this is possible stings because it really cuts through the veil.

Cheating with LLMs is not possible currently. These answers stick out like a sore thumb. It's just more work for the community to clean up the LLM garbage being dumped on us. I'm not sure what "veil" you're referring to--the reason LLMs are banned is because they don't meet the site's standards for quality and accuracy. If LLMs were objectively better at doing what SO does across the board, SO would be a ghost town by now.

Some day, LLMs may well replace humans (fully, or to a large extent) at programming, but I don't think it'll happen as soon as commenters seem to imply, judging by the quality of LLM answers I've seen and my experience using ChatGPT. And if LLMs do replace humans, fine, life will go on, but that day hasn't arrived yet so it's premature to bury human programming Q&A in general or SO specifically.

[1]: https://stackoverflow.com/questions/75292123/why-does-puppee...


It seems like you are pretty invested in Stack Overflow. At best I got marginal use out of it, so I don’t care for the site one way or another. For most questions where I’d use Stack Overflow (dev stuff), I simply get just as good an answer or hint much faster using ChatGPT, so much so that I just use that to start with; or, if it’s a problem unique to an organization I work for, I can’t really use Stack Overflow anyway because the question is too specific.

A flow in the past might have been: Google “question content or error message or something” -> Stack Overflow links to browse through until something plausible shines through -> problem solved

Now it’s more like: ChatGPT (problem probably solved, or feed additional details to ChatGPT) -> Google -> Stack Overflow

Even if tools like ChatGPT were half as good as Stack Overflow, they are vastly more efficient, and since I know what I’m doing or looking for, I can sniff out how plausible something is (no different with Stack Overflow today).

I get that you have had a great experience on the site and truly it delivers value for a lot of people!

But I don’t see how such things keep the lights on at Stack Overflow.

The reason the OP is relevant is because even if you think you can identify the answers today, it won’t be long before you can’t. It follows the principle that it’s easier to destroy stuff than to build it.

If Stack Overflow wants to survive in any form it probably needs to Verify Human and eliminate any method in which someone can interact with the site (content-wise) except hands on keyboard.

Personally I’ve found the so-called human responses to be about as useful as ChatGPT on any given topic I use it for, and the only way I come to believe they’re human responses is the timestamp.

For example you say:

> …LLMs will pretend like they know what they're talking about and give you the run-around endlessly, providing random, plausible-looking nonsense for as long as you have patience to put up with it

I honestly think you can just plug “contributors to most Stack Overflow questions” into this sentence and it’s equally true.

You are focusing on the 1% of cases like in your example. I’m looking at most cases.


I have absolutely removed SO from usage, replaced by chatGPT.

ChatGPT, on average, is miles better than SO. So by deliberately trying to avoid GPT content, SO is only digging its grave faster.


> but please don't do this

Nothing unethical about doing that.

SO takes advantage of the collective effort of people answering questions. LLMs take advantage of SO. So there's nothing ethically wrong with someone doing what the grandparent suggests.

Folks, please do it if you feel like it.


This kind of reminds me of the joke about building microservices and moving to a more reliable API for some data you depend on, only to realize it is sourcing that data from you.


It's purely an archive site... if you can write the question, ChatGPT can answer it better. Obviously, learning to write the question in a way everyone understood was a valuable experience... I will miss that... RIP Stack Overflow



