Stack Overflow Is ChatGPT Casualty: Traffic Down 14% in March (similarweb.com)
38 points by MichaelMoser123 on May 8, 2023 | 105 comments



Stack Overflow is a casualty of its hostile atmosphere, of the elitism displayed by its most powerful members, of its past community moderation and advertising scandals, and of its stubborn refusal to implement features long requested by users.

ChatGPT is just the executioner. If people are abandoning ship in droves the moment an alternative becomes available, you know just how bad your platform is.


Stack Overflow could be the greatest website in the world; ChatGPT would still kill it. The experience is that much better. The universe is not fair.


I think the difference is the type of user: if a programmer is looking for code, sans any learning interest, then ChatGPT is hands down the optimal choice; just get a solution, do copy-pasta and move on. If one is looking to learn, then I think SO is better. I've personally found the lazier devs jumping to ChatGPT as they are not interested in learning or asking smart questions [0].

[0]: https://www.catb.org/esr/faqs/smart-questions.html


My experience is the opposite.

With ChatGPT I actually ask questions, get answers, and use its responses to either refine my questions or look up documentation.

Since I know that ChatGPT can't really code, I use its responses as a hint. I learned a lot when using it to explore the capabilities of languages and libraries I'm not that familiar with.

On SO, the opposite happens. I seldom ask questions, since the site has a reputation of being hostile. I look for previous answers and just try whatever answer I find on my code. If it doesn't work, I try looking for another answer, rinse and repeat.


Speaking as someone who answers: your approach is noticeably not hostile; I'd even say it's friendly and nice.

I follow some SO tags where I have domain expertise (there's an RSS feed for each tag, so that's convenient). Often I remember having seen that answered before, so I copy the entire headline, paste it into Google, and see a good answer somewhere on the first result page. My domain expertise enables me to pick it out from the other results quickly; the asker would probably need minutes, quite likely tens of minutes. What to do? I personally vote to close as duplicate, which is friendly to you and hostile to the people who asked without searching. What would ChatGPT do, though?

Based on a single experiment, it'll rephrase the top answer in a confident, friendly, patient, informative manner. But the top answer was wrong for that question, for reasons that require domain expertise to understand (a five-word phrase whose meaning isn't a minor variant of its three-word prefix), and ChatGPT's answer nebulised that difference.

I find ChatGPT to be very good at some things. I worry for sites like SO, though. SO's genius is to find a way to make answering worthwhile for people like me, not just for the people who'd like to ask questions and get free answers from someone with domain expertise. Replicating that two-sided model with an AI in the middle seems very, very difficult.


The thing is, on ChatGPT I don't need to worry about silly things such as "is my question actually a stupid one?", since it won't berate me for being a moron. I actually feel free there to explore things I don't know that well.

And for someone who asks and is curious about things, this is a godsend. I am using it as a learning tool, and some of its answers contained information that would have taken me a long time to find myself, and it prompted me to look for tutorials and documentation that actually improved me as a developer.

You argue that SO makes answering worthwhile, but ChatGPT actually makes asking worthwhile.

And as a sidenote, I have expertise in a bunch of things, but I don't answer questions on SO, as I'm not interested in dealing with people competing for internet points. I write tutorials on places like Medium or Hackernoon instead.


Good for you. FWIW I'm not interested in the internet points either; if some other site were the one that managed to appeal to both askers and experts, I'd be there. SO is the one that manages to appeal to askers, experts and search engines alike, where a ten-year-old question/answer stays visible in the search engines and gets read.


I didn't want to imply that you were someone interested in internet points, and I apologize if that's how I came across.

But it would be foolish to deny that this is a demographic that exists, with a lot of drama attached to it (e.g.: https://meta.stackexchange.com/questions/331513/lets-take-a-...)

Just the mention of "staff" and "power users" gives me the feeling that this is a place to avoid. I go there to fetch existing answers, and if none are found, I just go elsewhere.


No offense taken; I don't think caring about Internet Points is particularly good or bad, anyway. I lean towards thinking they're good even if I don't get the point personally, since I still remember ExpertsExchange.




Defending myself, the lazy programmer: real-time feedback is proven to improve learning. I'm not sure why I would use SO if LLMs continue to improve, as I can even debate with one and it encourages me to define my problem and chunk my work. If they remain at their current limits, maybe sometimes I will need expert advice from a human, but I don't see them slowing down, so that seems like wishful thinking on my part.


I don't know about all that. But as someone who had just started with Stack Overflow (had actually begun posting/participating)… it died for me the moment I discovered the GPT playground and eventually ChatGPT. These days I use Bing Chat as my coding mentor. My theory is that it takes less energy for the user to get what they need from an LLM tuned for code than from any alternative. Luckily there will be users whose needs differ from choosing the path of least resistance, who enjoy the community and the challenges; I salute them, and I look forward to joining them when my needs change. There seems to be a contingent of people suffering a "rubber band" effect where LLMs are concerned: "oh it's just…", "…hype…", "…boring…". No, it is AGI, now, already.


Are you able to tell when it is hallucinating?


Not right away, which is why I prefer Bing over ChatGPT (less tendency to get things subtly wrong and gaslight me). However, I see no significant distinction between a hallucinating LLM and a person who made a mistake. It's something that the LLM engineers and the LLMs will work on every day. There are definitely some caveats, but I maintain that less energy is required and I get faster and more tuned results than from an SO post or trawl.


Are you able to tell when a human is?


If it is a community effort, yes. Yes, we are. That's the idea of SO: it's a community effort. Highly flawed and elitist, yes, but in theory it is community-policed.


If that is good enough, you can replace the "community" with a cohort of different LLMs, which should work analogously.
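For what it's worth, a minimal sketch of what that "cohort" might look like mechanically. The model names and the askModel() function are hypothetical placeholders, and the agreement check is deliberately naive:

  // Naive "community of models": ask several LLMs independently and only
  // accept an answer that a majority agree on.
  async function communityAnswer(
    question: string,
    askModel: (model: string, q: string) => Promise<string>,
  ): Promise<string | null> {
    const models = ["model-a", "model-b", "model-c"]; // hypothetical cohort
    const answers = await Promise.all(models.map((m) => askModel(m, question)));

    // Count identical answers; a real system would need semantic comparison,
    // since two LLMs rarely produce byte-identical text.
    const counts = new Map<string, number>();
    for (const a of answers) counts.set(a, (counts.get(a) ?? 0) + 1);

    for (const [answer, votes] of counts) {
      if (votes > models.length / 2) return answer; // majority agreement
    }
    return null; // no consensus: escalate to a human
  }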


No? Maybe if they reach human levels of sensing and cognition. For example, again, my FairPlay problem. On iOS 16 there is a timing issue around when DRM can be ingested, but this isn't documented by Apple and it is kind of a rare problem. If I ask Bing right now, which is connected to the internet, it gives you links to a human on Stack Overflow. How would an LLM know about this timing issue? The source code looks fine, there is no compiler error, there are no crashes. There IS sometimes just a video that is not playable in AVPlayer.

How does an LLM with an LLM community solve that?


Unfortunately the way web search is implemented in Bing Chat kind of hamstrings it from that point forward. You miss out on the LLM extrapolating from a million other scenarios across the subject domain in order to come up with possible explanations or approaches. Likewise, because of resource constraints, it is not searching every page across multiple adjacent topics to find something on the web that is deeply buried or going by another name.


It’s really amazing how low our bar has become.


> If people are abandoning ship in droves the moment an alternative becomes available, you know just how bad your platform is.

This


Sounds similar to Elitist Jerks for WOW.


[flagged]


Your comment is a good example of how to create a hostile environment online.


Think a moment about your comment maybe. What are you really trying to say?


For humans to learn and try harder, they need to be in harsh conditions, to grow thicker skin. Sometimes the SO community doesn't give answers directly and is harsh toward stupid questions. I got harsh treatment when I was a kid, and that pushed me to ask better questions and to learn harder. I am thankful for the way they did it. Now things are going to be a lot easier for people, and sloth and laziness will come with that, which makes it so easy for machines to win.


Ah yes, the "people become better the worse you treat them" early-20th-century political pseudoscience trope.


Yes, quite a convenient lie if you ask me. Why try to address abuse and mistreatment when you can label it character-building!?


What conditions are required for optimal growth and learning is a studied phenomenon. It should therefore be easy to synthesize a curriculum for new developers, with challenges pitched at the right level for them to reach their highest potential. Thick skin is no substitute for an environment with fewer assholes and less abuse, nor is it an ingredient required for growth.


Curiously enough, asking stupid questions is part of any learning process. In that regard, SO is awful for learning anything; its only use is finding existing answers. If an answer doesn't exist, you're better off looking elsewhere.


I've lately been frustrated by Stack Overflow and similar because of how they handle XY problems. For those not familiar with the term, an XY problem is where someone wants to accomplish X, they think that to do this they need to do Y, and they ask for help with Y. It often turns out that Y is not actually what they need in order to accomplish X. They should have been asking about Z.

XY problems are common enough that people on SO often won't answer a question asking how to do Y until the submitter explains their X. If it turns out some Z is what the person really needs, the answerers explain that and show how to do Z.

Great for the person who asked the question. They learn how to accomplish what they actually wanted.

But it sucks when I actually need to know how to do Y, and all searching SO turns up is those XY problems.

Even more annoying, I'm pretty sure I've run into cases where someone who really needed to do Y asked, noting that previous questions about Y all failed to actually tell how to do Y, and their question was quickly closed as a duplicate of one of the earlier XY problems that doesn't actually tell how to do Y.


Maybe I’m slow and stupid, but can someone explain to me how new things will be introduced to new generations of LLMs if it all slowly becomes replaced by LLMs? It seems a bit like an ouroboros of human-generated content consumption.

Also, SO is flawed, that's without a doubt, but even in the age of LLMs it has still been really useful for answering questions that are somewhat tangential, not pure code. I was having a problem with FairPlay DRM for live streaming on iOS 16, and SO had something to point me in the right direction. ChatGPT, being trained on older data, didn't have the answer. What happens in the future???


LLMs can already browse the web with the help of auxiliary systems, consuming current content. AFAIK GPT-4 has a browser plugin for that, as well as several other plugins for retrieving specific types of (current) information.

A well-trained language model only needs to be retrained when the language changes. New facts can be fed to the model by simply telling it about them, which is how those "plugins" work behind the scenes.
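A minimal sketch of that pattern, assuming the 2023-era OpenAI chat completions REST endpoint; the fetchDocs() retrieval step is a hypothetical stand-in for whatever search/browse step a real plugin performs:

  // Hypothetical retrieval step (stub); a real plugin would hit a search API.
  async function fetchDocs(query: string): Promise<string> {
    return `(current documents retrieved for: ${query})`;
  }

  // Feed "new facts" to the model by putting retrieved text in the prompt.
  async function answerWithRetrieval(question: string): Promise<string> {
    const docs = await fetchDocs(question);

    const res = await fetch("https://api.openai.com/v1/chat/completions", {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        model: "gpt-3.5-turbo",
        messages: [
          { role: "system", content: `Answer using this context:\n${docs}` },
          { role: "user", content: question },
        ],
      }),
    });

    const data = await res.json();
    return data.choices[0].message.content; // the model's answer
  }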


I'd like to know this too. It's conceivable that most new web content will be output from an LLM, so LLMs would just be feeding on themselves. That precise outcome will never quite happen, as people will transform and validate this output. However, I see sense in the general worry that people "won't bother" if LLMs can do it all more easily than a person. Conversely, I also see the potential for LLMs to bootstrap even greater human expression, learning and applied thought.


That still doesn’t answer my question on human-to-human discovery. Say a site like SO disappears because it’s not economically viable to keep it running. Then having something that scrapes the internet doesn’t seem to help, does it? Like a quirk in hardware that a human notices, documents, fixes. Something low-level.


How is that situation different from how it was before AI? If a human documents something and then the document becomes inaccessible, the information is gone, regardless of whether the "researcher" is an LLM or a human.


The problem I think I’m not conveying correctly is that human-to-human discovery is made easier by communication hubs like SO. If something like an LLM reduces the economic viability of running a site like SO, is there a gap or period where we don’t see that easy human-to-human communication? Eventually, newly generated content from a human has to enter the loop, at least until AI is at a level where it can replace humans; but if we see less and less human-generated content being viable, then we have a clear chicken-or-egg problem for new training data.


Is it specifically human-to-human contact that's required to generate new insights or to document things? I see no major distinction between that and human-LLM contact (aside from awe), or simply an intelligence pursuing a goal by itself, for that matter.


Again, I could be a massive idiot, but as I see it? For certain things, yes, 100%. When it comes to low-level things, an LLM currently has no way to confirm the behavior of something that doesn't give feedback. The FairPlay example I gave requires you to actually play a video with DRM. Another example I can think of is debouncing buttons on a microcontroller-based device. As a super basic example, if you are working on an IoT device and need to know when a pin has to be debounced, that can sometimes only be found with an oscilloscope. However, an open source device could be policed with human-to-human interaction; in fact, I have done so, making a PR a long long time ago for a crypto wallet that was open sourced. These are all anecdotal examples, but I still see a need for a human to enter the loop at some level. However, in a disaster scenario where it becomes unviable to run sites like SO, you slowly lose the hubs for that knowledge base.
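(For anyone unfamiliar: debouncing means ignoring the burst of spurious transitions a physical switch produces around a press. A language-agnostic sketch of the usual time-window approach, in TypeScript only for consistency with the other sketches in this thread; readPin() and millis() stand in for hypothetical hardware calls, and the oscilloscope is what tells you the real stability window:)

  // Time-window debounce: accept a new pin state only once it has been
  // stable for DEBOUNCE_MS.
  const DEBOUNCE_MS = 20;

  let lastRaw = false;     // last raw sample of the pin
  let lastChange = 0;      // time of the last raw transition
  let stableState = false; // the debounced, "trusted" state

  function debouncedRead(readPin: () => boolean, millis: () => number): boolean {
    const raw = readPin();
    const now = millis();
    if (raw !== lastRaw) {
      lastRaw = raw;       // raw input changed: restart the stability timer
      lastChange = now;
    } else if (now - lastChange >= DEBOUNCE_MS && raw !== stableState) {
      stableState = raw;   // held steady long enough: accept the new state
    }
    return stableState;
  }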


I didn't consider that, but it's already begun with plugins and AutoGPT. LLMs can test their hypotheses. Next, that might be possible via robotics... so sci-fi.


New things will still be documented, which LLMs can digest. The documentation may have been auto-generated by a different LLM, no problem.

Incorporating recent info into an LLM (fine tuning) is a solved problem.
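For concreteness, the fine-tuning OpenAI offered for its base models in early 2023 consumed JSONL files of prompt/completion pairs; a minimal data-prep sketch, where the "FooLib" fact is invented purely to illustrate encoding recent info as a training pair:

  import { writeFileSync } from "node:fs";

  // One training example: a hypothetical "recent fact" as a prompt/completion
  // pair, following the 2023-era fine-tuning JSONL format.
  const examples = [
    {
      prompt: "What changed in FooLib 2.0?\n\n###\n\n",
      completion: " The frobnicate() call was renamed to frob(). END",
    },
  ];

  // One JSON object per line, as the fine-tuning endpoint expected.
  writeFileSync(
    "train.jsonl",
    examples.map((e) => JSON.stringify(e)).join("\n") + "\n",
  );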


If documentation were adequate, there wouldn't have been a need for Stack Overflow in the first place. At least for now, it still takes (human) intelligence to consume documentation and synthesize knowledge through experience with new technologies, at which point it can be communicated to others. LLMs may get to that point, but they aren't there yet. Also, good documentation explains not only the what and the how of a codebase (which LLMs may be able to summarize from the code), but also the why and the when (e.g. when to use one method or another).


The “why” is another great example. When you document something, you try to do so in a way that shows intent as well. While LLMs are amazing (really), there is still a lot I'm curious about regarding how they can be used effectively.


Maybe, but there seems to be a limit to that, no? I’m not talking about errors that get fixed, or factual information; more like discoveries. E.g., the FairPlay issue wasn’t documented by Apple; it was discovered by multiple users who found and solved the issue I had. How does something like that come to exist with an LLM? There wasn’t really an error message that told you what to look for. It was a community effort of trial and error. Stack Overflow is nice because of the collaboration that can go on.


Do LLMs "learn" how to code by reading documentation, or by reading code? I was under the impression that it was more the latter; a programming language or library which was documented but had few or no examples of real use in the wild for the LLM to consume would (I think) be hard for current LLMs to work with.

Unfortunately, I can't think of a real-world example (library where ChatGPT has probably seen the documentation, but not any source code) to try out.


This, and I believe this is what Phind does. https://www.phind.com/


I feel like the commentary on Stack Overflow answers is often as valuable as the answers themselves. I suspect that ChatGPT can't provide the extra details and nuance that SO does. Also, at the risk of being contrarian to most of the comments I've read so far, I've never had a problem with people at SO, or SO itself.


You haven't asked duplicate questions or expected the people at SO to serve you.

Try asking a question along the lines of "why does this happen? see screenshot for details" with a text-only screenshot that uses three-point text as viewed by someone who matters. You'll have a different experience. Or try asking a question that's already well answered, and the answer could be found by searching for the headline of your new question.


Sure it can..

Joining across CSVs, complete with OP taking the inferior option: https://rentry.co/gse58

Different answers because of vague context on an NPE: https://rentry.co/dok7i

Closing down "How do I make a website" complete with passive aggressive commentary before a close: https://rentry.co/ysfe2

System prompt if you want to try it yourself: https://rentry.co/36e7u


Sounds like a good business idea: Reddit-style discussion chains for ChatGPT conversations.


I find this to be really concerning if true (which I’m not convinced of, seems more like correlation than causation).

The most amazing part of chat AI is its ability to be very convincing and yet factually incorrect. It doesn’t seem like it’s a secret that these chat apps routinely spit out just plain wrong information; I’m not sure why anyone would trust them with something important.

SO has bad info too, for sure, but part of SO is peer review to help filter that out. “AI” is a black box and not to be trusted.


I tried asking ChatGPT how to do something for the first time yesterday (an example of how to use libtar to write a tar file from buffers in memory, in C). It gave a very confident, but spectacularly wrong answer, concocting a whole new (admittedly more convenient for this purpose) API for libtar. After that experience, I'm not sure I'll ever ask ChatGPT for help again...


Yep. But how does it compare to GPT-4 or Bing Chat? Then wonder how well GPT-5 will perform, and 6. Also, did rubber-ducking your query have a benefit regardless of the correctness of chat's answer?


I think the premier use for ChatGPT right now is as a starting place for writing tasks. I use it to give me a template for work emails from time to time because there is no element of correctness to get wrong.

I give it facts I want to see in prose, and it gives me very neutral, inoffensive prose back. I always have to issue a few corrections to the prompt, and I always have to post-edit, but it still saves me a huge amount of time and I don’t need to worry so much about bias leaking in and making anyone uncomfortable.


ChatGPT is (currently) like having an incredibly fast junior engineer working for you. You can ask it to do tasks for you and it will do them quickly, but you need to check its work. Sometimes it will be perfect, sometimes it will be right with a little more prompting, and sometimes it is completely wrong.


You use 3.5 or 4? It’s worth paying for 4. Significantly better results.


I think it’s likely that, unless regulated, commercial LLMs will have a chilling effect on most human-created information sources. Eliminating the opportunity for authors to engage with information seekers (a chance to get constructive feedback, upsell something, show ads, see interest stats) will reduce authors' willingness to publish.

If you publish on Stack Overflow only for an LLM to gobble it up and no human even sees the upvote button anymore… why would you? You are basically an unpaid ghost content producer for Microsoft/ClosedAI at this point.


It opens an opportunity for shilling, pushing your agenda, and other ways of poisoning user-generated content, while LLMs brainlessly suck it in and replicate it, at least until they learn to check first and filter out the obvious fakes. Even then, there will be non-obvious ones. The problem is how to make money on this...


I wonder if the very nature of LLM emergent correctness and intelligence will itself identify weaknesses, fallacies and illusions? I would expect a well-read person to be similarly immune.


Humans aren't immune to manipulation, neither personally nor en masse. LLMs should be possible to manipulate to some degree if an attacker controls a significant portion of the information on some topic. For example, manufacturers can 'upgrade' the specs on their devices; until they get caught, LLMs will parrot them as ground truth. The same goes for 'human values' and the interpretation of political events. China has already introduced a policy on LLMs: they should be politically correct. LLM producers can manipulate, but not eliminate, the bias by selecting and weighting the sources, which they do, plus human interventions when they see unpleasant output.


Humans have free will, ethical standards, and the ability to manipulate ideas. LLMs have none of that. They are tools designed to do what they're told.


This is a simplistic vision. First, not everybody believes they have no consciousness or ideas, or that they cannot generate new ideas. This already happened with a 'crazy' chess gameplay style humans had never seen before.

Yes, they are tools, built by humans. But this doesn't mean humans can understand what they are doing, how, and why. Historically, observations may take hundreds of years to explain.


If it is admitted to be conscious and idea-understanding, it would be enough like a human that it would have to have human rights (and not be abused as it currently is). If it doesn’t have human rights, it’s not human-like enough for this to matter. I think these are mutually exclusive.


> “AI” is a black box and not to be trusted.

Someone will be the first against the wall when the revolution comes. I know treason against Friend Computer when I see it.

Now if you’ll excuse me, I have some hallways to paint red.


You should try Bing Chat. It provides citations for almost every assertion, allowing you to get a quick summary/conclusion that is also quickly verifiable if you want.


https://www.phind.com/ is also similarly good, but for dev related content.


I find it best to use it framed as an untrustworthy black box, even with its increasingly correct answers. It's a rubber duck that can spitball with you.


It doesn’t help either that for the past 14 years, SO has been quick to close questions, and/or its answers become out of date.


Even when they aren't closing questions, the userbase is so insanely hostile to anyone who dares ask a question. ChatGPT is such a breath of fresh air after having knowledge gatekept behind an army of smug moderators for so long.


Imagine if the typical response on SO was ChatGPT-esque:

  To convert an ArrayBuffer to a Buffer in Node.js using the node-fetch library, you can use the Buffer.from() method.
   
  Here's an example: [...]
I feel like someone could quickly build up a strong SO profile by taking the question, asking ChatGPT, validating it, then just copy/pasting.
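(The API in that imagined answer is real, for what it's worth: Buffer.from() does accept an ArrayBuffer. A minimal sketch, with the fetch call assumed as plausible surrounding context:)

  // Node.js: converting an ArrayBuffer (e.g. from fetch/node-fetch) to a Buffer.
  async function downloadAsBuffer(url: string): Promise<Buffer> {
    const res = await fetch(url); // node-fetch, or the built-in fetch in Node 18+
    const arrayBuffer = await res.arrayBuffer();
    // Buffer.from(arrayBuffer) creates a view over the same memory;
    // pass byteOffset/length arguments if you only want a slice.
    return Buffer.from(arrayBuffer);
  }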


I like that example. Why the heck would you choose the node-fetch library to convert between your buffers? What are you trying to do? You will be asked that on SO and forced to restate and understand your problem better in the first place, while ChatGPT will just tell you whatever, because an LLM doesn't understand ideas.


Extra credit if you wire up ChatGPT to the right tools so it can, as well as coming up with the initial response, construct the tests to validate the response, execute and evaluate the tests, and post to SO after validation.
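A sketch of that loop; every helper here is an invented placeholder, not a real API:

  // Invented stubs standing in for an LLM call, a sandbox, and an SO client.
  async function askModel(prompt: string): Promise<string> {
    return `// model output for: ${prompt.slice(0, 40)}...`;
  }
  async function runInSandbox(code: string, tests: string): Promise<boolean> {
    return code.length > 0 && tests.length > 0; // stand-in "validation"
  }
  async function postToStackOverflow(q: string, a: string): Promise<void> {
    console.log("would post:", q, a);
  }

  // Draft an answer, have the model write tests for it, run them, and only
  // post after the tests pass.
  async function answerAndValidate(question: string): Promise<boolean> {
    for (let attempt = 0; attempt < 3; attempt++) {
      const answer = await askModel(`Answer with code:\n${question}`);
      const tests = await askModel(`Write tests verifying:\n${answer}`);
      if (await runInSandbox(answer, tests)) {
        await postToStackOverflow(question, answer); // only after validation
        return true;
      }
    }
    return false; // never post an answer that failed its own tests
  }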


Not my experience, but I have a few kilopoints there already. A well-researched question will stay, and I am thankful to the mods who keep low-grade stuff to a minimum.


That’s the point. ChatGPT has infinite patience for “low-grade stuff”.


That's the point: humans don't. Why call them hostile if you are wasting their time?

By the way, low-grade stuff doesn't mean a basic question; it just means a question that shows no effort to read, comprehend, understand, or research. I'm pretty sure I came out a fair bit more resourceful and fluid-intelligent thanks to SO's rules...


They are hostile compared to most other platforms. I stopped using SO a while ago because it’s useless. You get much better results asking in a Discord community where people are happy to help you out, rather than spend all day scouring the site looking for posts to report.


I use SO a lot. I almost never ask questions these days. I think this is actually what is supposed to happen: as more and more questions get answered, you don't need to create duplicates, you just need to search. The only questions left to answer are legit uncharted territory, new tech stacks, or rare issues.

I leave chat groups where many people ask stupid, low-effort questions without using search or doing any research. Signal-to-noise ratio; waste of time. But there are chat groups where low-effort junk is moderated away, just like on SO.


> That's the point: humans don't. Why call them hostile if you are wasting their time?

If you think a question is wasting your time, you can just ignore it.

The person asking the question most likely is not even aware that their question is shit.

ChatGPT is just a much better resource for it, especially considering that most likely it was trained on many SO answers already.


When I visit SO in a spare moment, I don't want to first skim and ignore 2000 (and I really think this is an underestimate) questions from people who basically want me to google something for them, because my life is finite and I have other things to do, so I would never get to the interesting stuff. It's all about respecting others' time and expecting a respectful attitude back.


Then there should be no complaint that people who would ask stupid questions (considering that asking stupid questions is part of the learning process) are using a different tool.

I actually use SO as you suggest. I don't ask questions; if what I want to know is not answered, I just go and find my answer elsewhere. In my desire to not bother the "incredibly nice" people on SO, I just never engage with them. Can't really get more respectful than that. For all intents and purposes, I'm the perfect SO user.

ChatGPT, in that sense, is an amazing learning tool. I learned more in a couple of months with it than in years of stumbling upon SO replies after searching stuff online. SO, on the other hand, is just about getting lucky: either you find some old reply whose code snippet works, or you move along in your search.


> no complaint that people who would ask stupid questions (considering that asking stupid questions is part of the learning process) are using a different tool.

Hmm. Did I complain about it?


Did I say that you complained?

However, we are in a thread that eminently discusses how SO lost relevance in relation to ChatGPT as a tool used by developers. All my answers are framed in this context.


> However, we are in a thread that eminently discusses how SO lost relevance in relation to ChatGPT

The comments leading up to my comment also didn't complain about ChatGPT; they all just piled on SO for no clear reason. Maybe you mixed up threads.


Was a big supporter in the early years. Still am when all else fails! ;)

After the strange decision to split it into fiefdoms, with separate accounts and reputation, I rarely had enough points to contribute free feedback that may have saved someone’s day. I happen to know stuff on multiple topics, but it wasn’t wanted.

We should be glad all the code and advice has always been scraped. It was all provided by the community. This is probably a better outcome than just seeing a wrong answer about an obscure error code on three fake SEO blogs.


These restrictions go away once you get 200 reputation on any Stack Exchange site. https://meta.stackexchange.com/questions/141648/what-is-the-...


True. But it still asks me to sign in again and check my cookie settings nearly every time I use it, whether I arrive on purpose or via a search result.

Today it happens to be brisket; last time, welding. Even when it’s all programming stuff, it seems like I’m constantly being asked to take action before participating on a post.

(There have been so many worse ideas. I just hate that stuff.)


As an SO user/contributor, the reduction in traffic is actually great news. What the adoption of tools like ChatGPT and GitHub Copilot does is filter out repetitive questions and eliminate basic inquiries that have been answered countless times. This means that posts will be more likely to foster genuine discussions around unique, specific use cases that automated tools may not be able to address. This will help elevate the quality of content and promote intellectual conversations among community members.

I don't see SO as merely a repository. It's also a networking and collaboration channel, which may not be apparent to users who just drop by to get quick answers. The reduction in traffic would thus finally refocus SO on this.


It might be worthwhile to point out that tens of thousands of people got laid off in March, again. No shade on ChatGPT (with a solid prompt it actually performs quite well, as @BoorishBears points out), but significantly fewer people need to access Stack Overflow, month over month.


Ugh. Correlation not causation. Unless one of the major search engines is sending queries to OpenAI that were previously pointed at SO, this doesn't make much sense.

Perhaps the higher traffic in the past was from AI training bots, and now they're trained?


People go to ChatGPT instead of a search engine...


It's called Bing... oh wait, I will just use Bing Chat.


Given that two of the three times I used ChatGPT for something it came out completely wrong, and the third failed to mention its sources, I can only dread the day copy-pasting from SO will be replaced by copy-pasting from GPT.


The problem is that ChatGPT (and Bing AI) draw heavily from SO: when LLMs starve these communities, they're cutting off their own source.


I have a feeling that the SO CEO's blog post [0] about using posts for training data may also have an effect.

[0]: https://news.ycombinator.com/item?id=35605323


Shouldn't surprise anyone. Websites, apps, or services that are primarily databases of information won't be primary destinations anymore. AI tools offer a significantly better experience.


Quora is next... I don't have anything against it, but I hardly see a reason to go to Quora anymore.


A tangential question, but can we assume that ChatGPT learns/sources content from SO?


I genuinely don't feel bad for StackOverflow. It's a toxic community. Good riddance.


Isn't SO content used for LLM training?

How are they going to train them for new tools?


I think we're entering a new paradigm for programming. AI is the tool. Anything that wants to be used heavily should have a training set ready for an AI to learn from.


maybe stack overflow could, i dunno, keep up with the times


How would you suggest they do that now?


Offer a chat bot answer to every question posted?


They have been very hostile to any such suggestion. There was a meta thread where someone made a design specification for a ChatGPT integration (as food for thought) and the person was downvoted to oblivion.

https://meta.stackoverflow.com/a/421836/6941400


The comments gave several extremely good answers as to why that might be a dangerous idea.


I can see why that would be controversial, for sure. But I guess it is “AI”, so it ticks the box as a viable solution in 2023.



