
> Another common argument I've heard is that Generative AI is helpful when you need to write code in a language or technology you are not familiar with. To me this also makes little sense.

I'm not sure I get this one. When I'm learning new tech I almost always have questions. I used to google them. If I couldn't find an answer I might try posting on Stack Overflow. Sometimes, as I was typing the question, their search would finally kick in and find the answer (similar questions). Other times I'd post the question and, if it didn't get closed, maybe get an answer a few hours or days later.

Now I just ask ChatGPT or Gemini and more often than not it gives me the answer. That alone and nothing else (no agent modes, no AI editing or generating files) is enough to increase my output. I get answers 10x faster than I used to. I'm not sure what that has to do with the point about learning. Getting answers to those questions is learning, regardless of where the answer comes from.

ChatGPT and Gemini literally only know the answer because they read StackOverflow. Stack Overflow only exists because they have visitors.

What do you think will happen when everyone is using the AI tools to answer their questions? We'll be back in the world of Encyclopedias, in which central authorities spent large amounts of money manually collecting information and publishing it. And then they spent a good amount of time finding ways to sell that information to us, which was only fair because they spent all that time collating it. The internet pretty much destroyed that business model, and in some sense the AI "revolution" is trying to bring it back.

Also, he's specifically talking about having a coding tool write the code for you, he's not talking about using an AI tool to answer a question, so that you can go ahead and write the code yourself. These are different things, and he is treating them differently.


> ChatGPT and Gemini literally only know the answer because they read StackOverflow. Stack Overflow only exists because they have visitors.

I know this isn't true because I work on an API that has no answers on Stack Overflow (too new), nor does it have answers anywhere else. Yet the AI seems to be able to accurately answer many questions about it. To be honest, I've been somewhat shocked at this.


What kind of API is it? Curious if it's a common problem that the AI was able to solve?

It is absolutely true, and AI cannot think, reason, comprehend anything it has not seen before. If you're getting answers, it has seen it elsewhere, or it is literally dumb, statistical luck.

That doesn't mean it knows the answer. That means it guessed or hallucinated correctly. Guessing isn't knowing.

edit: people seem to be missing my point, so let me rephrase. Of course AIs don't think, but that wasn't what I was getting at. There is a vast difference between knowing something, and guessing.

Guessing, even in humans, is just the human mind statistically and automatically weighing probabilities and suggesting what may be the answer.

This is akin to what a model might do, without any real information. Yet in both cases, there's zero validation that anything is even remotely correct. It's 100% conjecture.

It therefore doesn't know the answer, it guessed it.

When it comes to being correct about a language or API that there's zero info on, it's just pure happenstance that it got it correct. It's important to know the difference and not say it "knows" the answer. It doesn't. It guessed.

One of the biggest issues with LLMs is that we don't get a probability back with the response. You ask a human "Do you know how this works?", and an honest and helpful human might say "No" or "No, but you should try this. It might work".

That's helpful.

Conversely, a human pretending to know and speaking with deep authority when they don't is a liar.

LLMs need more of this type of response, one that indicates certainty or the lack of it. They're useless without this. But of course, an LLM indicating a lack of certainty means that customers might use it less, or not trust it as much, so... profits first! Speak with certainty on all things!


This is wrong. I write toy languages and frameworks for fun. These are APIs that simply don't exist outside of my code base, and LLMs are consistently able to:

* Read the signatures of the functions.

* Use the code correctly.

* Answer questions about the behavior of the underlying API by consulting the code.

Of course they're just guessing if they go beyond what's in their context window, but don't underestimate context window!


So, you're saying you provided examples of the code and APIs and more, in the context window, and it succeeds? That sounds very much unlike the post I responded to, which claimed "no knowledge". You're also seemingly missing this:

"If you're getting answers, it has seen it elsewhere"

The context window is 'elsewhere'.


This is moving the goalposts vs the original claim upthread that LLMs are just regurgitating human-authored Stack Overflow answers and that without those answers they would be useless.

It’s silly to say that something LLMs can reliably do is impossible and every time it happens it’s “dumb luck”.


If that's the distinction you're drawing then it's totally meaningless in the context of the question of where the information is going to come from if not Stack Overflow. We're never in a situation where we're using an open source library that has zero information about it: The code is by definition available to be put in the context window.

As they say, it sounds like you're technically correct, which is the best kind of correct. You're correct within the extremely artificial parameters that you created for yourself, but not in any real world context that matters when it comes to real people using these tools.


The argument is futile because the goalposts move constantly. One moment the assertion is that it's just mega copy-paste; the next, when evidence shows it can one-shot seemingly novel and correct answers from an API spec or grammar it has never seen before, the goalposts move to "it's unable to produce results on things it's never been trained on or that aren't in its context" - as if making up a fake language, asking it to write code in it, and noting its inability to do so without a grammar were an indication of literally anything.

To anyone who has used these tools in anger, it's remarkable that, given they're only trained on large corpora of language and feedback, they're able to produce what they do. I don't claim they exist outside their weights; that's absurd. But the entire point of nonlinear activation functions with many layers and parameters is to learn highly complex nonlinear relationships. The fact that they can be trained as much as they are, on as much data as they have, without overfitting or gradient explosions means the very nature of language contains immense information in its encoding and structure, and the network, by definition of how it works and is trained, does -not- just return what it was trained on. It's able to curve-fit complex functions that interrelate semantic concepts. These are clearly not understood as we understand them, but in some ways they represent an "understanding" that's sometimes perhaps more complex and nuanced than our own.

Anyway, the stochastic parrot metaphor misses the point that parrots are incredibly intelligent animals - which is apt, since those who use that phrase are missing the point.


This is such a pointless, tired take.

You want to say this guy's experience isn't reproducible? That's one thing, but that's probably not the case unless you're assuming they're pretty stupid themselves.

You want to say that it Is reproducible, but that "that doesn't mean AI can think"? Okay, but that's not what the thread was about.


This doesn't seem like a useful nor accurate way of describing LLMs.

When I built my own programming language and used it to build a unique toy reactivity system and then asked the LLM "what can I improve in this file", you're essentially saying it "only" could help me because it learned how it could improve arbitrary code before in other languages and then it generalized those patterns to help me with novel code and my novel reactivity system.

"It just saw that before on Stack Overflow" is a bad trivialization of that.

It saw what on Stack Overflow? Concrete code examples that it generalized into abstract concepts it could apply to novel applications? Because that's the whole damn point.


Programming languages, by their nature of being formal notation, only have a few patterns to follow, all of them listed in the grammar of that language. And then there are only so many libraries out there. I believe there are more unique comments and other code explanations out there than unique code patterns. Take something like MDN, where there's a full page of text for every JavaScript, HTML, and CSS symbol.

>It is absolutely true, and AI cannot think, reason, comprehend anything it has not seen before. If you're getting answers, it has seen it elsewhere, or it is literally dumb, statistical luck.

How would you reconcile this with the fact that SOTA models are only a few TB in size? Trained on exabytes of data, yet only a few TB in the end.

Correct answers couldn't be dumb luck either, because otherwise the models would pretty much only hallucinate (the space of wrong answers is many orders of magnitude larger than the space of correct answers), similar to the early proto GPT models.


Could it be that there is a lot of redundancy in the training data?

> How would you reconcile this with the fact that SOTA models are only a few TB in size? Trained on exabytes of data, yet only a few TB in the end.

This is false. You are off by ~4 orders of magnitude by claiming these models are trained on exabytes of data. It is closer to 500TB of more curated data at most. Contrary to popular belief LLMs are not trained on "all of the data on the internet". I responded to another one of your posts that makes this false claim here:

https://news.ycombinator.com/item?id=44283713


What would convince you otherwise? The reason I ask is that you sound like you have made up your mind philosophically, not based on practical experience.

It's just pattern matching. Most APIs, and hell, most code, is not unique or special. It's all been done thousands of times before. That's why an LLM can be helpful on some tool you've written just for yourself and never released anywhere.

As to 'knows the answer', I don't even know what that means with these tools. All I know is whether it is helpful or not.


Also, most problems are decomposable into simpler, certainly not novel parts. That intractable unicorn problem I hear so much about is probably composed of very pedestrian sub-problems.

What does "unicorn problem" refer to? A specific thing or a general idea?

'Pattern matching' isn't just all you need, it's all there is.

> It is absolutely true, and AI cannot think, reason, comprehend anything it has not seen before.

The amazing thing about LLMs is that we still don’t know how (or why) they work!

Yes, they’re magic mirrors that regurgitate the corpus of human knowledge.

But as it turns out, most human knowledge is already regurgitation (see: the patent system).

Novelty is rare, and LLMs have an incredible ability to pattern match and see issues in “novel” code, because they’ve seen those same patterns elsewhere.

Do they hallucinate? Absolutely.

Does that mean they’re useless? Or does that mean some bespoke code doesn’t provide the most obvious interface?

Having dealt with humans, the confidence problem isn’t unique to LLMs…


> The amazing thing about LLMs is that we still don’t know how (or why) they work!

You may want to take a course in machine learning and read a few papers.


Parent is right. We know mechanically how LLMs are trained and used but why they work as well as they do is very much not known.

Sorry, but that's reductionism. We don't know how the human brain works, and you won't get there by studying quantum electrodynamics.

LLMs are insanely complex systems and their emergent behavior is not explained by the algorithm alone.


That was sarcasm by the poster, in case you failed to notice.

Suspect you and the parent poster are thinking on different levels.

> the corpus of human knowledge.

Goodness this is a dim view on the breadth of human knowledge.


What do you object to about it? I don't see an issue with referring to "the corpus of human knowledge". "Corpus" pretty much just means "the collection of".

Human knowledge != Reddit/Twitter/Wikipedia

Who said it was? I’m pretty sure they’re trained on a lot more than just those.

Conversely, what do you posit is part of human knowledge but isn't scrapable from the internet?

I mean, as far as a corpus goes, I suppose all text on the internet gets pretty close if most books are included, but even then you’re mostly looking at English language books that have been OCR’d.

But I look down my nose at conceptions that human knowledge is packageable as plain text; our lives, experience, and intelligence are so much more than the cognitive strings we assemble in our heads in order to reason. It's like in that movie Contact when Jodie Foster muses that they should have sent a poet. Our empathy and curiosity and desires are not encoded in UTF-8. You might say these are realms other than knowledge, but woe to the engineer who thinks they're building anything superhuman while leaving these dimensions out; they're left with a cold super-rationalist with no impulse to create of its own.


I'm sorry but this is a gross oversimplification. You can also apply this to the human brain.

"<the human brain> cannot think, reason, comprehend anything it has not seen before. If you're getting answers, it has seen it elsewhere, or it is literally dumb, statistical luck."


> ChatGPT and Gemini literally only know the answer because they read StackOverflow

Obviously this isn’t true. You can easily verify this by inventing and documenting an API and feeding that description to an LLM and asking it how to use it. This works well. LLMs are quite good at reading technical documentation and synthesizing contextual answers from it.
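
A minimal sketch of that experiment, assuming the OpenAI Python client (the model name and the invented "quxdb" library below are purely illustrative, not something from this thread):

  # Sketch: check whether an LLM can use an API it has never seen,
  # working only from a description pasted into the prompt.
  # The "quxdb" library below is invented for this example.
  from openai import OpenAI

  invented_docs = """
  quxdb client library (fictional):
    connect(path: str) -> Conn                       open or create a database file
    Conn.put(key: str, value: str) -> None           upsert a key
    Conn.scan(prefix: str) -> list[tuple[str, str]]  entries whose key starts with prefix
  """

  client = OpenAI()  # reads OPENAI_API_KEY from the environment
  resp = client.chat.completions.create(
      model="gpt-4o",  # placeholder; any recent chat model
      messages=[
          {"role": "system", "content": "Answer using only the documentation provided."},
          {"role": "user", "content": invented_docs
              + "\nWrite a snippet that stores two users and lists every key starting with 'user:'."},
      ],
  )
  print(resp.choices[0].message.content)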


I broadly agree that new knowledge will still need to be created and that overuse of LLMs could undermine that, yet... when was the last time you paid to read an API's docs? It costs money for companies to produce those too.

> ChatGPT and Gemini literally only know the answer because they read StackOverflow. Stack Overflow only exists because they have visitors.

I mean... They also can read actual documentation. If I'm doing any API work, or working in a language I'm not familiar with, I ask the LLM to include the source it got its answer from and to use official documentation when possible.

That lowers the hallucination rate significantly and also lets me ensure said function or code actually does what the LLM reports it does.

In theory, all stackoverflow answers are just regurgitated documentation, no?


> I mean... They also can read actual documentation.

This 100%. I use o3 as my primary search engine now. It is brilliant at finding relevant sources, summarising what is relevant from them, and then also providing the links to those sources so I can go read them myself. The release of o3 was a turning point for me where it felt like these models could finally go and fetch information for themselves. 4o with web search always felt inadequate, but o3 does a very good job.

> In theory, all stackoverflow answers are just regurgitated documentation, no?

This is unfair to Stack Overflow. A lot of debugging and problem solving of undocumented bugs or behaviour has happened on that platform.


Where does the knowledge come from? People can only post to SO if they've read the code or the documentation. I don't see why LLMs couldn't do that.

ITT: people who think LLMs are AGI and can produce output that the LLM has come up with out of thin air or by doing research. Go speak with someone who is actually an expert in this field about how LLMs work and why the training data is so important. I'm amazed that people in the CS industry talk like they know everything about a technology after merely using it, without ever writing a line of code for an LLM. Our industry is doomed with people like this.

This isn't about being AGI or not, and it's not "out of thin air".

Modern implementations of LLMs can "do research" by performing searches (whose results are fed into the context), or in many code editors/plugins, the editor will index the project codebase/docs and feed relevant parts into the context.

My guess is they either were using the LLM from a code editor, or one of the many LLMs that do web searches automatically (ie. all of the popular ones).

They are answering non-stackoverflow questions every day, already.


Yeah, doing web searches could be called research, but that's not what we are talking about. Read the parent of the parent. It's about being able to answer questions that are not in its training data. People are talking about LLMs making scientific discoveries that humans haven't. A ridiculous take. It's not possible, and with the current state of the tech it never will be. I know what LLMs are trained on. That's not the topic of conversation.

A large part of research is just about creatively re-arranging symbolic information and LLMs are great at this kind of research. For example discovering relevant protein sequences.

> It's about being able to answer questions that are not in its training data.

This happens all the time via RAG. The model “knows” certain things via its weights, but it can also inject much more concrete post-training data into its context window via RAG (e.g. web searches for documentation), from which it can usefully answer questions about information that may be “not in its training data”.
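
A minimal sketch of that flow, to make the distinction concrete; search_docs is a hypothetical retriever standing in for a web search or vector index, and the client usage assumes the OpenAI Python library:

  # Sketch of retrieval-augmented generation: fetch post-training material,
  # place it in the context window, and answer the question against it.
  from openai import OpenAI

  def search_docs(query: str) -> str:
      # Hypothetical retriever: a real system would call a web search API or a
      # vector index here and return the most relevant snippets for the query.
      return "...up-to-date documentation snippets would go here..."

  def answer_with_rag(question: str) -> str:
      context = search_docs(question)  # data the model may never have been trained on
      client = OpenAI()
      resp = client.chat.completions.create(
          model="gpt-4o",  # placeholder model name
          messages=[
              {"role": "system",
               "content": "Answer from the provided context; say so if it is not covered."},
              {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
          ],
      )
      return resp.choices[0].message.content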


I think the time has come to not mean LLMs when talking about AI. An agent with web access can do so much more and hallucinates way less than "just" the model. We should start seeing the model as a building block of an AI system.

> LLM has come up with out of thin air

People don't think that. Especially not the commenter you replied to. You're human-hallucinating.

People think LLMs are trained on raw documents and code besides Stack Overflow, which is very likely true.


Read the parent of the parent. It's about being able to answer questions that are not in its training data. People are talking about LLMs making scientific discoveries that humans haven't. A ridiculous take. It's not possible, and with the current state of the tech it never will be. I know what LLMs are trained on. That's not the topic of conversation.

> We'll be back in the world of Encyclopedias

On a related note, I recently learned that you can still subscribe to the Encyclopedia Britannica. It's $9/month, or $75/year.

Given the declining state of Wikipedia and the untrustworthiness of A.I., I'm considering it.


We'll start writing documentation for primary consumption by LLMs rather than human readers. The need for sites like SO will not vanish overnight but it will diminish drastically.

The idea that LLMs can only spew out text they've been trained on is a fundamental misunderstanding of how modern backprop training algorithms work. A lot of work goes into refining training algorithms to prevent overfitting of the training data.

Generalisation is something that neural nets are pretty damn good at, and given the complexity of modern LLMs, the idea that they cannot generalise the fairly basic logical rules and patterns found in code such that they're able to provide answers to inputs unseen in the training data is quite an extreme position.


Yet the models do not (yet) reason. Try to ask them to solve a programming puzzle or exercise from an old paper book that was not scanned. They will produce total garbage.

Models work across programming languages because it turned out programming languages and APIs are much more similar than one could have expected.


To add, another experience I had. I was using an API I'm not that familiar with. My program was crashing. Looking at the stack trace I didn't see why. Maybe if I had many months experience with this API it would be obvious but it certainly wasn't to me. For fun I just copy and pasted the stack trace into Gemini. ~60 frames worth of C++. It immediately pointed out the likely cause given the API I was using. I fixed the bug with a 2 line change once I had that clue from the AI. That seems pretty useful to me. I'm not sure how long it would have taken me to find it otherwise since, as I said, I'm not that familiar with that API.

You remember when Google used to do the same thing for you way before "AI"?

Okay, maybe sometimes the post about the stack trace was in Chinese, but a plain search used to be capable of giving the same answer as an LLM.

It's not that LLMs are better; it's that search got enshittified.


I remember when I could paste an error message into Google and get an answer. I do not remember pasting a 60 line stack trace into Google and getting an answer, though I'm pretty sure I honestly never tried that. Did it work?

Yes, pasting lots of seemingly random context into Google used to work shockingly well.

I could break most passwords of an internal company application by googling the SHA1 hashes.

It was possible to reliably identify plants or insects by just googling all the random words or sentences that would come to mind describing it.

(None of that works nowadays, not even remotely)


Google has never identified the logical error in a block of code for me. I could find what an error code was, yes, but it's of very little help when you don't have a keyword to search.

A horse used to get you places just like a car could. A whisk worked as well as a blender.

We have a habit of finding efficiencies in our processes, even if the original process did work.


> You remember when Google used to do the same thing for you way before "AI"? [...] stack trace [...], but a plain search used to be capable of giving the same answer as an LLM.

The "plain" Google Search before LLMs never had the capability to accept an entire copy&pasted lengthy stack trace (e.g. ~60 frames of verbose text), because long strings like that exceed Google's limits. Various answers say a limit of 32 words and 5784 characters: https://www.google.com/search?q=limit+of+google+search+strin...

Before LLMs, the human had to manually and visually hunt through the entire stack trace to guess at a relevant smaller substring and paste that into the Google search box. Of course, that's doable, but it's a different workflow from an LLM doing it for you.
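
That manual narrowing step is essentially a small heuristic; here is a rough sketch of it in Python (the "myapp" namespace is a made-up stand-in, and the 32-word cap comes from the limits quoted above):

  # Sketch: shrink a long stack trace to something a classic search engine
  # will accept, keeping the error message and frames that mention our own
  # code, then truncating to roughly the quoted 32-word query limit.
  import re

  def trace_to_query(trace: str, namespace: str = "myapp", max_words: int = 32) -> str:
      lines = trace.splitlines()
      interesting = [ln for ln in lines if "Error" in ln or namespace in ln]
      candidate = " ".join(interesting or lines[:3])
      candidate = re.sub(r"0x[0-9a-fA-F]+", "", candidate)  # drop raw memory addresses
      return " ".join(candidate.split()[:max_words])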

To clarify, I'm not arguing that the LLM method is "better". I'm just saying it's different.


That's a good point, because now that I think of it, I never pasted a full stack trace in a search engine. I selected what looked to be the relevant part and pasted that.

But I did it subconsciously. I never thought of it until today.

Another skill that LLM use can kill? :)


Those truly were the dark ages. I don't know how people did it. They were a different breed.

I don't think search used to do everything LLMs do now, but you have a very good point. Search has gotten much worse. I would say search is about the quality it was just before Google launched. My general search needs are being met more and more by Claude; I use Google only when I know very specific keywords, because of SEO spam and ads.

It was just as likely that Google would point you towards a stackoverflow question that was closed because it was considered a duplicate of a completely different question.

> when Google used to do the same thing for you way before "AI"?

Which is never? Do you often just lie to win arguments? An LLM gives you a synthesized answer; a search engine only returns what already exists. By definition it cannot give you anything that is not a super obvious match.


> Which is never?

In my experience it was "a lot", because my stack traces were mostly hardware-related problems on ARM Linux in that period.

But I suppose your stack traces were much different and superior and no one can have stack traces that are different from yours. The world is composed of just you and your project.

> Do you often just lie to win arguments?

I do not enjoy being accused of lying by someone stuck in their own bubble.

When you said "Which is never" did you lie consciously or subconsciously btw?


According to a quick search on google, which is not very useful these days, the maximum query length is 32 words or 2000 characters and change depending on which answer you trust.

Whatever it is specifically, the idea that you could just paste a 600 line stack trace unmodified into google, especially "way before AI" and get pointed to the relevant bit for your exact problem is obviously untrue.


> your stack traces were much different and superior and no one can have stack traces that are different from yours

Very few devs bother to post stack traces (or, generally, any programming question) online. They only do that when they're badly stuck.

Most people work out their problem then move on. If no one posts about it your search never hits.


One of the many ways that search got worse over time was the promotion of blog spam over actual documentation. Generally, I would rather have good API documentation or a user guide that leads me through the problem so that next time I know how to help myself. Reading through good API documentation often also educates you about the overall design and associated functionality that you may need to use later. Reading the manual for technology that you will be regularly using is generally quite profitable.

Sometimes, a function doesn't work as advertised or you need to do something tricky, you get a weird error message, etc. For those things, stackoverflow could be great if you could find someone who had a similar problem. But the tutorial level examples on most blogs might solve the immediate problem without actually improving your education.

It would be similar to someone solving your homework problems for you. Sure you finished your homework, but that wasn't really learning. From this perspective, ChatGPT isn't helping you learn.


Your parent searches for answers; you search for documentation. That's why AI works for him and not for you.

You're completely missing his point. If nobody figures things out for themselves, there's a risk that at some point AI won't have anything to learn from, since people will stop writing blog posts on how they figured something out and answering Stack Overflow questions.

Sure, there is a chance that one day AI will be smart enough to read an entire codebase and churn out exhaustively comprehensive and accurate documentation. I'm not convinced that is guaranteed to happen before our collective knowledge falls off a cliff.


Read it again, slowly. FSVO "works":

  That's why AI works for him and not for you.
We both agree.

For anything non-trivial you have to verify the results.

I disabled AI autocomplete and cannot understand how people can use it. It was mostly an extra key press on backspace for me.

That said, learning new languages is possible without searching anything. With a local model, you can do that offline and have a vast library of knowledge at hand.

The Gemini results integrated in Google are very bad as well.

I don't see the main problem as people lazily asking AI how to use the toilet, but rather that real knowledge bases like Stack Overflow will vanish because of a lack of participation.


It's perfect for small boilerplate utilities. If I need a browser extension/tampermonkey script, I can get up and running quickly without having to read docs/write manifests. These are small projects where without AI, I wouldn't have bothered to even start.

At its least, AI can be extremely useful for autocompleting simple code logic or automatically finding replacements when I'm copying code/config and making small changes.


> Getting answers to those questions is learning, regardless of where the answer comes from.

Sort of. The process of working through the question is what drives learning. If you just receive the answer with zero effort, you are explicitly bypassing the brain's learning mechanism.

There's a huge difference between your workflow and fully agentic AIs, though.

Asking an AI for the answer in the way you describe isn't exactly zero effort. You need to formulate the question and mold the prompt to get your response, and integrate the response back into the project. And in doing so you're learning! So YOUR workflow has learning built in, because you actually use your brain before and after the prompt.

But not so with vibe coding and agentic LLMs. When you hit submit and get the tokens automatically dumped into your files, there is no learning happening. Considering AI agents are effectively trying to remove any pre-work (i.e. automating prompt engineering) and post-work (i.e. automating debugging and integrating), we can see agentic AI as explicitly anti-learning.

Here's my recent vibe coding anecdote to back this up. I was working on an app for an e-ink tablet dashboard and the tech stack of least resistance was C++ with QT SDK and their QML markup language with embedded javascript. Yikes, lots of unfamiliar tech. So I tossed the entire problem at Claude and vibe coded my way to a working application. It works! But could I write a C++/QT/QML app again today - absolutely not. I learned almost nothing. But I got working software!


The logical conclusion of this is 'the AI just solves the problem by coding without telling you about it'. If we think about 'what happens when everyone vibe-codes to solve their problems' then we get to 'the AI solves the problem for you, and you don't even see the code'.

Vibe-coding is just a stop on the road to a more useful AI and we shouldn't think of it as programming.


It "tells you about it" with code. You can still learn from the code AI has produced. It may be suboptimal or messy... but so is code produced by many of our fellow humans.

I love learning new things. With AI I am learning more and faster.

I used to be on the Microsoft stack for decades. Windows, Hyper-V, .NET, SQL Server ... .

Got tired of MS's licensing BS and I made the switch.

This meant learning Proxmox, Linux, Pangolin, UV, Python, JS, Bootstrap, Nginx, Plausible, SQLite, Postgres ...

Not all of these were completely new, but I had never dove in seriously.

Without AI, this would have been a long and daunting project. AI made this so much smoother. It never tires of my very basic questions.

It does not always answer 100% correctly the first time (tip: paste in the docs for the specific version of the thing you are trying to figure out, as it sometimes has out-of-date or mixed-version knowledge), but it can most often be nudged and prodded to a very helpful result.

AI is just an undeniably superior teacher than Google or Stack Overflow ever was. You still do the learning, but the AI is great in getting you to learn.


I might be an outlier, but I much prefer reading the documentation myself. It's one of the reasons I love using FreeBSD and OpenBSD as daily drivers: the documentation is just so damn good. Is it a pain in the ass at the beginning? Maybe. But I need far fewer documentation lookups over time and do not have to rely on AI for them.

Don't get me wrong, I tried. But even when pasting the documentation in, the number of times it hallucinated parameters and arguments that were not even there was such a huge waste of time that I don't see the value in it.


I sort of disagree with this argument in TFA, as you say, though the rest of the article highlights a limitation. If I'm unfamiliar with the API, I can't judge whether the answer is good.

There is a sweet spot of situations I know well enough to judge a solution quickly, but not well enough to write code quickly, but that's a rather narrow case.


For one-offs, sure! Go for it. For production, or things you will have to manage long-term, I would recommend learning some of the space, given the quality of AI output and your ability to surpass it pretty quickly.

I trust ChatGPT and Gemini a lot less than Stack Overflow. On Stack Overflow I can see the context in which the answer to the original question was given. AI does not do this. I've asked ChatGPT questions about CMake, for instance, that it got subtly wrong; if I had not noticed this, it would have cost me a lot of time.

So AI is basically best as a search engine.

As I've said a bunch.

AI is a search engine that can also remix its results, often to good effect.


That's right.

Alwayshasbeen.jpg meme.

I mean yes, current large models are essentially compressing incredible amounts of content into something manageable by a single Accelerator/GPU, and making it available for retrieval through inference.


I mean, it's just a compressed database with a weird query engine.

And ChatGPT never closes your question without an answer because it (falsely) thinks it's a duplicate of a different question from 13 years ago.

But it does give you a ready-to-copy-paste answer instead of a 'teach a man to fish' answer.

I'd rather have a copy paste answer than a "go fish" answer

Not if you prompt it to explain the answer it gives you.

Not the same thing. Copying code, even with comprehensive explanations, teaches less than writing/adjusting your own code based on advice.

It can also do that if you ask it. It can give you exercises that you can solve. But you have to specifically ask, because by default it just gives you code.

Of course, I originally was picking on Stack Overflow's moderation.

Which strongly discouraged trying to teach people.


I think the main issue here is trust. When you google something you develop a sense for bullshit so you can "feel" the sources and weigh them accordingly. Using a chat bot, this bias doesn't hold, so you don't know what is just SEO bullshit reiterated in sweet words and what's not.


