Hacker News

With all the arguments that ChatGPT is "just autocomplete", I wonder if the people making them have ever used it. I know it is technically autocomplete, but the end results are so much more than that.



Indeed, I was quite sceptical of all of this until I actually tried Copilot. Sure, you can characterise it as "just autocomplete", but that autocomplete has saved me a bunch of typing and thinking.


I'm not saying it's wrong, but usually thinking is the most important part of programming and the part relevant to long-term success.

Typing is pretty easy and not the bottleneck of software development. That's why readable variable names are better than abbreviations.

Judicious use of abstractions helps with that as well.

I think where things like Copilot might shine is as a JIT educator of coding practices, but they should be part of your thinking process, not a replacement for it.

The risk with over-relying on crutches is substituting them for the knowledge and intentionality of development.

There's a balance, and one that needs to be sought.

I'm not sceptical about the power, but I am sceptical about the claims of people who want to cash in, or who simply say "it's awesome" and allow no discussion. That means you get projects treating GPT models as some sort of authority that doesn't need to justify why it's better than the alternatives. If you argue against it, you're "not seeing the opportunity".

It's not a new battle: fighting hype and separating a technology's actual capabilities from the lies and misconceptions around it. It's hard to get people to see short-term vs. long-term.


Curious, have you tried Copilot? The kind of thinking and typing it saves me from is not trivial. It turns writing the remainder of a method into just pressing Tab. Or it completely writes test cases based on the test name. Or it suggests test cases entirely on its own.
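For readers who haven't seen this workflow, the effect looks roughly like the sketch below: you type a descriptive test name and the tool proposes a plausible body. (This is a hypothetical illustration, not actual Copilot output; the function and test names are invented.)

```python
# You write only the signature and docstring; a Copilot-style tool
# typically proposes the one-line body.
def normalize_whitespace(s: str) -> str:
    """Collapse runs of whitespace into single spaces and strip the ends."""
    return " ".join(s.split())

# Typing just this descriptive test name is often enough for the tool
# to suggest the assertions that follow.
def test_normalize_whitespace_collapses_internal_runs():
    assert normalize_whitespace("a   b\t\nc") == "a b c"
    assert normalize_whitespace("  padded  ") == "padded"
```

The point the parent is making is that the suggestion arrives from the name alone, before you have typed any of the body.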

The other day I let it autocomplete methods in a public interface and it was literally ideating feature ideas for me.

Sure there’s overblown hype but there is also real value here.


If you're talking about Microsoft's ML powered suggestions/autocomplete tool, I haven't.

I'll try it for the sake of this conversation but I think it still doesn't change my point about being intentional and thinking.

I'd like to see examples of those tests. After many years of software development, and after seeing how many people produce buggy services that create a lot of maintenance burden or simply solve the wrong problem, I firmly believe writing is a minuscule part of the day. It's mostly about thinking enough and communicating well with the human interfaces of your project.

Writing is easy. You can get fast with touch typing or autocomplete features (including the copilot one) but that's not the most important thing.

Knowing why you do something, and being able to analyze that decision after a period of time and evaluate how "right" it was, is not something that can be done for you.

For quick brainstorming it can be great, but we should be careful about the authority we assign it when we can't really explain its thought process.


If you think about it, even autocomplete based on heuristics already embodies quite some intelligence, at least a lot more than the word output of a parrot. The fact that neural nets grew out of a similar architecture does not mean they cannot be many times more intelligent. I'm pretty convinced that the next level, within 10 years from now, will be indistinguishable from human intelligence in a lot of areas. Think about diagnosis of illnesses, predictions of weather and climate, logistics, teaching, driving cars, traffic regulation, prediction of natural disasters, migration... We humans tend to raise the bar for intelligence after each breakthrough.
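The "autocomplete based on heuristics" the comment refers to can be as simple as a bigram frequency table: predict the word that most often followed the current one in training text. A minimal sketch (the toy corpus and function names here are invented for illustration):

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count, for each word, which words have followed it."""
    words = corpus.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def suggest(follows: dict, word: str):
    """Suggest the most frequent continuation seen in training, or None."""
    counts = follows.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

model = train_bigrams("the cat sat on the mat and the cat slept")
print(suggest(model, "the"))  # prints "cat" ("cat" follows "the" twice, "mat" once)
```

The same interface, "given context, score possible continuations", scales from this frequency table up to a large language model; the difference is how much structure the scoring function captures.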


If you haven't spent more than an hour investigating ChatGPT or Bing Chat, then you really should. They are astonishing. I would say that in certain ways they are smarter than me. In a generation or two, with a motivational layer on top, they will be smarter than me. Do not scoff until you have honest-to-god sat down and interrogated these things.


I personally wouldn’t say they’re smarter than you. They’re definitely faster. But I believe you, internet stranger, could spend time and resources and produce the same result ChatGPT has produced for you. It would probably take you a lot longer though.


> could spend time and resources and produce the same result ChatGPT has produced for you. It would probably take you a lot longer though.

Actually, no. I have a limited lifespan, of which I've already burned half, and I'll have to spend at least another third of what remains sleeping. That's not even counting making money to feed and shelter myself.

GPT is already a better translator than I'll ever be, in far more languages than I'll ever know. Getting better than GPT at any one of these things would require a massive amount of dedicated time from me. It is already superhuman in this sense, with capabilities well past all but the most exceptional translators.

This is the story of John Henry again. Yes, humans are still exceptional and can beat the machine at some things, especially with training, but we're not sticking with the same steam-powered hammer; we're spending billions on building newer, bigger, faster ones. More of humanity will fall behind the machine every day.


You should let an AI bot have sex with your significant other when the tech is ripe enough, to avoid burning even more of that precious lifespan on things that a machine can do better than you ;)

Tongue in cheek of course, but if you think like that, nothing is "worth" doing - why learn languages, why run, why climb a mountain...


John Henry wasn't hammering those spikes for fun mate. Many of us (idealistically) work on tech with the goal of freeing us to spend time on things like your examples.


I don’t think we disagree here. I’m arguing that something like ChatGPT is not smarter; it just has more time available, or rather it can do in a short amount of time what would take us years, or maybe an entire lifetime.


I don't think the argument is that "it's just autocomplete" by itself, but rather the implications of such. The current product is absolutely useful for all sorts of little things, but I think we're all looking to the future - whether consciously or not. The idea is not that ChatGPT, in its current state, will change the world - but the exciting possibility that ChatGPT shows we're on the verge of artificial general intelligence. OpenAI themselves are happy to play into this with regularly repeated claims of AGI coming within 10 years.

So the question is: is it coming? And I think this is where "it's just autocomplete" comes into play. Can you get from a really sophisticated autocomplete system to AGI? Look to humanity's past. Our intelligence took the peak of human knowledge from 'bash stone, poke with pointy part' to putting a man on the moon. Now imagine we were able to seed a ChatGPT-style program with all of the expressed knowledge of humanity from that former time. Where would it lead us? Your answer here is going to be driven largely by whether you think what we're seeing today is "just autocomplete."


From where I’m sitting, ChatGPT in its current nascent state is absolutely changing the world. It’s not only showing what’s possible but also making a powerful and highly disruptive tech available to the masses to hack and extend.

Of course, we won’t stop here, but as one of the seeds from which AGI will spring, I truly believe it will be seen as a historical innovation in the same league as early search engines, crypto (yes, unironically), or even the internet itself.


> From where I’m sitting, ChatGPT in its current nascent state is absolutely changing the world.

From what I've seen, ChatGPT is changing the world... it's an absolute goldmine for the spammers, the content mills, the bullshitter industries. We've already seen at least one magazine cease unsolicited submissions because ChatGPT-generated entries overwhelmed their editors. We've had a contributor on an open-source project get incredibly belligerent when told not to use ChatGPT to respond to bugs (because its advice wasn't helpful).


We’ve also had thousands of people use chatgpt to tune their writing to be more polite, and to fix other issues they struggle with.

Pointing to two very real anecdotes as evidence of systemic uselessness of a technology is not super convincing. It’s the rhetorical equivalent of saying cars will never amount to anything because Ms. Agnes Thompson was run over by one in Cleveland, or that the internet is pointless because someone sent a hoax UUCP message.


All this means is it is a powerful tool for many uses. Determining if it's a 'good' tool depends on what end of it you're on.


Let's imagine that ChatGPT stayed frozen in its current state of functionality, but was otherwise completely available for use. What would you see being realistically different in the average person's life ten years from now?


That’s a moot question because it will continue to evolve, and while its successor will inevitably dwarf its capabilities, it will be remembered as an early influence and historically significant innovation.

If it were frozen in its current state, however, it would still be a springboard for innovation, because businesses will find new ways to integrate it with their products and compose its functionality with other code and with itself. It’s impossible to imagine the kinds of influence these products will have, because progress will be following the exponential part of an S-curve for some time to come.

Finally, even if in this artificially constrained world of our imagination we don’t allow for third party products that use it as a platform, it will completely upend the way business is done, on a 10 year timescale, just as search engines have before it.


1. Lots more communication because it lowers the barrier to composing simple, effective messages. “Write a polite complaint letter to my gym asking them to play music other than Prince” changes the ROI on casual notes.

2. Much, much easier for non-native speakers to ensure writing is grammatically and tonally correct.

3. Huge reduction in make-work exercises where the idea is to just soak up an hour of someone’s time to validate they have some understanding of a topic.


I use it to make all business correspondence more politically correct.


What they don't understand is that it's technically not just autocomplete. It is a hierarchical model of concepts encoded in a 96-layer neural network.


He can't have done so in any serious way. All these negative opinions just show how much people make stuff up out of thin air rather than bothering to spend time with the actual real world. It's all emotion-driven, and it drives me wild with rage that people are so unprincipled.


I've made this type of argument and have used it quite a bit (though I wouldn't say "just"). Autocomplete is a very useful tool. ChatGPT even more so. It is actually fantastic. The point I was making when explaining it this way was that, in its current state, this technology isn't going to do anything amazing on its own, but will help more people do more amazing things. It still needs to be carefully directed. For people outside the technology world, this distinction is important. I don't think it will hold up forever though.


When you break anything down to its constituent parts it can seem pretty mundane; walking is just controlled falling.



