Hacker News | z3c0's comments

It's an abstraction for you, not for the rest of that developer's team, who have to reproduce the same solution even after said developer has "won the lottery", so to speak.

inb4: "Don't worry, just use GPT to make the docs"


If you throw away the code then yeah, but I've never seen anyone do this.

Given how many people attempted to date their computer after ChatGPT launched, I don't even want to imagine what this technology has in store.

People already pay a shitton of money for silicon sex dolls and fantasize about robot sex online. Sex toys are connected to the network for remote control. As soon as a humanoid robot becomes commercially accessible, some will have sex with it.

God we are a horrible species.


It did, verifiably here. Based on their own marketing, I thought it an alternative to Codex, not Codium.

Knowledge of this setting has shifted my perspective considerably.

edit: not enough to ditch Sublime, however.


I have not tried it, but there is "GRAM" as well, which is a fork of Zed without any of the AI stuff. Not sure what's missing for you from ST, but for me, I don't really see myself ever upgrading to ST4 or any future version unless they come out with something worthwhile for ST5.

https://gram.liten.app/


Well, bullshit tends to be more bullish, and it's not the bears keeping money on the table.

I'm not convinced. To me it looks like the bulls are throwing money and priding themselves on the bits that land back on the table (or come from the next table over). It looks very chaotic and wasteful to me.

Not to mention, it doesn't actually deliver the promised productivity at the promised rates. The most enthusiastic proponents are middle management, not the actual doers.

It's an expensive route to mediocrity, which doesn't offer an edge in a market where everyone is using the same snake oil.


They got way over their skis on this one. There's a difference between "impressive" tech vs. "operational" tech. That difference usually boils down to prioritizing engineering rigor over marketing.

Unreliable mediocrity, because you simply can never be sure when the damned thing lies/hallucinates unless you double-check everything.

So now you're wrangling an "AI" system and you're still doing most of the work you would have had to do anyway. ...And when you don't, it can get really embarrassing.

https://www.abajournal.com/news/article/elite-wall-street-la...

Not the first time, surely not the last. The problem is that so much money is tied up in this thing, and the moment the music stops the bag holders are going to be utterly doomed.


> the bag holders are going to be utterly doomed

Good news, the plan is for us to be the bag holders as they rush to IPO.


This study considers caffeine consumption outside of coffee, so an alternative caffeine source might be worth looking into. That was my takeaway, at least. I also drink espresso, for the caffeine and the noticeable ease on my gut compared to drip or pressed coffee.

"photographic" memory.

I can't tell if it's a joke or not.


This might be the first time I've seen a HN comment in a GPT thread that actually reflects what the average business user sees in GPT products.

They don't do the job, reliably or well. No amount of wishful thinking or extra tokens will change that.


No surprise really.

Remember when Steve said "The computer for the rest of us"?

I suppose it isn't a surprise. Are researchers/generally geeky people meant to be able to relate to the average person's day-to-day beyond their sphere? Lmao.

You can't produce stuff for people you don't understand. "Understand" being the key term.


Ha! To think that we're finally back to asking ourselves why we are using generative models for categorization and extraction. I wonder how much money has collectively been wasted by companies whittling away at square pegs.


> why we are using generative models for categorization and extraction

Because LLMs have already amortized the man-years cost of collecting, curating, and training on text corpora?


They amortized the creation of corpora with trainable features, not the myriad methods that can categorize text at the success rates required by high-stakes industries.


Yeah, LLMs are a solution to the cold-start problem. Plus, they're easy to integrate, and if you know what you're doing in terms of evals, post-processing, and so on, you can get excellent performance out of them, along with semantic classification and reasoning that you won't get out of some bespoke traditional DS/ML model.
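For contrast, here's a minimal sketch of the kind of "bespoke traditional DS/ML model" being discussed in this thread: a multinomial Naive Bayes text categorizer built from the standard library alone. The labels and training snippets (a hypothetical support-ticket router) are made up for illustration; this is the cold-start cost an LLM lets you skip, but it's also the kind of cheap, auditable baseline the thread is nostalgic for.

```python
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

class NaiveBayes:
    """Multinomial Naive Bayes with add-one (Laplace) smoothing."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word -> count
        self.label_counts = Counter()            # label -> number of docs
        self.vocab = set()

    def fit(self, docs):
        for text, label in docs:
            self.label_counts[label] += 1
            for w in tokenize(text):
                self.word_counts[label][w] += 1
                self.vocab.add(w)

    def predict(self, text):
        total_docs = sum(self.label_counts.values())
        best, best_lp = None, float("-inf")
        for label in self.label_counts:
            # log prior + log likelihood, smoothed so unseen words don't zero out
            lp = math.log(self.label_counts[label] / total_docs)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in tokenize(text):
                lp += math.log((self.word_counts[label][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = label, lp
        return best

# Hypothetical training data: routing support tickets by topic.
train = [
    ("cannot log in password reset", "account"),
    ("reset my password please", "account"),
    ("invoice charged twice refund", "billing"),
    ("refund for duplicate invoice", "billing"),
]

clf = NaiveBayes()
clf.fit(train)
print(clf.predict("please reset password"))   # -> account
print(clf.predict("double charged invoice"))  # -> billing
```

Real deployments would use a proper tokenizer and far more data, but the point stands: for plain categorization, a model like this is trainable in milliseconds, fully inspectable, and never hallucinates a label outside its training set.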


Thank you for the laugh. Some probably need to be reminded of Sam Altman's roots, as much as I detest what he's done to the word "open".


That would be the Sam Altman who left YC back in 2019? Or some other, more recent, Sam Altman?

