Gen AI: Too much spend, too little benefit [pdf] (goldmansachs.com)
48 points by pajop 4 months ago | 30 comments



My workstation runs a local LLM that allows me to ask deeply technical questions in plain English and receive useful, usually highly accurate (95%+ of the time) responses in a few seconds.

It has no memory and requires no Internet access, so I feel generally comfortable interacting with it in a way that I do not when using Google or Facebook - if I randomly ask about drill presses or brownie recipes I will not suddenly be deluged by ads for toolboxes and Ozempic.

This to me is a radically powerful tool, and one that makes me skeptical of the "AI Bust" memes I see on occasion. There may be a lot of dumb money being poured into applying AI to problems it's ill suited for, but the things it is good at are such a positive step-function increase over the old tools that I cannot possibly imagine losing it.


And does that get your groceries bagged or your leaky tap fixed?

Acemoglu readily concedes that AI helps a lot with some jobs, but points out that those are a small proportion of total employment anyway.


Attach it to a robot arm and yes it does.

The latency isn't there yet, so there's not much money being poured in, but when an 8b model can run in microseconds you'll see a very different world of robotics.


What is your setup?


Minisforum BD790i with 96 gigs of LPDDR5 RAM running Ollama. I run Qwen 110b on the CPU and get around a token per second, which is enough for me to dash off a question and get an answer a few minutes later. Smaller models like CodeLlama that can run 100% on the GPU are 10x faster, but I've found that any model under 70b params makes stupid errors when asked more complex questions.
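For anyone wanting to reproduce this: Ollama exposes a local REST API on port 11434 by default, so querying it is a single HTTP request. Here's a minimal sketch in Python; the model tag qwen:110b is an assumption, substitute whatever `ollama list` shows on your machine:

    import requests

    # Ollama's /api/generate endpoint takes a model tag and a prompt;
    # with "stream": False it returns one JSON blob instead of a token stream.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "qwen:110b",  # assumed tag; check `ollama list`
            "prompt": "Why would a 110b model beat a 7b one on niche technical questions?",
            "stream": False,
        },
        timeout=600,  # CPU-only inference at ~1 token/s needs a generous timeout
    )
    print(resp.json()["response"])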


> Tech giants and beyond are set to spend over $1tn on AI capex in coming years, with so far little to show for it.

Regardless of whether the implicit claim here is true (the claim being that all this spend won't produce an ROI), the explicit claim is nonsensical.

Of course the $1tn in capex has nothing to show for it! The spend has not happened yet! Of the spend that _has_ happened, most of the chips are not physically in data centers yet. Of the chips that _are_ in data centers, most of the models are not yet trained!

And of the models that _have_ been trained, many have clearly had a significant ROI. GPT-4 cost $100m, and OpenAI's revenue is now reported to be $3.4 billion a year.

Saying there's "little to show for it" is an absurd claim; the products are _printing_ cash! We beat the Turing test! You can drive around in a self-driving car!

It's perfectly reasonable to ask "where does the ROI come from when you spend $1tn on capex?", but it's hard to argue against the success of the spend on the last generation of models.


Er, what is your AIffiliation? Do you work for a coLLMnar database company, maybe in eGyPT?


GPT-4 might have cost $100m to train, but how much does it cost to run?

If we look at NVIDIA's earnings, roughly 10x that annual revenue number is being spent on NVIDIA hardware each quarter.


If OpenAI has merely 10% margins, that's $340m a year against a $100m training cost; they've recouped it more than 3x over.
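Back-of-envelope, treating the figures quoted upthread as given (the 10% margin is purely an assumption, not anything OpenAI has reported):

    revenue = 3.4e9        # reported annual revenue, $
    margin = 0.10          # assumed margin from the comment above
    training_cost = 100e6  # reported GPT-4 training cost, $

    annual_profit = revenue * margin         # $340m/year
    payback = annual_profit / training_cost  # ~3.4
    print(f"training cost recouped {payback:.1f}x per year")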


> We beat the Turing test!

I guess? It's very clear that the Turing Test is not a sufficient benchmark for intelligence, though, because far too many of the answers I get, compared to a human, are technically nonsense. They feature correct grammar and they sound correct, but they are very, very wrong. A lot of the responses to my technical questions are simply catastrophically wrong, and I wonder what else these things are wrong about. There's no way it's merely technical stuff, where usually there are only a small number of correct answers, if there are multiple correct answers at all.

My employer has bots in Teams that have been trained on years of questions and answers, and the answers they give are so incredibly wrong that we've replaced them all with fixed-response automation. GenAI has been a severe disappointment even for me, and I had almost no hopes for it at all and have pooh-poohed the technology from the start.

> You can drive around in a self-driving car!

Can you? Really? Or do you need to be there to prevent the car from doing stupid things? The only fully self-driving vehicles I know of either follow fixed routes, operate in a small fixed area, or do not carry humans at all. Another example where big AI promises were made and the results simply failed to materialize.

Generative AI, at least for anything useful, has been a severe disappointment for me. Even if I spent $0.01 per year on it, I would consider it a waste of money.


The technology has been around for about one year now. For people who have been working in the field for as long as I have (> 30 years), the output is absolutely incredible. Really incredible. However, for the vast armies of academics and others who went down a different route, the result is bitter: we failed to deliver anything close to LLMs. I put myself on that list.

The problem is that many of these technologies were developed by the GAFAM, which very few people have confidence in, for obvious reasons; however, these were the only entities with enough computing power to do so. Nothing surprising here.

Now we have many people who think that AI is a hoax and that the bubble is going to burst soon. There is something pretty religious here; it reminds me of the people who get convinced by gurus that the world will end on a specific date. Even in its current state, AI is already incredibly useful, and I see nothing to convince me that AI has hit a glass ceiling. Believe me, five years ago that glass ceiling was much, much lower than it is today.


To what extent do you see the output as incredible?

At first glance the output _seems_ incredible, but once one starts pushing for more, or requiring production-level consistency, it takes a tremendous effort to put in place, or it is simply not possible.

I also have a few decades in the field, especially around the automation of knowledge processes, so I'm genuinely interested in other viewpoints.


I think it's a little weird to say there's no killer app when millions of people already use ChatGPT and tools like GitHub Copilot are already quite embedded into professional workflows. It's just not a given that these will truly take off into the stratosphere as the next Google or whatever. If people are expecting that to happen in the next year, well, I guess they spent too much money. Whoopsies.


But what about the fact that those same professional workflows are gearing up to dismiss anything that has visibly been produced with GenAI tools? I've seen this in HR and supply contracting, as well as in art commissions, and I don't see how it could fail to spread further.

Not only is there a legal risk, but there's also a disconnect as to the genuineness of what is ultimately delivered.


Aye. Adoption of AI is successful when it feels as smooth as spellcheck.

We're still a long way from that, and will get there incrementally.


When someone has a short position, they and the submission author will use all kinds of biased information to manipulate others into selling.


This is probably accurate.

But GenAI is a major upheaval, and in most such upheavals a massive amount of initial capital is invested while the payoff arrives only slowly, over decades. The payoffs, though, are huge.

Think about electricity. Building out the grid was HUGELY expensive, but the benefits have been derived for decades. Same for the highway system, the railway system, etc.


Not sure public infrastructure is a good comparison, because that kind of stuff benefits the public.

GenAI is definitely a major upheaval, and the payoffs will definitely be huge, but those payoffs will go to those who fund it, to the detriment of the working class, who haven't been sharing in the productivity gains made by capital.

I just don’t see the public benefit that we see from public infrastructure. If anything GenAI will exacerbate and accelerate all the structural issues we have in our economy and society.


Electricity, the printing press, etc. were all privately owned.


AI is useless for any kind of foresight that comes from human intuition and serendipity; it answers without any clue about the impact of its answers. And when the providers filter responses out, AI just gets more useless, since censorship is known to reduce culture and learning.


Within large tech companies, AI projects (often just features within existing products) are seen as the safe place to be, while non-AI projects have higher layoff rates.

I wonder how much of the AI spend is driven by the above situation.


>And he doesn’t take much comfort from history that shows technologies improving and becoming less costly over time, arguing that AI model advances likely won’t occur nearly as quickly—or be nearly as impressive—as many believe.

Currently we need cards with larger memory, then we'll need faster cards, then cards with more memory again.

The only reason we don't have $1000 64GB cards is that, as the article points out, NVIDIA currently has a stranglehold on the market. Given that GPUs print money, it won't be more than a few years before we see those cards from other parties.

At the same time, the improvement in small models has been phenomenal: there are now 8b models that consistently outperform the original gpt-3.5-turbo. In two years, what was the state-of-the-art model has become something that can run on your phone for free.


Lots of warning signs here, and good on GS for calling them out:

[1] Virtually one supplier in the mix (NVIDIA)

[2] Costly ops not returning good value

[3] Signal-to-noise is extremely low in the GenAI services space, where the blockchain grifters moved following the collapse in notional value


And [4] Energy-hungry, already breaking green transition targets, and already becoming an issue for arbitration (as to what power goes to IT/AI, and what goes to supporting existing life- and society-critical systems).

Blockchains, crypto and NFTs had the same red flags, just without anything as demonstrable as ChatGPT or image/video/music generation.



Very acute observation, although perhaps a little too subtle for lesser minds.


only suitable for zen no minds (English-Japanese pun intended)


acute vs chronic (disease)

or

acute vs obtuse (angle)

or

acute vs obtuse (comment)?

;)


It's a synonym for sharp; used with "observation" it means perspicacious. Acute eyesight can make out small details at large distances.

Nabla9 might perhaps have considered expanding a little more on what they saw.


Yes, I knew the meaning in the context. I was just doing wordplay, but thanks.



