Goldman Sachs: AI Is overhyped, expensive, and unreliable (404media.co)
133 points by mrzool 8 months ago | 146 comments



Discussions

(88 points, 12 days ago, 157 comments) https://news.ycombinator.com/item?id=40837081

(89 points, 1 week ago, 47 comments) https://news.ycombinator.com/item?id=40885632


Goldman is correct that AI is expensive and unreliable.

But it's only overhyped if AI stays that way.

The hype isn't coming from where AI is today, but where it will be in 2030 and 2040.

What excites the true believers is the slope, not the intercept.


“Over an arbitrarily long timeline, assuming stable geopolitical conditions, technology will continue to improve at an arbitrary rate.” What a groundbreaking insight; no wonder people are excited.


That's not the hype you are looking for.

It's more that by 2040 AI/robots will be considerably smarter than humans and we can kick back while they build us palaces and yachts. Not that that will definitely happen, but it's not a business-as-usual scenario.


Did you even look at the Wright brothers plane? Why are people excited about it? It's an expensive deathtrap!


The Wright flight was in December 1903, and planes were considered for a ban as weapons at the international level in 1911 [1]. Their capabilities and potential were obvious to everyone before the flight even occurred: imagining traveling through the air like a bird was obviously going to be a game changer, even to a simpleton. "Attention Is All You Need" has now been out for over 7 years, a directly comparable timeline. What do we have? Better autocomplete? Worse search results? An overall efficiency increase of 5%? For people knowledgeable about the technical details, a path to AGI via a bigger LLM is about as probable as a path to AGI via a bigger random forest.

Arguing by example is stupid. Plenty of other examples have been proposed elsewhere in this thread. My only contribution is that I think LLMs are not going to yield anything more than incremental efficiency gains. If the only result after 7 years is that Nvidia became a new $1T company and no one else did, then I think it should be becoming more obvious that this is a gold-rush situation and not an iPhone situation.

[1] - https://en.wikipedia.org/wiki/Aviation_in_World_War_I#cite_n...


When the AI gets good enough to fly FPV drones with no datalink (which is sooner than you think) that will be the tipping point you are looking for.


Tangentially related, I've been rather disappointed with the advances coming from software since communication was solved at some point in the early 2000s.

Software mainly seems to enable new business models, some of which tangentially improve our quality of life, while others are actively harmful. Contrasting this with advances in technologies like solar panels or electric cars almost makes me wish I had chosen a different career path.


> The Wright flight was in December 1903 and planes were considered for a ban as weapons at the international level in

What a coincidence that we have people trying to ban AI at a domestic and international level

> What do we have? Better autocomplete? Worse search results?

It's ridiculous to minimize LLMs as better autocomplete.

Also, I get it: for most people, remembering how things were more than a couple of years ago is hard. Can you even remember when you couldn't talk to computers?


So you are basing your outlook on extrapolating from a completely different engineering domain and a completely different socio-historical context?


Yes, and I'm right.


The slope has changed between 10 years ago and now.

You are free to disagree, but don't pretend like you don't understand what the AI crowd is saying.


Never in the history of man has the slope of progress been linear. The AI crowd's case is entirely predicated on a platitude.


AI right now is like covid in March, or climate change in the 1980s. It's no longer a real intellectual debate about whether it will arrive (that was basically over by 2022); now it's an emotional battle where people are incapable of accepting that such a possibility can exist, rather than actually calculating.


> AI right now is like covid in March

March 2021, yeah: well known all over the world, everyone talks about it, but not much has really changed in a while. Some fear COVID might mutate into something deadlier, and some think generative AI will turn into something completely different.

The quick pace was GPT-2 and GPT-3; those happened in 2019 and 2020, and today's pace is a crawl in comparison. GPT-3 was like covid in March 2020: at the time most people were barely aware of it, but today it is mainstream.

The revolution you are talking about has already happened: people all over the world interact with their computers using natural language. So your prediction here is a bit late.


Did we experience the same thing? March 2020 saw an explosion in NYC and much of the mainland US, and widespread lockdowns. However, many people were convinced the lockdowns would last two weeks at most, with covid dropping to near zero (in hindsight, completely ridiculous low-probability thinking).


Yeah, but that isn't generative AI today; the explosion already happened. At best you can call ChatGPT the explosion, but that was already 1.5 years ago.

Not much unexpected will happen now unless a new breakthrough is made. Similarly, covid in March 2021 was well understood, and not much would happen unless it mutated into a much deadlier strain, which of course was unlikely. Covid in March 2021 was still a massive thing, as AI is today, but it was a massive, well-understood thing, not something that would completely change the world in the future.


Agreed, but might push that back to January 2020

By the second week of March I was well aware of COVID. By the third I was locked in my apartment.

In January someone at the airport in Bangkok thought it was important to scan my body as I arrived, but I had no idea why


Depending on your reference point, a sigmoid and an exponential curve look the same.
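
To make that concrete (a standard identity, not something from the thread): for large negative x the logistic curve is effectively exponential,

    \sigma(x) = \frac{1}{1 + e^{-x}} \approx e^{x} \quad (x \ll 0),

so early-phase data alone cannot distinguish logistic growth from exponential growth; the two curves only separate near the inflection point.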


True, but I think some sigmoid curves have very low probability, like COVID going down right after it spread in NYC.


The slope changed between 1980 and 1990.

The slope changed between 1990 and 2000.

There's a pattern somewhere, here.


Between 2000 and 2020 the slope was lousy. Or, to frame it the other way, the rest were catching up with the best.


Your take reminds me of what self-driving-car boosters have been saying for the last 20 years. I know Waymo is out there driving itself, if you don't count the operators with remote controls connected over 5G. We could achieve AGI today if you don't count the Indian worker sitting remotely, typing responses to your chat messages.


lol, the cognitive dissonance of knowing your own argument is wrong and still not being able to admit it


Please feel free to put your money where your mouth is and convert your portfolios into an AI frontier fund.


Yes. This argument is kind of pointless because it's just a simple bet. I would be very happy if the market were full of people like you--I would gladly take a huge bet on AI against it. But I guess the market is not fully in agreement with you on this matter.


That's not a very interesting opportunity for institutional investors (hedge funds, banks, pensions, etc.). They have the resources and confidence to move fast and cleverly, and don't need to make big bets on far-off possibilities. They have better things to do with their money in the meantime.

The nearly $1T in investment hype since Q4 2022 was chasing the possibility that revolutionary commercialization of generative AI might arrive any day. That no longer seems likely.

As you note, there seem to be at least a few more profound discoveries that need to be made and matured, and so that revolution is still a long way out (from the perspective of these investors), even if it looks meaningfully more possible than it did two years ago.


What is far off for you? Is it five years?


The hype follows what OpenAI's latest release does though, not what OpenAI may release in a decade or two.

> What excites the true believers is the slope, not the intercept.

The phrase "true believers" jumps out to me here. It sure makes AI (well, ML) sound more like a religion than a field of technological research.


Cult of the exponential, considers sigmoids heresy.


> The hype follows what OpenAI's latest release does though

Yes, because what OpenAI releases today helps us improve the prediction of what it may release in a decade or so.


But how can we actually quantify the accuracy of said predictions?

Each release of an LLM is largely a black box. We don't know exactly how they work, what they learned, why they learned it, or what their limitations are.

How exactly does that help us predict what could exist in a decade or two? And how do we align that predictive ability with the fact that so many people working in the industry would not have been able to predict where we are today a decade or two in the past?


That is only true if you believe research never has dead ends and progresses on a well-defined curve.

It does neither. What OAI or others release next does, in the very best case, help set a floor. And if it's a blind alley to a local maximum, even that isn't super-helpful.


>But it's only overhyped if AI stays that way.

This is a fallacy that comes from not doing discounting. When you apply any realistic discount rate, current investments don't pay off if AI only becomes something in 2040.

For investors, "eventually" is not good enough. They move money into some other asset and wait until the time comes.
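
As a rough illustration (my numbers, not anything from the report): discounting at 10% per year, a payoff t = 16 years out (2024 to 2040) is worth

    PV = \frac{FV}{(1+r)^{t}} = \frac{FV}{(1.10)^{16}} \approx 0.22\,FV,

i.e. a dollar of AI profit in 2040 justifies only about 22 cents of investment today, before even pricing in the risk that it never arrives.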


Exactly. It's amazing how many people seem to be missing this.

I think a big problem is the number of companies rushing to add AI to their products when it's not really that good or that useful at the moment. So there is somewhat of a backlash against it: people hear the headlines about AI changing the world, see that AI has been added to a product they use, test it, see that it's useless, and then wonder what all the fuss is about.

I stay up to date with AI news, I follow it quite closely, and I interact with chatbots somewhat. However, I've yet to find it useful in any real use cases for me, since I don't code and the answers it gives for everything else are usually fairly generic nonsense. But it's amazing just that they can do that, and the improvement is there. Image generation is pretty crazy too, video is getting there, and audio for sure is going to be done, so I wonder what happens to the music industry? Instead of having Spotify, you can generate your own music on a whim. As a music lover, that is going to be very odd.


I didn't read the paper. But I opened it up in a tab, typed "unreliable" in the search box, and got zero results. I'm wondering what words they actually used. I also didn't find "overhyped" or "expensive." I'm sure those words aren't unreasonable as a takeaway from the paper. But they aren't in the paper.


> The hype isn't coming from where AI is today, but where it will be in 2030 and 2040.

Scott McNealy circa 2002[1][2] had some relevant food for thought:

>> But two years ago we were selling at 10 times revenues when we were at $64. At 10 times revenues, to give you a 10-year payback, I have to pay you 100% of revenues for 10 straight years in dividends. That assumes I can get that by my shareholders. That assumes I have zero cost of goods sold, which is very hard for a computer company. That assumes zero expenses, which is really hard with 39,000 employees. That assumes I pay no taxes, which is very hard. And that assumes you pay no taxes on your dividends, which is kind of illegal. And that assumes with zero R&D for the next 10 years, I can maintain the current revenue run rate. Now, having done that, would any of you like to buy my stock at $64? Do you realize how ridiculous those basic assumptions are? You don't need any transparency. You don't need any footnotes. What were you thinking?

>> I was thinking it was at $64, what do I do? I'm here to represent the shareholders. Do I stand up and say, "Sell"? I'd get sued if I said that. Do I stand up and say, "Buy"? Then they say you're [Enron Chairman] Ken Lay. So you just sit there and go, "I'm going to be a bum for the next two years. I'm just going to keep my mouth shut, and I'm not going to predict anything." And that's what I did.

[1] https://archive.is/EZ6vD#selection-2985.79-2985.928

[2] https://archive.is/EZ6vD#selection-3009.1-3009.399


This is exactly the argument we were all having about crypto a few years ago. "It's early days!" they told us for 10 years.

Now crypto is used primarily for crime and speculation, which is exactly what we'll be using GenAI for in a few years, especially as it poisons its own source of training data.


Key difference from my perspective: I never did hear a convincing use case for large-scale use of crypto. It's really easy to make one for AI.


Well… “use it as money for the entire world” is about as large scale of a use case as you’ll ever find. The issue wasn’t that there was no use-case, it’s that none of the crypto technologies were good enough to actually fulfill that use case. Blockchain wasn’t the right answer - it was just a precursor technology flirting with the right concepts.

I’d argue that transformer-driven AI could follow the exact same trajectory: hype cycle driven by people imagining legitimately transformative use cases, which is ultimately popped when we realize the actual here-and-now implementations don’t work that way.

The idea of transferring value over the internet using one common, frictionless, ownerless system isn’t dead; it can’t be, because it’s simply too good of an idea. But nothing actually offers that. Actual artificial intelligence is, likewise, too good of an idea to ever be “dispelled” or dismantled. The only room for doubt is whether the technical designs we have today are actually AI… or just some wannabe scam coins that LOOK like AI.


Crypto (currency) never served a real use case other than pump and dumps or dark web transactions.

GenAI is used by people for all sorts of purposes. Anecdotally, it has largely replaced my own usage of traditional search engines.

This strikes me as a false equivalence the same way comparisons to the dot com bubble do.


I'm using it primarily for code autocomplete, interactive storytelling, and listening to online articles and stories.

I don't think the current generation of AI will go away -- there are too many use cases for it. I think it will take some time as people and companies experiment with it and find what works and what doesn't. Then some use cases will disappear but others will remain -- like how autocorrect came from a previous generation of AI tech.

---

With code autocomplete, it works best when there is repetitive logic, such as doing the same thing for each variable in a class; see the sketch below.
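
A hypothetical illustration (made-up Python, not from any real codebase): once the first assignment is written, the assistant can usually complete the remaining near-identical lines.

    # Repetitive per-field logic: ideal territory for autocomplete.
    class Config:
        def __init__(self, host, port, timeout):
            self.host = host
            self.port = port
            self.timeout = timeout

        def to_dict(self):
            # Each entry mirrors an attribute above; easy to predict.
            return {
                "host": self.host,
                "port": self.port,
                "timeout": self.timeout,
            }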

With interactive storytelling it can be good, but it often says things that don't make sense or fails to pick up on the subtext of what you are saying. It can still be entertaining. -- I still use other forms of entertainment.

On the TTS side, I've been using it for reading articles and stories for a long time. The current-gen models are very good quality-wise, but need work on accuracy and artefacts in generation.


I have found hallucinations to be bad enough that I can't use it as a search engine. Even Google's AI search is awful. How do you deal with its likelihood of being convincingly wrong, even if it's small?


There’s a time and a place for checking its work.

Depending on the search, I either don’t care, or I use something like perplexity, which includes its references.


> The hype isn't coming from where AI is today, but where it will be in 2030 and 2040.

Yeah, but the investors they're talking to aren't thinking about 2030-2040. They are thinking about 2025. If they don't get the 2000% profits they are imagining now, they will declare it a FLOP!!!! and drop it, doing harm to the overall development of a technology that does have its merits but is propped up to a position it's not ready for by a long shot.

With this kind of pressure a slow slope is not enough to keep investors happy.

These guys are all hitting themselves in the head every day that they didn't buy bitcoins when they cost $0.01 and they are constantly telling themselves the next big thing will make them rich beyond their wildest dreams. They are ruining good tech this way. Metaverse was pretty decent too, it was just not for everyone yet (and it won't be for a loooong time), but it does have its niche uses.


No, the hype is coming from where people are saying it will be in 2030 and 2040.


- The hype isn't coming from where Bitcoin is today, but where it will be in 2030 and 2040.

I'm among the AI true believers but the change above made me have second thoughts.


This sort of argument was used for web3/crypto, VR, quantum computing, heck, even nuclear fusion.

But the first half of that list vs. the rest did not receive the same proportional funds, at least from the tech sphere, from what we can see.

The core difference I can see now is that the SOTA is captivating the crowd enough, whether from demos alone or from personal tests.


There has been no convincing argument by anyone that this will change in a way that will drive the value that the grifters in Silicon Valley are promising. It's tautological to say that people making bets are making expected-value estimations about the future, and Goldman Sachs is doing just that.


“All you need is irrational faith and you too can be unable to distinguish between hype and overhype!”


Goldman Sachs has no reason to disseminate valuable market insights and analysis for free. All financial thinkpieces by investment firms should be disregarded as economic manipulation and propaganda


They do have an interest in accurate pricing in markets (as does any investor). Sometimes something can be both true and have a portion of self-interest. An organization of that size might know something you don't.

Do you have information they don't? Can you show that LLM tools are worthy of the hype (investment in particular), are consistent, and reliable? That's certainly not my experience.

It's worth saying the emperor has no clothes, especially if we want to avoid the bubble pop, which admittedly GS has an interest in.


They don't have an incentive to share their view of reality until after they make investments that would capitalize on their belief in that view of reality. After they take those positions though, it is in their best interest to inform everyone about what they view to be real. It would seem to be risky for them to try to take positions based on some false narrative on some aspect of the market because if someone else finds and publishes the true narrative, and that becomes the prevailing sentiment, they would be harmed.


Or maybe they do believe this, and entered into trades to express this sentiment. Now, they need the market to correct to what (they believe) is accurate, so they can take profits and free up their capital again.


Incomplete understanding of GS’s business and why someone might want to do business with them.


I think you are vastly misunderstanding the role of investments banks. I think you might be confusing them with Hedge Funds.


do you mean something like generalized front running, where they see imminent happenings, get there first, and in this case dissuade competition?


AI is enabling a new generation of exploits and hacks that will force tech to get a lot simpler, more human readable, and more privacy oriented.

Everything is going to need to be refactored for a world where bad actors have truly unlimited time and attention to invest in identifying privacy and security vulnerabilities.


That would be a great change.


News flash, stock related company says thing to influence market


news flash, 155 year old company (that has never meaningfully innovated anything and is run by a 62 year old man with zero experience outside finance) is resistant and pessimistic about technology


Wow.

I did not imagine I would ever agree with Goldman Sachs on anything in my life.

Now, I do think it has its uses, but it's once again way overhyped, like all the hypes that came before it. As always, there is a certain use to it, but it's way overblown.


I think it's great in specific use cases, like the meeting discussion overview in Teams. The problems come when you think you can build one AI that can do anything and everything, or tackle problems with a wide scope, or start using the AI as an infallible source of information. It seems like a lot of people were expecting the latter to be true.


> like the meeting discussion overview in Teams

Amusingly, Microsoft had an autosummarize feature in Word as far back as, like, 1997. The fact that this appears to be the rare feature that actually got removed from Word could be an indication that this use case might not be as compelling to users as one might think.


I think it was more because it was no good at that time and couldn't be relied upon. For all their flaws, summarisation is something LLMs do pretty well.

I think the chat feature is also really helpful, the "tell me more about this aspect" kind of thing. That really helps make the summary more tailored to your needs.


Wait until the moment that they can monetize it/save a bunch of money. Then see them dropping 5000-10000 employees globally because 'shareholders are pressing us to...'

Some banks that develop their own applications will start using it heavily when they realize that their in-house trading platform can execute orders 0.0001 seconds faster (or something insanely small).

I strongly believe that the only thing that will hold AI/LLMs back is regulation, and nothing else.


Here's the thing though: You don't agree. Not really.

Because Goldman Sachs have been using AI for well over a decade. They're saying one (popular right now) thing, and very much doing another (quietly).

Their employees are capitalizing on this latest AI boom [1] to make their efficiency go way up, radically changing their processes... And a lot of people who were slow to grasp what AI can do for their workflow are being made redundant [2].

I can only imagine what those people laid off are thinking of this statement.

1 - https://www.wsj.com/articles/goldman-sachs-deploys-its-first...

2 - https://duckduckgo.com/?q=goldman+sachs+layoffs


I like to think it's nice to see Goldman Sachs agree with me for a change.

I came to this conclusion a while ago :-P


I strongly believe that AI is the next big thing, but there's no point in discussing it, because there's no empirical evidence either way. There was no evidence that the internet would take off, and there was no evidence that crypto wouldn't.


But unlike “AI” and unlike “Crypto”, the internet wasn't initially hyped through the roof, people had no expectations of it, and it came from academia and the military, not from “OMG! I'm a buzzword-spouting entrepreneur and I need me some investment dollars so I can grow and cash out and retire.”


Ever heard of the dotcom bubble?


Many years after the technology was widespread, yes. Unlike “Crypto” and “AI” which are all underpinned by short-term “get rich quick” thinking.


It’s overhyped like the internet was in 1995.


It's overhyped like the time everyone told us cryptocurrencies in 2015 were like the internet in 1995.


Have they? AI, compared to cryptocurrency, has already been deployed and battle-tested on billions of users: think of BERT used on Google queries, all the models used for moderation and recommendation on social media (probably the whole reason TikTok is so popular), translation, captioning, etc. It really seems quite different.


How is that different from bitcoin being used as currency in many places all over the world? You used to be able to go and buy a pizza using a bitcoin transaction; then that regressed, but at the time it looked like bitcoin might actually be the future.

Turns out there were some problems that crypto currencies hadn't solved yet, and those still aren't solved. At the time people expected the tech to advance to solve all those things, but they failed to do so.

Same thing applies to current day AI, in order to revolutionize the world like the hype expects they need more things that are completely unknown whether we are actually able to solve or not.

Until those things are solved, generative AI is just new search, translation, and some funny pictures: useful, but not that world-changing. On the internet in 1995 you could already connect and talk to people from all over the world in social networks, do banking, shopping, etc.; all the technical problems were already solved, people just hadn't caught up to it yet.

The internet just needed social changes to happen; generative AI still requires technological breakthroughs.


Bitcoin has never reached the proliferation of AI, even though some have used it to pay for pizzas. It's 2024, and with AI you can already do semantic search, moderation, translation, captioning, TTS, STT, context-aware grammar checking, LLMs, audio/image classification, smart editing tools, etc. So yeah, it's quite different compared to Bitcoin.


And none of those changes the everyday life of most people. Bitcoin revolutionizing banking would, as would generative AI if some new breakthrough was made, but currently the biggest thing it does for most people is that it lets them cheat on homework etc.


This is incredibly delusional ahaha

Just look at how popular TikTok is (it wouldn't be without its powerful recommendation model), or the fact that the majority of the world's population has to rely on translation models and subtitles to translate the vast amount of English text online. Bitcoin is never going to "revolutionize" banking, it's really just a downgrade, no privacy, no fraud protection.


The recommendation AI portion isn't much related to generative AI, where the former connects existing data points and the latter creates new data points.

This TikTok ads team job post[0] does, however, mention "AI-powered smart video generation (we are also exploring AIGC)" (AIGC = AI-generated content) which implies to me that they are looking to genAI to make new content.

[0] https://careers.tiktok.com/position/7189645714418141499/deta...


TikTok isn't generative AI; I don't think you understand this topic. The topic is generative AI, not recommendation AI. Recommendation AI doesn't require much investment; the generative AI that companies have poured many billions into over the past few years is the topic.


Found the CCP plant. This example makes AI a net negative.


TikTok may be a net negative, but it's clear how it has changed the world for youth who are addicted to AI suggestion algorithms.


Generative AI will take my spreadsheet and reorganize it based on a prompt I give it in natural language

I’ve still yet to discover a use for crypto other than waiting for it to increase in value and selling it to the next person.


Right, for you it might be more useful. But people still make payments with bitcoin across the world, into countries where that would be hard to do otherwise; I'd say that is a more world-changing use case than an error-prone document reformatter.

I am not saying bitcoin is more important overall, just that it is fair to put the two on similar levels today. I believe generative AI will ultimately have important use cases, but as things stand today that is far from a given.

It is possible that companies will spend hundreds of billions of dollars on electricity and hardware just to come up with stuff that is barely better than what we have today, and then it will turn out to be a massive waste poured into overhyped technology.

It could also be that it gets good enough to do a lot of useful stuff worth all that effort, but you can't count on that happening. The main difference is that today we know cryptocurrencies were a dead end; we don't know that about transformer-based generative AI, so it makes sense to make that bet, but it could still be a massively overhyped dead end.


Cryptocurrency is a use case, AI is a technology. It’s odd to compare them.

AI is more comparable to the GUI or the internet or cell service than it is to cryptocurrency. Blockchain, if you must, but I don't think anyone could say AI has no more relevance than blockchain.


Generative AI is also a use case; this isn't about AI in general, just the hype around the generative AI that we see today.


The internet in 1995 had already solved all the problems required to revolutionize the world, with cheap fiber optics being laid everywhere; the world just had to catch up to it.

The same isn't true for generative AI today. In order for it to revolutionize the world the way the internet did, it still needs some unknown pieces, which you are expecting to arrive but which might not. The internet, however, was already completely solved when it was hyped; very different.


I mean I was there, and it feels similar. The internet was “solved” in the way that transformers “solve” AI. And there were so many unknown pieces for internet business and tech.

The difference between internet and AI is one is mature and the other is nascent. That’s it.


Exactly. I’ve thought since ChatGPT first took the world by storm that the directional promises of AI were completely correct but the hardware isn’t at all there yet. The dot com bubble crashed yet the Internet is more important than ever today.


And we're adding browser APIs for it! What could go wrong?


Are we?



It's 100% stock market manipulation; nothing more and nothing less.


It’s healthy for GS to express a contrarian opinion.

If GPT 5 doesn’t meet high expectations, another winter could soon arrive.


If the success of a single company -- indeed, a single product -- is really that pivotal, the industry is already on the brink of collapse.


A winter where AI will continue to be used for semantic search, moderation, translation, captioning, TTS, STT, context-aware grammar checking, LLMs, audio/image classification.

An AI winter where AI will be used everywhere.


It's the difference between big, interesting, investment-worthy AI that stands alone and small, uninteresting, practical business features that happen to use AI. The real bellwether for me will be whether or not Apple, after releasing its AI features, ever talks about them as a main point at a WWDC again, or if it just gets pushed into "here's something new Photos can do (with AI)" like it has for the last x many years.


People always conclude I hate AI. But it’s only a matter of ROI. The amount of capital investment must be matched with increased sales or reduced expenses. If reduced expenses come via job elimination, it doesn’t last long in a consumer society. Increased sales through new product categories are an option, but costs will need to come down and reliability will need to increase. Hence the current hopes.


I didn't infer anything about your opinion on AI, I just pointed out that the AI winter will, funnily enough, still have AI everywhere, if it happens.


The last AI winter also had AI everywhere, just not AI research everywhere.


What AIs are you referring to? "If ... Else" type of AI?


The statistical-model type of AI/machine learning: things like decision trees, decision forests, etc. Those have been everywhere for a long time now.


Another winter is inevitable, but how soon will it come?


It's not inevitable


AI is fundamental research, it is inevitable that funding for AI research will dry up at some point in the future just like all such research. AI winter doesn't mean AI isn't used, it refers to AI research funding shrinking back to the same level as other fundamental research fields.


Really hard to disagree with Goldman Sachs on this one. There's no reason to believe in all this hype unless you are one of the many who are surfing it, and making money off it. Besides all those chatbots, there are not a lot of AI products that are really useful to the everyday user.


Goldman Sachs is overhyped, overpaid, and unreliable.


Those are not mutually exclusive.


Just remember: the only reason they threw a trillion dollars at genAI is because they thought they could lay off their entire creative staff



Actual report: https://www.goldmansachs.com/intelligence/pages/gs-research/...

Right in the preamble

> ...despite these concerns and constraints, we still see room for the AI theme to run, either because AI starts to deliver on its promise, or because bubbles take a long time to burst.


Overhyped? Sure. Expensive? Definitely. Unreliable? Sometimes.

So many HNers love to be contrarian about anything resembling a hype train, and feel validated when a report like this comes out. I think the reality is that people moving forward with integrating AI into various aspects of their business and reaping the rewards aren't here making snarky "I told you so" comments, which seems to be every other comment on this and the other threads about the same article.


But I've yet to see the "rewards". I see tech companies laying off staff with the expectation of AI taking up the slack while every piece of software I use gets shittier. Show me the bottom-line improvement, or better tools, better art, better music... It just doesn't live up to the hype.


Others can speak to the bottom line improvement. I'm sure for investors the "return" looks a lot different, but for my own uses various AI tools have been a major boon to my productivity.


Such as?

I tried Copilot once and it was pretty keen on sneaking itty-bitty bugs into my programs very frequently.

It is fine for certain boilerplate, but it wasn’t “great” for anything that templates weren’t already great for.


This is a classic example of the y-intercept that the parent comments mean.


"Sometimes unreliable" is a very funny qualifier. If something were consistently unreliable, then unreliable wouldn't be the right term.


Sometimes is the qualifier. What are you using it for and how are you using it? The barrier to consistent reliability with these tools is definitely above most organizations' grasp at this point, but that'll only improve with time.


If something is reliable, it is, by definition, consistently reliable.

If something is unreliable, it is, by definition, inconsistently reliable.


It's not overhyped. It's correctly hyped because it's hard to estimate the upper bounds of the value AI can bring.


HN is overwhelmingly pro-AI and on the LLM hype train. Silicon Valley and YC are largely only investing in that sector right now. Pretending there is some huge contrarian group here is absurd


This is a really common problem online, and on HN too: some people think the general consensus is really biased in one direction, and some people (reading the same material) think the opposite is true.

I see people constantly ragging on LLMs as being infringing and bad, and wanting to see them fail.


Are they? Virtually every comment on this thread, and on the two others from the past week, has a pretty negative sentiment overall, if not more or less saying "I told you so".


I mean, I think people are getting a little tired of the hype; not sure how many true believers there are at this point.


Those losing faith during an (arguable) two-year lull are not the believers we need anyway. Some of us were believers even ten years ago.


Roko’s Basilisk thanks you for your devotion.


You're talking like AI is a religion


It’s useful for turning unstructured data into structured data, and for other similar repetitive tasks; a sketch of what I mean is below.

Not for chatting with and asking it to write something open-ended that even you don’t know how to do.
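
For example, a minimal sketch of the extraction pattern (assuming the OpenAI Python client; the model name, prompt, and fields are illustrative, not from this thread):

    # Hypothetical sketch: pull structured fields out of free text.
    import json
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    text = "Acme Corp raised $12M in a Series A led by Foo Ventures on May 3."
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{
            "role": "user",
            "content": "Return only JSON with keys company, amount, round, "
                       "lead_investor, date. Text: " + text,
        }],
    )
    # LLM output is not guaranteed to be valid JSON; validate before trusting.
    record = json.loads(resp.choices[0].message.content)
    print(record)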


They're early to this opinion; people will start to realize this en masse soon.

Digital coding assistants are here to stay, but not at the $10/month price point. Maybe $10/year.


I would easily pay $100/mo for copilot at its current level of function


What current AI platforms are missing is a friendly way to use their APIs individually. It should be easier for me to use my own ChatGPT or Claude account on another app/website. They could get much bigger revenue with that too. But none of them are really willing to share the pie.


I would have thought GS would be wise enough to not bully AI now in its early days just in case it ends up taking over the world.


I'm not going to read the whole paper, but I didn't find anything in it that used the words "overhyped," "wildly expensive," or "unreliable."

Unreliable for what? There are so many different ways in which people use these services. The way I use them, they're not unreliable, because I don't use these services in a way that would be unreliable.

Instead, the broader point of the conversation, that the huge investment in AI might not pay off right away, seems very plausible. I pay $20 per month for one of these services. And that $20 service is maybe the most expensive to provide of anything I have paid $20 monthly for, by a long shot.

And this is important for the audience of Goldman Sachs. Maybe it's not so important for the typical reader of Hacker News. Who reads Goldman Sachs papers?


Quibbling a bit:

>questions whether generative AI will ever become the transformative technology that Silicon Valley and large portions of the stock market are currently betting on...

Who really cares if the AI is generative or not? AI is pretty much bound to be a transformative technology.

>higher productivity (which necessarily means automation, layoffs, lower labor costs...

Nah. It can also mean more stuff produced. And probably will.

That said, current asset prices may well be a bit inflated. And some startup could come up with a better algo for self-improving AGI, rendering the other companies not worth much. A bit like how Google came along and rendered the other search companies not worth much.


HNers are wrong about this just like they're wrong about Tesla, and the proof is outside on the road.


Say what you will about GS, they are correct here.


Why do people not understand that news like this comes out of the woodwork when someone has a big short position in the stock market?


It doesn't sound like they are short, given they say bubbles may go on for a considerable time, etc. Goldman and company make most of their money from client fees, not from punting the market.


Goldman Sachs itself is overhyped, unreliable and expensive.


Maybe so, but please don't post unsubstantive comments to Hacker News.


Also Goldman Sachs: uses quant algorithms for high speed trading at scale.

What a joke.


There is no contradiction in that.

I'd like to see the quants that are using ChatGPT for HFT. Apples and oranges.


The existence of sophisticated algorithms does not necessarily imply the existence of machine learning/"AI". Despite Rentech executing all their trades via algorithmic models, Jim Simons once said that they didn't use any machine learning.



Interesting that he does say that in that somewhat recent interview. I forget where exactly I heard that he said that Rentech didn't use machine learning, but it may have been from an older interview.



