Big Tech Fails to Convince Wall Street That AI Is Paying Off (yahoo.com)
28 points by mgh2 43 days ago | 14 comments



There is still no business model. $20 for end-user subscriptions isn’t going to cut it, and cost per token is still falling, so there is less and less money to be made on the commercial side as time goes on. There’s still no killer app either.

I’m not a total hater; I did have a “wow” moment with LLMs and do see some value in them. But there is an ongoing mass psychosis that we’re about to enter some kind of age of digital hyperintelligence when we’re nowhere near it.


It's been kind of enlightening seeing leadership at $BIGCORP push AI coding solutions like they're guaranteed to be a 10x increase in velocity in every context. Feedback from ICs isn't wholly negative - there are definitely situations where it can be useful, like quickly grokking common applications of common tools, or semi-intelligently applying a diff pattern that is more than just a regex - but there's a complete unwillingness to hear any feedback that isn't "this tech is a total paradigm shift that allows us to finally get rid of all these pesky and expensive developers". Reports of, for instance, the introduction of subtle bugs that take extended amounts of time to understand and fix, are met with outright hostility and accusations of incompetence. When a complex defect or escalation drags on, a common question is "why haven't you asked AI to fix it yet", betraying a total misunderstanding of the sorts of tasks the tool is applicable to. The kool-aid is not so much drunk as rectally infused. If valuations are based on this sort of outlook, whew, this market is totally fucked.
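(A minimal sketch of the "more than just a regex" case, in hypothetical Python not taken from the comment: a rename that has to respect syntax, which a plain text substitution would botch.)

    # Hypothetical example: rename the variable `rate` without touching the
    # string literal or the attribute access that contain the same token.
    import ast

    source = """\
    rate = 0.05
    print("rate applied:", rate)
    config.rate = rate
    """

    class RenameRate(ast.NodeTransformer):
        def visit_Name(self, node):
            # Only bare identifiers are Name nodes; string literals and the
            # attribute name in `config.rate` are different node types.
            if node.id == "rate":
                node.id = "interest_rate"
            return node

    print(ast.unparse(RenameRate().visit(ast.parse(source))))
    # interest_rate = 0.05
    # print('rate applied:', interest_rate)
    # config.rate = interest_rate

A bare re.sub(r"rate", "interest_rate", source) would also rewrite the string literal and config.rate, which is exactly the difference being described.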


I think a weird irony is that the model's inability to know when its response is good is both the reason the output is often not useful, and the reason that, when it is very useful, the vendors can't capture the value efficiently.

Like, I was encouraged to use AI assistants more after a colleague saved a bunch of time when Copilot (IIRC) immediately identified an obscure issue during debugging. In that case we probably should have been willing to pay a decent amount for that one valuable response -- it may have saved a significant amount of engineer time. But I've also had Copilot give me stuff that isn't even syntactically correct, and had Copilot Chat make up a newer version of a language and tell me to use it. Cases where it's a waste of time are worth negative dollars.


Sounds like a good ol’ fashioned case of confirmation bias. ‘Look at this one good suggestion the AI made! Wow!’… all while ignoring the many unhelpful outputs.


I don't think it's just confirmation bias where we ignore some bad results (which presumes we know up front that they're bad) -- I think because these models are specifically RLHFed to learn what we think looks good, you can't judge quality just by looking at the outputs and deciding whether they seem plausible. You actually have to do the follow-up of seeing whether they're correct/useful, which may be much more involved.

E.g. to judge the quality of a particular coding example, one may need to have or create a project in which that code would be used, install the actual libraries it invokes, create data for it to operate on, etc. In cases where the assistant was giving me basically wrong information about Scala 3 metaprogramming capabilities, I could only determine it was BS by actually trying to compile the program (in the context of a project with an sbt config that pulls in the relevant libraries, sets appropriate flags, etc.).

But of course the model doesn't do this, the high-level exec doesn't do this, and so "these examples look great!" can be an honest evaluation, based on the inability to actually meaningfully validate.
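(A minimal sketch of the cheapest layer of that validation, using a hypothetical Python snippet rather than the Scala case above: merely compiling an assistant's output already catches the "not even syntactically correct" failures mentioned upthread, while anything subtler still needs the real project, dependencies, and data.)

    # Hypothetical assistant output with a missing closing paren.
    snippet = "def apply_discount(price, rate):\n    return price * (1 - rate"

    try:
        # compile() only proves the code parses; semantic validation still
        # requires the real project, its libraries, and real data.
        compile(snippet, "<assistant-suggestion>", "exec")
        print("parses, but plausible-looking is not the same as correct")
    except SyntaxError as err:
        print(f"rejected before anyone wasted review time: {err}")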



Love the example of the guy using an LLM all day to make a simple CRUD app. Basic auto-generated CRUD apps have existed forever. I still remember showing my boss my Django admin built in a day back in 2005. He told me to tell no one about this because he was afraid he would have to lay off devs.
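(For context, a minimal sketch of the auto-generated CRUD being described, in today's Django syntax; the model and field names are hypothetical, not from the comment. Registering a model is all it takes to get create/read/update/delete screens.)

    from django.contrib import admin
    from django.db import models

    class Invoice(models.Model):
        customer = models.CharField(max_length=200)
        amount = models.DecimalField(max_digits=10, decimal_places=2)
        paid = models.BooleanField(default=False)

    @admin.register(Invoice)
    class InvoiceAdmin(admin.ModelAdmin):
        list_display = ("customer", "amount", "paid")  # columns on the list page
        list_filter = ("paid",)                        # sidebar filter, zero extra code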


Someone could replace the Bloomberg Terminal with a single dial that includes all three investor sentiments in one convenient interface: "Fund radical startups", "Support the Incumbents" and "Invest in War"


Investors who bet on OpenAI wrappers with no moat are outraged at losing money.


Who could have possibly seen this coming after Theranos and Enron and Worldcom and FTX and a few thousand bankrupt blockchain businesses?


This is the usual market journalism where you see prices are down over the previous few days and guess at a reason to ascribe to it, e.g. "Stocks down because AI!" But the three stocks mentioned are also up about 100% since the start of 2023, to which you could just as well ascribe "Stocks up because AI!"

Which is just to say that journos seeing the price up or down a bit is not a very reliable accounting of whether AI is paying off or not.


I took a look at the offerings of Suno and Udio for music generation. They are pretty smart and give quite good results. But, whilst numbers are hard to verify, there are many tens of thousands of tracks added to Spotify every day. Standing out in that crowd requires name recognition, live touring, a visual image, etc. I can't see how AI music can really make money. I don't feel a track for grandma's birthday will pay the rent.


They’re spamming music platforms with a high volume of songs and as many tags as they can, trying to generate revenue from listens.

Here’s an example: I listen to a band called Voyager. In Tidal, sometimes I get recommended some random AI-generated music (think ambient sounds and lullabies) because the producers are tagging several “artists” in each track, and one of them is also called Voyager (Tidal has some issues with differentiating artists with the same name).

One actual example: an album called “Loyal Listener: Chill Music for Dogs”. Artists: Voyager, Dog Radio 1, Easy Sunday Listening. All the tracks in the album are called things like “Dogs’ Quiet Watch” or “Listener’s Gentle Bark”.

I hope streaming services force artists to tag AI music as such so we can filter it out.


First bad result[1] in a series of great ones: see, I told you it's a bubble.

[1] Not bad results at all; they simply weren't as surprising as Wall St. expected. They're still better than initially predicted.



