
ChatGPT does a good job of imitating the average crypto influencer. They don’t know what they’re saying either, and 99% of crypto investors would be thrilled by the prospect of a “pivotal Phase 3 Bitcoin ETF trial” that will “drive trust value realization”. Sounds great, can’t miss out on that!

The hallucinations are simply a mirror of a community that thrives on this nonsense. When nothing is real, you can’t blame the LLM for not figuring it out.




This made me chuckle. You make an interesting point: if LLMs are copying hallucinations, then those hallucinations are not in fact hallucinations.


Simpler than that: it's all hallucinations; some of them just happen to be ones humans approve of.

It's kind of like a manufacturer of Ouija boards promising that they'll fix the "channeling the wrong spirits from beyond the mortal plane" problem. It falsely suggests that "normal" output is fundamentally different.


This is a great insight, and fascinating to me as well. What would the solution even be? It does follow logically: since the earliest days of the internet, huge swaths of wrong, fraudulent, or misleading information have plagued it, and you'd usually have been wise to check your sources before trusting anything you read online. Then we had these models ingest the entire web, so we shouldn't be surprised at how often they are confidently wrong.


I guess reasoning and healthy self-doubt need to be built into the system. Reasoning already seems like 2025's candidate for what the large labs will be zeroing in on.


This is the interesting part of the experiment. Since these LLMs are general and not specifically trained on historical (and current) stock prices and (business) news stories, it isn't a measure of how good they could be today.


My first thought after seeing this post was that it's a real-world eval. We are running out of evals lately (the ARC-AGI test, then the sudden jump on FrontierMath, etc.), so it's good to have real-world tests like this that show how far we are.


If you believe (as many HNers do, although certainly not me) that LLMs have intelligence and awareness then you necessarily must also believe that the LLM is lying (call it hallucinating if you want).


Intelligence is a prerequisite for lying, but its foundation is morality and agency.

To lie, you have to know that you are not telling the truth, and arguably have to be able to be held accountable for that action.

It's easy to babble a series of untruths, but lying requires intention, which requires an entity that can be recognized as having intentions.

I'd argue that ChatGPT's lack of a cohesive self prevents it from lying, no matter how many untruths it creates.


If you ask ChatGPT to tell the story of a liar, it can. So while it doesn't have a motivated self to lie for, it can imagine a motivated other to project the lie onto.


Reminds me of a recent paper which found that LLMs were scheming to meet certain goals, and that was a scientific paper from a big lab. Are you referring to that?

Words and their historical contexts aside, systems based on optimization can take actions that look like lying to us. When DeepMind's agents played those Atari games, they started cheating, but that was just optimization, wasn't it? Similarly, when a language-based agent optimizes, what we perceive may look like scheming or lying.

I will start believing that an LLM is self-aware when a top lab like DeepMind or Anthropic publishes such a claim in a peer-reviewed journal. Until then, it's just matrix multiplication to me.


> [paper claimed] LLMs are scheming

IMO a much better framing is that the system was able to autocomplete stories/play-scripts. The document was already set up to contain a character that was a smart computer program which coincidentally had the same name.

Then humans trick themselves into thinking the puppet-play is a conversation with the author.


When I'd watch the financial news on TV, they would always bring on a "technical analyst," show a graph of the stock price, hand-draw some lines on it, and then spew out various technical terms guaranteed to impress.

Me, I always regarded technical analysis as drawing pictures in clouds.

If any of those analysts were worth spit, they'd be working for a hedge fund, not the network.


> drawing pictures in clouds.

Well phrased, and it's how the stock market works, not only for technical analysts but for everyone else playing: make a story in your head, place your bets, majority rules.

Some even believe that's how reality works in general. Sometimes belief or need could be a factor[0].

[0] https://www.guinnessworldrecords.com/news/2012/9/norwegian-f...


Over the long term, the stock market reflects business reality. But in the short term, it's chaos.


The former is a belief. The market always reflects the imagined realities of those investing; we assume that business reality catches up with them, and it mostly does, but not always within a predictable time frame.


> The former is a belief

It's based on the Law of Supply & Demand, which is always in play.


Always in play for goods and services, but this is a cryptocurrency: its supply is mathematically limited, and its value is fully market-dependent, determined only by players in the market.


A huge short-term influx of free capital can shape that long-term business reality, in both positive and negative ways, of course.


There is something to technical analysis. But you do need to approach it rationally rather than by performing magical rituals.

The market is made of a finite and sometimes very small number of participants who may have their own reasons for buying and selling, unrelated to company performance. Figuring out what they will do is the basis.

Maybe Bob is looking to sell a lot to free up cash for a private jet. Maybe Alice buys the same day every month, like clockwork, when she gets her paycheck. Maybe Charlie thinks the stock can't go above $50 and will take profits at $49. Maybe Debbie regrets not buying and is likely to FOMO-buy soon.

You probably can't figure this out trader by trader, but you can in aggregate.
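To make the "in aggregate" point concrete, here's a toy sketch (invented data, not a real strategy): you can't see any single Alice's paycheck, but if many traders buy on payday, the pattern shows up as a recurring day-of-month volume spike.

```python
# Toy sketch: detect a recurring day-of-month volume spike from
# aggregate data. All numbers here are made up for illustration.
from statistics import mean

# (day_of_month, volume) observations over six months: flat baseline...
observations = [(d, 100) for d in range(1, 29)] * 6
# ...plus simulated paycheck-day buying on the 1st of each month.
observations += [(1, 400)] * 6

# group volumes by day of month
by_day = {}
for day, vol in observations:
    by_day.setdefault(day, []).append(vol)

avg = {day: mean(vols) for day, vols in by_day.items()}
baseline = mean(avg.values())

# flag days whose average volume is well above the overall baseline
spikes = [day for day, v in avg.items() if v > 1.5 * baseline]
print(spikes)  # the 1st of the month stands out
```

No individual trader is identified; the behavior is only visible in the aggregated series, which is the gist of the comment above.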


At the end of the day, the stock market is a consensus model spanning two sometimes contradictory inputs: sentiment and analysis. If your conclusions about a stock agree with the market, you profit. If you can guess what the market will decide before it has decided, you profit more.

All those lines do actually mean something, so long as the market agrees on how to draw them.

FWIW these bots aren't doing the lines stuff; they are purely sentiment traders.
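For readers unfamiliar with the term, a "purely sentiment" bot can be caricatured in a few lines. This is a deliberately naive sketch (toy word lists, no real NLP, function names are mine), not how the bots in the post actually work:

```python
# Caricature of a sentiment trader: score headlines against toy word
# lists and emit a position. Word lists and logic are illustrative only.
POSITIVE = {"beats", "surge", "record", "approval"}
NEGATIVE = {"miss", "fraud", "lawsuit", "selloff"}

def sentiment_score(headlines):
    """Net count of positive minus negative words across headlines."""
    score = 0
    for h in headlines:
        words = set(h.lower().split())
        score += len(words & POSITIVE) - len(words & NEGATIVE)
    return score

def signal(headlines):
    """Map the net score to a crude position."""
    s = sentiment_score(headlines)
    return "long" if s > 0 else "short" if s < 0 else "flat"

print(signal(["Earnings beats estimates", "ETF approval expected"]))  # long
```

Note there is no price chart anywhere in this loop: no trend lines, no support levels, just text in and a position out.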



