Unfortunately, some companies and content creators still post there. It's a useful way to find out what a subject is about and to identify external sources, so you can participate in less toxic discussions, like those on HN.
I initially got excited about this (and know people who would really love it to exist), but after a bit of digging I am convinced it is likely a scam. This report explains why: https://whitediamondresearch.com/research/know-labs-is-an-ob....
You could also look at the stock economics to see that this company has not behaved like one with a bright future.
Yeah, the stock price made me suspicious, but I figured it might be due to the not-so-great accuracy plus a lack of moat: even if it worked, you'd see a cheap copy on AliExpress within two weeks. I hadn't seen the report before, though, so yeah, I agree it smells like a scam, especially when you see that the CEO dabbles in NFTs.
ChatGPT has one trade that is guaranteed to be bad. I'm not saying unprofitable, just bad. GBTC is the bitcoin ETF with the biggest expense ratio: 1.5%. If you want to bet on bitcoin, a better choice would be BITB (0.20%) or BTC (0.15%).
Also, the reasoning is partially a hallucination - "The holding period of 9 months aligns with the expected completion of Grayscale's pivotal Phase 3 Bitcoin ETF trial, a major catalyst for unlocking investor demand and driving trust value realization."
There is no such thing as a "holding period", nor are they doing a "Phase 3 Bitcoin ETF trial". It's possible the "Phase 3" thing is picked up from news about a drug company.
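To put the expense-ratio gap in numbers, here's a rough sketch using the ratios cited above; it assumes a flat $10,000 position, pro-rates the annual fee over nine months, and ignores tracking error and trading costs:

```python
# Approximate fee drag over a 9-month holding period for a $10,000 position.
# Expense ratios are annualized; a simple pro-rata estimate is close enough
# to show the gap between the funds.

position = 10_000.00
months = 9

for ticker, annual_fee in [("GBTC", 0.015), ("BITB", 0.0020), ("BTC", 0.0015)]:
    cost = position * annual_fee * months / 12
    print(f"{ticker}: ~${cost:.2f} in fees over {months} months")
# GBTC: ~$112.50, BITB: ~$15.00, BTC: ~$11.25
```

Roughly $100 of guaranteed drag versus the cheaper funds before the trade even moves.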
ChatGPT does a good job of imitating the average crypto influencer. They don’t know what they’re saying either, and 99% of crypto investors would be thrilled by the prospect of a “pivotal Phase 3 Bitcoin ETF trial” that will “drive trust value realization”. Sounds great, can’t miss out on that!
The hallucinations are simply a mirror of a community that thrives on this nonsense. When nothing is real, you can’t blame the LLM for not figuring it out.
Simpler than that: it's all hallucinations; some of them just happen to be ones humans approve of.
It's kind of like a manufacturer of Ouija boards promising that they'll fix the "channeling the wrong spirits from beyond the mortal plane" problem. It falsely suggests that "normal" output is fundamentally different.
This is a great insight and fascinating to me as well. What even is the solution, though? It does seem to follow logically: since the earliest days of the internet, huge swaths of wrong, fraudulent, or misleading info have plagued it, and you'd usually have been wise to check your sources before trusting anything you read online. Then we had these models ingest the entire web, so we shouldn't be surprised at how often they are confidently wrong.
I guess reasoning and healthy self-doubt need to be built into the system. Reasoning already seems like 2025's candidate for what the large labs will be zeroing in on.
This is the interesting part of the experiment. Since these LLMs are general and not specifically trained on historical (and current) stock prices and (business) news stories, it isn't a measure of how good they could be today.
My first thought after seeing this post was that it's a real-world eval. We have been running out of evals lately (the ARC-AGI test, then the sudden jump on FrontierMath, etc.), so it's good to have real-world tests like this that show how far we are.
If you believe (as many HNers do, although certainly not me) that LLMs have intelligence and awareness then you necessarily must also believe that the LLM is lying (call it hallucinating if you want).
If you ask ChatGPT to tell the story of a liar, it is able to do so. So while it doesn't have a motivated self to lie for, it can imagine a motivated other to project the lie onto.
Reminds me of a recent paper where they found LLMs scheming to meet certain goals, and that was a scientific paper done by a big lab. Are you referring to that context?
Words and their historical contexts aside, systems based on optimization can take actions that appear to us like intentional lying. When DeepMind's agents used to play those Atari games, they started cheating, but that was just optimization, wasn't it? Similarly, when a language-based agent optimizes, what we perceive may look like scheming/lying.
I will start believing that an LLM is self-aware when a top lab like DeepMind or Anthropic publishes such a paper in a peer-reviewed journal. Otherwise, it's just matrix multiplication to me so far.
IMO a much better framing is that the system was able to autocomplete stories/play-scripts. The document was already set up to contain a character that was a smart computer program with coincidentally the same name.
Then humans trick themselves into thinking the puppet-play is a conversation with the author.
When I'd watch the financial news on TV, they would always bring on the "technical analyst", show a graph of the stock price, hand-draw some lines on it, and then spew out various technical terms guaranteed to impress.
Me, I always regarded technical analysis as drawing pictures in clouds.
If any of those analysts were worth spit, they'd be working for a hedge fund, not the network.
Well phrased, and it's how the stock market works, not only for technical analysts but for everyone else playing: make a story in your head, place your bets, majority rules.
Some even believe that's how reality works in general. Sometimes belief or need could be a factor[0].
The former is a belief. It always reflects the imagined realities of those investing--we assume that business reality catches up with them, and it mostly does but not always within a predictable time frame.
Always in play for goods and services, but this is a cryptocurrency: its supply is mathematically limited, and its value is fully market-dependent, determined only by players on the market.
There is something to technical analysis. But you do need to approach it rationally rather than by performing magical rituals.
The markets are made of a finite and sometimes very small number of participants that may have their own reasons for buying and selling unrelated to company performance. Figuring out what they will do is the basis.
Maybe Bob is looking to sell a lot to free up cash for a private jet. Maybe Alice buys every month on the same day, like clockwork, when she gets her paycheck. Maybe Charlie thinks the stock can't go above $50 and will take profits at $49. Maybe Debbie regrets not buying and is likely to FOMO-buy soon.
You probably can't figure this out participant by participant, but you can in aggregate.
At the end of the day the stock market is a consensus model with a spectrum between two, sometimes contradictory, metrics (sentiment and analytical). If your conclusions about a stock agree with the market then you profit. If you can guess what the market will decide before it has decided, then you profit more.
All those lines do actually mean something, so long as the market is in agreement as how to draw them.
FWIW these bots aren't doing the lines stuff, they are purely sentiment traders.
This assumes that both GBTC and BITB have the same price movements, volatility and liquidity. This is far from true and as a result you might end up with a higher alpha in GBTC despite the fees. I am not saying it is guaranteed, but the fee is one variable.
God help the regulators who need to determine whether it's insider trading for the people training the LLM to know it will be biased in ways they can profit from when it's used inappropriately like this. I suspect the answer will be that users should have known better... I am sad that some people will certainly assume it's unbiased analysis.
Hopefully the LLM trainers didn't "accidentally" bias the model in weird ways that favor their employer or themselves... two of the three recommendations are a fund for investing in bitcoin and a company using blockchain to trace chemical supply chains.
I look forward to seeing if the AIs can beat an index fund, or if they'll just invest in a thousand blockchain, NFT, and AI companies. I suspect a LLM has a high opinion of a company making AI given how many press releases they're summarizing.
You can't become a billionaire by betting on hundreds of thousands of events via "survivorship bias". It's about as likely as getting 1000 monkeys typing on typewriters and producing Shakespeare's works in 10 years.
I think only the top one of those was actually a billion. Sum of payments is poor financial math, and I really wish news agencies would grow some standards and not use them in the headlines.
It's a typical HN gotcha, of which I myself am often guilty: given hundreds of different chances, one of those chances can make you a billionaire, so you can become a billionaire by betting on hundreds of different chances. But of course horse-race gambling doesn't give you that billion-in-one-shot chance.
On edit: well, I guess it technically does, but at such a high required stake that it isn't really worthwhile either. The point about the lottery is that a single ticket, which costs little, can return a billion. A horse race that returned a billion would probably need at least $100 million to be bet, which is probably not even possible.
it sounds to me like you think I've said something about the likelihood of a working system, and also that you think I am somehow in opposition to your second sentence, and require setting straight on the matter?
I admit I am at a loss how either of these suppositions could actually come to be, based on what I wrote, so I suppose I am mistaken.
You're making an assumption there that the educational/opportunity systems in the country aren't designed specifically to feed these jobs in particular.
Yes, that's exactly how rent works nowadays. You rent a piece of real estate and you pay the rent. So, yes, it's still relevant today but not particularly noteworthy.
I'm just making fun of the certainty with which the poster assumes that, just because we've had humongous progress in all areas of knowledge for the last 100 years, the progress is somehow guaranteed to continue at the same rate. Fundamental limits or not, we've already picked the lowest-hanging fruit; further progress is painfully incremental, slow, and expensive, and Star Trek-like devices seem extremely unlikely.
I think you're reading it backwards. If you look closely at how medicine is done today, you will see that there are many areas where it is wildly divorced from reality. So, the point was not "we'll be vastly better soon", it's more "we're in a bad place now".
The current most wildly successful, heavily prescribed medicines today are statins. They help 1 in 104 people in terms of preventing heart attacks, 1 in 154 people in terms of preventing stroke. (Those are people without known heart disease, but they are the vast majority of people taking statins.) They harm 1 in 10 by causing muscle damage, 1 in 50 by causing diabetes. [1] That's the success story. (Sure, you can debate the details. Do they really cause diabetes? Unclear. Do they help anyone, ever, to not die sooner? Unclear.)
It seems like the main reason they're considered so successful is that they do indeed lower an intermediate metric, namely blood cholesterol level. I am sure that bloodletting was successful at removing blood, and if you have an infection, you could even say at removing bad blood.
And yes, I'm cherrypicking my definition of success. Modern medicine can indeed dramatically improve outcomes for a large set of problems (eg cancer). But doctors were successfully setting bones back in the bloodletting days, too.
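For readers unfamiliar with NNT ("number needed to treat"): the figures above translate directly into per-person probabilities, since NNT is just the reciprocal of the absolute risk reduction. A quick sketch, using the numbers from the comment rather than the underlying trials:

```python
# NNT (number needed to treat) = 1 / absolute risk reduction (ARR),
# so the comment's "1 in N" figures convert directly to probabilities.

nnt_heart_attack = 104   # benefit: 1 in 104 avoids a heart attack
nnt_stroke = 154         # benefit: 1 in 154 avoids a stroke
nnh_muscle = 10          # harm: 1 in 10 suffers muscle damage
nnh_diabetes = 50        # harm: 1 in 50 develops diabetes

print(f"Absolute risk reduction, heart attack: {1 / nnt_heart_attack:.2%}")  # ~0.96%
print(f"Chance of muscle damage: {1 / nnh_muscle:.0%}")                      # 10%
```

Framed that way, the trade-off the comment describes is a sub-1% absolute benefit against a 10% rate of the most common harm.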
There is a serious problem with that site's analysis. The meta cited on statin death prevention covered an average trial length of 3.74 years per person. That means they can give you, at best, your 3-4 year probability of having a fatal heart attack. For most age cohorts, that probability is very near 0 no matter what you do, so no intervention whatsoever can prevent cardiac event death by this metric. But this metric isn't what people care about. They're not trying to reduce the risk of having a heart attack in the next few years. They're trying to reduce the risk of ever having a heart attack.
Note this is exactly why we actually use the studies of people with prior cardiovascular disease that this meta excludes. Those people are sufficiently likely to actually have another heart attack within the time horizon of the study that you can get useful data!
The other option is to only conduct 60 year trials. It should be obvious why that isn't a viable option.
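The horizon effect is easy to see with a toy calculation. The rates below are hypothetical, not trial data; the point is only that with a low annual base rate, even a sizable relative risk reduction produces a tiny absolute reduction over a short trial window:

```python
# Hypothetical: 0.3%/year baseline heart-attack rate, cut 25% by treatment.
# Cumulative risk over T years, assuming a constant annual rate:
#   1 - (1 - annual_rate) ** T

def cumulative_risk(annual_rate, years):
    return 1 - (1 - annual_rate) ** years

baseline, treated = 0.003, 0.003 * 0.75
for years in (3.74, 30):
    arr = cumulative_risk(baseline, years) - cumulative_risk(treated, years)
    print(f"{years:>5} yr horizon: ARR = {arr:.3%}, NNT ~ {1 / arr:.0f}")
```

Under these made-up numbers the implied NNT shrinks several-fold as the horizon stretches from a ~3.74-year trial to a lifetime, which is exactly the extrapolation problem being argued about.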
The limited time duration is a big deal, I agree. It's an extrapolation from insufficient data. (Though the studies were evidently powerful enough to come up with a number, so the probability is not that near 0.) But that also means insufficient data to provide evidence for net benefit from an intervention, and an intervention really needs to prove its worth before you go about tempting fate by taking something biologically active. Where is the evidence that statins "reduce the risk of ever having a heart attack"?
I'm going to disagree about the cohort. That only means that if you have prior heart disease, you should not be looking at an NNT derived from a population without prior heart disease. The site's conclusions are mostly irrelevant for you, and should not factor into a rational decision.
If you don't have prior heart disease and are weighing your options, then those data are relevant to you. The vast majority of people who are deciding whether to take statins are in this category.
People deciding whether to try to remove a bullet from their abdomen, and who have no reason to believe that they have ever been shot, should not be weighing the outcomes of test subjects who had been shot before participating in the trial. (It would really suck to be in the control group...)
I'm not saying you shouldn't take statins, with or without prior heart disease. An individual would have more to go on than the existence or absence of a prior heart disease diagnosis. Exact cholesterol readings, for example, might create more or less urgency.
But if I were in the situation of deciding for myself, I'd want better evidence for them than I have seen presented so far. I am suspicious of an industry for which this is a big success story.
The cost to show one person "this is the browser icon and these are the Excel/Word icons; click them when you need to", multiplied by the number of people who need to be trained.
Why is it terrifying? Because it's "artificial"? Would you be more at ease with something "natural" as calcium being part of the plaque?
We've been living with plastics for decades. I don't see people dropping dead around me. Life expectancy around the world has been steadily growing, not the other way around. When exactly are these micro/nano plastics supposed to kill me? When I'm 90?
Hmm, many health markers have been declining: fertility, hormone levels, the incidence rates of many exotic cancers and how often they strike young people. You may just need to look a bit harder.