One could argue it's the first step of the slippery slope process. First you introduce a checkbox as a "non-intrusive way" of age verification, knowing full well it's useless. Next step is you say "Ok, we clearly agree there is a need for age verification, we all voted for the checkbox but kids are lying so we must put into place a system that cannot be gamed. Think of the children!"
If we don't trust the legislature and see them as malevolent entities with their own agendas unaligned with those of their constituents, then, yes, a checkbox opens a path for further abuse.
If we trust the legislature to have a modicum of common sense and not to try to invent a technical solution to a non-technical problem, then a warning "what you're going to see is not for the younger audiences" might be a reasonable compromise.
And it's a shame we live in a world where the former doesn't sound completely nuts.
You can assume they'd rather be constructing new clothes, rather than doing alterations. You can also assume that there is some amount of their previous customer base who aren't interested in restarting the process at 0 with creating custom patterns, etc.
It's quite possible that the lasting effects are more dramatic, as this plays out over time and we move increasingly towards casual dress.
> You can assume they'd rather be constructing new clothes, rather than doing alterations
Thankfully, the free hand of the market provides a solution uniquely tailored to this kind of problem - just raise the price for the adjustments to a point where it's easier and cheaper if you just buy a new suit. In fact, if we are talking about huge weight loss I'm not even sure how the "adjustment" would be any less time-consuming than starting from scratch.
"The cost of alterations is an economically reasonable risk: the above would come in at £1,600 with Terry when they would need £5,000 to 7,000 for a replacement."
Yes, the customer is returning, but that's completely normal in the bespoke tailoring process; it's not new business. Getting a completely bespoke suit usually involves multiple fittings over several weeks. It's normal for the customer's body to change, and for adjustments to be made to create a better fit.
That's why it becomes such an issue when customers come in requesting an alteration: it's like being dropped into a team at the final stages of a project that leadership says is 90% done, but that has been stuck for weeks trying to finalize that last 10% due to some "small last-minute requirement changes".
I assumed if I kept reading there would be a line explaining why they can't simply raise prices until the demand becomes manageable with current staffing, such as "We sold all these suits with guaranteed adjustments for £[some heavily discounted number] for life", but I didn't find any such explanation. Shrug
I think the population of people buying bespoke suiting is small enough that you would not want to alienate your existing customers. I agree that they should raise the prices, but I've got to think there's an aspect of a relationship there. It was hinted at, a little bit, in the article. It's not just a financial transaction, I mean.
Precisely. They're talking about a customer who has spent £700,000 ($870,000) on suits. That's a long-term relationship built on trust. Hiking your prices to manage demand might be a short-term financial bonanza, but it's disastrous in terms of reputation.
And the article suggests that it's not even the population of everyone with a bespoke suit so much as the minority of whales who own a lot of them. There are going to be a fair number of very demanding and impatient rich guys in that group.
Unfortunately some companies and content creators still post there. It's a good way to find out what a subject is about and to identify external sources, so you can participate in less toxic discussions, like on HN.
I initially got excited about this (and know people who would really love this to exist), but after a bit of digging, I am convinced that this is likely a scam. This report explains why it is: https://whitediamondresearch.com/research/know-labs-is-an-ob....
You could also look at the stock economics to see that this company has not behaved like one with a bright future.
Yeah, the stock price made me suspicious, but I figured that it might be due to the not-so-great accuracy plus a lack of moat: even if it worked, you'd see a cheap copy on AliExpress in 2 weeks. I didn't see the report before though, so yeah, I'd agree it smells like a scam, especially when you see the CEO dabbles in NFTs.
ChatGPT has one trade that is guaranteed to be bad. I'm not saying unprofitable, just bad. GBTC is the bitcoin ETF with the biggest expense ratio: 1.5%. If you want to bet on bitcoin, a better choice would be BITB (0.20%) or BTC (0.15%).
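As a rough sketch of what that fee gap means in dollars (the $10,000 position size and nine-month horizon are made up for illustration; the expense ratios are the ones quoted above):

    # Fee drag over a hypothetical 9-month hold of a flat $10,000 position,
    # so that only the expense ratio differs between the funds.
    position = 10_000
    months = 9
    for name, annual_fee in [("GBTC", 0.015), ("BITB", 0.0020), ("BTC", 0.0015)]:
        cost = position * annual_fee * months / 12
        print(f"{name}: ~${cost:.2f} in fees over {months} months")

That's roughly $112 for GBTC versus $15 and $11 for the cheaper funds, before considering any price difference between the trusts.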
Also, the reasoning is partially a hallucination - "The holding period of 9 months aligns with the expected completion of Grayscale's pivotal Phase 3 Bitcoin ETF trial, a major catalyst for unlocking investor demand and driving trust value realization."
There is no such thing as a "holding period", nor are they doing a "Phase 3 Bitcoin ETF trial". It's possible the "Phase 3" thing is picked up from news about a drug company.
ChatGPT does a good job of imitating the average crypto influencer. They don’t know what they’re saying either, and 99% of crypto investors would be thrilled by the prospect of a “pivotal Phase 3 Bitcoin ETF trial” that will “drive trust value realization”. Sounds great, can’t miss out on that!
The hallucinations are simply a mirror of a community that thrives on this nonsense. When nothing is real, you can’t blame the LLM for not figuring it out.
Simpler than that: It's all hallucinations, some of them just happen to be ones humans approve-of.
It's kind of like a manufacturer of Ouija boards promising that they'll fix the "channeling the wrong spirits from beyond the mortal plane" problem. It falsely suggests that "normal" output is fundamentally different.
This is a great insight and fascinating to me as well. What even is the solution, though? It does seem to follow logically: since the earliest days of the internet, huge swaths of wrong, fraudulent, or misleading info have plagued it, and you'd usually have been wise to check your sources before trusting anything you read online. Then we had these models ingest the entire web, so we shouldn't be surprised at how often it is confidently wrong.
I guess reasoning and healthy self-doubt need to be built into the system. Already, reasoning seems like 2025's candidate for what the large labs will be zeroing in on.
This is the interesting part of the experiment. Since these LLMs are general and not specifically trained on historical (and current) stock prices and (business) news stories, it isn't a measure of how good they could be today.
My first thought after seeing this post was that it's a real-world eval. We are running out of evals lately (the ARC-AGI test, then the sudden jump on FrontierMath, etc.), so it's good to have such real-world tests which show how far we are.
If you believe (as many HNers do, although certainly not me) that LLMs have intelligence and awareness then you necessarily must also believe that the LLM is lying (call it hallucinating if you want).
If you ask ChatGPT to tell a story about a liar, it is able to do so. So while it doesn't have a motivated self to lie for, it can imagine a motivated other to project the lie onto.
Reminds me of a recent paper where they found LLMs scheming to meet certain goals, and that was a scientific paper done by a big lab. Are you referring to that context?
Words and their historical contexts aside, systems which are based on optimization can take actions which can appear like intentional lying to us. When DeepMind used to play those Atari games, the agents started cheating, but that was just optimisation, wasn't it? Similarly, when a language-based agent does optimisation, what we might perceive it as is scheming/lying.
I will start believing that an LLM is self-aware when a top lab like DeepMind/Anthropic puts such a paper in a peer-reviewed journal. Otherwise, it's just matrix multiplication to me so far.
IMO a much better framing is that the system was able to autocomplete stories/play-scripts. The document was already set up to contain a character that was a smart computer program with coincidentally the same name.
Then humans trick themselves into thinking the puppet-play is a conversation with the author.
When I'd watch the financial news on TV, they would always bring on the "technical analyst", show a graph of the stock price, and then hand-draw some lines on it, and then spew out various technical terms for it guaranteed to impress.
Me, I always regarded technical analysis as drawing pictures in clouds.
If any of those analysts were worth spit, they'd be working for a hedge fund, not the network.
Well phrased and it's how the stock market works, not only by technical analysts but everyone else playing: make a story in your head, place your bets, majority rules.
Some even believe that's how reality works in general. Sometimes belief or need could be a factor[0].
The former is a belief. It always reflects the imagined realities of those investing--we assume that business reality catches up with them, and it mostly does but not always within a predictable time frame.
Always in play for goods and services, but this is a cryptocurrency: its supply is mathematically limited, and its value is fully market-dependent, determined only by players on the market.
There is something to technical analysis. But you do need to approach it rationally rather than by performing magical rituals.
The markets are made of a finite and sometimes very small number of participants that may have their own reasons for buying and selling unrelated to company performance. Figuring out what they will do is the basis.
Maybe Bob is looking to sell a lot to free up cash for a private jet. Maybe Alice buys the same day every month like clockwork as she gets her paycheck. Maybe Charlie thinks the stock can't go above $50 and will take profits at $49. Maybe Debbie regrets not buying and is likely to FOMO buy soon.
Probably can't figure this out one by one, but can in aggregate.
At the end of the day the stock market is a consensus model with a spectrum between two, sometimes contradictory, metrics (sentiment and analytical). If your conclusions about a stock agree with the market then you profit. If you can guess what the market will decide before it has decided, then you profit more.
All those lines do actually mean something, so long as the market is in agreement as how to draw them.
FWIW these bots aren't doing the lines stuff, they are purely sentiment traders.
This assumes that both GBTC and BITB have the same price movements, volatility and liquidity. This is far from true and as a result you might end up with a higher alpha in GBTC despite the fees. I am not saying it is guaranteed, but the fee is one variable.
God help the regulators that need to determine if it's insider trading for the people training the LLM to know it will be biased in ways they can profit from when used in inappropriate ways like this. I suspect the answer will be that users should have known better... I am sad that some people will certainly assume it's unbiased analysis.
Hopefully the LLM trainers didn't "accidentally" bias the model in weird ways that favor their employer or themselves... two of the three recommendations are a fund for investing in bitcoin and a company using blockchain to trace chemical supply chains.
I look forward to seeing if the AIs can beat an index fund, or if they'll just invest in a thousand blockchain, NFT, and AI companies. I suspect a LLM has a high opinion of a company making AI given how many press releases they're summarizing.
You can't become a billionaire by betting on hundreds of thousands of events via "survivorship bias". It's about as likely as getting 1000 monkeys typing on typewriters and producing Shakespeare's works in 10 years.
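For a sense of just how lopsided that comparison is, here is a rough order-of-magnitude sketch (the 27-key typewriter and keystroke rate are assumptions; even one short phrase is already hopeless):

    # Chance that 1000 monkeys, typing ~2 keys/second for 10 years on a
    # 27-key typewriter (26 letters + space), ever produce one short phrase.
    phrase_len = len("to be or not to be")    # 18 characters
    p_per_attempt = (1 / 27) ** phrase_len    # ~1.7e-26
    keystrokes = 1000 * 2 * 60 * 60 * 24 * 365 * 10
    attempts = keystrokes / phrase_len        # ~3.5e10 attempts
    p_any = attempts * p_per_attempt          # fine approximation, since p is tiny
    print(f"~{p_any:.0e} chance of this one phrase, never mind the complete works")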
I think only the top one of those was actually a billion. Sum of payments is poor financial math, and I really wish news agencies would grow some standards and not use them in the headlines.
It's a typical HN gotcha, of which I myself am often guilty: given hundreds of different chances, where one of those chances can make you a billionaire, you can become a billionaire by betting on hundreds of different chances. But of course horse race gambling doesn't give you that one-shot chance at a billion.
on edit - well, I guess it technically does give it, but at such a high rate of investment it isn't really worthwhile either. The point about the lottery is that a single ticket, which costs little, can return a billion. A horse race that returned a billion probably needs at least 100 million to be bet, which is probably not even possible.
it sounds to me like you think I've said something about the likelihood of a working system, and also that you think I am somehow in opposition to your second sentence, and require setting straight on the matter?
I admit I am at a loss how either of these suppositions could actually come to be, based on what I wrote, so I suppose I am mistaken.
You're making an assumption there that the educational/opportunity systems in the country aren't designed specifically to feed these jobs in particular.
Yes, that's exactly how rent works nowadays. You rent a piece of real estate and you pay the rent. So, yes, it's still relevant today but not particularly noteworthy.
I'm just making fun of the certainty with which the poster assumes that just because we had humongous progress in all areas of knowledge for the last 100 years, it's somehow guaranteed that the progress will continue at the same rate. Fundamental limits or not, we've already picked the lowest hanging fruit and further progress is painfully incremental, slow and expensive and Star Trek-like devices seem extremely unlikely.
I think you're reading it backwards. If you look closely at how medicine is done today, you will see that there are many areas where it is wildly divorced from reality. So, the point was not "we'll be vastly better soon", it's more "we're in a bad place now".
The most wildly successful, heavily prescribed medicines today are statins. They help 1 in 104 people in terms of preventing heart attacks and 1 in 154 people in terms of preventing strokes. (Those figures are for people without known heart disease, but such people are the vast majority of those taking statins.) They harm 1 in 10 by causing muscle damage and 1 in 50 by causing diabetes. [1] That's the success story. (Sure, you can debate the details. Do they really cause diabetes? Unclear. Do they help anyone, ever, to not die sooner? Unclear.)
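To put those NNT/NNH figures in absolute terms, here's the 1/NNT arithmetic on the numbers quoted above:

    # Absolute risk difference implied by each number-needed-to-treat/harm
    # figure above: risk difference = 1 / NNT.
    figures = {
        "heart attacks prevented": 104,
        "strokes prevented": 154,
        "muscle damage caused": 10,
        "diabetes caused": 50,
    }
    for outcome, nnt in figures.items():
        print(f"{outcome}: {1 / nnt:.2%} of people affected")

So the benefit is on the order of 1% in absolute terms, while the most common harm is around 10%.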
It seems like the main reason they're considered so successful is that they do indeed lower an intermediate metric, namely blood cholesterol level. I am sure that bloodletting was successful at removing blood, and if you have an infection, you could even say at removing bad blood.
And yes, I'm cherrypicking my definition of success. Modern medicine can indeed dramatically improve outcomes for a large set of problems (eg cancer). But doctors were successfully setting bones back in the bloodletting days, too.
There is a serious problem with that site's analysis. The meta cited on statin death prevention covered an average trial length of 3.74 years per person. That means they can give you, at best, your 3-4 year probability of having a fatal heart attack. For most age cohorts, that probability is very near 0 no matter what you do, so no intervention whatsoever can prevent cardiac event death by this metric. But this metric isn't what people care about. They're not trying to reduce the risk of having a heart attack in the next few years. They're trying to reduce the risk of ever having a heart attack.
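A minimal sketch of the horizon problem, using a made-up 0.1% annual probability of a fatal cardiac event for a low-risk cohort (the rate is hypothetical; only the compounding logic matters):

    # Cumulative risk over different horizons for a hypothetical 0.1%
    # annual probability of a fatal cardiac event.
    annual_risk = 0.001
    for years in (3.74, 10, 30):
        cumulative = 1 - (1 - annual_risk) ** years
        print(f"{years:>5} years: {cumulative:.2%} cumulative risk")

Over the trial window there is almost nothing for any intervention to reduce, while the long-horizon figure people actually care about is nearly an order of magnitude larger.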
Note this is exactly why we actually use the studies of people with prior cardiovascular disease that this meta excludes. Those people are sufficiently likely to actually have another heart attack within the time horizon of the study that you can get useful data!
The other option is to only conduct 60 year trials. It should be obvious why that isn't a viable option.
The limited time duration is a big deal, I agree. It's an extrapolation from insufficient data. (Though the studies were evidently powerful enough to come up with a number, so the probability is not that near 0.) But that also means insufficient data to provide evidence for net benefit from an intervention, and an intervention really needs to prove its worth before you go about tempting fate by taking something biologically active. Where is the evidence that statins "reduce the risk of ever having a heart attack"?
I'm going to disagree about the cohort. That only means that if you have prior heart disease, you should not be looking at an NNT derived from a population without prior heart disease. The site's conclusions are mostly irrelevant for you, and should not factor into a rational decision.
If you don't have prior heart disease and are weighing your options, then those data are relevant to you. The vast majority of people who are deciding whether to take statins are in this category.
People deciding whether to try to remove a bullet from their abdomen, and who have no reason to believe that they have ever been shot, should not be weighing the outcomes of test subjects who had been shot before participating in the trial. (It would really suck to be in the control group...)
I'm not saying you shouldn't take statins, with or without prior heart disease. An individual would have more to go on than the existence or absence of a prior heart disease diagnosis. Exact cholesterol readings, for example, might create more or less urgency.
But if I were in the situation of deciding for myself, I'd want better evidence for them than I have seen presented so far. I am suspicious of an industry for which this is a big success story.