Hacker News | DavidSJ's comments

Capacity is tight, you serve from where you can.

Probably also because most token use cases are not latency sensitive. An extra 200 ms of delay isn't going to change much.

Right, so if they were able to get a discount in UAE…

I'm like you.

I loved Apple IIs at schools and libraries as a young child, fell in love with my Mac IIsi at home at the age of 7. Later, at 13, I had a Macintosh-evangelizing web site and mailing list that Guy Kawasaki (Apple's lead evangelist) even subscribed to.

I've been a primary Mac user through the 68k, PowerPC, Intel, and Apple Silicon days, from System 6.0.7 through today. Got an original iPhone and iPad, have upgraded my iPhone every few years since.

The technofeudalism, bugginess, and UI crappiness have me done and looking for the exits, to say nothing of the embrace of Trump. My next laptop won't be a Mac, and my next phone won't be an iPhone.


Yes, the actual LLM returns a probability distribution, which gets sampled to produce output tokens.

[Edit: but to be clear, for a pretrained model this probability means "what's my estimate of the conditional probability of this token occurring in the pretraining dataset?", not "how likely is this statement to be true?" And for a post-trained model, the probability really has no simple interpretation other than "this is the probability that I will output this token in this situation".]
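A minimal sketch of that sampling step, with a made-up toy vocabulary and made-up logits (numpy, not any real model):

```python
import numpy as np

def softmax(logits):
    # Subtract the max before exponentiating, for numerical stability.
    z = np.exp(logits - np.max(logits))
    return z / z.sum()

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "mat"]      # made-up toy vocabulary
logits = np.array([2.0, 1.0, 0.5, -1.0])  # made-up "model outputs"

probs = softmax(logits)             # the probability distribution over tokens
token = rng.choice(vocab, p=probs)  # sampling produces the output token
```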


It’s often very difficult (intractable) to come up with a probability distribution of an estimator, even when the probability distribution of the data is known.

Basically, you’d need a lot more computing power to come up with a distribution of the output of an LLM than to come up with a single answer.
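To make the cost difference concrete with a far simpler estimator than an LLM (the sample maximum, chosen arbitrarily for illustration): one answer is one evaluation, but characterizing its distribution takes thousands of evaluations (Monte Carlo):

```python
import numpy as np

rng = np.random.default_rng(0)

def estimator(sample):
    # A simple nonlinear estimator: the sample maximum. Its distribution
    # is tractable here, but for a complicated estimator it usually isn't.
    return sample.max()

# One answer: a single evaluation.
one_answer = estimator(rng.normal(size=100))

# A distribution: thousands of evaluations.
draws = np.array([estimator(rng.normal(size=100)) for _ in range(5000)])
mean, sd = draws.mean(), draws.std()
```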


What happens before the probability distribution? I’m assuming that, say, alignment or other factors would influence it?


In microgpt, there's no alignment. It's all pretraining (learning to predict the next token). But for production systems, models go through post-training, often with some sort of reinforcement learning which modifies the model so that it produces a different probability distribution over output tokens.

But the model's "shape" and computation graph themselves don't change as a result of post-training. All that changes is the weights in the matrices.
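A toy illustration of that point, with a hand-rolled one-layer "model" and made-up weights: the computation graph stays fixed, while changing only the numbers in the weight matrix changes the output distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def model(weights, x):
    # The "computation graph" is fixed: one matrix multiply, then softmax.
    return softmax(weights @ x)

x = np.array([1.0, -0.5, 0.25])  # made-up input
w_pre = np.full((4, 3), 0.1)     # made-up "pretrained" weights
# "Post-training": same shape, different numbers.
w_post = w_pre + rng.normal(scale=0.5, size=(4, 3))

p_before = model(w_pre, x)   # uniform here, since all logits are equal
p_after = model(w_post, x)   # same output shape, different distribution
```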


OpenAI should not be agreeing to any contract with the DOD under these circumstances, in which Anthropic is being falsely labeled a supply chain risk.


That's 4–6 months over the 18 months the trials lasted, i.e. about a 30% slowdown of progression. The open-label extensions suggest this relative slowdown continues at least to the 4-year mark (at which point it would have bought you over a year of time): https://www.alzforum.org/news/conference-coverage/signs-last...

Time will tell if the 30% slowdown continues beyond four years, and/or if earlier treatment with more effective amyloid clearance from newer drugs has greater effects. The science suggests it should.
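Spelling out the arithmetic behind those figures (just the numbers above, nothing more):

```python
# Slowdown implied by a 4-6 month delay over an 18-month trial:
low, high = 4 / 18, 6 / 18  # ~22% to ~33%
mid = 5 / 18                # ~28%, i.e. roughly 30%

# If a ~30% slowdown persists to the 4-year (48-month) mark:
months_bought = 0.30 * 48   # ~14.4 months, i.e. over a year
```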


It’s one of the best blood tests. There are also PET scans, lumbar punctures (spinal taps), and postmortem analyses of brain tissue.


I don’t think we should preemptively surrender our free speech to the authoritarians.


Even the counting numbers arose historically as a tool, right?

Even negative numbers and zero were objected to until a few hundred years ago, no?


A mistake in this critique is that it assumes an exponential: a constant proportional rate of growth. It is true that, in some sense, an exponential always seems to be accelerating while infinity always remains equally far away.

But this is a bit of a straw man. Mathematical models of the technological singularity [1], along with the history of human economic growth [2], are super-exponential: the rate of growth is itself increasing over time, or at least has taken multiple discrete leaps [3], at the transitions to agriculture and industry. A true singularity/infinity can of course never be achieved for physical reasons (limited stuff within the cubically expanding lightcone, plus inherent limits to technology itself), but the growth curve can look hyperbolic and traverse many orders of magnitude before those physical limits are encountered.

[1] https://www.nber.org/system/files/working_papers/w23928/w239...

[2] https://docs.google.com/document/d/1wcEPEb2mnZ9mtGlkv8lEtScU...

[3] https://mason.gmu.edu/~rhanson/longgrow.pdf
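A quick way to see the difference, using the textbook closed forms for dy/dt = k*y (exponential) versus dy/dt = k*y^2 (hyperbolic, which hits a finite-time singularity at t* = 1/(k*y0)):

```python
import math

def exponential(t, y0=1.0, k=1.0):
    # dy/dt = k*y  ->  y(t) = y0 * exp(k*t): never blows up in finite time.
    return y0 * math.exp(k * t)

def hyperbolic(t, y0=1.0, k=1.0):
    # dy/dt = k*y**2  ->  y(t) = y0 / (1 - k*y0*t): singular at t* = 1/(k*y0).
    return y0 / (1.0 - k * y0 * t)

# Approaching t* = 1, the hyperbolic path traverses orders of magnitude
# while the exponential has barely moved.
for t in (0.9, 0.99, 0.999):
    assert hyperbolic(t) > exponential(t)
```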


> A true singularity/infinity

It can’t be infinitely fast, but after the point where we all collectively cease to be able to comprehend the rate of change, it’s effectively a discontinuity from our point of view.


One note: the standard deviation of the remaining effects would be sqrt(1/2) as large, not 1/2 as large. So more like 8.5–10.5 years.
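The point is that halving a variance scales the standard deviation by sqrt(1/2) ≈ 0.707, not by 1/2. A quick check, guessing (purely for illustration) that the original range was roughly 12–15 years:

```python
import math

# Halving the variance scales the sd by sqrt(1/2), not 1/2.
for sd_years in (12.0, 15.0):  # hypothetical original sd range, for illustration
    remaining = sd_years * math.sqrt(0.5)
    print(round(remaining, 1))  # 8.5 and 10.6
```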

