This + NFT integration will be the real game changer. Like it's Breaking Bad, except Walter White is decked out like one of your Slonks. Or it's Indiana Jones stealing a Bored Ape instead of the idol. Possibilities are endless.
Theoretically, if a technology came along that destroyed the human condition in a four year time frame, your "let's wait longer than four years and see" philosophy would kill us all.
Okay, but you have no basis to assume a technology that automates labor will do that given your priors (previous technologies that have automated general labor en masse). In other words, the FUD is not based in anything, whereas the optimism most certainly is.
I'm not trying to argue whether AI is good or bad. I'm only pointing out the flaw in your philosophy.
Like, there will one day be a technology invented that could indeed wipe us all out in 5 years. On a long enough timeline, it's a certainty that someone will come up with such a thing. And when it comes, there will be people such as yourself saying "no technology has ever wiped us out before, therefore this one won't either". And then it will wipe us out, and there will be no one left to say "well I guess this time was the exception".
I didn't say you said all technology was good. I'm addressing the part of your argument where you point to past events as evidence that the automation of labor will always continue to be good in the future. I used an extreme example to show that one day, if a new technology that automates labor is actually a net negative for society, your philosophy won't catch it. It will slip right through.
To spell it out so we don't keep going around in this circle: it's worth looking at indicators other than the past to judge a technology's benefit to the labor force.
You don’t buy FartCoin for the short term gains, you buy it for the long term. For example it’s down over the past hour, but if you look long term (past day) it’s up.
And yes of course I’m joking. If you’re spending money on these get rich quick schemes instead of dollar cost averaging into an index fund, you’re being irresponsible. That’s real advice.
It's the same reason young men are drawn to crypto. Younger generations are faced with an economy that prices them out of the housing market, so they feel the need to explore alternative wealth-building pathways if they're to achieve the aspirational lifestyles they've been sold.
Realistically though, with the demographics as they are, aren't these young men just rolling dice against each other, gambling to take other young men's money? Isn't this a zero-sum game?
Or is it more young men vs the establishment where the establishment wins the vast majority of the time but occasionally a young dude makes the right longshot bet?
> Or is it more young men vs the establishment where the establishment wins the vast majority of the time but occasionally a young dude makes the right longshot bet?
Seems like the latter, except that this describes not only how people perceive gambling but the entire economy, considering startups, Silicon Valley, the current crop of tech billionaires and how they made their fortunes, etc.
So, why not gamble on crypto, NFTs, or prediction markets? Might as well go for the longshots since everything is a longshot anyway
A poor person paying $5.00 for an odds-adjusted $4.99 lottery ticket a couple dozen times in their life is likely not making their worst investment. And if they do win, it is hard to argue against the wisdom of it.
The gamblers, however, will envision a future where they have paid $5.00 for a $0.03 ticket and still won the lottery a couple dozen times in a row, because they deserve it, so they will buy all the tickets they can right now whose numbers end in 3, because that's important.
Even when you think you have a legitimate insight so the book is mispriced for your actual odds, you should consider the risk.
Risk management is foremost.
What happens if I lose the almost-certain bet?
This may come as a surprise to you, but in the real world there is no shortage of people whose business is making you think you have an edge on your "long shot".
I mean, you could be describing society as it already exists. It's what capitalism itself promotes; gambling just seems a bit more direct. When 99.9% of people apply for a job, they are directly competing against other workers in a zero-sum game. Maybe a few years down the line more jobs might open up, so over long enough time spans it's not zero-sum, but for the person seeking a job at that very moment, it is. Our economy is zero-sum in the short term. It's not like people can freeze themselves in cryostasis and wait a few decades for prospects to improve or the economy to expand; they either earn money now, or they throw that opportunity cost into the trash, never to be returned.
> Maybe a few years down the line they might open up more jobs so over long enough time spans its not zero sum
Also, everybody benefits from a society that chooses qualified people for a position and gives everybody an opportunity to get a job. But that benefit only shows up over time and across many hiring processes, and it is harder to see in the moment.
Nepotism is the zero-sum version of applying for a job. Only the power to take from others counts; no qualification is required, just raw power. Which nepo-baby gets the government contract, the board position, etc. is a zero-sum game, and participants behave like it is one: betrayal, lies, and so on are part of that game.
Um, no, it’s not. It’s notoriously hard to estimate exactly but annual consumer surplus in the US alone is estimated to be in the trillions of dollars.
So if I come up with a billion-dollar invention, does the money just poof into existence? No, that money has to come from other people's pockets and will no longer be spent on those other things if I sell and collect. Over long time spans, no, it isn't zero-sum, but on any timescale not measured in many years it most definitely is, for all practical purposes. And since people can't just check out of the economy without losing money, the fact that the economy could be larger in 10 years doesn't make a damn bit of difference to someone right at this moment.
Monopoly isn't a zero-sum game, and yet within every turn there is a fixed, zero-sum amount of money in play that can be utilized. And the fact that 5x more money might be on the board 20 turns later doesn't make a bit of difference to what I or anybody else can spend or earn during our turn right now.
They should try releasing one with a futuristic space-age design and flashing rainbow lights. Maybe give it a name like HARDTEK and put some random techy shapes all over it.
Can someone explain why this is? Do LLMs somehow contain a true random number generator? Why wouldn't they produce the same outputs given the same inputs (even temperature)?
edit: I'm not talking about an LLM as accessed through a provider. I'm just talking about using a model directly. Why wouldn't that be deterministic?
The model outputs a probability distribution for the next token, given the sequence of all previous tokens in the context window. It’s just a list of floats in the same order as the list of tokens that the tokenizer uses.
After that, a piece of software that is NOT the LLM chooses the next token. This is called the sampler. There are different sampling parameters and strategies available, but if you want repeatable* outputs, just take the token with the highest probability number.
* Perfect determinism in this sense is difficult to achieve because GPU calculations naturally have a minor bit of nondeterminism. But you can get very close.
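To make the model/sampler split above concrete, here's a minimal NumPy sketch. The logits are made-up numbers standing in for what a real model would emit over a tiny 5-token vocabulary; the "sampler" here is just greedy argmax as described.

```python
import numpy as np

def softmax(logits):
    # Subtract the max before exponentiating for numerical stability.
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

# Hypothetical logits a model might emit for a 5-token vocabulary.
logits = np.array([2.0, 1.0, 0.5, -1.0, 3.0])
probs = softmax(logits)

# Greedy sampling: always take the highest-probability token.
# This step is the sampler, not the model itself.
greedy_token = int(np.argmax(probs))
print(greedy_token)  # 4, since logit 3.0 is the largest
```

Given the same logits, greedy selection always returns the same token, which is why this strategy gives repeatable output (modulo the GPU caveat above).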
Believe it or not, in statistics and machine learning the hard-coded parts of a model that impact the results are considered part of the model. But I understand that nowadays we don't care about these things because AI goes brrr.
There are A LOT of misconceptions about LLMs; the biggest one is that they are not deterministic. They are 100% deterministic, and temperature has nothing to do with it. You WILL get exactly the same result every single time (at ANY temperature) as long as you use the same sampling parameters and server config parameters. What causes variance in LLMs is server-side behavior like batch processing and caching, among a few other things, with batching responsible for most of the issues. The reason that flag is used is that large providers serve multiple customers per GPU, and splitting up the VRAM is tricky and causes drift. If you start llama.cpp, for example, with only one person per slot and batching off, you will always get the same results every time, even at temperature 1.2 or whatever other parameters, because you are using one GPU per inference call, so no funny business there. The reason most people are unaware of this is that most people only have experience with APIs instead of working with the actual inference engine itself, so this damned myth keeps spreading. My video for reference, where you can download it and try for yourself: https://www.youtube.com/watch?v=EyE5BrUut2o
Thanks so much for this! I still haven't got around to building my own language model yet, so I'm a bit fuzzy on the details, but if I imagined a thought experiment where I did all the math by hand on paper, I just couldn't see how I would end up with a different output each time given the same inputs. Finding out that the variance other people are seeing comes from the server/hardware stuff clears that up.
This is a surprisingly annoying question to Google. A lot of articles give the reason that softmax returns a probability distribution, as if the presence of the word "probability" means the tokens will be different every time.
An LLM model itself -- that is, the weights and the mathematical functions linking them -- does not tell you exactly how to train from data, nor how to generate an output. Instead, it describes a function providing relative likelihood(output | input).
Deciding how to pick a particular output given that likelihood function is left as an exercise for the user, which we call inference.
One obvious choice is to keep picking the highest-likelihood token, feed it into the model, and get another -- on repeat. This is what most algorithms call "temperature=0". But doing this for token after token can lead to boring output, or steer you into pathological low-probability sequences like a set of endless repeats.
So, the current SOTA is to intentionally introduce a random factor (temperature>0) to the sampling process -- along with other hacks, like explicit suppression of repeats.
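A small sketch of how temperature enters that sampling process: dividing the logits by the temperature before the softmax flattens or sharpens the distribution. The logit values here are illustrative, and the fixed RNG seed is an assumption to make the demo repeatable.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed so the demo is repeatable

def sample_with_temperature(logits, temperature):
    # Dividing logits by the temperature sharpens (T < 1) or
    # flattens (T > 1) the distribution before sampling from it.
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()          # numerical stability
    probs = np.exp(scaled)
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

logits = [2.0, 1.0, 0.1]
# Low temperature: almost always token 0. High temperature: more variety.
low = [sample_with_temperature(logits, 0.1) for _ in range(10)]
high = [sample_with_temperature(logits, 2.0) for _ in range(10)]
print(low, high)
</imports>```

Note that even at high temperature the process is deterministic once the seed is fixed; the apparent randomness comes entirely from the RNG state, not from the model.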
Yeah, sure. So temperature is a parameter of the sampling step applied to these models' outputs, and when it isn't zero it increases the probability of taking a different path when decoding the tokens. Whether it's at a provider or downloaded on your own machine.
Technically, even when the temperature is 0 it's not deterministic, though it's more likely to be: you can have ties in probabilities for generating the next token, and floating-point noise is real.
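The floating-point noise is easy to demonstrate: addition isn't associative in IEEE floats, so the order in which a GPU reduces partial sums can change the low bits of a logit, and a near-tie between two tokens can then flip the argmax. A tiny illustration:

```python
# Floating-point addition is not associative: summing the same three
# numbers in a different order gives a different result.
a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)  # 1.0
print(a + (b + c))  # 0.0, because -1e16 + 1.0 rounds back to -1e16
```

This is why "same model, same inputs" can still diverge across runs when the reduction order isn't fixed, e.g. under different batch sizes.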
All these models are doing is guesstimating the next token to say.
OP is being a bit tongue-in-cheek, I believe they mean that some vibe coders really want to be abstracted away from their own jobs, and are very much not interested in computer-scientific abstraction.
I found some more details about this, for anyone interested. It looks like critics of Tron's visual effects mistakenly thought the computer was generating it all for them, with little human input, when it was actually quite a laborious process.
"Tron’s offices were trailers in the Disney parking lot, recalls Chris Wedge, then an animator for MAGI, who worked on Tron’s light cycle sequences. “[That’s] because the Disney animation department didn’t believe that this was animation,” he says. “They thought it was computers just making effects. They just didn’t understand anything about it.”"
"Tron’s distinctive glowing circuitry was achieved through a technique called backlight animation, which involves making a negative of each frame and hand-painting the glowing areas. There were 75,000 frames to do; more than half a million pieces of artwork."
"Star Wars and Alien both feature 3D wireframe graphics projected on screens. Only a few companies could produce such images, each of which had their own room-sized computer and their own custom-built software. The process was still cumbersome. “We had to figure out how to position and render objects 24 times to make one second of perceived movement on the screen,” says Bill Kroyer, Tron’s head of computer animation. Tron’s animators had to map out the CGI scenes on graph paper, then calculate the coordinates and angles for each element in each frame."
> It looks like critics of Tron's visual effects mistakenly thought the computer was generating it all for them, with little human input,
I no longer remember the details, but I certainly didn't get the impression that "human input" vs "no human input" was the actual criterion.
And this line is meaningless to me unless we have specific definitions of what Chris Wedge meant by "animation" and "just effects":
> “[That’s] because the Disney animation department didn’t believe that this was animation,” he says. “They thought it was computers just making effects. They just didn’t understand anything about it.”
HN is an interesting case, because a lot of those types of comments are likely to be either astroturfing, people who work at those companies, or people who are invested in them. I wouldn't extrapolate too much to the rest of humanity based on the people here.
You might be overthinking this as well. The reality is that most people just want to type `XXXX mix`, have it playing for an hour with nice backgrounds, and call it a day. I doubt most actually care if it's AI or not. They might care only if you tell them. I hope I'm wrong.
There is also genuine interest within a group in playing AI tracks out of curiosity, which is entertaining in its own right, especially for those unaware of what it can produce (particularly since 2026).
> If nothing else, to have a shared experience with other people. A lot of the value for people is derived from the fact that they can talk about the same song with someone else. If it's all individualized you lose that.
While I agree this is a valuable part of the human experience, I don't think the average person would recognise it as one. Without it, they might start to feel a general lack of connection to the people around them, but probably wouldn't be able to trace where it came from.
It makes me wonder what other human cultural experiences we've lost over the years, which are causing us a kind of collective mental anguish in their absence.