This study had no controls at all, and can safely be ignored.
>The participants reported not previously using noise to help them sleep or having any sleep disorders.
All this study said was that people who didn't need noise to sleep had their sleep disrupted when noise was introduced. It has absolutely no implication for people who use noise to help them sleep.
Personally I'd like to see the model get better at coding; I couldn't care less whether it's able to be 'creative' -- in fact I wish it wasn't. That's a waste of resources better spent on _making it better at coding_.
The resources issue really needs more thought. These things have already siphoned up all the existing semiconductors, and if that capacity turns out to be mostly spent on things like what the OP does and viral cat videos, then holy shit.
Thing is, dear people, we have limited resources to get off this constraining rock. If we miss that window doing dumb shit and wasting energy, we will just slowly decline to preindustrial levels at best, and that's the end of any space-society futurism dreams forever.
We only have one shot at this; we may be the only, or the first, sentient beings in the universe. That makes it all beyond priceless. Every single human is a miracle, and animals too.
Right, but that's the point -- prompting an LLM still requires 'thinking about thinking' in the Papert sense. You can talk to it in 'natural language', but that language still needs to be _precise_ to get the exact result you want. When it fails, you refine your language until it doesn't. So prompts = high-level programming.
You can't reason your way all the way to the right prompt, because LLMs are probabilistic. Re-prompting is just retrying until you hit the jackpot -- refining only increases the chance of getting what you want.
When you make them deterministic (temperature set to 0), LLMs (even new ones) tend to get stuck in loops on longer streams of output tokens. The only way to guarantee the same output twice is to use the same temperature and the same seed for the sampler's RNG, and most frontier models don't let you set that seed.
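For what it's worth, with a local model you do control both knobs; it's the hosted frontier APIs that hide them. A minimal sketch using Hugging Face `transformers` (assumes `torch` and `transformers` are installed; "gpt2" is just an illustrative model choice):

```python
# Sketch: reproducible decoding with a local model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tok("The quick brown fox", return_tensors="pt")

# do_sample=False is greedy decoding -- the "temperature 0" case,
# which is exactly where long outputs tend to loop.
greedy = model.generate(**inputs, max_new_tokens=30, do_sample=False)

# With sampling, the same seed + same parameters reproduce the output.
torch.manual_seed(42)
sampled = model.generate(**inputs, max_new_tokens=30,
                         do_sample=True, temperature=0.8, top_p=0.95)

print(tok.decode(greedy[0], skip_special_tokens=True))
print(tok.decode(sampled[0], skip_special_tokens=True))
```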
Randomness is not a problem by itself. Algorithms in BPP are probabilistic too. Different prompts might have different probabilities of successful generation, so refinement could be possible even for stochastic generation.
And provably correct one-shot program synthesis based on an unrestricted natural language prompt is obviously an oxymoron. So, it's not like we are clearly missing the target here.
>Different prompts might have different probabilities of successful generation, so refinement could be possible even for stochastic generation.
Yes, but that requires a formal specification of what counts as "success".
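To make that concrete, here's a hypothetical sketch where the "formal specification of success" is just a test predicate, and refinement shows up as a higher per-attempt success probability. `generate_code` and `passes_tests` are made-up stand-ins, not a real API:

```python
# Sketch: refinement over stochastic generation, where a test suite
# is the formal success criterion and generation is just retried.
import random

def generate_code(prompt: str) -> str:
    # Hypothetical stand-in for an LLM call: pretend each attempt
    # succeeds with a probability that grows as the prompt is refined.
    p_success = 0.2 + 0.1 * prompt.count("must")
    return "ok" if random.random() < min(p_success, 0.9) else "buggy"

def passes_tests(candidate: str) -> bool:
    # The formal specification of "success" -- stubbed out here.
    return candidate == "ok"

def synthesize(prompt: str, max_tries: int = 20) -> str | None:
    for _ in range(max_tries):
        candidate = generate_code(prompt)
        if passes_tests(candidate):
            return candidate
    return None  # give up after max_tries

print(synthesize("The function must sort a list and must be stable."))
```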
In my view, LLM-based programming has to become more structured. There has to be a clear distinction between the human-written specification and the LLM-generated code.
If prompts are a high-level programming language, it has to be clear what the source code is and what the object code is.
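A sketch of what that distinction could look like in practice (everything here is hypothetical; `llm_generate` stands in for a real model call): the spec is the source artifact you keep in version control, the generated module is a build artifact, and a verification step gates the build.

```python
# Sketch: spec as source code, generated module as object code.
from pathlib import Path

SPEC = "add(a, b) returns the sum of two integers"  # human-written source

def llm_generate(spec: str) -> str:
    # Hypothetical stand-in for a model call; stubbed for the example.
    return "def add(a, b):\n    return a + b\n"

out = Path("build")
out.mkdir(exist_ok=True)
(out / "add_gen.py").write_text(llm_generate(SPEC))  # the build artifact

# Verification step: the generated artifact must satisfy the spec's tests.
ns: dict = {}
exec((out / "add_gen.py").read_text(), ns)
assert ns["add"](2, 3) == 5
```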
Programs written in traditional PLs are often probabilistic too (think randomized algorithms). It seems the same mechanisms (formal methods) could be used to address this in both cases.
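Randomized quicksort is the classic example: execution is random, yet the output is provably correct on every run -- randomness affects the running time, not the result. A small self-contained sketch:

```python
# Sketch: a randomized algorithm with a deterministic correctness
# guarantee. The pivot is chosen at random, but the output is sorted
# for every possible sequence of random choices.
import random

def rquicksort(xs: list[int]) -> list[int]:
    if len(xs) <= 1:
        return xs
    pivot = random.choice(xs)
    lo = [x for x in xs if x < pivot]
    eq = [x for x in xs if x == pivot]
    hi = [x for x in xs if x > pivot]
    return rquicksort(lo) + eq + rquicksort(hi)

data = [5, 3, 8, 1, 9, 2]
assert rquicksort(data) == sorted(data)  # holds on every run
```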
While large language models don't have enough nuance for AGI, there is still some promise in multi-modal models, or in models built purely on other high-bandwidth data like video. So probabilistic token-based models aren't entirely out of the running yet.
Part of the problem with LLMs in particular is ambiguity -- it's poison for a language model, and English in particular is full of it. So another avenue being explored is translating everything (with proper nuance) into a more precise language, or rewriting training data to eliminate ambiguities with more exact English.
So there are ideas and people are still at it. After all, it usually takes decades to fully exploit any new technology. I don't expect that to be any different with models.
The vast majority of schools in North America don't allow teachers or students to download and run software on school computers (let alone AI models), so I don't entirely know who the audience is for this. I suppose home users? Maybe it's different in the UK.