Hacker News | empressplay's comments

This study had no controls at all, and can safely be ignored.

>The participants reported not previously using noise to help them sleep or having any sleep disorders.

All this study said was that people who didn't need noise to sleep had their sleep disrupted when noise was introduced. It has absolutely no implication for people who use noise to help them sleep.

Meaningless trash.


This tells you all you need to know about OpenAI, honestly.

Personally I'd like to see the model get better at coding, I couldn't really care less if it's able to be 'creative' -- in fact I wish it wasn't. It's a waste of resources better used to _make it better at coding_.

The resources issue is really something that needs more thought. These things have already siphoned up all the existing semiconductors, and if that capacity turns out to be mostly spent on things like what the OP does and viral cats, then holy shit.

Thing is, dear people, we have limited resources to get off this constraining rock. If we miss that window doing dumb shit and wasting energy, we will just slowly decline to preindustrial levels at best, and that's the end of any space-society futurism dreams forever.

We only have one shot at this, as possibly the only, or the first, sentients in the universe. It is all beyond priceless. Every single human is a miracle, and the animals too.


What is the difference between creativity and coding?

Works excellently on M2 Max -- thanks! Good times.

I'm glad B5 is still getting a new audience.

This article sounds very AI-generated though.


I compose all sorts of music but the only music people really like (and give me money for) is pirate music.

Not pirated music. Pirate music.

shrug


hey at least someone is listening to it XD

I've got multiple hours of music in different genres and have gotten 50 views in 10 years...


What's pirate music?


Right, but that's the point -- prompting an LLM still requires 'thinking about thinking' in the Papert sense. While you can talk to it in 'natural language' that natural language still needs to be _precise_ in order to get the exact result that you want. When it fails, you need to refine your language until it doesn't. So prompts = high-level programming.

You can't treat prompt refinement as exact, because LLMs are probabilistic. Your re-prompts are just retries until you hit the jackpot; refining only increases the chance of getting what you want.

When you make them deterministic (by setting the temperature to 0), LLMs (even new ones) get stuck in loops on longer streams of output tokens. The only way to guarantee the same output twice is to use the same temperature and the same seed for the RNG, and most frontier models don't give you a way to set the seed.
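
You can see both behaviors with a local open model. A minimal sketch using Hugging Face transformers (gpt2 is just a stand-in; any causal LM works the same way):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    inputs = tok("The rain in Spain", return_tensors="pt")

    # Greedy decoding is the temperature-0 case: fully deterministic,
    # but prone to repetition loops on longer outputs.
    greedy = model.generate(**inputs, do_sample=False, max_new_tokens=40)

    # Sampling is only reproducible if you pin the RNG seed yourself,
    # which hosted frontier APIs generally don't expose.
    torch.manual_seed(42)
    sampled = model.generate(**inputs, do_sample=True, temperature=0.8,
                             max_new_tokens=40)

    print(tok.decode(greedy[0]))
    print(tok.decode(sampled[0]))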


Randomness is not a problem by itself. Algorithms in BQP are probabilistic too. Different prompts might have different probabilities of successful generation, so refinement could be possible even for stochastic generation.

And provably correct one-shot program synthesis based on an unrestricted natural language prompt is obviously an oxymoron. So, it's not like we are clearly missing the target here.


>Different prompts might have different probabilities of successful generation, so refinement could be possible even for stochastic generation.

Yes, but that requires a formal specification of what counts as "success".

In my view, LLM-based programming has to become more structured. There has to be a clear distinction between the human-written specification and the LLM-generated code.

If LLMs are a high-level programming language, it has to be clear what the source code is and what the object code is.
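
As a sketch of that distinction: the human-written spec is a machine-checkable test (the source code), and only generated text that passes it counts as the program (the object code). Here llm_generate is a hypothetical stand-in for a real model call:

    # llm_generate is hypothetical; a real version would call a model API.
    def llm_generate(prompt: str) -> str:
        return "def add(a, b):\n    return a + b"

    # The spec is the source of truth: it decides what "success" means.
    def spec(code: str) -> bool:
        ns = {}
        exec(code, ns)  # run the candidate program
        add = ns["add"]
        return all(add(a, b) == a + b for a, b in [(1, 2), (-3, 3), (0, 0)])

    def synthesize(prompt: str, attempts: int = 5) -> str:
        for _ in range(attempts):
            candidate = llm_generate(prompt)  # stochastic generation
            if spec(candidate):               # deterministic acceptance
                return candidate
        raise RuntimeError("no candidate satisfied the spec")

    print(synthesize("Write a Python function add(a, b) returning a + b."))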


I don't think framing LLMs as a "new programming language" is correct. I was addressing the point about randomness.

A natural-language specification is not source code. In most cases it's an underspecified draft that needs refinement.


Programs written in traditional PLs are also often probabilistic in their behaviour. It seems the same mechanisms (formal methods) could be used to address this in both cases.

Huh?

What's an example of a probabilistic programming language?


This isn't what the parent was talking about, but probabilistic programming languages are totally a thing!

https://en.wikipedia.org/wiki/Probabilistic_programming
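
For flavor, the core idea can be hand-rolled in a few lines of plain Python (a real PPL like Stan or PyMC does the inference far more efficiently):

    import random

    # Generative model: coin with unknown bias p ~ Uniform(0, 1);
    # condition on having observed 8 heads in 10 flips, then infer the
    # posterior over p by (very inefficient) rejection sampling.
    def model():
        p = random.random()
        heads = sum(random.random() < p for _ in range(10))
        return p, heads

    accepted = [p for p, h in (model() for _ in range(200_000)) if h == 8]
    print(sum(accepted) / len(accepted))  # posterior mean, ~0.75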


Race conditions, effects of memory safety and other integrity bugs, behaviours of distributed systems, etc.
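
A minimal sketch of the first one in Python (whether you actually observe lost updates varies with interpreter version and thread scheduling):

    import threading

    # `counter += 1` is a separate load/add/store, so threads can
    # interleave and lose updates: deterministic code, probabilistic result.
    counter = 0

    def worker():
        global counter
        for _ in range(100_000):
            counter += 1

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(counter)  # often less than 400000, and different each run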

Ah, sorry, I read your comment wrong. Yes, I agree we can and do make probabilistic systems; we've just, to date, been using deterministic tools to do so.

... yet.

Yes, because you have your own style. You do things your own unique way. That's what makes your music your music.

While large language models don't have enough nuance for AGI, there is still some promise in multi-modal models, or in models based purely on other high-bandwidth data like video. So probabilistic token-based models aren't entirely out of the running yet.

Part of the problem with LLMs is ambiguity -- it's poisonous to a language model, and English in particular is full of it. So another avenue being explored is translating everything (with proper nuance) into another language that is more precise, or rewriting the training data to eliminate ambiguities by using more exact English.

So there are ideas and people are still at it. After all, it usually takes decades to fully exploit any new technology. I don't expect that to be any different with models.


The vast majority of schools in North America don't allow teachers or students to download and run software on school computers (let alone AI models), so I don't entirely know who the audience is for this. I suppose home users? Maybe it's different in the UK.

