Hacker News | drw85's comments

Since there is FinalCut and Logic Pro on iPad now, larger projects absolutely benefit from this.

Nice ChatGPT answer. Put some real thought and data in it too.

The whole point is that LLMs, especially the attention mechanism in transformers, have already paved the road to AGI. The main gap is the training data and its quality. Humans have generations of distilled knowledge — books, language, culture passed down over centuries. And on top of that we have the physical world — we watched birds fly, saw apples drop, touched hot things. Maybe we should train the base model on physical-world data first, and then fine-tune it on the distilled knowledge.
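For reference, the attention mechanism the comment leans on can be sketched in a few lines of plain Python. This is a minimal single-head, unmasked scaled dot-product attention (the core operation in transformers), not anyone's production implementation; the toy query/key/value vectors at the bottom are made up for illustration.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    # Scaled dot-product attention: one head, no masking.
    # For each query, score every key, softmax the scores,
    # and return the weighted mix of the value vectors.
    d_k = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Toy example: the query matches the first key, so the output
# leans toward the first value vector.
q = [[1.0, 0.0]]
k = [[1.0, 0.0], [0.0, 1.0]]
v = [[10.0, 0.0], [0.0, 10.0]]
print(attention(q, k, v))
```

The point of the sketch: attention is just a differentiable, content-based lookup over the context window, which is part of why "more and better data" is such a load-bearing assumption in the argument above.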

Human life includes a lot of adversarial training (lying relatives) and training in temporal logic, which seems like a somewhat different domain than purely linguistic computation (e.g. staying up late and feeling bad; working hard at a task for months and getting better at it; feeling physical skills, even editing Go with Emacs, move from the conscious layer into the cerebellar layer).

I think attention is a poor man's "OODA" loop. Cognitive science is learning that a primary function of the brain is predicting what will be going on with the body in the immediate future and prepping for it; that's not something LLMs are architecturally positioned to do. Maybe swarms of agents could get there (although to my mind that's more a way to cope with LLMs' poor performance on large contexts of instructions, as opposed to large contexts of data, than a way to have contending systems fighting to make a decision for the overall entity), but they still lack both the real-time computational aspect and the continually tricky problem of other people telling you partially correct information.

There's plenty of training data, for a human. The LLM architecture is not as efficient as the brain; perhaps we can overcome that with enough Twitter posts from PhDs, enough YouTube videos of people answering "why" to their four-year-olds, and enough college lectures, but that's kind of an experimental question.

Starting a network out in a constrained body and having it learn to control that body, within a social context of parents and siblings, would be an interesting experiment, especially if you could give it an inherent temporality and a good similar-content-addressable persistent memory. Perhaps a somewhat terrifying experiment, but I'd guess the protocols for this would be air-gapped, not internet-connected with a credit card.


That's the point. Markdown now gets rendered in Notepad. Before these changes, Notepad could only edit plain text; it didn't render markdown.

So you subscribe to the Microsoft 365 Copilot app, or whatever it's called now.

hmm... ok maybe that's reasonable

Charging an extra premium for a "secure vanilla-text™, pure, unadulterated WYSIWYG editor" experience should be a thing for "security-minded" enterprises.


I don't think it sounds crazy at all.

To me this feels as made-up as many Reddit stories are.

Either by the so-called 'operator' of the bot, or by the author.


This is the problem with the LLM fallacy.

You think it'll rapidly get smarter, but it just recreates things from all the terrible code it was fed. The way code is written also changes rapidly these days, and LLMs have trouble drawing lines between versions of things and the changes between them.

Sure, they can compile and test things now, which might make the code work and run. But its quality will be hard to increase without manually controlling and limiting the kind of code it 'learns' from.


> Is it maintainable? Well it's AI that's going to maintain it.

That's what's currently not possible. It might work in a small webapp or similar, but in a large system it absolutely falls apart when it has to maintain the code. Sure, it can fix a bug, but it doesn't yet understand the side effects the fix creates.

Maybe in the future that will also be possible. I do agree with you that business/management won't care about long-term impacts if short-term gains are possible.


I also think this is why AI works okay-ish on tiny greenfield webapps and absolutely doesn't on large legacy software.

You can't accurately plan every little detail in an existing codebase, because you only find out about all the edge cases and side effects when you try to work in it.

So, sure, you can plan what your feature is supposed to do, but your plan for how to do it will change the minute you start working in the codebase.


Your computer could also be used as part of a botnet, or as a machine to commit crimes from. Not all malware/viruses are used to directly steal from the target.


For MS, that erosion is currently visible in every single one of their products.

Azure, Office, Visual Studio, VS Code, and Windows are all shipping faster than ever, but so much of it is unfinished, buggy, or incompatible with existing things.

