It feels like Google is in panic mode; the only thing it can think of is to put a chatbot everywhere, just because it can. I don't see a value proposition at all.
My understanding is that, for the past several years, every manager at Google has had a quarterly goal to integrate genAI into their team's product (regardless of whether it makes sense to), so you're not wrong.
Interesting formulation! It captures the intuition of "smartness" when solving a problem. However, what about asking good questions or proposing conjectures?
Both use immediate mode rendering. Both have the “single header” design. There doesn’t appear to be any shared implementation.
The examples use Raylib as a renderer behind the layout engine. I suppose it would be possible to use Dear ImGui as a renderer, but you might have to write some glue code.
Flow-based visual programming has always been a challenging field, and the question remains: what value does it add compared with text-based coding?
I really want to love it; please give me a strong reason.
Debugging is quite nice. Clicking on a wire and being able to visually see its value at runtime and even present it in a custom UI, like a graph or chart, is awesome.
It's easier to parallelize a flow graph than a sequential series of instructions. (This is why visual flow programming caught on in embedded hardware and IoT.)
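To make the parallelism point a bit more concrete, here's a minimal toy sketch (my own example, not taken from any particular tool): nodes of a dataflow graph can be dispatched as soon as their inputs are ready, so independent nodes run concurrently without the programmer spelling out an order.

    # Toy dataflow scheduler: node name -> (function, upstream dependencies).
    from concurrent.futures import ThreadPoolExecutor

    graph = {
        "read_sensor_a": (lambda: 3, []),
        "read_sensor_b": (lambda: 4, []),  # independent of A -> runs in the same wave
        "combine":       (lambda a, b: a + b, ["read_sensor_a", "read_sensor_b"]),
    }

    def run(graph):
        results, pending = {}, dict(graph)
        with ThreadPoolExecutor() as pool:
            while pending:
                # Every node whose inputs are already computed is ready right now.
                ready = {name: node for name, node in pending.items()
                         if all(dep in results for dep in node[1])}
                futures = {name: pool.submit(fn, *(results[d] for d in deps))
                           for name, (fn, deps) in ready.items()}
                for name, fut in futures.items():
                    results[name] = fut.result()
                    del pending[name]
        return results

    print(run(graph))  # {'read_sensor_a': 3, 'read_sensor_b': 4, 'combine': 7}

A sequential script would have to pick some fixed order for the two reads; the graph form leaves that decision to the scheduler.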
I feel it's a pretty dangerous optimization before we REALLY understand what's going on inside the LLM. E.g., people who believe in the geometric interpretation will have something to say, and it would probably hurt if you are using "filler" tokens.
Besides, the assumption (not a universal fact) that we form complete sentences in our mind before articulating them word by word seems to oversimplify what happens in our minds: do we really have a complete plan before we start talking/typing? As a Buddhist, I lean towards it being an illusion. Furthermore, what about simultaneous thoughts? Are we linear thinkers at the sentence level?
The optimization does not affect the LLM's output; it's guaranteed to produce results equivalent to decoding directly. Let's not treat the LLM as some magic that resembles our mind; it's just another program that produces sentences that happen to make sense.
> Let's not treat the LLM as some magic that resembles our mind; it's just another program that produces sentences that happen to make sense.
"That happen to make sense" is hiding a lot of magic. It would be statistically impossible to make as much sense as LLMs do in response to prompts if it did not actually make semantic distinctions. If it makes semantic distinctions, then it does resemble the human mind in at least one way.
According to the original Jacobi decoding paper, it was set in machine translation tasks with an encoder + decoder, where the parallel algorithm was applied only to the decoder part.
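For anyone curious, here is a toy, runnable sketch of the Jacobi-style iteration being discussed (my own illustration, not code from the paper). next_token is just an arbitrary deterministic function standing in for the model's greedy next-token choice; in a real transformer, all positions would be refreshed in a single batched forward pass.

    # Toy sketch of Jacobi-style parallel decoding (illustrative only).
    # next_token() stands in for "the model's greedy next-token choice";
    # here it is an arbitrary deterministic function so the snippet runs on its own.

    def next_token(prefix):
        return (sum(prefix) * 31 + len(prefix)) % 50

    def sequential_decode(prompt, n):
        seq = list(prompt)
        for _ in range(n):
            seq.append(next_token(seq))
        return seq[len(prompt):]

    def jacobi_decode(prompt, n):
        guess = [0] * n  # arbitrary initial guess for n tokens
        while True:
            # Refresh every position "in parallel", each conditioned on the current guess.
            new = [next_token(list(prompt) + guess[:i]) for i in range(n)]
            if new == guess:  # fixed point reached
                return new
            guess = new

    prompt = [7, 3]
    assert jacobi_decode(prompt, 8) == sequential_decode(prompt, 8)

At a fixed point, guess[i] == next_token(prompt + guess[:i]) for every i, which is exactly the condition satisfied by greedy sequential decoding; that's where the "equivalent results" guarantee mentioned above comes from.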
Let's not treat our mind as something magical. It's just another program that learned to speak by consuming lots of training input. The implementation might look slightly different from the outside, but from a mathematical perspective, artificial neural networks are proven to be at least as capable as the human mind.
That's really nowhere near enough of a proof. You'd need to prove that a human brain is equivalent to a mathematical function, and that that function can be sufficiently approximated by an NN to be functionally identical.
Additionally, the UAT doesn't actually prove NNs can approximate any function: non-continuous functions and unbounded domains aren't covered.
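For reference, one common form of the universal approximation theorem (roughly the Cybenko/Hornik-style statement, for a fixed non-polynomial activation σ) only speaks about continuous functions on compact sets:

    \text{For every } f \in C(K),\; K \subset \mathbb{R}^n \text{ compact, and every } \varepsilon > 0,
    \text{ there exist } N, a_i, w_i, b_i \text{ such that }
    \sup_{x \in K} \Big| f(x) - \sum_{i=1}^{N} a_i \, \sigma(w_i \cdot x + b_i) \Big| < \varepsilon.

Both hypotheses (continuity of f, compactness of K) are part of the statement; it says nothing about discontinuous functions or about approximation on all of R^n.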
That assumption might be useful in this context, but I think it's pretty clearly not true. Ask anyone to tell you about a complex past event with a lot of parallel branches and you'll quickly see them add bits, pieces and tangents midsentence to cover the full range of events. I don't think I've seen the sentence granularity hypothesis in any serious scientific context before.
Can't speak for everyone but I definitely don't mentally form complete sentences before talking. Sometimes I grammatically talk myself into a corner in the middle of a sentence and need to use some awkward words/phrases to finish my thought, or simply pause and restart the phrase from the beginning.
I feel surprisingly disconnected from my speaking self, acting as more of an observer, who is sometimes surprised at what I come up with. It just flows. I feel I have very little need for input.
But, I also feel fairly disconnected from my thinking self. I point my attention at something and solutions usually just pop out, maybe with some guidance/context forming required, in the form of internal dialog, which is usually of a rubber ducky style format [1], or mental testing of that mostly spontaneous solution.
I feel the "real" me is the one sensing/observing, which includes the observing of those spontaneous solutions, and what I say.
We don't appear to form words sequentially from underlying parts, even though in many languages they are broken down into smaller units that carry semantic meaning themselves. There doesn't seem to be any clear reason for this to suddenly break down at the sentence level.
The problem is then that the total amount of computation drops dramatically, which leads to much less "thinking" power. I think the idea originated from an understanding that when we write/speak, we have an overall idea. My current hypothesis is that it's probably an illusion.
You may want to search for the "filler token" papers to read.
People make the same kind of mistake all the time: instead of judging the content, we look down on things that are "mostly repackaging" what already exists. Brand new things are extremely rare; most are just remixes of things we see every day.