
I'm not a fan of the database lookup analogy either.

The analogy I prefer when teaching attention is celestial mechanics. Tokens are like planets in (latent) space. The attention mechanism is a kind of "gravity" in which every token influences every other token, pushing and pulling it around in latent space to refine its meaning. But instead of depending on distance and mass, this gravity is proportional to semantic inter-relatedness, and instead of physical space it acts in a latent space.

https://www.youtube.com/watch?v=ZuiJjkbX0Og&t=3569s
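
To make the analogy concrete, here is a minimal NumPy sketch (my own illustration, not code from the video) of one attention step viewed this way: the softmax weights play the role of pairwise "gravity", set by query-key relatedness rather than distance or mass, and the residual update is the resulting nudge in latent space.

    import numpy as np

    def attention_step(X, Wq, Wk, Wv):
        """One 'gravity' update: each token is pulled by every other token,
        with pull strength given by semantic relatedness, not distance or mass."""
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = Q @ K.T / np.sqrt(K.shape[-1])            # pairwise relatedness
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)                 # softmax: per-pair "gravity"
        pull = w @ V                                       # weighted pull toward the others
        return X + pull                                    # residual = nudge in latent space

    # Toy usage: 4 tokens living in an 8-dimensional latent space.
    rng = np.random.default_rng(0)
    d = 8
    X = rng.normal(size=(4, d))
    Wq, Wk, Wv = (0.1 * rng.normal(size=(d, d)) for _ in range(3))
    X_refined = attention_step(X, Wq, Wk, Wv)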


Then I think you’ll like our project, which aims to find the missing link between transformers and swarm simulations:

https://github.com/danielvarga/transformer-as-swarm

Basically a boid simulation where a swarm of birds can collectively solve MNIST. The goal is not some new SOTA architecture; it is to find the right trade-off where the system already exhibits complex emergent behavior while the swarming rules are still simple.
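
For readers who haven't met boids before, here is a minimal sketch of the three classic rules (cohesion, separation, alignment) that such a swarm iterates. This is my own simplification, not the repo's code; the MNIST readout would come from the swarm's final configuration on top of rules like these.

    import numpy as np

    def boid_step(pos, vel, radius=1.0, w_coh=0.01, w_sep=0.05, w_ali=0.05, dt=0.1):
        """One step of the classic boid rules: cohesion, separation, alignment."""
        new_vel = vel.copy()
        for i in range(len(pos)):
            diff = pos - pos[i]
            dist = np.linalg.norm(diff, axis=1)
            near = (dist > 0) & (dist < radius)                 # neighbours within the radius
            if not near.any():
                continue
            cohesion = diff[near].mean(axis=0)                  # steer toward neighbours' centre
            separation = -(diff[near] / dist[near][:, None] ** 2).sum(axis=0)  # avoid crowding
            alignment = vel[near].mean(axis=0) - vel[i]         # match neighbours' velocity
            new_vel[i] += w_coh * cohesion + w_sep * separation + w_ali * alignment
        return pos + dt * new_vel, new_vel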

It is currently abandoned due to a serious lack of free time (*), but I would consider collaborating with anyone willing to put in some effort.

(*) In my defense, I’m not slacking in the meantime: https://arxiv.org/abs/2510.26543 https://arxiv.org/abs/2510.16522 https://www.youtube.com/watch?v=U5p3VEOWza8


This is an excellent analogy! Thank you!

The site’s domain name is the best use of a .fail TLD ever.


OT from TFA, so hijacking your thread …

I don’t recall if there was ever a difference between “abort” and “fail.” I could choose to abort the operation, or tell it … to fail? That this is a failure?

¯\_(ツ)_/¯


Take reading a file from disk.

Abort would cancel the entire file read.

Retry would attempt that sector again.

Fail would fail that sector, but the program might decide to keep trying to read the rest of the file.

In practice, Abort and Fail were often the same.
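
As a rough illustration (not how DOS actually implemented it, and read_sector here is hypothetical), the distinction looks something like this in a read loop:

    def read_file(sectors, ask_user):
        """Toy read loop illustrating Abort vs Retry vs Fail.
        ask_user(sector) returns 'abort', 'retry', or 'fail' when a read errors out."""
        data = []
        for sector in sectors:
            while True:
                try:
                    data.append(read_sector(sector))   # hypothetical low-level read
                    break
                except IOError:
                    choice = ask_user(sector)
                    if choice == "retry":
                        continue                       # Retry: attempt the same sector again
                    if choice == "fail":
                        data.append(None)              # Fail: give up on this sector only
                        break
                    return None                        # Abort: cancel the entire file read
        return data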


Makes sense. Maybe I ran across a proper use a time or two back then and just don’t remember. But the two behaving the same was the overwhelming experience.


As the guy who did GPT2 in Excel, very cool and kudos!!

Curious why you chose WebGL over WebGPU? Just to show it can be done?

(Also see my other comment about fetching weights from huggingface)


This was a final project for a graphics class where we used WebGL a lot. Also, I was just more familiar with OpenGL and haven't looked that much into WebGPU.


Someone needs to implement Excel using graphics shaders now.



Probably because WebGPU support is still rather iffy.


> Curious why you chose WebGL over WebGPU? Just to show it can be done?

For a WebGPU implementation, one can use transformers.js [1] directly (or many other libraries, actually); maybe WebGL is more original.

[1]: https://huggingface.co/docs/transformers.js/index


Transformers.js wraps the ONNX runtime, which is rather versatile (WASM, WebGL, WebGPU, and WebNN). It's not the backend that makes it novel.


ianand, I immediately thought of you when I saw this post. Miss you friend.


Dude, been forever. Thanks. Will DM you.


Check out https://github.com/jseeio/gpt2-tfjs, which fetches the weights for GPT2 from Hugging Face on the fly.


Sounds like an interesting masters thesis. Is your masters thesis available online somewhere?


Well, I'm not sure about the final doc that went to the university, but this is the almost-final draft.

https://docs.google.com/document/d/e/2PACX-1vSyWbtX700kYJgqe...

Since it's in Cyrillic, you should perhaps use a translation service. There are some screenshots showing results, though as I was on a really tight deadline, and it's a master's thesis rather than a PhD, I decided not to go into an in-depth evaluation of the proposed methodology against SPIDER (https://yale-lily.github.io/spider). You can still find the simplified GBNF grammar, along with some of the outputs. Interestingly, the grammar benefits from (exploits) a bug in llama.cpp that allows some sort of recursively chained rules. The bibliography is in English, but there is so much written on the topic that it is by no means comprehensive.

Sadly, no open inference engine (at the time of writing) was good enough at both beam search and grammars, so this whole thing perhaps needs to be redone in PyTorch.

If I find myself in a position to do this for commercial goals, I'd also explore the possibility of having human-curated SQL queries against the particular schema in order to guide the model better, and then doing RAG on the DB for more context. Note: I'm already reducing the E/R model to the minimal connected graph that includes all entities of particular interest to the present query.
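
As a sketch of what that reduction can look like (my own illustration using networkx's Steiner-tree approximation, not necessarily what the thesis does): treat the schema as a graph of tables and foreign-key edges, and keep only the smallest connected subgraph that joins every table the question touches.

    import networkx as nx
    from networkx.algorithms.approximation import steiner_tree

    # Toy schema graph: nodes are tables, edges are foreign-key relationships.
    schema = nx.Graph()
    schema.add_edges_from([
        ("customers", "orders"),
        ("orders", "order_items"),
        ("order_items", "products"),
        ("products", "suppliers"),
        ("customers", "addresses"),
    ])

    # Tables the user's question actually mentions.
    entities_of_interest = ["customers", "products"]

    # Smallest connected subgraph that still joins all relevant tables;
    # only this reduced schema goes into the prompt.
    reduced = steiner_tree(schema, entities_of_interest)
    print(sorted(reduced.nodes()))   # ['customers', 'order_items', 'orders', 'products']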

And finally, since you got this far: the real problem with restricting LLM output with grammars is tokenization. Parsers work by reading one character at a time, while tokens are very often several characters, so the parser in a way needs to be able to "look ahead", which it normally does not. I believe OpenAI wrote that they realized this too, but I can't find the article at the moment.
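
A minimal sketch of the issue (the incremental-parser interface here is hypothetical): the constraint has to be enforced per token, but a candidate token may span several grammar characters, so the parser state has to be speculatively advanced across the whole token before that token can be allowed or masked.

    def allowed_tokens(parser_state, vocab):
        """Return the ids of tokens whose entire character sequence keeps the parse valid.
        parser_state.copy() and .advance(ch) are a hypothetical incremental-parser API."""
        allowed = []
        for token_id, token_text in vocab.items():
            state = parser_state.copy()                         # speculative copy per candidate
            if all(state.advance(ch) for ch in token_text):     # the "lookahead", char by char
                allowed.append(token_id)
        return allowed

    # At each decoding step, logits of every token outside `allowed` are set to -inf,
    # so sampling can never take the output outside the grammar.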


Thanks. Took a quick look; I definitely needed Google Translate, but it seems to have worked well enough to get the gist of it.


Fun fact: a decade ago, the designer of HAML and Sass created a modern alternative to XSLT. https://en.wikipedia.org/wiki/Tritium_(programming_language)


The model architecture is the same during RL, but the training algorithm is substantially different.


> LLMs that haven't gone through RL are useless to users. They are very unreliable, and will frequently go off the rails spewing garbage, going into repetition loops, etc...RL learning involves training the models on entire responses, not token-by-token loss (1).

Yes. For those who want a visual explanation, I have a video where I walk through this process, including what some of the training examples look like: https://www.youtube.com/watch?v=DE6WpzsSvgU&t=320s
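
To make the contrast concrete, here is a minimal PyTorch-style sketch (my own simplification, not the video's code and not full RLHF/PPO): pretraining scores each position against a given next token, while the RL stage scores the whole sampled response with a single scalar reward.

    import torch
    import torch.nn.functional as F

    def pretraining_loss(logits, target_ids):
        """Next-token prediction: per-token cross-entropy against given text."""
        return F.cross_entropy(logits.view(-1, logits.size(-1)), target_ids.view(-1))

    def rl_policy_loss(token_logprobs, reward):
        """REINFORCE-style update: the entire sampled response gets one scalar reward
        (e.g. from a reward model), and every token's log-prob is pushed up or down
        together based on that single score."""
        return -(reward * token_logprobs.sum())

    # Toy usage with fake numbers: 1 sequence, 5 positions, vocab of 100.
    logits = torch.randn(1, 5, 100)
    targets = torch.randint(0, 100, (1, 5))
    lm_loss = pretraining_loss(logits, targets)

    sampled = torch.log_softmax(torch.randn(5, 100), dim=-1)[torch.arange(5), targets[0]]
    rl_loss = rl_policy_loss(sampled, reward=1.0)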


Hey, creator of spreadsheets-are-all-you-need.ai here. Thanks for the mention!

I now have a web version of GPT2 implemented in pure JavaScript for web developers at https://spreadsheets-are-all-you-need.ai/gpt2/.

The best part is that you can debug and step through it in the browser dev tools: https://youtube.com/watch?v=cXKJJEzIGy4 (100-second demo). Every single step is in plain vanilla client-side JavaScript (even the matrix multiplications). You don't need Python, etc. Heck, you don't even have to leave your browser.

I recently did an updated version of my talk with it for JavaScript developers here: https://youtube.com/watch?v=siGKUyTk9M0 (52 min). That should give you a basic grounding in what's happening inside a transformer.


Reminder that Microsoft ships RWKV with Windows (~1.5 billion devices), making it probably the most widely deployed non-transformer model out there. Amazing work! https://blog.rwkv.com/p/rwkvcpp-shipping-to-half-a-billion

PS: Eugene, you should brag about that on the RWKV homepage.

