In essence it is a thing that is actually prompting your own brain… seems counterintuitive, but that's how I believe this technology should be used.





This technology (which I had a small part in inventing) was not based on intelligently navigating the information space; it's fundamentally based on forecasting your own thoughts by weighting your pre-linguistic vectors and feeding them back to you. Attention layers, in conjunction with later refinements, allowed that to be grouped at a higher order and to scan a wider beam space to reward higher-complexity answers.
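(For readers who want the mechanism described above made concrete: the sketch below shows next-token forecasting combined with a beam over candidate continuations. It is a minimal illustration only - the toy vocabulary, the bigram stand-in for a trained network, and the scoring are all invented here, not the commenter's actual system or any real model.)

    import math

    VOCAB = ["the", "cat", "sat", "on", "mat", "."]

    def next_token_logits(prefix):
        # Stand-in for a real LLM forward pass: score each vocabulary item
        # given the prefix. A toy bigram table plays the role that attention
        # layers play in a real model. (Purely illustrative.)
        bigrams = {"the": "cat", "cat": "sat", "sat": "on", "on": "mat", "mat": "."}
        last = prefix[-1] if prefix else "the"
        return [3.0 if tok == bigrams.get(last) else 0.0 for tok in VOCAB]

    def log_softmax(logits):
        # Convert raw scores into log-probabilities.
        m = max(logits)
        z = math.log(sum(math.exp(x - m) for x in logits)) + m
        return [x - z for x in logits]

    def beam_search(prompt, steps=4, beam_width=3):
        # Each beam is (cumulative log-prob, token sequence). At every step
        # all beams are expanded by every token and only the top
        # `beam_width` continuations are kept - the "wider beam space"
        # mentioned above.
        beams = [(0.0, list(prompt))]
        for _ in range(steps):
            candidates = []
            for score, seq in beams:
                logps = log_softmax(next_token_logits(seq))
                for tok, lp in zip(VOCAB, logps):
                    candidates.append((score + lp, seq + [tok]))
            beams = sorted(candidates, key=lambda c: c[0], reverse=True)[:beam_width]
        return beams

    for score, seq in beam_search(["the"]):
        print(f"{score:7.3f}  {' '.join(seq)}")

The point of the sketch: nothing in it "understands" anything; it only continues the prompt with whatever scores highest, which is the sense in which the model feeds your own input back to you.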

When trained on chatting (a reflection system on your own thoughts), it mostly just uses a false mental model to pretend to be a separate intelligence.

Thus the term stochastic parrot (which for many of us is actually pretty useful).


Thanks for your input - great to hear from someone involved that this is the direction of travel.

I remain highly skeptical of this idea that it will replace anyone - the biggest danger I see is people falling for the illusion that the thing is intrinsically smart when it's not. It can be highly useful in the hands of disciplined people who know a particular area well, augmenting their productivity, no doubt. The way we humans come up with ideas is highly complex: personally, my ideas come out of nowhere and are mostly derived from intuition that can only be expressed in logical statements ex post.


Is intuition really that different from an LLM having little knowledge about something? It's just responding with the most likely sequence of tokens, using the information most adjacent to the topic... just like your intuition.

With all due respect, I'm not even going to give a proper response to this… intuition that yields great ideas is based on deep understanding. LLMs exhibit no such thing.

These comparisons are becoming really annoying to read.



