
  Location: South East Asia (an expat)
  Remote: only
  Willing to relocate: yes
  Technologies: FP, ML, PLT, SQL
  Résumé/CV: https://lngnmn2.github.io/
  Email: lngnmn2@yahoo.com
Just read my stuff https://lngnmn2.github.io/


Every training set will produce a different set of weights; even the same training set will produce different weights with a different initialization, let alone a slightly different architecture.
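
A minimal sketch of that point, assuming PyTorch (the tiny network and data here are invented for illustration): train the same model on identical data with two different seeds and the learned weights come out different.

  import torch
  import torch.nn as nn

  def train(seed):
      # identical data every run; only the seed (i.e. the initialization) changes
      torch.manual_seed(seed)
      x = torch.linspace(-1, 1, 64).unsqueeze(1)
      y = x ** 2
      model = nn.Sequential(nn.Linear(1, 8), nn.Tanh(), nn.Linear(8, 1))
      opt = torch.optim.SGD(model.parameters(), lr=0.1)
      for _ in range(200):
          opt.zero_grad()
          ((model(x) - y) ** 2).mean().backward()
          opt.step()
      return torch.cat([p.detach().flatten() for p in model.parameters()])

  w0, w1 = train(0), train(1)
  print((w0 - w1).abs().max())  # non-zero: same data, different weights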

So what exactly is the point, except "look at us, we are so clever"?



Maybe this won't be deleted this time.

https://lngnmn2.github.io/articles/llm-predictions/


Absolutely, unless you want to make any life-changing decisions based on them.

This is also part of the modern culture - producing useless verbiage is OK on social media, and even in scientific communities as "research papers".

In reality, though, no accurate predictions can be made from mere information, which is all an indexed text is.

There are a few fundamental principles at various levels behind this statement.

One is that the actual causes of events are at a different level from the texts. Another is that mere observations and counting (weighting) will always miss a change, and so on.

There is also the notion of a "fully observable" environment, which is related to why the validity of the "prediction" that the Sun will rise tomorrow rests not on mere "statistics" but on knowing the dynamics within the Sun (the actual process).

But, yes, everyone is just riding the hype.


It should be able to predict the outcomes of human behaviours in specific cases, such as elections.

There is enough knowledge out there on how humans behave, both from examples of past events and scientific knowledge from studies.

Then, for elections, there is plenty of human content about how people are behaving in the lead-up to the event. Elections specifically are long events that people are very vocal about. If you try to use it for something like "will the demand for toilet paper go up or down this week?" you aren't going to see the same results, as there is not enough data on social media.

From that an LLM should be able to predict - well, extrapolate - the outcome: basically a glorified opinion poll. This, however, would need a near real-time knowledge cutoff, so it's not something that current LLMs could do.
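
A purely hypothetical sketch of the "glorified opinion poll" idea (the posts and the classify stub are invented; in practice classify would be a near-real-time LLM call over scraped content):

  from collections import Counter

  def classify(post: str) -> str:
      """Stub for an LLM prompt like 'Does this post favour candidate A or B?'"""
      return "A" if "support a" in post.lower() else "B"

  posts = [                         # stand-in for recent social-media content
      "I support A because of the housing plan",
      "B all the way, no question",
      "Still support A after the debate",
  ]

  tally = Counter(classify(p) for p in posts)
  print(tally.most_common(1)[0][0])  # crude extrapolation, nothing more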


This wrong idea, for me, encapsulates everything wrong about AI generally as well.

Human beings are not predictable; we're not automatons that can produce reliable outcomes. We're simply too random.

This is why a "sentient AI" or whatever silliness won't take over the world; not that it isn't smart, but at some point it would have to give orders to humans, who can't be relied on to predictably execute them correctly.



  Location: SEA
  Remote: only
  Willing to relocate: Japan, Singapore, Norway, Sweden 
  Technologies: Functional programming, PLT, AI, Trading 
  Résumé/CV: https://lngnmn2.github.io/
  Email: lngnmn2@yahoo.com
Just read my stuff.

Can do remote part time R&D.


Isn't "encrypted messages" implies the impossibility of any scan?

Oh, could it be that they store all the messages as plain text on the server, and "encrypted" is just a meme for a mere TLS connection?

Rhetorical question, of course.


E2EE messages aren't encrypted until they're sent and are decrypted on the receiving end. Scanning can be mandated to happen at either or both of those points.
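
A small sketch of that point, assuming Python's cryptography package (the blocklist and scan hook are made-up placeholders): the plaintext exists on both sides of the E2EE boundary, so a mandated scan can run before encryption or after decryption without touching the cipher itself.

  from cryptography.fernet import Fernet

  BLOCKLIST = {b"forbidden"}              # placeholder for whatever a law would mandate

  def scan(plaintext: bytes) -> None:
      # runs only where the plaintext is visible: sender before encrypting,
      # receiver after decrypting
      if any(term in plaintext for term in BLOCKLIST):
          print("flagged")

  key = Fernet.generate_key()             # shared out of band in this toy example
  box = Fernet(key)

  def send(plaintext: bytes) -> bytes:
      scan(plaintext)                     # client-side scan, pre-encryption
      return box.encrypt(plaintext)

  def receive(token: bytes) -> bytes:
      plaintext = box.decrypt(token)
      scan(plaintext)                     # client-side scan, post-decryption
      return plaintext

  print(receive(send(b"hello")))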


Not general or universal. Only for pre-trained data and with abysmal worst cases.


I'm not positive what you mean here. Are you saying the discovered algorithm isn't actually good? Didn't LLVM accept it?


It is not a number; it is a separate concept, and the current consensus that it is part of the set of numbers and of a monoid is just wrong.


  Location: South East Asia 
  Remote: Only 
  Willing to relocate: Yes, only Sweden, Norway, Japan, Singapore or US
  Technologies: Just read my stuff.
  Résumé/CV: https://schiptsov.github.io/
  Email: lngnmn2@yahoo.com, schiptsov@gmail.com

