Location: South East Asia (an expat)
Remote: only
Willing to relocate: yes
Technologies: FP, ML, PLT, SQL
Résumé/CV: https://lngnmn2.github.io/
Email: lngnmn2@yahoo.com
Every training set will produce a different set of weights; even the same training set will produce different weights with a different initialization, let alone with slightly different architectures.
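A minimal sketch of that claim (numpy only; the toy data and tiny MLP are hypothetical stand-ins): train the same architecture on the same data from two random seeds and compare the resulting weights.

    import numpy as np

    def train_mlp(seed, X, y, hidden=4, lr=0.1, steps=2000):
        rng = np.random.default_rng(seed)
        W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
        W2 = rng.normal(scale=0.5, size=(hidden, 1))
        for _ in range(steps):
            h = np.tanh(X @ W1)            # hidden activations
            pred = h @ W2                  # network output
            err = pred - y                 # gradient of squared error w.r.t. pred
            g2 = h.T @ err / len(X)
            g1 = X.T @ ((err @ W2.T) * (1.0 - h**2)) / len(X)
            W2 -= lr * g2
            W1 -= lr * g1
        return W1, W2

    rng = np.random.default_rng(0)
    X = rng.normal(size=(64, 2))
    y = X[:, :1] * X[:, 1:2]               # toy target

    Wa1, Wa2 = train_mlp(seed=1, X=X, y=y)
    Wb1, Wb2 = train_mlp(seed=2, X=X, y=y)
    print(np.allclose(Wa1, Wb1))           # False: same data, different weights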
So what exactly is the point, except "look at us, we are so clever"?
Absolutely, unless you want to make any life-changing decisions based on them.
This is also part of the modern culture - producing useless verbiage is OK on social media, and even in scientific communities as "research papers".
In reality, though, no accurate predictions can be made from mere information, which is all an indexed text is.
There are a few fundamental principles at various levels behind this statement.
One is that the actual causes of events are at a different level from the texts themselves. Another is that mere observation and counting (weighting) will always miss a change in the underlying process, and so on.
There is also the notion of a "fully observable" environment, which is why the validity of the "prediction" that the Sun will rise tomorrow rests not on mere "statistics" but on knowing the dynamics within the Sun (the actual process).
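A toy illustration of the counting point (the data here is hypothetical): a process silently changes, and the frequency estimate built from past observations lags far behind the new reality.

    import numpy as np

    rng = np.random.default_rng(42)
    before = rng.random(1000) < 0.5    # a fair coin for 1000 flips
    after = rng.random(10) < 0.9       # the process changed: bias is now 0.9

    observed = np.concatenate([before, after])
    print(f"frequency estimate: {observed.mean():.2f}")   # still ~0.50
    print("true current bias:  0.90")                     # counting missed the change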
But yes, everyone is just riding the hype.
It should be able to predict the outcomes of human behaviours in specific cases, such as elections.
There is enough knowledge out there on how humans behave, both from examples of past events and from scientific studies.
Then, for elections, there is plenty of human content about how people are behaving in the run-up to the event. Elections specifically are long events that people are very vocal about. If you try to use it for something like "will the demand for toilet paper go up or down this week?" you aren't going to see the same results, as there is not enough data on social media.
From that an LLM should be able to predict - well, extrapolate - the outcome: basically a glorified opinion poll. This, however, would need a near real-time knowledge cutoff, so it's not something that current LLMs can do.
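A crude sketch of that "glorified opinion poll" idea (candidate names, posts, and keyword lists are all hypothetical; a real system would need fresh data rather than a stale cutoff):

    from collections import Counter

    POSITIVE = ("vote for", "support", "backing")
    NEGATIVE = ("never", "against", "distrust")

    posts = [
        "I will vote for Alice this time",
        "never voting Bob again",
        "backing Alice all the way",
        "I distrust Alice, honestly",
    ]

    def tally(posts, candidates=("Alice", "Bob")):
        score = Counter()
        for post in posts:
            low = post.lower()
            for name in candidates:
                if name.lower() not in low:
                    continue
                if any(w in low for w in POSITIVE):
                    score[name] += 1       # post reads as support
                if any(w in low for w in NEGATIVE):
                    score[name] -= 1       # post reads as opposition
        return score

    print(tally(posts))   # Counter({'Alice': 1, 'Bob': -1})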
This wrong idea, for me, encapsulates everything that is wrong with AI in general as well.
Human beings are not predictable; we're not automatons that can produce reliable outcomes. We're simply too random.
This is why a "sentient AI" or whatever silliness won't take over the world; not that it isn't smart, but at some point it would have to give orders to humans, who can't be relied on to predictably execute them correctly.
E2EE messages aren't encrypted until they're sent and are decrypted on the receiving end. Scanning can be mandated to happen at either or both of those points.
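A minimal sketch of those two scan points (assumes the third-party "cryptography" package; Fernet's pre-shared symmetric key stands in for a real E2EE key exchange, and scan() is a hypothetical mandated hook):

    from cryptography.fernet import Fernet

    def scan(plaintext: bytes) -> None:
        # A mandated scanner would inspect or report the plaintext here.
        print(f"scanning: {plaintext!r}")

    key = Fernet.generate_key()    # pre-shared key, for the sketch only
    channel = Fernet(key)

    def send(message: bytes) -> bytes:
        scan(message)                      # scan point 1: before encryption
        return channel.encrypt(message)    # only ciphertext crosses the wire

    def receive(ciphertext: bytes) -> bytes:
        message = channel.decrypt(ciphertext)
        scan(message)                      # scan point 2: after decryption
        return message

    print(receive(send(b"hello")))         # b'hello'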
Location: South East Asia
Remote: Only
Willing to relocate: Yes, only Sweden, Norway, Japan, Singapore or US
Technologies: Just read my stuff.
Résumé/CV: https://schiptsov.github.io/
Email: lngnmn2@yahoo.com, schiptsov@gmail.com