Because corporations literally forbid the people working for them from living in cheap places. My corporation forbids it: I must live within commuting distance of one of our offices, despite having worked fully remote for years. They also made a half-hearted attempt to shove us back into the office in 2023, but that was universally ignored and thankfully not enforced.
Recently I had to send a file from WhatsApp to Telegram, because apparently downloading a file from WhatsApp is forbidden, and that's a "feature". Facepalm....
PS: afaik IPFS doesn't guarantee file storage; a separate paid middleman is required for that.
One of the problems would be acquiring said corpus. NN corporations got away with scraping all human-made content for free (arguably stealing it all), but no one can really prove that their specific content was taken without permission, so no lawsuits. The NYT tried, but I don't know the status of that case. But if an NN corporation comes out explicitly saying "here, we are using a Nature journal dump from 2024", then the Nature journal will come to them and say "oh, really?".
The so-called AI can't "know". It has no understanding of whether the generated text is an answer or not. You can't force that instruction onto a neural network; at best it adjusts the generated text slightly, and you think it has somehow started understanding.
How confident can you be in this? Have you analyzed what exactly the billions of weights do?
I’ve got my opinions about what LLMs are and what they aren’t, but I don’t confidently claim that they must be one thing or the other. There’s a lot of stuff in those weights.
Except the weights form complex relationships in order to produce very human-usable responses. You can't look at the weights and say it is doing this or not doing that, unless you dive into a particular model.
Especially when you have billions of weights.
These models are finding general patterns that apply across all kinds of subjects, patterns they aptly recognize and weave into all kinds of combinations. They converse sensibly on virtually every topic known to humankind, and can talk sensibly about any two topics in conjunction. There is magic.
Not mystic magic, but we are going to learn a lot as we decode how their style of processing (after training) works. We don't have a good theory of how either LLMs or we "reason" in the intuitive sense, and yet they learn to do it. It will inspire improved and more efficient architectures.
Love your ending. I have spent four decades looking at real neurons, real synapses, and real axons, and I can tell you with complete confidence that we are all just zombies, imagining we are really doing everything while it all happens automatically, including learning, via algorithms we have only a vague understanding of.
That is a strange thought. I could look at all my own brain's neurons, even with a heads up display showing all the activity clearly, and have no idea that it was me.
The closest I got to biological neurons was the toy-but-interesting problem of using a temporal pattern of neuron spikes to deduce the weights of arbitrarily connected (including recurrent) networks of simple linear integrate-to-threshold, spike-and-reset "neurons".
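For anyone curious, a minimal sketch of just the forward model (my own illustration; parameter values are made up, and the interesting inverse problem, recovering W from the spike times, is the part left out):

    import numpy as np

    def simulate(W, drive, steps=200, threshold=1.0):
        # W[i, j] = weight from neuron j to neuron i; drive = constant external input
        n = W.shape[0]
        v = np.zeros(n)                # membrane potentials
        fired = np.zeros(n)            # spikes emitted on the previous step
        events = []                    # recorded (step, neuron) spike times
        for t in range(steps):
            v += drive + W @ fired     # linear integration of external + recurrent input
            fired = (v >= threshold).astype(float)
            events.extend((t, i) for i in np.flatnonzero(fired))
            v[fired > 0] = 0.0         # reset after spiking
        return events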
Algorithms can be nearly magical. In 1941 the world woke up to the “magic” of feedback, and ten years later cybernetics was the rage. We humans are just bags of protoplasm, but it seems rather magical to me to be human.
There's a distinction between "a model" and the chain of tools and models you employ when asking something on chatgpt.com or any of the consumer-facing alternatives.
The latter is a chain of models: some specialized in dissecting questions, some specialized in choosing the right models and tools (e.g. there's a calculation in there, let's push that part to a simple Python function that can actually count stuff, and pull the rest through a generic LLM). I experiment with such toolchains myself, and it's baffling how fast the complexity of all this is growing.
A very simple example would be: question -> does_it_want_code_generated.model -[yes]-> specialized_code_generator.model | -[no]-> specialized_english_generator.model
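A toy sketch of that routing step (all the names and the keyword classifier here are hypothetical stand-ins; in practice the classifier would itself be a model):

    def wants_code(question: str) -> bool:
        # stand-in for "does_it_want_code_generated.model"
        return any(k in question.lower() for k in ("code", "function", "script"))

    def generate_code(question: str) -> str:
        return f"[specialized_code_generator.model answers {question!r}]"

    def generate_english(question: str) -> str:
        return f"[specialized_english_generator.model answers {question!r}]"

    def route(question: str) -> str:
        return generate_code(question) if wants_code(question) else generate_english(question)

    print(route("write a function that reverses a list"))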
So, sure: a model has no "knowledge", and nor does a chain of tools. But having, e.g., a model specialized in (i.e. trained on or enriched with) all scientific papers ever, or maybe even a vector DB with all that data, somewhere in the toolchain, in charge of either finding the "very likely references" or denying an answer, would help a lot. It would for me.
Sure, chains of networks can guess at the "passable" answer much better/faster/cheaper etc. But that doesn't remove the core issue: none of the sub-networks or decision trees understands what it generates, so it can't abort its work and output "no answer" or something similar.
The whole premise of the original request was that a user gives the NN a task which has a verifiable (maybe partially) answer, sees an incorrect answer, and wishes that a "failure" were displayed instead. But an NN can't verify the correctness of its own output. After all, the G in GPT stands for Generative.
My simple RAG setup has steps that return "We don't have this information" if, e.g., our vector DB returns entries with far-too-low relevancy scores, or if the response from the LLM fails to include certain attributes in its answer, and so on.
Edit: TBC: these "steps" aren't LLMs or other models. They're simple code with simple if/elses and the occasional regex.
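A minimal sketch of what such a guard step can look like (the client APIs, threshold and attribute names here are illustrative placeholders, not our actual code):

    MIN_SCORE = 0.75                        # relevancy cutoff, arbitrary scale
    REQUIRED_ATTRS = ("answer", "sources")  # attributes the LLM response must include

    def guarded_answer(question, vector_db, llm):
        hits = vector_db.search(question, top_k=5)         # placeholder client API
        if not hits or max(h.score for h in hits) < MIN_SCORE:
            return "We don't have this information"
        response = llm.answer(question, context=hits)      # placeholder client API
        if any(attr not in response for attr in REQUIRED_ATTRS):
            return "We don't have this information"
        return response["answer"]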
Again: an LLM/NN indeed has no "understanding" of what it creates, especially the LLMs that are "just" statistical models. But the tooling around it, the entire chain, can very well handle this.
The trust in Mozilla went from 70 to 60. The trust in the Google monopoly is approximately -99999999999999999, give or take a few points. You just can't compare them.
A few libertarians wanted to free humans from greedy banks and governments. But suddenly they discovered that banks and governments are a feature, not a problem. A fully trustless society is a hellish dystopia, of the kind portrayed in sci-fi books. And systems with at least some amount of trust function better when the trusted entity is a government, or at least some known person, and not some criminal sitting in a non-extradition offshore haven, operating a token network from a single laptop with zero oversight and zero restrictions.
PS: but kudos where it's due - at least the Monero folks have a clear purpose and a clear vision, and they are executing on it. Unlike the slimy gamblers behind BTC and other tokens, who all pivoted to scams and frauds and completely abandoned any notion of the original whitepaper about a decentralized private currency.
The Lightning Network is impossible to use at scale because any move into or out of L2 involves an operation on L1. Just creating an L2 wallet for every adult on Earth would take 10-15 years, not even counting humans born in the meantime. The second problem is that opening a bidirectional channel to any merchant is also an L1 operation, which makes the issue worse. So the Lightning Network's technical flaws led to a total centralization of L2. Basically there are now "banks" in L2 which issue their own fake virtual tokens, all L2 transactions happen inside a centralized "bank" using that bank's IOUs, and once in a while it settles accounts. Basically a shittier, in-all-aspects-worse copy of the existing monetary system: not decentralized, not private, not fast, not safe for customers. Garbage tech and a garbage technical implementation.
Say I sell you a meal. You pay me for it with an L2 transaction. I use that L2 transaction to pay the taxi driver for my ride.
In that very short story, the value representing stuff (the meal) moved onto L2, and then out of L2 in exchange for a taxi ride. Neither move involved L1.
The L2 implementation for BTC works like this (roughly). One L2 wallet needs to open a channel to a peer and lock in it a number of tokens equal to or exceeding the expected transaction, and iirc a reverse channel also needs to be opened with the same parameters. Then one L2 wallet can transact with another L2 wallet fast and without involving L1, yes. Channels can also be chained in arbitrary topologies, and the LN then calculates an optimal set of channels and funds inside them to make a transaction. That is an NP-hard problem which needs to be solved in real time for every transaction, and whose difficulty increases exponentially with every new participant in the network. But those are yet other issues LN has.
The problem is that opening an L2 channel is an L1 operation, and closing an L2 channel is an L1 operation. So if an individual wants to use BTC L2 as designed, it would involve a lot of L1 operations, taxing the network with every one of them.
This is all as absurd as it sounds. The hardest problem with discussing the Lightning Network is choosing which idiotic design decision to dismantle first :) .
So eventually the system evolved, ditched the stupid decentralization (sarcasm alert), and centralized into a few big entities (Lighthouse iirc), so the full L2 network is no longer needed: every user and merchant opens one single channel to the central bank, and then all operations happen inside the central bank via IOUs. This removes the need for most of the channels and gets rid of the unneeded decentralization. But even then, with a single channel operation per user per lifetime, it would take 10-15 years of L1's anemic throughput just to onboard everyone onto LN.
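For scale, the back-of-envelope arithmetic behind that estimate (my own rough numbers: adult population and the commonly quoted L1 throughput figures):

    adults = 5.5e9                      # rough number of adults on Earth
    seconds_per_year = 365 * 24 * 3600

    for tps in (7, 14):                 # commonly quoted bounds for BTC L1 throughput
        years = adults / (tps * seconds_per_year)
        print(f"{tps} tps -> {years:.1f} years")
    # 7 tps -> 24.9 years, 14 tps -> 12.5 years, for one
    # channel-open transaction each, ignoring all other L1 traffic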
I saw a 100+ member community force-migrate from Telegram to Discord. Most of the people complained about the UI/UX, me included. And Telegram is not even very good in that area. Normal people are not fond of Discord UI specifically; they just get used to it.
And do you think a Discord community force-migrated to Telegram wouldn't have their own UX complaints? This is gonna happen almost no matter what platforms you're talking about, people are used to their thing and don't like seeing it forcibly changed.
> Normal people are not fond of Discord UI specifically; they just get used to it.
Disagree. I think most people are pretty okay with it, maybe not in love with it, but they don't see it as particularly bad in most respects either. I use Google's corp chat for work and my god, Discord is SO much better than that, it blows me away.
I came over to Discord in January 2016, from a combination of TS3, forums & Skype, for both online and in-person gaming groups. It decimated all the competition in gaming spaces for voice chat, async text chat and sync text chat within 6 months; every single guild or group moved over almost overnight. That should go to show how absolutely revolutionary Discord was, and what a huge software innovation it actually was. People _loved_ the UX & UI. Users loved the chat and channel interfaces; admins loved the fairly easy-to-understand moderation tools.

It's gotten a little less loved as the app has become less and less reliable, the VoIP quality has been reduced (especially for non-nitro-boosted servers) and new, unwanted features are added. However, I don't understand how the core UI is bad. It's not puke- or emoji-ridden beyond what users make of it -- avatars, server icons & banners, role colours, emojis in channel names etc. are all user-defined. Everything else (beyond the shitty new features they roll out for 6 months and then kill) is pretty standard contemporary Electron app design, and honestly minimalist in some ways. It is certainly minimalist and easier to navigate compared to something like Element.
I think that HN users seem not to fundamentally understand the needs of online groups beyond what is necessary to carry out an asynchronous open source engineering project. Much of the "bells and whistles" that Discord offers is _essential_ to both the day-to-day communication of these groups and to their moderators. Element does not fully replicate some core features and offers an outright less stable experience. Slack/Teams are not accessible to private users. Telegram has even fewer features than Element/Matrix.