The Amazon Knight (2028): Batman hacks Ring cameras to track down the Joker, showing himself to be a rebellious vigilante who's not afraid to break a company's ToS to make justice happen. After the job is done, cut to a montage of Batman telling an Amazon worker about Wayne Enterprises' new villain-detection technology that could be used to upgrade Ring, then screwing cameras into every room of every building in the city, and then proudly telling the bystanders that they won't have to suffer any more. He's invited to a ceremony where Jeff Bezos thanks him. A swarm of anti-evil Amazon drones takes off, flooding the city streets. The morning sun rises over Gotham City, colors become more saturated, and faint gunshots can be heard as every criminal in the city is executed. The civilians run into the streets to cheer it on, finally free from oppression. The screen fades to white, revealing the Ring Camera Pro 3 Batman Edition, complete with a Batman logo on its black outer shell. "Now only $99! (Available for free in partner municipalities)"
> The energy density of fossil fuels is very high (about 50x higher than that of lithium ion batteries)
That doesn't make sense. Batteries are an energy container; they're not energy itself. How can they be compared to a fuel? The direct counterpart to oil or coal is wind or solar radiation itself. Batteries are used to smooth out the supply and store an excess for emergency use, but otherwise those types of energy just immediately go into powering the grid.
The economic case for renewable power is actually extremely good, because unlike fossil fuels, they're effectively infinite and don't need complex infrastructure to extract. They're free. You only need a power plant that directly converts them into power. If we were just able to shift fossil fuel demand towards producing goods like plastics, this would already be massive. However, a lot of powerful people are deeply invested in fossil fuels and will do anything to tip the scales in their favor, despite gradually losing in the energy sector.
You're the one looking at it in a vacuum. Engineering is all about looking at it in context, and the main context where energy density is important is in vehicles of all shapes and sizes. That's where the rubber quite literally meets the road. You can then theorize about what the 'weight' of the electrical energy is, but it is pointless: the weight of the container dwarfs the weight of the energy carriers (effectively the electrons) themselves. In the case of fossil fuels the ratio is more balanced: the container will weigh a couple of kilos and the fuel will weigh a bit more (say 10:1 or 20:1). So to compare the one with the other we weigh the batteries and ignore the electrons, and we then compare that with the fuel, because that is the dominant factor.
Solar and wind electricity are not directly suitable for powering electric vehicles; that's where the batteries come in.
Comparing apples (transport of electricity via wires) with oranges (transport of energy via liquid or gas) misses the elephant in the room: you are not going to be able to use those electrons without a suitable temporary storage medium unless you plan on carrying a very long and impractical extension cord behind your now very light EV.
Why? In the context of the electrical grid, has the amount of storage you can keep in reserve ever been a choke point? If anything, fossil fuel power plants have the very same batteries to buffer some energy. But for the vast majority of power consumers that can just exist on the grid, power storage is nearly irrelevant, because power can go directly from producer to consumer. Even in places where storage is relevant (anything that can't be tethered to the grid, like vehicles) the equation is different, because the infrastructure you need to convert fuel to power (engines vs electric motors) doesn't weigh the same. Yes, even with that, pure electricity still falls behind somewhat, but it's getting better. And I was mainly talking about the power grid anyway, with how universal and important it is. Fossil fuel straight up loses in that sector, as I said before, so replacing it is an easy choice... and yet we don't do that.
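To put rough numbers on the "engines vs electric motors don't weigh the same" point, here's a back-of-envelope sketch. All figures are ballpark assumptions on my part (gasoline ~46 MJ/kg, Li-ion cells ~0.9 MJ/kg, ~30% tank-to-wheel efficiency for a combustion drivetrain, ~90% battery-to-wheel for an electric one), not measurements:

```python
# Back-of-envelope comparison of onboard energy per kg.
# All constants are rough ballpark assumptions, not measurements.

GASOLINE_MJ_PER_KG = 46.0  # specific energy of gasoline
LI_ION_MJ_PER_KG = 0.9     # cell-level Li-ion (~250 Wh/kg)

ICE_EFFICIENCY = 0.30      # tank-to-wheel, optimistic combustion engine
EV_EFFICIENCY = 0.90       # battery-to-wheel, typical electric drivetrain

# Raw energy density gap (the oft-quoted ~50x figure)
raw_ratio = GASOLINE_MJ_PER_KG / LI_ION_MJ_PER_KG

# Gap in energy that actually reaches the wheels
usable_ratio = (GASOLINE_MJ_PER_KG * ICE_EFFICIENCY) / (
    LI_ION_MJ_PER_KG * EV_EFFICIENCY
)

print(f"Raw energy density ratio:    {raw_ratio:.0f}x")     # ~51x
print(f"Usable (at-the-wheel) ratio: {usable_ratio:.0f}x")  # ~17x
```

Under these assumptions the ~50x raw gap shrinks to roughly 17x once conversion efficiency is counted, and that's before subtracting the weight of the engine, transmission, and exhaust themselves.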
It is a perfectly fine rhetorical device, and I don't consider a text that just has that to be automatically LLM-made. However, it is also a powerful rhetorical device, and I find that the average human writer right now is better at using these than whatever LLM most people use to generate essays. It's supposed to signify a contrast, a mood shift, something impactful, but LLMs tend to spam these all over the place, as if trying to maximize the number of times the readers gasp. It's too intense in its writing, and that's what stands out the most.
I don't understand. In this hypothesis, in the elite's view, what is the purpose of the rest of society? If everyone has little to no productive output, why would they support us with a UBI? They could just hire whatever human skeleton crew they'd need to sustain their activities (if needed). The rest of humanity could be either mercifully left alone with absolutely nothing, or annihilated.
Humans are here to create the Training data to bootstrap the system
Luckily we’re already most of the way there!
Over half of the world's population has already been instrumented to collect all their behavior data
That’s been the goal: persistent collection of training data should come out of your day-to-day life in order to bootstrap machine-based action systems
The challenge now is that most of that data is based on actions we don’t want machines to do
I'm definitely making certain assumptions, such as: (1) democratic rule endures, (2) even absent true democratic rule, the populace can still resort to violent rebellion as a failsafe, (3) psychopathic tendencies amongst said elite are constrained enough such that mass genocide remains sufficiently psychologically unpalatable, (4) economic calamity substantially precedes the deployment of fully autonomous policing, etc.
How this all unfolds is absolutely path dependent.
I agree. Although, looking at these assumptions, subjectively I think that all four of them are in question, and as time passes, their eventual long-term failure seems increasingly likely. Even if one of these four pillars persists, I would expect an overall worsening by default. If democratic rule persists in places, the most powerful would occupy places where it does not exist, or create fully private states, still wielding enormous power over democratic states through wealth and military might. If violent rebellion is technically possible, a middle ground will be carefully calculated where the lower classes are kept on life support with the minimum amount of resources required to dissuade unrest. If the trillionaires of tomorrow suddenly stop caring about other people, they could employ second-order measures to effectively reduce the population, thereby safeguarding themselves: massively constraining or removing the supply of food, water, medicine, or any vital technology that would then be only available to them. I don't see how an economic crisis would prevent automated enforcement; it may only delay it a bit.
Hope is kind of in short supply nowadays. Even if your hypothesis of absolute-automation doesn't happen within our lifetimes, things seem to be guaranteed to get worse for people like us. If it does happen... we'll likely never reap any real rewards from it, barring a complete restructuring of our whole society to an extent that has never happened and likely would never be allowed to happen.
The cynical pessimist in me agrees with you that the odds are somewhat bleak. The slightly irrational optimist in me says "rage against the dying of the light".
Also, there has never been a better time to learn about philosophies that get at the essence of being human and that elucidate precisely how and why our baser characteristics (acquisitiveness, status-seeking, ego, etc) hold us back from being happy. The world we're heading towards will convert desire into suffering more readily than ever before, even as our basic needs are easily met. Philosophy is the cure. And I strongly believe happiness will remain accessible to those who embrace it.
I think you're confusing OP for the people who claim that there is zero functional difference between an LLM and a search engine that just parrots stuff already in it. But OP never made such a claim. Here, let me try: the simplest explanation for how next-token estimation leads to a model that often produces true answers is that for most inputs, the most likely next token is true. Given their size and the way they're trained, LLMs obviously don't just ingest training data like a big archive; they contain something like an abstract representation of tokens and concepts. While not exactly like human knowledge, the network is large and deep enough that LLMs are capable of predicting true statements based on preceding text. This also enables them to answer questions not in their training dataset, although accuracy obviously suffers the further you deviate from known topics. The most likely next token to any question is the true answer, so they essentially ended up being trained to estimate truth.
I'm not saying this is bad or underwhelming, by the way. It's incredible how far people were able to push machine learning with just the knowledge we have now, and how they're still making progress. I'm just saying it's not magic. It's not something like an unsolved problem in mathematics.
No one ever made the claim it was magic, not even remotely. Regarding the rest of your commentary: a) The original claim was that LLMs were not understood and are a black box. b) Then someone claims that this is not true, and they know well how LLMs work: it is simply due to questions and answers being in close textual proximity in training data. c) I then claim this is a shallow explanation, because you then need to additionally invoke a huge abstraction network - that is a black box. d) You seem to agree with this while at the same time saying I misrepresented "b" - which I don't think I did. They really claimed they understood it and only offered this textual proximity thing.
In general, every attempted explanation of LLMs that appeals to "[just] predicting the next token" is thought-terminating and automatically invalid as an explanation. Why? Because it confuses the objective function with the result. It adds exactly zero over saying "I know how a chess engine works, it just predicts the next move and has been trained to predict the next move" or "A talking human just predicts the next word, as it was trained to do". It says zero about how this is done internally in the model. You could have a physical black box predicting the next token, and inside you could have simple frequentist tables, or you could have a human brain, or you could have an LLM. In all cases you could say the box is predicting the next token, and if any training was involved you could say it was trained to predict the next token.
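To make the point concrete, here's a toy sketch of the "simple frequentist tables" case. The code below is fully described by "it was trained to predict the next token", yet its internals (a bigram lookup table) tell you nothing about how a transformer does the same job; nothing here reflects any real LLM:

```python
from collections import Counter, defaultdict

# A trivially simple "next-token predictor": bigram frequency tables.
# It matches the interface description ("trained to predict the next
# token") just as well as an LLM does, which is exactly why that
# description explains nothing about internals.

def train(corpus: str) -> dict:
    """Count, for each token, which tokens followed it in the corpus."""
    tokens = corpus.split()
    table = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        table[prev][nxt] += 1
    return table

def predict(table: dict, token: str) -> str:
    """Return the most frequent follower seen during 'training'."""
    return table[token].most_common(1)[0][0]

table = train("the cat sat on the mat and the cat slept")
print(predict(table, "the"))  # -> "cat" ("cat" followed "the" twice)
```

From the outside, this box and an LLM satisfy the same one-line description; the entire interesting question is what sits inside.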
If you feel so strongly about your message, why would you outsource writing out your thoughts to such a large extent where people can feel how reminiscent it sounds of LLM writing instead of your own? It's like me making a blogpost by outsourcing the writing to someone on Fiverr.
Yes it's fast, it's more efficient, it's cheap - the only things we as a society care about. But it doesn't convey any degree of care about what you put out, which is probably desirable for a personal, emotionally-charged piece of writing.
> And .. more. It's like the LLM latched on to things that were locally "interesting" writing, but applies them globally, turning the entire thing into a soup of "ah-ha! hey! here!" completely ignorant of the terrible harm it does to the narrative structure and global readability of the piece.
It's like YouTube-style engagement maximization. Make it more punchy, more rapid, more impactful, more dramatic - regardless of how the outcome as a whole ends up looking.
I wonder if this writing style is only relevant to ChatGPT on default settings, because that's the model that I've heard people accuse the most of doing this. Do other models have different repetitive patterns?
They're existing, not really thriving. Artisanal things have become more popular as a hobby, but even people who get into them commercially rarely make real money off of it. The demand exists, but purely as a novelty for people who appreciate those types of things, or perhaps in really niche sub-markets that aren't adequately covered by big businesses. But the artisans aren't directly competing with companies that provide similar goods to them at scale, because it's simply impossible. They've just carved out a niche and sell the experience or the tailoring of what they're making to the small slice of the population who's willing to pay for that.
You can't do this with software. Non-devs neither understand nor appreciate any qualities of software beyond the simplest comprehension of UX. There's no such thing as "hand-made" software. 99% of people don't care about what runs on their computer at all; they only care about the ends, not the means. As long as it appears to do what you want, it's good enough, and good enough is all that anyone needs.
If you have a bespoke business that needs some very custom software to run it, you will want a team to build it for you and provide extensive support for it.
I work in manufacturing industries where software for a single factory is completely custom written to their workflow and custom built machines, and they have dedicated teams of engineers on site 24 hours a day to monitor and maintain it. When it comes to large manufacturing factories, minutes of downtime equals millions in lost sales.
I guarantee you these types of companies will not be running software created by AI anytime soon, if ever at all.
This proposal feels really vague to me, I don't really understand what this actually does. Can you explain more? What exactly is a computer with permanence? What is software that forces a user to treat the computer it runs on "as an appliance"? In what ways is this different from any general-purpose computer, and what's the reason why a user would pick this over something standard?
I mean "permanence" in the same vague sense that I think the OP was hinting at. A belief that regardless of change, the primitives remain. This is about having total confidence that abstractions haven't removed you from the light-cone of comprehension.
Re: Appliance
I believe Turing-completeness is overpowered, and is the reason that AGI/ASI is a threat at all. My hypothesis is that we can build a machine that delivers most of the same experiences as existing software can. By constraint, some tasks would be impossible and others just too hard to scale. By analogy, even a Swiss-army knife is like an appliance in that it only has a limited number of potential uses.
Re: Users
The machine I'm proposing is basically just eBPF for rich applications. It will have relevance for medical, aviation, and AI research. I don't suppose end-users will be looking for it until the bad times really start ramping up. But, I suppose we'll need to port Doom over to it before we can know for sure.
What if in reality it's not one or the other, but having 10% odds of being good enough to be selected to become a technician operating the machines, 10% odds of getting so enraged as to dedicate your lives to pushing back, and 80% odds of being shoved out due to lower demand and value of your work, having to go do something else, if you still can?