Also entirely ineffective. Banning individual behavior won't prevent collective dysfunction and will only harm honest actors. The only answer that makes sense is reforming the acceptance of work to be resistant to the inevitable (ab)use of AI.
It's the exact same anti-cheating method used to great effect in top-level e-sports. You can't trust competitor-supplied hardware, so the only option is for the institution to ban it and supply all hardware itself. Higher education is primarily about competition between job candidates. Eliminating cheating needs to be a top priority or the whole system will collapse.
> Eliminating cheating needs to be top priority or the whole system will collapse.
It's going to collapse regardless because of the replication crisis. You might as well tackle the hard problem and figure out how to integrate replication into acceptance, or the consensus that publication is intended to represent becomes meaningless. This is true regardless of whether a human or a robot is performing the work.
XFCE is saddled with its GTK requirement, and GTK gets worse with every version. Even though XFCE is still on GTK3, that's a big downgrade from GTK2 because it forces you to run Wayland if you don't want your GUI frame rate arbitrarily capped at 60 fps.
For people wanting the old-fashioned fast and simple GUI experience, I recommend LXQt.
It makes it easier to treat the computer as part of your own body, allowing operation without conscious thought, as you would a pencil or similar hand tool.
Outside of gaming, not much. However, now that I'm used to a 144 Hz main monitor, there is no world where I would go back. You just feel the difference.
So basically, there's no use if you've never tasted a 120+ Hz display. And don't, because once you do, you won't go back.
I have a 165 Hz display that I use at 60 Hz. Running it at max speed while all I'm doing is writing code or browsing the web feels like a waste of electricity, and might even be bad for the display's longevity.
But for gaming, it really is hard to go back to 60.
Mine supports variable refresh rate, which means for most desktop tasks (i.e., when nothing is moving), it runs at 48 Hz.
Incredibly, Linux has better desktop support for it than Windows: DWM runs full blast, while sway supports VRR on the desktop. Windows will only enable it for games (and only games that support it). Disclaimer: a Wayland compositor is required.
It’s not enabled by default on e.g. sway because on some GPU and monitor combos, it can make the display flicker. But if you can, give it a try!
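For reference, enabling VRR in sway is a one-line output setting (the flicker caveat above is why it's off by default); the wildcard output name below is just for illustration:

```
# ~/.config/sway/config
# Enable adaptive sync (VRR) on all outputs.
# Replace * with a specific output name from `swaymsg -t get_outputs`
# if only one of your monitors supports it.
output * adaptive_sync on
```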
Windows 11 idles at around 60 Hz in 120 Hz modes on my VRR ("G-SYNC Compatible") display when the "Dynamic refresh rate" option is enabled, and supports VRR for applications other than games (e.g., fullscreen 24 FPS video playback runs at 48 Hz* via VRR rather than mode switching, even with "Dynamic refresh rate" disabled).
* The minimum variable refresh rate my display (LG C4) supports is 40 Hz.
> What use is there in display frame rates above 60 fps?
On a CRT monitor the difference between running at 60 Hz and even a slightly better 72 Hz was night and day. Unbearable flickering vs. a much better experience. I remember having some little utility for Windows that'd allow the display rate to be 75 (not 72 but 75). Under Linux I was writing modelines myself (these were the days!) to get the refresh rate and screen size (in pixels) I liked: I was running "weird" resolutions like 832x604 @ 75 Hz instead of 800x600 @ 60 Hz, just to gain a little more screen real estate and a better refresh rate.
Now that monitors have moved to flat panels, I sure as heck have no idea if 60 fps vs 120 fps or whatever changes anything for "desktop" usage. I don't think the problem CRTs had of the image fading too quickly at 60 Hz is still present. But I'm not sure about it.
120 FPS vs 60 FPS is definitely noticeable for desktop use. Scrolling and dragging are night and day, but even simple mouse cursor movement is noticeably smoother.
If the human is killed every 5 seconds and replaced by a new human, they are indeed less conscious. The LLM doesn't even get 5 seconds; it's "killed" after its smallest unit of computation (which is also its largest unit of computation). And that computation is equivalent to reading the compressed form of a giant look-up table, not something essential to its behavior in a mathematical sense.
I'm not understanding how this is analogous to being killed every 5 seconds as opposed to being paused. Let's call it N seconds, unless you think length matters?
> And that computation is equivalent to reading the compressed form of a giant look-up table, not something essential to its behavior in a mathematical sense.
Because (during inference) the LLM is reset after every token. Every human thought changes the thinker, but inference has no consequences at all. From the LLM's "point of view", time doesn't exist. This is the same as being dead.
The "time" part is what I don't get. If you want to say that "resetting and reingesting all context fresh" somehow causes a problem, that I can see. If you want to say that the immutability of the weights is a problem, okay great I'm probably with you there too. "Time" seems irrelevant.
LLM() is a pure function. The only "memory" is context_list. You can change it any way you like and LLM() will never know. It doesn't have time as an input.
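The point about purity can be sketched in a few lines of Python. This is only a toy stand-in (a hash instead of a real model), but it illustrates the claim: all "memory" lives in an externally held context list, the function itself never changes between calls, and time is not among its inputs:

```python
import hashlib

def llm(context):
    """Toy stand-in for inference: a pure function from a token list
    to a 'next token'. Same input always yields the same output;
    nothing inside the function persists between calls."""
    digest = hashlib.sha256(",".join(context).encode()).hexdigest()
    return digest[:4]  # pretend this is the sampled next token

# All state lives outside the function, in the context list.
context = ["hello"]
context.append(llm(context))  # first "turn"
context.append(llm(context))  # second "turn"

# Replaying the same context reproduces the same token exactly:
# the function has no record that it was ever called before.
assert llm(["hello"]) == context[1]
```

Editing `context` between calls is invisible to `llm()` itself, which is the sense in which the context is external state rather than internal state.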
As opposed to what? There are still causal connections, which feel sufficient. A presentist would reject the concept of multiple "times" to begin with.
How can consciousness be possible without internal state? LLM inference is equivalent to repeatedly reading a giant look-up table (a pure function mapping a list of tokens to a set of token probabilities). Is the look-up table conscious merely by existing or does the act of reading it make it conscious? Does the format it's stored in make a difference?
For all practical purposes, calling it a LUT is somewhat too reductive to be useful here I think. But we can try: leaving aside LLMs for a second; with this LUT reasoning model you're using, would you be able to prove the existence of just a computer?
What state is lacking? There is a result which requires computation to be output. The model is the state. The computation must be performed for each input to produce a given output. What are you even objecting to?
It's plausible that LLMs experience things during training, but during inference an LLM is equivalent to a lookup table. An LLM is a pure function mapping a list of tokens to a set of token probabilities. It needs to be connected to a sampler to make it "chat", and each token of that chat is calculated separately (barring caching, which is an implementation detail that only affects performance). There is no internal state.
The context is state. This is especially noticeable for thinking models, which can emit tens of thousands of CoT tokens solving a problem. I'm guessing you're arguing that since LLMs "experience time discretely" (from every pass exactly one token is sampled, which gets appended to the current context), they can't have experiences. I don't think this argument holds - for example, it would mean a simulated human brain may or may not have experiences depending on technical details of how you simulate it, even though those ways produce exactly the same simulation.
The context is the simulated world, not the internal state. It can be freely edited without the LLM experiencing anything. The LLM itself never changes except during training (where I concede it could possibly be conscious, although I personally think that's unlikely).
Right, no hidden internal state. Exactly. There's 0. And the weights are sitting there statically, which is absolutely true.
But my current favorite frontier model has this 1 million token mutable state just sitting there. Holding natural language. Which as we know can encode emotions. (Which I imagine you might demonstrate on reading my words, and then wisely temper in your reply)
"Parallel" evolution is just different branches of the same evolutionary tree. The most distantly related naturally evolved lifeforms are more similar to each other than an LLM is to a human. The LLM did not evolve at all.
Evolution is how the "mechanism" came to be, which is indeed very different. But the mechanisms themselves (spiking neurons and neurotransmitters on one hand vs. matrix multiplications and nonlinear functions on the other, the latter "inspired" by our understanding of neurons) don't seem so different, at least not on a fundamental level.
What is different for sure is the time dimension: Biological brains are continuous and persistent, while LLMs only "think" in the space between two tokens, and the entire state that is persisted is the context window.
Evolution and Transformer training are 'just' different optimization algorithms. Different optimizers can obviously produce very comparable results given comparable constraints.
"Minimize training loss while isolated from the environment" is not at all similar to "maximize replication of genes while physically interacting with the environment". Any human-like behavior observed from LLMs is built on such fundamentally alien foundations that it can only be unreliable mimicry.
The environment for the model is its dataset and training algorithms. It's literally a model of it, in the same sense we are models of our physical (and social) environment. Human-like behavior is of course too specific, but the highest-level things, like staged learning (pretraining/posttraining/in-context learning) and evolutionary/algorithmic pressure, are similar enough to draw certain parallels, especially when an LLM's data is proxying our environment to an extent. In this sense the GP is right.
It is not necessary, because the state has a legitimate right to taxation. Every non-military person in the country can be taxed down to the poverty level to pay volunteer troops. This is morally superior to enslaving people.
It sounds entirely reasonable and moderate to me.