The reason it's increasingly an "echo chamber" is that liberals are so offended by actual free speech that they stopped posting there. Blaming conservatives for this development is illogical.
> They go looking for confirmation, rather than new information. This is why they're hard to untangle.
This applies to most readers of most things, not just fringe content on the Left or the Right.
Most people are stuck in their confirmation biases, and few make an intellectual effort to look at topics from multiple angles and via multiple media outlets on various sides of the political spectrum.
Ah yeah, the general sentiment was definitely "there are laws that everyone breaks every day, so they can always get you on something". Mostly because I remembered the anecdote about the USSR outlawing fax machines that no business could do without, so they could always charge any business with a crime.
The first few times were by accident, but once I realized they are durable enough I started wearing them sometimes while showering if I've got a good audiobook or YT video that I don't want to put down.
I think they're supposed to withstand some degree of moisture, but I don't believe they're designed specifically to be submerged. That said, one of mine (gen 2 AirPods) got fully submerged for maybe 3 seconds in the bathtub, and it started working again once I let it dry out.
I'm not saying I recommend others treat their AirPods as if they're water resistant but, in my experience, all the generations of AirPods can take a bit of a water beating. The only ones I've never done this with are any of the Pro models.
In some brief testing, I found that the same models (Llama 3 8B and one more I can't remember) ran MUCH slower in LM Studio than in Ollama on my MacBook Air M1 2020.
Has anyone found the same thing, or was that a fluke and I should try LM Studio again?
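One way to check whether it was a fluke: both tools can serve an OpenAI-compatible API (Ollama on port 11434, LM Studio's local server on port 1234 by default; the LM Studio server has to be started from inside the app), so you can time the same prompt against each. A rough sketch, assuming those default ports, the `openai` Python package, and that the model name matches whatever each app calls your local copy:

```python
import time


def tokens_per_second(n_tokens: int, elapsed_s: float) -> float:
    """Throughput in tokens per second; guards against a zero-length timing."""
    if elapsed_s <= 0:
        raise ValueError("elapsed time must be positive")
    return n_tokens / elapsed_s


def benchmark(base_url: str, model: str, prompt: str) -> float:
    """Time one completion against a local OpenAI-compatible server.

    Assumed defaults (both adjustable in-app):
      Ollama    -> http://localhost:11434/v1
      LM Studio -> http://localhost:1234/v1
    """
    from openai import OpenAI  # pip install openai

    client = OpenAI(base_url=base_url, api_key="not-needed")
    start = time.perf_counter()
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        max_tokens=256,
    )
    elapsed = time.perf_counter() - start
    return tokens_per_second(resp.usage.completion_tokens, elapsed)


# Usage (with the relevant server running; model name is an assumption):
#   print(benchmark("http://localhost:11434/v1", "llama3:8b", "Count to fifty."))
#   print(benchmark("http://localhost:1234/v1", "llama3:8b", "Count to fifty."))
```

Same prompt, same `max_tokens`, run a few times each and compare the tokens/sec numbers; that takes the guesswork out of "feels slower".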
By default LM Studio doesn't fully use your GPU. I have no idea why. Under the settings pane on the right, turn the slider under "GPU Offload" all the way to 100%.
That froze the whole computer, and even clicks on both the internal and external trackpads stopped registering.
The model is Dolphin 2.9.1 Llama 3 8B Q4_0.
I set it to 100% and wrote this:
"hi, which model are you?"
The reply was a slow stream of these characters, while the mouse cursor barely moved and the trackpads wouldn't register clicks:
"G06-5(D&?=4>,.))G?7E-5)GAG+2;BEB,%F=#+="6;?";/H/01#2%4F1"!F#E<6C9+#"5E-<!CGE;>;E(74F=')FE2=HC7#B87!#/C?!?,?-%-09."92G+!>E';'GAF?08<F5<:&%<831578',%9>.='"0&=6225A?.8,#8<H?.'%?)-<0&+,+D+<?0>3/;HG%-=D,+G4.C8#FE<%=4))22'*"EG-0&68</"G%(2("
Two replies to parent immediately suggest tuning. Ironically, this release claims to feature auto-config for best performance:
“Some of us are well versed in the nitty gritty of LLM load and inference parameters. But many of us, understandably, can't be bothered. LM Studio 0.3.0 auto-configures everything based on the hardware you are running it on.”
So parent should expect it to work.
I'm seeing the same issue: even on a MBP with 96GB (M2 Max with a 38‑core GPU), it seems to tune by default for a base machine.
Yeah, me. Even without other applications running in the background and without any models loaded, the new 0.3 UI is stuttering and running like a couch-locked crusty after too many edibles on my MacBook Air 2021, 16GB. When I finally get even a 4B model loaded, inference is glacially slow. The previous versions worked just fine (they're still available for download).