> every minute, it's harder to distinguish AI output from actual output unless you're approaching expertise in the subject being written about.
So, then what really is the problem with just including LLM-generated text in wordfreq?
If quirky word distributions remain a "problem", then I'd bet that human distributions for those words will follow shortly after (people are very quick to change their speech based on their environment, which is why language can change so quickly).
Why not just own the fact that LLMs are going to be affecting our speech?
> So, then what really is the problem with just including LLM-generated text in wordfreq?
> Why not just own the fact that LLMs are going to be affecting our speech?
The problem is that we cannot tell what's a result of LLMs affecting our speech, and what's just the output of LLMs.
If LLMs result in a 10% increase of the word "gimple" online, which then results in a 1% increase of humans using the word "gimple" online, how do we measure that? Simply continuing to use the web to update wordfreq would show a 10% increase, which is incorrect.
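To make the measurement problem concrete, here's a toy sketch (the counts are made up; only the 10%/1% split comes from the example above):

    # Hypothetical numbers: LLM output adds ~10% more uses of "gimple" to the
    # web, and humans picking the word up adds another ~1% of genuine usage.
    human_before = 1_000_000                 # made-up human uses in last year's crawl
    human_after = human_before * 1.01        # the genuine shift in human usage
    llm_added = human_before * 0.10          # LLM-generated uses mixed into this year's crawl

    observed_after = human_after + llm_added
    observed_increase = observed_after / human_before - 1
    human_increase = human_after / human_before - 1

    print(f"naive web count shows +{observed_increase:.0%}")    # ~+11%
    print(f"human usage actually moved +{human_increase:.0%}")  # +1%
    # Nothing in the crawl itself tells you how to split that 11% into its two parts.

That split is exactly what a crawl-based update to wordfreq has no way to recover.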
Did you think the same thing when photoshop came out?
It's relatively trivial to photoshop misinformation in a really powerful and undetectable way, but I don't see (legitimate) instances of groundbreaking news over a fake photo of the president or a CEO etc. doing something nefarious. Why is AI different just because it's audio/video?
And the groundbreaking stuff isn't the problem, it's the constant little lies.
Last week a photoshopped Musk tweet was going around, with people getting all up in arms about it despite the fact that it was very easy to spot as a fabrication.
People didn't care, they hate the guy, they just wanted to fuel their hate more.
The whole planet runs on fake content: magazine covers, food packaging, Instagram pics of places that never look that way...
And now, with AI, you can automate it and scale it up.
People are not ready. And in fact, they don't want to be.
This is a common survivorship bias fallacy since you only notice the bad CGI.
I'm certain you'd be shocked to see the amount of CG in some of your favorite movies made in the last ~10-20 years that you didn't notice because it's undetectable.
Luckily, for those of us who prefer it when filmmaking meant at least mostly actually filming things, there’s plenty of very good film and TV (and even more of lesser quality) to keep a person occupied for a couple of lifetimes.
I won’t be, I’m aware that lots of movies are mostly CGI.
But, yeah, I do think it is some kind of bias. Maybe not survivorship, though… maybe it is a generalized sort of Malmquist bias? Like the measurement is not skewed by the tendency of movies with good CGI to go away. It is skewed by the fact that bad CGI sticks out.
Actually, wait, I take it back. I mean, I was aware that lots of digital touch-up happens on movie sets, more than lots of people might expect, and more often than one might expect even in mundane movies, but even still, the video in this comment was pretty shocking.
Why would a user-replaceable battery be better for the environment? Do you think that consumers are able to recycle hard-to-recycle lithium components like that _correctly_? Apple already offers battery replacements (comparatively) cheaply.
Beyond that, there are huge issues with it, such as third-party batteries tainting the overall quality of the phone, an entirely removable back plate killing their water/dust resistance rating, it looking bad (yes, this is important to Apple), and god-knows-what other issues that arise from changing the internal structure of their components that drastically.
Don't assume they can't figure out an engineering problem because you're upset that phones aren't the same as they were 20 years ago.
The battery replacement service is not comparatively cheap. I just paid $90 for my iPhone 11. There is no world in which a cell phone battery of even the highest quality costs anywhere near that much. For example, iFixIt offers a comparable battery for less than half that price, and generic sellers for less than a quarter of the price. You could argue that the labor required justifies the price, but that makes the design all the more predatory.
The battery disposal is not really the point of the conversation; you could make the same argument about the whole phone. The point is that all of the components of the phone last a lot longer than two years, but the sealed-in battery will barely make it that long. They obviously benefit financially from the current arrangement because they’ve shifted the consumer’s decision from “should I pay $40 to increase my battery life by 25% on my 2 year old phone” to “should I invest $100 in this older phone or just throw it away and spend $200 (subsidized) on a new one?” The second one makes a lot more money for Apple and has a much larger negative impact on the planet.
As far as waterproof ratings, such phones exist, even in the thin form factor. This argument is a non-starter that doesn’t agree with observed reality. It’s an active choice they’re making because the incentives are misaligned.
As to the argument that users will use bad components, how is this any different from the myriad of bad Bluetooth headphones available that degrade the user experience? Should Apple disallow those as well to protect their stupid users? What about cheap chargers? Cases that induce thermal throttling? Screen protectors that greatly degrade visual fidelity?
I’m not upset about phones being different than 20 years ago, I’m upset that the planet is being destroyed to slightly increase profits, all while Apple lies to our face and parades Mother Earth around the keynote stage.
+1. In Ableton on Windows you can get your latency down to ~40 ms without a dedicated sound card using ASIO. Mac's drivers are even better, with sub-20 ms on my M2 Pro IIRC.
+1 to the comments here. Part of the issue is running these applications in Python, which isn't really optimized to handle these loads or do DSP-based compute efficiently.
You seem surprised, but any sort of live production requires this. Check out SonoBus, it achieves adequately low end-to-end latency even with network delays in the mix.
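On the Python point above, here's a crude sketch of the overhead in question, assuming a typical 256-sample buffer at 48 kHz (so roughly 5 ms to produce each buffer):

    import time
    import numpy as np

    SAMPLE_RATE = 48_000
    BUFFER_SIZE = 256                              # samples per audio callback
    BUDGET_MS = BUFFER_SIZE / SAMPLE_RATE * 1000   # ~5.3 ms to fill one buffer

    buf = np.random.uniform(-1.0, 1.0, BUFFER_SIZE).astype(np.float32)

    def gain_pure_python(samples, g):
        # one interpreted multiply per sample
        return [s * g for s in samples]

    def gain_numpy(samples, g):
        # the same arithmetic, dispatched once to a vectorized C loop
        return samples * g

    N = 10_000
    for name, fn in [("pure Python", gain_pure_python), ("numpy", gain_numpy)]:
        start = time.perf_counter()
        for _ in range(N):
            fn(buf, 0.5)
        per_buffer_ms = (time.perf_counter() - start) * 1000 / N
        print(f"{name}: {per_buffer_ms:.4f} ms per buffer (budget ~{BUDGET_MS:.1f} ms)")

A single gain stage fits the budget either way; the gap between the two is what matters once you stack a realistic effects chain into that ~5 ms window.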
Then C-the-language can be Turing complete, even if C-as-actually-implemented is not. Just implement a Python interpreter. (Or you can just implement bignums in C and use those for your computation.)
Why not? Whether you're running Python code in your C interpreter or just running C code, the same memory restrictions will apply based on your hardware. CPython doesn't put a lower ceiling on bignum size than a non-C-based implementation would.
EDIT: See the GMP library, which states "There is no practical limit to the precision except the ones implied by the available memory in the machine GMP runs on"[0]
The C specification limits programs to addressing a finite amount of memory, though that amount can be made arbitrarily large by an implementation. The Python specification does not imply this, though real interpreters do.
> though it can be made arbitrarily large by an implementation
Yes, this is my entire point
Why should I care what the language specification states in a computability theory discussion? There only needs to exist a method to accomplish our goal; whether the method conforms to the specification or not doesn't seem relevant to me.
Would it be fair to say, then, that "Python" is Turing complete, while the CPython/PyPy implementations are not, because they will always implicitly run up against C's memory limitations and therefore do have a hard limit? Python itself as a language would be Turing complete because it does not impose that kind of limit on the user the way C does.
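For what it's worth, the "no fixed width" half of that distinction is easy to poke at from a CPython REPL (still bounded by your machine's memory in practice, which is the other half):

    # Python-the-language puts no fixed width on integers; CPython grows them
    # with its own bignum code (conceptually like GMP) until memory runs out.
    n = 1 << 100_000           # a 100,001-bit integer, far past any native C integer type
    print(n.bit_length())      # 100001
    print(len(str(n)))         # 30103 decimal digits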
> Unless an AI becomes embodied and can do the same, I have no faith that it will ever "think" or "reason" as humans do. It remains a really good statistical parlor trick.
This may be true, but if it's "good enough" then why does that matter? If I can't determine if a user on Slack/Teams is an LLM that covers their tickets on time with decent code quality, then I really don't care if they know themselves in a transparent, prelinguistic fashion.
He put me on to Sun Ra and Azymuth; that's high praise coming from Madlib. Other highly individual/stylized artists also share the love for Ra: Earl Sweatshirt and members of the New York group sLums.