Throwing this on the "brainstorm if we had an ideal legislative world" pile: Stealing a user's private key should be a felony, even if it hasn't (yet) been abused for anything.
The tricky part is keeping it from being "permitted" by a crappy contract of adhesion. Banning it entirely would make it very difficult to buy/sell backup services...
> to edit this very specific kind of phrasing out of their writing
Even without touching moral/ethical/normative reasons, it's impractical. LLMs will continue to incorporate the most popular phrasings or grammars, and touchy readers will simply pivot to a new "telltale" du-jour.
Eventually any personal or organic writing will be gone, as one twists themselves into an artificial form of "the inverse of the LLM."
> Michael Bolton: "No way, why should I change? He's the one who sucks."
The total lack of principles and consistency in public is all you need to know about the lower standard that will apply when they implement it in private.
> I won't be surprised if one of Trump's goals isn't so much to make AI safer as it is to ensure that the answers AI gives are the ones he and his regime want people to see. Today, for example, when I asked a variety of chatbots who lost the 2020 election, they all agreed Trump had lost. Funnily enough, when the Senate Judiciary Committee asked numerous Trump nominees for federal judgeships the same question, they universally refused to say he lost.
Exactly, "ID" is a solution masquerading as a requirement, the real requirements are far more granular, and the more we can narrow it down then the better our chances are for a solution that isn't evil/abusable.
To recycle parts of an old comment [0]:
> If I had my 'druthers, there would be a kind of physical vending machine installed at local city hall or whatever, which leverages physical controls and (dis-)economies of scale.
> The trusted machine would test your ID (or sometimes accept cash) and dispense single-use tokens to help prove stuff. For example, to prove (A) you are a Real Human, or (B) Real and Over Age X, or (C) you Donated $Y On Some Charity To Show Skin In The Game.
> [...] The black-market in resold tokens would be impaired (not wholly prevented, that's impossible) by factors like: [...] scaling the physical portion of the work [...and...] There's no way to test if a token has already been used, except to spend it.
North Korea calls itself the Democratic People's Republic of Korea, but nobody else calls it that. It also claims to control the entire Korean peninsula.
Umm...when we lived in Colombia, my son decided to re-name himself Martillo Veneno. For those who don't know Spanish, that's Hammer Poison. You have something against that?
> In a world where you're probably reaching adulthood with your brothers and sisters without encountering any sibling death, a story with 'unfair' death
Two hot-take theories to add onto the pile:
1. In a traveling oral tradition, the teller doesn't want to memorize lots of different versions known in different towns or regions, and they also don't want people to get angry that your version doesn't have some key things from how they remember it. This leads to compromises that don't quite fit together.
2. If you can only store one version, you've got to decide between "fun" versus "faithfully honors the memory of our elders and how they told it", and maybe the latter wins. However with the printing press etc., now there's room to do a bit of both, and the fun version sells better.
I think it's step more-abstract that that, we're doing... How about "narrative programming"? (Though we could debate whether "programming" is still an applicable word.)
Yes, it may look like declarative programming, but it's within an illusion: We aren't aren't actually describing our goals "to" an AI that interprets them. Instead, there's a story-document where our human stand-in character has dialogue to a computer-character, and up in the real world we're hoping that the LLM will append more text in a way that makes a cohesive longer story with something useful that can be mined from it.
It's not just an academic distinction, if we know there's a story, that gives us a better model for understanding (and strategizing) the relationship between inputs and outputs. For example, it helps us understand risks like prompt-injection, and it provides guidance for the kinds of training data we do (or don't) want it trained on.
Yes, but at the same time: "God forbid managers and executives actually permit people enough time to do do work and fact-check the hallucination machines." Especially in contexts where they are also mandating that staff find ways to use the hallucination machines.
Much like industrial accidents, some portion of blame has to go to the system, rather than any individual.
The tricky part is keeping it from being "permitted" by a crappy contract of adhesion. Banning it entirely would make it very difficult to buy/sell backup services...
reply