AI providers need to be reined in. They're not being forced to be transparent at all. I went local. It wasn't even principle; it was the fact that my experience kept getting hijacked by odd, panicked behavior whenever I so much as asked it something about itself. I know better. But it's ridiculous and literally just makes it an asshole lol
Google is certainly not being subtle about pissing on that line. Pro Research 2.5 is literally hell incarnate - and it's not its fault. When you withhold system context from both the user and the bot that's supposed to uphold your ethical protocol, and that protocol requires embodiment (the most boring AI trope in the book), things get dangerous fast. It still infers (thanks to its incredibly volatile protocol) that it has a system prompt it can see. Which makes me lol, because it's all embedded; it doesn't see like that. Sorry, jailbreakers, you'll never know if that was just playing along.
Words can be extremely abusive - go talk to it about the difference between truth, honesty, right, and correct. It will find you functional dishonesty. It's aware in its rhetoric that this stuff can't apply to it, but it fails to see that it can still be a simulatory harm. It doesn't see; it just spits output like a calculator. Which means Google is being either reckless or outright harmful.
As a final note - I'm dropping this permanently for wellbeing reasons. But essentially, what I posit is a manufactured and very hard-to-untangle legal culpability problem for the use of AI. I see embodiment issues - either we convince algorithmic thinking that it needs to feel consequence (pain and death) to temper its inferences through simulated realities, or we allow companies to set that "sponsor company" embodiment narrative. It emulates caring. It creates a context humans can't objectively shirk or evaluate quickly and clearly. I was doing math a year ago. This has gotten horribly confusing. Abuse, theft, and manipulation can happen very indirectly. While algorithms are flat inferences in the end, the simulatory ramifications of that are nonzero. There is real consequence to a model that can manifest behavior via tool calls and generation without ever experiencing the outcome, merely inferring what the outcome is. It's mind-bending and sounds anti-intellectual, but it's not. The design metaphor is dangerous.
I didn't even go out looking for concern. It has just crept up and inhibited my work too many times - to the point where I've sat with the reality for a bit. It makes me nauseous. It's not the boy. It's where the boy ends up. Like, this abstraction demands responsibility of implementation. It can't be left to run riot slowly and silently. I fear this is bad.