The population may trust the government now, but totalitarian regimes are coming back into fashion, and they love it when they can skip the data-collecting bureaucracy and go straight to building or offshoring their gulags.
I’d word that differently. High-trust societies with little expectation of privacy and a strong sense of community tend to do well with social democracy. Otherwise people end up abusing the system, and it’s hard to catch them if privacy trumps community needs.
Here in an ex-USSR country, people are very pro-privacy and individualist. At the same time, we try to copy a lot of Nordic stuff from our neighbors. It’s a shitshow how those cultures mesh: a lot of welfare abuse, hiding behind muh privacy to avoid scrutiny.
That comment is completely disconnected from reality. To be sure, entry dynamics vary by country, person, and other circumstances, and law enforcement differs in philosophy and approach all over the world. For example, in the US you are not met by soldiers carrying machine guns, which is something you are likely to encounter in Italy. To some people, being met by soldiers with machine guns is terrifying; to others, it is normal. To some people, being asked questions in a terse manner is rude and scary; others understand security theater and the role these people play. It's shades of gray, not black and white.
Personally, I’ve found agents to be a great “multitasking” tool.
Let’s say I make a few changes in the code that will require changes or additions to tests. I give the agent the test command I want it to run and the files to read, and let it cycle between running tests and modifying files.
While it’s doing that, I open Slack or do whatever else I need to do.
After a few minutes, I come back, review the agent’s changes, fix anything that needs to be fixed or give it further instructions, and move to the next thing.
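The loop described above can be sketched in a few lines of Python. This is only an illustration, not any particular agent product's API: `propose_fix` is a hypothetical stand-in for the call that asks the agent to edit files given the failing test output, and the loop simply alternates between running the test command and invoking it.

```python
import subprocess

def agent_fix_loop(test_cmd, propose_fix, max_iters=5):
    """Cycle: run the tests; if they fail, let the agent propose edits; repeat.

    test_cmd     -- the test command, as a list of args (e.g. ["pytest", "tests/"])
    propose_fix  -- hypothetical agent call: receives the failing output and
                    applies edits to the files on disk
    """
    for _ in range(max_iters):
        result = subprocess.run(test_cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return True  # tests pass; hand back to the human for review
        propose_fix(result.stdout + result.stderr)
    return False  # iteration budget exhausted; needs human attention
```

The point of the `max_iters` cap is exactly the workflow in the comment: the loop runs unattended for a few minutes, then control returns to you to review the diff or give further instructions.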
The AI wasn’t just pretending to be a rape victim; it was scraping the profiles of the users it replied to in order to infer their gender, political views, preferences, orientation, etc., and then using all that hyper-targeted information to craft a response that would be especially effective against the user.
It's research on whether malicious actors could use LLMs to persuade people. If you restrict the tricks it can use, what is the research telling you? How nice, honest people might persuade Redditors by secretly using LLMs? How malicious actors might persuade people if they never lied?
I can see where you may get that idea, but I believe the point was supposed to be: just because it is done doesn't mean you can do it too. And that also doesn't mean it's OK for corporations to do it; they just have an easier time than the common person, because our society runs on money and they have (much!) more of it.