When I saw those ads I legit thought it required a gaming license. Just wow, apparently you can get away with gambling as long as it doesn't have the label.
Wait, I've been seeing 'genuine question' a lot lately, but does that actually have anything to do with AI? I had assumed people were always this annoying with it and it just happened to bother me more recently.
High risk high reward - I think if I ponied up capital, I'd rather not feel obliged to 'share the success' unless it were part of a needed capital raising.
I seem to recall a reporter being given a Tesla to test drive. They wrote a scathing report about the bad battery, poor range, and trouble finding recharge stations: a flagrant teardown that would have been great reporting, had it not been for Elon having vehicle logging, which revealed flagrant misuse of the vehicle, e.g. driving past recharge station after recharge station, driving the car in circular routes to drain the battery, and plain misrepresentation of their experience.
Journalism does itself no service writing like this, and it's exhausting.
I think the operative word people miss when using AI is AGENT.
REGARDLESS of what level of autonomy in real-world operations an AI is given, from responsibly human-supervised and reviewed publications to fully autonomous action, the AI AGENT should be serving as AN AGENT, with a PRINCIPAL.
If an AI is truly agentic, it should be advertising who it is speaking on behalf of, and then that person or entity should be treated as the person responsible.
I think we're at the stage where we want the AI to be truly agentic, but they're really loose cannons. I'm probably the last person to call for more regulation, but if you aren't closely supervising your AI right now, maybe you ought to be held responsible for what it does after you set it loose.
I agree. With rights come responsibilities. Letting something loose and then claiming it's not your fault is just the sort of thing that prompts those "Something must be done about this!!" regulations, enshrining half-baked ideas (that rarely truly solve the problem anyway) into stone.
I don’t think there is a snowball’s chance in hell that either of these two scenarios will happen:
1. Human principals pay for autonomous AI agents to represent them but the human accepts blame and lawsuits.
2. Companies selling AI products and services accept blame and lawsuits for actions agents perform on behalf of humans.
Likely realities:
1. Any victim will have to deal with the problems.
2. Human principals accept responsibility and don't pay for the AI service after enough are burned by some "rogue" agent.
It takes 20 minutes to get from base housing to the gate, lord knows what traffic is like by the causeways, it's an hour of driving before you're anywhere worth being, and then it's a coin flip whether it's exciting, so it's either the Fort Walton Beach strip clubs or on-base recreation.
No wonder Eglin is addicted hahaha.
But in all seriousness, to the teams of people on the data-crunching side of things, that seems like a pedestrian insight.