Current AI reminds me of when Bart Simpson is sitting in the back seat of a rental car and leaves it running on cruise control, thinking it will drive itself. It's just good enough to make you think it's doing its job before it drives you off a cliff.
Not a bad comparison, and perhaps with the same short-term outcome: it feels like it can drive itself, but it can't, and it won't get there this generation.
Blatant fear-mongering, given that the article proposes no solutions. Just a bunch of "what if" scenarios and speculation, without addressing the issues it raises with any thoughtful remedies.
I don't think that's entirely accurate - the article seems to have a realistic bent to me.
> The summit concluded with an agreement between 10 governments – including the UK and US but not China – the EU and major AI companies, including the ChatGPT developer OpenAI and Google, to cooperate on testing advanced AI models before and after they are released.
The scenario referred to in the title, "financial devices that only AI can understand, that no human being can understand", seems rather silly.
If by "devices" he means automated trading systems, those have been around for decades, they already account for most trading activity, and no human can understand their collective behavior. They are controlled the same way as human traders: by imposing hard constraints on what they are allowed to do.
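Those hard constraints are typically pre-trade risk checks that reject any order outside fixed limits, regardless of what the algorithm "wants" to do. A minimal sketch, with all limit values and names purely illustrative:

```python
# Illustrative pre-trade hard constraints of the kind used to control
# automated trading systems. The limits below are made-up examples.

MAX_ORDER_QTY = 10_000        # per-order share-count limit ("fat finger" check)
MAX_NOTIONAL = 1_000_000.0    # per-order dollar exposure limit
MAX_POSITION = 50_000         # absolute position limit per symbol

def check_order(symbol: str, qty: int, price: float, current_position: int) -> bool:
    """Return True only if the order passes every hard constraint."""
    if abs(qty) > MAX_ORDER_QTY:
        return False                          # order too large
    if abs(qty * price) > MAX_NOTIONAL:
        return False                          # too much dollar exposure
    if abs(current_position + qty) > MAX_POSITION:
        return False                          # would breach position limit
    return True
```

The point is that the checks don't need to understand the strategy at all; they only bound its possible actions, which is exactly how opaque human traders are constrained too.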
If by "devices" he means financial instruments, consider how long it took for Bitcoin futures and ETFs to be approved by the CFTC and SEC (the latter still hadn't approved a spot Bitcoin ETF last time I checked, only ones holding futures or Bitcoin company stocks). How likely would they be to authorize financial instruments "that no human being can understand"?
> They mean devices that you aren't intelligent enough to understand.
They don't even define what they mean by "device", nor do they try to explain why regulatory bodies would allow something nobody understands into the financial system.
> The fact that you don't even understand the description isn't a great booster to your counter argument.
The fact that they provide no description for anyone to understand isn't a great booster for their argument, nor for your attempt to support it.
He's just describing where he sees the course of history leading us. From what I understand he doesn't generally think of history as something that can be "solved". It's just what happens. And sometimes what happens is super unpleasant for the people living through it.
The financial system is already largely disconnected from reality. It's already nonsensical even for experts in the field. The stock market rallies against all odds or crashes without external events. Outside a few connected people, to whom Yuval is also connected, who are aware of all sorts of secret manoeuvres, most cannot even begin to grasp it. Perhaps his real fear is that AI is leveling the playing field, since everyone can now develop their own trading algorithm cheaply.
The financial system seems totally disconnected from reality, but I wonder if that's really true. "Reality" is an unfathomably complex global economy, and there is a difference between being nonsensical and being too complex to reliably predict.
Well yes, there's no shortage of people out there developing algorithms. Massive projects and libraries have been freely available all over the internet for the layman to explore for a very long time, and there are even hobbyist groups (in NYC and London that I'm aware of, presumably other cities too), usually made up of hobbyists already in the industry.
Writing algorithms is easy, even finding alpha is easy, having enough capital to make it worthwhile is the hard part.
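To illustrate how low the bar for "writing an algorithm" really is, here is a toy moving-average crossover signal; the strategy, window sizes, and labels are all illustrative, and none of this addresses the actual hard part, which is capital (and surviving costs):

```python
# Toy trading signal: a simple moving-average crossover.
# Purely illustrative; not a profitable strategy.

def sma(prices: list[float], n: int) -> float:
    """Simple moving average of the last n prices."""
    return sum(prices[-n:]) / n

def signal(prices: list[float], fast: int = 3, slow: int = 5) -> str:
    """'buy' when the fast average is above the slow one,
    'sell' when below, 'hold' otherwise or with too little data."""
    if len(prices) < slow:
        return "hold"
    f, s = sma(prices, fast), sma(prices, slow)
    if f > s:
        return "buy"
    if f < s:
        return "sell"
    return "hold"
```

A few lines like this is "an algorithm"; turning it into something worth running is where all the difficulty lives.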
Just because a system uses AI doesn't make it fundamentally unsafe. But giving AI unchecked control over systems that handle large amounts of money is no different than giving such control to any other automated computer system. You have to think about risk and how to manage it. None of that is new.
Yes and no. We need to reorganize society to redistribute wealth. If AI is going to replace humans in the workplace, let it; that's good. We just need to make sure humans don't need to work to survive. That's it.
We need to make mental health care more accessible, particularly to treat the epidemic of pareidolic anthropomorphism prevalent among AI workers and evangelists.
It's sad how many people looked up to Harari, Geoffrey Hinton, Yoshua Bengio, Sam Altman, Musk, and Douglas Hofstadter a few years ago, but when they all came out to say "there are some very serious existential dangers approaching with AGI," most of HN went:
"Am I so out of touch? No, it's the children who are wrong."[0]