When Altman says "high risk", he means only Microsoft should be allowed to run an AI, in case the plot of Terminator happens.
When the EU says "high risk", it means that an AI pacemaker should be explainable enough that you can guarantee it won't randomly kill people. It also means that low-risk applications, such as AI holiday recommendations or fiction writing, should be more or less unregulated.
Which one is reasonable?