> having enough money controlled by these models could pose a serious systemic risk
This is the part that worries me. For a decade before the 2008 financial collapse, people were quietly saying, "Gosh, there's a lot of activity in derivatives and we don't really know where the risk is going."
One of many factors there was the way rating agencies gave very generous ratings to mortgage securities. Critics note that it was in their short-term financial interest to do so. If people can screw up that badly with models they supposedly understand, it seems even riskier to work with models where people have simply given up on understanding and put their faith in the AI oracle. As long as they get the answers that maximize their end-of-year bonus checks, they have a strong incentive not to dig deeper.