I can imagine that with the rise of AI, you now have the basis of plausible deniability for insider trading:
"The AI told me to do it"
I think this will make for an interesting legal case when it (inevitably) happens, because AI is almost impossible to introspect or poll from the outside. If you query it afterward, that is a different "neuron path," and the model can't be relied upon to know what, how, or why it "knows what it knows".
You can also plausibly deny that the model's current output reflects what it told you at the time. Just say it was retrained or modified to some degree (and of course you don't keep backups); even if the AI doesn't reproduce the same answer, you can simply say the training has changed or the inputs have changed (you also don't back up the historical input values).
>I can imagine that with the rise of AI, you now have the basis of plausible deniability for insider trading:
>"The AI told me to do it"
Do you even need to use that as a defense? Doesn't the SEC have to provide affirmative proof that you traded on material nonpublic information? Have they ever convicted someone of insider trading on good trading records alone?
Otherwise, I don't really see how using an AI to launder stuff makes sense. You need to acquire that material nonpublic information somehow. Because it's nonpublic, you're probably going to have to go out of your way to get hold of it, so that's how the SEC is going to nail you. Feeding a bunch of leaked quarterly earnings reports into an AI isn't going to let you off the hook.
As a concrete example, Uber and Lyft can likely drive up prices without any price fixing or collusion simply by understanding the maximum price customers will pay and pushing it higher. Simple explore/exploit.
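To make the explore/exploit point concrete, here is a toy epsilon-greedy bandit sketch (all names and the demand curve are hypothetical, not anything Uber or Lyft actually runs): each candidate price is an arm, and the "reward" is the price when a simulated customer accepts it.

```python
import random

def epsilon_greedy_pricer(prices, accept_prob, rounds=10000, epsilon=0.1, seed=0):
    """Toy epsilon-greedy bandit: try candidate prices, keep pushing
    toward whichever price yields the highest average revenue."""
    rng = random.Random(seed)
    revenue = {p: 0.0 for p in prices}
    pulls = {p: 0 for p in prices}
    for _ in range(rounds):
        if rng.random() < epsilon:
            p = rng.choice(prices)  # explore: random price
        else:
            # exploit: best average revenue so far (unpulled arms first)
            p = max(prices, key=lambda q: revenue[q] / pulls[q]
                    if pulls[q] else float("inf"))
        pulls[p] += 1
        if rng.random() < accept_prob(p):  # simulated customer decision
            revenue[p] += p
    return max(prices, key=lambda q: revenue[q] / pulls[q] if pulls[q] else 0.0)

# hypothetical demand curve: willingness to pay falls off above ~$20
best = epsilon_greedy_pricer([10, 15, 20, 25, 30],
                             lambda p: max(0.0, 1.2 - p / 25))
```

The point is that nobody has to "decide" to raise prices; the loop just drifts toward whatever the simulated customers will tolerate.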
It is already happening with rents.
A lot of rental property owners have started using a tool to determine what the 'market can bear', and have started raising rents.
There is no collusion.
But they are all using the same technology, which is telling them to raise rent.
Since they are all raising rent, everyone is trapped and must pay; thus the 'market' can bear the price. Because the software can see that everyone else is increasing rent, it concludes the increase must be possible.
I'm not sure what percentage of a market needs to be owned to influence the total market; I'm sure different markets have different thresholds.
I think you are assuming that the renters are the 'actors', so there are too many individual choices for this to happen.
But here the actors are the rental property owners, a much smaller number. If you only have a few property owners, and they all use software telling them to increase prices, then the total market goes up, and individual renters have nowhere else to go, so they must pay.
It really is the same problem as collusion or price fixing; it's just that here 'the colluders' all happened to be using the same software. They aren't coordinating; they are coincidentally all being pushed in the same direction by a single software company.
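The ratchet mechanism can be sketched in a few lines. This is a toy simulation under a made-up rule (every landlord independently prices slightly above the observed market median), not RealPage's actual algorithm:

```python
def simulate_shared_pricer(initial_rents, rounds=12, markup=1.02):
    """Every landlord runs the same hypothetical rule: never price below
    ~2% above the current market median. No communication between them."""
    rents = list(initial_rents)
    for _ in range(rounds):
        market_median = sorted(rents)[len(rents) // 2]
        # each owner independently applies the identical recommendation
        rents = [max(r, round(market_median * markup)) for r in rents]
    return rents

before = [1000, 1050, 1100, 1150, 1200]
after = simulate_shared_pricer(before)  # every rent ratchets upward
```

Because each round's median is computed from already-raised rents, the identical rule feeds on itself and the whole market drifts up without anyone ever agreeing to anything.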
>"RealPage became the nation’s dominant provider of such rent-setting software after federal regulators approved a controversial merger in 2017, a ProPublica investigation found, greatly expanding the company’s influence over apartment prices. The move helped the Texas-based company push the client base for its array of real estate tech services past 31,700 customers.
>The impact is stark in some markets.
>In one neighborhood in Seattle, ProPublica found, 70% of apartments were overseen by just 10 property managers, every single one of which used pricing software sold by RealPage."
This is why a healthy market needs more than competition, it needs to be easy for new competitors to enter the market and make a buck. New competitors will enter the market and undercut prices to help themselves grow.
This is what Airbnb did and was addicted to, until it found itself in a position where many hosts are professionals with many properties, the rates are very uncompetitive with hotels, and travelers’ perception of the brand is shifting. I’ll choose a hotel over Airbnb most times now, except in select situations.