It's not saying "predictions, recommendations, and decisions"; the operative word is "or".
So it just needs to recommend or decide based on any data (such as how many items are left at this moment in a database) in a way that has real-world outcomes (someone stocks a shelf).
A thermostat that turns on an HVAC unit after temperature drops below a reference point technically qualifies as AI based on this definition.
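To make the thermostat point concrete, here is a toy sketch of a bang-bang controller framed as "a decision based on data". The function name and setpoint are invented for illustration; the point is only that a one-line threshold rule already satisfies such a broad definition:

```python
# Hypothetical sketch: a simple thermostat rule expressed as a
# "decision based on data". Nothing here is from a real product;
# the name and 68°F setpoint are illustrative assumptions.

def thermostat_decision(current_temp_f: float, setpoint_f: float = 68.0) -> str:
    """Decide whether to run the HVAC unit from a single data point."""
    return "heat_on" if current_temp_f < setpoint_f else "heat_off"

# A "decision" with a real-world outcome (the furnace turns on):
print(thermostat_decision(65.0))  # heat_on
print(thermostat_decision(72.0))  # heat_off
```

If this qualifies, the definition is doing very little work to separate "AI" from ordinary control logic.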
I've always thought the difficulty in defining what AI is stems from a need to differentiate humans from the "artificial" part of it.
The real issue with using AI (from a law-enforcement perspective) is the inability to put somebody under oath and ask why they made the decisions they did. All the FTC really needs to say is something like: "If we suspect your product is discriminating against a protected class, and you as a company can't explain a non-discriminatory decision-making process behind it, we will assume the worst."