How often does that actually happen? Only when a gun owner is irresponsible enough to leave a loaded gun somewhere a toddler can reach it.
Similarly, AI can easily sound smart when directed to do so. It typically doesn't take action unless a person authorizes it. We're entering a time where people may soon be willing to grant that permission on a more permanent basis, which I would argue is still the fault of the person making that decision.
Whether you choose to have AI identify illegal immigrants, or you simply decide all immigrants are illegal, the decision is made by you, the human, not by a machine.