
The way I see it, AI is about the tasks a system handles, rather than the computer itself. I would say that AI encompasses the set of tasks where the computer system is in some way better than its user, not just by having access to more computational resources, but by actually "reasoning" better. As a simple example, I'd argue that a basic spell-checker which works via dictionary lookup doesn't employ AI, but an extensive modern grammar checker does, as it can "reason" about the English language better than I (and most people) can.

Another way of thinking about it is that a non-AI system must always perform its task correctly, or we'd say it has a bug. Conversely, an AI system performs tasks in situations where there is some measure of uncertainty or subjectivity, and it might arrive at a way of performing the task that is suboptimal, or even entirely inappropriate, without being buggy - for such a system we'd say it did its best given the circumstances.

In the case of this hospital study, if they had used a simple "beep if a measurement goes above X" system, that wouldn't have been AI. Instead, they used an ML model which integrates many interdependent factors over time [0], and while it has a significant false-positive rate (and as such is often wrong), it applies what would absolutely count as "reasoning" if done by a trained human nurse.

[0] "The deterioration prediction model was a time-aware multivariate adaptive regression spline (MARS) model (Appendix, Sections 1–4). The model is made time-aware by incorporating risk score predictions from earlier in the encounter, the change in risk score since the previous assessment, and summaries of changes in the risk score over time." https://www.cmaj.ca/content/196/30/E1027
