The solution would be some combination of video and audio that records the clinical encounter and automatically generates a note based on what was discussed and performed. It falls under Paul Graham's "Schlep Task" (http://www.paulgraham.com/schlep.html): you'd have to work with individual clinicians, get their (and the patient's!) approval to record the encounter, record it from multiple angles, build tech into dumb devices (e.g. stethoscopes, Dopplers for pulses, O2 saturation monitors), and somehow use machine learning to integrate all that sensor data into a plain-text note.
It's probably the #1 problem from a provider's viewpoint right now, especially on the primary care side. Any individual provider will see 10-20 patients per day and write just as many notes.
Edit: I'd add that you'd also have to have access to existing notes because progress notes typically summarize the patient's prior visits.
Or, to be useful, would the final output be structured data? (For example, pre-filling a form with a set number of fields and data types.)
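Something like this, say; a minimal sketch, with field names that are purely illustrative and not any real EHR schema:

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class EncounterNote:
        chief_complaint: str
        history_of_present_illness: str
        vitals: dict                       # e.g. {"o2_sat": 97, "pulse": 72}
        exam_findings: List[str]
        assessment: str
        plan: List[str]
        prior_visit_summary: Optional[str] = None  # pulled from existing notes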
I'd drop my doctor immediately if this was implemented at their office, particularly if the reasoning behind it was "I'm too lazy to take notes myself".
This is a silly example because we don't need an AI for that, but it's late here and I wanted to at least get the idea across. No centralized architecture, no spying, no tracking. I'll pay for that.
Previous HN discussion: https://news.ycombinator.com/item?id=14055655
IMHO this is gonna be huge, but probably not gonna be huge for a decade. The technical challenges seem large and it'll take a lot of experimentation by entrepreneurs before they find a use-case where people are not currently willing to use some big corp's solution.
It addresses the part about running the neural network locally, plus adding local training data. Collecting inputs for the neural network is obviously left to developers, and it could be as hard as the rest.
So technologies like CoreML will be part of the solution but not all of it. Maybe we'll get there.
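The conversion half, at least, is already close to a one-liner. A rough sketch using coremltools' Keras converter, assuming a trained model saved as model.h5 (input/output names here are illustrative):

    import coremltools

    # Assumes a trained Keras model saved as model.h5.
    mlmodel = coremltools.converters.keras.convert(
        'model.h5',
        input_names=['image'],
        output_names=['probabilities'],
    )
    mlmodel.save('MyClassifier.mlmodel')  # ready to drop into an Xcode project

Collecting and preprocessing the on-device inputs to feed that model is the part that's still entirely on you.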
Currently, I want to see more research and movement in other ideas, fields, and methods that offer a glimpse of AGI. Maybe probabilistic (quantum? or neuromorphic?) computing? Maybe artificial life? Maybe cognitive architectures? A return to the symbolic approach?
PS: This is not a popular opinion, but I do want to share it with fellow HNers. We do most of our computing to solve the hard problems of our society and trade, and that is surely very noble. Having said that, when I was growing up, I used to see and feel that a lot of developments in computers happened for the sake of computing. It felt like hacking was an altogether different field: Linux, the whole Free Software movement, Windows NT, Doom, Quake, huge advances in compilers, the entire culture around it, etc.
Today I mostly see computing that works to solve real-world business problems like better advertisements, or to help other noble fields like astrophysics and genetics. I consider it an advancement of our tech society, but, as immature as I am, I miss that time.
Anyway, I would love to see AI for the sake of AI.
Ah, and at the same time, why not create machines that automatically, creatively, and cheaply do biomedical R&D? Or AI that accelerates innovation around technologies that have a health impact?
We aren't impossibly far off from this, but not that close yet either.
AI is no exception. I have swarms of CPU cores in my life, the totality of which could easily accomplish all of the "AI" functions of any value that I've observed in my life and far more. Today's AI that works for me (rather than against me) provides things like voice recognition, calendar management, reminders, to-do list management, and route planning. None of these functions requires even a small fraction of my personal CPU core armada, which are idle for 99% of their clock cycles.
Today's all-too-popular refrain of leveraging the cloud to provide sufficient compute capacity for these tasks is disingenuous and is too often accepted as truth when it's just cover for data exfiltration.
Not change, or stop necessarily. Just be aware. This is going to be one of the enablers of the biggest "behind the scenes" changes in how we work as a society in our generation. As devs, we should take some responsibility for how we act in that regard. I don't see enough of that attitude at the moment.
- Natural language understanding (choosing the right dialog options)
- Interpretation of visual cues (navigating terrain and buildings)
- Hierarchical planning (training certain Pokémon to relatively high levels rather than a scattershot approach)
- Puzzle solving (for gym access)
and quite a bit more.
Everyone knows Pokemon Red is clearly the best Pokemon game of them all.
I would also add to this that once the basic mechanics are learned, it should be much easier for that AI to pick up another similar game (for instance, Pokemon Gold or Silver). Adapting to slightly different but similar environments is a plus.
I think when thinking about these tasks, the right question to ask is "what's the dumbest way this could be solved?" rather than "what high-level knowledge would I use to solve this?" Otherwise you will be disappointed when the task is solved but the methods don't look like what you would like them to.
Part of a promising system is providing more understanding of the output instead of being blind and deaf, but indeed the main decision structure has to be able to handle it in the first place.
Maybe making generic novelty metrics will turn out to be harder than anything else anyway, but it has some intuitive appeal.
So I guess we're on the same page, we do need to understand the game output, but IMO understanding it well enough to know when things are novel & interesting vs when things are not is probably enough to get quite far.
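To make that concrete in the "dumbest way" spirit: the simplest novelty metric I can think of is a count-based bonus over hashed observations. A sketch, nothing Pokémon-specific:

    from collections import defaultdict
    import hashlib

    counts = defaultdict(int)

    def novelty_bonus(observation_bytes, scale=1.0):
        """Bonus decays as 1/sqrt(visits) for this hashed observation."""
        key = hashlib.md5(observation_bytes).hexdigest()
        counts[key] += 1
        return scale / counts[key] ** 0.5

Feed the raw screen (or RAM state) in as bytes and the agent gets paid for reaching things it hasn't seen much before; whether that alone gets you out of Pallet Town is an open question.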
I've made games, released them, and suffered so many complaints about how hard they were that I'd dumb down the A.I. and still get complaints about how hard it was.
Or another two-player turn-based game that included three different difficulties and had a ton of people telling me easy mode -- which was basically the computer making random moves -- was impossible to beat, the computer cheated, the game was stupid, and I should kill myself for making such a stupid game.
A.I. isn't usually super great in games because it doesn't have to be to give a challenge to the vast majority of people.
I am getting into DL myself and am very excited about the potential, but I have spent literally 5 hours on setup (Python and a DL-specific AWS Ubuntu instance) and have gotten exactly nowhere. Version conflicts, IPython not working in a venv, dependency mismatches where many people built demos I want to use on Python 2.7 that don't work with my 3.5 setup, can't get matplotlib to display graphs over X11/remote SSH, etc, etc, etc. So frustrating.
I want programmers to be able to one-button setup a DL machine so they can start tinkering and not wasting time on setup/dependencies/bugs.
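For what it's worth, the closest thing to one-button today is probably a prebuilt Docker image, something like `docker run -it --rm -p 8888:8888 tensorflow/tensorflow:latest-jupyter` (the image tag is just an example; check Docker Hub for current ones), which drops you into a ready-made Jupyter environment without touching the host's Python at all.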
Spinning up a Jupyter notebook with PyTorch 0.2 is as simple as `floyd run --env pytorch-0.2 --mode jupyter`. All the steps you mention in your comment are automated.
DevOps hassles are, of course, just the first of many hurdles to doing effective deep learning. Experiment management, version control, reproducibility, sharing & collaboration, etc. are other important problems.
To me, the core technology of my startup looks better than anything I've seen in AI, but my technology is from some of my original research in applied math based on some advanced pure/applied math prerequisites. I'm not calling my work AI, but maybe some AI people would.
E.g., my work does well with the meaning of content. Just how my applied math does that has nothing to do with anything I've seen in current AI.
My view is that the current directions of AI are essentially out of gas -- there will continue to be new applications of the existing techniques, but the new techniques being proposed are not promising.
IMHO, for new techniques for AI we need to do much better with (A) applied math and/or (B) implementations of relatively direct borrowing from natural intelligence, e.g., at the level of a kitty cat and working up from there.
E.g., for the math, stochastic optimal control can look brilliant, even prescient, and the basics have been sitting on the shelves of the research libraries for decades.
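To make the "on the shelves for decades" point concrete, here is a sketch of finite-horizon LQR, the textbook core of stochastic optimal control (toy matrices, numpy only; with additive Gaussian noise the same gains are optimal by certainty equivalence):

    import numpy as np

    A = np.array([[1.0, 1.0], [0.0, 1.0]])   # toy double-integrator dynamics
    B = np.array([[0.0], [1.0]])             # control input matrix
    Q = np.eye(2)                            # state cost
    R = np.array([[0.1]])                    # control cost
    T = 50                                   # horizon

    P = Q.copy()
    gains = []
    for _ in range(T):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # feedback gain
        P = Q + A.T @ P @ (A - B @ K)                      # backward Riccati step
        gains.append(K)
    gains.reverse()  # u_t = -gains[t] @ x_t is the optimal policy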
For years now, if not decades, we've been creating science - as measured in publications - much faster than any human being, or group of humans, can keep up with. Granted, lots of those papers are probably bogus, but then you don't know which is which until you actually sit down and read it...
I believe we're missing tons of insights and discoveries that we have all the necessary components for making, and it would only take a smart person to read the right three papers to connect the dots. Alas, chances that any human will do that are fast approaching zero. I think the only way to tackle this is to employ computation.
What I would love to see, therefore, is an AI system that would a) filter out bogus/bullshit papers and mark shoddy research that probably needs to be redone, and b) mine the remaining ones for connections, correlations, discovering which research supports which and which research contradicts each other.
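Part (b) at least has a dumb baseline: embed abstracts and surface suspiciously similar pairs across subfields for a human to inspect. A sketch with scikit-learn (part (a), filtering out the bogus papers, is the genuinely hard bit and not attempted here):

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Toy corpus; in reality this would be millions of abstracts.
    abstracts = [
        "Protein X binds receptor Y under hypoxic conditions.",
        "Compound Q inhibits receptor Y in murine models.",
        "A survey of graph colouring heuristics.",
    ]

    vectors = TfidfVectorizer(stop_words='english').fit_transform(abstracts)
    sims = cosine_similarity(vectors)

    # Rank cross-paper pairs by similarity for a human to inspect.
    i, j = np.triu_indices(len(abstracts), k=1)
    order = np.argsort(-sims[i, j])
    for a, b in zip(i[order][:10], j[order][:10]):
        print(round(sims[a, b], 3), abstracts[a], '<->', abstracts[b])

TF-IDF obviously won't find deep conceptual links, but even this level of matching across fields that never cite each other would surface candidates worth a human's three papers.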
Can you personalize what I learn? Can you find the exact best explanation for my current level by looking at my facial expression?
AIs don't need to know everything, but they must know where everything is and how to get that info. Sort of a DNS for intelligent repos, with APIs to access them. Wolfram Alpha would be one, but we need more: medicine silos, agriculture silos; Wikipedia, Facebook, etc. should have their own silos with AI interfaces.
Then Google AI would help us search for silos of interest.
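A toy version of that registry could be little more than a mapping from topic to API endpoint plus a query convention. Everything below is hypothetical (note the .example domains):

    import requests

    # Hypothetical silo endpoints, made up for illustration.
    SILO_REGISTRY = {
        "medicine":    "https://api.medsilo.example/query",
        "agriculture": "https://api.agrisilo.example/query",
        "general":     "https://api.generalsilo.example/query",
    }

    def ask(domain, question):
        """Route a question to the right silo; fall back to the general one."""
        endpoint = SILO_REGISTRY.get(domain, SILO_REGISTRY["general"])
        resp = requests.get(endpoint, params={"q": question})
        resp.raise_for_status()
        return resp.json()

The hard part isn't the routing, of course; it's getting the silo owners to agree on the query convention.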
- automated security guard, especially in sensitive areas like public restrooms
- watching and editing hours of stock footage for a good summary, or the highlights
- automated refereeing in sports
The Keras lead developer wrote a post called "The future of deep learning". The podcast Partially Derivative has a good summary of it.
On the implementation side, I'm looking forward to distribution. Computers are all around us, sitting idly by – how can we put them to use? How can we make them secure? We're starting to see integration of ML models into mobile systems (with Apple's .mlmodel and ARKit).
Please don't suggest the use of APIs -- they aren't cost-effective, and the future lies in the ability to tweak things on your own.
- Also: availability of DL libraries across different languages and stacks. Yes, there are ways, but they need extra effort to get working. At this moment, to learn AI you first need to learn Python, R, or MATLAB.
The future of AI lies in its applications, and that can only happen through experimentation. If you come across a possible application, it shouldn't take you a year to really start experimenting.
You're competing against optimizing compilers there.
1. Do you still get that many AI deals in your inbox?
2. Are you closing as many deals this quarter as one year ago?
AI in general is not a bubble, and progress will continue, but I imagine most entrepreneurs are vastly overpromising; after all, they only have more to gain by doing this, and they might lose by not doing it if others are (so it's like a prisoner's dilemma).
I should also say that back in 1995 and 1996 a lot of people were saying we were in a "tech bubble", and they were technically right, but they underestimated just how sustained the euphoria was. They looked like idiots 2 years later when stocks they decided against were quintupling in value, but less so 4 years later when those stocks were worth nothing. So the main lesson is that there are really three bubbles: the actual bubble; the bubble where the elites and those in-the-know finally can't believe it anymore and bail out; shortly followed by the bubble where all the others bail out (which triggers the most dramatic decline).
DL tops out at mapping vector spaces to less complicated vector spaces. Incredibly powerful, but incredibly limited in which problems it can emulate.
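Quite literally: strip away the framework and a feed-forward net is just a learned map from R^n to R^m. A minimal sketch:

    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(1000, 256)), np.zeros(256)
    W2, b2 = rng.normal(size=(256, 10)), np.zeros(10)

    def net(x):
        h = np.maximum(x @ W1 + b1, 0.0)  # ReLU hidden layer
        return h @ W2 + b2                # 10-dim output

    y = net(rng.normal(size=1000))  # 1000-dim vector in, 10-dim vector out

Everything deep learning does well fits that shape; everything it struggles with doesn't.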