Ask HN: What's the next big advance in AI you're looking forward to most?
66 points by crypticlizard on Sept 17, 2017 | 77 comments



I'm a medical resident. I see a patient in clinic, do a history and physical, sit down at a computer, and write a note describing what I just did. That's medical documentation, and it's a burdensome, growing problem.

The solution would be some combination of video or audio that records the clinical encounter and automatically generates a note based on what was discussed and performed. It falls under Paul Graham's "schlep" category (http://www.paulgraham.com/schlep.html); you'd have to work with individual clinicians, get their (and the patient's!) approval to record the encounter, record it from multiple angles, build tech into dumb devices (e.g. stethoscopes, Dopplers for pulses, O2 saturation monitors), and somehow use machine learning to integrate all that sensor data into a plain-text note.

It's probably the #1 problem from a provider's viewpoint right now, especially on the primary care side. Any individual provider will see 10-20 patients per day and write just as many notes.

Edit: I'd add that you'd also have to have access to existing notes because progress notes typically summarize the patient's prior visits.
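To make the shape of that pipeline concrete, here's a minimal sketch; both model calls are faked with canned output so it runs, since real transcription and summarization are each hard ML problems in their own right:

    # Hypothetical pipeline shape: audio -> transcript -> draft note.
    # The two model calls are stand-ins; the hard ML lives inside them.

    def transcribe(audio_path):
        # Stand-in for a speech-to-text model.
        return "Doctor: what brings you in? Patient: chest pain since Monday."

    def summarize(transcript, prior_notes):
        # Stand-in for a model that drafts the note, conditioned on
        # prior visits (which, as noted above, it would need access to).
        return "HPI: chest pain since Monday. (Prior visits: %d)" % len(prior_notes)

    def document_encounter(audio_path, prior_notes):
        transcript = transcribe(audio_path)
        return summarize(transcript, prior_notes)  # clinician still reviews and signs

    print(document_encounter("encounter.wav", prior_notes=[]))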


What an interesting idea. Would the final output be raw text, a sort of English summary of what was said?

Or, to be useful, would the final output be structured data? (Example: pre-filling a form with a set number of fields and data types.)


Most clinical documentation in EHRs is template based with lots of boilerplate into which key words and phrases are dropped. And some template sections are free text, enabling highly customizable final documentation.
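For instance, a trivial sketch of that drop-phrases-into-boilerplate pattern (field names invented for illustration):

    # Boilerplate template with slots; structured values dropped in.
    TEMPLATE = ("Patient is a {age}-year-old {sex} presenting with "
                "{complaint}. Exam: {exam_findings}.")

    fields = {
        "age": 54,
        "sex": "male",
        "complaint": "intermittent chest pain",
        "exam_findings": "heart regular rate and rhythm, lungs clear",
    }

    print(TEMPLATE.format(**fields))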


> The solution would be some combination of video or audio that records the clinical encounter and automatically generates a note based on what was discussed and performed.

I'd drop my doctor immediately if this was implemented at their office, particularly if the reasoning behind it was "I'm too lazy to take notes myself".


FYI: that link redirects to pg's homepage for me, since the autolinker is erroneously picking up the `);` (auto-linking is hard!). Here's the link, for those who are lazy like me: http://www.paulgraham.com/schlep.html


It runs on my laptop and phone and doesn't phone home. Example: I download a trained network, it runs locally (maybe on some special-purpose hardware), discovers remote resources (e.g. train, metro, and bus timetables), and tells me the shortest route to my destination without any company knowing where I'm going.

This is a silly example, because we don't need an AI for that, but it's late here and you get the idea. No centralized architecture, no spying, no tracking. I'll pay for that.
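For the route part specifically, plain shortest-path search over locally downloaded timetables already suffices; a minimal sketch with a toy graph and invented stop names:

    import heapq

    # Toy transit graph: stop -> [(neighbor, minutes)], stored locally.
    graph = {
        "home":    [("metro_a", 5), ("bus_1", 7)],
        "metro_a": [("metro_b", 12)],
        "bus_1":   [("metro_b", 20)],
        "metro_b": [("work", 4)],
        "work":    [],
    }

    def shortest_route(start, goal):
        """Dijkstra over the local graph; nothing leaves the device."""
        queue = [(0, start, [start])]
        seen = set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == goal:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for neighbor, minutes in graph[node]:
                heapq.heappush(queue, (cost + minutes, neighbor, path + [neighbor]))
        return None

    print(shortest_route("home", "work"))  # (21, ['home', 'metro_a', 'metro_b', 'work'])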


There's a research paper published recently (ironically, by Google) on this topic, which additionally describes how you could improve the model locally on the device without revealing the training data to the model's owner.

https://research.googleblog.com/2017/04/federated-learning-c...

Previous HN discussion: https://news.ycombinator.com/item?id=14055655

IMHO this is gonna be huge, but probably not for a decade. The technical challenges seem large, and it'll take a lot of experimentation by entrepreneurs before they find a use case where people are not currently willing to use some big corp's solution.
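The core mechanism in that paper, federated averaging, fits in a few lines of numpy; a toy sketch with a linear model (the real system adds device sampling, secure aggregation, and much more):

    import numpy as np

    # Federated averaging, toy version: each client takes a local gradient
    # step on its private data; the server sees only model weights, never data.

    def local_update(w, X, y, lr=0.1):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of MSE for a linear model
        return w - lr * grad

    rng = np.random.default_rng(0)
    w_true = np.array([2.0, -1.0])
    clients = []
    for _ in range(5):  # 5 clients, each with private local data
        X = rng.normal(size=(20, 2))
        y = X @ w_true + 0.01 * rng.normal(size=20)
        clients.append((X, y))

    w = np.zeros(2)  # global model
    for _ in range(50):
        new_ws = [local_update(w, X, y) for X, y in clients]
        w = np.mean(new_ws, axis=0)  # server averages the client models

    print(w)  # approaches w_true without raw data leaving the clients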


I listened to a talk about this, and it seems that you can recover a lot of data by looking at the "diffs". You could probably even train an autoencoder-like architecture to decode these deltas for you.


Very true, the paper calls out that there are no formal privacy guarantees on this style of learning. The diffs that are sent ostensibly provide more privacy than just the raw data, but it's hard to quantify how much "more" privacy that really is, and where it applies/doesn't apply. I do applaud the step in the right direction, however.
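To make the leak concrete: for a single linear layer with squared error, the weight gradient is an outer product of the error and the input, so every row of a reported diff is a scaled copy of the user's private input. A small numpy demonstration:

    import numpy as np

    # One client step on a linear layer y = W x with squared error.
    rng = np.random.default_rng(1)
    W = rng.normal(size=(3, 5))
    x = rng.normal(size=5)          # private input
    t = rng.normal(size=3)          # private target

    err = W @ x - t                 # dL/dy (up to a constant factor)
    grad = np.outer(err, x)         # dL/dW: every row is err[i] * x

    # The server, seeing only `grad`, recovers x up to scale:
    row = grad[0]
    x_hat = row / np.linalg.norm(row)
    print(np.allclose(abs(x_hat @ x) / np.linalg.norm(x), 1.0))  # True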


Along the same lines, I want the voice recognition in my car / gps to be able to have a conversation (within the domain of travel, of course) without relying on a big AI model in the sky.


Isn’t this how CoreML works on the new iOS?


I didn't know about CoreML, thank you for mentioning it. I googled for it and found an analysis at https://www.infoworld.com/article/3200885/machine-learning/a...

It addresses the part about running the neural network locally, plus training on local data. Collecting inputs for the neural network is obviously left to developers, and that could be as hard as the rest.

So technologies like CoreML will be part of the solution but not all of it. Maybe we'll get there.


Currently, we are focusing on a very narrow field of Artificial Intelligence. Not that this is a bad thing, but we should not call it "AI". We are mostly solving the problem of data analysis at scale, which, again, for businesses is the most logical thing to do. I do not doubt that in the future, data analysis will be an important piece of the jigsaw in a true AI system.

Currently, I look forward to / want to see more research and movement in other ideas, fields, and methods that peep into AGI. Maybe Probabilistic (Quantum? or Neuromorphic?) Computing? Maybe Artificial Life? Maybe Cognitive Architectures? A return to the Symbolic?

PS: This is not a popular opinion, but I do want to share it with fellow HNers. We do most of our computing to solve the hard problems in our society and trade, and that is surely very noble. Having said that, when I was growing up, I used to see and feel a lot of developments in computers happening for the sake of computing. It felt like hacking was an altogether different field: Linux, the whole Free Software movement, Windows NT, Doom, Quake, huge advances in compilers, the whole culture around it, etc.

Today I mostly see computing that works to advance and solve real-world business problems like better advertisements, or that helps with other noble fields like astrophysics and genetics. I consider it an advancement of our tech society, but, as immature as I am, I miss that time.

Anyway, I would love to see AI for the sake of AI.


AI that looks at all the medical research in the world, and helps people get the most accurate medical diagnosis and treatment.

Ah, and at the same time, why not create machines that automatically, creatively, and cheaply do biomedical R&D? Or AI that accelerates innovation around technologies that have a health impact?


Isn't IBM's Watson trying to do the same?


IBM is trying, Google's DeepMind is trying, and Amazon also has a healthcare lab, but I don't think any of them are there yet.


And it's failing at it. Hard.


Source?


IBM is all marketing


I want to see much better human-in-the-loop systems in general. A simple example is systems that allow users to tweak the output of creative/generative systems, e.g. neural style transfer methods that let the user easily adjust the output with human-centric feedback. I want to say "good start, but make the background more like Caravaggio would have painted it, and lighten up the foreground to be more playful" and have the algorithm adjust the output accordingly.

We aren't impossibly far off from this, but not that close yet either.


As with everything in computing, what I want is more capability that runs on my own devices—my own compute servers, my workstation, my laptop, or my mobile device. Every application or service that has to interact with third-party servers ("the cloud") to provide its functionality is, to my mind, defective or at least deficient of the ideal.

AI is no exception. I have swarms of CPU cores in my life, the totality of which could easily accomplish all of the "AI" functions of any value that I've observed in my life and far more. Today's AI that works for me (rather than against me) provides things like voice recognition, calendar management, reminders, to-do list management, and route planning. None of these functions requires even a small fraction of my personal CPU core armada, which are idle for 99% of their clock cycles.

Today's all-too-popular refrain of leveraging the cloud to provide sufficient compute capacity for these tasks is disingenuous and is too often accepted as truth when it's just cover for data exfiltration.


What does this have to do with the "next big advance in AI"?


Everything. But to rephrase GP's point in terms of the main question: the next big advance in AI they - and I - are looking for is AI being local, and not in the cloud.


I too am longing for local AI. That way the AIs I interact with can truly begin to learn. From me. For me. Not the same bot learned from all the aggregated Big Data for everyone. Personalized local AI. Also: making AI local opens the door for privacy protection once again, I hope... I want my bot to protect my data, not feed it to the big corporations.


I can't wait for the people building and deploying AI / Machine Learning tools on a large scale to become aware of the broad societal implications of what they are helping to do. Or, if they already are aware, to impress that knowledge loudly and clearly onto their peers.

Not change, or stop, necessarily. Just be aware. This is going to be one of the enablers of the biggest "behind the scenes" changes in how we work as a society in our generation. As devs, we should take some responsibility for how we act in that regard. I don't see enough of that attitude at the moment.


Maybe not the next big advance, but I'll be truly impressed when an AI system with no game-specific prior knowledge is able to complete a game of Pokémon Yellow within a reasonable amount of game time.

It requires:

- Natural language understanding (choosing the right dialog options)

- Interpretation of visual cues (navigating terrain and buildings)

- Hierarchical planning (training certain Pokémon to relatively high levels rather than a scattershot approach)

- Puzzle solving (for gym access)

and quite a bit more.


Don't be ridiculous. Nobody would waste the effort on Pokemon Yellow.

Everyone knows Pokemon Red is clearly the best Pokemon game of them all.

:)

I would also add to this that once the basic mechanics are learned, it should be much easier for that AI to pick up another similar game (for instance, Pokemon Gold or Silver). Adapting to slightly different but similar environments is a plus.


Technically, many of these are not really necessary IMO. The path most likely to result in a solution to this problem is some scaled-up version of deep reinforcement learning (with a novelty reward) without any explicit natural language understanding; though planning will probably be involved in scaling it up, and understanding visual cues is already trivial.

I think when thinking about these tasks, the right question to ask is "what's the dumbest way this could be solved?" rather than "what high-level knowledge would I use to solve this?"; otherwise you will be disappointed when the task is solved but the methods don't look like what you would like them to.
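For what it's worth, the "dumbest way" version of a novelty reward is just a count-based bonus added to the environment reward; a minimal sketch (the constant and the state encoding are arbitrary):

    from collections import defaultdict
    from math import sqrt

    # Count-based exploration bonus: reward rarely visited states, so the
    # agent makes progress even when the game's own reward is sparse.
    visit_counts = defaultdict(int)
    BETA = 0.1  # trades off novelty vs. real reward

    def shaped_reward(state, env_reward):
        visit_counts[state] += 1
        return env_reward + BETA / sqrt(visit_counts[state])

    # e.g. deep in a Pokemon run with no reward signal in sight:
    print(shaped_reward(("route_9", "tile_42"), env_reward=0.0))  # 0.1
    print(shaped_reward(("route_9", "tile_42"), env_reward=0.0))  # ~0.0707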


You would be surprised how bad actual AI is at games. All kinds of games, but especially ones where the goal is far away and you don't readily know whether you're succeeding (the definition of a sparse-reward game).

Part of a promising system is giving it more understanding of the game's output, instead of leaving it blind and deaf, but the main decision structure still has to be able to handle it.


I think intrinsic motivation is pretty important for solving this, and DeepMind has shown it to work well for games, since it makes the reward far less sparse, in a way that is very similar to our own way of playing games, which IMO is somewhat important since we are the intended consumers.

Maybe making generic novelty metrics will turn out to be harder than anything else anyway, but it has some intuitive appeal.

So I guess we're on the same page: we do need to understand the game output, but IMO understanding it well enough to know when things are novel and interesting vs. when they are not is probably enough to get quite far.


Depending on the game, it may be done on purpose.

I've made games, released them, and suffered so many complaints about how hard they were that I'd dumb down the A.I. and still get complaints about how hard it was.

Another two-player turn-based game of mine included three different difficulties, and a ton of people told me easy mode -- which was basically the computer making random moves -- was impossible to beat, that the computer cheated, that the game was stupid, and that I should kill myself for making such a stupid game.

A.I. isn't usually super great in games because it doesn't have to be to give a challenge to the vast majority of people.


I want robust, packaged setups (docker, AWS, whatever) to be cheap/free, accessible, and work out of the box with no end-user setup.

I am getting into DL myself and am very excited about the potential, but I have spent literally 5 hours on setup (Python and a DL-specific AWS Ubuntu instance) and have gotten exactly nowhere. Version conflicts, IPython not working in a venv, dependency mismatches where many people built demos I want to use on Python 2.7 that don't work with my 3.5 setup, can't get matplotlib to display graphs over X11/remote SSH, etc., etc., etc. So frustrating.

I want programmers to be able to one-button set up a DL machine so they can start tinkering and not waste time on setup/dependencies/bugs.
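A pinned Dockerfile already gets a lot of the way there on the dependency side; a sketch (the image tag and version pins below are illustrative assumptions, not a recommendation; check Docker Hub for current tags):

    # Dockerfile: one reproducible DL sandbox, no host-side setup.
    FROM tensorflow/tensorflow:1.3.0-py3
    # Pin extras so demos written against specific versions keep working.
    RUN pip install keras==2.0.8 matplotlib==2.0.2
    EXPOSE 8888
    # The base image launches a Jupyter notebook server by default.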


DevOps is indeed a huge bottleneck in deep learning. Provisioning machines, installing drivers and packages, and managing their dependency hell distracts focus from the core deep learning work. At FloydHub (I'm a co-founder), we're building a zero-setup deep learning platform.

Spinning up a Jupyter notebook with Pytorch 0.2 is as simple as `floyd run --env pytorch-0.2 --mode jupyter`. All the steps you mention in your comment are automated.

DevOps hassles are, of course, just the first of many hurdles to doing effective deep learning. Experiment management, version control, reproducibility, sharing and collaboration, etc. are other important problems.


I hear you. Check out FloydHub [https://www.floydhub.com/].


Personally I'd love to talk to a bot that's mastered and memorized many books and all of philosophy and science and is also researching philosophy. Imagine getting his/her/their opinion on all the subtle questions life throws at you.


An "AI tutor," of any sort, would be incredible. One-on-one instruction, directly from expert to novice, is leaps and bounds ahead of every other way we commonly learn. It can be so frustrating to research a topic online and find that all the resources are lacking in some way.


The irony is that a lot of philosophical problems are already solved if you dig deep enough; the real question is how you present the answers to specific humans in a way that they understand, and whether the question they asked is even the question they wanted answered in the first place, which it usually isn't.


I agree. Although, to be fair, Google Assistant's knowledge-graph-powered answers are already pretty awesome.


With the research directions and methodology that dominate AI now, I don't look forward to anything significantly new or better.

To me, the core technology of my startup looks better than anything I've seen in AI, but my technology is from some of my original research in applied math based on some advanced pure/applied math prerequisites. I'm not calling my work AI, but maybe some AI people would.

E.g., my work does well with the meaning of content. Just how my applied math does that has nothing to do with anything I've seen in current AI.

My view is that the current directions of AI are essentially out of gas -- there will continue to be new applications of the existing techniques, but new techniques are not promising.

IMHO, for new techniques for AI we need to do much better with (A) applied math and/or (B) implementations of relatively direct borrowing from natural intelligence, e.g., at the level of a kitty cat and working up from there.

E.g., for the math, stochastic optimal control can look brilliant, even prescient, and has had the basics on the shelves of the research libraries for decades.


My dream: AI applied to scientific research.

For years now, if not decades, we've been creating science - as measured in publications - much faster than any human being, or group of humans, can keep up with. Granted, lots of those papers are probably bogus, but then you don't know which is which until you actually sit down and read it...

I believe we're missing tons of insights and discoveries that we have all the necessary components for making, and it would only take a smart person to read the right three papers to connect the dots. Alas, chances that any human will do that are fast approaching zero. I think the only way to tackle this is to employ computation.

What I would love to see, therefore, is an AI system that would a) filter out bogus/bullshit papers and mark shoddy research that probably needs to be redone, and b) mine the remaining ones for connections, correlations, discovering which research supports which and which research contradicts each other.
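Part (b) at least has crude baselines today, e.g. surfacing candidate connections by text similarity over abstracts. A minimal scikit-learn sketch (real mining would need far more than bag-of-words):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Toy corpus of abstracts; in practice, millions of papers.
    abstracts = [
        "Protein folding predicted via energy minimization heuristics.",
        "Deep learning models for predicting protein structure.",
        "Crop yields under drought conditions in semi-arid regions.",
    ]

    tfidf = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
    sims = cosine_similarity(tfidf)

    # Papers 0 and 1 share the term "protein", so they surface as a
    # candidate connection; paper 2 does not.
    print(sims.round(2))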


We're very, very far from a good discovery AI. Currently the world is at the stage of "not getting trapped in a local minimum when optimizing known decisions", which is very far from actually generating decisions. The latest research on sparse-reward games is a good step forward, but it is only one step.


ML that helps me learn faster as a human. Let's be honest, watching videos online is suboptimal. The middle-ages method of listening to a professor in a large room is suboptimal.

Can you personalize what I learn? Can you find the exact best explanation for my current level by looking at my facial expression?


Not needing millions of examples and being resilient to adversarial examples


AI-NET. Specific AI that can communicate with other specialized AIs using a simple protocol to solve specific problems in their field with the help of others.

AIs don't need to know everything, but they must know where everything is and how to get that info. Sort of a DNS for intelligent repos, plus APIs to access them. Wolfram Alpha would be one, but we need more: medicine silos, agriculture silos; Wikipedia, Facebook, etc. should have their own silos with AI interfaces.

Then Google AI would help us search for silos of interest.
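The "DNS for silos" part could start out embarrassingly simple; a hypothetical sketch (all names and endpoints invented for illustration):

    # Hypothetical AI-NET directory: topic -> specialist endpoint.
    REGISTRY = {
        "medicine":    "https://medical-silo.example/api",
        "agriculture": "https://agri-silo.example/api",
        "math":        "https://wolframalpha.example/api",
    }

    def resolve(query_topics):
        """Return the endpoints of the specialist AIs for a query."""
        return [REGISTRY[t] for t in query_topics if t in REGISTRY]

    # A generalist AI decomposes a question, then consults specialists:
    print(resolve(["medicine", "math"]))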


An AI that can watch videos:

- automated security guard, especially in sensitive areas like public restrooms

- watching and editing hours of stock footage for a good summary, or the highlights

- automated refereeing in sports


I'm excited for self-driving cars and ML/AI tools that help professionals (e.g., drug discovery, treatment).

The Keras lead developer wrote a post called "The future of deep learning"[1]. The podcast Partially Derivative has a good summary of it[2].

On the implementation side, I'm looking forward to distribution. Computers are all around us, sitting by idly – how can we put them to use? How can we make them secure? We're starting to see integration of ML models into mobile systems (with Apple's .mlmodel and ARKit).

[1]:https://blog.keras.io/the-future-of-deep-learning.html

[2]:http://partiallyderivative.com/podcast/2017/07/25/the_future...

[3]:http://papers.nips.cc/paper/4390-hogwild-a-lock-free-approac...


A conversational bot. One where you can not determine if you are talking to a human or not. This could disrupt the huge call center business.


Come on, they're using a lot of automation already. Do you really want more failure on top of that?


Stronger incorporation of hard constraints / rule or program learning in models for control and sequential decision making.
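One minimal form of a hard constraint in a learned controller is action masking: the model scores actions, but a hand-written rule decides which are even legal. A sketch (the constraint function stands in for whatever the domain dictates):

    import numpy as np

    # Q-values from some learned model for 4 actions.
    q_values = np.array([2.1, 5.3, 0.4, 3.9])

    def legal(action_id, state):
        # Hard constraint supplied as a rule, not learned; e.g.
        # "action 1 violates a safety limit in this state".
        return action_id != 1

    state = None  # placeholder
    masked = np.where([legal(a, state) for a in range(4)], q_values, -np.inf)
    action = int(np.argmax(masked))  # picks 3, never the forbidden 1
    print(action)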


Can you give an example?



- Dumbing down the implementation. Most tools like TF, Keras, and PyTorch are heading that way, but there is still an expectation that the engineer understands the internals. The future of AI depends on the ability of every engineer to use these techniques without the learning curve there is right now.

Please don't suggest the use of APIs; they aren't cost effective, and the future lies in the ability to tweak things on your own.

- Also, availability of DL libraries across different languages and stacks. Yes, there are ways, but they need extra effort to get working. At the moment, to learn AI you first need to learn Python, R, or MATLAB.

The future of AI lies in its applications. And that can only happen through experimentation. If you come across a possible application, it shouldn't take you a year to really start experimenting.
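To be fair, the high-level libraries are already close to this for standard cases; a complete Keras classifier, roughly as the 2017-era API had it, is only a few lines:

    # A full working classifier in Keras: define, compile, train.
    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense

    x = np.random.rand(1000, 20)             # toy data
    y = (x.sum(axis=1) > 10).astype(int)     # toy labels

    model = Sequential([
        Dense(32, activation="relu", input_shape=(20,)),
        Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.fit(x, y, epochs=5, batch_size=32)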


AI toolbox for manipulating real life objects. Can't wait to make a robot that slices tomatoes for me!


Given the same algorithm implemented in any language, the AI should be able to compile (translate) every program to the same highly optimized machine code, or at least to code with the same performance, learning only from the sources of other programs and their resulting compiled code.


And what would you do with the result? It is worthless to just run it.

You're competing against optimizing compilers there.


I'm looking forward to seeing the movement of software from our computers into the third dimension of our daily lives, especially with robots in the workplace. Computer vision used to be the limiting factor, and now it's not. Especially for things like science (e.g. Opentrons), the concept of brute-force, massively parallel experimentation and data collection will be a huge game changer for discovery. There are areas in chemistry and materials science alone where modest improvements to existing processes would have such massive impacts on our society that we probably couldn't even estimate the resulting benefits to their fullest extent.


Question to VCs on HN: Is AI declining? Not that it is dying, but my feeling is that general investment activity in AI has slowed down. Maybe I am wrong.

1. Do you still get as many AI deals in your inbox?

2. Are you closing as many deals this quarter as one year ago?


The head of Facebook AI said a few years ago that AI has "died 4 times in 5 decades due to overpromising".

AI in general is not a bubble, and progress will continue, but I imagine most entrepreneurs are vastly overpromising; after all, they only have more to gain by doing so, and they might lose by not doing it if others are (so it's like a prisoner's dilemma).

I should also say that back in 1995 and 1996 a lot of people were saying we were in a "tech bubble", and they were technically right, but they underestimated just how sustained the euphoria was. They looked like idiots 2 years later when stocks they decided against were quintupling in value, but less so 4 years later when those stocks were worth nothing. So the main lesson is there are really three stages to a bubble: the actual bubble, the point where the elites and those in-the-know finally can't believe it anymore and bail out, shortly followed by the point where all the others bail out (which triggers the most dramatic decline).


GAN improvements that lead towards finding global minima for any given distribution mapping. GANs are versatile but still suffer from mode collapse and instability, drastically reducing their utility.


Regulation.


To be honest: games. AI that could write a story by itself, and characters that interact based on AI and not some state machines, would be an amazing experience.


AI writing AI.


Big advance? AI that can learn conditional logic.

DL tops out at mapping vector spaces to less complicated vector spaces. Incredibly powerful, but incredibly limited in which problems it can emulate.


Distributed and decentralized AI a la things like Synapse https://synapse.ai/


I want to see an "open models" community where trained models have been validated with high accuracy that anyone can use and share.


Neural network controllers for bipedal robots that grant them adaptability, efficiency, and elegance of movement.


I want an AI assistant that can do my taxes.


I'm looking forward to AI that can recognize ads visually, and can perfectly extract ASCII text from an article.


Not sure if it will happen, but I'd love to see life after back propagation (1). In particular, I'd love to see advances in unsupervised learning, since the amount of labeled data is a huge limit on what AI can currently do.

https://www.axios.com/ai-pioneer-advocates-starting-over-248...


AI to help software engineers, since that would impact me the most in my daily job.


Specifically I would love an AI tool that looks at the context of the expression I'm editing and finds suggestions that satisfy type checks.
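A toy version of that filter is easy to sketch with Python annotations standing in for real type inference (the candidate functions are invented for illustration):

    from typing import get_type_hints

    def parse_int(s: str) -> int: return int(s)
    def shout(s: str) -> str: return s.upper()
    def length(s: str) -> int: return len(s)

    def suggest(candidates, expected_return):
        """Keep only candidates whose annotated return type fits the hole."""
        return [f for f in candidates
                if get_type_hints(f).get("return") is expected_return]

    # Editing an expression whose expected type is int:
    print([f.__name__ for f in suggest([parse_int, shout, length], int)])
    # ['parse_int', 'length']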


Sure. Static analysis, scaffolding, a little bit of a conversation where the AI is trained (possibly by the developers themselves) on their style. An AI doesn't need to replace the developer, just let them think at a higher level while the AI deals with the lower-level details, just like a compiler. There are so many possibilities here, but alas, software development itself is not exactly amenable to automation, since it sits at the crux of creativity and engineering, where even the silliest things can be uncomputable.


Self-awareness. Intelligence only exists where there is self-awareness.


People will realize how silly privacy is and start sharing their data with AI for the benefit of everyone.


Natural language generation.


Developing a sense of humor


advances in general intelligence



