Hacker News

I'm surprised no one has commented yet on the first couple of these - Energy, AI, Biotech, and Drug design.

These have traditionally been domains requiring a huge research apparatus and tremendous manpower, for only very long-term gains. Not good for startups. In AI, how can a startup hope to succeed when academia has had almost no success in 50 years? (And I am doubtful that throwing more CPU/neuron layers at the problem will 'solve' it.)

In addition, the people with the skills necessary to make progress will be advanced researchers with PhDs: people good enough to remain in academia if they wish, or who have already developed a proven-enough idea through their research careers that they don't need Y Combinator-style money.

I am not trying to be a downer on the idea; on the contrary, I hope it succeeds. Really, I am fishing for anyone with a good perspective on (or an answer to) these points.




Exactly. It is quite optimistic, borderline naive, to believe that YC can replicate the Manhattan Project in every one of these fields with a few hundred grand and a few months in a dorm.

This is just a huge lack of perspective from people who've only worked on commercial software. Real science is very hard, very expensive, and does not result in billion-dollar IPOs within 3 years.

I'm hoping that once YC realizes that such projects will never succeed through a startup incubator, it will become politically active and spearhead the reversal of the current decay of government-funded science. Only the government has the resources, time, and foresight to fund 50-year research projects in the fields listed in the RFS. I hope this becomes clear in time.

EDIT: Here are a few more of my thoughts on this issue, from a previous comment: https://news.ycombinator.com/item?id=7614344


I agree. I think this list is just going to make folks add "AI" or "science!" to their startup idea in the hope it has a better chance of getting into an incubator.


> These have traditionally been domains requiring a huge research apparatus and tremendous manpower, for only very long-term gains. Not good for startups. In AI, how can a startup hope to succeed when academia has had almost no success in 50 years? (And I am doubtful that throwing more CPU/neuron layers at the problem will 'solve' it.)

This is a textbook example of the AI effect [1]. Academia has been extremely successful with AI research, but you don't see it because as soon as something becomes successful, you disassociate it from AI.

[1] http://en.wikipedia.org/wiki/AI_effect


Interesting link, but I still feel AI hasn't had much success: do we really have much more today than A* search, neural nets with backpropagation, and HMMs/SVMs/etc., which were all developed in the 1960s? The successes of AI that I see (e.g. OCR/speech recognition and Chess/Jeopardy) use these same old algorithms with only marginal improvements and more CPU. There have been no major new techniques or insights. I'm not an expert though, so correct me if I'm wrong.
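For concreteness, A* really is simple enough to sketch in a few lines. Everything here (the 5x5 grid world, the Manhattan heuristic) is a toy assumption, not anything from a particular system:

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Minimal A* search returning a shortest path from start to goal.

    `neighbors(n)` yields (neighbor, step_cost) pairs; `h` is an admissible
    heuristic (it never overestimates the remaining cost)."""
    frontier = [(h(start), 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt, cost in neighbors(node):
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None

# Toy 4-connected 5x5 grid with a Manhattan-distance heuristic.
def grid_neighbors(p):
    x, y = p
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < 5 and 0 <= ny < 5:
            yield (nx, ny), 1

path = a_star((0, 0), (4, 4), grid_neighbors,
              lambda p: abs(4 - p[0]) + abs(4 - p[1]))
print(len(path) - 1)  # shortest path length: 8
```

The heuristic is what distinguishes it from plain Dijkstra; with an admissible one, the first time the goal is popped the path is optimal.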


> The successes of AI that I see (e.g. OCR/speech recognition and Chess/Jeopardy) use these same old algorithms with only marginal improvements and more CPU.

Backpropagation training wasn't introduced until 1986 (http://www.nature.com/nature/journal/v323/n6088/pdf/323533a0...). SVMs weren't useful until the kernel trick was applied to them in 1992 (http://dl.acm.org/citation.cfm?doid=130385.130401). Feature learning wasn't an active area of research until the 2000s.

There have been huge improvements in algorithms since the 1960s. The only things around back then were a few speculative papers on analytic methods. The current state of the art in learning algorithms is a huge advance over just having some ideas about the mathematical properties of learning and a few analytic tricks in obscure papers.
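For what it's worth, the kernel trick mentioned above rests on a simple identity that's easy to demo: a kernel computes an inner product in a high-dimensional feature space without ever constructing the features. A toy sketch with a degree-2 polynomial kernel (the explicit feature map `phi` here is just for illustration):

```python
import math
import random

def phi(x):
    # Explicit degree-2 feature map for 2-D inputs:
    # phi(x) = (x1^2, x2^2, sqrt(2) * x1 * x2)
    return (x[0] ** 2, x[1] ** 2, math.sqrt(2) * x[0] * x[1])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def poly_kernel(x, y):
    # k(x, y) = (x . y)^2, computed without ever forming phi(x) or phi(y)
    return dot(x, y) ** 2

random.seed(0)
x = (random.random(), random.random())
y = (random.random(), random.random())

# The kernel equals the inner product in feature space:
assert abs(poly_kernel(x, y) - dot(phi(x), phi(y))) < 1e-9
```

The 1992 insight was that the SVM optimization only ever touches inputs through inner products, so swapping in `poly_kernel` (or an RBF kernel, whose feature space is infinite-dimensional) gives a nonlinear classifier at linear-classifier cost.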


Huh, thank you for the informative reply. I will reconsider my view, which I had perhaps overstated before to make a point.

Wikipedia gives a citation for backpropagation going back to 1963 by the way, but looking more carefully you are right that the 1986 paper is important.
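For readers following along, backpropagation itself is just the chain rule applied layer by layer. A toy sketch of the 1986-style training loop (the network size, learning rate, and XOR task are all arbitrary illustrative choices, not anything from the cited papers):

```python
import math
import random

random.seed(1)

def sig(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy 2-2-1 network trained on XOR with plain backpropagation
# (stochastic gradient descent on squared error).
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]  # hidden weights
b1 = [0.0, 0.0]
W2 = [random.uniform(-1, 1) for _ in range(2)]                      # output weights
b2 = 0.0
lr = 0.5  # learning rate (arbitrary choice)

def forward(x):
    h = [sig(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(2)]
    y = sig(W2[0] * h[0] + W2[1] * h[1] + b2)
    return h, y

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

before = total_loss()
for _ in range(2000):
    for x, t in data:
        h, y = forward(x)
        # Backward pass: chain rule from the output error to every weight.
        dy = 2 * (y - t) * y * (1 - y)
        for j in range(2):
            dh = dy * W2[j] * h[j] * (1 - h[j])  # gradient at hidden unit j
            W2[j] -= lr * dy * h[j]
            W1[j][0] -= lr * dh * x[0]
            W1[j][1] -= lr * dh * x[1]
            b1[j] -= lr * dh
        b2 -= lr * dy
after = total_loss()  # should be well below `before`
```

The 1986 contribution was showing that this recursive gradient computation makes training hidden layers practical at all; without it, there is no principled way to assign credit to weights below the output layer.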


Of course research is iterative -- you wouldn't say other fields of math or science haven't had success/breakthroughs just because they are relying on old techniques.

That said, some more recent work comes to mind.

In terms of new algorithms: planning algorithms, deep learning architectures (ANNs without backprop), reinforcement learning, artificial life, and multi-agent systems.

In terms of applications (which you already hint at): Deep Blue and Watson, both of which are great examples that shouldn't be regarded so trivially. Is the only difference between the "old algorithms" from the 1960s and Watson challenging people on Jeopardy a matter of margin? No. It's not as if we were nearly there in the '60s and only needed to crank up CPU speed or RAM/storage. Read IBM's paper on it -- it took a complex architecture spanning natural language processing, databases, search, and machine learning. As for Deep Blue, even in the early '90s people said there would never be an AI that could beat the best human Chess players. Once it happened, the paradigm shifted and "of course" AI can beat humans at Chess, as if there hadn't been people who denied it was possible.

Some of the coolest more recent applications are in the realm of machine learning: self-driving cars, robots that learn to navigate or perform tasks, and image recognition (which has made an immense leap in the past ~2 years).


For a while Random Forest (2001) blew everything away, but more recently deep neural networks have been making huge progress. For example Google's work in image recognition: http://googleresearch.blogspot.co.uk/2014/09/building-deeper...


A very incomplete, high-level overview: http://en.wikipedia.org/wiki/Applications_of_artificial_inte...


> In AI, how can a startup hope to succeed when academia has had almost no success in 50 years? (And I am doubtful that throwing more CPU/neuron layers at the problem will 'solve' it.)

I would say that academia has had tremendous success with AI research... but that's IF you accept that the goal doesn't have to be "a machine that thinks just like a human", and if you don't hold to an "all or nothing" outcome.

In terms of incremental improvements in techniques that make machines "smarter" and more capable of helping humans solve problems, there's absolutely been amazing progress. Look at Watson, for crying out loud.

So, if you accept that premise (that the goal is just "smarter" and not "thinks exactly like a human") I don't see any reason to think a startup can't make progress in this area. Will they invent the first full-fledged AGI? Maybe not, but I don't think that's the point.


We were a biotech in last batch (Ginkgo Bioworks) and got a lot out of YC. Would do it again in a heartbeat.

In biotech, anyway, the cost of doing the work is falling rapidly. It's not down to software-development costs yet, but we're getting there. YC also offers a lot beyond the check (alumni network, demo day, great partners, visibility, etc.).

PhDs are an untapped founder pool in general. Tons of great PhDs are minted every year for whom academia may not be the best way to accomplish their goals. They are used to living on low salaries and working on open-ended problems. Great founders.


Dumb but practical AI would be a game changer. Look, I don't need HAL. I need a simple robot with enough vision processing and brains to vacuum my house, pick up dirty clothes, and load the dishwasher.

We can't be too far from that. Even if that thing sold at $10k it would have buyers lined up.

Agreed about Y Combinator not being the appropriate format for hard nuts to crack. Mobile stuff and low-hanging fruit like Disqus and Dropbox? Sure. Breakthroughs that define how business and society work? Those will probably come out of larger institutions that don't consist of 20-somethings living off ramen. This format can be seen as working with breakthroughs that are already out there but haven't been applied the right way or are under-monetized. TBL didn't need to invent TCP/IP, fiber networking, server kernels, etc. He just had to write HTTP.


Startups create strong incentives to implement a simple solution to a severe problem.

"incentives" - because the founders can get rich if they succeed.

"implement" - because customers don't care about theoretical work, they care about solving the problem.

"simple solution" - because founders can't afford to design a complicated one.

"severe problem" - because the problem has to be bad enough for even a very simple solution to be worth paying for.

Now, to answer your question directly, why is there hope for startups even in highly technical fields where academia is slow and expensive? Because when people are laser-focused on solving specific problems like this, they occasionally make leaps of insight, either in terms of reframing problems to make them easier, applying newly available technology, or just thinking of a new idea on their own. Smart people can pick up skills surprisingly quickly when they're focused on solving problems.

Also, the incentives are strong enough that they can sometimes convince these skilled academics to quit/supplement their academic jobs with startup work.


The energy and biotech companies that went through YC this summer seemed to have a good experience. We can often help companies raise very large amounts of capital after YC.


Recall that today's dominant notion -- that the only startups worth f(o)unding are cynically leveraged, hockey-stick Internet frivolities -- is a relatively recent development [1]; blame pmarca et al. for that.

It isn't inconceivable, then, that today there ought to be enough liquidity and appetite for riskier, much less leveraged, longer-term growth modalities, as in the past.

[1] http://www.foundersfund.com/the-future


>Energy, AI, Biotech, and Drug design

If you want to advance these fields throw money at universities, not startups.


Are you implying that these fields are only pure science? I'm just curious.


No, but universities are (and always were) highly effective "startup accelerators" for science/engineering disciplines.

The thing Y Combinator (and its ilk) did differently was to realize that software is atypical among science/engineering fields in that it doesn't benefit as much from many of the services offered by universities, so you could strip out most of the "cruft" and form a "lean" university that was just as effective (more effective, in hindsight).

When you bring the focus back to science/engineering, suddenly the "cruft" doesn't seem so pointless. If you try to build an accelerator aimed at traditional science/engineering problems, you re-invent the university.

What is that "cruft"?

* Formal training and apprenticeships from experts in various fields

* Many-million-dollar macroscopic and microscopic fab facilities (shared but not specialized)

-- Fancy microscopes (optical, electron, etc)

-- Fancy spectrometers

-- Nanofab junk (mask writers, aligners, chemical benches, CVD machines, etc)

-- Chemistry junk (NMR machines, MS machines, Chromatography machines, etc)

-- Physics junk (telescopes, accelerators, etc)

-- Engineering junk ($50k oscilloscopes and logic probes, test machines, FPGAs, CAD/CAE software, expensive simulation software)

* ~$1MM-ish labs (highly specialized but shared less)

-- Strange chemicals, gasses, and the tools required to deal with them

-- Strange biologicals (animal lines, cell lines, specialty constructs, reagents)

-- Fume hoods, centrifuges, schlenk lines, etc

-- 3D printers, milling machines, and highly specialized fabrication and diagnostic apparatus that is custom-built and one-of-a-kind

* Library/journal access

* Connections to cheap labor (no comment)

* Connections to funding for blue-sky research

* Connections to funding for seed-stage commercial prospects

YC specializes in the last bullet point and mixes in business training. It could certainly have something to offer startups in science/engineering fields (especially if their ultimate product were software), but we shouldn't forget that it faces relatively stiff competition once it starts wandering outside its core competency into more traditional fields.


AI doesn't mean AGI/strong AI/human-level AI. In the last few years deep learning methods have advanced the state of the art in different areas a great deal. Natural language processing, machine vision, and even some results in reinforcement learning. And these are all things a small startup could reasonably do.


Many problems in AI, such as NLP, are AI-complete. While it's possible to solve subsets of the problem without creating a human-level intelligence (and many companies have done so), these solutions do not "seem" very intelligent. Based on Sam's blog post, it sounds like he does mean human-level intelligence, which unfortunately does seem out of reach within our lifetimes, though I hope to be proven wrong.



