I've been working on this for several years, though a startup seems the wrong vehicle for it. I think the description in the RFS is misguided:
"We’re interested in helping developers create better software, faster. This includes new ways to write, understand, and collaborate on code, and the next generation of tools and infrastructure for delivering software continuously and reliably."
There's a blind spot in prose like this that gets repeated all over the place in our community: it emphasizes writing over reading. I think we have to start with reading. My hypothesis is that we need to reform the representation of programs to address this use case: I download the sources for a tool I use, wanting to make a tweak. How can I orient myself and make a quick change in just one afternoon? This is hard today; it can take weeks or months to figure out the global organization of a codebase.
You can't "deliver software continuously and reliably" until you rethink its underpinnings. Before the delivery problem there's a literacy problem: we programmers prefer to write our own, or to wrap abstraction layers over the code of others, rather than first understanding what has come before.
More on my approach: http://akkartik.name/about
Reading is not the issue either: most code is terrible, and no matter how easy you make it to read, I'm still going to waste my time reading it. So I'd say the problem again lies elsewhere. Few people have the aesthetic sense to write elegant code. It doesn't matter how many tools and abstractions you throw at the problem; there are still going to be people writing terrible code.
Writing and reading code, or doing anything else with it, is in many ways like art. It is also like science and craft and a bunch of other things that require creativity. Even with the accessibility of paints and all the accompanying technology, the likes of Michelangelo and Picasso are no more prevalent today than they were when those guys were alive. Literature is another good example. We teach everyone to read and write, but no matter how much money or technology you throw at it (ebooks, libraries, etc.), we still don't have any more great writers than we did a century ago.
This is not a technology problem. It is a culture and human problem for which you are not likely to find a technical solution.
Most reasonably skilled programmers can read code. They choose not to. It's a cultural problem, not a technical one.
I've been in the industry a long time and the best codebases I've seen had a handful of things in common:
* They were written by highly competent programmers who all had an interest in doing a good job.
* The code was neat, well-documented, and sensibly structured based on agreed-upon standards.
* The programmers writing them weren't forced to inject changes faster than they could compensate for them.
That's it. Most of the disasters in industry are the result of extrinsic demands. People not caring. People documenting nothing. People under extreme deadline pressure hammering in something that then has massive, long-term design ripple effects on the rest of the codebase. Do this several dozen times and you're almost certain to produce a disaster at some point.
Therefore, most software problems have to do with people in over their heads or making changes haphazardly to meet deadlines. These cultural issues seem largely to stem from people thinking that writing software is a deterministic process - it isn't, and has more in common with chaotic systems than linear deterministic systems. Hell, I've written virtually the exact same piece of software twice at one company and each time produced both identical bugs and completely new bugs.
We would probably benefit more from understanding what software actually is, and educating people about that, than from attempting to technologize away our problematic understanding of technology.
Elegant code does not necessarily mean readily understandable code (or "readable" code). For example, some Haskell programmers like writing extremely elegant code -- so elegant that you can base a whole new (elegant) branch of mathematics on it -- yet it is no more readable than, say, many early BASIC programs.
But why pick on Haskell? I've never seen Microsoft Word's code, but let's imagine it were the paragon of OOP design. Then, I'd like to add a feature similar to the spellchecker, that tests whether consecutive sentences rhyme. Now, I could probably find the spell-checking code rather easily, only to learn that it is attached to the main program via the most beautifully intricate plugin system -- with lifecycle management, runtime loading and unloading and whatnot -- that it takes me a few days only to learn that bit.
The point is that software is complicated, and very non-standardized. Code readability rests only in part on its structure, and a lot on how many "advanced" language features are used, the number of libraries used and their familiarity. You'll probably find "terrible" code that uses a couple of popular, mature libraries, that you're familiar with, much easier to understand than the most "elegant" polyglot codebase (written in both Python and Haskell, because, you know, the best tools were picked for each job) that makes use of 10 of the newest, shiniest libraries you've read a lot about on HN and always wanted to learn but never had the time to.
I want programming to be like reading, with most people able to skim most article-length pieces of prose -- even if they're poorly written -- and get a sense of their global organization. That doesn't require teaching everyone to write like Shakespeare.
To reiterate, you're responding to things I didn't say. Code shouldn't have to be perfectly designed to be readable, and nobody should have to wade through utter crap either. Very often code starts out nice when it has one author or three, and gradually turns to crap as more cooks are added. I want to eliminate that dependency on author churn, to have it be beautiful or ugly based on the capabilities of the programmers involved, not on the difficulty they had understanding those who came before. To make progress on this project, I find it most valuable to utterly ignore aesthetics.
That article you linked to (http://alistair.cockburn.us/ASD+book+extract%3A+%22Naur,+Ehn...) connects the dots really well. Programming is really about building theories and then implementing them with computational building blocks and then transferring the understanding of those theories. Short of developing mind reading I think there is an irreducible complexity in that endeavor that is impossible to skirt around.
In light of that article I'd like to amend my comment about aesthetically pleasant code. Some programmers are good at structuring things so that the overarching theory is present throughout all code-level structures. That kind of code is both aesthetically pleasant and easy to read. I don't know if this is some kind of special talent or if it can be learned, but given that most software is a confused mess, I'm willing to bet there is a large talent component.
I started out thinking you couldn't solve social problems with technical solutions. Now I think social problems arise in the context of configurations of technical energy barriers. Making something easier can make good behavior more or less likely to arise. So it behooves technologists to think hard about what they make easier. But this is getting abstract, and I need to show examples of what I'm trying, what I'm keeping and what I'm discarding. If you send me an email I'll show you what I have.
Not necessarily. It can also be indicative of the circumstances in which the code was written. Even great programmers can write terrible code if they are stressed out or overworked. So you can't just look at the code they have written and jump to conclusions about their programming ability.
This is what we're trying to address at Sourcegraph (https://sourcegraph.com/). 80% of programming is about reading and understanding code, not writing code.
Why don't existing tools focus more on helping people read and understand code--and, more broadly, collaborate on a development team? Things like:
* seeing everywhere a function or class is used, in context (like https://sourcegraph.com/github.com/joyent/node/.CommonJSPack... on the right side)
* seeing who at your company knows the most about an area of your codebase
* seeing the history of changes, in terms of functions/modules added, not just lines (like https://sourcegraph.com/github.com/fsouza/go-dockerclient/.c...)
* having a long-lived discussion about a module/class/function that doesn't vanish after a commit is merged or the file's lines shift around
So far, most of the innovation in programming tools has been in editors or frameworks, not in collaboration and search tools for programmers.
While there are great editors and frameworks, the lopsidedness is unfortunate because making it easier for programmers to learn and reuse existing code and techniques, and to collaborate on projects, can have a much bigger impact than those other kinds of tools. That's because, in my experience, the limiting factors on a solo developer's productivity are the editors and frameworks she uses, but the limiting factor on a development team's productivity is communication (misunderstanding how to use things, reinventing the wheel, creating conflicting systems, not syncing release timelines, etc.).
...but I can't give you money without Java/Scala support. Roadmap? Pleeeeease? =)
Find All Usages and Jump To Definitions are the reason I spend 400 EUR on Resharper licenses, it makes reading code so much easier..
Perhaps the biggest gain yet to be realized in the programming environment is the use of integrated database systems to keep track of the myriads of details that must be recalled accurately by the individual programmer and kept current in a group of collaborators on a single system.
Surely this work is worthwhile, and surely it will bear some fruit in both productivity and reliability. But by its very nature, the return from now on must be marginal.
Tools will not solve the essential difficulties of software engineering.
It lets you cross-reference and document any language with gigabytes of code, e.g. the Android platform code -- Jelly Bean is ~9 GB of code.
Try it out:
Sample Documents created from that app are here:
The UX is OK. A few gigabytes of code DB run on the cheapest $5 DigitalOcean instance.
The app is very simple, a ~8 MB standalone binary with no external dependency on any other app/DB.
I'd like to sell it as a team/site license to large dev teams in the future. The traction doesn't seem to be there yet.
My burn rate is < $150 per year, and I have ~1500 users/month come to the site from pure Google search alone.
I am just slowly experimenting with different features/messaging/channels.
This tells me it's not that understanding how that code works is intellectually difficult; rather, discovering how it works is time-consuming.
It would be amazing if you could "interact" with the code about its structure and intent the same way you might interact with its author.
"There are so many times where, if I could sit down with the original program author for 5 - 10 minutes, I could understand more from that interactive back-and-forth than several hours of reading code in solitude."
Peter Naur wrote a paper in 1985 where he conjectured that this seemed to be a universal law. No matter how much documentation authors provided, new programmers still needed to talk to them. http://alistair.cockburn.us/ASD+book+extract%3A+%22Naur,+Ehn...
So you could choose to hear what the author was saying right when he was working on a certain area of the code. (You'd normally turn it off, but if you're really stuck it might be a good last resort.)
Has helped me figure out what the hell I'm doing many a time.
FWIW I think you are exactly right. It made me sad that Google expected people to be 'useless' for anywhere from 6 to 9 months of their early employment as they tried to get their heads around the code base and tools. Having the ability to eliminate that spin-up time would probably triple their productivity numbers.
The biggest difficulty with current languages is their reliance on absolute determinacy. Programmers must express in exact form what computations ought to be performed, and the languages possess no intelligence in themselves. I think this needs to change. Inference should have applications outside type declaration, and the vast knowledge base that is the Internet should be taken into account.
At this point, there are two promising pursuits that I've seen. First is Wolfram Language, which you've all heard of. The other is Escher, which enables "programming in analogies". It's in its early stages right now, but I'd encourage you to check it out at https://github.com/gocircuit/escher.
In the end, an analogy-based approach seems inevitable, especially once inexact, natural language-based interfaces grow popular. Everyday speech is littered with ambiguities, and programming "languages" need to handle these. They can't throw compilation errors upon each inexact command.
The software fulfillment process has a lot of issues.
We are stuck in a local maximum, and jumping the chasm to the next maximum with ideas like "what comes after programming languages" is really, really hard.
Sadly, one of the biggest hurdles is self preservation because software developers don't want "software development accessible to the widest part of our society". I hate to say it, but there are Luddites among us.
So, I tend to agree that a startup is the wrong vehicle for this.
> I think we have to start with reading.
I agree with this point. People understand things differently. There should be multiple ways for people to read and understand software programs. Unfortunately, there is only one major way now and that is with source code.
I do think that tools like IFTTT are moving us towards more accessibility but it is a small step.
I like what you are working on. Keep the conversation going.
It's a strange form of Luddism that seeks to limit opportunities to integrate the existing skills of the labor force into the structure of production. We don't have much in the way of unions or guilds, so perhaps the only effective way to restrict the labor supply is to keep the tools user-unfriendly.
But are you sure the RFS actually disagrees with you? What you're working on covers 2/3 of what the RFS deems worth focusing on. Collaboration and understanding.
I think it's clear what comes after programming languages. A return to their roots. Programming a computer today is like moving by telling every single muscle what to do. To make writing code easier we want to be able to do more with less, to have the code figure out from context what the best thing to do is. We want more intelligence in our compilers. It is telling how powerful the notion of AI is when so many world-changing ideas (functional programming, OOP, search, databases [influence goes: Prolog -> Datalog -> SQL]) are compromises, failed attempts, and detritus of AI projects.
So I'll stand by my statement that the RFS's phrasing is -- not wrong, but rather -- misguided as stated. Or maybe "misleading" would be better, but that carries connotations of malice.
Software engineering won't exist as a profession until it really is turtles all the way down, and not a pig on a fish on a tiger etc.
Is this a reference to something, or should I point here if I want to reference this idea?
Both those things literally only came after reading so much code! No shortcuts from the Dev environment (idea, docs) aside from navigation and search!
from Bret Victor:
and intentional programming from Charles Simonyi:
Once Apple and other companies showed what the world looks like when a phone is a giant touch screen + giant battery, phones with buttons become a niche, not the norm.
I don't know what the programming paradigm is that completely changes that, but I do know that I haven't seen it yet and everything we've done so far seems to completely miss the mark.
Someone is going to come along with some other radical assumption about what designing/implementing software really is about.
I know that it is not about static vs dynamic, functional vs. imperative, etc. like we've traditionally thought. Maybe it will be more like using Excel or Gmail. Maybe it will be more like flowcharts, or maybe just like sketching out a structure and the machine knows how to wire it together for you. Maybe humans won't be involved at all.
Speculation is endless, but nobody has had the iPhone unveil of new programming paradigms yet.
I'll suggest an alternative approach that might stimulate your neurons. Look back at the ways that your life is improved reading code today compared to twenty years ago. When I look back, the highlights for me are:
a) version control, and
b) automated tests
Forget all their benefits to the people writing code. For me, the guy with one afternoon to make a tweak to an alien codebase, it has been invaluable to have not just the current snapshot of a codebase but a story of its evolution so that I can see what the skeleton of the program used to look like in a simpler time, and gradually play enhancements forward atop it. It has also been invaluable to be in situations where I can go, hmm why do I need _, why don't I just rewrite it as _, try it, and find a failing test with an enlightening name.
What's common to these two ideas is that they are additive. You don't have to give up programming languages. You just need new ways to share information between programmers besides just the code that runs "in production". My current favorite candidate for the next augmentation for codebases is logs. More info: http://akkartik.name/post/tracing-tests. If that whets your appetite feel free to get in touch. My email is in my profile.
 More details of how this helps: http://akkartik.name/post/wart-layers
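To make the tracing-tests idea concrete, here's a minimal sketch of what a log-based test might look like. All the names here (`trace`, `trace_log`, `parse_number`) are hypothetical, not from the linked posts; the point is just that the test asserts on the program's trace of *how* it reached an answer, not only on the answer itself:

```python
# A minimal sketch of trace-based testing (all names hypothetical).
trace_log = []

def trace(label, message):
    # Record a labeled fact about what the program is doing.
    trace_log.append(f"{label}: {message}")

def parse_number(token):
    # The domain logic emits trace lines as it works.
    trace("parse", f"treating {token!r} as a number")
    return int(token)

def run_test():
    trace_log.clear()
    # Assert on the result, as usual...
    assert parse_number("42") == 42
    # ...but also pin down the path taken, via the trace.
    assert "parse: treating '42' as a number" in trace_log

run_test()
```

A failing trace assertion with an enlightening label serves the same role as the "failing test with an enlightening name" above: it tells the newcomer what invariant they just broke.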
Programming as it is now is a two-way communication mechanism: we are writing code for other humans to understand, but for a machine to translate into what we intend for the machine to execute.
So, your point of making code a better human to human communication mechanism is absolutely correct.
What I really wonder about is what set of tools could exist that could solve whole sets of low level problems that we are specifying now that we shouldn't. Sort of like, right now we have blueprints and so on, and then humans look at those and build a house. Yet, with 3-D printed houses, you feed a set of materials and specifications into a machine, and you get a house built for you.
What is the programming equivalent that would allow us to "3-D print" software, so to speak? For physical things, it seems to be the combination of 3-D CAD systems, 3-D printers, and maybe some thought about how things need to be designed to fit this approach.
I have no idea exactly, but I could see a future where instead of paying people to hand craft all the code, that a series of features, modules, structure, etc. are specified, and the software is just "built". There would be more effort in the specification, but less on the build.
I think "naked objects" or "Apache Isis" are the best examples we've got for that for now, but they are rooted in somewhat complex Java code instead of an easy-to-use tool that lets a business analyst who has taken some course sit with the client and fully define a running system, step by step. In some cases the auto-generated UI would be good as it is, and the system would be used as-is.
In other cases , we might need easy tools to customize the UI.
When you think about it, the job of a designer is often quite systematic. You have some entities/data you need to communicate to the user, through whatever interface/device that's available to him. A touchscreen, a keyboard, knobs, LEDs, microphone, speaker, paper, etc.
When your "user" is a computer, JSON (while not perfect) seems to do the job as an interface. In the case of humans, JSON does a poor job at efficiently communicating information.
- A calendar is better than "2012-11-05"
- A color is better than "#FF3300"
- An image is better than "http://example.com/image.png"
- A clickable link is better than "http://example.com/document.html"
The list goes on. We can easily generate a basic UI based on complex entities, and map specific types to custom/reusable templates if needed.
Now that your UI can automatically be generated from data, you can build an app (business model) once, and make it usable (and actually look and feel good/native) on any device: a smartwatch, a smartphone, a smart TV, etc.
Basically, the core of software development should be knowledge representation. Describe the world semantically (with RDF or similar technologies), and let the UI-compiler generate a UI for any given target platform, language, culture, user preferences. That's what responsive design should be all about.
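As a toy sketch of the type-to-widget mapping described above (widget names are made up for illustration; a real UI compiler would dispatch on semantic types from an RDF schema rather than on regexes over raw values):

```python
import re

def widget_for(value):
    """Map a JSON-ish value to a hypothetical UI widget name."""
    if isinstance(value, str):
        if re.fullmatch(r"\d{4}-\d{2}-\d{2}", value):
            return "calendar"        # "2012-11-05" -> date picker
        if re.fullmatch(r"#[0-9A-Fa-f]{6}", value):
            return "color-swatch"    # "#FF3300" -> color swatch
        if value.startswith(("http://", "https://")):
            if value.endswith((".png", ".jpg", ".gif")):
                return "image"       # image URL -> inline image
            return "link"            # other URL -> clickable link
        return "text"
    if isinstance(value, bool):      # check bool before int: bool is an int subclass
        return "checkbox"
    if isinstance(value, (int, float)):
        return "number-field"
    return "nested-form"             # dicts/lists -> recursive sub-form

def generate_ui(entity):
    # Map each field of an entity to its widget; a renderer per
    # target platform would then lay these out natively.
    return {key: widget_for(value) for key, value in entity.items()}
```

The per-platform renderer is where the "responsive" part lives: the same `generate_ui` output could become a native form on a phone or a single crown-scrollable list on a watch.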
It seems that there's something missing from all the existing implementations of your suggestion. Perhaps it is that the UI and data flow primitives that we have aren't flexible enough to express a lot of business models, thus requiring custom implementations.
Regardless, that was kind of my point - all these systems are only good as long as you stay within their constraints. The unfortunate thing is that the constraints always end up being way too tight in practice.
Would love to get in touch with you.
$264.39, for students that work part time jobs at $7.00 an hour (before taxes).
Not only students are angry about this. Professors are angry, and authors are angry too. Bitter fights between professors and publishers are common.
Everybody wants to see the big players in this industry fail. Please, someone, make it happen.
Cost of attendance at my state flagship is $24,000. Full time at minimum wage is $15,080. Any education worth that kind of money is hard enough that you physically can't work full time while doing it. Student labor is pretty minuscule as a source of funding.
In other words, $264.39 is not actually 37 hours at McDonalds. It's a rounding error on a parent's 6-figure contribution over 4 years. Or a rounding error in your monthly loan payment when you're making $50k instead of $8k. Or it's coming out of interest on the school's endowment if the school is good enough and your family poor enough. Or it's coming from whoever funds your scholarship if you are one of the handful of people smart enough to get a merit-based full ride. Etc.
(No, your experience of putting yourself through a state flagship in the 80s is not relevant. Minimum wage is roughly what it was; tuition is decidedly not.)
I believe the textbook publishing industry could adapt (or be disrupted) to be more cost-efficient, but you would first have to vastly reconfigure the higher education system such that it's reasonable for students to pay their own way. Then you'd actually have incentives to price things for students.
I don't really understand how they can afford tuition + room + board but need the $140 (less taxes) a week, but as it turns out this is very common.
It made a big impression on me when a student walked in to turn in his homework wearing a Chick-Fil-A cap. Asking around, I learned that this is typical at the university where I teach.
So, yes, here at least, $264.39 is 37 hours at McDonalds. Well.... taxes.... so make that more like 50.
As a student, maybe I can shed a little light on this.
There are a few reasons working part-time at minimum wage can make economic sense for a student.
At least at my institution, few kids whose parents foot the tuition bill work.
The students most likely to be working 20 hours/week are the same students likely to receive some form of financial aid/merit scholarship. As such, the tuition + room + board costs may be significantly less than the sticker price. Considering this reduced expense, the ~$200/week from part-time work may make a considerable contribution to a student's budget.
Even if these students aren't able to completely cover the remaining cost of school not covered by financial aid, there are many instances where that part-time job replaces a potentially high interest student loan, reducing the overall cost of education in the long run.
In some instances, even students whose parents assist them with educational expenses require a part-time job for discretionary expenditures. I have more than one friend whose parents pay tuition, but do not cover the cost of gasoline/car repairs necessary for the student to go to class.
I also worked 40-50 hours per week during the summer. Between that and loan distributions, I could just barely keep up with my typical costs. If anything went wrong--e.g., an injury (yep!), car repairs (yep!), fines (yep!)--the credit card debt didn't get paid off.
You might predict that I would graduate with a lot of debt. You would be right. In spite of having a merit scholarship for full tuition and $4500/year, I had over $35,000 in student loans and another $5000-10,000 in credit card debt when I graduated.
You talk about full-time classes and a full-time job. Actually some of my classmates are raising children or that sort of thing as well. But beyond that, you're correct that it is hard to take a hard STEM major full-time and also work full-time. It would be almost impossible to maintain a 4.0 or 3.9 or whatnot. The solution is obvious, don't take a full course load - take three classes a semester, or perhaps two, or perhaps one. It takes longer, but what is the alternative for those who can't afford full-time study?
If some 18 year old can't really afford full-time study...then don't do full-time study. Why make your parents shell out thousands they can't afford, as well as burdening yourself with enormous loans, for something you may very well not complete in four years. Some kids graduate and are not working - a lot nowadays. On this public school commuter campus, the smarter half of the CS major seniors I know have never heard of software version control, have no idea what git, Perforce, cvs etc. is. Most of our professors are good too - most of them understand their topics, and some are even good at explaining it. A dedicated person can get a lot out of the education, and then perhaps go get a Masters at a more prestigious school afterward if they want.
If people can't afford full-time, don't go full-time. Maybe the government should help more, maybe not, but if someone can't pay for full-time study they should go part-time.
The right place to target reform would be professors. They should take a stand to write and use only open textbooks that can be freely downloaded and distributed. This is already common at the research level, but it doesn't seem as common for intro textbooks.
Studies have shown that text is not the ideal format for novices to learn a subject for the first time.
I'm a co-founder at https://www.clutchprep.com and that's what we're tackling. Students go to 300+ student classrooms, don't learn much and then have to rely on a $250 textbook to teach themselves (doesn't work well for most). It's a broken system.
Edit: fixed the link
As a recent graduate I remember shelling out thousands of dollars during my undergrad years to pay for textbooks. It was always a crapshoot because you never knew if the professor would not use the textbook at all or would heavily rely on it and assign reading/problems from the book.
There are definitely ways around purchasing expensive textbooks, but most are illegal, and none are convenient or guaranteed.
Here's an open source Java book on Penflip: https://www.penflip.com/lynnlangit/tkp-lesson-plans
Free online, <$10 in print. There is a high-school-level stats textbook available as well. It's also not "crowdsourced", but the results seem solid.
Listen, Uber is an example: Not playing by the rules makes it easier to win and when you do a few will complain about your lack of ethics but only a tiny fraction will make that count against you.
JustFab? You may be upset about dark patterns, but people love them. Apple, Google, et al. intentionally breaking the law? No one cares; people still want to work for them. Intel's illegal anti-competition activities? Microsoft's? People still love them.
No one cares about the guy who worked his ass off so he could afford a book, if that work cut into the time needed to study, leaving him less educated. People celebrate the guy who, having pirated books, was able to concentrate on learning and has the free time to become better at many other things.
But please oh please disrupt textbooks for schools. Parents are forced to buy new books every year because of social pressure. After all, you wouldn't want your kid to be known in class as "the one with parents too poor to even buy a textbook" and bullied for it. Not that adults would care about someone having a copied book - but the kids do. Recognition and respect of their classmates is of paramount importance to every kid in the school; publishers know this and they can charge whatever they like, and parents have to pay.
The big problem I'm facing right now is how to subvert universities requiring super specific and customized versions of their textbooks. I think the solution is a crowdsourced list of textbook alternatives (ie: "your university says you need Biology Harvard Custom Edition, but that book is the same as Biology minus chapters 6 and 11. You can get the generic edition for $11").
That's really just a bandaid, though. A real solution would look something like https://www.boundless.com/ or Khan Academy.
Obviously the books changed slightly. But they kept change lists, so you always knew if you had the latest info or not. Sometimes, even the lecturers did this. E.g. "If you have ed4, it's on page 400-403, otherwise it's page 389-392 on the latest edition, etc".
Some useful discussion here, including indications that cheaper options exist:
Maybe what's required is a company that a) publishes cheap (probably older) textbooks, and b) rouses student organizations to push for their adoption.
Part b) is by far the harder one, and probably doesn't have a technical solution. It's a marketing/consciousness-raising thing.
Really takes open access (which is great, but still expensive) to the next level. Very inexpensive publishing, and free preprints.
If anyone is interesting in this domain and would like to chat, please ping me. I'm in SF Bay Area, email is in my HN profile.
Sorry, this is not the right problem in financial services. Companies like Vanguard are already doing a great job of this and the costs are extremely low. It's a commodity product with razor-thin margins that actually serves the needs of its customers well. Maybe there's a marketing issue where they aren't educating enough people, but that's not a technology problem.
As an alternative: Lower the Costs to IPO, disrupt Investment Banks
Sarbanes-Oxley, minimal competition between investment banks, and heightened SEC scrutiny have made the fixed costs to an IPO astronomical. These days a company, for the most part, cannot IPO for less than a $1 billion raise. This means that the broad public, including those index funds YC loves, is prevented from enjoying any returns at all for younger, high-growth companies.
There is room for startups to disrupt part or all of the process. It would be capital intensive and hard as hell. But, you're not looking for easy right?
Can you explain what that means?
I'm kind of confused by what Sam is talking about in the RFS about enabling lower cost index fund investing. VFINX has a minimum initial investment of $3,000 and minimum additional investment of $100. At 17 bps that's literally $5/yr. on the Vanguard S&P 500 fund. You are basically talking about a nominal cost to service accounts (send statements, support, backoffice, etc). It would be interesting to think about how technology can lower these costs, but cost isn't preventing anyone from participating.
And there are plenty of very low cost index trackers.
For a VC to make this sort of request is troubling, as it indicates that they have very little knowledge of how investing works.
Remember too that Google got burnt by trying to avoid the sales process entirely.
The basic idea is this: it's easy to make mistakes in business. As a startup, or a business in general, you've made many mistakes over the years that you've had to learn from and bounce back from in order to build your business into the success it currently is. You've earned this competence through your mistakes, and those mistakes cost you at the time you made them. Most businesses only get sold once (acquisition or IPO), so you want to go to people who know what they're doing when you go to sell your business. For all the shit investment bankers get, they are very good at their jobs.
As for data, it isn't something that has been lacking. Moreover, IPOs aren't priced according to valuation (though sometimes bankers reverse engineer a valuation to satisfy a price).
I'm no longer a mid 20-something that can live on Ramen and 16 hour days. I'm married and have a young child.
Are there YC founders in this phase of life that were able to make it work in YC? What did you do differently? Is YC interested in working with these kinds of founders? (it's certainly a different kind of "Diversity")
Realistically YC doesn't care what stage of life you're in, they just want you in Mountain View for 3 months so you can significantly improve the chances of your company growing large. It's difficult to make it work, but I promise for me it was totally worth it.
Not having money must be worse than being alone for a few months?
"This is one reason I'd bet on the 25 year old over the 32 year old."
"By 38 you can't take so many risks-- especially if you have kids"
I would be wary of approaching an organization that has publicly expressed such age discrimination.
"By 38 you can't take so many risks-- especially if you have kids" is a statement of fact.
"This is one reason I'd bet on the 25 year old over the 32 year old." is a reasonable conclusion.
Calling this age discrimination is like saying all employers are evil because they discriminate based on whether applicants have the skills required for a job.
In more modern times a more apt comparison would be with respect to pregnant women. Suppose he'd written instead "This is one reason I'd take the childless, single 25 year old woman over the married 32 year old. By 33 she'll probably be pregnant or have a baby."
That's very clearly discrimination, and very clearly in the negative-connotation sense.
For similar reasons, PG's statements re: age are very clearly "negative-connotation" age discrimination. Moreover, there's nothing particularly "sober" or even "rational" about it, considering the heaps of evidence regarding age and people who are very successful in running businesses (hint: "young" isn't exactly a word that comes to mind).
Only you can answer whether you're willing to do what it takes to make a startup succeed. I personally do not believe it requires sacrificing things like family and personal health. I'm not going to lie, it's easier when you have fewer obligations in your life (like when you're young), but not impossible.
It's more a question of whether your life fits with everything that comes with being a startup founder, rather than YC itself.
It's tough. But not impossible. My solution was to cut everything else out. But, I love my family, my co-founder is awesome, and building my company is what I want to do. So, "everything else" should probably have been cut out anyway.
> Healthcare in the United States is badly broken. We are getting close to spending 20% of our GDP on healthcare; this is unsustainable.
That's mostly a policy problem, not a technology problem. Countries with single-payer healthcare spend massively less on it as a share of GDP than the United States with its for-profit healthcare system, and American doctors and healthcare corporations end up fabulously richer than their counterparts in those countries. (And those countries still have private healthcare, as in Sweden, where it competes with public healthcare organizations.) The other reason healthcare costs keep rising is that people are getting older and thus sicker. That's a generational bump; there's very little we can do about it. Not that I'm opposing the types of ideas YC is after in this sector (preventative medicine and better sensing/monitoring), just that the premise that it's a technological problem is wrong.
> At some point, we are going to have problems with food and water availability.
That's because we dedicate most of our water and land resources to feeding cattle that we then eat. Innovations that will have the most impact in that sector will involve weaning people from animal products. Stuff like Beyond Eggs and lab-grown meat.
> It’s not a secret that saving money is hard, and that people tend to be bad at doing it. The personal savings rate has largely been falling since the early 80s.
Sure, some super-low-cost index funds would help, but the main problem here is two-fold: 1) real incomes are stagnant, due to government policies favouring corporations, and 2) government/pension funds are much better at generating good investment returns than individuals are. Once again, policy change is much more likely to have a massive impact than trying to improve the individual worker's investment returns. Collect retirement contributions at the source, and have the best investors in the country manage them, without taking a profit for themselves. It's done elsewhere.
#1 on that list is medical costs.
To quote the article: http://finance.yahoo.com/news/pf_article_109143.html
"A study done at Harvard University indicates that this is the biggest cause of bankruptcy, representing 62% of all personal bankruptcies. One of the interesting caveats of this study shows that 78% of filers had some form of health insurance, thus bucking the myth that medical bills affect only the uninsured. "
So yeah, fixing healthcare will also fix a lot of America's debt problem. There are only a couple of ways to do that: decrease the costs, or have a single payer. We've seen how subsidizing a broken system plays out with college funding: any time we subsidize funding to colleges, they request more money, and college debt is huge now. So going the medical route and just subsidizing a broken system isn't going to fix it; it will only make the problem worse. We need Medicare for everyone, and it should start before birth. If someone wants a college education, let them get it, paying only when they re-take classes. That would wipe out most people's debts.
So the big policy change the parent asks for has already happened, and the U.S. will gain universal insurance coverage over the next couple years.
If we want to see healthcare costs drop in cost beyond that policy change, it's almost certainly going to need to be driven by technology in some way.
Yeah, but people being in debt is great for the part of the financial services industry that they owe money to, often with ridiculous and crippling interest rates (compare to the interest rates these players get from the government when they borrow, which is essentially 0% -- which is nice, for them). This is especially true when it comes to student debt, which can't ever be discharged. And these companies have a ton of political power (money begets power begets money).
Which all is to say I agree with frandroid. The root problem here is our political system is almost completely broken due to lobbying and lack of meaningful campaign finance laws and the best way to actually fix some of the issues listed here is fixing those core government/policy problems, but those problems aren't technical in nature and won't be fixed with a Ruby on Rails app.
Software may be eating the world, but if your only options for government leadership are (to put it in South Park terms) a "turd" or a "douche", both of which are controlled by big money whose interests are at odds with the overall populace then there are a lot of core problems that there will never be a software fix for (short of the call for better AI going really well and having a benevolent SkyNet take over).
I also agree with your final conclusion ("skynet") and think it is inevitable given time. Remove human corruptibility from governance. Efficiency... it is selected for.
Countries with universal coverage through other-than-single-payer systems do this, too. Every OECD country other than Mexico and the US has universal coverage (not all of them through single-payer), and every OECD country spends massively less of its GDP, let alone per capita, on healthcare than the US does. In fact, many of the public single-payer universal systems cost less as a share of GDP than public healthcare spending in the US alone, before even counting America's slightly higher private spending.
Page 6 of
gives a very thorough overview.
I wouldn't use either of the terms "single-payer" or "free-market" in describing systems in which there are multiple private sector health insurers (payers), and it is mandatory for individuals to purchase a plan from one, with highly regulated plan provisions and operations.
They don't much look like "single-payer" anything, and don't very much look like any "free-market" bit has been stuck on (there is a market component, but it's not free.)
> If that sounds exactly like the ACA, you're correct
The ACA is similar in outline, but the differences aren't particularly subtle (the ACA's isn't universal; the poor, elderly, and disabled -- rather than being subject to the mandate and operating through the same market, potentially with a public subsidy, instead are directed to one [in some cases, both] of two completely separate public insurance systems, etc., etc., etc.)
1 - using buying leverage to negotiate prices, something republicans owned by the drug and device industry specifically banned (see eg )
2 - rationality about end-of-life care, which we spend a lot of money on (20%+ off the top of my head). As many doctors have shared, they often choose not to aggressively treat terminal illnesses and to focus on quality of life instead. Unfortunately (remember Sarah Palin's death panels, and let's all thank John McCain for bringing that snowbilly grifter to the national stage), attempts to do things like pay doctors to sit down with patients for end-of-life conversations, explaining what is happening, have been successfully yet stupidly fought off. Whereas when doctors talk about how they themselves die, they often choose to undergo very little treatment [2,3]:
Almost all medical professionals have seen what we call “futile care” being performed on people. That’s when doctors bring the cutting edge of technology to bear on a grievously ill person near the end of life. The patient will get cut open, perforated with tubes, hooked up to machines, and assaulted with drugs. All of this occurs in the Intensive Care Unit at a cost of tens of thousands of dollars a day. What it buys is misery we would not inflict on a terrorist. I cannot count the number of times fellow physicians have told me, in words that vary only slightly, “Promise me if you find me like this that you’ll kill me.” They mean it. Some medical personnel wear medallions stamped “NO CODE” to tell physicians not to perform CPR on them. I have even seen it as a tattoo.

…my physician has my choices. They were easy to make, as they are for most physicians. There will be no heroics, and I will go gentle into that good night.

Research shows that most Americans do not die well, which is to say they do not die the way they say they want to — at home, surrounded by the people who love them. According to data from Medicare, only a third of patients die this way. More than 50 percent spend their final days in hospitals, often in intensive care units, tethered to machines and feeding tubes, or in nursing homes.

More typical was an almost eighty-year-old woman at the end of her life, with irreversible congestive heart failure, who was in the I.C.U. for the second time in three weeks, drugged to oblivion and tubed in most natural orifices and a few artificial ones. Or the seventy-year-old with a cancer that had metastasized to her lungs and bone, and a fungal pneumonia that arises only in the final phase of the illness. She had chosen to forgo treatment, but her oncologist pushed her to change her mind, and she was put on a ventilator and antibiotics. Another woman, in her eighties, with end-stage respiratory and kidney failure, had been in the unit for two weeks. Her husband had died after a long illness, with a feeding tube and a tracheotomy, and she had mentioned that she didn’t want to die that way. But her children couldn’t let her go, and asked to proceed with the placement of various devices: a permanent tracheotomy, a feeding tube, and a dialysis catheter. So now she just lay there tethered to her pumps, drifting in and out of consciousness.
The problem is not that it's hard to find good ways to save and invest, although that is a true statement. The problem, at least for most people in the US, is that besides Social Security, "personal saving and investing" is currently the only available way to secure one's future/retirement. An additional, related problem is that the only personal saving and investing option available to most people is "buy one or more Financial Services products": Savings accounts, stocks, bonds, funds, 401(k)s, Roth IRAs, even pensions which are long gone. They're pretty much all the same scam: Hand your own personal money to someone else, and in 50 years, it may end up bigger or smaller or the same, depending mostly on who you chose to give it to, and other factors totally outside of your control. If it ends up bigger, you chose wisely and/or got lucky, and deserve to retire comfortably. If it ends up a lot bigger, you chose brilliantly and/or got really lucky, and deserve to retire in luxury. If it ends up smaller, you chose stupidly and/or got unlucky and deserve to eat dog food when you're old.
Can we get away from "use your own personal money to buy a risky financial services product" being the sensible way to secure one's financial future? Now that would be a worthwhile problem to solve.
Not to mention the fact that "Saving and Investing" is only available to people who can actually afford to save and invest (which is yet another problem that desperately needs solving).
You claim it's all a scam, but the market has been going up and up over the last 100 years. The scam lies with the advisors and products that charge high fees and are not transparent. Companies like Betterment and Wealthfront are changing the game by making these fees transparent and putting you in a well-diversified portfolio.
The second problem is that, as you mentioned, saving and investing is only available to people who can afford to save. That's what the 85% of millennials who don't save believe, but the reality is that you can. There's just no easy way to do it...yet
Or maybe ryandrake had a more "tax the rich" goal, with the usual set of problems. (It's usually much better to solve the problem that people got rich exploiting [morally or not], instead of simply taxing them.)
Look at Uber, Uber has an app, but the real issue in Uber is creating a marketplace, managing a brand, managing relationships with drivers, fighting the taxi companies, etc.
Innovations in how health care is organized and delivered are very possible.
As for food and water, I'll say that the case for vegetarianism and veganism is often overstated. Out here in upstate New York we have plenty of water and plenty of hillsides that are good for grazing and not for tilling. In other places the situation is different, but in some places animal agriculture is part of the solution and not the problem.
I think both the single payer and pension arguments miss the fact that the US is at a hub of a system. Inflated drug prices in the US finance drug development and cheaper drug prices in the ROW. Similarly, what a government run pension fund can attain in another country is unrelated to what one can attain in the US.
I agree with the part about stagnant real incomes, which meshes with the rising cost of health insurance, housing and college, but I don't think professional pension fund managers do that much better than individuals in the long term. They may avoid stupid mistakes like selling all of your shares in the winter of 2008-2009, but the real advantage pensions have is that they can borrow from Peter to pay Paul, at least in the short term.
The room in Canada was paid out of pocket (because I'm not a resident) and cost $600 for one x-ray, a consultation with two doctors, and a room for the night, plus another $30 for the pain meds (morphine).
The 60-minute consultation I had in the US was a $150 co-pay, which would have been $2,000 if I had no insurance. That got me an x-ray and 15 minutes with a doctor, plus another $10 for "prescription" acetaminophen (a.k.a. overpriced over-the-counter Tylenol).
Doctors aren't the problem; it's the insurance companies. I don't see it as a technology problem so much as one of political will (and maybe stubbornness in believing America is always the best even when it's not). The best we can hope for is that technology helps by gathering political will.
I'd love to be proved wrong.
Where I live there is an abundance of crystal clear drinking water being wasted by fracking to sell LNG at bargain basement prices.
As a farmer, I'm struggling to picture how we could change that land utilization in a significant way without technology to enable it. It's not quite as simple as consumer desires, although you are right that changing consumer habits changes the flow of money and where it is invested which would also spur on the necessary technology, presumably.
Take for instance the factory farms which have to constantly ship food in to feed the cattle. Another example is deforestation to create fields for cattle to graze.
I think the OP is right; we need to convince more people that beef is not a sustainable food at the rate at which we consume it today.
People probably wouldn't eat as much meat as they do now if it were from free range, non-grain fed animals.
Yep. You're not going to fix a problem caused by privatization with more privatization.
Also, a lot of this prevention can be tied back to diets, not a lack of sensors.
That's the point though: the right insight might provide a technology solution to what everyone could only think of as a policy problem.
These have traditionally been domains requiring a huge research apparatus with tremendous manpower, for only very long term gains. Not good for startups. In AI, how can a startup hope to succeed when academia has had almost no success in 50 years (and I am doubtful throwing more CPU/neuron layers will 'solve' the problem).
In addition, the people with the skills necessary to make progress are going to be advanced researchers with PhDs, who are good enough to remain in academia if they wish or who have already developed a proven-enough idea through their research career that they don't need Y-combinator-style money.
I am not trying to be a downer on the idea, contrarily I hope there can be success. Really I am fishing for anyone with a good perspective (or an answer) to these points.
This is just a huge lack of perspective from people who've only worked on commercial software. Real science is very hard, very expensive, and does not result in billion-dollar IPOs within 3 years.
I'm hoping that once YC realizes that such projects will never succeed through a startup incubator, they will become politically active and spearhead the reversal of the current decay of government funded science. Only the government has the resources, time and foresight to fund 50 year research projects in the fields listed in the RFS. I hope this becomes clear in time.
EDIT - Here's a bit more of my thoughts on this issue from a previous comment: https://news.ycombinator.com/item?id=7614344
This is a pure example of the AI Effect. Academia has been extremely successful with AI research, but you don't see it because as soon as something becomes successful, you disassociate it from AI.
Backpropagation training wasn't introduced until 1986 (http://www.nature.com/nature/journal/v323/n6088/pdf/323533a0...). SVMs weren't useful until the kernel trick was applied to them in 1992 (http://dl.acm.org/citation.cfm?doid=130385.130401). Feature learning wasn't an active area of research until the 2000s.
There have been huge improvements in algorithms since the 1960s. The only things around back then were a few speculative papers on analytic methods. The current state of the art in learning algorithms is a huge advance over just having some ideas about the mathematical properties of learning and a few analytic tricks in obscure papers.
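To make the kernel-trick reference concrete, here's a minimal pure-Python sketch (illustrative only): the polynomial kernel k(x, y) = (x·y)^2 computes the same inner product as an explicit quadratic feature map, without ever materializing the mapped vectors. This is what made SVMs practical for non-linear problems.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def poly_kernel(x, y):
    # k(x, y) = (x . y)^2, evaluated directly in the 2-D input space.
    return dot(x, y) ** 2

def feature_map(x):
    # Explicit quadratic map phi(x) = (x1^2, sqrt(2)*x1*x2, x2^2);
    # <phi(x), phi(y)> is identical to (x . y)^2.
    x1, x2 = x
    return (x1 * x1, math.sqrt(2) * x1 * x2, x2 * x2)

x, y = (1.0, 2.0), (3.0, -1.0)
lhs = poly_kernel(x, y)                      # (1*3 + 2*(-1))^2 = 1.0
rhs = dot(feature_map(x), feature_map(y))    # same value, computed in 3-D
print(lhs, rhs)
```

The kernel side never touches the higher-dimensional space, which is the whole point: for richer kernels (e.g. RBF) that space is infinite-dimensional.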
Wikipedia gives a citation for backpropagation going back to 1963 by the way, but looking more carefully you are right that the 1986 paper is important.
That said, some more recent work comes to mind.
In terms of new algos: planning algorithms, deep learning architectures (ANNs without backprop), reinforcement learning, alife and multi-agent systems.
In terms of applications (which you already hint at): Deep Blue and Watson, both of which are great examples that shouldn't be regarded so trivially. Is the only difference between the "old algorithms" from the 1960s and Watson challenging people on Jeopardy a matter of degree? No. It's not as if we were nearly there in the 60s and only needed to crank up CPU speed or storage. Read IBM's paper on it: it took a complex architecture spanning natural language processing, databases, search, and machine learning. As for Deep Blue, even in the early 90s people said there would never be an AI that could beat the best human chess players. Once it happened, the paradigm shifted and "of course" AI can beat humans at chess, as if there hadn't been people who denied it was possible.
Some of the coolest more recent applications are in the realm of machine learning: self-driving cars, robots that learn to navigate or perform tasks, and image recognition (which has made an immense leap in the past ~2 years).
I would say that academia have had tremendous success with AI research... but that's IF you accept that the goal doesn't have to be "a machine that thinks just like a human" and if you don't hold to an "all or nothing" outcome.
In terms of incremental improvements in techniques that make machines "smarter" and more capable of helping humans solve problems, there's absolutely been amazing progress. Look at Watson, for crying out loud.
So, if you accept that premise (that the goal is just "smarter" and not "thinks exactly like a human") I don't see any reason to think a startup can't make progress in this area. Will they invent the first full-fledged AGI? Maybe not, but I don't think that's the point.
In biotech anyway, cost of doing the work is falling rapidly. It's not software development costs yet but we're getting there. Also YC offers a lot outside the check (alumni network, demo day, great partners, visibility, etc).
PhDs are an untapped founder pool in general. There are tons of great PhDs minted every year where academia may not be the best way to accomplish their goals. They are used to living on low salaries and working on open-ended problems. Great founders.
We can't be too far from that. Even if that thing sold at $10k it would have buyers lined up.
Agreed that Y Combinator isn't the appropriate format for hard nuts to crack. Mobile stuff and low-hanging fruit like Disqus and Dropbox? Sure. Breakthroughs that define how business and society work? Those will probably come out of larger institutions that don't consist of 20-somethings living off ramen. This format can be seen as working with breakthroughs that are already out there but haven't been applied the right way or are under-monetized. TBL didn't need to invent TCP/IP, fiber networking, server kernels, etc. He just had to write HTTP.
"incentives" - because the founders can get rich if they succeed.
"implement" - because customers don't care about theoretical work, they care about solving the problem.
"simple solution" - because founders can't afford to design a complicated one.
"severe problem" - because the problem has to be bad enough for even a very simple solution to be worth paying for.
Now, to answer your question directly, why is there hope for startups even in highly technical fields where academia is slow and expensive? Because when people are laser-focused on solving specific problems like this, they occasionally make leaps of insight, either in terms of reframing problems to make them easier, applying newly available technology, or just thinking of a new idea on their own. Smart people can pick up skills surprisingly quickly when they're focused on solving problems.
Also, the incentives are strong enough that they can sometimes convince these skilled academics to quit/supplement their academic jobs with startup work.
It isn't inconceivable, then, that today there could be enough liquidity and appetite for riskier, much less leveraged, longer-term growth modalities, as there was in the past.
If you want to advance these fields throw money at universities, not startups.
The thing YCombinator (and its ilk) did differently was to realize that software was atypical of science/engineering fields in that it didn't benefit as much from many of the services offered by universities, so you could strip out most of the "cruft" and form a "lean" university that was just as effective (more effective, in hindsight).
When you bring the focus back to science/engineering, suddenly the "cruft" doesn't seem so pointless. If you try to build an accelerator aimed at traditional science/engineering problems, you re-invent the university.
What is that "cruft"?
* Formal training and apprenticeships from experts in various fields
* Many-million-dollar macroscopic and microscopic fab facilities (shared but not specialized)
-- Fancy microscopes (optical, electron, etc)
-- Fancy spectrometers
-- Nanofab junk (mask writers, aligners, chemical benches, CVD machines, etc)
-- Chemistry junk (NMR machines, MS machines, Chromatography machines, etc)
-- Physics junk (telescopes, accelerators, etc)
-- Engineering junk ($50k oscilloscopes and logic probes, test machines, FPGAs, CAD/CAE software, expensive simulation software)
* ~$1MM-ish labs (highly specialized but shared less)
-- Strange chemicals, gasses, and the tools required to deal with them
-- Strange biologicals (animal lines, cell lines, specialty constructs, reagents)
-- Fume hoods, centrifuges, schlenk lines, etc
-- 3D printers, milling machines, and highly specialized fabrication and diagnostic apparatus that are custom-built and one-of-a-kind
* Library/journal access
* Connections to cheap labor (no comment)
* Connections to funding for blue-sky research
* Connections to funding for seed-stage commercial prospects
YC specializes on the last bullet point and mixes in business training. It could certainly have something to offer to startups in science/engineering fields (especially if their ultimate product was software), but we shouldn't forget that it has relatively stiff competition once it starts wandering outside of its core competency into more traditional fields.
I'd like to invite people to try the early release of Empire API, which is one API for every enterprise SaaS:
Empire is an API for accessing enterprise SaaS services such as Salesforce, Zendesk, Google Apps, etc. It provides a uniform, database-like interface to every service that it supports. Empire makes it easy to integrate data from multiple enterprise services into your own enterprise app.
You can click Login to create an account, and we'll send you an API key. Or you can just sign up for the mailing list.
We're hoping that S(aaS)^n = SaaSaaS for n >= 2, and we can prove by induction that ours is the last enterprise API you need to learn ;)
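To give a flavor of the idea, here's a toy sketch of the general aggregator pattern (not our actual API -- all names and fields below are made up): per-service adapters normalize each backend's records into one common row shape, so a single query interface works across every service.

```python
# Toy sketch of the SaaS-aggregator pattern. Each adapter maps one
# service's records into common (service, id, name, email) rows, so a
# single query function spans all of them. All names are hypothetical.

def salesforce_adapter():
    # Stand-in for a response from one backend service.
    raw = [{"Id": "003A", "Name": "Ada", "Email": "ada@example.com"}]
    return [("salesforce", r["Id"], r["Name"], r["Email"]) for r in raw]

def zendesk_adapter():
    # A second service with a different native schema.
    raw = [{"id": 7, "requester": "Grace", "mail": "grace@example.com"}]
    return [("zendesk", str(r["id"]), r["requester"], r["mail"]) for r in raw]

def query(adapters, predicate):
    """Run one predicate over the unified rows of every service."""
    return [row for adapter in adapters for row in adapter() if predicate(row)]

rows = query([salesforce_adapter, zendesk_adapter],
             lambda row: row[3].endswith("@example.com"))
print(rows)  # matching rows from both services, in one uniform shape
```

The real value-add is everything layered on top of that normalization (federated search, caching, de-duplication), but the adapter layer is the core of the "uniform, database-like interface" idea.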
Aggregator APIs tend to provide extremely reduced functionality compared to the source API, and the ease-of-use doesn't compensate for this. In most cases it just makes more sense for the developer to spend a day building a custom adaptor for the API.
That said I think the SQL front-end is an interesting twist. While developers find it easier to use the source API there are many people (i.e. business analysts, etc.) who can't program but can use SQL and that might be an interesting market to go after (and also a market more willing to pay).
In terms of value to the end-developer, there are a handful of value-add services we'll be rolling out that are a pain for people to implement:
- Federated search
- ETL / caching
- Record matching / fuzzy inference of foreign-key relationships
- Entity de-duplication
We also feel that Empire API will be exciting for pure client-side apps and apps that don't want to run a backend, e.g. the sort of apps that would build on Parse or Firebase.
[sumedh at above]
you also get the advantage of having a datastore to conveniently persist data that you extract from Salesforce and other data sources, so that you can process data in batches easily.
We currently have integrations with Salesforce, Hubspot, Gmail, Google Spreadsheets, Zendesk, Mailchimp, Stripe, and CallRail.
If there's another integration you need, feel free to email us: hello at empiredata dot co
[edit: I have added the current and upcoming integrations on the home page: http://empiredata.co/#supported-platforms]
I'll reach out to you over email because I'm curious about your intercom.io use-case.
I'm also going to try it on our webapp, so we can dogfood it.
Between the staid companies that have been providing tools for decades (tools that could be much better) and the tools that don't yet exist but need to, I think we're ready. Similarly, with the security world starting to consolidate (FireEye buying Mandiant, likely IPOs for companies like Rapid7 and TripWire), I'd think it's an ample rate-of-return opportunity.
> We can’t imagine life without the Internet. We need to be sure it keeps working–this includes everything from security to free and open communication to infrastructure.
The YC one is 'internet specific' - at least as I read it.
It could also be written like this: the government is a very bad customer with very large software.
This includes decision makers who often play favorites and often have zero expertise or sound counsel to leverage in making key technology related decisions.
The healthcare exchange is not an exception to the rule. It is the status quo of most government technology related initiatives.
"Yes, we launched it and it works. We have a great technical staff, and we've won some very lucrative contracts. We called it EnterpriseAdaptor, thinking the geeks would go for that but everyone persists in calling it CondomCo."
There are a number of companies that can help you (Lockheed Martin, CSC, and IBM being three) if you find yourself in a position where you have something you need to sell in to government and don't have the credentials to land the contract directly. They do essentially what you describe... take a cut off the top in exchange for doing some of the paper work and taking some of the liability if things go awry.
Doesn't this already exist, in the form of the bicycle?
Bicycles are not an 80/20 solution, they're a 99% solution, with the appropriate kind of bicycle and accessories. You can haul all kinds of crap - construction equipment, children, appliances, etc. - with a bicycle.
There are many other kinds of bicycles that make them accessible to people with reduced mobility: tricycles, hand-cycles, electric assist. Further - active transportation also counts as exercise and physical therapy, even further reducing the number of people who end up with reduced mobility in the first place!
I don't really know what a startup could possibly do in this area that's not already being done by the dozens of active transportation advocacy organizations at every level of society.
Isn't that the point of these requests? To inspire new and innovative ways of addressing a small piece (or a chance at a larger one) of these huge problem spaces?
It doesn't matter if you don't think these are realistic requests. YC is just trying to put out an image that it wants to invest in historically "hard" market startups.
(As an aside, I'd say that bikes are not a 99% solution until the majority of people aren't afraid of riding them around cars.)
I really really hate cars, but I can see why people prefer them.
Such a system would bring ancillary benefits of improving travel safety in the city, and reducing the need for road maintenance and vermin control (esp. mosquitoes).
I imagine the biggest technical challenge to be durability: the need to be resilient against hail, high winds and flying debris. And should it fail at the worst time ... an awful thought! But levees are a similar technology in that regard.
The final product -- Segway -- didn't, though. I'm not sure anyone is willing to try again soon.
"Other stories claimed that Apple Computer co-founder Steven P. Jobs got an early peek and made the wacky prediction that cities would redesign themselves around the device. (Jobs denies he ever said this.)"
But then, I wrote "reportedly", and it was indeed widely reported at the time :)
- bike/scooter/... share programs
- new designs for bikes to improve safety, could work with the existing market
- while you're at it, new additions to other forms of transportation etc. to improve safety too... even if a lot of it is policy, even things like self-driving cars fall into this category
- new designs for bikes to improve ease of use. I don't ride bikes because I have never found any seat and any amount of cushioning that isn't immediately painful, while bikes that accommodate that (like recumbent bikes) are typically not as portable
Some of this is outside the scope of YC, or is really a policy matter, but just a thought. I especially like the idea of improving bike share programs, and that is very doable. I was going to try out SF's program, but there are no bikes near me since the company behind it went out of business. :(
I came to Demo Day in 2010 (as an investor) but left without investing in anything, because I was so demoralized by the way it seemed everyone was trying to start lame web sites doing relatively trivial things.
If Demo Day looked like the stuff on this list, I'd be banging down the door to get in again.
Generation - Solar & Wind
Transmission - Distributed Grid
Storage - Batteries
Consumption - Electric Vehicles
> We believe economics will dominate - new sources must be cheaper than old ones, without subsidies, and be able to scale to global demand.
The world uses a huge amount of energy, and it is vital that any new technology (1) is cost-competitive and (2) can scale globally. These are no small feats, but as with Airbnb, the assets already exist; our access to them does not. This is a distribution and financing problem, not a new-technology problem.
Also it's a very difficult field of science. Now you need to be proficient in AI, machine learning, computational linguistics, linguistic corpora research, cognitive sciences, statistics, and sometimes physics if the text changes over time. Of course, you also need to be a good programmer. This combination of skills is very rare. Thus, very slow progress.
I suggest starting with well-defined practical problems. For example, no one seems to do much with user-generated reviews. There is some sentiment analysis, but that is just a binary text categorization problem - not even close to general-purpose AI.
It would be much more interesting to show a seller a time-ordered stream of clustered reviews that shows only the most representative review for each cluster. This way a seller can see how his/her fixes/changes impact user reviews. It would also be a great source of feature and bug-fix requests. This is an ideal test bed for clustering, novelty detection, categorization and mild inference. The inference is required because of the sparseness of the data.
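To make the clustering idea concrete, here is a minimal sketch, assuming a naive token-overlap (Jaccard) similarity as a stand-in for a real distance metric; the function names and threshold are invented for illustration:

```python
import re

def tokens(text):
    """Lowercased word set; a stand-in for real feature extraction."""
    return set(re.findall(r"[a-z']+", text.lower()))

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_reviews(reviews, threshold=0.2):
    """Greedy single-link clustering by token overlap; returns the most
    representative review (highest total in-cluster similarity) per cluster."""
    clusters = []
    for review in reviews:
        t = tokens(review)
        for c in clusters:
            if any(jaccard(t, tokens(m)) >= threshold for m in c):
                c.append(review)
                break
        else:
            clusters.append([review])
    return [max(c, key=lambda r: sum(jaccard(tokens(r), tokens(m)) for m in c))
            for c in clusters]

reps = cluster_reviews([
    "the battery dies too fast",
    "battery dies fast after the update",
    "great camera quality",
])
# two clusters: battery complaints and camera praise
```

A real system would swap in TF-IDF vectors or embeddings and a proper clustering algorithm, but even this toy version shows why one representative per cluster is enough for a seller-facing stream.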
This would create a good data set for a more general-purpose AI. We would have reviews and text documenting changes and improvements in a new version of a product. Now the computer could start learning the dialog between users and product developers. Then we are just one more step away from a statistical-inference-based question-answering system. Not a brute-force system like "Watson" or a hand-crafted rule-based system like "Siri".
[EDIT:] I was thinking more about a decision support system that can recommend product changes. But in a way that maximizes customer satisfaction and minimizes the cost of implementation. The dialogue between past changes and customer reaction would give us the surface that needs to be optimized. This would generalize well to other domains where there is a text for request and a text for response - just to name one: clinical text in healthcare (position 5 in the RFS).
From what I understand from speaking with Selmer Bringsjord, Bloomberg has an outstanding internal QA system, so there is progress, the trouble is that it's all behind corporate firewalls.
There was a silly little online game that came out a few years ago called Akinator that would "guess" a public personality and did so by "learning" based on user inputs - a very naive implementation of CTL, but it gets across the gist of how you can implement a mock AI to get damn good results.
If you did a little Delphi exercise to stack the initial deck of results, say for a car-buying QA recommendation service, I think you could have a pretty powerful tool that could be replicated across services.
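The Akinator-style core is simpler than it looks. A hypothetical sketch for the car-buying case, where the candidate names, questions, and attribute values are all invented (a real system would learn them from user feedback):

```python
def best_guess(candidates, answers):
    """candidates: {name: {question: bool}}; answers: {question: bool}.
    Score +1 per matching answer, -1 per contradiction, 0 if unknown."""
    def score(attrs):
        return sum(1 if attrs.get(q) == a else (-1 if q in attrs else 0)
                   for q, a in answers.items())
    return max(candidates, key=lambda name: score(candidates[name]))

# "Stacking the initial deck": seed the attribute table by hand,
# then let user corrections fill in the unknowns over time.
cars = {
    "pickup truck": {"hauls cargo": True, "good mpg": False},
    "hybrid sedan": {"hauls cargo": False, "good mpg": True},
}
guess = best_guess(cars, {"good mpg": True, "hauls cargo": False})
```

The same scoring loop works for any domain where candidates can be described by yes/no attributes, which is what would make it replicable across services.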
The http://www.reddit.com/r/oculus, http://www.reddit.com/r/oculusdev/, and https://developer.oculusvr.com/ forums are jam-packed with excited hackers cranking out their projects, and with the Oculus Connect conference coming up I'd love to see some of this talent pointed towards Y Combinator.
The Oculus acquisition by Facebook is possibly the worst thing that could have happened to VR. With the Rift, Oculus had the opportunity to chart a brand new low-cost platform accessible to millions, and be the next IBM/Microsoft/Apple/etc. of their day. They had everything going for them: a founder who knows the field by heart, which allowed him to act the moment he saw the curves of "state of the art" and "realistic potential for a consumer product" intersect. A lucrative vertical (gaming) in which to get their v1 out. Industry titans believing in and joining the company.
I don't know about you, but this reminds me of things like the Macintosh: the potential for brand new applications (with VR, "computer assisted design" takes on a whole new meaning). The potential to reach brand new audiences, and to make existing audiences experience things they could have never experienced on traditional 2D screens.
But they went the acquisition route. Now they're owned by Facebook, which means that everything they do has to go through all the motions that a large company has. They can't do anything really risky, they can't say "fuck you" to the status quo (because Facebook is the status quo). What are we going to see from Oculus? Locked-in app stores. Social networking bullcrap à la Second Life (pro tip: we've been trying to make "social VR" a thing since the very first days of the internet, and it's always failed. The Palace (1995), Second Life (2003), etc. Every 10 years, like clockwork, someone tries it again and miserably fails. It makes for great science fiction, presumably why people are so intent on trying to make it happen in the first place, but in reality it just doesn't work out.)
What will we see from Oculus? Most likely nothing ever really revolutionary. As far as gaming goes, we'll probably see half-assed VR from Microsoft and Sony.
But as far as truly disruptive uses of VR goes? Well, it certainly won't be Oculus. Maybe someone else will pick up the torch where Palmer Luckey dropped it, but it seems like the window of opportunity has closed.
BUT in the months now following my opinion has changed.
There remains a vibrant and resilient ecosystem of devs working on amazing homebrew stuff, even though on day one everybody predicted they would close up shop.
But since a lot of the work is Unity-based, the end platform almost doesn't matter, and people realize that.
Oculus will start out as the PC enthusiast / high-res experience and FB gives them the dry powder to actually get the hardware out the door.
But in the long view, EVERY smartphone, console, etc. will be capable of VR, and it's largely thanks to the preliminary Oculus effort and their inclusive structure of opening the platform to small devs.
Think about MineCraft as an example.
Brilliant concept, single dev (then small team), and now basically universal adoption that is completely device and platform agnostic
And of course large studios will throw huge budgets against the VR platform, and maybe Oculus isn't the long-term leader but they are wholly responsible for the coming renaissance!
Look at all the amazing things the Kinect did. I played a few Kinect games. The effect is amazing. Imagine if Facebook bought it instead of MS. I don't see why MS or Sony would make things half-assed. If anything, VR gaming might be a natural monopoly from a commercial entity instead of some open standards things Oculus kinda-sorta is pushing.
Reminds me of how people are baffled that there's a near MS monopoly on the desktop and why Linux can't break through. Natural monopoly here as well.
VR gaming might be a console-only affair, with a small set of "PC master race" types telling us how much better their experience is. Meanwhile, Jane Console Gamer puts on her headset and enters the world of Minecraft or whatever is going to be the big time waster, with little fuss because it all "just works," while Joe PC Gamer is whining on forums about why $video_game isn't working with his Oculus and he has no one to support him.
Mmh, what amazing things did it do? Last time I checked, there is no killer app for the Kinect. Microsoft tried to force it on everyone by bundling it with the Xbox One (because no one wanted it otherwise), and now they're removing it because even when their console is bundled with it, no one does anything with it.
There are already several startups in the "personal saving" space largely based on index funds, though some of them have large minimums. Complex schemes may not be worth the effort for those with only a few bucks to spare:
It would be cool if Vanguard had an API so we could do the same thing open-source rather than incurring the extra management fees from these companies which are mostly based on Vanguard funds.
Then there's a whole broad class of lifestyle apps that could be written, like a saving social network where you are socially rewarded for not spending a lot, win status in a game, etc. E.g. maybe an app where you make a bet with a friend: if you spend under $100 on food this week, they owe you $100, and vice versa. That would keep you both watching your spending and actually having something to save.
I find this confusing as well because while simpler investment mechanisms would help some people who already have money to invest, for the majority of people getting this initial money is the problem.
It's improvements in personal finance tools and systems to help people make better decisions that let them save (and hence invest more) that could make the most difference for the most people. It doesn't necessarily have to be games or social networks though. Even just presenting the right information at the right time could work.
> Most people either pick individual stocks and bonds and expose themselves to high volatility, or pay very high fees to mutual fund managers and lose to the index anyway. Most people are also non-experts when it comes to portfolio rebalancing and tax optimization.
> This seems to us like something software should help solve. We’d like to see new services that make it possible to invest in super low-cost index funds (in a normal account or a retirement account), do some customization around individual stocks, and otherwise set it and forget it.
Basically exactly what those companies are doing.
Oh yes, yes, yes. Everyone is talking about the quantified self but human augmentation would be so much cooler. I don't care if a watch can tell me my heart rate at all times (I know when my body is tired, or out of breath, because I live in it!!!)
But there are so many senses that I would like to have; for example, be able to always know where the North is relative to me. A device that would let me feel the North would be so cool and useful (I wear a Tissot T-Touch for that reason, but it's a very poor solution to this problem).
I think I heard the Apple watch will be able to do this, in some cases; but it sounds like an afterthought. I would pay serious money for a wrist bracelet or some other wearable that would do only that, but do it well.
replace your eyes with cameras (20/20 vision, ability to see "invisible" frequencies, night vision, 360 degree vision). replace your ears with microphones (hear "invisible" frequencies). if we can do that, we can also directly input synthesized sources (think oculus rift without the headset. or your iPod without the headphones.)
granted, if we can do that we've built the matrix sans software and at that point all bets are off. not that I think it's a bad thing, a body is a pretty shitty vessel for a consciousness.
I'm not sure we will have such a reliable interface in my lifetime, but it's amazing to live in a time where it seems completely possible.
Watches and wearable things are just the tip there. The issue with implants is that they are 100% against the Hippocratic oath. No doctor would ever do one if you are healthy. Implants are a long way off, and wearable enhancers may ease the road to them, but a new ethical system for doctors, or an extension of doctors without such oaths, would be required. Think about it for more than 30 minutes and it gets heeby-jeeby real quick.
A simple device that would buzz on your skin isn't "medical".
A few of the accelerators I'd applied to in the past don't see the business opportunity present in developer tools (Who pays for those?) so it's a relief that YC recognizes the opportunity there.
Thankfully, companies like Jetbrains, Github and Light Table have made a lot of progress in demonstrating that a viable market exists here.
This exists already, there are a ton of discount brokers with very competitive pricing.
"Most people either pick individual stocks and bonds and expose themselves to high volatility, or pay very high fees to mutual fund managers and lose to the index anyway. Most people are also non-experts when it comes to portfolio rebalancing and tax optimization."
Education, clear UI, consistent metrics and prioritisation are all potential issues. Do all these services exist outside of your country? What if I move countries? How do I explain it to my mother? What if I only have $1000 to invest?
So we are going to make another choice?
What if I only have $1000 to invest?
Put it into your savings account because you need a rainy day fund.
When you have $3000 to invest, put it into Vanguard's S&P 500 fund and call it a day. You'll already be beating more than half the managed funds out there, and you can spend the time with your kids.
This is slightly more complicated than YO, but only slightly.
A lot of the 401k providers just dump a list of 40 funds on your lap, and let you figure it out, but it's information overload.
I really want something that can help me decide what type of risk I can handle, and then properly choose funds / balance between all of my accounts. Financial advisors are able to do this, but software can do it better and cheaper.
Basically, I want something like Mint that connects to all of my investment accounts, and sends me an email once or twice a year with specific instructions for rebalancing.
It's missing the Mint-style auto-connect, but it largely does the rest of it (the actual hard math).
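The "actual hard math" of a rebalancing instruction is mostly arithmetic once the accounts are connected. A hypothetical sketch (fund names, balances, and target weights are invented):

```python
def rebalance(holdings, targets):
    """Dollar amount to buy (+) or sell (-) per fund to hit target weights.
    holdings: {fund: current dollars}; targets: {fund: weight, summing to 1}."""
    total = sum(holdings.values())
    return {fund: round(total * weight - holdings.get(fund, 0.0), 2)
            for fund, weight in targets.items()}

trades = rebalance(
    {"stock index": 7000.0, "bond index": 3000.0},
    {"stock index": 0.6, "bond index": 0.4},
)
# sell $1000 of the stock fund, buy $1000 of the bond fund
```

The genuinely hard parts are the ones this sketch ignores: aggregating balances across providers, choosing which account to trade in for tax reasons, and deciding the target weights from the user's risk tolerance in the first place.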
Telling people to just invest in an S&P 500 ETF (metrics and comparison don't matter because people don't understand them anyway) doesn't seem like a great business; financial writers are already advocating it pretty regularly, and investment in index ETFs is growing steadily.
Sure, but next ask "why." What is the motivation to provide metrics? Who will compile and compare them? How will they stay impartial? Who benefits?
We don't even have so much as a comprehensive comparison of internet service offers.