Frighteningly Ambitious Startup Ideas (paulgraham.com)
1032 points by anateus on March 10, 2012 | 430 comments

Man finds a black kind of rock that burns; discovers that you can get a lot of this rock if you dig deeper, but deep mines have water. In order to successfully mine this rock, man devises a steam-powered engine (neatly enough, powered by this same rock) to pump out the water. No, not the steam engines you're familiar with. This is the Newcomen Steam Engine: http://en.wikipedia.org/wiki/Newcomen_engine

The Newcomen Engine has a fatal flaw: it cools the steam for the return stroke, losing energy to the latent heat of evaporation each time. James Watt discovers the latent heat of evaporation, and realizes that separating the condenser from the piston would improve efficiency. So let's go build some railroads, right? Not so fast. It would still be another 30 years (100 years from the invention of the Newcomen Engine) before railroads and ferry boats would be regularly powered by reciprocating steam engines.

What's the moral? For 100 years, vast leaps in technology came one after the other. In the process, the Laws of Thermodynamics were discovered and described. Many learned men stood around patting each other on the back at how successful, how inventive they were...at digging a black rock called "coal" from the ground.

But most people don't dig rock from the ground. Most people do travel from point A to point B on a fairly regular basis. The world changed when 100 years of technology left the mine shaft and the factory, and got people where they were going just a bit faster.

I'm convinced that computers are still at the Newcomen/Watt transition. We have a ways to go before the world truly changes.

This perspective is interesting when applied to communications.

Parley. Courier. Pigeon. Mail. Telegraph. Telephone. Transatlantic Comms. Fax. Early Internet. Email. Text Messaging. Live Chat. VoIP. Twitter. Facebook Messages. Video Chat.

It's a naïve summary of communications history, but look at the persistence of some of the early players. Many have not been replaced to this day - snail mail, POTS, fax, email, chat, VoIP, video chat - and there are fundamental reasons to stick with certain technologies (fax, POTS, FB and Twitter excepted). There is disruption to be had, but there is still massive value in some of the oldest methods, with some evolutionary shifts.

The services need to adapt, and incumbents do restrict progress, but the 'email killer' notion is not well conceived. Most people don't use email as a 'todo' - that's an extension, not a replacement. This is why Rapportive has a market, but is not _the_ market.

An idea I've been musing on: is there a fundamental set of problems of humanity from which all economic activity is derived? For example:

"Shrink the world": couriers, seafarers, caravans, riders, roadbuilders, railroads, telegraphs, automobiles, steamships, dockworkers, truck drivers, aviation, telephones, email, social networking, videoconferencing.

"Organize labor": lords, finance, education, recruiting, HR, management, information systems, law, accounting.

"Keep us safe": militia, pikemen, shamans, legions, samurai, knights, musketeers, standing armies, chemists, doctors & nurses, the military/industrial complex.

"Food and shelter": self-explanatory.

"Money for primarily good feelings, not stuff": charity, religion, theater, tv, films, gambling, music, story books, fashion, holidays, tourism.

"Find stuff to burn" is a pretty big chunk too.

"Control more energy than your body can produce"

The "sex & war" category seems to be a great catalyst for innovation. These days, "sex" of course means "Internet porn".

Don't know if you'll get this,

but it's because we all want to grow, to keep the gains from growing, and to ensure they're invested. We intrinsically don't want to see kids starve. Why?

The answer to that is the perspective, I think.

Not sure if this answers your question, but the fundamental problem from which all economic activity is derived is scarcity.

> there is still massive value in some of the oldest methods, with some evolutionary shifts.

Yes, and newer protocols often carry emulation layers for older protocols, so we have things like POTS running on top of TCP/IP, when just several years ago most of us still had TCP/IP running on top of POTS via dialup modems.

The other day I needed my insurance company to send a fax to my bank (banking regulations mandate the use of faxes rather than electronic formats to share documents). The insurance agent did it by hitting a few keys on her computer. A piece of paper didn't leave her office, but it arrived on cue at the bank's fax machine nonetheless.

Sidenote: One of the interesting side effects of the internet disrupting traditional retail is that the losses in traditional lettermail delivery are being offset by massive gains in package delivery. The USPS has hamstrung itself by doubling down with huge new investments in the part of its business that is shrinking instead of pivoting into the obvious growth opportunity.

Some would argue Email is little more than USPS over IP.

I think the most interesting aspect of modern communications - accidentally in the 90s, deliberately in the post-twitter-era - is the simple addressability of people.

There was a tradition of letter writing for centuries (visit the British Library), but it required some level of introduction to connect. The academic roots of email broke some communication boundaries (to the time-detriment of prominent academics), and Twitter has opened the same addressability to celebrities and field-leaders (with a more voluntary twist I would say).

Yes, but this addressability of people also means that the sender must have some value to provide.

Being able to self-create a platform of value that you can offer to people you wish to network with is crucial (I created a magazine to accomplish this objective).

> Yes, but this addressability of people also means that the sender must have some value to provide.

If only that were the whole of it. The sender must have some value to provide in the eyes of the recipient. But the recipient will actually have to look at the message in order to determine if this is the case or not. That decision alone makes many messages that were sent with value '0' a net negative to the recipient.

Hence all the spam. If the 'providing of value' would be a thing we could determine in advance then the low barrier would not be an issue.

Effectively, a spam filter determines that the value of a message is '0' to the intended recipient, to avoid it becoming a net negative.
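To make the economics concrete, here's a toy model (my framing, not the commenter's, with illustrative numbers) of why a zero-value message is a net negative unless a filter catches it first:

```python
# A message is worth reading only if its value to the recipient exceeds
# the cost of looking at it; a filter tries to make that call before the
# human pays the reading cost. All numbers here are illustrative.

READING_COST = 1.0  # attention spent just deciding whether a message has value

def net_value(message_value: float, filtered: bool) -> float:
    """Value the recipient actually receives from one message."""
    if filtered:
        return 0.0          # never seen: no value gained, no cost paid
    return message_value - READING_COST

# A zero-value (spam) message that slips through is a net negative:
assert net_value(0.0, filtered=False) == -1.0
# The same message caught by the filter costs the recipient nothing:
assert net_value(0.0, filtered=True) == 0.0
```

The asymmetry is the whole spam problem: the sender pays nothing, and the recipient pays the reading cost even when the value turns out to be zero.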

The breakthrough in your comm analogy will come with the translation of thought to word.

We will wear a device which will be able to read our brainwaves and determine which word we are thinking, à la dictation, then send that to the recipient.

This will be wired telepathy - the recipient will get a message which they can receive any way they choose: visually (email - they read it), audio playback, or thought-injection, where it is played back on the nerves and "heard" in their head as a thought. (Evolutionary results, to be sure.)

As a lifelong cyberpunk enthusiast who, at 37 years old, has been using computers daily since I was 8, I have real concerns about the mental health of the yet-to-come digital world.

I.e., the ADHD that will result from direct cerebral access to information 24/7.

What will be the impact on the (generally) serially wired brain to vastly parallel inputs?

I suspect massive upheaval on the social level. There will always be adopters of immersion, as there will be future Amish who eschew all things digital, but the median social reaction will be driven more by our true, and unknown, innate biology, which we won't even be aware of until this happens.

> We will wear a device which will be able to read our brainwaves and determine which word we are thinking ala dictation

Since this thread is presumably being read by entrepreneurs making bets on the future of technology, it needs to be said that this will never happen with current imaging technology. "Brainwaves" implies EEG, and the research in this field strongly suggests that it is information-theoretically impossible to extract this information from the electrical activity on the scalp.

For this vision to become reality we need a new imaging device that has both the temporal resolution of an EEG, and a spatial resolution that probably needs to be better than an MRI.

In summary: Certain things are impossible. I can say with certainty that no algorithmic improvement will allow this to work using an EEG. I don't know whether it is physically possible to create a non invasive imaging device that allows such a signal to be detected reliably, but it certainly does not exist today, and it seems like a leap of faith to assume that it definitely will exist at some point in the future.

I can key morse code at 40wpm with two muscles. With one hand I can chord at 120wpm. On a stenowriter I can transcribe about as quickly as most people can read - 250wpm.

I've invested an extraordinary amount of effort into improving the speed at which I can interface with a computer; I think the practical limit is about 300 baud, half-duplex.

Of course, we're trying to establish an interface with a bafflingly complex lump of grey meat, but are we really daunted by the idea of outpacing a V.21 modem?
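For what it's worth, the "300 baud" ceiling above roughly checks out on a napkin. A quick sketch, under my own assumptions (5 characters per word, 8 bits per character, no protocol overhead):

```python
# Napkin check of the bit rate of human text entry at various speeds.
# Assumptions (mine, not the commenter's): 5 characters per word,
# 8 bits per character, no overhead.

CHARS_PER_WORD = 5
BITS_PER_CHAR = 8

def wpm_to_bps(wpm: float) -> float:
    """Approximate one-way bit rate of text entry at a given words per minute."""
    return wpm * CHARS_PER_WORD * BITS_PER_CHAR / 60.0

print(round(wpm_to_bps(250)))  # stenography at 250 wpm ≈ 167 bits/s
print(round(wpm_to_bps(40)))   # morse at 40 wpm ≈ 27 bits/s
```

So even a record stenographer sits comfortably under the 300 bit/s mark, which is why the V.21-modem comparison stings.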

Your judgment that present technology is inadequate is based on the assumption that computers need to learn to read human thoughts.

What about the inverse: that humans learn how to think in a way that a computer understands? That will be much easier, as humans learn much better than computers, and also much safer - I will have complete control over which of my thoughts the computer can detect and interpret.

The human learning to adapt to the machine has been the way EEG-based brain computer interfaces have been made for a couple of decades. Using machine learning to adapt the machine to the human is a much more recent development.

It is possible today to make EEG-controlled devices. They typically differentiate between a small number of real or imagined movements in the user. This is awesome, because it can allow severely paralyzed people to communicate, control a wheelchair, etc. Nevertheless, the algorithms used to do this are perfectly useless when it comes to distinguishing whatever words the user is internally vocalizing.

The keyboard is not very good at determining which words I'm internally vocalizing either, yet it still seems to work. The point I'm trying to convey is that maybe we can learn to transmit words using some form of brain reader that measures something other than vocalization.

Doesn't have to be "brainwaves". The brain has a few outputs that can be hijacked (e.g. a computer with a neural interface that appears to be another muscle in the body). I don't know whether the bandwidth of these outputs is sufficient for interesting communication; we've evolved to take in far more data than we produce.

Edit: It seems that more direct methods of neural interface are already plausible: http://www.technologyreview.com/biomedicine/37873/

It doesn't actually need to be noninvasive. If an invasive procedure is useful enough and can be made safe, eventually it will be ubiquitous.

The problem with invasive is upgrading.

Asher's Gridlinked posited a limited segment of society [operatives & wealthy] who could manage this full-time connection, and even then it was perceived as unhealthy.

Wikipedia has undoubtedly changed how our generation views knowledge, but it's still a pull technology. Outbound messaging will still be a push technology (nobody wants to compose an email of their stream of consciousness, and brains are poorly wired to retain full structure in mental 'RAM').

Wetware doesn't add significant differences to the existing protocols - merely a more rapid input mechanism than checking your phone. Assuming contact is voluntary, people will not opt for the PubSub model for comms. If you choose to use it for trivia, caveat emptor.

Sure, but I was not saying that there will be compulsory receipt of info... though, given human nature and the already prevalent propensity for people to be overly responsive to the flood of alerts, I see a negative impact on consciousness.

It will be very interesting to say the least.

Personally, I am already overly susceptible to the karma endorphin boost from reddit, Quora and HN. I was thinking about this just the other day; I was originally against karma being hidden on posts, but now I like the fact that I am less tempted toward bias based on that number.

We already continually scan for karma upticks on all our primary sites. This is bad...

The killer app for email is idiot-proof cryptographic signatures and encryption.

It's baffling to me that a squiggle on a bit of paper is more trustworthy than a properly implemented cryptographic signature.
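A minimal sketch of what "properly implemented" verification looks like, using only Python's standard library. Caveat: this uses a shared-secret HMAC as a stand-in; real email signing (OpenPGP, S/MIME) uses public-key signatures, so the recipient needs no shared secret.

```python
import hashlib
import hmac

# Shared-secret message authentication as a stand-in for a real
# public-key email signature. The secret here is hypothetical.
SECRET = b"key agreed out of band"

def sign(message: bytes) -> str:
    """Produce a hex tag that only a holder of SECRET can compute."""
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify(message: bytes, signature: str) -> bool:
    """Recompute the tag; compare_digest avoids timing side channels."""
    return hmac.compare_digest(sign(message), signature)

tag = sign(b"please wire the funds")
assert verify(b"please wire the funds", tag)
assert not verify(b"please wire the FUNDS", tag)  # any tampering breaks it
```

The machinery is a few lines; the unsolved "killer app" part is making key management this painless for the average user.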

You are contradicting yourself. On one hand, you state that a "killer app", i.e. something that everyone would use, is cryptography. On the other hand, you state that the vast majority of people trust a squiggle on paper rather than actual crypto. If the average Joe doesn't care about crypto, why would crypto email be interesting?

You'd need to find a way to make crypto interesting. Most email doesn't have crypto, so if you sell something that does crypto well, you can corner the newly created crypto market.

A peer-to-peer system for sharing music/films that does good strong crypto (and faster than tor) would do the job.

There is ResoMail, which does this, and it seems it's not very popular.

I don't understand why you put "Facebook Messages" in a line of great inventions. Isn't it just another "Live Chat"? Am I missing something?

Facebook (and Twitter to some extent) solve one of the biggest problems of email which is the concept of verified sender.

I add it to the paradigm shifts as it resolves (in its own [large] namespace) a longstanding problem with email.

I add it to the 'transient' list as its solution is driven purely by network effects, which leaves it vulnerable to sideways market dissolution by the next player.

It's a bit of a stretch to say they solved it. I'm not on FB, so FB doesn't matter to me. On the other hand, I'm part of an Active Directory at my company, so AD solves this problem for me at work. But neither guarantees that the email came from the particular person and not from a dog.

No, Facebook Messages and Facebook Chat are two different things.


Yes, anyone who says that "email isn't a messaging protocol" has probably been smoking too many of those funny jazz cigarettes.

Before the railroads, people didn't do all that much travelling from A to B; unless they were very wealthy or had some external pressure, like the need to find work or religious persecution, they mostly stayed at A. Which makes the invention of railroads even more impressive in my eyes: it created demand rather than addressing it.

What moved from A to B was coal, ore, wheat, manufactures, but mostly by rivers. You can see this on a good map of northeast U.S. -- ask yourself why Pennsylvania is essentially rural in its center, with large cities at both ends, or why New York State is essentially rural 50 miles from the Hudson. Inland water transport was so important that the major capital projects of the early 19th century were canals.

The initial advantage of railroads over barges wasn't reach but speed. By moving goods faster, merchants were able to sell and receive payment faster, no small thing when credit was scarce, uncertain and expensive. I expect the initial rail lines served the same markets as the canals (that's where the business was), then began to extend their reach with spur lines.

There was probably some rail demand creation as the roads extended into the West -- farmland near a railroad was no different than that 20 miles away in anything but railroad proximity. But that was later, after the technology's dynamics were well understood and the players well capitalized.

Yes. The conscious revolution is just beginning, our species is very young, and the stars are still far away.

Yeah but until we invest as much time, money, and effort into defense against weapons and customized microorganisms as we do into their creation, we're running the risk of this young species (and maybe a lot of the other ones) disappearing very, very soon.

Oh please. Humanity is pretty much the definition of a species that should, in all probability, have died out on a multitude of occasions over the years, and I am not just talking about the world wars and nuclear bombs, but about the plagues, Spanish flu and all the rest of the diseases, and the host of predators for which we are no match at all (a standard monkey is stronger than all but the best-trained humans). Consider the environments we inhabited (today that is not much of an issue): when we left Africa (why do you think we left? Probably because stronger tribes were pushing us further and further away and the alternative was death on the shore), when we left the jungle, when we went to Siberia (again, almost certainly pushed by stronger tribes) and when we went to Europe (same reason), where we had to fight the Neanderthals. Basically, humanity has been on the winning side of terrible odds since we started out. Don't forget that we nearly didn't make it in Africa (http://news.nationalgeographic.com/news/2008/04/080424-human...).

No wonder we love an underdog -- there is no greater underdog than humanity.

There is an amusing passage in "The Black Swan" about how a turkey would estimate its own life expectancy as Thanksgiving draws closer...

Don't confuse the fact that we've been lucky not to go extinct yet with evidence that we won't, especially when you being around to make that inference is conditional on said luck.

Amusingly that's not a bad argument that we're living in a simulation. Disregarding any particular human, humanity itself has done remarkably well against the odds in the same sense that story-book characters do remarkably well against the odds and that, at least when they're on a winning trend, player-controlled populations (whether in Populous, C&C, etc.) do well against the video game's odds or against other players.

Isn't that mostly survival bias at work? No matter what the odds were, the winners write history. If we had died out in one of the steps along the way, we wouldn't be here writing about it.

Once humanity spread around the world, the odds of a natural catastrophe killing all of us at the same time became very low (limited to planetary-scale disasters).

Only since the industrial revolution have we been on a more dangerous path toward central points of failure. On a geological time scale, that's only a very short time. But we've been testing our luck really hard.

Or perhaps it's an argument for multiple realities. There may be uncountable parallel timelines where humanity died out at every possible point during our history.

That said I get the simulation theory feeling pretty solid sometimes, to the point where grappling with it is one of the major themes of the graphic novel I'm in the middle of...

> No wonder we love an underdog -- there is no greater underdog than humanity.

Tell that to the Dodo, the Quagga, the Javan Tiger and the Thylacine. And many others besides.

Weak and winning is "underdog", weak and losing is "loser".

By that definition, every loser that hasn't yet lost is a winner. There is no way to tell the difference except hindsight. I bet the dinosaurs told themselves that they were totally different than all those other weak-ass species that had joined the annals of history 76 Myr ago...

I second this. We can either destroy ourselves or become a more vibrant, multicultural civilization. See physicist Dr. Michio Kaku on this:


I think this is a good thread in which to recommend Andy Kessler's book "How We Got Here", which can be downloaded from here (http://akessler.blogs.com/andy_kessler/2005/04/hwgh.html). It's a parallel history of economic and technological evolution from the genesis of the steam engine.

I think you are right.

I don't agree with PG when he is talking about Apple. "None of them are run by product visionaries" is simply not true. Steve Jobs had great respect for companies like Sony because they are product-visionary companies. Sony, IBM, Philips, and a lot of others - all companies that gave us great products we now use every day. But it took years to get there.

E-ink for example was created in 1997 and is still in development. But we all know e-readers. A great example of a visionary product imho.

Sorry, Sony was a great engineering company, worthy of being respected, until they sold out and became a content-creation company. Look at how they hobbled the MiniDisc years ago. Sony deserves absolutely no respect these days.

While I can totally see where you are coming from, I think it is not fair to say that Sony does not deserve any respect whatsoever anymore. They were innovators at first, too, and have come a long way since.

I do not agree with Sony's content-creation affairs; however, they continue to innovate with good products (e.g. look at their digital cameras, camera sensors, etc.).

More often than not, there is no clear distinction between good and bad, certainly less so with amazingly huge companies such as Sony, Apple, Exxon, etc.

The key is 'gave'. Past tense.

I think a lot more start-up people would fear (or even care about) Microsoft if Bill Gates was still running it. The same goes for the rest of the companies you mention.

Similar, but different story. In the 800s, looking for an immortality elixir, man discovers a solid black chemical explosive. In time (around 1132), this finds itself used as an early propellant in ranged missiles. Later it finds use in other applications: mining, firearms, entertainment, bombs and myriad others.

But the use as missile propellant remains, and the search continues for similar but more powerful propellants (to give the missiles more kinetic impact, longer range, and other properties).

Progress was slow until the late 19th and early 20th centuries when liquid fueled rockets were invented as well as an entire host of various chemical explosives with vastly different properties.

Progress was rapid, and soon missiles could travel hundreds of miles, and had enough extra capacity to deliver even more powerful explosives to their target.

However, accuracy was poor at best. Various types of complex control systems were designed and built into the missiles. As man realized that missiles could be scaled up and lift payloads up into orbit and beyond, bringing the payloads down in the desired location became even more important.

Inertial systems, radio control, celestial navigation and even attempts at manned guidance!

However, navigating is a general problem, and not just useful for missiles. Ships, people, surveyors, and others all need to know where on Earth they are. In the 60s, with the advent of orbital missiles and satellites, a system of satellites was placed in orbit. Using a complicated collection of quartz oscillators, early computers, and various radio receiving equipment, one could (after a number of minutes collecting data and feeding it into dedicated guidance computers) determine where they were on Earth to an accuracy of a few hundred meters.
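The positioning step being described boils down to trilateration: solve for the point whose distances to several known transmitters match the measured ranges. A 2-D sketch with made-up coordinates (real GPS solves in 3-D plus a receiver clock-bias term, which is why four satellites are needed):

```python
import math

# Toy 2-D trilateration: given three beacon positions and measured
# ranges (in a real receiver, signal travel time times the speed of
# light), solve for the receiver's position. Coordinates are made up
# for illustration.

def trilaterate(b1, r1, b2, r2, b3, r3):
    """Intersect three range circles by linearizing the equations."""
    (x1, y1), (x2, y2), (x3, y3) = b1, b2, b3
    a = 2 * (x2 - x1)
    b = 2 * (y2 - y1)
    c = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    d = 2 * (x3 - x2)
    e = 2 * (y3 - y2)
    f = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    x = (c * e - f * b) / (e * a - b * d)
    y = (c * d - a * f) / (b * d - a * e)
    return x, y

# Receiver is actually at (3, 4); ranges are computed from that point.
x, y = trilaterate((0, 0), 5.0, (10, 0), math.sqrt(65), (0, 10), math.sqrt(45))
assert abs(x - 3) < 1e-6 and abs(y - 4) < 1e-6
```

The geometry is the easy part; the engineering described in the thread (clocks, receivers, miniaturization) is what made measuring those ranges cheap.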

Enter the nuclear age, which gave us ultra-precise clocks; combined with transistors, then integrated circuits, every segment of the navigation problem was improved. By 1978, these components were small and reliable enough that missiles pushed constellation after constellation of satellites into orbit. Ground receivers were small enough to fit into the bodies of other missiles, enabling them to navigate to targets with accuracy in the single meters, with constant positional fixes along the flight path.

Since the navigation computers and radios were now small enough to fit on a missile, they could also be fitted onto ships, large aircraft, and large trucks.

Later improvements to atomic timekeeping, computers, radios and other pieces meant that the receivers could be made man carry-able and fit into backpacks, small vehicles and on and on. Accuracy was improved to inches, and missiles could suddenly be dropped into selected air vents in specific buildings.

Except suddenly nobody cared as much about moving missiles; people found that knowing where they and their stuff were in the world was far more interesting and useful. Further improvements in computing meant that the navigational radios and computers could fit into a handheld device, then be integrated with high-quality geospatial data and heuristic path-finding algorithms, and suddenly we have satnav in our cars.

Reuse the same navigation tech and we find we can improve the geospatial data considerably, improving navigation. Fix these devices to the ground and we can measure plate tectonics for the first time, track fleets of vehicles in real time (saving millions of dollars in fuel), and compute phasors in power lines, improving power delivery systems; ultra-precise atomic clocks mean precise time synchronization (with accuracy of +-10 ns).

Further improvements in miniaturization put navigation into handheld phones, and suddenly we know what restaurants are nearby. Tie it to a database collecting reviews and we know if they're good. Tie it to the previously developed navigation system and we can even get walking directions there.

In other words, most people don't need to guide a missile, but they do need to find a good place to eat. And when the technology developed for busting bunkers left the avionics systems, it got people and their stuff where they wanted to go, faster and with less confusion than before.

Today it's hard to imagine a time when we didn't know the exact location of just about everything.

As of 2012, start-ups with over 1 billion in growth potential appear to revolve around services or products offered to a large portion of society or to companies, which can generate 10 to 100 USD per person (employee or individual) per year, such as search (ads), email, education, showbiz, healthcare. Here are some additions: cheaper fuel, integrated software-as-a-service (kill Salesforce or gapps), faster travel, space travel, home robots/AI, and better science.

In some cases there is a need for someone to open a new market. If Apple or Google had built a home robot with a few basic functions, wouldn't a lot of people buy it, opening a new market?

More ambitious things to kill:

- Kill the "house"; since the days of the community cave a house has been human shelter #1; nowadays there might be better alternatives to an owned house.

- Kill the "state" or "a better citizenship"; provide the same things the state provides, leaner and cheaper.

- Kill the capital investment; create an automatic investor which selects virtually existing start-ups based on instant financial metrics.

- Kill the "company";

- Kill "democracy"; a better voting system

- Kill placental reproduction or sexual reproduction; 9 months is too long.

I've been thinking about the 'Kill the state' idea for a while. It just makes sense. I have more in common with the average person in Germany than I do with the average person in Kansas or Alabama (not necessarily politically, but culturally). With online communities and instantaneous worldwide communication, I am also more closely in touch with my global counterparts than a significant portion of people in my own country.

In short, nation boundaries are slowly becoming outdated. We will probably have to wait for actual teleportation before this fully comes to pass :)

>It just makes sense. I have more in common with the average person in Germany than I do with the average person in Kansas or Alabama

Unless you socialize, talk to your family and take your entertainment in German, I struggle to imagine what you mean.

No, this works for me as well.

Not Germany specifically, but I left the US for France. I don't mesh with the culture 100%, but... more so than I did in the US, except for little localized pockets there. Just for starters, religion plays a far smaller/quieter role here; no one cares that I'm an atheist. When I come back to the US, it's just so obvious and... loud, and everywhere. It grates incredibly.

I certainly talk with my family frequently, though just between parents & siblings we're already split across both US coasts, the Netherlands, and (me in) France. My in-laws are in Malaysia & the US; we also talk regularly, and try to see everyone in person at least once a year.

But I never, never forget about state borders. I don't have that option; they don't let me. My wife is Malaysian; I'm American (as is our daughter), we reside in France and my employer is in the UK. We have to file tax returns in both US & France, and have wasted weeks of our lives doing paperwork and waiting in line in embassies and other government offices (sometimes forced to stay in hotels to be near an embassy in the morning... we don't live near one!) sorting out all of the incredibly stupid details.

My wife needs a visa for flights to the US or Canada (and was once ejected from Canada because she had a US-bound flight with a Canadian stopover, and hadn't realized that mattered. Yup, it really does.)

The hoops you have to jump through to emigrate to a country -- like she did to the US, and we both did to France -- are horrible, with uncertain outcomes, and often poorly documented.

I'll stop the rant, but I'd really, really love any progress away from current states.

I think he means he prefers the package of services provided by the German gov more than the package provided by the Kansas/Alabama gov.

It's not about language or political views. I was referring to culture and values.

Can you expand on this? Because I would find it impossible to separate German politics from culture/values, and I think many, many Anglophones would find the German-speaking love of orderliness maddening; easily 30%, even in a country like Denmark, where practically everyone capable of working speaks nigh-on flawless English. I realise the Danes aren't German, but they're more orderly than the Dutch, and it's that that would piss so many off, so fast.

Language tends to be very strongly intertwined with culture as the main means for conveying said culture. So I'm curious in what way you feel culturally closer to Germany. (disclaimer: I have spent about 80% of my life in Austria and about 18% in Britain, speak both languages but identify much more strongly with British culture)

But the English language is mostly a Germanic language, and the British people have mostly Germanic DNA. Contrasting Austria and Britain is not that big of a divide. A better contrast would be between Britain/Germany/Austria and one of the Latin countries, or a Slavic country, etc.

When the US was founded, there were essentially two separate civilizations within the country's borders. This was true until one annihilated the other during the US civil war, but the loser was never assimilated until the last few decades. This means a major cultural divide still exists.

I would say that the North took more of the Anglo-Saxon Protestant values than the South: hard-working, industrious, religious but not overtly so.

You seem to either be defining "culture" very narrowly in terms of work ethic, or you haven't spent any significant amount of time in the countries mentioned. Or maybe both.

Even with the narrow definition of culture, I can't say I agree. Austrian society is rather conservative. There may be progressive and industrious individuals, but certainly not the country as a whole. I haven't lived in Germany, but I'd certainly say British people are more liberally minded and individualistic than Austrians.

Despite their disdain for politicians, Austrians will typically expect the state to solve their problems, and accept paying vast amounts in taxes. An example of this is higher education: the majority expects it to be completely free for students, very much at the expense of quality.

The civil service is enormous, inefficient and somewhat corrupt, but it seems to be accepted as a requirement for stability and welfare as a huge provider of jobs. Contrast this to the British fears of a "nanny state" and general grumbling about taxation.

Here's my theory of how the differences came about:

1. You're right to point out a certain north/south gradient across Europe, but you're forgetting that Austria and Bavaria were historically the centre of the counter-reformation, unlike central and northern Germany. So, for a long time, very much catholic like southwestern Europe; even anti-protestant. Religion obviously has very much taken a back seat in recent decades, but culture (there's that word again) changes much more slowly.

2. The Austro-Hungarian Empire used to stretch deep into the Balkans and Eastern Europe. Although the German language dominates today, Slavic and Hungarian surnames and people's appearances still hint at a past where Austria, and Vienna as the capital in particular, was much more heterogeneous. Even the use of the German language is not quite as straightforward as it seems: the names for many foods differ from those used in Germany and have Slavic, Hungarian and Italian origins. (and I would personally consider food to be a part of a society's culture)

The English-German connection you point out is, by the way, quite far in the past. The Norman invasion of England injected a lot of Latin vocabulary into the English language, and that was in the 11th century.

I'm not trying to make a moral judgment by the way; I have family in and from both countries. I can't predict which country will be more prosperous in 50 or 100 years' time; Austrians are probably the happier ones at the moment. I'll readily admit to having my own strong opinions on whether it will stay that way, but that's not really the point here.

For what it's worth, I likewise can't really comment on the differences in culture across the United States from personal experience. I'd be surprised, though, if the differences were as big as across Europe. (I'm leaving aside insular communities such as the Amish who deliberately do not mix with general society; you get those everywhere, and their populations tend to be small)

Could you elaborate on the idea of "Kill the house"? It sounds fascinating.

If modern human life is a drama (of consumption), the house is the central location where it mostly occurs. Most people work all their lives to pay for a house. You feel safe and comfortable in your house, you sleep there, you keep all the stuff you bought, such as computers, there. Most of us are enjoying our Sunday at our houses enjoying the Internet.

We have had a lot of variations of the "house": the woods > caves shared by a community > the single house > apartments (flats), with hotels and cottages as complements. So the question is, can the current tech start-ups disrupt the concept of the house? It is a broad topic, but here are some random things which come to mind without going too deep:

- make virtual windows where you and your selected Facebook friends see a common virtual place, or real scenery from cameras you install (for example, in beautiful places on earth), using high-quality displays and cameras. (Google's very fast internet may have a use here)

- kill the walls by replacing them with always-on displays and cameras connected to your remote counterpart. Two distant houses should virtually become "one". (Raspberry Pi or cheaper hardware may help here)

- rethink home automation (iRobot does it in some sense), or build a branded robot which can simply fetch a sandwich. A central web service could be at the center, or a social web service.

- make location-independent apartments(flats) which are stackable, moveable, expandable.

- a kitchen or fridge managed by a web service based on your dietary requirements.

I've always thought of creating a dining room where one wall would be a projected screen that would give you the illusion that you're dining in a different part of the world.

It would actually be even cooler if you could get a live stream from restaurants for their "virtual table" dining guests.

Ballistic missiles have had internal inertial guidance systems since the early '60s, fully independent of any satellites. Navigational satellites would be shot down first in the case of an all-out nuclear war.

I believe the UGM-27 was the first to be targeted via something like GPS, but it was only used for determining the launch site, and I think that stayed the norm until the late '80s, when GPS systems could be shrunk and hardened for high-g operation. So yes, most ballistic missiles maintained an inertial guidance system for many years after GPS was launched. But it was the requirement to know the accurate position of mobile launch sites (in order to preprogram the missiles' flight computers with an appropriate inertial target solution) that motivated the system in the first place.

If you're interested in this sort of thing, you should watch the BBC series "Connections".


These ideas, and the idea that they are frighteningly ambitious, clearly come from the personal experience of living in northern California and dealing with tech startups most of your time.

These are largely first world problems. Here are some ambitious ideas:

- distributed power generation that's cheap enough and renewable enough so people in rural parts of sub-Saharan Africa don't have brown outs anymore.

- synthetic food generation a la star trek

- desalination that is cheap enough for a farmer in Mozambique to do himself

There are more, lots more. People outside the valley bubble have real problems.

Solving first world problems gets you first world ROI. Y-Combinator is an investment firm. I'm not judging, I don't have a dog in the fight. But I'm pretty sure that's why you don't see stuff like stopping brown outs in rural sub-Saharan Africa on this list. There's no first-world money in it.

It's a trade-off. There's a lot more money in the first world. But there are a lot more users in the third world. That's why the third world is called the majority world.

Check out the Google Solve for X videos. One presentation is about a clever new idea for cheap desalination: http://www.youtube.com/watch?feature=player_embedded&v=R...!

Another is about synthetic food...the presenter said his company could feed the entire world from an area the size of Rhode Island: http://www.youtube.com/watch?feature=player_embedded&v=r...!

I think the best ideas improve first-world issues while fixing third-world problems. Energy generation, for example: if you managed to figure out how to generate power/propulsion using a cheap and small physical object and a renewable resource, you'd solve both first-world issues and third-world problems.

Education needs to catch up with the world in the first world, but if you do that in an async manner online, then you can at the same time solve the problem with education in Africa.

distributed power generation that's cheap enough and renewable enough so people in rural parts of sub-Saharan Africa don't have brown outs anymore.

desalination that is cheap enough for a farmer in Mozambique to do himself

I am sorry, but you cannot fix broken states with technology. Want to help Mozambique and sub-Saharan people? Find a way to turn their governments into well-functioning ones. This is at once infinitely more difficult and also the only thing which will really solve the common developing-world problems.

The University problem is potentially a "3rd world" problem, especially inasmuch as you interpret it as a learning/teaching problem.

Non-consumption is traditionally a good place to start, and I would suggest that the high school and junior university level is the most disruptive place to start. Non-consumption of senior-high-school to junior-university-level education is something in great abundance in poor countries.

There is also probably more incentive for potential students to play along in poor countries. A 9th-10th year dropout is more likely to be in that position because of access or some other problem that technological innovation is good at dealing with. More importantly, the ROI on those 2-4 years of education is probably much higher.

If you were to go after job skills/ training as the point of attack (as opposed to general education) poor countries are also a great place to start. For a lot of skills there is demand at the bottom: bookkeeping, graphic design, programming, etc. Bringing a person making <$1-$2 per hour to a point where they can command a $3+ is fundamentally doable. That's a big incentive.

What if we made people in sub-Saharan Africa as rich as Europeans and Americans? Then the solution to "brown outs" there is the solution to brownouts here: better infrastructure.

If you can solve that problem, you will have done the world an amazing service, outperforming even Norman Borlaug, and the idea is certainly ambitious in its scope.

One question though: how would you go about doing it?

Let's study history and see how Europe and America got wealthy and try to do that.

Which lands and populations do you suggest Africans raid for resources and labour?

Oh yeah, forgot about that...

One big deal in Africa is logistics. The railways of the colonial era were designed for quickly moving raw materials to ports. And the new Chinese-built roads are no different.

The end result is that trucking food between cities and villages not that far from each other is crazy slow, dangerous, and profitable. Not much different from trans-Pacific trade some 100-150 years ago.

If somebody came up with a transport solution that didn't require massive infrastructure investments (for which there is no money), and which was safer and faster than current trucks, Africa could start changing very quickly. And there would probably be a decent profit in that.

Where to start? Could locally-produced, solar-powered small airships do the thing?

There are some interesting innovations in infrastructure-less transport:

1. Low-cost UAVs have been used in Africa to transport blood samples and meds between hospitals and cities. Matternet is a project focused on scaling a UAV transport network in Africa, currently probably for small payloads, though that would probably increase in the future.

But the UAV industry is improving at a rapid pace because of military innovations, so it's a good place to be.

There's also a lot of open source innovation there, radically reducing the costs.

2. There's some work being done on airships (and plane-airship combos) for the army, and on airships for commercial transport in areas with no infrastructure (oil fields, etc.). These might also fit Africa.

3. There's a company working on a low-cost jeep fit for muddy African roads. It can also be cheaply supported by current African repair networks.

Then resources can be raided in remote areas without doing anything for the people who live there. With efficient enough machinery you won't even need to train local people to take part in the operation from start to finish.

Having a major logistics route run through an area does not make that area prosperous. It does, however, make the owner of said trade route prosperous. What you're arguing is an extension of the status quo for Africa and many other areas of the world with extensive natural wealth. Nothing gets locally produced if someone elsewhere can produce it cheaper in a free market.

Airships? How did Europe go from crappy roads to good roads? Not with airships.

Europe had a governance model where infrastructure investments didn't end up on Swiss bank accounts.

The point here is to figure out a way to fix logistics in a way that individual entrepreneurs can do it.

The problem with search is that not only is Google getting worse, but I've also mostly outgrown it, in that it isn't sophisticated enough to answer pretty much any scientific question I would want to ask.

- No way to search for a scientific question and get a summary of the current scientific consensus or viewpoints on specific issues

- It's really hard to access academic journal articles online.

- Even when you can access journal articles, it's hard to know which ones to look in to answer your question. Sometimes it's hard to even know which field(s) your question falls under.

- Even if you vaguely know which field your question falls under, you don't necessarily know any of the vocabulary used by that field.

- No way to search by dependent and independent variables, confounding variables, etc.

- No way to sort articles by the quality of their methodology, the quality of the journal they were published in, the quality of the researchers, etc.

I know this isn't a product that more than 1% of the population would use, but if someone built it then maybe there are other things it could be used for.

You're talking about highly-focused, or micro, search. Yeah, Google doesn't seem to do that very well. They have a few segments, like book search and image search, but it's not specific enough.

One thing I sometimes search for is code examples in a particular language. Search for something in C on Google and you end up with lots of results for C++, C#, etc. Github, with its large repository of public code, lets you filter by programming language, and is much better than Google in these cases.

Bing copies Google. DuckDuckGo returns different things than Google, but otherwise is a copy. There's no micro search engine for specific topics and sub-topics, outside of site-specific search. Market opportunity...

We're trying to do something like this, i.e, let the user build more complex queries (than a free text search) for their specialization without having them write actual SQL ;) We've started off with Biotech - http://www.distilbio.com


Dialog has been doing this over the internet for a long time. It's generalised search of the web, I feel, that is too expensive to tackle as yet.

> It's really hard to access academic journal articles online.

We obviously need something better than the status quo here, but the status quo isn't as bad as it seems if you know how it works.

Quick hints: email the article's author. They'll probably be more than happy to send you the article and a quick summary worded for a lay audience (not to mention talking your ear off about their more recent work...). They're not worried about you not paying the publishers; they want to spread their work around.

Another trick is knowing people in academia. Maybe you have a friend who's doing graduate work, or lecturing. You could ask your old lecturers if you went to university and if the paper you're after is in their field. There are also communities like /r/scholar on Reddit, though I imagine some people are against that sort of thing.

Actually, the use case you described seems very ripe for disruption if you ask me. Because it's hacking your way to the solution, when we could use a better solution.

> The problem with search is that not only is Google getting worse, but I've also mostly outgrown it, in that it isn't sophisticated to answer pretty much any scientific question I would want to ask.

This simply means that Google doesn't work very well for you, and I mean no offense, but what you are searching for is a very, very small minority of search queries. Google still serves a crushing majority of people very well.

You are making the same mistake that Paul is making throughout his essay: he wants startups to build products for him and not for regular users. Seriously, email is actually a todo list? Come on, now.

Google holds the elite back, holds science back, holds our collective knowledge back. Even if it is a minority of queries, these queries are more important than your typical query.

We're surely working all the time to make search "more sophisticated"; many special-case queries are already smart (from "2+2" to geo, stocks etc., you don't need to go to special sites like calculator, maps and finance). And we surely have plans to go way beyond, but generally, this is Hard Stuff(TM). For things like journal articles, the information is often behind paywalls like ACM's, and even when it's not, specialized engines like CiteSeer are hard to beat because the info has very special organization needs, like collecting and measuring citations. On your most advanced requirements, I think only one of Asimov's positronic robots would be that smart ;) unless there's a specific effort to curate this data... which requires tons of human labor, so ads served to the very small number of people who need this service will not make it viable. It's the same problem we have with patents (see http://arstechnica.com/tech-policy/news/2012/03/opinion-the-...).

Listing all the reasons it's hard is why the area is ripe for someone else to do it.

No; these reasons show why the area is ripe for anyone to do it. The only fallacy in Paul Graham's comment (or in possible interpretations of it) is that Google has a weak spot there so it's an opportunity for somebody else to beat Google. Trust me, we have a ton of resources dedicated to improvements in search and we have lots of cool things coming down the pipe, although maybe not at the velocity that one could dream of (e.g. something like intelligent research for scientific papers is firmly in the sci-fi realm today, at least for fully-automated computing).

BTW, Paul's article has a big #fail when he mentions code search as a possible idea; dude, we do have that and it's amazing (but unfortunately we recently shut it down; not sure if this will eventually resurface as part of some other product).

All that said, of course some company can always make an effort dedicated to a specialized niche that we are overlooking and beat Google Search in that niche; an excellent example of that is Wolfram Alpha. Still, not a big deal; to really "beat Google" you need a new general-purpose, full-Web search engine that beats Google's. Not impossible either, but the barrier to entry is simply colossal, and it amazes me that people don't realize that and dream that it just takes some cool new idea or clever new algorithm. We are not in 1998 anymore, when Google, still working out of a garage, started to beat the then-top engines like Yahoo! and AltaVista.

> The problem with search is that not only is Google getting worse,

Google, today, is after all a shareholder-owned company that aims to generate PROFITS, and to maximize them. It should be kind of obvious that at some point they (as a company) will try to maximize cash coming in and minimize cash going out. Therefore, their product [search] is narrowed towards the people who push the most obvious questions/search queries: what's playing in theatres, what car to buy, best LCD TV, pharmacy near me, etc. That's probably 90% of the search queries they're getting. I say, as long as they work in this zone and make sure simple queries return the best results, they are winning - winning the biggest chunk of the market and smiles on shareholders' faces.

> - It's really hard to access academic journal articles online.

That's because of the business model around funded research. Academic research funding is driven by a limited number of funding agencies being bombarded by huge numbers of proposals. One of the key metrics they use is how many peer-reviewed journals the author has been accepted by. Those journals make money by charging access fees and by being semi-trusted gatekeepers. Journals WANT it to be hard to access them online, since they view the Internet publishing paradigm as a threat.

People have been trying to disrupt that business model since the mid 90s.

I use it more as a way to shortcut sites. For example, if I want to Wikipedia "pi", instead of typing www.wikipedia.org in the address bar and then typing "pi" in the site's search bar, I enter it in Google and find the link. Firefox's awesome bar is gradually taking over, as I can favorite things and "search" for them just by typing in a couple of letters, but I still use Google for anything I haven't favorited.

I've been doing that for years on Opera. Just gotta right click a search bar, give it a few letters to ID it and off I go with 'w pi' to search wikipedia. I do not miss having to go through google first if I just use the search bar, or even going to wikipedia.org or wherever first.

I like Firefox's Keyword Search bookmarks. The Awesome Bar becomes a "web command line". Some example search bookmarks I've configured:

* "w pi" to search Wikipedia articles
* "d pi" to search Dictionary.com definitions
* "am pi" to search Amazon products
* "map pie" to search Google Maps locations
* "g pi" to search Google

and many others. :)
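The mechanism behind these keyword bookmarks is simple enough to sketch. Here's a hypothetical Python illustration (the keyword-to-URL table below is made up for the example, but the "%s" placeholder substitution is how Firefox keyword bookmarks work):

```python
# Sketch of keyword-bookmark expansion: "w pi" -> a Wikipedia search URL.
# The bookmark stores a URL template containing "%s"; the browser replaces
# "%s" with the URL-encoded remainder of the address-bar input.
from urllib.parse import quote_plus

# Hypothetical keyword-to-template table (in Firefox, each bookmark has
# a "keyword" property and a URL containing "%s").
KEYWORD_BOOKMARKS = {
    "w": "https://en.wikipedia.org/wiki/Special:Search?search=%s",
    "g": "https://www.google.com/search?q=%s",
}

def expand(command: str) -> str:
    """Split 'w pi' into keyword 'w' and query 'pi', then fill the template."""
    keyword, _, query = command.partition(" ")
    return KEYWORD_BOOKMARKS[keyword].replace("%s", quote_plus(query))

print(expand("w pi"))  # -> https://en.wikipedia.org/wiki/Special:Search?search=pi
```

Multi-word queries work too: "g hello world" becomes a single search with the spaces URL-encoded.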

I'm honestly surprised not every hacker does this. The vast majority of popular browsers support keyword searching, either out of the box or via a plugin.

This is something I've been thinking about seriously, building an "academic-level" search engine. I have the IR/NLP background. If anyone is interested discussing/collaborating, ping me!

Does it bother anyone that "frighteningly ambitious" begins with search and email? Seriously? This is the pinnacle of our contribution to mankind - building search engines and to-do lists?

Where's the lunar base? The flying car? The personal robot? The cyborg? Meh, maybe I'm just getting old and grumpy. (To be fair I did like the other ones).


Search is another word for artificial intelligence.

Email means a universal way for people to communicate.

Lunar bases & flying cars aren't ambitious, they are just expensive.

Personal robots are here now in vertical spaces[1] and there are at least 200,000 cyborgs walking around now[2]. I think both these areas are worth working in, but I think you underestimate the ambition of a word like search.

[1] http://www.irobot.com/

[2] http://en.wikipedia.org/wiki/Cochlear_implant

OK, but "cleaning" is yet another word for artificial intelligence. I think you underestimate the power of economics. Can you imagine how many people are busy cleaning things?

I find it amazing that the difference between tasks like washing clothes and cleaning toilets is so small for humans and so huge for machines. We've had washing machines forever, but we're still cleaning our toilets manually.

And cleaning is just one of a large number of tasks that require similar kinds of intelligence. If robotics is able to close that gap, the world changes more profoundly and more quickly than ever before.

All of a sudden, most of the jobs performed by untrained people will disappear without replacement. The current thinking is that tasks that get automated are replaced by "higher level" tasks, but I think that's not actually what happened.

Automated tasks have largely been replaced by other tasks that are equally simple for humans but more difficult for machines. As a result, most people are still doing work that anyone can pick up in a week.

If that gap closes, we're in for a pretty difficult but ultimately wonderful transition. People have been crying wolf out of fear of this transition for hundreds of years. This time it's for real. Even if humans find new things that aren't yet automated, the transition will be too fast for at least a generation.

It seems like a lot of people disagree with you, but I agree with this viewpoint. That the Luddite fallacy will be a fallacy forever is a fallacy in itself.

I've said this before, but I keep repeating my viewpoint. At some point, we will have automated too many of the "useful" things to do for there to be enough work in the private sector for everyone. There are only so many cars, houses, gadgets and foods humanity needs, and the efficiency in producing these things just keeps growing. I'm not saying that we are headed for a dystopia where no one has a regular job - quite the opposite - but during a transition period a lot of people will be without work. Arguably this is part of the reason joblessness is so common in the US today. We will have to find some sort of tax system that makes it possible for those who can't participate in the elite to also live their lives, and hopefully contribute with some creative pursuit.

A lot of people will find this transition difficult, because a lot of people simply have no direction in life and just keep doing rote work. When the rote work gradually disappears, they will have to find something sufficiently engaging to do with their lives.

I've heard it described as the post-employment era. Scary consequences if you let your imagination run with the possibilities.

Some interesting reading:



The really interesting question is whether this will further exacerbate inequalities or whether society will necessarily have to transition to a more "socialist" model if work no longer needs to be done.

"...at that time the economy of the United States will be going down and the next boat people will be Americans leaving America looking for work abroad."

Jacques Attali in his 1990 book "Millennium: Winners and Losers in the Coming World Order"

Still quite relevant, though I'd alter the quote to say "virtual boat people", since jobs are turning out to be anywhere you have an internet connection.

There are only so many cars, houses, gadgets and foods humanity needs.

Have you watched: http://www.youtube.com/watch?v=audakxABYUc ? (Rory Sutherland's TED talk 'Life lessons from an Ad man').

Particularly the note about luxury trains.

"We will have to find some sort of tax system that makes it possible for those who can't participate in the elite to also live their lives"

Rather than a tax system that gives them pretty much no incentive to get a job (if you were essentially paid to do whatever you wanted, would you want to get a job?), it would probably be better to have government-enforced jobs (i.e., if you want money from the government, you need to work for them until you find a regular job. Otherwise, you get nothing).

As I've said before, Norway has a de facto system that works like this already. Anyone that's deemed unfit to get a well-paying job gets a fixed monthly payment from the government.

What if you want more money than you can get through this system? Then you have to do work that pays. Money and status work relative to your peers, so there's an incentive to earn more if your peers are richer than you. I don't see that this is a problem if the society is wealthy enough.

Your line of reasoning is way off base. You're trying to shoehorn a scarcity-based economy into a post-scarcity society. The idea of "contributing" (the way it's currently thought of) will have to go away completely. Private ownership is a social construct. In a post-scarcity world our ideas of private ownership will have to change. Society will simply not allow the owner of Acme robot company to literally own everything (because he has a monopoly on labor). A more communal-based society will necessarily spring up. No one will stand for a monstrously imbalanced society.

Unless the robots have guns...

you want money from the government, you need to work for them until you find a regular job

So you're saying the government should hire all the unemployed? That sounds like a surefire way to make even more people unemployed in sectors that have to compete with the government's army of cheap laborers.

Why should we incentivise people to work if nothing needs to be done?

Eventually, far out into the future, the trends might actually reverse.

Fundamentally, robots are made out of expensive stuff: metal, earth minerals. It can be recycled, but recycling is expensive and not 100% effective. Humans, on the other hand, are pretty cheap; we run on carbon, oxygen and water.

In the future, it might make sense for robots to do stuff they're best suited for (e.g. very hard, dangerous work in human-unfriendly conditions (e.g. spaceship repairs in outer space)). However, unless we discover a way of making robots dirt-cheap and 100% recyclable, it will probably be humans cleaning the ship from the inside, no robots.

Who knows what robots will be made of far out in the future, and remember that humans have to be fed for at least a decade before they can do much of anything. Also, earth isn't losing any material as far as I know but I admittedly know very little about physics and chemistry. Can metals degrade into a lower kind of unrecoverable state like energy can?

If you have an unlimited supply of energy, you can pretty much recover whatever. The only elements hopelessly lost are those that involve nuclear reactions.

Even then it's just a question of energy. You can do a lot with a particle accelerator.

Well, while you can in principle do something, an accelerator is a pretty blunt instrument. You're unlikely to have practical success putting Krypton, Barium and a few neutrons together into a Uranium nucleus even if it's theoretically possible. The phase space for the reaction is just not there. Chemistry has a lot more tricks for synthesizing molecules than just shooting them at each other.

Fundamentally, robots can be made out of less stuff than us, by virtue of being a mostly-empty lattice material at all levels, and won't need recycling as repairs happen on smaller scales until the repaired item is as close to new as you want.

Please. Stop. Bringing that old saw back while claiming that 'this time it is for real'.

Humans can do wonderful things when the alternative is to starve or lose their habitat. It is not different this time, it will not be different the next time, nor any time after that. Won't happen until we have a perfect AI which can do all jobs. Then, and only then, will humanity take a permanent vacation.

I'm not talking permanent. What is different this time is this:

a) The transition will happen more quickly because the kind of intelligence you need to clean a room is very broadly applicable. It affects almost all manual tasks at the same time.

b) Contrary to earlier transitions, there will not be many tasks left that untrained people can perform and that are similar enough to the old ones. A farmer can go work in a factory. Most cleaners will not become artists and entertainers within their lifetime.

I don't think humanity will ever take a permanent vacation. There will always be something that humans want from other humans and there will always be something to exchange as payment.

The same arguments were raised the last time something like this happened.

And while it is true that a farmer can go to work in a factory there is very, very little overlap between these two professions (could a factory worker go tend a farm and survive? Unlikely) -- just as there is little overlap between cleaning toilets and being a cashier in a shop.

Oh and cashiers are not going to be replaced by the same AI that can clean a bathroom, do the dishes and wipe the floor.

Good comment. This is definitely a huge issue that is going to need serious thought in the coming years.

We're going to need to do some more serious big thinking about more than "work", which is too narrow now. We need to figure out how we're going to "occupy" people in the transition from a post-industrial/service/information-technology society to a roboticized, post-scarcity, arts-and-leisure society. If handled poorly, "social unrest", mass protests, and outright violence may become a regular part of the landscape, what with millions of always-idle, impoverished people just sitting on the sidelines, ignored. How long could this last? One hundred years, perhaps? That's a really long time to have constant social upheaval.

Sci-fi has dystopias full of rebellious robots, human vs. robot warfare, grappling with what it means to be sentient, etc. but have startlingly little that deals with a much more realistic question: what does a society where human labor is being made redundant look like in terms of day-to-day human behavior? Over the coming years, the consequences of mass automation and even "stupid" AI are going to come to a fore.

Ok, so what are things we're going to need to do? Encouraging the arts and music, competitive games, and various leisure activities is probably a given (and not much encouragement is likely to even be necessary, since many will do these things on their own given the ability to do so.)

The other piece of the puzzle is the one that's really going to need a great deal of thought: how do we support them, and just as importantly, how do we run an economy where 40, 50, 60 percent of the population does not work (and thus, under the current system, has no income)? The most obvious answer is to move towards a massive expansion of what we currently call the "welfare" state, along with planned population policies (hard, but probably necessary.) The big difference? We'll be supporting not just jobless people, but the economy itself (we can call it econofare or something.)

What kind of economy can you have with few consumers? Not much of one. It will require a completely different perspective, one where it's not a bunch of unfortunate (or lazy, depending on your perspective) jobless people, but a bunch of people we essentially pay to be consumers. Items which are scarce will need special handling for sure (hopefully things like food and shelter can be made superabundant sooner rather than later), but for everything else, it'll be all about simply keeping the flow of money going. The main policy prescription here is the tax-free guaranteed income, like they have in certain Scandinavian countries (someone else in these comments mentioned this.)

That all said, the two types of jobs I would put on "won't be automated anytime soon" list:

1) Jobs that exist to make others feel powerful/superior. Waiters, house-servants, massage therapists. These are jobs that could certainly be automated, but you don't get the same feeling of "lording over" others with robots that you do with people. Robots /will/ do these jobs for many, perhaps even most people on a day-to-day basis, but there will likely still be people who pay for this as an "experience."

2) Interdisciplinary generalists (i.e., modern Jacks-of-all-trades.) This job isn't a single "job" at all, but a collection of jobs requiring the understanding and ability to synthesize knowledge from different (and sometimes disparate) areas. We will continue to automate parts of specific types of cross-discipline "tasks", but we won't be automating the big-picture viewers.

They're expensive because they're inefficient. To accomplish them efficiently is ambitious and elegant without a doubt.

A startup that can chip away at what makes these things so inefficient is going in the right direction. Looking for inefficiencies, such as with present day email, is a good way to find a direction.

Economics is still at its heart the study of the problem of scarcity. Economies, and all economic activity, attempt to alleviate or manage that problem.

This is a nit pick, but much economic activity these days is nothing to do with alleviating scarcity and everything to do with introducing artificial scarcity where it didn't previously exist. See e.g. Intellectual property.

Economics != economic activity (at least in the context you are using it).

TheCowboy is right: Economics is still at its heart the study of the problem of scarcity.

I said it was a nitpick, but the post to which I responded did say "all economic activity".

Does it even matter? It matters if it means our fully automated jobless future is penury and starvation instead of the post-scarcity Culture we're looking forward to.

Yes, I agree with that. SpaceX etc are doing well at the "chip away" strategy.

I also agree that a cheap lunar base would be a wonderfully ambitious project!

That was just a selection of ambitious ideas. I didn't mean to imply it was a complete list; I never imagined anyone would think I meant that.

I almost added a section on robots, but the talk was already long enough and I figured 7 was enough to illustrate my point that ambitious startup ideas are frightening.

I'm not sure where you got the idea that my point was that the list wasn't exhaustive enough; my point was that you left out robotics and kept search and email in.

Robotics actually, is an area where something huge seems right over the horizon. I'd certainly bet on the next game changer coming from robotics before any equally revolutionary activity comes from search or email.

I don't necessarily disagree with what you do say about the two, I just don't consider advances in these areas terribly important in the grand scheme of things.

You, diego, et al. seem to have difficulty coming to terms with the author. It is clear from the article that by "frighteningly ambitious" the author means problems people currently have which seem insane to take on. They are problems people have because the author has indicated that he has them or has discussed with others who have them. Search and email are insane to take on because if you succeed with search, you are decimating a company with market cap in the hundreds of billions; succeeding in email means disrupting one of the original Internet protocols which virtually everyone uses. Thus, these are frighteningly ambitious.

You instead see "frighteningly ambitious" and assume your own term which is in the realm of science fiction (cyborgs, personal robots, &c.). Robotics, incidentally, may be a frighteningly ambitious endeavor but only if you are considering something like autonomously scraping and repainting ships or actuation in camera pills: personal (autonomous) robotics won't happen in the near future (see iRobot Corp).

So, assuming the author's definition of "frighteningly ambitious", search and email are important because they are problems people have now. If search weren't a problem, Union Square wouldn't have funded DuckDuckGo, and more to the point, DuckDuckGo wouldn't be growing. (I also have the exact problems the author describes with Google -- for the past 2 years I've had a problem with Google.) If it hasn't been shouted by Fred Wilson and Paul Graham enough: they have pain from dealing with email. They would throw their money at you if you offered a way to make email less painful. Maybe instead of spending two hours every day with email they could spend one hour with new/email and they could have a spare hour a day with their kids -- how valuable is an extra 14 whole days a year to these guys? How many people with means hate spending so much time on email? Probably enough to make it worth investigating the problem. For what it's worth, I'm frugal as Franklin and I would actually pay for email (i.e., I wouldn't accept any common free email service because they aren't even worth $0.00 to me, for a variety of reasons).

> I just don't consider advances in these areas terribly important in the grand scheme of things.

What is the grand scheme of things?

Facebook and Google have improved the daily lives of billions of people. Tesla Motors' electric car? Very impressive and a lot of promise, but so far all it's done is increase the self-satisfaction of ~1000 people.

Look at Star Trek. Other than the spaceship itself the most impressive things are the technology the people use. The Siri-like computer interface, iPad-like computers, Skype-like communications technology, etc.

> Facebook and Google have improved the daily lives of billions of people. Tesla Motors' electric car? Very impressive and a lot of promise, but so far all it's done is increase the self-satisfaction of ~1000 people.

The same could be said about computers in the 1950s. I am talking about the famous misquote: http://en.wikipedia.org/wiki/Thomas_J._Watson#Famous_misquot...

> there was a world market for maybe five computers

If fossil fuels aren't going to drive cars in the future, it's going to be something else. Fuel cells or electricity.

If IBM, or anybody else, had dwelt on how difficult it would be to create enough computing power to derive any meaningful value, we wouldn't have computers today.

> If fossil fuels aren't going to drive cars in the future, it's going to be something else.

The other option is we won't have cars.

And yet, Star Trek falls firmly into the social SF camp. The tech is incidental, left abstract or waved away with babble so that the personal interactions can proceed. Obsessing over the latest gadget incarnation wrapped in shiny branding is just the opposite, consumerism rather than tech which quietly enables.

The frightening part is the idea that you would be trying to disrupt seemingly invincible tech companies like Google, which would be an insane business plan for a startup looking for a $50k kickstart investment.

But, I remember when Google was first released. I thought the search engine market was already full, with Yahoo and AltaVista dominating at the time. Who knew?!

What is it they say? It seems invincible until it suddenly doesn't any more.

But in any case, what companies seem like or appear to do doesn't matter to your company. What matters is what they are and what they do. If they seem scary, good; that will only serve to keep others from trying to beat them -> fewer competitors for you.

Now go forth and conquer.

Google offered to sell to AltaVista (for somewhere in the 1-2 million range, if I recall correctly) and they were turned down. Sometimes the giants don't see change coming.

It was $1,000,000 to Excite, not Alta Vista.


You are vastly underestimating how much email has changed the world. If a better email can do for this generation what email did for the last one, I'll back it against anything on your list for sheer bang for the buck.

Maybe it's not as bothersome as it sounds. Donald Knuth said the two biggest problems in Computer Science are searching and sorting.

The lunar base and the flying cars are pointless if your goal is to improve people's lives instead of just satisfying egos.

And the personal robot is already being worked on, but instead of a single humanoid robot (which I always considered to be a bad approach), they're small and specialized.

What about this?

"In the developing world, 24,000 children under the age of five die every day from preventable causes like diarrhea contracted from unclean water." - UNICEF

That's 54 jumbo jets a day.

I also remain unimpressed by the ambition of his list. And by yours.

That's an important problem. But my list was a list of startup ideas, not important problems generally. I'm not certain the problem of water supplies is best addressed by for profit companies (it might, but I'm not sure), so it doesn't make a good example to put on a list of startup ideas.

That's fair, but I think we, in this community, have a stigma that we work on trivial and superficial problems. There are far too many cat photo sharing sites like Color and not enough efforts like One Laptop per Child. We have to be diligent in clarifying that.

Our goal has to be to make the world a better place, not simply to make a few of us more money than we can spend.


One of the core problems the world faces going forwards is resource supply - water, food, fuel. It's long been predicted that the next large-scale war will be over water supplies; there are already a number of simmering conflicts that go back to access to water.

Why are we running out of resources? Significantly, because there's too many people for the resources we have.

Why do we have too many people? Explosive population growth over the past 100-150 years, particularly in the last 50. World population has gone up nearly 50% in 25 years.

What's historically been the best way of arresting population growth? War; kill lots of people. Hmm, not really an acceptable plan. What's the next best? Education, in particular female education.

Now, at its core I agree with your point that the list was under-ambitious. Well, let's refine that - only parochially ambitious. So let's go big. We want a sustainable, long-term end to hunger and world peace.

What do we do for that? Give a man a fish vs. teach a man to fish. Food parcels and free medicine are a sticking plaster - necessary in the short term, insufficient in the long term. Fair trade helps, but alone it only improves their position at the bottom of the supply chain. Get improved education and access to knowledge, though, and we enable a long-term culture shift that both better equips the presently impoverished for the future and attacks the root causes of the resource instability that currently so impoverishes them.

Which is where projects like OLPC and the Indian tablets are so important, and where (if done properly), the startup idea to replace universities is the biggest and most important on the list. Take it all the way and it's not about knocking down Yale and Harvard; it's about world peace and an end to famine.

OLPC has been such a failure because it was designed without understanding or respect for the situation the people live in. I agree with you about medicines etc. being bandages and education being the most important thing. But I think when you veer into "universities" and "cheap computers" that things become sketchy.

Replacing universities doesn't help the developing world, not really, because the education would have to be tailored to them.

If you want to read a book about one of these problems (sanitation) and how aid efforts tend to go totally haywire for the same reasons OLPC does, read The Big Necessity.

The needs are more basic. People require self-sufficiency in the basics before having a tablet computer is going to help.

Agree with you on female education… especially also business education and entrepreneurial loans. In this vein, there is a great charity here: http://www.girleffect.org/question

And there is that Indian entrepreneur who developed affordable sanitary pads for women:


Note that this wasn't a charity effort, but a social entrepreneurship effort. And pads are important. Women are not just uneducated, they are hampered even when healthy.

Finally, many people in developing countries have a cell phone which they can use via SMS to determine market prices for their farmed produce etc., which is already a great boon.

I personally know some folks, who grew up in Africa, who are doing great & interesting things to serve these people where and how they are, not by assuming they need somebody to come in and "revolutionize" them. (Which will inevitably fail.)

The reason replacing universities will help the developing world (and why I still argue for cheap computers tailored to the needs of the developing world) is that it makes access to education that much easier. Yes, they need primary rather than tertiary education, but tertiary is the easier market to start deploying in, so use it as a beachhead and work down.

At present, why don't they get education? Is it that they don't want it? Aside from a few communities actively holding back education, by and large they know the benefit and want education. The present model doesn't work for them though; the communities are frequently rural and some distance from the provision of education, the families are operating in the subsistence economy and can't afford to release the children to education.

Technology can help attack this from two fronts. One, it enables distance learning at the time of the student's convenience. Education need not be directly opposed to family responsibilities this way; the two can be more easily dovetailed. Education can also more easily be tailored to individual needs; by removing the need for geographic concentration to provide the required training, more appropriate education can be delivered.

Two, education is a (potentially fundable) need on such a scale as to enable transformational economic change society-wide. So much of the economic disadvantage is down to lack of information and poor access to markets from the impoverished communities. An infrastructure to enable distance learning on a whole-society scale is just as able to provide general community information and revolutionise access to markets. Think eBay's had a transformational effect on some businesses? Wait until you see what the same thing could do for third-world subsistence farmers.

Transform education through technology-based distance learning. The easiest starting point for that is tertiary education in the west, which shows high demand but poor utilisation of possible technological effects coupled with high costs from the incumbents. Then use that infrastructure to achieve the really transformational change, by educating the third-world poor.

They die because they can't afford clean water, so there's no money to be made in solving that problem. Capitalism does not have a heart.

No, they die because they are ignorant of sanitation and therefore don't have any. Have you read anything on the subject, or are you just assuming?

Secondly, there is certainly money to be made in solving the problem, if you're not a truly lazy thinker who bases his conclusions off what he hears in TV news soundbites:






Lastly, I'm sure that TOMS and Warby Parker would be interested to hear that there's no money in helping poor people gain access to shoes or glasses, as well.

My statement was not meant to be factual but rather indicative of a certain cynicism on my part about what I think the motivations behind people starting (the vast majority) of companies are. Sorry that wasn't clear.

Well it worked out for the best! No worries. But I know there are people who are actually thinking that.

Sarcasm: it's dangerous (but sometimes useful) on the internet ;)

I thought about this too!

Disease is a serious problem.

But it's shameful that at this stage in the evolution of humankind we have people dying of hunger and malnutrition.

It's not a disease which causes children to die of diarrhea, it's inadequate sanitation. That's right: exposure to human waste.

In my experience, Sand Hill Road does not want "frighteningly ambitious" startup ideas if substantial capital expenditure is involved. (In fairness, they are willing to hear those pitches – I guess that's something.)

> Now Steve is gone there's a vacuum we can all feel.

Pixar got funded only because Steve Jobs (Steve Jobs!) paid for it out of pocket to the tune of $50 million total. It's Pixar that made him a billionaire (not Apple, as most people assume). How often does Steve Jobs invest in companies? Virtually never. But he knew (correctly) that Pixar was on to something.

I'm dealing with the Pixar bootstrap-problem at my own company, Fohr. Fohr is the live-action version of Pixar (photography, not animation, is what gets computerized), and requires $32 million in capital to do the process today on a feature film (well over half of that is for hardware - $2 million alone for electricity!).

Fohr is only constrained by capital – the R&D has already been done (it took nearly 13 years to develop the tech) – so you'd think Fohr would be ripe for funding. And you'd be dead wrong. There are no Steve Jobs left to pay for it.

The startup world today seems to only want tech innovation on the cheap, and that includes Paul Graham and all the rest.

So if I wanted to do PixActing like I wanted to breathe, my five year plan would be a) get VC funded for anything, b) achieve a modestly successful exit, and then c) recruit one similarly situated person and just shake the money tree. Without making disparaging comments about identifiable businesses, it is not a controversial observation that proven entrepreneurs with existing networks have vastly superior access to capital compared to first-time entrepreneurs with no network, independent of idea quality, target market, or execution ability.

$40 million is not a number that is unachievable in 2012. The password is just a bit different than for $200k, $700k, or $5 million.

It's probably worth mentioning how Steve Jobs was introduced to (what became) Pixar:


> One of my champions at Xerox PARC was Alan Kay. So I knew Alan Kay, who was by this time a fellow at Apple. And Steve Jobs had expressed some interest in computer graphics, so Alan Kay said let me introduce you to the guys who do it best. So Alan Kay brought Steve up to spend an afternoon with us at Lucasfilm. That’s when we first got to know each other. I had actually had one earlier conversation with Steve at some design conference on the Stanford campus one summer, but that was just a first meeting sort of thing. The first serious meeting with business possibilities was that one at Lucasfilm with Alan Kay.

> Shortly after that, Steve and Apple broke up. And meanwhile, Lucasfilm was trying to sell us. Steve ended up buying us from Lucasfilm for $5 million.

So not only was Jobs alerted to Pixar by an existing contact, in buying it he was to a large extent reusing the business model that had already worked with the Macintosh: take PARC goodies and commercialise them, hiring some of the PARC guys themselves.

I'm basically doing that, actually. Fohr has ridiculous technology, and I'm parting it out (feels like chopping a car) as you describe.

I can do Fohr without the capital; it'll just take me longer as hardware gets cheaper and my own net worth goes up.

18 months ago, it would have cost over $100 million to operate Fohr, so time is on my side.

btw, Fohr looks really cool. However, the pull quote at angel.co/fohr is a little unfortunate:

  “Our dream of building a Pixar for films that are
  photographed is just weeks away from being realized.”

  (Posted 4 months ago)

A good laugh.

Thanks, I've updated AngelList with the latest news:

> Pre-production continues on the first computer-photographed film, Carpathia, and production begins on June 1, 2012 in Los Angeles.

At the time, we were very close to going through with a deal. (Obviously, that fell through.)

Pixar did it for 16 years (1979-1995) before they released a movie.

1986-1995 is a bit closer, I think; from 1979-1986 they were in effect an R&D division of Lucasfilm, and it wasn't their job to even think about making films. They were supposed to develop new tech and do special-effects in films, which they did do for quite a few prominent films.

You're right that VCs tend to be leery of the most ambitious ideas. That's another of the obstacles in your way if you pick one. But you shouldn't let your ambitions be limited by what VCs will fund.

(In any case you can trick them by only telling them about the initial few steps.)

Looks pretty interesting but I find myself having to guess what you're doing. It's important to be concise and reference things people already know ("Pixar") when you want to convey information quickly, but it's unclear what the value of the technology is in the 2 paragraphs I could find written about it. Everyone knows "Pixar" by name but I'm struggling to understand what you're doing. Is it animation software that maps photographs to the virtual world and renders it "almost-real"?

Please see my reply to @ricardobeat in the parent post.

Part of the issue is that Fohr has two sources of funding. One source is for the technical side of the company, which I expected tech funding to pay for. The stuff on AngelList is basically only about that.

The other source is film funding for the first computer-photographed film, Carpathia. I have a completely different talk for that, which is more about how the tech is actually used to make a live-action film.

Point is, AngelList is only a small (but expensive) part of the story.

It's likely that even Steve Jobs would not have invested in Pixar if he knew how much it was going to cost to make it successful. He originally put up $10 million ($5 million for Lucas and $5 million to finance it). It became a money pit that either pride or faith compelled him to keep funding.

Sand Hill Road is just one source of money, and money is the most fungible representation of wealth, so if they won't help you, go to somebody for whom $32 million is chump change, or who is used to paying far more for a movie. Hollywood may be more receptive to your ideas (making a blockbuster isn't cheap and there is always a risk, so they should be used to taking them).

I'm curious. How does that work? You transform the film action into 3D and can then manipulate the animation?

The video on AngelList looks like just a rendering demo, and I can't find any other references.

See: http://erichocean.com/fohr/index.html for more info on how the tech works.

Filmmaking is a technical and artistic discipline, and only the films themselves are sold to a mass audience, so my pitch really only makes sense (and is tailored to) those in the industry.

Thanks for the link, that clears it up nicely. Maybe the video could explore the production workflow a bit more, it looked like a simple tech demo to me.

As an amateur historian, I found the Columbus bit quite interesting, and probably more on point than Graham might have even known. Columbus, his backers, and his detractors all accepted that the world was round. What they disagreed about was how big it was, and how far it would be to Asia by sailing west. Pretty much everybody by that point knew that the world was literally round (and flat only in stories). This was especially true in monastic and church circles, which had known this for longer.

In other words they all agreed it was a great idea and an ambitious project that might succeed. They disagreed about what it would take to get there, and whether there might be obstacles in the way.

Seems like a very fitting metaphor for an ambitious startup.

Edit: For sources, you can start with "Heaven and Earth in the Middle Ages: The Physical World Before Columbus" by Rudolf Simek, which is a book uncommon in its level of insight. His description of Marco Polo's purported encounter with a unicorn had me laughing in both humor and amazement.

Simek's basic thesis was that Columbus's expedition was important historically because it blew away an important piece of medieval ethnographic thought--- once it became clear that the areas he had reached were not India, but were inhabited anyway, it doomed the Augustinian argument against the existence of inhabited continents beyond Africa, Asia, and Europe. This then paved the way for questioning the religious and classical basis for some aspects of the physical world, and led in many ways to the Renaissance (though I think the failure of the Crusades and the translation of Arabic writings into Latin had a strong hand there too). Columbus's voyage was still important for changing the way we think about our place in the world. Another good point about ambitious startups?

I'm currently working through SICP and watching the 1986 lectures of Abelson & Sussman, and one interesting bit in Lecture 2b on Compound Data is when a student raises his hand and asks Hal Abelson about the axiom of doing all of your design before any of your code.

Abelson's response: "People who really believe that you design everything before you implement it are people who haven't designed many things. The real power is that you can pretend that you've made the decision and later on figure out which one is right, which decision you ought to have made, and when you can do that you have the best of both worlds."

Probably the same holds true for startups.

Thanks for the reference.

Of course, every sailor knew the Earth was round. Just take a morning off and watch the boats going out of a fishing harbour should your location allow this...

I found that pg quote useful as well, in a 'look for local advantage' way. Plan the next step based on how the weather and sea look today. Tomorrow it might be different.

The Simek book is quite interesting. Of course it wasn't just the sailors. By the 13th century, pretty much everyone knew the world was round, as it was a common description in popular literature. Their ideas of antipodes were rather funny and something the church struggled against for some time, until the discovery of the New World put an end to the question.... And until Vasco da Gama proved them wrong, they might not have believed you could sail across the equator.... but they knew it was round.

But the Simek work is interesting beyond that. It's largely on the basis of his work and the understanding that Europeans often knew more about Asia than Europe that it's fairly clear what a unicorn was: it's what you get when you describe an Asian rhino using a horse as a reference point. (Pliny's description of a monoceros is also frighteningly like a rhino, although some things are exaggerated.)

In re #4, I'd suggest that your biggest hurdle isn't movie studios (as we often like to suggest here). It's Comcast. It's Time Warner Cable. It's AT&T. These companies exercise an oligopoly on most people's internet connectivity, TV UI and UX, DVR experience, etc. They also set the terms, with the networks and studios, for what you actually get to watch on demand. They pushed their crappy DVR onto the masses, effectively killing off the far more innovative and superior TiVo, because they offered their boxes at point-of-cable-hookup to consumers. They control so many strategic channels in the TV business, on both the B2B and B2C ends, that they're basically running the industry. (They were also the prime movers in the PIPA/SOPA legislation, and they'll be back with another attempt as surely as the sun rises in the East.)

Netflix, Apple, and Amazon look like compelling alternatives to the cable oligopoly. Unfortunately, studios are deathly afraid of handing over monopolistic control of their distribution to a single player like Netflix, so they're fighting with Netflix and trying to push their own alternative onto consumers (Ultraviolet). Meanwhile, they remain relatively oblivious to the real snakes in the grass (Comcast, et al.) -- an obliviousness that's going to get even worse, now that Comcast owns a major player in the production system.

To beat Hollywood isn't to beat the studios. To beat Hollywood is to beat cable. This isn't a war over content; this is a war over distribution. Technology vs. technology. Content producers will go wherever there's distribution to be found, and money to be made.

Here's another tip: I'm African, and I don't understand what you are talking about here. Maybe the next entertainment innovation should force global scale...

It's an interesting analysis, but the problem for any upstart distribution technology is still to get content; it's a chicken-and-egg problem. I've worked in the past for a content distribution technology company, and the main issue was getting content. There were also issues specific to the technology chosen that made it a sort of dead end, but beyond that I didn't see how it could get the content on relevant terms.

The technology to distribute the content is out there already; BitTorrent showed the way, and it is working at a fairly large scale. Any problem down the road technology-wise can be solved by some (non-trivial) amount of money and creativity. It's not a technological problem.

The main trick is to get good content on a trivial distribution method. I've been thinking about this but I'm a technology guy and couldn't figure how to get the content.

I think personal health monitoring is probably the most important thing on that list. The thing that excited me the most when smartphones started becoming popular was the prospect that they could coordinate data collection from a number of sensors always collecting data - basic ones like Nike+, but perhaps also sensors measuring sleep, taking periodic bloodwork, etc. At the same time, perhaps you could automatically monitor personal behavior such as foods eaten.

Personal diagnostics would be an important use of that, but I think more importantly, with a very large public dataset of basic biometric data correlated with behavior data and medical results across a significant portion of the population, we could stop treating human health studies as bespoke one-offs put on at great expense and start treating them as data mining problems. You could begin to spot correlations between behaviors and results that are unintuitive given conventional wisdom. I think that the resulting burst of discoveries would be on par with any of history's great scientific revolutions.

There is a fundamental difference between a scientific study and data-mining.

Science is based on probability theory. Until we discover the "grand theory of everything", our other theories will be only approximate, and our experimental results not 100% predictable. Therefore, scientists consider a prediction confirmed if the chance of seeing the result at random is less than some threshold, usually 10%, 5% or 1%.

However, for this to work, each study must be based on new data. If you use the same data to check, say, 10 predictions, each of which has a 10% chance of appearing to hold even if incorrect, you will on average confirm 1 of your predictions even if all of them are incorrect!

It's also fairly easy to, even unwittingly, begin tuning some of your procedures to the data set (i.e. overfitting), even if you're employing the usual precautions, like cross-validation. As an extreme example, if we had one gigantic fixed data set and gave researchers 10 years to work on it, by the end of that 10 years there would be entire techniques at least partially specialized to that specific dataset, with poor generalization outside of it.

No, if 10 predictions each have a 10% chance of a false positive, you will always have on average 1 false positive. Whether you test them on the same data or not doesn't matter (unless 10 predictions test the same thing, of course.)

What you need is a control sample that you know should be negative, so you can actually measure the false positive rate. (But with a sufficiently large base sample, you can look for correlations in small subsamples and use the whole sample as a control.)
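The arithmetic in this exchange is easy to check with a quick simulation (a sketch, assuming independent tests at the 10% level used in the example above; the numbers are just the thread's example, not data from any real study):

```python
import random

random.seed(0)

ALPHA = 0.10        # per-prediction false positive rate from the example
N_PREDICTIONS = 10
TRIALS = 10_000     # number of simulated "studies"

def avg_false_positives():
    """Average number of confirmed predictions per study when all 10
    predictions are actually wrong, so every confirmation is a false
    positive. Expected value: N_PREDICTIONS * ALPHA = 1."""
    total = 0
    for _ in range(TRIALS):
        total += sum(random.random() < ALPHA for _ in range(N_PREDICTIONS))
    return total / TRIALS

avg = avg_false_positives()  # comes out very close to 1.0
```

As the parent points out, this expected count is the same whether the 10 tests run on one dataset or ten, as long as the tests are independent.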

> No, if 10 predictions each have a 10% chance of a false positive, you will always have on average 1 false positive. Whether you test them on the same data or not doesn't matter

That's true.

I should have said it differently. In fact, I'm not even sure that my understanding is correct.

The problem with not using new data to test each new prediction is that if a scientist wants to show A on data X, but data X doesn't confirm A, the scientist modifies A slightly and now tests A' on X, which is again rejected, then modifies it again, testing A'' on X, and so on... until the data X actually confirms hypothesis A'''''''!

That's the real problem - using data without a predefined plan for how you will use this data. In the above example, the data that you collected affected your decision-making process, so your results are not independent of the data (and thus not replicable!).
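The A → A' → A'' process above can also be simulated. Treating each tweak as an independent shot at the same data (a simplification; in reality successive tweaks on one dataset are correlated), the chance of eventually "confirming" some false hypothesis grows quickly with the number of tweaks:

```python
import random

random.seed(1)

ALPHA = 0.10  # per-test false positive rate, as in the thread's example

def confirmation_rate(max_tweaks, trials=20_000):
    """Fraction of simulated studies in which a researcher who keeps
    tweaking a false hypothesis against the same dataset gets a
    (spurious) confirmation within max_tweaks attempts."""
    confirmed = sum(
        any(random.random() < ALPHA for _ in range(max_tweaks))
        for _ in range(trials)
    )
    return confirmed / trials

preregistered = confirmation_rate(1)   # one planned test: about ALPHA
ten_tweaks = confirmation_rate(10)     # about 1 - (1 - ALPHA)**10, ~0.65
```

So a preregistered single test stays near the nominal 10% error rate, while ten tweaks against the same data "confirm" something roughly two times out of three.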

Yes, this is correct. It can still be useful to look for things that warrant further study, but it won't be proof of something in and of itself.

I was talking about discovering correlations which aid in potentially discovering relationships between behaviors and results. If your sample includes everyone in your study, that's as good as your confidence interval can get... you could use a different subset of the data for each experiment if you'd like, and that would be functionally equivalent to running individual experiments with a subset of the population each time.

Scott Adams (Dilbert guy) proposed this a few years ago as a way to win the Nobel Prize in Physiology or Medicine.

Being able to "tivo rewind real life" would be pretty amazing -- watch a bunch of things passively, record data, notice a spike in mortality, then find the common factors and stop the new plague (or the kids who found a Cobalt-60 source, or the pump infected with Cholera).

I think it would bring about enough breakthroughs for a lot more than one Nobel :-) But yeah, I'm definitely not the first one to fantasize about being able to correlate behaviors or even just basic bio signs across a big chunk of the population with the medical results.

I even think it'd be feasible to pull it together as a company, since there are lots of immediate potential benefits to using the things that would collect this data. A much better replacement for Life Alert, which works even if you're unconscious and can't press the button. Dieting aids, sleep aids, exercise aids, passive diabetes monitoring. It's a lot to bite off, especially as a startup, but I think a team that knew how to execute on this kind of product could roll out products serially and pull together that dataset. That dataset would enhance every other product in the way that each of Google's views on the internet (DNS traffic, browser feedback, analytics) helps its core product. A hell of a defensible advantage. Not sure how one could convince that company to give up its crown jewels for the sake of medical research, though :-)

Interestingly, I have a friend (a hematologist) who says that they actually try not to overscan for fear of finding something. There are plenty of cases where a scan reveals something that may never manifest as a noticeable issue in the lifetime of the patient, but which they then need to treat. Unfortunately, the treatments themselves are often invasive (sometimes far more so than the issue itself might have been).

I guess that problem will diminish as advancements in medical diagnosis and treatment progress.

There's a definite opportunity to build a competitor to Apple and reach the hackers first: building a better PC for hackers to create new software built on Open Source and Web technologies.

The cracks are starting to show with using Mac OS X as a primary machine for hacking. It's got unix under the hood, but every successive release has become more consumer focused and less hacker friendly. The proprietary nature of developing native apps also turns off a lot of the great OSS hackers.

If you could get an all star team together with someone like Rahul Sood to design the hardware and someone like Miguel De Icaza to design the OS and developer APIs, you'd be well on your way to tackling this problem and building the next Apple. And this time, it could be a lot more open source friendly.

Do you think there is really a large number of hackers who are not satisfied by the Mac on the one hand, or Linux on a Thinkpad on the other, and who would buy into this new system?

OK, so you really just want to target hackers first and then move into the consumer space once you have some traction. But consumers are starting to move away from laptops and toward things like smartphones and tablets. Apple is moving in this direction and has a head start. If you start by making laptops, Apple's head start is just going to grow.

Better to figure out what is going to come after smartphones and tablets, and get there before Apple does. I don't see any particular reason to retrace Apple's steps and start by building a laptop.

Some companies I suspect might be following a long-term strategy similar to this already:

* Google, with the rumored heads-up display project based on Android

* Jawbone

* Razer (in this case Apple is not the target)

> * Jawbone

+1! I think Jawbone has a great opportunity for post-phone consumer devices. They have a strong brand and are branching out to non-headset form factors.

I am surprised at how readily mainstream consumers have adopted Jawbone Bluetooth headsets. People wear them all the time, even though 99% of the time they are not taking a phone call. They look like dorky Borgs, but I think this is a sign that mainstream society will be open to transhuman/cyborg enhancements.

> Do you think there is really a large number of hackers who are not satisfied by the Mac on the one hand, or Linux on a Thinkpad on the other, and who would buy into this new system?

I don't know if it's a large number yet, but it's growing and you can see the cracks spreading. Most of the hardware outside of Apple's really sucks, and Apple's software ecosystem and the way they run things turn a lot of people off. If there were something viable to switch to, I think it could get some traction quickly.

> Better to figure out what is going to come after smartphones and tablets, and get there before Apple does. I don't see any particular reason to retrace Apple's steps and start by building a laptop.

This is intriguing. A Post PC device that hackers can use to create software. Definitely something that hasn't been fully explored.

There are a few challenges, however. Hackers need significant hardware to get anything done. Also, most of our tools that have stood the test of time (Emacs, Vi, C, etc) require traditional keyboards as input.

Text input is the biggest problem of post-PC. While it may eventually get solved, I'm exploring ways to make non-text-based programming productive: http://noflojs.org

What's wrong with running Linux on Apple hardware?

Better to figure out what is going to come after smartphones and tablets, and get there before Apple does.

I would figure out what the developing world needs the most and how they will be using computers and communications.

It's here, but not how you think: http://www.raspberrypi.org/

Do you know any hacker who knows about Raspberry Pi and isn't planning on buying one?

Radically undercutting existing platforms in price, but with comparable functionality enables totally new uses for general purpose computing devices. That's how to take on the Mac/PC business - not by hitting it head on.

rπ is a great project, but it's hardly out of left field.

There have been plug computers for years, at seriously affordable prices. You can buy a netbook for $100 these days and run Windows/Linux on it happily.

rπ is interesting because of its positioning as a learning platform equivalent to the BBC Micro which nurtured David Braben and many of the other backers. Not because it holds some magical quality over Gumstix, BeagleBoard, PandaBoard, CottonCandy, GuruPlug, DreamPlug, Arduino etc.

I think that purpose designed hardware at the $25 price point is a radically different proposition than a second hand netbook for 4 times the price (because you sure can't get a new Intel netbook for that price).

It does hold a magical property over all the platforms you've mentioned: price. At $25 it's close to disposable, whereas I'm going to look after a $90 BeagleBoard.

I once walked into the Arduino IRC and asked if it could run maxima. The response was that I'd be lucky to get it to run Linux. The arduino costs more than $25. Everything you listed there costs more than the utility value it has for most people.

For $25 a pop you can do innovative prototypes on the kids-allowance cheap. The size is an extra bonus. There is no end to the number of ~$50 projects I've thought up that consist of a pi and peripherals. To say that those other projects you mention are somehow comparable is ridiculous. They all cost multiples more money than the pi.

The pi is really going to speak to my demographic: Teens in their basement who have no money to spend on flashy prototyping equipment. But have tons of ideas they want to try out.

EDIT: And continuing: for doing a "production run", having production costs that are multiples lower is a serious advantage for cash-strapped endeavors. Not sure if the Pi foundation will let you order enough to do that, though.

I don't see the difference between what you just described and what Canonical is doing with Ubuntu.

Canonical != Hardware manufacturer.

The problem with the hardware business is that good hardware takes money to make. It's not the sort of thing that you can fund on the kind of margins that most startup rounds deal with.

(Disclosure: I have never attempted a startup or tried to obtain funding for one.)

I will not be upgrading to Lion or later -- I prefer Finder to show ~/Library, I don't want to see all the thousands of tiny UI images on my computer, and I don't want the mouse to scroll backwards, or a stray finger on the mouse to go back a page in WebKit when I wanted to scroll left. Don't get me started on the latest Xcodes, which I have to use at work.

So, speaking as the target market, Ubuntu has created a horrible user experience with Unity, and has made it not worth the effort it takes to remove it. I will never run Ubuntu (desktop) again.

My Linux laptop now runs Mint, which is fine but cannot connect to a network without the Gnome graphical configuration tool, for whatever reason. Probably layer 8 or an ID10T error in this case.

Agreed. My next laptop is going to be a Linux one, and I'm going to pay for a high-quality hardware system... I can only hope the drivers are high quality.

I'd pay $2250 for a high-quality Linux laptop (roughly the price of my OSX laptop). :-)

Linux runs great on an Apple MacBook.

"GMail is slow because Google can't afford to spend a lot on it. But people will pay for this. I'd have no problem paying $50 a month."

Ok. Number of Paul Grahams in the world times $600/year = ?

Most people on the web are ridiculously stingy. "I would pay for this" is a terrible way to think for an entrepreneur. Believing that what we think represents the masses is a rookie mistake.

Number of business email users? Tens of millions certainly.

"Would I pay for this?" is a great question for founders to ask, because it combines two of the most powerful techniques for generating startup ideas: solving problems you yourself have, and using payment as a test of how much people want something. One of my techniques for helping founders to come up with ideas is to ask them what they need so much that they'd pay for it.

But there are decades of precedent against how much people will pay for email.

Look at a large enterprise org - you think that Lockheed with 150K employees would spend $50/yr/user on email?

Hell no. They don't spend $7.5MM per year on email accounts for their employees.

That's the problem with enterprise scaling vs. cloud/startup scaling. They are inverses:

The enterprise wants the cost per unit to go down when scaling. The startup/cloud wants the profitability to increase at the same rate when scaling.

We all want great services, but NOBODY wants to pay for it.

I myself seek to offload cost at every opportunity; work pays for machine, phone, travel, software etc...

Same model.

Yeah - I'll seek to solve problems I have, but based not on how much I would pay, but on how much of the cost I could offload.

(Clearly there is a lot of grey here, and there are areas where this doesn't make sense -- and others where it does -- and these are not mutually exclusive, i.e. areas where I am building for both the consumer and the provider (healthcare).)

"Look at a large enterprise org - you think that Lockheed with 150K employees would spend $50/yr/user on email? Hell no. they dont spend 7.5MM per year on their email accounts for the employees."

Yes, I certainly do believe that any American corporation with 150K employees spends significantly more than $7.5mm/year on their messaging system.

These systems actually get _more_ expensive as they grow larger - Disaster Recovery, Business Continuity, Sarbanes Oxley, Customer Service, SLAs, Data Loss Protection, Intrusion Detection - All these email services that the small enterprise doesn't worry about (that much) - add up significantly in larger enterprises.

Lockheed is a horrible example too because they have extensive classified operations (their support costs for email within classified projects probably exceed 7.5mm alone), and because Lockheed IS&GS is a major contractor for outsourced IT services.

I think the Gartner figure was something on the order of $500-1000/yr per employee for messaging in large high tech businesses. A lot of that is IT staff, and all the other systems for security and compliance. Email is one of the big apps within enterprise.

Would it be possible for you to elaborate on what your email needs are and how they're not met?

Would your problems be solved by hiring a personal assistant (a real person)? Then the solution is AI and it's hard.

Or do you believe it's impossible for anyone but you to sort through your mail?

If that's the case, what we need is to make sorting email "fun" (more enjoyable than TV).

We may be looking for the Angry Birds of email.

The problem with emails is not the spam or the sorting, it's that they're not actionable. Tasks in a to-do list are almost directly actionable, and that's what most emails aim for.

One of the simple things we could do with email for power users is to show the sender a list of IMAP folders the receiver has created in his inbox, and allow the sender (maybe an assistant) to target emails into those folders, turning it into a collaborative effort in organizing stuff.
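The client side of that idea is straightforward; a minimal Python sketch, assuming the folder list comes from a standard IMAP LIST response (the mechanism for exposing the list to senders is the hypothetical part):

```python
import re

# Raw response lines, as imaplib.IMAP4.list() returns them from a
# typical server: (flags) "delimiter" "folder name".
LIST_LINE = re.compile(rb'\((?P<flags>[^)]*)\) "(?P<delim>.)" (?P<name>.+)')

def parse_folders(list_lines):
    """Extract folder names from IMAP LIST response lines; these are
    the names a receiver could choose to publish to trusted senders."""
    folders = []
    for line in list_lines:
        m = LIST_LINE.match(line)
        if m:
            folders.append(m.group('name').strip(b'"').decode())
    return folders

raw = [
    b'(\\HasNoChildren) "/" "INBOX"',
    b'(\\HasNoChildren) "/" "Receipts"',
    b'(\\HasChildren) "/" "Projects/Acme"',
]
folders = parse_folders(raw)  # ['INBOX', 'Receipts', 'Projects/Acme']
```

The hard part isn't the parsing but the trust model: deciding which senders get to see which folders, and what a "deliver to folder X" request from a sender should be allowed to do.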

Seeing as how you're on HN you may have heard of 37Signals, Fog Creek, and Atlassian. All sell products which are domain-specific improvements for email. They collectively have, conservatively, X00,000 paying users. These are small companies in this problem space - IBM has at least 48 options for I-can't-believe-it's-not-email at every price point between $200k and $200 million. Ditto Microsoft and a dozen other big software companies.

"The web" includes a bunch of stingy twenty-somethings on the consumer Internet, but it also includes the producer Internet, and the producer Internet is one giant system for turning piles of money into bigger piles of money. Something which makes that happen is cheap at any price.

Rich people have the same number of seconds per day as poor people, except each second is worth more. Right now everyone is driving a black Model-T; there isn't a premium edition of email available, though one is certainly desired (personal assistants and secretaries currently fill this gap).

If the cost of a premium email/task-list system is less than the dollars saved in time, and less than an assistant, people will buy it. Aim for the higher-end market, solve their problems first on their dollar, and later expand downward to everyone else.

there isn't a premium edition of email available, though one is certainly desired (personal assistants and secretaries currently fill this gap)

I think you misunderstand. The premium edition of email is much like the premium edition of anything else- it's the email you don't have to use, the plane you don't have to fly, the car you don't have to drive, the food you don't have to cook. So I don't know that personal assistants and secretaries are "filling a gap" in the stop-gap sense, but rather they are the solution.

If you want to capitalize on it, do like a chauffeur company would do, but with email.

Somehow people manage to pay Dropbox hundreds of millions of dollars a year, in spite of their stinginess. The email version of Dropbox would almost certainly be a bigger market.

"I would pay for this" is a terrible way to think for an entrepreneur.

Peter Drucker said: "The purpose of business is to create and keep a customer."

So an entrepreneur tackling this problem would already have at least one prospective customer, thus they'd immediately be in business. And who knows, maybe something "big" grows out of it (e.g. maybe you discover there's a big market for this in the enterprise space).

Most households have no problem paying $50/month for a service that their whole family uses frequently: cable TV. So here is my idea: make and sell this great email-like/todo-list/file-sharing/whatever service that an entire household can use, and charge $50/month.

That's only $10/month/user for a family of five. Come up with reasonable limits to ensure enough revenue (eg. max 10 users per plan).

I'm not sure you can compare cable to email. I may be in the minority here, but I agree that email is not really a pain point for me. I pay $50/month for cable because the choice there is between ready access to television and nothing.

The idea that households would pay $50/month for email is a bit of a stretch given that the alternative is free email that works pretty damn well.

I am not talking about email as you know it.

I am talking about the service that pg thinks needs to replace email. I do agree with him that people currently use email in ways it was not intended for (todo lists, sending files, etc.), and that surely there must be a better way of doing these tasks, one that would be worth $50/month while getting rid of email's limitations such as attachment size limits.

In fact I know I am right, because the reason we, in 2012, still have no easy way to share large files between friends and family members, is precisely because the storage and bandwidth costs are higher than the cost of email.

You honestly believe there are a lot of families that would pay you $50 a month for a service that enables them to easily create todo lists and share attachments? You're a pretty optimistic person.

How many services do you yourself pay for today on a monthly basis? I can count mine on one hand. To get $50 from me every month you better have a life-changing service.

I think email needs to be changed from top to bottom, but offering speed alone would be a hard sell to launch with. You can get the Microsoft offering (Exchange Online or packaged with Office 365) starting at $5 a user. Microsoft can afford to spend money on it. I've used it and it is a lot faster than GMail (although it wasn't enough for me to switch).

One thing I noticed is that email is a problem for "important" people who receive tons of messages per day. They must read/answer that one vital email right away. Kevin Rose talked about this a few years ago, and I've heard many high-profile investors complaining about this.

For the vast majority of the people on the planet, GMail is just good enough. Yahoo Mail and Hotmail are still doing well.

Perhaps email is perceived as a problem just because it gets tons of "face time" with us every day. I'd leave it alone and focus on one of the countless unsolved problems in the world.

GMail is just good enough

It is, but only in terms of today's usage of email. I think the next step will be making messages dynamic, which would basically allow you to receive/deliver an interface to an app. Of course with all that power comes the issue of how to control it, but that would be an interesting problem to solve. Imagine that instead of getting a notification email for an update on a Basecamp todo list, you saw the actual todo list and could manage it right from your inbox.

The only reason you can't do that already is because email clients prohibit it for security reasons, and Hotmail is already experimenting with emails that allow that within a sandbox.

Emails can already contain HTML and JS; there's nothing innovative about it. You only need to convince developers to add JavaScript sandboxes to their email clients.

for some reason, shades of two-way-rss hype just came flooding back from 2006.

I thought of 2009, when Google Wave was announced.

That's exactly what my future start-up will do! Is anyone interested in making it happen?

Perhaps, but as Bentley and Gulfstream discovered, there is a tidy profit to be made servicing the niche needs of high-profile individuals.

Undoubtedly, but tidy profit and frighteningly ambitious are two different things.

People are like sheep. If you get the "important" people to use it, you will most certainly have a large user base of "regular" people.

Then what's a good way to think? If you can't start from "I would pay for this", then where do you start?

If you wouldn't pay for it, then why would you expect others to pay for it?

Would you pay for computer support, or extended warranties? I wouldn't, but millions of people do.

It really depends on the context. When thinking about something for the masses, you are one very particular data point. You want evidence that lots of people would pay for something.

If you are building something for people like you and:

- there are lots of people like you, and you can make it very cheap, or

- you ARE paying tons of money for it already, and it could be better and cheaper (e.g. travel)

then you may be on to something.

The last paragraph of #6 is great.

I hate to stray into politics but my scary ideas revolve around public policy and the various actions people undertake in the public sphere that affect it. More specifically: Is it possible, by providing better tools for publishing and accessing information, to substantially improve public policy debates? Can we reduce the very large rewards for dishonesty and the use of disinformation?

This is the crux of the problem with our current political system, I think. It's not campaign finance, it's not religion, it's not disagreements about economics, foreign policy, security vs liberty (a lovely false dichotomy) or what have you. It is simply the fact that lies win and truth loses. Or, if that statement is not necessarily true, it is true in the current practice.

So, if you buy my premise, how can technology help? Isn't it a problem of human nature? You can't force people to be honest. You also can't force people to learn how to recognize dishonesty in spheres where they have not much competence. You can't impose good sense or decency.

But human nature is varied, and so maybe the seeming ascendancy of its more unfortunate aspects is situational. Maybe by improving the context and presentation of information they can be mitigated. Maybe technology can be used to recognize and reward honesty and to point out and discourage dishonesty. It hurts to think about, doesn't it? It does for me, because it is so hard, and that's what I took from pg's essay. Granted, I may not be talking about problems to solve which would make you the next Google.

As an aside, I think that the utility of greater transparency of public actions (governmental or corporate) is already well understood by many, and much work is already being done in this direction, so I am leaving it out. But that doesn't mean there isn't room for new solutions there as well.

Interesting idea. I think the most intractable part is that many people believe what they want to believe. They gravitate to a world view for various reasons, backfit it to the data, rationalize the cherrypicking of supporting data and discarding of refuting data, integrate it into their id or ego, fight tooth and nail to protect it, and happily accept the political dis/misinformation you're referring to.

I'd imagine you'd have to identify why people do that, why others don't, and whether it's formalizable and transferable. Pretty sure psychology has done some work in that area, but brain-fried atm and drawing a blank...

Yup, psych has been looking at this intently for a while - I think the cognitive folks have been doing work on this.

There was an article on this a while back on HN as well - why Walmart knew someone's daughter was pregnant before her dad did (Forbes, after taking it from Wired, iirc).

It's based on the fact that people's habits (in regards to shopping) are ingrained, and that there are only a few times in life when those habits are open to change.

That's when there is a major change in their life - like a new job, baby, marriage and so on.

Dan Ariely (from Arming the Donkeys) also had an old podcast episode on this.

It was Target and the New York Times Magazine, for what it's worth.



Yes, I agree. But these people do continue to be influenced by information in the public sphere, although they do tend to select sources that they agree with. They also were influenced when settling into their original positions.

I really don't think this general behavior will change, but it's not all or nothing; it's a matter of degree. I think everyone does this at least a little bit. Unfortunately, a lot of people do it a lot. If you can shave away at it a little bit at a time, it could have a large impact in the end.

This is the crux of the problem with our current political system, I think. It's not campaign finance, it's not religion, it's not disagreements about economics, foreign policy, security vs liberty (a lovely false dichotomy) or what have you. It is simply the fact that lies win and truth loses. Or, if that statement is not necessarily true, it is true in the current practice.

I think Eric Drexler had hopes like this for hypertext before the World Wide Web started to hit it big. However, "In every age, in every place, the deeds of men remain the same."

You should check out Robin Hanson's idea of 'futarchy' here: http://hanson.gmu.edu/futarchy.pdf

But don't miss Mencius Moldbug's rebuttal of Hanson's Futarchy:


The problem here is that politics is largely not about facts, but the difference in interpretation of those facts.

The thing about replacing e-mail is that it isn't just a todo list. For many people it's just a receipt box - the place where I keep all my notifications that I bought stuff from Amazon. For others, it's still the primary means of business communication.

My work e-mail is largely about communications, with a todo element to it and unfortunately some file storage too. My "home" e-mail is completely different. It's where I get my monthly statements for banks and investments and where my notifications go. When replacing e-mail you would need to service all these components of what e-mail is.

The thing that originally made e-mail so important was its identity factor. That seems to have withered away as other services have replaced some components of what e-mail was for.

I would argue that e-mail needs to not be replaced, just reclaimed. My e-mail client (web or otherwise) should know that an e-mail in this case is actually just a twitter DM notification and be smart about how it presents that to me. It should know that something from Bank of America is probably something I want to keep, but something else from Bank of America is just marketing junk.

I haven't seen anything that is smart enough to do that on its own. I don't want to have to deal with creating filters - it should just know. I would totally switch from gmail if this were out there.

> for many people it's just a receipt box

There's a good startup idea right there! Sign up on receiptbox.com and give it my email username/password (or maybe some sort of oauth token). It periodically scans my email and looks for receipt emails from well-known e-commerce sites. It knows how to parse them and pull out the relevant details (like TripIt does for travel stuff), and it builds a nice searchable catalog of all my receipts.

I would sign up for this tomorrow if someone on here goes and builds it. :)
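The parsing core is not hard to sketch; the hard part, as with TripIt, is maintaining a parser per merchant. A hypothetical minimal version in Python (the field patterns and the sample message are invented for illustration; real receipts are usually HTML/MIME multipart and messier):

```python
import re
from email import message_from_string

# Made-up patterns for a plain-text receipt; a real service would
# need one set of patterns per known e-commerce sender.
TOTAL = re.compile(r'(?:Order Total|Total):?\s*\$(\d+\.\d{2})', re.I)
ORDER = re.compile(r'Order\s*#?\s*([\w-]+)')

def parse_receipt(raw_email):
    """Pull merchant, order number and total out of a receipt email."""
    msg = message_from_string(raw_email)
    body = msg.get_payload()
    total = TOTAL.search(body)
    order = ORDER.search(body)
    return {
        'merchant': msg.get('From', ''),
        'order': order.group(1) if order else None,
        'total': float(total.group(1)) if total else None,
    }

sample = """From: orders@example-store.com
Subject: Your receipt

Thanks for your purchase!
Order #A12-345
Order Total: $23.50
"""
receipt = parse_receipt(sample)
# {'merchant': 'orders@example-store.com', 'order': 'A12-345', 'total': 23.5}
```

Everything after the extraction (search, categorization, spending reports) is ordinary database work; the per-merchant parsers are the ongoing maintenance cost.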

Give a third party my password so it can scan my email for financial data? No thanks.

I realise there are people who would love this convenience, and you'd make a killing on targeted ads, but this is a privacy nightmare. Good luck getting people to trust you. Furthermore, you really want the results of the filtering to be applied in the user's own mail client rather than having a separate UI.

Might be feasible as a client-side app. How about a Thunderbird/Outlook addon with a subscription service for known filters?

(What is the Google Chrome of desktop mail clients, anyway? Hardly any seem to use WebKit.)

Re: the first part, I'm reminded of this web application called Mint.com...

> I would sign up for this tomorrow if someone on here goes and builds it. :)

But would you pay for it? If so, there is a way to generate almost infinite revenue with this service, which is to charge a small fee for each receipt stored. Naturally, when receiptbox.com charges this fee, it issues a receipt, which it emails to you...

You don't need to charge me. You're getting information about everything I'm buying online (which increasingly means...everything I'm buying period).

Predict what I want to buy next and take a cut of the purchase.

A replacement for email should be a lot smarter, and I think what pg hints at is pretty much the same as you're saying here, but broader. If I receive an invitation to something, it should end up in my calendar, and whatever gadget I have on me should notify me and ask me if I want to participate.

If I receive a receipt, it should be stored and analysed. For example, if the item had a 30 day guarantee, it should ask me before that whether I am satisfied with it.

If I receive a shipment notice, it should automatically tell me on the day it arrives and alert me, when I'm in proximity of the post office, that I need to pick it up.

Actually I would want almost all of my emails to be read solely by a computer, so that I never even see them. I don't need to see that I've bought something — I know that! I do need to be told when it's in my post box, though, or if it's a license key I need my computer to pop up a question asking whether I should apply that license.

So a good startup idea here would be something that took your email, filtered it and just removed all of the receipts/etc messages from your view, while keeping it neatly organized somewhere else for the future.

That's exactly what I'm planning to build! I'm not sure if I should go hybrid or all-in.

By hybrid I mean that people could receive regular human-readable emails, but senders could include a small url or tag that links to the semantic information (it could be an event invitation, receipt, valid email confirmation, password changing, task proposal, marketing offer, flight information, etc.).

The "smart" email client could then automatically interpret semantic emails, and act accordingly. It would also hide those emails, and only show you the relevant notification.
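A sketch of what that semantic layer might look like on the client side (the payload fields and handler strings here are invented for illustration, not an existing standard, though Gmail's schema.org email markup is an experiment in the same spirit):

```python
import json

# Hypothetical machine-readable payload a sender could attach
# alongside the ordinary human-readable email body.
payload = {
    "type": "receipt",
    "merchant": "Example Store",
    "order_id": "A12-345",
    "total": {"amount": 23.50, "currency": "USD"},
    "guarantee_days": 30,  # client can ask "satisfied?" before day 30
}

def classify(semantic_json):
    """A smart client routes the email by its declared type, hides it
    from the inbox, and surfaces only the relevant notification."""
    data = json.loads(semantic_json)
    handlers = {
        "receipt": "file under Receipts, set day-30 reminder",
        "event_invitation": "add to calendar, ask to RSVP",
        "shipment": "alert on delivery day",
    }
    return handlers.get(data["type"], "show normally")

action = classify(json.dumps(payload))
# -> "file under Receipts, set day-30 reminder"
```

The hybrid approach degrades gracefully: a dumb client just shows the human-readable text, while a smart client acts on the payload and falls back to "show normally" for types it doesn't recognize.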

There were sites like this - Swipely and Blippy. Both failed; I don't think people in general are OK with giving out their purchase information. I'm really against giving anyone permission to access my email.

Try otherinbox.com
