It's hard to understand the danger machine intelligence poses to humanity for the same reason it's hard to understand the danger tiny startups can pose to big industries. Current implementations look like toys, humans have bad intuitions about exponential growth, and most people will disagree on how _probable_ the threat is (something that might not be known except retroactively) while systematically underestimating how _large_ the threat would be if it came to pass (because it's so far off the scale of anything that's come before it).
Maybe Sam (and Elon Musk, and lots of other Silicon Valley types) are talking about this problem because they read too many sci-fi novels or are too privileged to worry about Real Problem X, which affects Y group in the here and now.
But what if, instead, they're talking about this problem because they've spent a lot of time watching this sort of black swan pattern play out, and they know the way to assess the impact of something truly _new_ is to envision what it could become instead of looking at what it is now?
Some of my personal skepticism boils down to: well, what are we going to do about it? There are only really two options:
(1) The methods to create strong AI will become known to us before we build something dangerous. At that point, since we will better understand the nature of the threat, it will actually be feasible to put safety restrictions in place.
(2) Someone will stumble upon strong AI in secret or by accident. I don't see how this is preventable, short of a moratorium on AI-related research, which just isn't going to happen outside of scenario 1.
And so the answer becomes: let's wait and see.
That said, I don't believe there's anything unbearably harmful about the current level of speculation and "fear-mongering".
Perhaps, or maybe the right data just isn't tracked / accessible yet.
Basketball is an instructive example: as recently as 10-15 years ago, it was thought that the game couldn't be quantified or predicted nearly as well as baseball, that it had a lot of the same fluid properties as soccer and (US) football. Fast forward, and a lot of work has pushed basketball much closer to the baseball side of the spectrum. Who's to say whether taking detailed data on every movement of every player in a soccer match might yield similar breakthroughs?
From what I have read (previous 538 blog post on Messi), they are already tracking a good deal of data about the games. I think one issue with soccer is that there is a lack of discrete, measurable outcomes in the game. I read a while back that one of the breakthroughs in basketball analysis came when they started tracking the total point differential during each player's time on court. Because so many points are scored in a game, and because so many games are played in a season, this stat was a fairly reliable and accurate picture of how a player would impact the team's performance (which allowed teams to measure the impact of players who may not rank high in the more traditional stats).
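The on/off point-differential stat described above is simple to compute in principle; here's a minimal sketch with invented lineup "stints" (the data format and numbers are my assumptions for illustration, not from any real box score):

```python
# Minimal sketch of the on/off point-differential idea: credit each player
# on the court with the team's scoring margin during that stint.
from collections import defaultdict

# Each stint: (players on court, team points scored, opponent points scored)
stints = [
    ({"A", "B", "C", "D", "E"}, 12, 8),
    ({"A", "B", "C", "D", "F"}, 6, 10),
    ({"B", "C", "D", "E", "F"}, 9, 9),
]

plus_minus = defaultdict(int)
for players, scored, allowed in stints:
    for p in players:
        plus_minus[p] += scored - allowed

print(dict(plus_minus))
```

With enough possessions per game and games per season, these per-player margins average out into a fairly stable signal, which is exactly why the approach works better in basketball than in a two-goals-a-game sport.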
In soccer, you don't have a lot of data points to model against. The number of goals scored is typically low. Because of this, there is probably a higher level of uncertainty and variance in the outcomes of soccer games (and the prediction models as well).
You do have lots of data points: each pass is a data point, each shot is a data point. Opta logs more than 2,000 events per game, each with an outcome and pitch coordinates. Yes, soccer is more complex than even basketball, but there's a lot more money involved and more people watching. This stuff is being worked out right now, and it's an exciting field.
Often shots on goal or the number of corners are used as proxy variables because those events occur much more often than goals. But you're right that football is incredibly hard to model. For example, what would happen to the Argentinian team if Messi got injured? Any pundit can tell you it would probably be "really bad", but quantifying exactly how bad is currently impossible.
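One way to see why low scoring makes football outcomes so noisy is a toy Poisson model of goals. The rates below are invented for illustration (a real model would fold in proxies like shots and corners), but the point stands: even a clearly stronger side fails to win a large fraction of matches.

```python
# Toy illustration: model each team's goals as an independent Poisson draw
# and sum over scorelines to get win/draw/loss probabilities.
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

def outcome_probs(lam_home, lam_away, max_goals=10):
    win = draw = loss = 0.0
    for h in range(max_goals + 1):
        for a in range(max_goals + 1):
            p = poisson_pmf(h, lam_home) * poisson_pmf(a, lam_away)
            if h > a:
                win += p
            elif h == a:
                draw += p
            else:
                loss += p
    return win, draw, loss

# A team expected to score 1.8 goals vs. an opponent expected to score 1.0:
# despite the large skill gap, the favorite draws or loses quite often.
w, d, l = outcome_probs(1.8, 1.0)
print(round(w, 2), round(d, 2), round(l, 2))
```

Run the same exercise with basketball-sized scores (say, 100 vs. 95 expected points) and the better team wins almost every time, which is the variance gap the parent comments are describing.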
I don't understand why trusting LI with all your email is worse than trusting Google with all your email.
Sure, if you do it for your corporate email, you may be violating the rules of your employer, but that's between you and your employer, and not enough reason to keep others from using an amazingly useful service for their own personal email.
Lost in all this discussion is just how awesome Rapportive is: the desktop Gmail version has concretely and significantly changed my life for the better, and that's not hyperbole. Being able to research people without leaving my inbox has saved hours of my time, made my communications with those people more effective, and prevented me from making at least a couple of serious errors.
All that is worth the added risk, especially for my personal email. Curious: does everyone in this thread have equal outrage for those widgets that log into your email account so that you can invite your friends?
> I don't understand why trusting LI with all your email is worse than trusting Google with all your email.
This is like trusting both LI and Google with all your email. Trusting any two parties with your email is less secure than trusting one party with it. The risk increases when only one of them is in the business of providing email: what is the other party's interest, and does it conflict with your trust?
Revenue growth is better than active user growth, but growth is the high order bit.
In other words, growing revenue 10% a week is better than growing active users 10% per week. But for a startup (in the startup=growth sense), 10% growth in active users per week and no revenue is much better than being ramen profitable and growing revenues at 2% per week.
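To make the compounding gap concrete, a quick back-of-the-envelope (starting values are arbitrary; only the ratio matters):

```python
# Why the growth rate is the high order bit: compare 10% vs. 2% weekly
# growth compounded over one year (52 weeks).
weeks = 52
fast = 1.10 ** weeks   # 10% per week compounds to roughly 142x in a year
slow = 1.02 ** weeks   # 2% per week compounds to under 3x in a year
print(round(fast, 1), round(slow, 1))
```

A year of 10% weekly growth turns 1,000 active users into well over 100,000, while 2% weekly growth doesn't even triple the starting number; that's why the growth rate matters more than which metric is currently monetized.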
If you've done this, you've been an extremely fortunate outlier. The people I know who have done "a successful side business selling apps" have had just as many emotional highs and lows as friends who have done venture-backed companies. The problems can be even more acute when you consider that founders who elect to go that route usually cannot quit their day job until the company has started to take off, adding another large source of stress that they have to balance with everything else.
I don't know if recs are weighted based on that particular recommender's history, but they are definitely weighted based on how well the recommender knows the applicant, and how strong the recommendation is.
I still recommend that applicants seek out YC alumni when applying, but this is because they will give you good feedback on your application and advice on the interview process, not because their recommendation of someone they met once over coffee will move the needle.
Amazon is much more accustomed to operating a low-margin business than Google or Microsoft. Google's overall profitability can allow it to take bigger losses and subsidize its cloud services division if it wants to, but at some point, executives have to wonder if that's the most strategic use of those resources.
It's also not always the case that the leader competes on quality while the followers compete on price. In many industries, being the leader means you have infrastructure advantages and nobody can touch you on price, whereas others must serve smaller markets based on quality. Retail (both online and offline) is a good example. You'd be foolish to suggest that Amazon (the retail side) or Wal-Mart give up competing on price and let their smaller competitors take that mantle.
No one on this planet can compete with Google on storage prices. I don't want to know how many petabytes of space they have lying around "just in case" another planet comes around and they have to index/mirror all its information.
Extraterrestrial planets aside, it's a fair point. Google's fleet undoubtedly has more machines than Amazon's in total, and that might give Google better economies of scale even though it isn't the leader in the utility computing space. However, there are several other considerations. We don't know whether the main google.com fleet is made up of the same types of machines as the ones used for Google Compute Engine; for example, Google can tolerate a high failure rate on a single machine much more easily than a startup renting a couple of instances can. There are also more costs to consider than just the hardware, customer support being a big example.
Overall, I'd say that Google can certainly undercut Amazon on price, but at the cost of reducing the margins its executives and shareholders are used to seeing. They might see this as a worthwhile tradeoff if they consider utility computing a strategically important space, but personally I don't see that space as a long-term threat to Google in the way social networks or mobile phones were. I'm certainly happy to see Google compete here, though; I benefit as much as any other startup from a price war.
> .. Google's fleet undoubtedly has more machines than Amazon's in total ..
Given that this assertion is mandatory for your argument to hold water, actual numbers to support your claims are required here. And since Google and Amazon don't publicly share this data, where does that leave us?
From 2009 to 2011, Google spent ~$8B on capital expenditures (buildings, property, and equipment) while Amazon spent ~$3B. Given Google's scale, it probably has better economics than Amazon.
Technically, those numbers are mandatory for an argument against me to hold water; I was simply saying that spdy's point was a fair one against my original claim, even though many people were unfairly downvoting him for it. Feel free to discard that post if you'd like, which would leave us with the argument in my original post. I'm pretty confident in what I wrote, though.
Dunno why that would be scary. Ties (or really, any allocation of electoral votes where no candidate gets to 270) are resolved in the House of Representatives, and this was common practice in the early 19th century. If it were to happen now, there would be a lot of complaining, but overall it would be less of a constitutional crisis than the 2000 election. If anything, it might be a spark for true electoral reform.
Ah yeah, I stupidly misinterpreted the bottom row of the graph and thought there were 5 ties out of the 20 or so possibilities (forgetting the branches that didn't need to be resolved by New Hampshire); now I realize that count was only for outcomes ultimately decided by New Hampshire.
That Wikipedia article talking about the 1800 tie is pretty awesome.
It's possible that both this report and the "double digit millions" report are true. $2M divided among 20 engineers isn't enough to convince each engineer to stick around at Apple for a few years versus looking for another job immediately. Usually in this case, there are separate retention bonuses for each employee the acquiring company wants to ensure it keeps. Sometimes these retention bonuses are included in the reported price of an acquisition to make everyone feel better, but the investors don't see any of that money. So it's possible that from the investors' perspective, it's a trivial $2M deal, but from Apple's and the acquired employees' perspective, it's significantly bigger.
I have no information about Color so I'm not saying that necessarily happened, just that it's a way of reconciling all the conflicting stories. The submitted story on its own doesn't quite add up either ($2M for 20 employees is too low).
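As a back-of-the-envelope reconciliation (every retention figure here is invented, not reported anywhere), a small payout to investors plus sizable per-engineer retention packages could add up to "double digit millions" while still being a $2M deal from the investors' perspective:

```python
# Hypothetical reconciliation of the $2M report and the "double digit
# millions" report. Only the $2M figure comes from the story; the
# retention package size is an invented assumption.
investor_payout = 2_000_000        # the reported deal price
engineers = 20
retention_each = 500_000           # hypothetical retention package, vested

total_cost_to_apple = investor_payout + engineers * retention_each
per_engineer_from_deal = investor_payout / engineers

print(total_cost_to_apple, per_engineer_from_deal)
```

Under those made-up numbers, investors see $2M, each engineer sees only $100K from the deal itself, and Apple's total outlay lands at $12M, comfortably in "double digit millions" territory.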
Well, considering Color's history, the promise of a steady paycheck for a few years might be enough for that team right now. And the $2 million could be signing bonuses for the engineers, which would be a decent chunk of one-time money.