It's all driven by real estate: residential, not commercial. Salaries are higher because people need to spend so much of their income on housing, and living in a high-cost area like San Francisco requires a higher income level.
>Given that Tarsnap is bootstrapped, that's not an option for me.
Indeed. I think the advice is for VC-funded companies. Whole different ballgame. In the ideal case, VC funding lets you grow faster and ultimately make more money, whereas bootstrapped companies are constrained by the requirement to make at least some money from the get-go.
On the flip side, this constraint of bootstrapping ensures that we actually do make money, whereas in the less-than-ideal VC case the outcome for the founder is zero.
(Bootstrapping can result in zero too, but you usually find that out faster than in a VC-funded startup.)
There are tradeoffs to either method. I prefer the bootstrapped way. But you can apply that rule to a bootstrapped business to some extent: if your budget allows it, it can make sense to trim margins when your growth rate is high and you can increase growth by spending.
Notably, this fundraising round is with Goldman Sachs' private wealth management clients, which sets the stage for Uber to go public in the next few years. (Goldman Sachs attempted, and ultimately withdrew, a similar deal in 2011 that would have let its private clients in the United States invest in Facebook. Of course, that was prior to the enactment of the JOBS Act.)
That's the way you get rich. Make apps, social networking, stuff like that. Or invest in already-successful, viral app companies like Uber, Snapchat, Tinder, etc. Why anyone would go the brick-and-mortar route is beyond me. I would rather invest $100k in successful Web 2.0 companies (like the ones I listed) and double my money in 6 months than start a crappy regular business that has a 50% chance of failing completely within five years and taking all my money with it. Like the Dire Straits song "Money for Nothing", except it's apps and Web 2.0 instead of music.
But a regular business takes work and carries the risk of total loss of capital. I can double my money buying Snapchat and then do nothing but watch TV and eat Cheetos for the next 6 months as VCs bid it into the stratosphere.
As many, many, many people have discovered, investing in startups carries as great a risk of losing all your capital as investing in a brick-and-mortar business (actually, a much greater risk) - the major difference is that there is also a much greater chance of huge returns.
The reality is that only about 1 in 100 people is capable of building a great startup (in the PG sense of "startup") that succeeds - but a huge part of the population can build a traditional brick-and-mortar company.
To put things into perspective: on the App Store, only about 3,000 of the one million or so apps are capable of supporting a developer in the United States at the median US income ($53,800/year).
As any trader will tell you, "the price justifies the risk." You won't be able to buy a share of Snapchat now at a low price relative to fair value (if, as others have suggested, you're able to buy at all). Look closely and you'll see that for top deals, even VCs' and bankers' margins are getting squeezed.
You're not a top VC firm - nobody is selling you Snapchat shares, so your argument is moot. If you want to invest in every startup looking for a seed investment, you are going to have to invest in a lot of them before one becomes the next Snapchat. Most VC funds have poor returns or even losses.
I doubt it will be the VCs bidding once it goes public; typically it will be the public and investment funds. VCs are on the selling side once one of their investments goes public, so they can get their exit.
In that case, would you like to buy some shares? I'm planning on disrupting disruption itself with a new social app for users to generate social apps for user generated content. No more investing in many companies waiting for one of them to succeed, give me all your money now instead and one of the users will eventually generate the platform of tomorrow without either of us having to do anything today. It cannot possibly fail.
Can you elaborate on how you are going to invest $100k in private app companies like Uber, Snapchat, and Tinder, given that you didn't get to invest in their angel rounds and it was hard to tell those apps were going to succeed?
It was a puzzle so bewildering that, in the months after his talk, people started dignifying it with capital letters – the Hard Problem of Consciousness – and it’s this: why on earth should all those complicated brain processes feel like anything from the inside? Why aren’t we just brilliant robots, capable of retaining information, of responding to noises and smells and hot saucepans, but dark inside, lacking an inner life? And how does the brain manage it? How could the 1.4kg lump of moist, pinkish-beige tissue inside your skull give rise to something as mysterious as the experience of being that pinkish-beige lump, and the body to which it is attached?
To probe the question, I'll ask this: what's to say that a computer doesn't experience some very primitive form of consciousness? If we unplugged everything except the power cord but left a complicated simulation running, it would still have something like "a rich inner life." Its peripherals and sensors give it a sense of a body and, via abstract drivers, a degree of conceptual separation from that hardware. Doubly so in the case of virtual machines. After all, we can't "truly" experience what a computer might from the inside, so who are we to doubt "computer consciousness"?
If the proponents of "The Hard Problem of Consciousness" can't give a quantified explanation of how to distinguish a theoretical computer consciousness from a human one, that raises the question of whether the problem actually exists.
For my part, I don't believe in consciousness as a concrete thing, only as a label we use to group together quite a few disparate systems and phenomena. It's the same way that I don't believe in "Ruby" as a concrete thing, but only in the unit tests, the sample code, the docs, and the thoughts in Matz's head that we subconsciously conflate.
Is there anything for which you cannot ask a similar question of the form: Why does xyz exist? Why are things not just [this simpler thing]? For those types of questions, I imagine one would answer: because if it were not so then things would be nonsense.
If this question is asked of consciousness without saying what consciousness does (such that taking it away would leave nonsense), then I do not see how anyone can answer it.
The way the question is framed, consciousness accomplishes nothing as far as survival goes. "evolution could have produced zombies instead of conscious creatures – and it didn't" A zombie seems to be defined as: a phenomenon that you think has consciousness but does not.
If consciousness has been defined as some phenomenon for which you have an intuitive understanding of what is being communicated by the word "consciousness", but it also does not do anything, then how can anyone explain why consciousness should exist? Anything you can think of as a function of consciousness can then be hand-waved away as something zombies could accomplish.
Here's a more concrete and less "woo woo" way to put it (although only less, not necessarily "woo woo"-free): suppose for the moment that consciousness exists on some sort of scale (i.e., I'm not presuming binary have/have-not, and I'm not really presuming a total ordering of "quality" either). Humans have "more", a dog has "less", and a fly has "hardly any". We are not too far out into the water here, really. There is some sort of meaningful something that humans have lots of, dogs have less of, and flies have hardly any of, even from a strictly physical perspective.
Now, suppose we assume that the end of the article is correct and "consciousness" is really just a form of integrated information processing, and their device for measuring the interconnected nature of a given brain is measuring something meaningful.
(And let me take a moment to observe that when I say "assume", I'm serious. I'm not asking for agreement, I'm saying, work with me and assume for the sake of a discussion.)
Now, we are still not too far out into the waters here. We have a device that is measuring something real, and fairly non-mysterious. It would not be hard to create an equivalent for a computer system by considering the system as a set of graph nodes, then probing the connectivity of the system. We're still standing on fairly concrete ground.
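To make the "set of graph nodes, then probe the connectivity" step concrete, here is a toy sketch (my own illustrative function and edge list, not the article's actual measure) that models a system as an undirected graph and reports a few basic connectivity statistics:

```python
from collections import defaultdict

def connectivity_stats(edges):
    """Compute simple connectivity statistics for a system modelled
    as an undirected graph given by a list of (node, node) edges."""
    neighbors = defaultdict(set)
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)
    degrees = [len(v) for v in neighbors.values()]
    return {
        "nodes": len(neighbors),
        "edges": len(edges),
        "avg_degree": sum(degrees) / len(degrees),
        "max_degree": max(degrees),
    }

# A toy "system": a hub-and-spoke call graph plus one cross link
edges = [("main", f"f{i}") for i in range(4)] + [("f0", "f1")]
stats = connectivity_stats(edges)
```

Average number of neighbors and maximum degree are only the crudest possible stats, of course; the point is that any such probe is cheap and entirely non-mysterious.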
Now, consider that in the near future we will be able to build computer systems that by any basic connectivity metric (average number of neighbors, various stats of connectivity shape, etc.) would match even human brains, but can be demonstrably doing something very simple like just copying data from here to there, something that is clearly not "conscious" behavior.
Characterize the difference between my hypothetical computer system that matches the human brain on $RELEVANT_CONNECTIVITY_METRIC and the actual human brain. In theory, as you refine the connectivity metric you should also be describing to me how to build a human-equivalent brain, too, or at least telling me exactly how I'm falling short.
Note this is a sort of mathematical argument, where I'm asking you to bring your own $CONNECTIVITY_METRIC to the party. It seems pretty clear that it can't be as simple as whether the graph is of a certain size and happens to follow a power law in its connectivity, because it's really easy to produce a program that has that level of connectivity and still isn't conscious. (A naively-written non-trivial program naturally has a power-law degree distribution.)
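To see how cheaply power-law-style connectivity arises from a process nobody would call conscious, here is a rough sketch (function name and parameters are my own invention) that grows a graph by preferential attachment, the standard recipe for heavy-tailed degree distributions:

```python
import random

def preferential_attachment(n, m=2, seed=0):
    """Grow a graph where each new node attaches to m existing nodes,
    chosen in proportion to current degree (Barabasi-Albert style).
    The resulting degree distribution is heavy-tailed / power-law-like."""
    rng = random.Random(seed)
    targets = list(range(m))   # the initial seed nodes
    repeated = []              # nodes repeated once per unit of degree
    edges = []
    for new in range(m, n):
        for t in set(targets):
            edges.append((new, t))
        repeated.extend(targets)
        repeated.extend([new] * m)
        # pick targets for the next node, biased toward high-degree nodes
        targets = [rng.choice(repeated) for _ in range(m)]
    return edges

edges = preferential_attachment(1000)
degree = {}
for a, b in edges:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1
# a handful of hubs end up with degree far above the ~4 average
hubs = sum(1 for d in degree.values() if d >= 20)
```

A few lines of blind random wiring, and the graph already matches the "certain size, power-law connectivity" criterion, which is exactly why that criterion can't be the whole story.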
This phrases the problem in a fairly practical way ("how do I build a human-class AI?"), demonstrates the difficulty of answering it even given a powerful and potentially incorrect assumption, and is phrased virtually entirely in physical terms such that a sufficiently concrete and accurate answer would very likely lead to the ability to build AI. And we have no clue about that $CONNECTIVITY_METRIC. All we've got is some people asserting that the answer lies in this direction and others asserting it doesn't and very little hard data to help us answer that question.
And I'd also observe that I'm not even worried about the answer; this generalizes trivially to the "set" of conscious behaviors, and also to your choice of whatever test you like to apply for consciousness, up to and including "I'll know it when I see it", since right now we can't answer any of these variants.
I just wish they made a regular, non-virgin olive oil. (I understand why they only produce extra virgin olive oil, but refining it neutralizes the strong taste and acid content and makes it more suitable for cooking.)