Maybe it's true that firewalls are less effective in 2015 than they were in 1998. The problem is: customers don't buy on effectiveness, they buy on cost-benefit. Firewall effectiveness can drop by 90% and they will still have a better cost-benefit than the alternatives. There's a reason for that: firewalls are the most straightforward network implementation of Saltzer & Schroeder's principles, and those principles are probably Right. Everywhere. In code. On the network. In identity and access management.
Why is 2015 different? "The cloud"? If "the cloud" is what's changed, that should be the thesis: we need security solutions for the cloud. Unfortunately, that is also a tired thesis.
Similarly: the shift from prevention to recovery seems like a manifestation of narrative bias. Sure, there are lots of newsworthy cleanups, and one very successful consulting-to-product-to-consulting pivot company in that space. But customers don't derive the same value from recovery as from prevention. The dirty secret of "recovery" work --- forensics, attribution, &c --- is that it's driven largely by legal compliance concerns, and probably doesn't have a great intrinsic ROI.
Maybe there's an opportunity for a "full stack" vertically integrated insurer informed by a compliance and forensics practice.
There are markets that seem to work the way VCs want security to work. For instance, mobile happens, and all of a sudden you can build and 10x+ a company that just does for mobile apps what Google Analytics does for web pages. Security just doesn't get valued by customers that way.
Also: "if you fight fire with fire, you're just going to get burned"... what does that even mean? P(burn|fighting-with-fire) ≥ P(burn|fighting-without-fire).
It feels good to try to form a business model on prevalent security themes like "Think like an attacker" and "How do we develop preventative measures before we get hacked?", but that doesn't actually stop incidents from happening. There's no perfect security, just good-enough security based on what a company can invest and how much risk it wants to manage.
This is something I've found most startup folks are really resistant to, because it means that they can't commercialize security services in the same way you can commercialize web hosting. That's understandably upsetting to them, when you see how trendy security has become in the media, and how successful a company could become by capitalizing on it.
I think the section any VC or investor writes on security's potential is going to remain naive for a long time. Semantic desires like "I don't want my mail to be read by other people" simply don't map very well to purely mathematical operations encapsulating privacy and security in software. Organizing security vulnerabilities into neat taxonomies makes security folks sleep well at night and gives the appearance of an ordered checklist you can build a product on, but in practice that's never the case.
Personally, I don't think security should be the sole product or service anyone tries to base a startup on. I am really excited about virtual reality and machine learning companies, however. I'd really love to see some hard innovations and improvements in that space. It'd be nice to have more hardware companies in general.
As mentioned elsewhere in this thread, it's a very complex problem involving operational, economic, and technical factors, suggesting (as others have mentioned) it's not something that really can be "sold". Watching bugtraq for a while, I saw a lot of pure tech exploits (buffer overflows, SQL injection, other silly things like that) but also quite a lot of misconfiguration -- insecure passwords, lack of an enforced password policy, employees leaving the company without revocation of their credentials, etc.
Maybe a good commercial opportunity would be policy compliance checking tools. Imagine a simple policy like "the corporate network should not be accessible from the outside world". Would it be possible to check all firewalls/routers/NATs/etc. for compliance with this policy?
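To make the idea concrete, here's a toy Python sketch of what such a check might look like. Everything here is an assumption for illustration: the rule format, the `violations` helper, and the 10.0.0.0/8 corporate range are made up, and a real tool would have to parse actual firewall/router/NAT configs rather than a list of dicts.

```python
# Toy policy check: "the corporate network should not be accessible
# from the outside world." Rule format and addresses are hypothetical.
from ipaddress import ip_network

CORP_NET = ip_network("10.0.0.0/8")  # assumed corporate address space

def violations(rules, policy_net=CORP_NET):
    """Return allow-rules that admit traffic from outside into policy_net."""
    bad = []
    for r in rules:
        src = ip_network(r["src"])
        dst = ip_network(r["dst"])
        inbound_from_outside = (not src.subnet_of(policy_net)
                                and dst.subnet_of(policy_net))
        if r["action"] == "allow" and inbound_from_outside:
            bad.append(r)
    return bad

rules = [
    {"action": "allow", "src": "0.0.0.0/0",  "dst": "10.1.2.0/24", "port": 22},   # violates policy
    {"action": "allow", "src": "10.0.0.0/8", "dst": "10.3.0.0/16", "port": 443},  # internal-only, fine
    {"action": "deny",  "src": "0.0.0.0/0",  "dst": "10.0.0.0/8",  "port": 0},
]
print(len(violations(rules)))  # 1
```

The hard part, of course, is not this loop; it's normalizing every vendor's config format into something this simple, which is where most such products live or die.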
Web applications don't work the same way. For example, enforcing policy restrictions between users of different permission levels suddenly becomes a custom project depending on what each user can do, what the application does, what functionality is mapped to different permissions, etc. It is not as simple as whitelisting; it is highly contextual.
Unfortunately, web applications are also where most vulnerabilities are found, not the network (at least not anymore).
This is already a very big part of the security industry. Countless companies and products (claim to) do this.
Firewalls have proven to be ineffective and companies are willing to pay for solutions that are more effective. Attackers have also gotten a lot more professional and sophisticated.
There have certainly been plenty of billion-dollar security companies built outside of the firewall space in recent years (FireEye, Varonis, Lifelock, Trusteer, Cloudflare, etc.), and there are undoubtedly going to be many more.
Security has also played out in mobile in the way you suggested can't happen. Companies like Lookout and the BYOD management companies are well on the way to building billion-dollar businesses, enabled by the shift to mobile, in spaces traditionally controlled by entrenched vendors.
(incidentally the security section was written by Scott Weiss who founded IronPort and later ran Cisco's Security Technology Group; so not someone unfamiliar with the security space)
Varonis, Lifelock, Trusteer, and Cloudflare aren't reactions to deperimeterization and the declining effectiveness of firewalls. (Ironically, Cloudflare is if anything a cause of the declining effectiveness of firewalls, not a solution). Also: my argument isn't that it's impossible to build a billion dollar security company! It's that the dynamics of doing so aren't isomorphic to those of other startups.
I think you missed the point of my comparison to mobile, which was not that there wouldn't be viable mobile security products, but rather that shifts in technology produce explosive returns for things like adtech and video, but tend not to do that for security. Lookout is, I think, the closest you come to an example of a breakout success for security, amidst the most important shift in computing since the personal computer, one that has minted a bigger number of larger successes outside security.
The STG has been beating the drum on post-firewall broad-scale deployment of security technology (= more blue pizza boxes) since Jayshree Ullal started it a decade ago. Have you read a lot of Jericho Forum stuff? If you found Weiss' piece interesting, I think you'd find Jericho especially interesting. Maybe even lucrative. ;)
(Voted you back up)
I'd say in the mobile security space Good Technology, OpenPeak, Ionic, Telesign, and Okta all probably have valuations in the mid-hundreds of millions of dollars.
The big winners in mobile have been gaming and advertising, but I'd suspect that in terms of enterprise software security companies are probably out-performing the average.
Even Bromium was pretty upfront about the use case for their product though (high-value targets like executives who travel to China). They were very honest about it being overkill for an entire enterprise.
I think securing endpoints is basically a lost cause, though (I'm happy to consider that a minority opinion, however). My company spent many years trying to get TPMs to be the solution to this problem, and I'm pretty sure that ship has now sailed, with the only two sectors of the industry that are continuing to grow being completely unsuited to TPMs (virtualization and mobile).
I think we'll eventually realize that, much like networks, devices have to be assumed to be untrustworthy, and we have to route accordingly.
A counterpoint is that mobile platforms often have some form of secure enclave, but sadly not standardized. Even AMD's low cost x86 CPUs are adding an ARM coprocessor, which could in theory be used for functionality similar to TPM, DRM, or AMT. Some of those are more useful than others. On the Intel side, SGX will add more enclave options, and complexity, but hopefully will be open and well documented.
There was a brief window in time when you had to go out of your way to buy an Intel laptop "without" a TPM (even Macs had them for a time, even if Apple never made use of them). The Trusted Computing Group failed to capitalize on that timeframe by providing both a "reason" and decent solutions to that problem.
There are a lot of reasons why that was; if I've been drinking, I'll happily go into many of them.
On the mobile side, I agree, it's a hodgepodge. Apple has their secure enclave (which doesn't quite act like a TPM, even though it theoretically could), and there exist vendors who could theoretically include a TEE in their phones (right now they're almost entirely limited to special "government-specific" use cases).
And I'm ignoring Samsung's solution (which is basically snake oil).
Intel's SGX would be great, provided that the industry suddenly switches to x86 for mobile (which I don't think is going to happen).
The mobile industry is way too fragmented from a hardware perspective for any type of trusted computing platform to achieve even a modicum of install base. That might change in the future, but I wouldn't bet on it.
If modular mobile architectures succeed, there will be a better chance of combining one's preferred hardware TCB with one's preferred sensors. Sometimes it only takes one counterexample to move entire markets; look at the time interval between the first Galaxy Note and the Apple iPhone 6.
Things have changed a lot.
"The threat of people getting into our systems today is so great that every company in the world has to embrace the notion that not only are they going to get hacked, there’s a good chance hackers are already inside … and they just don’t know it."
"This set of companies comprise a very interesting category because everybody’s going to get hacked, so now it’s just a question of how quickly we respond when we see odd stuff going on within the company."
Specifically "everybody" and "every company". 
The idea that "everybody" is going to get "hacked" reminds me of the early days of the internet when newspapers were confused by what a "hit" to a website was. Not only would they print whatever you told them but they didn't recognize that serving up a graphic file which created a log entry wasn't significant in the way they thought it was. So we can just change the definition of "company" to suit our purpose and goal.
The fact is, nowhere close to "everybody" is going to get hacked, at least not in a way that actually matters. Correct me if I am wrong (you would know the answer to this better than I would), but are there even enough bodies to take advantage of all the targets, assuming they had the skills and motivation to break into the targets and do something with the information?
Is this the Valley's way of saying that they can define things in a way that suits their purpose, in other words, that only what they think is a company is a company?
So many companies, particularly younger ones, have zero interest in putting up barriers to access as the company grows, because in the early days everybody was trustworthy and "because bureaucracy bad". So all the customer emails, phones, addresses, birth dates (and, I'm guessing, in the US, SSNs) routinely fly around in Excel files called something like "Order Metadata Report" and get sent to 50 people in 5 departments, each of whom has their own use for it (like counting customers). Judging by the Sony hack, it's not just SMEs.
If you want to steal data from a company, just pay a student a few hundred bucks to take up an unpaid internship in marketing (particularly anything to do with emails or customer segmentation), give him a USB key, and teach him some VBA and basic SQL (making him useful for reporting). The interns always end up running the reports, so they have a lot of access, usually complete access; financial information is the only thing that's not shared around. More advanced companies have shared database access built into the Excel files with a single login for everybody which never changes (hello, 300 angry users), so with a copy of this file you have perpetual, up-to-date information long after you're gone.
Then you try to stop them from doing this and the C-level folks will say something like "it's OK just this time" and "please stop slowing us down". Most of them will be gone to the next thing by the time the black swan lawsuit hits - if there even is one. How would customers know? Why would they care?
Cf http://xkcd.com/538/ and http://www.commitstrip.com/en/2014/10/28/security-checklist/
Next best thing is to sanitize your data; hash any personal information like emails or phones, take a day or two to build a rudimentary BI database that has sanitized information on it before giving people access, use work emails to manage access to everything and log it (my team built https://github.com/zalora/sproxy for this purpose), silo access, teach people SQL, and so on.
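As a concrete illustration of the hashing step, here's a minimal Python sketch. The column names, the key, and the `pseudonymize`/`sanitize_rows` helpers are all hypothetical; the one real design point is using a keyed HMAC rather than a bare hash, so values can still be joined across reports but can't be reversed by a dictionary attack without the key.

```python
# Sketch: pseudonymize PII columns before loading rows into a BI database.
# HMAC with a secret key (kept out of the warehouse) instead of plain SHA-256,
# so emails/phones can't be recovered by hashing a guessed-values dictionary.
import hmac
import hashlib

SECRET_KEY = b"rotate-me-and-store-me-outside-the-warehouse"  # assumption

def pseudonymize(value, key=SECRET_KEY):
    # Normalize first so "Jane@X.com" and " jane@x.com" map to the same token.
    return hmac.new(key, value.strip().lower().encode(), hashlib.sha256).hexdigest()

def sanitize_rows(rows, pii_columns=("email", "phone")):
    out = []
    for row in rows:
        clean = dict(row)
        for col in pii_columns:
            if col in clean:
                clean[col] = pseudonymize(str(clean[col]))
        out.append(clean)
    return out

orders = [{"order_id": 1, "email": "Jane@example.com", "total": 42.0}]
clean = sanitize_rows(orders)
# clean[0]["email"] is now an opaque 64-char hex token; order_id and total
# survive untouched, so counting customers and revenue still works.
```

The normalization step matters in practice: without it, trivial casing differences break the joins that made the column useful in the first place.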
But honestly, to most management teams security is dead last on the list of priorities; it's just another tail risk that probably won't happen, and if it does happen it doesn't matter that much or cost that much, and there are a thousand other things on their mind, like growing the company, which are more important ('compliance is for when we're profitable' or 'we're not a bank, it's OK'). You can't do very much when working in such a company.
The point is to make detection and remediation important parts of risk management as well, not just prevention. Prevention is spell check, it's not always going to catch everything. Because the reality is, anyone (to your point, not necessarily everyone, but certainly anyone) can be hacked. Rather than focusing exclusively on a hard crunchy shell, make sure you can detect someone already inside and lock them down when you do. Corporate security needs to be right 100% of the time. The attacker only needs to be right once.
But yes, it's certainly possible that everyone can be hacked, and for certain definitions, it's completely likely that every company will or has been hacked (if you include malware, and information disclosure). How much malware is on your network that you don't know about?
Kind of an electronic view of "it won't happen to me"?
I think those are separate questions. Consumers largely are not.
Enterprises are getting wiser on the risk management side and are starting to use things like "Factor Analysis of Information Risk" (FAIR) to create a framework around the effect of various incidents. Assessing the chance of being attacked quantitatively is probably much more difficult than influencing it (which includes the various best practices tptacek alludes to, such as firewalls, having a SOC, utilizing proper controls, AV, etc. (the implementations of the eight S&S principles)).
As to the chances of being attacked, I think it could be examined similarly to something like a health issue. What are my chances of getting cancer? Well, I can read the literature and follow behaviors which should reduce my chances of getting it (in the risk world that would be things such as using antivirus, and not sharing passwords / SSNs / etc. in plaintext or over the phone); however, I should also be preparing for what to do should I contract cancer.
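For what it's worth, the quantitative core of FAIR reduces to something like: annualized loss expectancy = expected loss event frequency × expected loss magnitude. A back-of-the-envelope sketch (all numbers invented for illustration; real FAIR analyses use calibrated ranges and distributions, not point estimates):

```python
# FAIR-flavored back-of-the-envelope: is a security control worth buying?
def annualized_loss_expectancy(event_frequency_per_year, loss_magnitude):
    """Expected annual loss = how often it happens x how much it costs."""
    return event_frequency_per_year * loss_magnitude

# Hypothetical: a breach expected once every 4 years, costing ~$200k when it hits.
ale = annualized_loss_expectancy(0.25, 200_000)
print(ale)  # 50000.0

# A control that halves the frequency reduces annual risk by $25k...
risk_reduction = ale - annualized_loss_expectancy(0.125, 200_000)
print(risk_reduction)  # 25000.0
# ...so a $30k/year control would NOT pay for itself under these numbers,
# which is exactly the kind of argument management teams actually respond to.
```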
NSA/North Korea/China/Eastern Europe/Anonymous and Sony/Target/Home Depot.
I think far less has changed about what we're trying to secure. Far more has changed about who we're trying to secure it from and, as others have pointed out, the consequences of not securing it. In 1998, hackers didn't represent an existential threat to the company. I'm not sure you can say the same today.
This is true of a lot of security work. Maybe even most of it.
Now the leading Benefit is "not having embarrassing company documents on the front page of newspapers every day for a month".
That's quite a "new" Benefit.
1. Offline Big Data - This is mostly the ETL crowd - Scalding, Cascading, Spark & associated novel startups, who provide technology to run MapReduce jobs on TBs & PBs of data. This isn't going away anytime soon. Investment banks and enterprise financial institutions are the big customers, with risk analysis (VaR, CVaR) and large-scale Monte Carlo scenarios on diverse financial instruments being commonplace.
2. Online Big Data - Storm, Summingbird & friends - continually ingesting high-volume realtime data streams to provide realtime insights, which can be substantiated by #1 later, as and when those jobs run. E.g., say you ingest tweets in realtime via a Storm pipeline & give me a running time series of how many tweets were from which city. Meanwhile, you squirrel away these tweets in HDFS so the offline MR job runs later & gives you exact counts.
3. Small-data ML - The result of #1 is typically a dataset of modest size (a few MB to a few GB) that can be ingested into your favorite ML solution (too numerous to mention) for predictive analysis & BI purposes.
4. Soft "AI" - Using #2 + #3 in intelligent ad serving, traffic routing, realtime pricing to match inventory (e.g. there are several hotels in Las Vegas that reprice rooms based on the number of passengers on commercial flights arriving into Vegas, local weather (sunny, rainy, etc.), industry convention dates & such - all the ML + AI done out of a tiny office in SF), electricity regulation (https://news.ycombinator.com/item?id=8280315), etc.
5. AI without the quotes - tiny startups using RNNs to predict time series, using CNNs for image captioning & other really nifty AI applications not currently commercially exploitable at scale but definitely primed for acqui-hire.
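The online/offline split in #1 and #2 can be sketched in a few lines of Python. This toy `streaming_counts`/`batch_counts` pair (names and data invented) just illustrates the pattern: the streaming side gives you a running snapshot after every event, and the batch pass over the archived events later reconciles it with exact counts.

```python
# Toy lambda-architecture illustration: running counts per city as events
# stream in (the "Storm" side), plus an exact batch recount (the "MR" side).
from collections import Counter

def streaming_counts(events):
    """Incremental counts, one snapshot per event, pushable to a dashboard."""
    running = Counter()
    for e in events:
        running[e["city"]] += 1
        yield dict(running)  # copy, so later events don't mutate old snapshots

def batch_counts(events):
    """One exact pass over the archived events."""
    return Counter(e["city"] for e in events)

tweets = [{"city": "SF"}, {"city": "NYC"}, {"city": "SF"}]
snapshots = list(streaming_counts(tweets))
print(snapshots[-1])            # {'SF': 2, 'NYC': 1}
print(batch_counts(tweets))     # same totals: the batch job reconciles the stream
```

In real systems the interesting part is what this sketch hides: the streaming side tolerates approximation and loss for latency, and the batch side exists precisely to correct it.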
The two definitely complement each other, but they are not the same.
Cdixon even retweeted this recently: "Holograms are like print preview for 3D printing."
Also where is Drones on this list?
But at the end of the day, it's still a16z list and subject to their interpretations.
I second that, that was the first missing item that popped into my head as I scanned over it.
The idea of a crowdsourced insurance company is not a good one (to put it mildly). The expected returns of an insurer are highly correlated with the returns of the broader market, because a typical large insurance company makes little to no money writing policies and generates most or all of its income from investments. But maybe he's thinking about crowdsourcing the insurance risk itself, not the whole insurance company with its massive portfolio of stocks and bonds (although that's not what he said). In that case, you get an investment that yields X% a year until and unless the underlying insurance contract is triggered, in which case you lose your principal. These securities actually exist, but as you might imagine they are not typically purchased by individuals.
I do think the insurance industry can be disrupted. It's harder for a startup to gain traction because economies of scale work differently in insurance than they do in other industries, but a Google or an Amazon could do some real damage if they wanted to invest the resources to do so. There are a lot of interesting problems to solve. But this article totally misses the point.
As to stodginess, I'd say there's a big difference between personal/small commercial lines and the big ticket enterprise-type stuff -- the underwriting process goes from something data-driven to something relationship-driven very quickly indeed. The bigger the commercial line, the more likely it's all about who throws the best yacht parties (reinsurance in particular suffers from this massively).
Crowdsourced insurance is indeed a terrible idea, though you could imagine 'web of trust' insurance that almost made sense -- say my ten thousand best friends and I know that we're all great actuarial risks, perhaps because we have some kind of information on which it's illegal to select (that we're all in the same gym, say). We could then try to write ourselves health insurance for cheap, because our plan would select only us gym members. You can sort of make it work, as long as you're prepared to make the regulators hate you.
Which is the real problem, of course -- most people buy insurance because they have to, not because they want to. Auto insurance that wouldn't pass muster with the police, or home insurance that wouldn't satisfy the bank holding your mortgage, doesn't solve the problem.
Do good problems exist? Sure. A web-based Managing General Agency, for instance, could do very well for itself, but the expressed ideas in this section are pretty terrible.
For example a YouTube-like podcast portal seems like a potential option (i.e. moving away from the need for dedicated apps/clients for a more mainstream audience), but I'm sure this isn't a novel idea.
I am not doing anything unique or novel though, there are countless sites that I have learned from, railscasts.com, railstutorial.org, egghead.io, laracasts.com, gorails.com, etc. Granted, so far this has all been free, and I have not started to monetize yet, but I do have traction, and see a clear need. So, take that for what it's worth. Very happy to have any feedback, or answer questions, just shoot me an email (see profile). Also, if you have a talent in a particular area, I would highly suggest thinking about how to share that via video with other people.
How long does it take to write and produce each episode? Would you care to share the toolchain being used for screencasts?
I miss the lost promise of SMIL which would have allowed in-context video links to URLs for related sites, or deep-linking to related video snippets in other episodes. If your project continues, you will quickly grow a library which will benefit from cross-indexing. You will also find that a subset of content will need to be refreshed, as tools and environments change. Metadata for modular snippets will improve discovery, reuse and update.
Rails Tutorial has generated six-figure revenue from screencasts combined with an ebook, so there is precedent for monetizing self-service video training.
> How long does it take to write and produce each episode?
Rough guide is about 2-3 hours per minute of video: research, playing around with ideas, demos, writing, recording video, recording audio, editing, etc. So, ZFS part one was 12 min, that's about 24 hours, and part two is 18 min, so about 36 hours. Those two episodes are about 60+ hours of solid work. ZFS did take a little longer than 60, though, since I ended up doing tons of research, but I'd rather err on the side of a better-quality video than put out suboptimal content. These figures are shocking to many people, but I have honestly not found a quicker way. As I learn to use the editing software a little more, I have been able to shave an hour here and there, so that is nice!
> Would you care to share the toolchain being used for screencasts?
Sure, here is a basic dump of the tools I use:
Kazam (desktop recording)
Audacity (audio recording)
Kdenlive (edit the audio/video together)
Screenkey (keystrokes on screen)
Decrypt (intro screen)
OpenOffice Writer (transcripts)
Shaky (ASCII diagram to PNG file)
Asciiflow (diagram tool; input for Shaky)
That makes sense, given the high quality result. A 2003 thread estimated production cost for training videos at $1000-$3000/minute, http://answers.google.com/answers/threadview?id=254620 and that range still seems relevant, https://forums.creativecow.net/thread/17/871420
> here is a basic dump of the tools I use
Great that you were able to achieve this level of quality with FLOSS tools. Have you considered pay-what-you-want pricing, with a suggested anchor? Or creating (for pay) screencasts for commercial software? I've often felt that training material should be included in the marketing budget, as it increases the size of the market. E.g. a portion of donations/fees could go towards the OSS project.
> Have you considered pay-what-you-want pricing, with a suggested anchor?
As of late 2014, I am working on this full-time. Scary, but I think I can pull it off. I am shooting for a $12 monthly subscription fee (auto-renewing), that will give you blanket access to everything for 30 days. My thinking is that, I am targeting sysadmins, devops folks, and developers who want to learn about ops. So, if I can save you even 1 hour per month, then it more than pays for itself. I am hoping to push this out in a few days actually, so I will have more data soon! I don't think it will be an overnight success, but hope to build it over a couple years. At least that is the plan.
> Or creating (for pay) screencasts for commercial software?
I have looked at this. Even brokered a deal, but ended up backing out. It just takes way too much time and I needed to focus on putting out my own content. I just need to get some funds coming in rather quickly, or go get a job again. So, I am putting everything I have into this. Once I have some money coming in, then I might look at this again.
Background: https://news.ycombinator.com/item?id=8224896 & https://news.ycombinator.com/item?id=7350265
The other thing I think would be cool is if, with something like the Oculus Rift, you could tune in and sit as a third person, or at the table where it was happening, and actually look around. For example, imagine sitting at the table with Charlie Rose interviewing Bill Gates.
I messed with live streaming audio quite a bit back in the 90's and early 00's and bandwidth/latency definitely could cause issues but I haven't seen a whole lot of interest in that format for quite a while. For better or worse, people seem to prefer that DVR/on-demand format where you get something delivered to your computer or device and then watch or listen when convenient.
This is what Jason Snell & co. do with The Incomparable. They also do a significant amount of editing to cut out people talking over each other, and make everything sound nice. The result is great, but it's a lot of manual effort:
I wonder if soundcloud will become a major player in this space since a lot of the podcasts I listen to end up being hosted by them.
Depending on what you're doing at the time, you'd be able to 1) watch the hosts talk along with guests, relevant images, and all that (basically a YouTube show), 2) just listen to the audio like "normal", or 3) read a transcription of the audio just as you would a blog post.
Allowing a person to choose how they get the podcast's information depending on what they're doing seems very logical to me.
And I don't think there's anything special about podcasts. You can get the same information through blogs currently. What's different is the passiveness of listening. It allows me to listen to something interesting when I'm working out rather than just music. But sometimes I would want to watch, and other times I would want to read the text. That's why having the content that's currently on podcasts move to multiple formats makes the most sense for the future.
I feel like a site like Grantland is pretty much there.
Of course, they do a much better job than just providing a transcript. Bill Simmons or Zack Lowe might write a long article about a topic, then discuss it on a podcast, then maybe extract some video clips and add some animation to spice it up.
The key, I think, is the creative people driving the process. Having the right instincts for what deserves a long form piece, a tweet, a blog length take, a podcast, or even a short documentary.
Might be a stupid idea as I'm not a big podcast consumer.
Personally, as a huge consumer of podcasts, the only thing I've been curious about is whether they could do custom ads. So say you click play in your favorite player, and it would stitch in a relevant ad immediately during that play.
What do they mean? How was that calculated? It sounds completely wrong.
First of all, battery life. He specifically calls out phones ("not just phones, they could be wearables and other...") as targets for this. Every bit of computing you do on my device is battery life I lose. You're welcome, in theory, to use some compute on my CPU, but stay the hell away from my battery life, which in practice means stay off my CPU. So there's that.
Second, latency is a big thing in user experience. Go ahead, follow this author's advice, and do your JSON-to-HTML rendering on the client. See how it affects your latency. See how it affects your user experience. See how the latency affects your SEO standings. Try it out.
So once you realize you don't want to use client battery life, and you don't want to use client computing anywhere it would make the user experience perceptibly more latent, what're you left with? Yeah, sure, you could use some background computing power in the style of SETI-at-home and so on... but if you want users' explicit consent, you're competing with those existing for-the-betterment-of-humanity projects, and if you don't get explicit consent, you'd better tread mighty carefully.
I think this _is_ actually worth trying out (albeit as an experiment). If you can send JSON to the client (and have already cached the templates) rather than fully rendered (uncacheable) HTML, you can (hopefully) reduce the amount of data that's being transmitted. This saves you in
* latency - downloading a small JSON file will take less time than downloading a large HTML file (although with 4G and later high-bandwidth mobile data this becomes less relevant) - at what point does the additional download time offset the template-rendering CPU time?
* CPU usage (and hence battery life) - if we assume HTTPS for the download, the TLS decryption isn't free - at what point does it use less CPU to render your JSON client-side than to download a big file?
* radio usage (and hence battery life) - downloading more content means your radio must be on for longer, which is likely to use more power - at what point does the additional radio usage offset the CPU usage?
In each case, I don't know where the balance lies, but I don't think it's clear cut that server-side HTML rendering is always a better thing on mobile devices.
Having said that, I definitely agree with you on the battery life for general computation point - I'm not going to be bitcoin-mining on my cellphone! ;)
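The nice thing is that the payload-size half of this is trivially measurable rather than something to argue about. A rough Python sketch (records and template entirely invented) comparing the same data as raw JSON versus pre-rendered HTML, before and after gzip:

```python
# Measure, don't assume: same 50 records as JSON vs. pre-rendered HTML,
# with and without gzip. Template and data are made up for illustration.
import gzip
import json

records = [{"title": f"Post {i}", "votes": i * 3} for i in range(50)]

json_payload = json.dumps(records).encode()
html_payload = "".join(
    f'<li class="item"><span class="title">{r["title"]}</span>'
    f'<span class="votes">{r["votes"]} votes</span></li>'
    for r in records
).encode()

print(len(json_payload), len(html_payload))                        # raw sizes
print(len(gzip.compress(json_payload)), len(gzip.compress(html_payload)))
# The raw HTML is several times larger, but gzip eats repeated markup
# very efficiently, so the compressed gap is much narrower. Whether the
# remaining bytes saved beat the client-side render cost is exactly the
# "where does the balance lie" question above: run this on your real pages.
```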
A) Whenever data volume is large and it would take forever to shove it up to some server.
B) Whenever offline service availability is crucial (i.e. you don't want to be dependent on network service availability)
C) Whenever you want to be in control over where your data ends up.
For example Computer Vision is an instance where these criteria are usually met.
Not in a law-of-physics point of view, but I suppose, it is from the perspective of the entity that pays for the server.
If you can push the computation to the end point, and simply spend fewer resources (let's say, 1000x fewer resources) to coordinate the task, then, behold 1000x cost reduction!
Also, you don't have to buy it! (the user bought their phone/computer and pays for the power)
I agree it's a bit handwavy. I suppose the remaining cost is the cost of transmitting more data?
But regardless, it is a good point: we are absolutely crazy not to be taking advantage of all the free computing and power that our users have purchased for us to use (and are paying the costs of upgrades and maintenance for). I've felt this way for a while (especially when OnLive came out), but it seems that servers have so far been cheap enough that it's been cheaper to buy more servers than spend valuable engineering time making your app distributed.
If someone can find a vertical where they can penetrate and provide real business value, they'll do well.
Those reasons have also been widely known for many years. They boil down to the fact that enterprise software companies optimize for their ability to sell to decision makers who never actually use the software. See https://email@example.com/msg001... for a detailed description. See http://futureofwork.glider.com/why-enterprise-software-sucks... for verification that this is not simply an isolated disgruntled developer's opinion.
And modern B2C-oriented startup thinking doesn't get it. The sort of Lean/MVP/failfast thinking doesn't work so well when you're building for a half-dozen meetings spaced out over weeks, to match arcane localization requirements for the target customer. The reason startups rarely penetrate is because it's slow, hard, and painful - the sort of thing where big deep-pockets entrenched companies have a huge advantage.
Worse, young startup founders don't have the enterprise experience to grok why everything is so slow and so hard. It's easy to look from the outside and think those silly enterprise people must be stupid and/or malicious to make such an opaque maze of red tape. Hardly! The enterprise is filled with smart, committed, hardworking people who almost inevitably wind up in the same boat, across the many enterprise verticals.
Enterprise is valuable because it's expensive. It's expensive because it's really, really hard. Don't forget that.
On the other hand, if I can make it work as a founder, it's going to work huge.
As an example, according to a software engineer's review on Glassdoor, Concur's web app is about a million lines of classic ASP (and as a user of it, I can vouch for how crappy it is)... yet their mobile app is really slick and user-friendly. Netsuite is another company gaining momentum, with their premise essentially being that companies want an ERP that doesn't require the infrastructure or overhead that Oracle or SAP do. Netsuite, however, doesn't do the same kind of "consulting" as the big guys and you basically get what you get.
Big companies tend to be risk averse, and taking a risk on good software that can't be molded to existing business processes can sometimes mean it's not actually good software for that company ... or so the perception goes.
Y Combinator breaks the two out as two parts of the same RFS.
This could be great news for social media consultants who have seen their wells run dry. Now these consultants can set their sights on convincing companies to add We Accept Bitcoin buttons to their websites.
But I believe a16z is thinking too small. Companies need to look beyond Bitcoin. Personally, I'm tired of the Benjamin Franklin branding on the $100 bill. Bring on the modern brands. I for one am looking forward to the day when I can convert all of my hard-earned money into LouisVuittonCoin and PepsiCoin.
Bitcoin is the interesting one, as I would love to know whether in 10 years' time we will look back at it with a wry smile as a fad, or whether it ends up being something everyone uses.
The list is nothing more than some of the things they find interesting.
If that is true, I don't see the point of the article or that it warrants much discussion.
I don't think that is the intent of the article though.
I took it as they expect these 16 themes to yield a lot of the new ideas they will invest in. If that is the case then these 16 are quite arbitrary and I don't see why these were chosen.
Most VC firms have an investment thesis. In it they lay out the kinds of markets and companies they would like to invest in, and why. It helps guide new associates and partners on where and what to look for.
I assume you meant to say: "You never need to share the car WITH strangers..."
I love the idea, though.
Coming from a smaller town, it's pretty hard to fathom a helicopter-for-hire service ever getting off the ground where I live. So it really made me think!
If you can really cut a 2 hour commute to 6 minutes for the cost of $99 USD and make money doing it, I imagine you will do quite well. There are many people in NYC who value their time much higher than $49.50 an hour.
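The $49.50/hour figure above divides the $99 fare by the full 2-hour commute; strictly, the time saved is 2 hours minus the 6-minute flight, which pushes the breakeven value of time slightly higher. A quick check:

```python
# Check the comment's $49.50/hr figure against the stricter calculation.
FARE = 99.0
COMMUTE_HOURS = 2.0
FLIGHT_HOURS = 6 / 60          # the 6-minute flight

naive_rate = FARE / COMMUTE_HOURS            # 49.50 $/hr, as in the comment
hours_saved = COMMUTE_HOURS - FLIGHT_HOURS   # 1.9 hours actually saved
breakeven_rate = FARE / hours_saved          # ~52.11 $/hr breakeven

print(round(naive_rate, 2), round(breakeven_rate, 2))
```

Either way the conclusion holds: anyone valuing their time above roughly $50/hour comes out ahead.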
Maybe it's an attempt to influence entrepreneurs to create ideas in those 16 areas.
I mean, only those 16 areas? We are in the middle of some unprecedented cultural shifts, and those are the 16 areas to focus on?
It would be interesting to see potential tech investment areas related to your list of unprecedented cultural shifts.
(See Internet of Things category)
Which at a more abstract level means focus on the goal not the process.
LOL, I love these stories, especially when featuring games like these:
> [...] Tomorrow? To understand your personal diagnostic data, you might soon depend more upon an iPhone app developed in a garage than on your local MD.
This garage theme is annoying. The fact that Jobs and Woz had a garage ruined garages... Seriously. They literally changed what a garage means in the post-Jobs era. If your father owned a garage and you didn't come up with (at least) Dropbox, you're a loser!
On a more serious note now... The author seems to prefer applications written in a garage for measuring things like blood glucose levels over machine-based lancets. If he were diabetic, I wonder, would he use an iPhone application to measure his blood glucose levels, or go to the MD?
So, not as silly as you might think.
Actual diabetics I know do occasional in-office tests, but for routine testing use at-home personal testing kits, which one could easily imagine syncing to devices and online services the way personal fitness trackers do. With severe diabetes, you need monitoring more frequently than is practical with in-office-only testing.