How Doomed is NetApp? (storagemojo.com)
142 points by PaulHoule on Apr 18, 2015 | 67 comments



There is one thing that Netapp does very well, which nobody else can really deliver:

A file server that can serve over NFS and CIFS, while simultaneously serving block traffic over FC, SAS, and 10/40 Gig Ethernet. With snapshots, decent performance, and, once set up, it can be left alone for three years (at which point it's either too small or needs a new maintenance contract).

Nimble can do small, cheap, and easy to configure. But they can't do fast (unless you are just doing small VMs), they can only do block storage, and FC is a beta product. No real HA.

Pure: fast, but limited to 250 TB. No metro sync. Block only.

Nutanix needs an infrastructure upgrade to work at any decent speed. Limited to VM storage. (I think there is a licensing cost for that from VMware too.)

However, that's not to say that NetApp is the only show in town. They can be utterly shit for many tasks. More importantly, if you let them, they'll bleed you dry. Most people reading this will never need a system that does both block and file storage. But medium-sized boring companies need them.

IBM with GPFS has an interesting product; it's just marketed and supported by morons. It's fairly simple to create a global file system, with geographic data affinity, that scales linearly when you throw hardware at it. It seriously is amazing. If it wasn't so held back by IBM, you could have a simple globally shared filesystem across your entire cloud infrastructure that's fast (unlike EBS/EFS/S3).

Object storage is another matter. Object storage is another way of saying that globally coherent, fast POSIX file access is hard, so here are some thin layers over a key-value store; you do the rest.

Some people do it right (S3, Cleversafe); a lot of people do it wrong. There are important tradeoffs, like latency, data affinity, or mutability of data. Like a NetApp, no one system will do everything at the right price.
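The "thin layer over a key-value store" point can be made concrete with a toy sketch. This is an illustration of the general idea only, not any vendor's implementation: objects are immutable whole-value blobs addressed by key, "directories" are just prefix queries, and everything POSIX-like (locking, rename, partial writes) is left to the client.

```python
# Toy object store over an in-memory key-value backend. Whole-object
# put/get only: no partial writes, no rename, no POSIX semantics.
class ToyObjectStore:
    def __init__(self):
        self._kv = {}  # stand-in for a distributed key-value backend

    def put(self, bucket, key, data: bytes):
        # Objects are immutable blobs: a put replaces the whole value.
        self._kv[(bucket, key)] = bytes(data)

    def get(self, bucket, key) -> bytes:
        return self._kv[(bucket, key)]

    def list(self, bucket, prefix=""):
        # "Directories" are just key-prefix queries, not real metadata.
        return sorted(k for (b, k) in self._kv
                      if b == bucket and k.startswith(prefix))

store = ToyObjectStore()
store.put("photos", "2015/cat.jpg", b"\xff\xd8...")
print(store.list("photos", "2015/"))  # ['2015/cat.jpg']
```

Everything beyond this (consistency across replicas, mutation semantics, latency) is exactly where the tradeoffs mentioned above live.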


I spoke with both IBM and NetApp vendors several years ago, just seeking an appliance. We didn't want phenomenal capacity, but needed something with good performance.

I've never dealt with such completely incompetent salesmen before. Other vendors I've engaged have had salesmen relentlessly pursue me for the sale, even when it was very likely a small contract to them.

Both NetApp and IBM left me having to do the chasing. I don't mind doing that to some extent, but the amount of chasing I was having to do left a bad taste in my mouth and pretty much ensured I've no real interest in dealing with either vendor again. I wonder if companies really realise the importance of care even on small contracts. People change jobs all the time these days, and just because they are buying something small from you now, doesn't mean they're not going to be interested in something very large in the future. How you treat them will have a large impact on whether you even get a look in on something bigger.


I was spending almost $2 million on a 1PB solution and I couldn't get NetApp to call me back.

EMC knew the name of my senior admin when they called me back in 2 hours.

Guess who got my money?


Heh, I had almost the direct opposite experience. However, we only had the NetApp for VMware and home directories (and some cache stuff); it only added up to about 300 TB. The petabytes were scratch-built jobbies (as no appliance maker used the super-dense 60-drives-in-4U cases, despite NetApp making them).


The reverse is true, too. After IBM couldn't even send a quote for a seven-figure purchase, we went with Sun's amd64 servers. For years, they got additional business because the process was to email or call & wait 10 minutes for a detailed quote which was correct on the first try. Every so often we tried to get IBM or HP involved but their sales guys talked us out of it within the first 15 minutes.


Yes. NetApp products tend to be really great but dealing with their sales guys directly sucks.

Next time, contact a VAR like Vandis who will size it, sell it, and help (or just do) set it up.


I agree. I had to chase them for more than two weeks to get a disk-shelf quote, which was overpriced by around $3k compared with a local vendor that buys from them. I bought from the vendor, not directly from NetApp.


Sounds like IBM... They can be pretty dysfunctional.


(disclaimer, former NetApp and Nimble VAR)

NetApp gets beaten regularly for being too slow to new markets. Compellent was beating them in deals every day with Automated Storage Tiering (AST) and NetApp didn't respond (Compellent was acquired by Dell). If SSDs hadn't come along, Compellent would still be eating their lunch.

Then Nimble started beating them in deals every day with cheap SSD. NetApp only offered big monolithic PCI-E flash cards with a huge markup / licensing cost, while Nimble let you leverage "cheap" SSDs in a direct and scalable way. Nimble would give you a SAN with SSDs for $20k, where NetApp couldn't include their PAM flash card in any array under $40-50k. $20k of disk and SSD beats $20k of just disk most places. The new tech (SSDs) took NetApp several years to leverage. Even though NetApp's tooling, software, and capabilities were light years ahead, people could "make do" with Nimble and cheap SSDs. NetApp has only recently added reasonably priced SSDs to the mix.

NetApp regularly billed itself to partners as a "software company" who differentiated itself with software. They still have some of the most robust software in the business for encryption, replication, compliance, and orchestration. Their suite blows Nimble out of the water. But more and more, people are OK with running cheap, dumb, fast SANs and leveraging software on the other side (Windows, OpenStack, VMware) to handle storage intelligence.

As a hardware company, they're too slow and charging huge margins for consumer-available parts won't work forever. As a software company, their rivals in the non-storage world are quickly preparing to make them obsolete with server-side storage intelligence. Tough place to be, outside of enterprises who need your software stack (custom replication / cloning, retention, etc).


Nexenta can do all this on commodity hardware. Also check out what http://www.ixsystems.com is up to if you want to see what they have done with FreeNAS.


I have several peers who use Nexenta; whilst I love ZFS, unless I had the staff budget I'd never recommend it. Considering it's just a thin wrapper around Samba and Linux (or illumos, I can never remember), the support should be much better.


I agree with what you said. I used NetApp around 2007 in one bank as iSCSI storage for a SQL Server solution, and later on in 2008 we selected it for a small online bank (80 employees, 500 financial advisors, 20-30k customers) as a crucial part of our infrastructure. We were able to design the IT infrastructure around it, using it as CIFS file servers for internal Windows users, NFS datastore for the VMware infrastructure (we were almost all virtual), and iSCSI block access for some special machines (e.g. SQL Server), with the relevant software in place for backups and with a full sync MetroCluster replica between the two sites. No FC host access. It was clearly expensive, and maybe not that exceptional performance-wise, but I'm quite sure even today you can't do it with all the new fancy storage vendors.

At that time, we could have done it with EMC, but our experience with the sales process was the opposite of the one reported by some here: during the first purchase for SQL, I was able to find a NetApp pre-sales technical guy who was very competent, while I struggled to get the right replies from an EMC reseller.

EDIT: I think I should probably add that the competent NetApp pre-sales guy moved to Data Domain, then Fusion-io, and now he's at Pure Storage.


Yes, Netapp does CIFS alot better than the rest. But you can always setup a File server cluster and run it on any storage you want. A bit of a hassle on setup but there is an alternative.


But the important thing to remember is that that kind of setup requires OPEX to run.


I am currently struggling with NetApp support. After two weeks of constant effort they finally admitted to the disk latency.

More digging revealed that they sold an underperforming system at an inflated cost to run MS Exchange 2010. NetApp does not add any value, since Exchange does not need or actually utilise any of the features.

NetApp's WAFL (Write Anywhere File Layout) and its insistence on using 4k blocks for disk I/O is the biggest design fault. Most systems nowadays use bigger I/O blocks to get more throughput with fewer IOPS.

Files on the disk also become highly fragmented. Ironically, they recommend keeping the defragmentation process turned off because their controllers do not have enough compute power to run it in production. This is completely dishonest but very clever, as it helps them sell more disks if the customer doesn't dig deeper or doesn't have the competence to ask the right questions.

Another laughable design choice: your SSD pool size is very limited (12 TB on a FAS6000-series filer).
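The throughput-vs-IOPS point above is just arithmetic: throughput = IOPS x block size, so for a fixed IOPS budget, larger I/O blocks mean more MB/s. A quick sketch, with a hypothetical IOPS budget rather than any measured NetApp figure:

```python
# Back-of-the-envelope: throughput = IOPS x block size.
# The 50k IOPS budget is illustrative only.
def throughput_mb_s(iops: int, block_bytes: int) -> float:
    """Throughput achievable at a given IOPS rate and I/O size."""
    return iops * block_bytes / 1e6

iops_budget = 50_000
for block in (4_096, 32_768, 65_536):
    print(f"{block // 1024:>3} KB blocks: "
          f"{throughput_mb_s(iops_budget, block):8.1f} MB/s")
```

Equivalently, to sustain a given MB/s, a filer doing fixed 4 KB I/Os must complete 16x as many operations as one doing 64 KB I/Os.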


>NetApp do not add any value since Exchange do not need or actually utilise any of the features.

I never used Exchange on Netapp, but aren't they supposed to have a "SnapManager for.." tool to coordinate the Exchange backups with the storage snapshot process? I was using it for SQL Server.


Yes, they do have SnapManager, but backups mostly keep failing. With Exchange 2010 you have database, mailbox, and item-level redundancy and protection, so I don't actually see the value.


HA != backup

You can have whatever redundancy you want, and maybe even lagged copies, but if you need to restore something months old, you still need backups.


Agreed

Exchange does not need NetApp HA capabilities if you use multiple copies of the database.

In our case, NetApp sold SnapManager for 30 remote copies of an already triple-redundant (one remote copy) database, and then tape backup at month end.

In my view, they just fooled an unsuspecting customer and oversold everything.

That would have been fine too, if it had worked.


I think you should take a look at Nimble, especially the slide on Exchange efficiency; it's on a Microsoft page, btw. Disclaimer: I wrote the plugin to back up Exchange at Nimble.


Thanks, I think we will keep this in mind for the next iteration. Currently we are neck-deep in NetApp due to an idiot of a CTO.


As a former NetApp person this makes me sad. I really respect Robin's opinions and have been following Storage Mojo for years. That Brian Pawlowski left to join Pure[1] is perhaps the saddest thing. During my tenure there he was pretty much the heart and soul of engineering. That he left to join a company that is succeeding with a product that he was tasked with building back at NetApp [2] boggles my mind.

Connecting the dots on that one leaves me quite amazed.

[1] http://www.channelnomics.com/channelnomics-us/news/2402365/p...

[2] http://www.forbes.com/sites/jeanbaptiste/2013/02/19/how-neta...


FWIW, I think Robin is just parroting information the Register incorrectly reported some time ago. The departure of beepy is certainly true, though I think speculation as to his departure's significance has been a bit over-dramatic.

I respect Robin's opinions, but these days I don't think his finger is as firmly on the pulse of the industry as it once was.


"File Server Obsolescence" seems wildly unlikely.

"Commodity software defined storage" (IBM selling XIV on generic hardware, Ceph, and so on) seems far more likely.


>"Commodity software defined storage" (IBM selling XIV on generic hardware, Ceph, and so on) seems far more likely.

Ceph and the other distributed things are pretty great if what you need is block devices. RADOS, by all accounts, is pretty good. CephFS, on the other hand... not so much. It's hard to make a distributed coherent shared filesystem.

If you need a filesystem that can be mounted by multiple clients, NFS is still dang hard to beat.

NFS 4.1 with pNFS looks like it might take away a lot of the advantages a dual-head NetApp server has over a single zfs server.

But NFS itself? NFS has never been a real force in "production" - and that will continue.

Here in corp-land, though? and in some lab uses? where we really need a shared filesystem? NFS is not going anywhere.
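The "shared filesystem for many clients" case the comment describes is about as simple as storage gets to operate, which is much of NFS's staying power. A minimal sketch on Linux, assuming nfs-kernel-server is installed; the hostname, network, and paths are examples, not anything from this thread:

```shell
# On the server: export one directory to the local subnet.
echo '/srv/shared 10.0.0.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra

# On each client: one mount, and every machine sees the same tree.
mount -t nfs server.example.com:/srv/shared /mnt/shared
```

That one-export, many-clients model is exactly what object stores and block-only SANs don't give you out of the box.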


All I can say is that I am moving everything I can from my NetApp to cloud object storage, because of 2 cents vs. $2 per GB of storage. Also, NetApp wouldn't even be my third choice on the next purchase of storage.
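The economics behind that "2 cents vs. $2" comparison are easy to sketch. All numbers below are illustrative round figures (not quotes from anyone in this thread), comparing up-front appliance capex against pay-as-you-go object storage over a typical three-year maintenance horizon:

```python
# Hypothetical cost comparison: appliance capex vs. cloud object storage.
capacity_gb = 100_000            # 100 TB of usable capacity
appliance_per_gb = 2.00          # $/GB, paid up front
cloud_per_gb_month = 0.03        # $/GB/month, S3-class pricing of the era
years = 3                        # typical maintenance-contract horizon

appliance_total = capacity_gb * appliance_per_gb
cloud_total = capacity_gb * cloud_per_gb_month * 12 * years

print(f"appliance, up front: ${appliance_total:,.0f}")
print(f"cloud over {years} years:  ${cloud_total:,.0f}")
```

Even at 3c/GB/month rather than 2c, three years of cloud storage undercuts a $2/GB appliance, and the gap widens if the data is cold enough for Glacier/Nearline tiers; of course this ignores egress, latency, and performance, which is the rest of the thread's argument.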


So what do you do when you need fast local storage?


We're looking at on-prem object storage (Cloudian on commodity hardware), public offerings (Azure, S3), and NAS gateways: Avere, Panzura, CTERA, TwinStrata. There are several options depending on your use case. With dedupe and compression, we can get on-prem pricing that is very close to cloud providers, with the benefit of controlling the network end-to-end.


He didn't say he was doing anything differently there.


Running on-premises VMware (but sadly my NetApp, a FAS2240, isn't that fast).


"All I can say is that I am moving everything I can to cloud object storage from my Netapp because of 2 cents vs. $2 for GB of storage."

Where are you getting 2 cents ?

S3 is ~3 cents, Glacier and Nearline are 1 cent... what are you buying that costs 2 cents? (genuinely curious)


Got the price a bit wrong, sorry. At my company I use S3 at 3c per GB,

and I am starting to try out Google Cloud Storage; I think it outperforms AWS.


Google Cloud Storage's DRA - same durability, slightly lower availability SLA - is 2c/GB.


what are you doing with your storage?

I don't get all this relationship between "cloud object storage" and the doom of NetApp as stated in the StorageMojo article. For me NetApp is, as KaiserPro said, a multi-purpose system where I can do FC, iSCSI, NFS, CIFS, with HA, snapshots, etc., where I can host my VMs, my Windows files, my block-access disks for databases, and so on. It's not the place where I'd just put files that I'll serve via an S3-like API interface...


What I don't understand is, on their flagship product, an all-SSD array, they claim 500,000 IOPS (sustained I/O rate), yet I have heard of many people doing more than 1 million I/Os per second in HTTP and in databases. If others can do a million, how is it that something seemingly as fast as can be, an all-SSD system, can only do half that, at the pure I/O level no less?


I think the 4 KB block size is the biggest design fault and performance killer for databases. They claim to store blocks serially if the I/O size is larger than 4 KB, but in practice I haven't seen it working very well.


RAM caching.


Yes, but it should be noted that to see this speedup you can only do read caching. You can do write buffering in RAM, but you need to mirror it to another node or you risk data loss in a power-outage situation. RAM-over-network is roughly equivalent to SSD performance. It should also be noted that not all SSDs are created equal: you can get an SSD that will do tens of thousands of IOPS, or you can get SSDs that do over a million IOPS per drive.


NetApp can join Sun and Oracle in the pile of great technology that was massively overpriced and oversold by short-sighted fools.


These companies always seem to find a niche in companies that need hardware that is 100x faster so they can write software that runs 100x slower.


And I remember the Sun X4500 - specifically intended to be a NetApp killer: a bucketload of disk, with ZFS.

Unfortunately it didn't actually work out much cheaper to buy or run.


Codename Thumper. I really, really wanted to like it, but it ran like a dog for our workloads, which flew on Netapp.


That too. Nice try, but ehh no.


What are you talking about? Oracle's main goal with the Sun acquisition was to bolster their Engineered Systems business. The result so far has been quite successful:

- http://www.eweek.com/servers/slideshows/oracle-continues-to-...

- http://www.forbes.com/sites/oracle/2015/03/31/how-engineered...


So successful that Oracle keeps missing their numbers and refuses to tell anyone how their divisions break down, leading analysts to assume it's because the hardware and OS division is a money sink.


Oracle is a massive company, mostly software; it's very hard to keep engineering growth in enterprise software given most innovation is driving prices down.

Keep in mind Oracle's hardware division is a little over $5b, or 14% of their annual revenue. Not likely the reason for their problems. Most of the press has been pretty positive on how they've leveraged hardware compared to their competition (IBM, HP, Dell). [1]

[1] http://www.theregister.co.uk/2015/01/27/oracles_sun_microsys...


The article you quote puts the hardware revenue at 5%, not 14% - but perhaps that is only "Products" excluding "Services"... hardware services revenue should be smaller than products, so 14% still seems wrong.

$5b is less than 2/3 as much as Sun's yearly hardware revenue in FY2009, which was substantially lower than in 2008 due to the financial apocalypse (big finance was busy trying not to go bankrupt instead of buying the best hardware money can buy).

(Unfortunately Sun's 10-K does not break out hardware revenue specifically, but was split up in "Server Products", "Storage Products", "Support Services", "Professional and Educational Services" so it's not obvious to me how the news reports back in the day got their ~$9 billion number for hardware, but at least they're consistent on that)


Nimble, Pure, and Nutanix are delivering easy-to-use, high-performance storage/compute for a fraction of the cost of EMC and NetApp. This is the reason why NetApp is losing clients to Nimble, and EMC is losing to Pure. Cisco should stay on their toes with Nutanix's line of software-defined hardware.


EMC XtremIO is giving Pure a strong run for its money.

EMC's biggest problem in storage isn't on the all-flash side (XtremIO is working as intended); it's that there are fewer reasons to buy the high-end, high-margin (VMAX) storage these days, and EMC needs to come up with a big margin patch... which it thinks it might have with DSSD [1] [2].

[1] https://storagemojo.com/2014/11/11/pure-vs-emc-whos-winning/ [2] https://storagemojo.com/2015/02/18/dssd-hiring-is-exploding/


Companies follow a predictable lifecycle. E.g., over a decade ago NetApp killed Auspex (and then acquired the IP!).[1] It's quite possible that one of these new companies will kill NetApp.

Edit: BTW Auspex had very nice hardware. But it was over-engineered and more expensive than Netapp. And snapshots on Netapp were better than anything Auspex had. So, more expensive hardware, inferior software. Not a good combination.

[1] https://en.wikipedia.org/wiki/Auspex_Systems


Then you need a new sales rep. Nimble markets themselves as a cheaper alternative, but they absolutely are not. I've priced both, and the 2500 is consistently cheaper per TB. Pure isn't even close and doesn't claim to be, unless you're comparing $/IOPS instead of $/GB; and even then, if you find the right NetApp rep, they'll at least make you decide on product features instead of price.


What about Simplivity? Do you have any experience/opinion on them?


I do not! Would love to hear more info; I am really only commenting on the hardware I know/own. But I have heard pretty good things about their replication, and how they are neck and neck with Nutanix. I have UCS for my main sites and Nutanix for my remote offices, and it's working great!


If you have any UCS C240 M3's then you may have SimpliVity options to explore. https://www.simplivity.com/products/omnistack-cisco-ucs/


I'll bet many of those nimble customers go back to netapp.


I went to a cloud dev summit a year ago, and they flat-out said on stage that Microsoft and Amazon were underpricing cloud to kill competition. So this is just as expected...


I think there is a fine line between underpricing the competition and having exorbitantly high margins.


"Microsoft and Amazon were under pricing cloud to kill competition"

Who can forget the stories of Microsoft offering their browser for free in order to cut off Netscape's air supply:

http://en.wikipedia.org/wiki/Steven_McGeady


I still think NetApp has a shot with large storage solutions. I got the chance to intern at one of the newer companies last summer and most of their products revolved around a hybrid solution of flash and hard disk. One of the customers kept asking about a storage heavy solution as opposed to having more flash. Their solution was to buy more appliances which may not be ideal for everyone.

So at the end of the day, if you still need lots of disk, I suppose NetApp has a lot to offer.


Sure, if all you need is block or object storage there are many alternatives. But if you need NAS which satisfies basic requirements (performant, NFS, HA, snapshots, writable snapshots) it's hard to find better than Netapp. All I've seen is poorly integrated NAS gateways on top of block storage. Perhaps Oracle ZFS is somewhat close.


Cleversafe seems to be eating NetApp's lunch.


Different products for different things.

NetApp are not growing like they should, but it's not Cleversafe nicking the money.

What Cleversafe does that is unique is create a global, encrypted, highly available, redundant namespace. If you want a fast, safe, redundant backup system (and don't mind using API access), then Cleversafe is your go-to product.

Netapp doesn't do anything similar.


You don't think the shift from traditional RAID/NAS to object storage is really what's hurting them?


Not really. I think it's Nimble and the like. Much cheaper, better interfaces.


Only on a Saturday would this make it to the top of HN.


I am curious: why do you say that?


This is just a guess, but maybe because our CIOs all screen our web traffic and would be offended to learn they wasted so much money on these products?

Crap I left my VPN on!!!



