
I used to build my own stuff, but now I buy Synology. It's really convenient, has great UI and support, a bunch of different packages you can install, etc.

Although I ran into a problem a couple of months after setting it up where the NAS became so slow it was unusable. I saw that my IOWait was 100%, so I figured it was a disk, but nothing indicated a problem. Eventually I was able to get to a log showing that one of the drives was throwing some weird error messages, so I bought a new drive from Amazon that arrived the next day, pulled the bad drive, and replaced it; instantly everything was okay again.
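
In case it's useful to anyone: the same check can be done by hand with smartctl. A rough sketch (assumes smartmontools is installed and the drives show up as /dev/sda.../sdd, which won't match every box):

  # smart_check.py - rough sketch: ask each drive for its SMART health verdict
  # Assumes smartmontools is installed and drives appear as /dev/sd[a-d].
  import subprocess

  for dev in ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]:
      result = subprocess.run(["smartctl", "-H", dev],
                              capture_output=True, text=True)
      verdict = "PASSED" if "PASSED" in result.stdout else "needs a closer look"
      print(dev, verdict)

The kernel log (dmesg) is also worth grepping for ata errors, since that's usually where these weird messages end up.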

I would have expected something on Synology's status programs to show a problem, but they were all green, so that was annoying.



  > It's really convenient, ...

  > ... would have expected something on Synology's status
  > programs to show a problem, but they were all green, 
  > so that was annoying.
So perhaps not all that convenient.

I went down the QNap path a couple of years ago, on the assumption it'd be robust, well patched, more convenient, and basic functionality would work / be tested. I was wrong on most counts, and am now basically scared off these low-end SOHO systems entirely.


It's convenient most of the time; then again, building your own stuff is inconvenient most of the time too. It's all about trade-offs.

For example, a while back I had to track down an issue on a server with Nagios constantly triggering memory-low errors. Turns out that when you stat a lot of files, XFS will load all of that file information into cache in the kernel's slab. This causes some RAM stats to be skewed. This RAM will be instantly freed if needed, but some system stats count it towards "RAM usage."
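
To make that concrete, the reclaimable slab is visible in /proc/meminfo, and counting it as "used" is exactly what skews the alert. A minimal sketch (Linux-only, field names as they appear in /proc/meminfo):

  # slab_vs_available.py - show how much "used" RAM is really reclaimable slab
  # (dentry/inode caches such as xfs_inode) that the kernel will drop on demand.
  def meminfo():
      fields = {}
      with open("/proc/meminfo") as f:
          for line in f:
              key, rest = line.split(":", 1)
              fields[key] = int(rest.split()[0])  # values are in kB
      return fields

  m = meminfo()
  print("MemFree      : %5d MB" % (m["MemFree"] // 1024))
  print("SReclaimable : %5d MB  <- slab the kernel can free under pressure" % (m["SReclaimable"] // 1024))
  print("MemAvailable : %5d MB  <- what a low-memory check should look at" % (m.get("MemAvailable", 0) // 1024))

(Writing 2 to /proc/sys/vm/drop_caches as root drops those dentry/inode caches, if you want to confirm that's where the RAM went.)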

My question is: How does one incorporate all such edge-cases into something that is user-friendly? How many system admins even know how to track down such an issue (in the absence of a flashy UI)?

Unless you want to vertically integrate the system from hardware => kernel => UI, you're bound to have these sorts of issues, where everyone is only paying attention to what's happening in their own little area of concern and multiple decoupled systems can end up interacting poorly (even when their local actions make sense to them).


Re cached memory, isn't that how all file systems work across all the major operating systems? The kernel will cache files to reduce disk IO but will immediately free that memory if a requesting application allocates more memory than what's available as free memory alone. Which is why file servers perform best with oodles of RAM, and why one should always look at both free and cached memory when calculating available memory on more general-purpose systems.
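
The "free + cached" arithmetic is easy to script; a rough sketch (on kernels that have it, MemAvailable already does a smarter version of this calculation for you):

  # available_ram.py - naive "free + buffers + cached" estimate vs the kernel's own
  def kb(field):
      with open("/proc/meminfo") as f:
          for line in f:
              if line.startswith(field + ":"):
                  return int(line.split()[1])
      return 0

  naive = kb("MemFree") + kb("Buffers") + kb("Cached")
  print("naive available : %d MB" % (naive // 1024))
  print("MemAvailable    : %d MB" % (kb("MemAvailable") // 1024))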

Or was your NAS behaving subtly differently from the standard above? (e.g. how ZFS cache doesn't appear as FS cache in ZoL despite being freed in much the same way as the above)


This is a specific XFS behaviour. In this case, the slab was using up 4GB of RAM on an 8GB machine, and all of the normal tools were telling us that we were running out of free RAM. No such problems on other file systems.


What they offer is a good form factor with lower power draw. What would you suggest for a homebrew build that would match the size and power consumption?


Form factor?

A case like these: http://www.newegg.com/Product/Product.aspx?Item=N82E16811219... http://www.newegg.com/Product/Product.aspx?Item=N82E16811219... http://www.newegg.com/Product/Product.aspx?Item=N82E16811163... http://www.u-nas.com/product/nsc800.html

Low power usage?

Well, many Synology units use Intel Atom CPUs. Anyone can buy a small board with one of those same Intel Atom CPUs and stick it in one of those cases.

Or build your server with NUC parts, which use laptop CPUs, i.e. <15W parts.

Even higher-end CPUs these days idle at really low wattage, so your NAS won't use much power when idling.


Well that's disappointing.

I just bought a QNap TS-251+ to replace my Microserver FreeNAS since I was tired of all the sysadmin work, needed something lower profile, and wanted features like automatic Google cloud storage sync. It seemed like QNap had a boatload of excellent features, and I was really hoping it would work without glitches. But now that I'm stuck with it... we'll see...


Patch support for SOHO NAS systems is miserable. We bought some Cisco and Qnap NAS systems a few years ago. Every feature I touched had some bug or another, or lacked something I needed, or a remote change (Google sync, Apple Time Machine) broke it and it was never patched, and so on.

After six months they were all reconfigured to expose a single iSCSI drive and a Linux box did everything else.


I got a low-end 2-bay qnap for home and quickly ended up putting Debian on it (http://www.cyrius.com/debian/kirkwood/qnap/)

It's very well supported (running kernel 4.3 from testing now, the only thing that doesn't work is the crypto accelerator) and it gives you a lot of flexibility while keeping the advantages of the hardware (low cost, footprint and power consumption).


That would have been an interesting alternative, yes. (The NAS systems are long sold or dead, but the Linux boxes that replaced them are running Debian too.)

The hardware is decent for the price, it's just the software that's problematic.


Very similar to my experience. A multitude of features I would never want to use, partly because I have doubts about them (security, stability, etc.).

My primary need was native (robust, sane, GUI-driven) iSCSI presentation ... which it completely failed to provide out of the box. That is, with a single iSCSI target, any significant activity over a GbE link would cause the QNap to crash. QNap support guided me towards a beta release of their software -- this solved the immediate problem, but I've never felt comfy upgrading (a one way process) from that release for fear of breaking iSCSI or other basic features. A less than ideal situation.


I've been using a 4-bay QNAP for about 5 years now (don't remember the model offhand). The only thing I'd recommend is that you schedule SMART tests and make sure you have the email functionality configured properly.

During that time I've had to replace two drives (I'm still running 4 x 2TB in RAID5), and rebuilds take me about 10 hours.

It's not a perfect device, but it still fits my needs for now. Eventually I'll probably build my own, likely using FreeNAS, but then I'd probably want to at least go with 8x6TB drives, and I'm not ready to spend that money yet.


Well, I've had a QNAP TS419 for years. It's been great: I've never had a problem, it receives regular updates, and it has a decent package manager (Optware) and a decent UI.

My one complaint is that when I use the web-based file browser sometimes it tries to generate thumbnails for all the items in the current directory and ties itself up for a few minutes if there are lots. I've only encountered it a few times because I use SSH.


The prices for the larger ones are ridiculous though. $800 for a 5-bay NAS without HDDs is a lot. You could easily build 2 or more NASes with the same specs for that money.

Also, having used a Synology, I think it's a niche product. It's too powerful for people who simply want to put files and backups on their network. But for people who like to tinker and control things, it's too restricted. You can basically only install software that has been ported to Synology. Getting it to do automated tasks the way you want can be annoyingly complicated. I would recommend it to an advanced customer who needs features like Owncloud or VPN without the burden of maintaining a system.


For me it was a matter of wanting basic network backup and storage as well as DVR/storage for a couple of IP security cams I have around the house. I know I could have set up something from scratch but instead I just went with the Synology for ease of setup, relatively affordable price (didn't have enough of the right parts laying around for a franken-build), and as you say, less burden of maintenance.

I wouldn't necessarily want to depend on it for something "mission critical" but it's a convenient solution along with online backup for media storage and security cam management. My only regret is not buying one with a more powerful processor since this one isn't really capable of transcoding media. Instead, I have all of my home media backed up to the Synology and run a Plex server on my primary desktop. The Plex server reads the media from the NAS and can then send to Chromecast or be accessed from Plex or Kodi running on my Android TV.


I was pricing out the parts in this article, and it was up near $2500 from Amazon: a $250 motherboard, 4 x $60 8GB DIMMs, etc.


I was talking about the 5-bay Synology model.


XPEnology lets you run Synology's DSM on commodity hardware -- I have a 212j, and run XPEnology on an old Optiplex I had lying around for Plex with transcoding. Pretty neat.


Also got a Synology -- pretty UI, but I keep running into weird limitations (e.g. only two timed snapshots, no way to get standard Linux tools such as rsnapshot, DLNA is slow to list files, PhotoStation is unusably slow, and the filesystem layout is weird and doesn't logically match what is presented in the UI).

I also built a multi-VM box (with nested VM support): https://pcpartpicker.com/user/okigan/saved/#view=nmQD4D

Btw, pcpartpicker was useful during my build; the Node 804 was a case I considered, but I wanted an external 5.25" bay.


I like pcpartpicker quite a bit, especially their $/GB search feature for hard drives. I do wish they would include some basic server hardware for enthusiasts though; big ECC DIMMs are getting affordable, and it would be nice to do some price comparison.


You can buy a FreeNAS box built by the same company that runs the project on Amazon. A bit pricey, but it's been perfect for me for over a year.


I cannot recommend prosumer-level Synology gear after having had nightmare after nightmare with it. The chips used outside their pro line are just not powerful enough to rebuild large arrays in a reasonable amount of time.

RAID6 rebuilds of a 12 x 4TB drive array take nearly a full week on my DS1812+.

I'm in the process of moving everything to an OmniOS server and will not look back.


The Linux kernel can do some really strange things when a drive fails. I had an external drive fail to the point where, if I wrote to it, commands like ls would fail and be unkillable until I rebooted the system. I wonder if that's similar to what you were experiencing, masked by the shiny UI.


It's not limited to Linux - the problem is that, while you can go around resetting increasingly large parts of the IO stack, lots of devices in the path may or may not actually respond to "off" or "reset" when they're in a bad state, so a drive being in a bad state can cause the entire IO path it's in to go south for the winter if nobody is smart enough in the path.

This is one of the actual reasons that enterprise drives can be better - almost all of them support an equivalent of TLER (time-limited error recovery), which is basically a programmable timeout on read/write errors.

Most parts of the IO stack, hardware and software, on every platform I've used deal a lot better with explicit errors than with a device that hasn't visibly vanished but is acting like a black hole.
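
For drives that do support it, the timeout can be set from userspace; a hedged sketch using smartctl's SCT Error Recovery Control command (assumes smartmontools, and that the drive actually honours SCT ERC -- many cheap desktop drives don't):

  # set_erc.py - cap error recovery at 7 seconds so a dying sector becomes a
  # prompt I/O error for the RAID layer instead of a hung bus.
  import subprocess

  for dev in ["/dev/sda", "/dev/sdb"]:   # adjust to your drives
      r = subprocess.run(["smartctl", "-l", "scterc,70,70", dev],  # units: 0.1s
                         capture_output=True, text=True)
      if r.returncode != 0:
          print(dev, "didn't accept SCT ERC; consider raising the kernel's per-device timeout instead")
      else:
          print(dev, "error recovery capped at 7s read / 7s write")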


Out of curiosity what RAID setup would you use for replaceable media (e.g. blu ray movies)?

Is there a Synology solution for checksums to prevent bitrot, which ZFS advocates talk about a lot?


For bluray movies I simply use JBOD and a worldwide distributed network of peers who keep hashed pieces of the files...


Synology are adding Btrfs support in the next version of their OS, for what that's worth.


Except they are running it on top of mdadm and LVM, so it can only detect corruption; it cannot self-heal like Btrfs can when it's in its own redundant configuration.

https://www.reddit.com/r/synology/comments/3qpezu/btrfs_and_...

https://www.reddit.com/r/synology/comments/402m8d/so_i_was_g...


Can existing Synology boxes get that via an upgrade, or will it only be available on newer models?


"Btrfs is a modern file system developed by multiple parties and now supported by select Synology NAS models."

The higher-performance models, I would expect. The DS716+ and various rack-mount models are presently listed... I think you would need at least an Intel-based model to support this when DSM 6 is released.


You can run periodic scans to look for bitrot. I think this is the standard way of handling it; my ReadyNAS before this had the same functionality.
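
On the Linux md RAID these boxes use underneath, that periodic scan is just a scrub kicked off through sysfs; a minimal sketch, assuming the array is md0 and it runs as root:

  # md_scrub.py - start an md "check" scrub and report the mismatch counter
  md = "/sys/block/md0/md"

  with open(md + "/sync_action", "w") as f:
      f.write("check\n")    # read every stripe and compare data against parity

  # ...later, once sync_action reads "idle" again:
  with open(md + "/mismatch_cnt") as f:
      print("mismatched sectors:", f.read().strip())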


How can it look for bitrot without any checksum info?


Instead of running some homemade script, you'd be better off using something like SnapRAID (http://www.snapraid.it/).


RAID has redundant information so they use that.


This doesn't unambiguously resolve mismatches, though. If it's a mirror, you have two copies of the data: which one is bad? All RAID 1 knows is that they're different. If it's RAID 4/5/6, again, all it knows is that there's a mismatch; it doesn't know whether the data strips are wrong or the parity strips are wrong. The way ZFS and Btrfs deal with this is that the data is checksummed, and the fs metadata, which includes the parity strips, is checksummed. So there's a completely independent way of knowing what's incorrect, rather than merely that there's a difference.
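
A toy illustration of the difference (just a sketch; nothing to do with how ZFS or Btrfs actually lay their checksums out on disk): with only a mirror you can see that the copies disagree, but an independently stored per-block checksum tells you which copy to keep.

  # mirror_vs_checksum.py - why "they differ" isn't enough to repair anything
  import hashlib

  block = b"important data"
  stored_sum = hashlib.sha256(block).digest()   # checksum written at write time

  copy_a = block
  copy_b = b"imp0rtant data"                    # silent corruption on one mirror leg

  # A plain RAID1 scrub only learns this:
  print("copies disagree:", copy_a != copy_b)

  # With the independent checksum we also learn which copy is still good:
  good = [c for c in (copy_a, copy_b) if hashlib.sha256(c).digest() == stored_sum]
  print("trustworthy copies:", len(good))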


In RAID6 you can check which of the different drive combinations shows a mismatch and which doesn't. Run through all the combinations and find the one that shows no mismatch; the dropped drive is the one with the bad data. Rewrite it and go on with your life.

This can only detect one bad drive; if you have two, you are toast.
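
Roughly what that brute force looks like (a simplified sketch with tiny byte-string "drives"; real RAID6 does the same P/Q arithmetic across whole stripes, and md itself doesn't attempt this on a check, as the replies point out): reconstruct each candidate drive from P, and the only candidate whose reconstruction also satisfies Q is the one holding bad data.

  # raid6_locate.py - brute-force "which drive is lying?" using P and Q parity.
  # GF(2^8) with polynomial 0x11d, as used by md RAID6; generator g = 2.
  def gf_mul(a, b):
      p = 0
      for _ in range(8):
          if b & 1:
              p ^= a
          b >>= 1
          carry = a & 0x80
          a = (a << 1) & 0xFF
          if carry:
              a ^= 0x1D
      return p

  def pq(drives):
      # P = XOR of all data; Q = XOR of g^i * D_i over GF(2^8)
      size = len(drives[0])
      P, Q = bytearray(size), bytearray(size)
      for i, d in enumerate(drives):
          coeff = 1
          for _ in range(i):
              coeff = gf_mul(coeff, 2)
          for j in range(size):
              P[j] ^= d[j]
              Q[j] ^= gf_mul(coeff, d[j])
      return bytes(P), bytes(Q)

  data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
  P, Q = pq(data)                       # parity written when the stripe was clean

  data[2] = b"CCXC"                     # silent corruption on drive 2

  for z in range(len(data)):            # assume drive z is the bad one and test it
      rebuilt = bytearray(P)
      for i, d in enumerate(data):
          if i != z:
              for j in range(len(rebuilt)):
                  rebuilt[j] ^= d[j]
      candidate = data[:z] + [bytes(rebuilt)] + data[z + 1:]
      if pq(candidate) == (P, Q):
          print("drive", z, "has the bad data; recovered contents:", bytes(rebuilt))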


FWIW Linux software RAID doesn't do that. I think the argument was that differences like this were mostly related to power loss where some disks have the new data and others the old data. And at the block layer, it's impossible to tell which is which and so the code just picks a winner basically at random.

I'm not 100% convinced myself that a 'majority wins' strategy like you described wouldn't be superior, but I can see why they decided otherwise.


Except Synology uses md RAID (Linux kernel RAID), and even in RAID 6 mode it doesn't do this.

It just overwrites the corrupt sector with a new value to make the parity data consistent. It doesn't know which is right or wrong, even though with RAID6 it would technically be possible to determine that.


Depending on what RAID level you are running, you will only know that you have bitrot; you can't fix it, since you don't know which hard drive has the correct information.


Theoretically for RAID6, but not for anything less than that. And in any case I'm pretty sure Linux RAID doesn't implement that. Though, as the owner of a Synology NAS I'd love to be wrong about that.


I am not aware that Synology offers this at the moment.



