From that page:
> The code has 27 occurrences of e-mails: firstname.lastname@example.org or email@example.com in the code.
More information is available here:
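A review process could have caught this mechanically. A minimal sketch of such a scan (the regex and the sample string are illustrative, not QNAP's actual code):

```python
import re

# E-mail-shaped strings in source code are a cheap signal for hardcoded
# accounts and the credentials that tend to sit next to them.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def find_hardcoded_emails(text):
    """Return all e-mail-shaped strings found in a blob of source code."""
    return EMAIL_RE.findall(text)

# Hypothetical sample using the placeholder address from the report above:
sample = 'user = "firstname.lastname@example.org"  # TODO: remove before release'
print(find_hardcoded_emails(sample))
```

A CI job running something like this over the tree would have flagged all 27 occurrences long before the firmware shipped.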
This is a multi-faceted fuck-up, and several people are responsible, including the management who decide on processes like QA and security. At a company as big as QNAP Systems, shipping real hardware to all kinds of businesses and consumers, someone should have caught this in some kind of review.
Maybe Walter should never have coded this in, but that doesn't mean that it should even be possible for that to reach an end-user.
Other companies and other industries have such processes.
In short, that would just be blame-shifting by the management who are also at fault.
I don't understand why people who care about security and have Linux knowledge would use Synology/QNAP. They are both proprietary, often exposed to the internet, and packed with so many features that they are consistently full of vulnerabilities (SynoLocker/QLocker etc.).
I finally got one (SmartOS; I also tried FreeNAS) working, but I used the Intel chip with the time-bomb clock line for the build.
Then I gave up. Four hours after the Synology came home, I was much farther along than I'd gotten in a month on the other machine.
I'd definitely pay a premium for a supported open source + hardware NAS combo that supported Docker, VMs and offsite client-side encrypted backup (with dedupe/compression) out of the box. Also, I want it to draw < 10 W, excluding disks.
Until then, Synology wins, and isn't a hobby project.
The one potential downside is it's not as beginner friendly as Synology or QNAP UI-wise, but I actually like that about it as I'm not a fan of the UI on either.
With Synology you go "oh, I'm down to 1 TB free, well there's this deal on a 10 TB drive, pop it in, now I have 11 TB free"
This is the attitude I see a lot in ZFS support forums. "I don't see the problem, just buy twice as many drives!"
This is incorrect on several levels.
You most certainly can create a vdev with a single drive in it and add it to the ZFS pool. So go ahead, buy that single 10 TB drive and add it to your pool.
That's not a wise thing to do though, so I don't understand why you'd want to. You'll have no redundancy at all, as soon as the drive dies everything is lost. Which pretty much completely defeats the point of having a NAS. So don't do that. But if you really want to, you can.
I want to add a single drive since I can't afford more than a single drive. But I still want to keep the data security of one or more parity drives. Synology lets me do that. ZFS doesn't.
On a Synology NAS (which just uses Linux mdraid under the hood, so this part isn't exactly proprietary magic), if you have an array with parity (the equivalent of raid-z/z2), you can add a drive, and it expands the array with that one drive, keeping the parity and recalculating it for the new configuration of drives.
So I can go from an array of 3 x 10 TB disks where one is parity (20 TB usable storage), and then just pop in one more disk and now I have an array with 4 x 10 TB disks (30 TB usable storage) with the same one-disk parity. I can lose any one disk, and lose no data.
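The capacity arithmetic in that example can be sketched as a back-of-envelope helper (not Synology's actual implementation):

```python
def raid5_usable_tb(num_disks, disk_tb):
    """Usable capacity of an mdraid RAID-5 array: one disk's worth of parity."""
    assert num_disks >= 3, "RAID-5 needs at least 3 disks"
    return (num_disks - 1) * disk_tb

# The scenario above: 3 x 10 TB with single parity, then grow by one disk.
print(raid5_usable_tb(3, 10))  # 20 TB usable
print(raid5_usable_tb(4, 10))  # 30 TB usable
```

Each added disk contributes its full capacity, since the array still only sets aside one disk's worth for parity.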
ZFS can't do that, since it doesn't support expanding an existing raidz vdev. So if I want to be able to add a single drive and expand my storage at any time while keeping the same level of redundancy, ZFS makes no sense.
Synology's configuration of mdraid+BTRFS makes way more sense than ZFS. Unfortunately they haven't contributed it to free software so nobody else can have it (specifically the part of passing through the parity data so that checksum errors in BTRFS can be fixed with mdraid knowledge). I would prefer to not have to rely on Synology's cost-cutting hardware and raft of probably not very secure software. But for the use case of me and the small businesses I support, ZFS has been a non-starter due to the costs.
Based on those numbers and https://www.synology.com/en-us/support/RAID_calculator I'm guessing you're using RAID-5?
RAID-5 is fragile. You can lose only one disk as you say, but the odds of a successful rebuild are not so great (assuming you have a NAS for data reliability in the first place).
> expand my storage at any time while keeping the same level of redundancy
But you don't keep the same level of redundancy when adding a drive. The more drives you add in RAID-5, the lower your probability of a successful rebuild after the loss of one drive.
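The usual back-of-envelope model behind that claim: assume the datasheet's unrecoverable-read-error (URE) rate, assume errors are independent, and say a rebuild fails if any bit of the surviving disks can't be read. A sketch (the rates are assumptions; real drives typically beat their datasheet by a wide margin):

```python
def rebuild_success_prob(surviving_disks, disk_tb, ure_per_bit=1e-14):
    """Naive probability that a RAID-5 rebuild reads every bit of the
    surviving disks without hitting an unrecoverable read error (URE).
    Uses the common datasheet worst case of 1 error per 1e14 bits read
    and assumes independent errors -- a pessimistic simplification."""
    bits_read = surviving_disks * disk_tb * 1e12 * 8
    return (1 - ure_per_bit) ** bits_read

# A 4-disk array of 8 TB drives loses one disk: rebuild reads 3 survivors.
print(round(rebuild_success_prob(3, 8), 3))  # ~0.147, i.e. roughly 15%
```

This reproduces the scary ~15% headline figure, and shows why the probability drops as you add drives: more surviving disks means more bits that all have to read cleanly. It also shows why the model is suspect — anyone scrubbing a full array monthly is effectively winning that 15% bet every month.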
I've seen a lot of articles and blog posts like this, but their numbers never seem to make sense. It says that reading through a 4-disk 8 TB array you only have a 15% chance of success. I have full-array BTRFS scrubbing scheduled monthly, according to this my array should have reported errors many times a year...
And of course, no matter what, no form of RAID/ZFS is a backup.
It really surprises me that ZFS apparently cannot do this.
The main reason I use btrfs is the flexibility. Subvolumes instead of partitions, and easy expandability. Storage should be dynamic, not static.
Likewise. I really want to like ZFS, but the 'buy twice the drives or risk your data' approach described above really deters me as a home user.
ZFS has been working on developing raidz expansion for a while now at https://github.com/openzfs/zfs/pull/8853 but I feel that it's a one-man task with no support from the overall project due to that prevailing attitude.
BTRFS is becoming more appealing, even though it has rough edges around the RAID write hole (which really isn't a big deal) and around free-space reporting. I can see my home storage array going to BTRFS in the near future.
This is a debate I would love to see with people who have experience. Since I've seen individuals speak with authority on both sides.
I get that if you have a basic array of disks humming along with a big-ass ext4 partition, once one drive dies, the risk of the other drives being riddled with errors is huge.
But what if your array is both (1) using ZFS or BTRFS (with data checksumming) and (2) has scheduled full-disk data scrubs once a month or so? Wouldn't you catch the initial recoverable errors quick enough?
Not always, no.
I've had drives reporting failures for months that zfs scrub keeps fixing, tons of time to get a spare.
But drives also fail suddenly with no history of zfs or SMART errors.
ZFS is sexy, but it requires planning and understanding and (as stated by another poster) adding storage in pairs of drives if you want to increase storage incrementally and maintain drive redundancy.
One of the perks of something like a QNAP or a Synology is the support for simply adding a single new drive to an existing RAID5 or RAID6 array, and having the storage box add it transparently while data is migrated to the new, larger RAID array. You pop in another 10TB drive in your RAID6 array and you increase the size of the array 10TB as you'd expect.
Or, if you've finally outgrown your 6-bay device which is full of 3TB drives, you can replace the existing drives with 12TB drives, then once they've all been replaced increase the size of the array to match the new drive sizes. This is done while the device is running and serving data - no downtime, though things may slow down as you would expect during migration operations.
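The whole-array upgrade works out as simple arithmetic, assuming RAID-6's two-disk parity (a sketch, not the vendor's code):

```python
def raid6_usable_tb(num_disks, disk_tb):
    """Usable capacity of a RAID-6 array: two disks' worth of parity."""
    assert num_disks >= 4, "RAID-6 needs at least 4 disks"
    return (num_disks - 2) * disk_tb

# 6-bay box: swap the 3 TB drives for 12 TB drives one at a time,
# then grow the array to the new drive size.
print(raid6_usable_tb(6, 3))   # 12 TB before
print(raid6_usable_tb(6, 12))  # 48 TB after
```

Nothing changes about the parity overhead; the array simply scales with the smallest drive in it, which is why the growth only happens once the last small drive is replaced.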
From an end-user perspective this is a very different experience. Yes, FreeNAS/TrueNAS is cool, but I put a Synology at my dad's house.
I've been running zfs on my file servers for ~17 years, have expanded the pool many times. In all that time I've only built a new machine once. Currently still running on my 2009 file server build. I've swapped and added drives to it over the years though.
If I want one disk redundancy.
Today I can afford 2 10 TB disks.
Next year I need more than 10 TB capacity and I can afford one more disk.
Two years from now I need another 10 TB capacity and I can afford one more disk.
How can I perform this migration with ZFS? Going from 10 TB - 20 TB - 30 TB of capacity, adding one disk at a time, without losing redundancy.
Or say next year and two years from now 12 TB drives are cheaper. So with (10TB+10TB) + (12TB) + (12TB), Synology will give me 32 TB of usable space and I will have one drive redundancy throughout the whole time.
Honestly curious: this is a real-life situation that several of my friends and I have handled with Synology NASes. For this use case, I would love to use cheaper and more performant used hardware, and not have to rely on proprietary software that phones home. ZFS requires upgrading your disks all at once, unRAID has single-disk performance, straight-up Linux BTRFS is "unstable".
I guess I don't understand why optimize for the cost of a single drive, above all criteria?
Between this and the other comments, you've mentioned that Synology is over-priced, lower quality, lower performance, proprietary and phones home. Are you really better off vs. building a higher-quality more performant lower-cost ZFS server that's fully open source and has better reliability?
If Synology is higher cost, maybe take that difference in price to buy an extra drive or two?
To me a NAS is all about reliability.
> and I will have one drive redundancy throughout the whole time
Mentioned in the other comment, but that's not a good way of looking at it. What matters is the probability of loss of data while rebuilding the data after one drive has died. The more drives you have in that set, the larger the probability of loss. Your risk is increasing with every drive you add.
I simply can't afford to buy a whole array upfront. I can just afford to expand it every other year or whatever.
I don't really understand why pay for dedicated NAS hardware if reliability isn't priority #1, but that's me.
Personally, for stuff that I care about but not quite that much, I just keep it on the SSD in my laptop. It'll very probably be fine, but there is risk of loss (same as Synology).
For the things I care deeply about, they go on the ZFS server with tons of redundancy, snapshots and backups. I'd never trust the truly precious data to anything other than ZFS.
You can't replace drives with bigger ones and expand the pool. This is important if you have a 4/5/6/8-bay chassis and exactly the same number of drives in the pool.
I guess I'm not alone: https://www.google.com/search?q=zfs+autoexpand+not+working
Yes you can. That's exactly what the 'autoexpand' property is for (you may also need 'zpool online -e' on a replaced disk to trigger the expansion). It's odd how this kind of thing floats around on the internet.
So take it as a piece of the puzzle as to why Synology is more popular.
All of their hardware is off the shelf parts, including the case, the motherboard and the drives. I built my own FreeNAS setup using the same components that FreeNAS was selling bundled together at the time. It ended up being about 2/3rds the price.
Seriously considering a Helios64, once they get their supply issues resolved.
SHR is just a friendly gui to automatically juggle mdraid arrays to fit when you have different-sized disks (e.g. if you have 2x8 TB disks and 2x10 TB disks, SHR will create one 4-disk 8 TB mdraid array and one 2-disk 2 TB mdraid array and append them to a single volume).
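That juggling can be sketched as a small algorithm: slice the disks into equal-size layers and build one redundant mdraid array per layer (a simplified model, not Synology's actual code):

```python
def shr_arrays(disk_tb_sizes):
    """Split mixed-size disks into the stacked mdraid arrays SHR would
    build. Each 'layer' is an equal-size slice across every disk that
    still has capacity at that height; layers with 2+ members become a
    redundant array (RAID-1 for 2 disks, RAID-5 for 3+), each losing
    one slice to redundancy. Returns (members, slice_tb, usable_tb)."""
    sizes = sorted(disk_tb_sizes)
    arrays, used = [], 0
    for boundary in sorted(set(sizes)):
        members = [s for s in sizes if s >= boundary]
        slice_tb = boundary - used
        if len(members) >= 2:  # a single leftover slice can't be redundant
            arrays.append((len(members), slice_tb, (len(members) - 1) * slice_tb))
        used = boundary
    return arrays

# 2 x 8 TB + 2 x 10 TB, as in the example above:
for n, slice_tb, usable in shr_arrays([8, 8, 10, 10]):
    print(f"{n}-disk array of {slice_tb} TB slices -> {usable} TB usable")
```

For [8, 8, 10, 10] this yields exactly the two arrays described: a 4-disk array of 8 TB slices (24 TB usable) plus a 2-disk array of 2 TB slices (2 TB usable), 26 TB in total.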
The one proprietary bit Synology has is a way to use mdraid parity to fix checksum errors detected in BTRFS.
I'm looking to move away from a QNAP box, and one of the driving reasons is the horrible "hard-plastic hard-mount everything" design that couldn't amplify hard drive noise any more if they'd done it on purpose.
(The other reasons are that I'd rather manage ZFS myself, and the need for more than gigabit ethernet)
In my experience, BIOS/EFI comes up if you mash F2 with a HDMI monitor and a USB keyboard and mouse attached. Your mileage may vary.
A few niggly bits: the LCD says “System Starting” until LCDd/lcdmon starts and there is no control over the HDD activity lights. Fan Control is sufficient to quiet the fans to a tolerable level once Smart Fan is disabled in the EFI.
Perhaps I should document this somewhere …
Or is it a question of budget? If that’s the case, what about a used server (like those from UNIXSurplus)?
Or is it a question of power? If that’s the case, then... I don’t quite know in that case.
Without getting into questions of possible security implications/perceptions of where servers are designed and manufactured... I do like the simplicity of some of the Supermicro options. I currently have a short-depth 1U Atom-based one, which runs passively-cooled except for the PSU fan, which I've replaced with a soldered-in practically silent Noctua. I intentionally got a mobo without a crazy BMC with IPMI, but I still don't assume the hardware is very trustworthy. It might still be more trustworthy than a popular consumer board.
(BTW, if you're looking at any quiet/cool-running server that uses an Intel Atom C2xxx or some other Atom models, make sure that either it isn't a lemon one, or it has a mitigation.)
Not ARM-based though, but they do have a variant that can host 4 pico-itx boards: http://www.casetronic.com/corporates/42-t1040.html . I gather you may be able to convert that one more easily to fit an ARM board, or RISC-V for that matter.
Thank you for linking that.
http://www.istarusa.com/en/istarusa/products.php?model=EA-1M... (love the "XServe" aesthetic)
http://www.istarusa.com/en/istarusa/products.php?model=M-140... (extra short!)
You had my hopes up for a moment there, haha
Unfortunately I only want 2 Bay.
I don't quite understand "excessive cloud speed". Assuming I am only using it for file transfer and nothing more, would it still be a problem? Or is it something to do with the filesystem? I haven't checked whether the default supports something like ZFS or BTRFS.
It was obviously a typo.
A mobile power bank-sized battery like this can probably power a NAS like this for at least 10-15 minutes (personal experience messing around with USB-C).
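A rough runtime estimate under stated assumptions (battery capacity, conversion efficiency, and NAS draw below are all guesses, not measurements):

```python
def runtime_minutes(battery_wh, load_watts, conversion_efficiency=0.85):
    """Back-of-envelope runtime for a NAS running off a USB-C power bank,
    discounting for DC conversion losses."""
    return battery_wh * conversion_efficiency / load_watts * 60

# A typical ~20,000 mAh bank is about 74 Wh (at 3.7 V cell voltage);
# assume a small NAS drawing ~25 W with disks spinning.
print(round(runtime_minutes(74, 25)))  # ~151 minutes
```

Even with pessimistic numbers, that comfortably covers the minute or two needed to sync writes and shut down cleanly.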
Most home/small business NAS usage is SMB file sharing, and SMB writes are async. Just a minute to sync writes and close the file system safely is huge for most users.
As someone who has supported a small business, just being able to handle the 5 minutes between someone running the coffee maker at the same time as the fridge and then resetting the breaker is huge.
Don't get me wrong, I can totally understand why people (without much technical background) are tempted to do this. But with all the complexity these NAS systems nowadays have it was only a matter of time for something like this to happen.
There is just too much surface area for device software now and cost pressure doesn't allow for security to be much of a priority.
Source: I have a QNAP NAS and after the first week, I couldn’t figure out who was trying to login to it as an admin account. Thankfully, I had changed all the passwords, but by default it had connected to their cloud service and was remotely accessible. I’m still not 100% sure I have it completely secured.
Not unless you intentionally opened a port on the router to allow inbound access. Even default cable modems come with every port blocked by default.
Even cloud-enabled services require that the machine behind the modem/router open the connection first, so unless you're getting MITM there's no externally available access.
That's what I used to think. Then I found out about UPnP on routers. I'd like to have a quiet talk with whoever thought that was a good idea.
In the last 5 years, it has crashed zero times.
Once, after a power loss, fsck blocked until I pressed y over and over again.
But I would love to understand my router better and why/how to trust it, or that I've configured it the best way, to protect from threats both inside and outside my LAN.
At home I have sold my soul to Bezos and just use an Eero.
I would have added a second pfSense for the NAS and cloud, but I thought it would be overkill.
Competing products are marketed in the same way.
But they seem to think they don't add value without the Internet stuff.
I don't use any of the Internet stuff. I only want my files to be shared within my home network. And doing it myself requires so much tinkering.
The OpenVPN service also had a hidden password:
The funny thing is that they didn't even bother to choose a longer password (the password is synopass). Even if people haven't found them yet, an attacker brute-forcing these passwords would easily find them.
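For scale: an 8-character all-lowercase password like this falls almost instantly to offline brute force (the guess rate below is an assumption for illustration):

```python
def brute_force_seconds(charset_size, length, guesses_per_second):
    """Worst-case time to exhaust a password space by brute force."""
    return charset_size ** length / guesses_per_second

# 'synopass' is 8 lowercase letters: 26^8 possible passwords.
# A single modern GPU cracking rig might manage ~1e9 guesses/second
# against a fast hash (an assumed rate, varies wildly by algorithm).
print(26 ** 8)                          # 208827064576 combinations
print(brute_force_seconds(26, 8, 1e9))  # ~209 seconds worst case
```

In other words, the entire space is exhausted in minutes; any attacker who suspected a hidden account would find the password trivially.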
I genuinely believe you're better off with a combination of:
A. an integrated solution like freenas/truenas, unraid or even ceph if you want even more steps. Install and configure. Done.
B. a base linux install with just the particular file servers you need. Install and tinker. Auto-update. Remove unnecessary packages.
Best line of the thread
My point: this isn't on Walter alone; in fact, most of it isn't. It's the software development processes (or lack thereof) that allowed this to happen. My guess: Walter will be shown the door, QNAP will be able to say "we took action and got rid of Walter", but the true issue, the bad process that led to this, is probably still there. Worse, Walter's knowledge of the code base will also be gone.
And no, I'm not Walter if anyone is wondering.
* they won't
**'almost' is in quotes for plausible deniability reasons on my end..
Walter's a popular guy. (Apparently he's QNAP's Technical Manager)
My current NAS is an old PC that I built for the purpose many years ago with ECC RAM and an unlocked Phenom II, and currently runs Ubuntu Server after I experimented with OpenSolaris just in time for the Oracle takeover, and then took a detour through CentOS. It's getting kind of long in the tooth now, and I could get a lot more oomph for the same power consumption, or the same for less power.
It's clear that my next server is going to have to be one I build up myself, just as before. I'm leaning toward an AM4 server board (such things do exist), as it offers lots of CPU options from cheap/low-power to Ryzen 9 5950X. The latter is extreme overkill, but it's an option nonetheless. ;) I'd be most likely to go midrange on the CPU. ECC RAM is a no-exceptions must.
I'm on the fence whether or not I should spring for 10G Ethernet. I have absolutely nothing else that uses it right now, and I have perfectly good gigabit gear that has served me well and would rather not throw out or try to sell. It might be worthwhile anyway as a direct single-client SAN.
Currently, I have an old PC running Linux with software RAID. My motivation for switching to an appliance was power consumption and heat/noise. I live in a tropical country so I can't get away with passive cooling. Due to dust build-up, the Intel Celeron CPU and motherboard broke down.
It's been replaced with an AMD Athlon. My plan was to replace the entire setup with an appliance NAS the next time it breaks down. I'm now hoping it will last long enough that an ARM-based CPU solution will work out. My top candidate is the ROCKPro64.
Smallest one I've found is https://www.u-nas.com/xcart/cart.php?target=product&product_..., but not quite as compact as I'd hope.
As I can't find DIY hardware like that: Synology looks to have a slightly more mature vulnerability response program than QNAP -- apparently they have a bounty? I've heard of fewer Synology flaws, so hopefully they're a slightly better choice on the software side.
The airflow is relatively ok, the exhaust fan is directly opposite (and pretty much centered on) the harddrive cage, and the backplane is organized in a 3x2 configuration so it leaves some room for air to pass between the drives. The motherboard is flat on the bottom of the case, so doesn't impede the airflow, but also doesn't benefit much from it. If you need lots of CPU power, an active fan on the cpu cooler is recommended.
The drive cage itself and the trays are metal with hard-plastic rails, similar to QNAP, but the build quality is surprisingly solid. There's no audible resonance from the case due to the spinning drives. The case feet are padded with soft foam, vibrations don't travel from the case to the shelf it's resting on. I expect it will take a year for the foam to compress, but it's a nice touch. Other than that, yes, seek noise is pronounced. It's not something that bothers me because it's not in the living room, but I wouldn't want to put it next to my HTPC set.
My main criticism is the chosen position and form factor of the power supply: it's mounted at the same height as the drive cage and projects inwards, leaving almost no room for the hard disk connectors (angled molex connectors for HDD power are a must!). They should have used a Flex-ATX form factor instead of SFX, and mounted it length-wise across the case's back plane. I have half a mind to get a Dremel and do that myself, but the other half is winning out for now.
I've been running the Gen 7 since January 2013 with Ubuntu 16.04 then 20.04. It's travelled between New Zealand, Australia and South Korea multiple times. The Russian BIOS enabled hotplug and took the speed restriction off the CD-ROM SATA port.
I run a SanDisk SSD (purchased 2014) in the CD-ROM bay and 3x 8TB WD Red (CMR) drives. The fourth bay I use for transferring or backing up other drives. I used mdadm for software RAID as the "hardware RAID" needed special drivers and it was too hard at the time.
I haven't played with the Gen10Plus yet but it'll probably be the direction I head instead of a NAS. They come with Xeon processors and 4 ethernet ports!
That’s not to say I necessarily love any of these vendors too much. They feel a bit too much like feature mills that have lower incentive to adopt better security practices and higher incentives to add features and, well, provide a decent user experience. I appreciate the latter, but it isn’t ideal.
Still, as much as I’d love a NAS running open source software and maybe even open hardware, I think the amount of time and effort spent on doing so would not be well rewarded. So for now, I guess I’ll ride the useful life of my Synology NAS out and go from there.
As for this incident, it is embarrassing, but it happens. Hopefully this will motivate more people to do security research on these devices.
I'm currently using a 3 disk setup with WD Red (CMR) drives + SSD cache on ZFS with it and have had a good experience in the past year using it so far. I've had to replace one of the disks due to age and ZFS makes it super simple to replace and resliver disks.
Asustor and WD seem to be making more advanced and larger drives, maybe they're options...
There is no real competitor on the market right now except QNAP. And who wants to deal with FreeNAS? I have better and more important things to do with my time at work.
I've had the pleasure of setting up rsync between Synology and QNAP and I would say the Synology software appears to be better but actually isn't as good.
Synology appears to use older versions of a lot of tools like rsync. Although it doesn't say so, it doesn't rsync the data files; it rsyncs the files that make up the backing of the software RAID. It's like rsyncing the blocks of a sparse disk image instead of the files within the disk image. This makes it impossible to resume or adopt a previous backup. If any of the configuration for the rsync send changes, it appears to download the entire remote so that it can compare the contents of the files locally instead of hashing remotely, which nearly completely defeats the point of using rsync. It took my backup task WEEKS to adopt an existing backup that had very few changes.
Thanks for the point about NFS though.
Haven't run any performance numbers on them, though.
You never want an SMR drive. Just say no.