In my experience, the larger organizations will have a "security" questionnaire required of their vendors, and the person administering it is a droid, incapable of evaluating whether the questions, originally written in the mid-00s and only updated for buzzword compliance since, are applicable to modern security practice today, or to the particular product/service/vendor in question. And no firewalls or routers would be massive, disqualifying red flags on such a questionnaire.
Never mind that a KISS setup tends to bring security because of its minimized attack surface. In the minds that write and administer those questionnaires, security only comes from sufficient amounts of the right kinds of complexity.
I'm sure it can be done. IIRC, Cloudflare doesn't use any firewalls, and they do some big business. It just isn't easy to get past the droids programmed to ensure that all pegs shall be properly square, IME.
We frequently fill out very detailed checklists and questionnaires related to our quality policy, standards, internal policies, etc.
We're also very honest about how we approach these issues:
... and they generally appreciate the honesty.
EDIT: It’s marked as "PASS" though, so it’s all fine, just funny.
We sent them back a list of prominent servers that respond to ping.
Including the web server of the expensive agency that had produced the report. And whose web server had an expired SSL certificate.
I do not believe ICMP (ping) is an automatic-fail condition for PCI (at least for certain SAQ levels that I'm familiar with) - however they do show up as warnings, particularly if you can get a timestamp response (to be used in timing-based attacks).
PCI prefers systems that handle CHD be "invisible" to the outside world, in an attempt to hide the systems an attacker might take interest in. Not always feasible (eCommerce, for example), but you gotta jump through the PCI hoops if you don't want to be stuck holding the bag if there's some breach.
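For what it's worth, the usual way to make that particular scanner warning go away on a Linux host is to drop ICMP timestamp traffic outright - a rough sketch, assuming iptables and that nothing on your network legitimately needs ICMP types 13/14:

    # ICMP timestamp request (type 13) and reply (type 14)
    iptables -A INPUT  -p icmp --icmp-type timestamp-request -j DROP
    iptables -A OUTPUT -p icmp --icmp-type timestamp-reply   -j DROP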
They reported a number of purported (non-existing) "vulnerabilities" against said Google site that included that it stopped responding to their probing soon after they started hammering it with sketchy requests... They did, to be fair, point out that this could be a defence mechanism, but dinged it for preventing them from checking for other vulnerabilities.
At least I didn't have to explain why that one was nonsense - it was rather obvious to my client that the agency they'd hired were being idiots. It's not like it was difficult to see either - the domain name of the site they'd hit had "google" in it.
And the system by definition could not be invisible - the IP in question was in DNS and was what you'd connect to the web servers on.
When they told me, I informed them that I'd stopped using their vulnerability scanner years ago because they would not allow me to change anything in it, including exclusions for ICMP timestamps or other vulns I've mitigated while proper fixes were in the works.
So I rolled my own and use that to audit my systems. They don't care, because "policy". My C-levels will just ask and then promptly disregard all future reports, adding to the noise.
HPE iLO doesn't support MFA or any form of public key authentication, and its security history is much worse than SSH. It requires several ports open and the old version they had required Java plugins on desktops and all sorts of nonsense. Using it outside of emergency repairs is a terrible experience due to console refresh lag and the fact you can't copy + paste.
The reason I had to do this insecure and annoying process is that a PCI assessor had told them it would be a hard fail to have port 22 open on the Internet, but this would apparently be fine.
("This thing requires a Java applet and is slow as hell. Screw it, let's just pwn the bank across the street")
I'll call it Security by Inconvenience.
The hackers were annoyed by the compromised machine so they installed security updates and did other system administration tasks.
You can’t inconvenience a bot.
Every time I notice an obscure feature in a Google product or service and go "hm, I wonder if that could be exploited", I then always go "...meh, it'll take too long and require too much concentration to figure it out."
See for example @jonasLyk, who has spent the last half a year (?) trying to abuse almost nothing but alternate data streams and junction folders in Windows.
I can't help but picture him as a sysadmin walking away from a bunch of servers that are mysteriously 40% faster than ever before, but then he gets stopped at the door of the datacenter by some unimpressed looking lawyers who glare at him until he puts everything back
Thanks for the reference, and fair point, yeah that's not how it works at scale.
Is there a modern, no-nonsense guide to filling these out honestly without telling the person doing the checkbox checking that their form is dumb?
I realize there’s a lot of rent seeking and money to be made by consultants in this space - I’m looking for the GitHub published guide or wiki to help smaller no-nonsense shops navigate the phrasing and map these vendor security questionnaires to “modern” technology.
Looks like Stacksi is aiming to fix this pain point.
I do security, and I'd title this "Most secure platform in the world."
We initially had some troubles navigating these waters in the financial sector, but once we were able to convince 1 big customer to try our system on a trial basis, everyone else started to play along really nicely. No one wants to be the first one to try a new thing and get burned by it.
In 2021, you can sometimes leverage things like technological FOMO to make a business owner believe that they are going to lose out on future business value relative to competition, who you might frame as being willing to take on a bigger technological risk. And indeed, smaller clients in our industry are willing to overlook certain audit points (at least temporarily) in order to compete with bigger players.
Some might not like it, but being able to engage in the sales process and bend some rules occasionally is absolutely required to play in the big leagues. Once you are in, it's a lot easier to move around. No one has a perfect solution and everyone knows it. It's just a matter of who is the better sales person at a certain point.
Could we have argued with them during the sales process? Only if we wanted to lose the sale. The Fortinet was cheap compared to the value of the contract.
I know you mentioned 2000s, but it's funny that these contractually obligated boxes might introduce more worry: https://www.bleepingcomputer.com/news/security/fortinet-fixe...
Nothing says end-to-end security like terminating TLS at a network choke point so intruders can easily snoop all traffic.
The phrase represents this idea: I have a case to make (or an argument), and there is a single element that is conclusive enough to make the whole case in and of itself.
You might say, "I can address this entire case in a single point." This shortens to "case in point." The implication is that the single point you make is enough of an argument to prove your whole case.
(Note, I do exactly this a bit myself - terminate TLS at Elastic Load Balancers - and I feel a little dirty about it every time I'm reminded... I sometimes wonder if I spend more time ensuring VPCs are appropriately isolated and keeping instances running untrusted or less-trusted code out of VPCs with production customer data flying around unencrypted, than I would setting up to encrypt data in flight everywhere. The big inertia holding that back is that we have so much legacy stuff running on things like Grails3 and Java8 that the benefits of starting to "do it right" are not going to be fully realised for many years while those old platforms still need to run, and the added complexity of running two differently architected platforms is a big issue... I know what we should be doing, but the path to get there and the expense of travelling down it are high. We'll get there in "drip feed" mode where new projects and major updates to existing projects will do it right, but I'll be astounded if we don't still have some old untouched Java8 or Grails3 running in production in 5 years' time...)
And in many cases on the vendor side it's some dude from sales filling it out... so pretty noisy on both ends.
“Could” being the key term here: unless it's epically bad and unequivocal, there are pretty good odds they could skate a long time without anyone asking, and even if they did, it might end up simply being that they blame the long-departed sales guy and patch things up with the CIO over martinis.
This is a little disingenuous because their product is a modern firewall. It drops packets and conditionally allows sessions to your backend.
Cloudflare’s product is just a web application firewall as a service. It’s not magic. The major difference between Cloudflare and a pizza-box WAF is the globally distributed nature to absorb DDoS in the same product. The things it does to the packets are the same though.
I say that now, but I wish I understood that a few years ago when, faced with the WAF line item on an audit, I promptly went "WTF is a WAF? *googles* No, we don't have a WAF." Could have saved myself some security audit pain.
On the flip side, I did manage to get some upgrade budget out of failing that battery of line items.
"A firewall is a network security device that monitors incoming and outgoing network traffic and decides whether to allow or block specific traffic based on a defined set of security rules."
"usually firewall \ ˈfī(- ə)r- ˌwȯl
\ : computer hardware or software that prevents unauthorized access to private data (as on a company's local area network or intranet) by outside computer users (as of the Internet)"
This and other "web application firewalls" seem to meet these kinds of definitions. Add to the fact more and more "traditional" firewall appliances are adding behavioral filtering and have had application tracking for decades and we're far from firewall being limited to only a layer 3 device.
Pretty sure getting your PM to just drop the firewall icon into their Visio diagram is a better way to meet stupid compliance requirements than explaining the difference between user space and kernel space to a just-graduated big4 consulting company intern "auditor"... /s
They may even be aware; they are just bound by their company's ruleset...
Security is also about depth. You should assume breaches can happen and have another level of defense.
That does increase the attack surface, but it's a much better approach for imperfect beings.
You just described my workplace. We have some rules that nobody understands and nobody remembers where they come from, but we have to follow them blindly. For example, they require that any access to the web services should go through a VPN, which would be fine if:
- The VPN actually worked, but it doesn't.
- The servers already use TLSv1.3, all the services require user authentication, and there are 3 layers of firewalls and an integrated virus scanner in front of the services.
- We are an international project with people from 10 different organizations in 6 countries on 2 continents, and it's really difficult to impose these kinds of rules.
So for example, I'm managing a GitLab instance that I can't use myself. I can only SSH login from a very specific computer to manage it, but I can't upload my own code from my office computer.
And I don't want to go into their blind devotion to the firewall and their concept of one way connections...
So I'm just letting time go by, until everybody is so angry they are finally forced to change. Doesn't help that this is Japan, the epitome of rigidness and "even if it is broken, don't fix it".
On the other hand, a firewall is an explicit declaration of the ports you want open and who you want them open to, which seems like, at the very least, a useful thing to do. If nothing else it seems like defense in depth. I'm not sure I buy that a system designed around "default deny" is an increase in security complexity - certainly it's complexity that could hurt availability, but complexity that would hurt security?
Either way, the real security comes from monitoring the reality of what ports are actually open/listening and verifying a person's assumptions about their systems.
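Something as simple as the following, run both on the host and from a separate box, goes a long way toward checking those assumptions (commands depend on your OS; the scan target is a placeholder):

    sockstat -46l             # FreeBSD: everything actually listening
    ss -tlnp                  # Linux equivalent
    nmap -sS -p- 203.0.113.7  # from outside: what the internet really sees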
Higher complexity = larger attack surface.
For example, if they used a firewall with one of Cisco's infamous backdoors.
If your network is secure, and a well configured machine only listening on port 22 is pretty secure, you have to ask how the production machine will interact with the outside world. Well every update is an inverse remote code execution. You are getting remote code from an external location and then running it directly in production. So while you might trust FreeBSD's package manager, do you trust Cisco? Do you trust SolarWinds? Even if you do, it's hard to argue that your attack surface hasn't been increased.
For example you have your main machines set up securely, and requests go through the firewall. Of course a compromised firewall doesn't make it easier to compromise the main machines. But because your users are going through the firewall, _they_ might now become vulnerable to some classes of redirection attacks.
Even in a scenario where you're using the firewall as a passthrough, you're still looking at a scenario where (for example) your DNS entries are now pointing to a machine you have less control over. It might not mean that now HTTPS doesn't work anymore, but that (combined with some other mistake) might be enough.
One potential class of vulnerability might be related to recent git client issues: the software your client is using might have an issue that would be a security issue when connecting to an untrusted source. You wouldn't try to get a keylogger on your clients' machines, and the software is always pointing at your own domain etc etc. But the firewall vulnerabilities have opened up that angle of attack!
It's definitely a balancing act, and dependent on how much you trust each layer of your stack.
In fact he mentioned port knocking. You pretty much need some kind of host firewall for that.
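A rough sketch of what that looks like with the iptables "recent" match (the knock ports and timeouts here are made up):

    # knock 7000 then 8000 within 10s to open port 22 for 30s from your source IP
    iptables -A INPUT -p tcp --dport 7000 -m recent --name K1 --set -j DROP
    iptables -A INPUT -p tcp --dport 8000 -m recent --name K1 --rcheck --seconds 10 \
             -m recent --name K2 --set -j DROP
    iptables -A INPUT -p tcp --dport 22 -m recent --name K2 --rcheck --seconds 30 -j ACCEPT
    iptables -A INPUT -p tcp --dport 22 -j DROP

Then from the client, something like: for p in 7000 8000; do nc -z -w1 server $p; done; ssh server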
I like that. I like that a lot. That's a very enviable practice. I think I know what I'll be experimenting with this next week.
Did not expect to read that article and have the most stand out thing be a routine change I'd want to copy. You never know.
On the laptop-front, I find myself drifting towards a similar setup to John. I have a hefty workstation laptop but the battery life is dire and it weighs a ton, so I pretty much just run it as a headless machine next to my server now. I'm planning on picking up a Pinebook Pro as an "outdoors" machine to just remote in. I also find myself extremely unwilling to arse about swapping multiple machines on my monitors so being able to keep my work machine separate and secure but operate it from my desktop is a nice compromise.
I've been using them in a small but important-to-me way continuously since 2008, and I have occasionally forgotten the service needed maintaining at all - at one point I forgot to pay them for an embarrassingly long time after a credit card expired, and they kept my storage going for me until I finally got myself in order. Please don't try that.
(My first contact with them was in 2007, to ask whether they supported pushing directly from git - the answer was no, though they added the feature a few years later - a bit ironically, I've never used it)
We just added git-lfs / LFS support. So now, when you do things like:
ssh firstname.lastname@example.org "git clone --mirror git://github.com/LabAdvComp/UDR.git github/udr"
So instead I have to use restic, which re-implements many features of ZFS and this also feels wrong.
We support encrypted zfs and raw-send, etc.
The pricing is the same but there is a 1TB minimum because we need to give you your own VM (bhyve) and we have to burn an ipv4 address for you, etc.
Is this still true for these special ZFS enabled accounts?
My solution to the `zfs destroy` risk is to make my backups pull-based, where rsync.net connects inbound to my production server, and rsync.net specifies the necessary commands on the production box to grab the raw encrypted streams. That eliminates the ability of an attacker that is on the production server to run arbitrary commands at rsync.net.
There is still a small risk of data destruction if an attacker gets your rsync.net credentials, but those can be protected via off-line storage and secured workstations, which works pretty well.
(you could route the ssh traffic similarly based on login)
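A minimal sketch of the production-side half of that arrangement - the script path and key are placeholders, and locking the inbound key to a forced command is just one way to wire it up:

    # ~/.ssh/authorized_keys entry on the production box for the key rsync.net connects with:
    # it may only run one fixed wrapper, which emits the raw encrypted send stream
    command="/usr/local/bin/send-latest-raw-snapshot",no-pty,no-agent-forwarding,no-port-forwarding,no-X11-forwarding ssh-ed25519 AAAA...example... pull@rsync.net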
I've seen several people report using OpenZFS encryption on FBSD on various mailing lists, so I'm 95% sure it's not secretly broken on there.
I really am enjoying the developer Q&A interviews that console.dev is putting out.
They're very much like the "usesthis" profiles but more in-depth and with more interesting details ...
I did a usesthis a little while ago. https://usesthis.com/interviews/matt.lee/
"I want no local storage anywhere near me other than maybe caches. No disks, no state, my world entirely in the network. Storage needs to be backed up and maintained, which should be someone else's problem, one I'm happy to pay to have them solve."
> Now everything isn't connected, just connected to the cloud, which isn't the same thing. And uniform? Far from it, except in mediocrity. This is 2012 and we're still stitching together little microcomputers with HTTPS and ssh and calling it revolutionary.
ChromeOS was great for 'switch to a new machine and log in' use... but it's so much more complex than that now. Only the most locked down managed devices wouldn't have to worry about anything left behind on a device before someone abandons it.
At this rate, short of a native package manager and repo for installing applications to run on ChromeOS (instead of a container Linux distro on top), ChromeOS might as well be considered a distro in and of itself. Especially with CloudReady being installable on non-Google sanctioned hardware.
Thank you for your great product and support, John!
This is what I get for iPad emulation: https://i.imgur.com/FriTf6X.png https://i.imgur.com/gLhqvFO.png
It looks just fine.
Any web designer who doesn't, in 2021, understand and make allowances for 4k/5k ultrawide monitors as well as phone-sized screens in portrait mode - isn't doing their job right.
The problem here is not the monitor shape or the user's browser window width, it's the css (and maybe html) and the lack of understanding of how to use it properly (or, more sympathetically, perhaps a conscious choice on the part of the people paying for the website to not allocate enough budget to cover all their competent webdev's suggestion?)
1. Have an ultra wide monitor.
2. Regularly use their browser in full width mode on it.
Pretty much everyone I know with an ultra wide snaps their browser to the left or right side - having a single browser window that wide is not very useful, and makes lots of websites look weird.
http://motherfuckingwebsite.com/ looks great on an ultrawide. It's pathetic that so many websites don't.
I was going over the lines with my finger to see if I could feel them. Thought it was internal. Wasn’t until it switched to another app that I saw them move.
Scared the shit out of me.
Does this make anyone else a bit uncomfortable?
I don't think MacOS is still receiving security updates on that hardware. I'm all for using old hardware for as long as it keeps working, but I would never browse the internet with a vulnerable OS on a vulnerable processor (spectre etc...)
Or am I missing something?
Yes, one minor thing ...
Although you are correct that Apple is not officially supporting the latest versions of OSX on that hardware, there is a trivially easy hack of the system that will allow you to load newer versions of OSX.
So, like many of you, I am not running Catalina but I am running an updated, patched version of OSX.
There's a simple patcher you can use for these old macbooks:
I'm pretty sure SIP gets re-enabled each boot, but to check for the authenticated root volume I think I need to install the g20 to run csrutil or whatever it is.
Although I use Windows, I do have Catalina installed [and Debian for the triple boot], also using OpenCore. I'm pretty sure I downloaded a copy of OS X from one of their repositories 0.o
I'm super lazy, it's really not that hard.
My average cost for hardware since I bought my Mac is now less than 400/year CDN. Is it worth it? While I'm slightly concerned about the security [I'm probably the biggest risk anyways, since I'm not confident in my knowledge of secops], I get 95 fps playing PUBG, can edit in 4K, run 100+ tracks in Cubase, and run 3 different OSes or as many VMs as you'd like [which I think can also run bare-metal VMs on the 144 firmware upgrade]. On top of that the case still looks good and I've kept at least 50+ lbs of e-waste out of landfills or whatever... seems pretty worth it [hopefully no one ever tries to steal pictures of my cats]
[We could also get into a discussion about the right-to-repair bill in the EU, talking this way]
Do you game? I feel like that might have been intentionally left out of the interview?
What info would you keep unencrypted on your servers?
How much does a colo cost for a 2U server typically? How about back in '06?
Is rsync a good solution for video file backup?
What are the benefits over, say, running a home server and keeping physical backups at your friend's house or Iron Mountain or something?
Can rsync use 'live' encrypted data? In other words, how do you encrypt/decrypt on the fly, say for streaming an mp3 or something? [not that you would do this if you were paying per GB...]
Please excuse my ignorance. I'm not a real sysadmin, just an old wannabe hacker who could never get his shit together.
You might be paranoid. I've been browsing on a few 2008/2009 obsolete Macs for a while, on the highest OS that they will run.
Eventually they'll be a pain to use because of browser incompatibility, pages will get even more bloated and these machines will run them even slower.
Yeah, I get it, people love their Mac's... but the company that produces them actively undermines your ability to continue using perfectly good hardware past what they feel is "profitable". This leads to huge efforts to hack/reverse the updaters, or alter newer OS versions to trick them into installing, etc.
I'd personally jump over to some system that doesn't hate its users nearly as much. But, that's just me.
I made the switch a year ago after having reached my breaking point with Windows, and it still was a massive pain and a daily loss of performance. For comparison, I also rooted my Android phone and installed LineageOS without Google services, which crippled it significantly, and it still wasn't as much of a pain to do as using Linux on my workstation.
People often say (not talking about you, just something I see on HN often) that it's easy nowadays and anyone can use it but it's not been my experience and I think it's the very attitude that keeps it from being a commonplace OS for the consumer market. I keep a list in a file I call "linux sins" but without having to look at it you can figure out the problem by just googling any benign problem someone might encounter on their OS and checking the answers. Do the answers start with "Click there" or "Open your terminal"? I don't see the situation changing since people who develop for linux generally refuse to acknowledge the problem.
Although, I feel the specific issues you raise are less of a problem on a desktop-focused distro like Ubuntu or Linux Mint. Those distros really focus on a complete desktop experience, and really try to never require a user to drop into a shell to get anything done. So, perhaps it's a case of people using the "wrong" distro for their needs?
Here's the first line from my "linux sins" file as an example:
If you copy a large file to a USB drive on either Ubuntu or Mint the progress bar goes to 100% instantly and closes and the actual transfer of the file is done in the background without the knowledge of the user. And the answer is "It's your fault, just try to eject the drive until it works."
And even beyond the OS, the whole software ecosystem is broken. It's impossible to find simple, working UIs for the most basic pieces of software, everything goes through the commandline.
It's just how device writes work, and is why Windows users have been told for years to select their device -> Eject instead of just yanking the USB drive out when Windows says 100%.
So, not exactly a fair criticism in my opinion, but your overall point stands - Linux can be rough around the edges for some use cases.
The poster of the question explicitly states that this behavior does not happen on Windows using the same hardware. And indeed, Windows doesn't cache as aggressively as Linux does (which is one of several reasons why Linux tends to have better disk performance and less risk of disk fragmentation), so no, by design, this issue is more pronounced on Linux.
The actual reason why Windows users are told to explicitly eject instead of just yanking the device is because there are various background processes that might be writing to the device (particularly relevant if you're using SpeedBoost or whatever it's called), not because of file copy progress bars being entirely unaware of the OS' caching mechanisms.
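If it helps anyone bitten by this: the write-back cache is easy to observe and flush by hand before pulling the stick (paths here are just examples):

    cp big.iso /media/usb/                   # progress bar may hit 100% here
    grep -E 'Dirty|Writeback' /proc/meminfo  # data still waiting to reach the device
    sync                                     # blocks until cached writes are flushed
    umount /media/usb                        # only now is it safe to yank the drive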
And, I say this with no ill-will toward you, I'm not trying to be antagonistic but you're having the same response as all linux users I encounter online. You're denying the problem even exists, saying it's not fair and it might be rough for some use cases? This is transferring a file to a USB stick, this is a very basic use case, and the UI is broken and the UX is dogshit (excuse my french). If we can't admit there is a problem we're never going to get around to fixing it.
I also have a 2005 car that still runs - should I get rid of it because the company that made it stopped providing any kind of support for it long time ago? Or you know....keep using it because it works?
It just seems like wasted effort, since the company all this supports really has made it clear they do not want you to have this ability, and can at any moment make future updates break everything all over again, leading to a new effort to reverse engineer the changes.
I feel the same way about computers - like, who gives a damn what apple thinks. I have a laptop that is still going because people keep making it compatible. That's a good thing, not a bad thing.
Very few non-classic and/or popular cars receive massive aftermarket support for all parts - often the aftermarket supports parts that are in common with a lot of vehicles or are vehicle-agnostic (such as belts, etc), and in some cases you're plain SOL (try replacing an airbag on a 1993 Dodge Caravan, for example - all you can find are OEM used ones pulled from junkers).
I think your comparison would be more apt if, say, Ford disabled all vehicles that were 10 years + 1 day old. While Apple isn't disabling your OS, they leave you exposed without security patches, etc... - making it approximately the same.
How long do we have to wait for the early Teslas to be considered "classics", because they're doing worse than this already...
"Self driving? No, that was only licensed to the original purchaser, you need to pay us $8000 now because we just remote disabled it when we worked out you bought this Roadster second hand. Hope that helps, have a nice day - Elon"
However, in this case, the tweak I needed to do to the mac pro was so trivial as to be (essentially) cost-free. No need to alter the installer, etc.
It pleases me to be (re)using this machine for over 12 years now - especially given what a triumph of workstation design these mac pros were ...
I've also used FreeBSD on (non-Apple) laptops in the past. It actually worked ok, I even had wireless working (this is very hardware dependent though, and things may have gotten worse over the years for all I know).
Based on the rest of your profile I think you might enjoy switching that workstation away from OS X to FreeBSD. Of course, it means some tinkering and looking for new tools to replace the ones you use now, but the tinkering is half the fun... :)
Since I'm (excluding Win10 for gaming when I rarely have time) exclusively a Linux user I get to use the old hardware for other purposes at the end until it finally becomes either useless or lets out the magic smoke (as my 2004 R50e Thinkpad finally did - man I miss those keyboards, so much better than the T470P (which itself is excellent)).
It paid off just recently: I had a 2012 Vostro 3750 kicking around, and when schools went into lockdown, with a quick wipe and Fedora install it made a perfectly serviceable machine for my step-son to do his remote learning on - there was an irony in running MS Teams on Linux on a machine that wouldn't have been able to run current-generation Windows 10 and Teams anywhere near as comfortably.
It started life with Windows 7 (Win7 was like a month old at the time) and was subsequently upgraded to Windows 8, then Windows 8.1, then finally Windows 10 (and all its "feature" updates) until it was retired. It ran slower than a new system, but fit my needs perfectly.
If Microsoft had arbitrarily decided I wasn't allowed to run Windows 10 on that hardware, it's very likely I would have installed Linux or BSD - after all, the hardware was a non-trivial investment and discarding it purely to please some company really rubs me the wrong way.
So, I guess I can sort of understand why people jump through these hoops... although personally I would just move onto some other OS that doesn't undermine my ability to operate my personal computer.
Anyways, similar story: I'm not about to put up with Microsoft telling me my machine is too old to use; that just promotes e-waste.
You download an ISO, put it on a USB key or burn it to a CD, and install it like you would Windows10 or any other OS.
If only it was that easy all the time.
I have an old laptop (2017) that I wasn't using for anything else, so I tried putting Linux on it. Nope. I went through five distributions before I found one that would finally work. And then, it was not really usable.
The whole reason people use MacOS is because they know what to expect. Linux is still a crapshoot.
I can't agree more with the "no firewalls" approach to things, though I prefer to call it "host based" firewalls as it scares people less! I'm glad you've had no compliance/audit pushback on that, I architect things similarly and have had success pushing back on the requirement as well.
I'm very surprised by the L2 switches and actually choosing to run completely unmanaged switches. I assume you're running all 10G or more? Maybe I'm overthinking the complexity of your network, but I would be lost without SNMP counters on my switches, and running switches+networking in fully L3 mode has some great isolation benefits, especially if you want full switch-level redundancy.
Do you have some more details on your data architecture? I'm very curious how do you do data direction/redundancy/sharding and balancing customer data across servers. I'm not trying to pry for things you consider secret but I think you have a very similar architectural mindset and I'm curious how you solve these things.
The benefits are tremendous, however, and go beyond day to day operations. A dumb switch has no credentials to protect and there is almost zero attack surface.
Further, if our switch dies we can immediately replace it with any other dumb switch that just happens to be lying around.
If you read failure studies - like those in the excellent Charles Perrow book _Normal Accidents_ - you see that in many cases there is a very special component that fails and everything goes to hell when they can't find a replacement for it.
So, while I can't encourage everyone to use dumb, unmanaged switches (because not everyone can) I can encourage everyone to remove as many very special components as they can.
Having the handicap of needing to wait 3-4 hours before being able to access your backups in an emergency could make a day-and-night difference for continuity.
So I would argue it has nothing to do with "being a backup service", but rather that their users could afford 3-4 hours of waiting. Or that they don't think like that.
In the right situation it's doable and potentially highly desirable due to the simplicity, but requires a lot of discipline by everyone involved, and the right conditions to make it work.
It was a design I supported and thought it was a great idea for the right situation, but I also was hesitant to introduce it to anyone but the 'right customer'.... who probably already knew what they needed to know about it.
Scrolling through the cert pages, 2015 seems to be in the future though?
> We personally toured every single major datacenter in Hong Kong and Zurich to choose the facilities that best met our old-fashioned standards for datacenter and telco infrastructure. The same will be true of our upcoming Montreal location in Q4, 2015.
The only exception is special purpose backplane networks that are designed explicitly to be isolated. These are basically data busses for clusters, not user-facing networks.
I came to the same conclusion. I was aware of rsync.net and tarsnap, and have checked their prices in the past, but for raw storage it's simply not competitive. Some of the other features they offer might make it worth it though if you need those.
Personally I just need a place to dump a backup of my family photo albums and documents. A full backup is around 1TB (deduplicated; somewhat larger raw), and for that there are much cheaper solutions.
Self-hosting at home (or in the office) is a great option for some if you’re not worried about needing an offsite backup. For those that do care about this sort of thing, though, the extra you pay to have someone else manage the thing is well worth it.
Calculation: Pi 40eur, 1TB external disk 50eur, typical lifespan of a disk 5 years (excluding infant mortality, which falls under the mandatory 2-year warranty for new electronics), ~8W power draw is ~€15/year. Let's say you also need to replace the Pi after 5 years just for good measure. That's 15+(90/5)=€33/year for 1TB, which gets cheaper per terabyte with bigger or multiple drives.
DIY is often cheaper if you ignore the cost of “doing it yourself” - and securely storing an offsite server is more than just the cost of a disk.
Native ZFS is also a feature for those who can use it.
If you have everything on one host I'd say your overall setup on that host becomes much more complex because you only need to get hit by one successful exploit chain and all logs on that host cannot be trusted any more.
In the past, the benefits of a firewall were more clear-cut, but these days I think that it’s reasonable to have “defense in depth” without using a firewall as part of your solution.
In order for someone to accidentally delete a production database like in the linked article, many people have to make mistakes.
> The firewall is still helpful in case they hire a new person who opens a port and forgets to close it one day
Let’s talk about this scenario a bit.
What does it mean for someone to “open a port”? Really, what it means is that someone is running a program on the machine which listens to a port. But, why should anybody be running services on production machines manually, except in an emergency?
Normally, any changes you make to production machines go through some kind of configuration management system. You can’t just SSH into one of the prod servers. It doesn’t matter if you are an intern or if you’re the CTO. You don’t have the credentials. Nobody does.
Instead, if you want to run a service on a production machine, you have to make that change in source control, send the change to somebody else for review and approval, and once it is approved, submit it. Your configuration management system then pushes this change out to production systems according to the script.
Of course, not everyone works this way. Not everyone can work that way. But many companies do have tight controls over the production environment and the decision to forego a firewall isn’t unreasonable.
One part concerned me though, in the interview, it mentions "we own (and have built) all of our own platform." and it fails to mention a few critically important key parts of a storage platform, first being encryption. How are personal files being handled? Is encryption being used? Are you able to access this data using a shared key?
As well as contingency: what happens if critically important data is stored on your platform? On your website you mention:
"We have a world class, IPV6-capable network with locations in three US cities as well as Zurich and Hong Kong"
however, it fails to mention whether replication is done across these locations. If technology (drives) is stolen from your datacenter, or mechanical failures beyond your control happen, how will you be able to recover from physical failure if you only appear to be serving from a single location?
Excuse me if I'm wrong, but I couldn't find anything concrete in either the interview or your website. The premise of the platform seems quite well aligned with keeping alive the UNIX philosophy, and reminds me of Tarsnap.
Either way, well made interview and interesting approach to a storage platform.
As a sidenote, what keyboard are you using? It seems really interesting and you failed to mention it in the interview :)
EDIT: It appears that you offer the Geo-Redundant Filesystem as a separate product; maybe you would want to make this a bit more visible on your website beyond only the FAQ and order pages. Either way, it seems like a sufficient move. That does still leave the topic of encryption though.
As mentioned, traffic is encrypted using SSH of course, but is the data itself encrypted on your platform?
We give you an empty UNIX filesystem. So, if you push up files over rsync or sftp, they will sit here unencrypted.
However, there are now excellent "tools like rsync that encrypt the remote result with a key rsync.net never sees" - chief among them being 'borg'. Other options include duplicity and restic - all of which transport over SFTP.
So it's up to you and you have total control. If you want ease of use and you want to browse into your account (or one of your immutable daily snapshots) and grab a file over SFTP you probably don't want to encrypt everything on this end.
On the other hand, if you want a totally secure remote filesystem that is nothing but encrypted gibberish from our standpoint, you should use 'borg'.
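A minimal example of that borg workflow (the repository path and options here are illustrative; check the borg documentation for exact invocations):

    borg init --encryption=repokey-blake2 user@rsync.net:backups/borg
    borg create --stats user@rsync.net:backups/borg::{hostname}-{now} ~/documents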
"Are you able to access this data using a shared key?"
We are running stock, standard OpenSSH and you can, indeed, use an SSH keypair to authenticate with. In fact, you have a .ssh/authorized_keys file in your account so you can specify IP restrictions and command restrictions as well ...
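For instance, a single (illustrative) entry like this pins a key to one source address and strips forwarding:

    from="203.0.113.10",no-pty,no-agent-forwarding,no-port-forwarding,no-X11-forwarding ssh-ed25519 AAAA...example... laptop-backup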
" ... how will you be able to recover from physical failure if you only appear to be serving from a single location?"
A standard rsync.net account has no replication. We are the backup and your account lives in, and only in, the specific location you choose when you sign up. However, for 1.75x the price (ie., not quite double) we will replicate your account, nightly, to our Fremont, CA location.
"As a sidenote, what keyboard are you using?"
It is a Keytronic E03600U2.
 We create and rotate/maintain snapshots of your entire account that are immutable/readonly - so you have protection against ransomware/mallory.
 ... which happens to be the core he.net datacenter - one of the nicest and most operationally secure datacenters I have ever been in.
If you are interested, I would be more than happy to have an extended discussion with you going over implementation options, and updating the client-side script to make it work better with your service. (https://www.snebu.com, https://github.com/derekp7/snebu, and the tarcrypt extensions to tar are described at https://www.snebu.com/tarcrypt.html).
I'd be happy with a socket/pipe to 'zfs recv zpool/benlivengood/data' that I could throw send-stream data at once a day or so.
As well, I mean no offense - the entire platform seems very sturdy, though it leaves some questions which aren't immediately apparent (which may just be me).
If I wasn't content with my current backup solution I would seriously consider yours, and I wish you guys the best of luck. You're one of the few keeping simplicity as a key value.
and cross geographic region replication to protect against natural calamities (earthquake, tornado, floods etc).
It also conjures a managed service with object-level (volume, directory, file) metadata, versioning and strong identity access management capabilities.
rsync.net doesn't seem to do any of these and charges 0.5 cent more per GB/month. What's the secret advantage I'm not seeing?
As I mentioned - you can have that. That "geo redundant" service is managed by us and requires no intervention on your part. It costs 1.75x more.
"It also conjures a managed service with object-level (volume, directory, file) metadata, versioning and strong identity access management capabilities."
We give you an empty UNIX filesystem that you access over (Open)ssh. Whatever metadata and identity management comes with that (or with overlay tools, like borg or restic) you may use as you see fit.
Notably, their website only claims transfer encryption, not encryption at rest. You can of course encrypt your files yourself with your own keys.
While I agree in general, I think rsync's case is special: Unless the file encryption on their side is somehow derived from the SSH connection (so the files are only readable by your connection and while you're connected - is such a thing possible?), it would mean that they have to store the encryption keys somewhere. The far better approach is to treat them as completely untrusted and only store content you locally encrypt before sending it over. That way you don't have to care about them encrypting your data, it's completely in your control. I use restic for that. Works great.
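For reference, the whole restic flow is short (the repository path is illustrative; keys and encryption stay on your machine, so the remote end only ever stores ciphertext):

    restic -r sftp:user@rsync.net:restic-repo init
    restic -r sftp:user@rsync.net:restic-repo backup ~/photos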
Personally, I feel like if you're going to encrypt your data, you should be encrypting it on your end, before sending it to some backup provider who may or may not be keeping your data secure.
What does that mean exactly? Is your IP provider quintuple-homed? Or are you running a bit more complicated setup than you explain but the gist is that you have no particular routing mechanisms?
What does that say regarding your high availability? If one of your locations is down, then it's definitely down until being fixed?
Anyway, that was interesting, just curious about the fact of having no router at all. Thanks!
So we have a dumb switch in our rack, but they have routers.
In 2021 that's a weird bandwidth product and a weird setup but in 2001 it was "normal" and we just stay with that setup out of inertia (and the fact that we can't connect to he.net in San Diego).
A similar setup exists for us in Zurich with init7.
However, you are correct and we need to edit that FAQ language: our geo-redundant site in Fremont does not work that way.
(I will note that it has been 11 years since we put that location in place (he.net in Fremont) and it has had zero minutes of downtime)
A tremendous amount of complexity and attack surface are eschewed by living with that setup and we're always looking for new ways to make that tradeoff.
Castle Access datacenter on Aero Drive. It is now a KIO-managed datacenter.
We were there at a similar time - I probably saw the rsync.net servers.
Not lack of physical memory, but lack of ability to address it as the UFS2 tools, like fsck, were not written to handle billions of inodes ...
We really can't thank Kirk M. enough - he wrote custom patches to ufs and fsck just for our (dirty) filesystems and, as I mention in the article, eventually gave us the push to migrate to ZFS.
> I’m down with Bill Gates
> I call him “money” for short
> I phone him up at home
> And I make him do my tech support
If you had to do it all over again, what would you do different (if anything)?
In terms of product / tech-stack I don't think I would change anything.
In terms of marketing and word of mouth I think we should have given away hundreds of free accounts in the early years (2006-2010) rather than trying to chase them down as paying customers. I believe we had a lot of decent word of mouth but I don't think I appreciated the power of influencers and their ability to amplify a message.
As for business decisions, I continue to wonder how much business we miss due to not having a Canadian location and we have considered deploying in Montreal for years now but have not pulled the trigger. I don't know if a Canadian location (but still a US company) solves the regulatory requirements of Canadian customers.
Even though I love free plans I think it’s better for small startups to grow organically with “cheap and easy to cancel” instead. Or offer credits for new users.
Is this a (lack of) capital issue or simply an uncertain sustainable revenue stream issue?
You said: This might seem odd, but consider: if an rsync.net storage array is a FreeBSD system running only OpenSSH, what would the firewall be ? It would be another FreeBSD system with only port 22 open. That would introduce more failure modes, fragility and complexity without gaining any security.
You seem to suggest the big firewalls do not bring any value to the table. I always thought they had more "intelligence" - dropping sessions based on some bad patterns, guarding against DDoS (to some extent), etc.
Are you saying BSD is as good as these expensive boxes? Does it apply to SSH only or HTTP(s) and some other traffic as well?
- No nonsense description of what they do
- Clear and simple pricing
- Simplicity as a core feature
Big fan. Look forward to using your services in the future.
Having said that, I do think the site would really benefit from a new paint job. A good UXer can make it so much more aesthetically pleasing, while still retaining its simplicity and quick load time. It doesn't have to be fancy. Just static HTML with elegant styling and a few minor tweaks.
For example, I was really surprised to see big name clients such as Disney, 3M, ARM & ESPN hidden a few clicks away (behind a button which wasn't very informative, from what I remember). Same for being in business for 20 years. A good UX/product person will tell you to put this front and center in your landing page, and rightfully so.
@rsync: I love what you're doing, but please get a UX person involved :)
Their support people confirmed it doesn’t work (though they didn’t seem to understand why it would be fine for them to support it as advertised...) yet 6 months later they still advertise that they support it, even when I have e-mailed to remind them (and it still doesn’t work either) :(
This doesn’t make sense given that the specific invocation of “rclone serve restic --stdio” doesn’t open any network sockets; it’s no less safe than e.g. “tar”.
However, in 2005 or 2006 when we spun out of JohnCompanies and incorporated under the name "rsync.net" I requested, and was given, explicit permission to use the name and domain by the maintainers of rsync.