
Oh it's nice to see how well you've pivoted from PullReview! It's a great idea and I'll be trying it out shortly. As a side note, @crohr was one of our first beta testers and a paying customer of jidoteki.com (now on-premises.com). Sadly, their PullReview service didn't do as well as planned, but PullPreview seems to be a home run. I'm happy to see them building this service and look forward to watching the development.


Hi Alex,

I think you’re confusing me with the original creators of PullReview. My other venture is packager.io, which you might remember as well. Congrats on the continued development of your on-premises packaging solution!


Ohhh Cyril haha you're right! Yes of course I remember packager.io. Sorry about that!! In any case, I still find this very interesting and will definitely check it out. Thanks for the heads-up.


A minimal immutable VM image which is finely tuned and hardened is the exact approach we're taking with https://on-premises.com - except we haven't focused on k8s workloads. We've found that customers would much rather import a VM than "install" something; however, there are valid use cases which require special monitoring tools and other customizations that are not possible on an immutable system. Gravity is interesting and seems to meet that demand.


This looks interesting. Is there a way to try out Meta without being funneled into the sales pipeline?

What is the workflow for baking a machine? Are you using packer under the covers or some other tooling? What on-prem machine image formats are supported?


There's still a bit of "manual" work to get someone up and running on our Meta appliance, so unfortunately you would have to go through our sales process. However, if you just want a video or screen recording of how it works, I can put something online in the next hour or two.

For baking the machine, we use Ansible under the covers and have a set of Lisp scripts to manage everything. As for image formats: qcow2, raw, vhd (and vmdk in the .ova file).

Not trying to hijack Gravitational's thread, please contact me (email in profile, or 'aw-' on FreeNode) if you want to discuss more.


A few years ago I wrote a Ruby example if anyone wants to try it: https://github.com/aw/ruby-decision-table
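The linked repo is in Ruby, but the idea is simple enough to sketch in a few lines of Python. The rules, fields, and action names below are hypothetical, purely for illustration of how a decision table maps a set of conditions to an action:

```python
# Minimal decision-table sketch: each rule pairs a set of required
# field values (conditions) with the action to take when they all match.
RULES = [
    ({"member": True,  "coupon": True},  "free_shipping"),
    ({"member": True,  "coupon": False}, "10_percent_off"),
    ({"member": False, "coupon": True},  "5_percent_off"),
]

def decide(facts: dict, default: str = "no_discount") -> str:
    # Return the action of the first rule whose conditions all hold.
    for conditions, action in RULES:
        if all(facts.get(key) == value for key, value in conditions.items()):
            return action
    return default
```

The win over nested if/else is that the whole policy lives in one data structure you can review, test, or load from a spreadsheet.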


I've noticed this as well. The trick is to _remove your credit card_ from your Amazon account. It significantly reduces the "traps" and prevents them from automatically subscribing you to Prime. I now purchase Amazon Gift cards instead (also, free shipping when using a gift card).


Oh hmm, the gift cards thing -- are the gift cards no-overhead? Like, can you buy a $100 gift card for yourself for $100, and then use it to buy things with free shipping?


Yes, exactly! In Japan there are also no fees or sales tax on Amazon gift cards (physical ones), and you can purchase them in arbitrary amounts (I think the max is 100,000 JPY, ~$1,000 USD). Amazon.com provides one-day free shipping with gift card purchases, but there could be a surcharge for oversized, heavy, or international items. I'm not sure if any of this applies to digital gift cards though.


A few things you can do:

1) Place Nginx in front of the webhooks service (it will allow you to filter traffic by IP, request headers, etc. - a bit less efficient than a proper firewall IP filter).

2) Listen for webhooks on a port other than 443 (GitHub allows this).

3) Use a unique URL for webhooks, with a unique/random/long token as part of the URL (this way only someone who knows the exact URL will be able to reach it).

4) Of course, use a valid TLS certificate.

5) Validate all headers sent by GitHub (user-agent, x-github-delivery, etc).

6) Provide a unique/random/long shared "secret", different from the URL token, for validating the sha1 signature of the request.

7) Only accept a valid JSON payload and application/json content-type.

8) Only accept specific events from the x-github-event header (ex: push, ping).

9) Reject EVERYTHING ELSE with a 404.

10) Validate the actual content of the JSON payload (does it contain the proper key/value pairs you need? discard the rest).

11) Enable audit logging of requests, so you can see any attempts at people trying to "hack" your webhooks service.
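The signature check is the one people most often get wrong. A minimal sketch in Python (the header name and example secret are illustrative; GitHub sends the digest in the X-Hub-Signature header as "sha1=<hexdigest>"):

```python
import hashlib
import hmac

def verify_signature(secret: bytes, payload: bytes, signature_header: str) -> bool:
    # Recompute the HMAC-SHA1 of the raw request body with the shared secret.
    expected = "sha1=" + hmac.new(secret, payload, hashlib.sha1).hexdigest()
    # Compare in constant time to avoid leaking the digest via timing.
    return hmac.compare_digest(expected, signature_header)
```

Note that you must hash the raw bytes of the body, not a re-serialized version of the parsed JSON, or the digests won't match.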

I recommend running the webhooks (external service) as an entirely different application from your internal services. If it's a nodejs app, and your main internal app is nodejs, then you'll need to run 2 nodejs processes (and not as root).

Also if you can, try running the webhooks service on an entirely different machine (vm?) - and have it talk to Jenkins through the network (ex: as others have suggested with a message queue or API call).

If you're filtering by IP (which might be troublesome if GitHub's IP range changes), most of the above will be overkill.

Edit: to answer your last question: security is a process, whether it's full-time or not depends on how much you care. Edit 2: fix typo


SEEKING WORK - Remote

Ex-GitHubber specializing in infrastructure design, automation, disaster recovery, capacity planning, and scalability.

I'm not interested in AWS or other cloud platforms, since I prefer working with bare-metal and VM deployments, switches, routers, and hardware appliances.

Tech stack: Linux/BSD, PicoLisp, Ansible, Cisco, *SQL, Redis, HAProxy, Nginx, Lua...

Contact:

GitHub: https://github.com/aw

Email: alexwilliamsca at gmail dot com

Website: http://alexwilliams.ca

Business: https://unscramble.co.jp

I'm always open to interesting projects, so let's talk!


Founder of https://jidoteki.com here - we provide the ability to go from SaaS to On-Premises in less than one week[1] as well, without the need to jump into Docker.

I think ReadMe's approach is pretty good, and I applaud them for making the move.

Our aim is a bit different from others, as we focus exclusively on "actual" on-premises (local virtual machines), as opposed to "on someone else's premises" (AWS).

[1] http://blog.unscramble.co.jp/post/128610241043/production-re...


This is an interesting post, as Gravitational is doing something very similar to us (https://jidoteki.com). In fact, that blog post reflects much of what I wrote a year ago on our company blog: http://blog.unscramble.co.jp/post/134388066008/why-we-care-a.... We focus on the more commonly known concept of on-prem (not AWS/Kubernetes), in other words dedicated hardware in a private datacenter or hosting facility, by creating fully offline, updateable, and secure virtual appliances designed specifically for self-hosting.


Let's just skip this like we did with IPv5.


The Keyless SSL server is open source and available on GitHub - https://github.com/cloudflare/keyless - although I'm wondering if its dependence on OpenSSL was a good choice.


Open source but a rather restrictive license!


The source is published, but it wouldn't count as an OSI open source licence.

Also note the patent mentioned in yesterday's discussion.


My guess is that the customer for whom they developed this has crypto-acceleration hardware that is supported by OpenSSL. Also, this is only doing the actual crypto operations, not running the whole SSL protocol.

Edit: I guess it's also exposing SSL publicly for talking to CloudFlare, which is I guess why they suggested IP firewalling as well (in case of another HeartBleed).

And it's only a reference implementation!


You've hit on the most dangerous part of this whole scheme: increased attack surface. There is now a much higher likelihood of a successful attack taking down the site, since the client needs to both open up and harden its keyserver on the internet (I'm presuming they don't have private connections between CloudFlare and the customer site).

And obviously if any hole is found in either the client's keyserver or the network or host it runs on, the private key is now at risk. One could argue that the client is much less reliable in terms of hardening their security than CloudFlare is.

In order to reduce the attack surface, the client's keyserver needs to be the one connecting out to CloudFlare; that way they don't need to expose a service on the internet, and an attacker won't know what to attack to take down the keyserver.


What? Increased attack surface? For the people to whom this matters, the alternative to this service is running your own HTTPS web server, where the server holding the keys in its memory is also running server apps that directly interface with the outside world, or at the very least a proxy through which all traffic is funneled.

This system reduces the attack surface to the minimum possible: a single oracle that takes encrypted data and returns it decrypted, all secured with pinned, internally signed client certificates where all connections are coming from a trusted partner and where the TLS security can't be downgraded.

To me, this seems like the maximum reduction in attack surface that is theoretically possible.
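The "oracle" idea is easy to illustrate with textbook RSA. This is a toy with tiny primes, NOT real cryptography, and not CloudFlare's actual protocol - it just shows the shape of the design: the private exponent never leaves the keyserver, which exposes exactly one operation:

```python
# Toy RSA parameters (illustration only - never use numbers this small).
p, q = 61, 53
n = p * q                     # public modulus
e = 17                        # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent: stays on the keyserver

def keyserver_oracle(ciphertext: int) -> int:
    # The ONLY operation the keyserver performs: a raw private-key operation.
    # It never reveals d, and accepts connections only from pinned,
    # internally signed clients (authentication omitted in this sketch).
    return pow(ciphertext, d, n)

# Edge (CloudFlare) side: holds only the public key (n, e), and asks
# the oracle to perform the private-key step of the handshake.
message = 42
ciphertext = pow(message, e, n)
recovered = keyserver_oracle(ciphertext)
```

Everything else about the TLS handshake can run at the edge; only this one round trip touches the customer's box.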


I'm talking about DDoS attack surface, not private key security attack surface. If you want DDoS protection, this scheme is less reliable than the alternative (having CloudFlare host the private keys).

To get the most secure private key handling, a system like this would have the smallest attack surface only if the keyserver connects out to the http server. By having the http server connect to the keyserver, you expose the keyserver to attack, no matter what protections [like IP whitelisting] you put in place.


How would an attacker learn of the private key server? If that were easy, the very concept of CloudFlare would be undermined, which obviously is not the case. Having learned the address of the private key server, how would an attacker proceed? ISTM that all she could attempt would be to repeatedly try to set up the "secure tunnel" mentioned in TFA. At some point the private key server could just ignore those. I don't see much DOS potential there.


> How would an attacker learn of the private key server?

If they're using CloudFlare and they're a large financial institution we'll just assume they're using this new service. Now we just need to find the location of the keyserver.

There are a variety of ways to locate a server when you don't know where it is. The default is to scan the target's network. To find the network you can look at the ARIN info for addresses associated with various hosts under their domain and look at non-CloudFlare allocations. Or you could do a simple brute-force search of DNS records (there's bound to be a DNS record for it so CloudFlare can connect to it) and look for an interesting-seeming host. Or you could look at the HTML output for embedded hostnames in comments (pretty common for large sites). This is a very brief list of examples.
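The DNS brute-force approach boils down to generating candidate hostnames from a wordlist and resolving each one. A sketch of the generation step (the wordlist is hypothetical, and the actual resolution of each candidate is omitted - real tools ship lists of thousands of common names and hammer a resolver with them):

```python
# Hypothetical wordlist of subdomain names an attacker might try.
WORDLIST = ["keys", "keyserver", "hsm", "origin", "internal", "api"]

def candidate_hosts(domain: str) -> list:
    # Produce the hostnames to resolve; a hit on a non-CloudFlare
    # address is a candidate for the hidden server.
    return [f"{word}.{domain}" for word in WORDLIST]
```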

> Having learned the address of the private key server, how would an attacker proceed?

Once you have the keyserver address you can just DDoS it, and all of a sudden no new SSL handshakes can complete, meaning bye bye static content over SSL. The ISP would have to null route all the DDoS traffic to the keyserver, which can take time. But that's just a network attack.

You can investigate further and attack the host/service. Maybe they forgot to whitelist the CloudFlare IPs and you have open access to the box via some random service, or the keyserver TLS port. Maybe you abuse BGP and find a way to spoof one of CloudFlare's IPs. Or maybe you social engineer somebody into adding a new address onto the stack of their IP whitelist. Or maybe it's "in the cloud" and you can attack the machine from the cloud provider's network.

Once you can connect, you can pick your poison with how you want to abuse or attack the machine, but there's almost no point when a plain-old DDoS will do.

All of these are the same attacks you'd use on any origin server for any CDN. The difference is that normally a CDN will keep serving cached static content if your origin goes down. With the keyserver down, you can't even serve cached static content, except for non-encrypted HTTP of course.


This is CloudFlare's entire core business model. CloudFlare is just a reverse proxy+cache. The original web server is still run by whatever company is using their services.

Fixed: "If they're using CloudFlare and they're a large [anything], we just need to find the location of the [origin webserver]."

This hasn't been a problem for any of their customers that do this so far (hint: all of them), adding a keyserver is no different.


It's different in that now attackers can take down even cached sites. Before it was only dynamic content that was vulnerable. It's the difference between having your home page up and serving cached content, or your entire HTTPS site being down.

To give you an idea how this affects people in the real world, some websites will make thousands and even millions of dollars an hour in advertising revenue and paid services. They depend on CDNs to handle the traffic of that many users and make it seem like everything's moving smoothly even in the event of a temporary outage. If that site goes dark completely, they lose tons of revenue, and people get fired.

In another case, let's say a large financial institution, they might need to provide authoritative and highly sensitive information around the clock to organizations that basically control the flow of money around the world. Downtime isn't really an option. Without Keyless, this information stays up, cached. With Keyless, an outage can make this information disappear, with potentially far-reaching global financial repercussions.

To reiterate: if you don't use Keyless, your (HTTPS) static content stays up under an outage. If you do use Keyless, your (HTTPS) static content goes dark under an outage. (For https clients that don't have an existing valid session ticket on the CloudFlare server)

You can always use plain HTTP and avoid the outage, of course. But for large financial institutions that's probably not an option.

Also, please note that I'm really not trying to be inflammatory. I'm just pointing out that this is a new, additional point of failure, and it can have real consequences for the content people provide over HTTPS.


You make good points.

> It's the difference between having your home page up and serving cached content, or your entire HTTPS site being down.

To be fair, any clients with a valid session would see no difference between Keyless all the way down to plain HTTP (i.e. only static content). So the real difference between keyless and more typical setups is that new users can no longer see static content if the key server is down.

Given that the key server would see a vanishing fraction of the bandwidth and number of requests, in addition to its extremely simplified and locked-down API, I would guess that it's much more difficult to take down than a normal web server; you'd essentially have to take out the network equipment around it before it became overwhelmed itself. In addition, only a tiny fraction of legitimate packets would need to get through to support a large number of clients. But perhaps I'm mistaken.

Now I'm curious how often this type of attack occurs, i.e. overwhelm the tunneled servers behind CloudFlare's back.


If the http server is compromised you're screwed either way.

If the SSL key is on the http server when it gets compromised, you've just lost your SSL key. Very bad!

If it's not, the compromised http server could DDoS the key server. Less bad! Just change the key server address and point a new http server at it.


No, the alternative in situations like these, when you need the added security, is to use an off-the-shelf HSM.


Out of curiosity what would you have used?



GPLv2

