Hacker News: ech's comments

had to edit the title to fit the length limit. the paper title is "More is Less: How Group Chats Weaken the Security of Instant Messengers Signal, WhatsApp, and Threema"

my edit shouldn't change the meaning.


that is not what an air gap is. if at any time there is an exchange of data between the air-gapped hosts/network and a non-air-gapped one, there is no air gap anymore. and yes, that includes an analyst taking a sample to an internet-connected host.

side channel exfiltration techniques like "fansmitter"[1] and "diskfiltration"[2], or the well known TEMPEST attacks, are more interesting challenges in air-gapped designs than a basic security boundary violation like this one. (and they are solved in very much the same way: keep the secure terminals as dumb as bricks, with extremely limited, known inputs and outputs; prevent the insertion of unclassified electronics in the secure zone through policy, and enforce it through physical searches; RF-harden the zone with a faraday cage, absence of windows, etc... the "Technical Specifications for Construction and Management of Sensitive Compartmented Information Facilities"[3] is a great resource for those interested in such designs)

incidentally, there are awfully few true air-gapped networks in the wild (excluding ICSes and assorted single-purpose hosts/networks whose lack of network connectivity is incidental rather than a primary design feature). they're operationally heavy precisely because of their nature, and outside of very specific situations, their security is not perceived as cost efficient by management, for good reasons.

[1] https://arxiv.org/abs/1606.05915
[2] https://arxiv.org/abs/1608.03431
[3] https://fas.org/irp/dni/icd/ics-705-ts.pdf


The paper never once uses the term "air gap." It's only in the headline as submitted to HN.


Ok, we'll revert the title to the original. (Submitted title was "Air-gapped data exfiltration via cloud AV sandboxes".)

Submitters: please read the HN guidelines, which ask you to use the original title, unless it is misleading or linkbait.


My bad, it was titled like that on Twitter, but I should have checked the title in the PDF.


it has been submitted as such on both HN and reddit. i think it is reasonable to clear up any misunderstanding.


I think looking at operational issues with air gaps is much more interesting than TEMPEST attacks, which require physical proximity to execute. If someone has gone to the trouble of air gapping their network, they almost certainly have physical security.


you'd be surprised. when specifications get passed around in the endless maze of defense-affiliated subcontractors and sub-subcontractors, they may be implemented in all kinds of creative ways. and i entirely agree that the operational issues of enforcing the air gap are by far the most challenging cost in this kind of environment. the culture of business environments is by and large ill suited to the kind of strict enforcement these environments require.

if RF leaks are part of your threat model, then i would definitely not class them as "proximity attacks". without adequate shielding, antenna black magic and a rented apartment on the other side of the street work perfectly well to capture signals of interest.


Could TEMPEST attacks be defeated by defenders transmitting RF noise around the target within the same frequency spectrum?


absolutely. raising the noise floor would make it harder for the adversary to pick up valuable signals. but why bother with a complicated jamming setup, one that has to cover a very wide spectrum, at high power, without interfering with your own equipment, and that still lets signals leak out which a sufficiently motivated and well funded attacker may intercept (we're pretty good at extracting valuable signals from noise), when you can just put on your RFP to stick the lot in a faraday cage, use optical connectivity inside it, forbid wireless HIDs, buy shielded monitors and shielded phones, and put a couple of scary bald men in front of the door.


By that definition there's no such thing as an airgapped computer, unless you are talking about computers created by an alien civilization that arose in another galaxy from an independent origin of life. The useful definition of the term, the one that has a nonempty set as referent, is where there isn't a continuous network connection, and human intervention is required for physical transport of each sample of data.


air gapping removes connectivity to the wider world, and a computer system (as in, a system of computers) can totally have value without being connected to anything that is outside of the organization.

no need to resort to hyperbolic alien computers that are anyway easily hacked by plucky heroes with a taste for ascii skulls.

you're correct in that of course we may want to insert and retrieve data from a secure zone. but when those types of systems are necessary, "manual human intervention" is far too blurry. manual data entry/retrieval by vetted people? that's a HUMINT problem the organizations that tend to require those systems are familiar with, and have adequate procedures to deal with. same with printouts and assorted "low tech" issues. inserting a USB key while ensuring the whole chain doesn't compromise your very valuable system is a far more complex problem, one you probably do not want to deal with if you can avoid it. there is a reason the "usb gadget giveaway" and the "lost usb key in the parking lot" are staples of flashy pentest demos: they work remarkably well, and are remarkably hard to defend against without enacting procedures that make life much harder for everyone. (try suggesting epoxying the USB ports of every single computer and server in a medium to large company to its CISO)

like another commenter remarked, it is truly the operational challenge of ensuring your system stays properly isolated from the rest of the world that is hard to deal with. cutting a cable is definitely not the end of it. i'm sure people who deal with high assurance environments, and are more qualified than i am to answer specifics, have their share of stories of creatively compromised isolated environments.


gitlab is gearing up to be the one-stop-shop from development to production. i really like that. unifying the toolset developers and ops have to understand is really a boon for everybody involved.

merge restriction as a feature may sound silly, but in my experience, when driving organizations toward CI/CD, especially "mature" organizations, having the computer say no instead of a human is a major facilitator in the adoption of good practices. now we just need to protect tests in tree. (currently, i tend to favor keeping tests as a git submodule, so we can prevent chronic offenders from altering or commenting out tests that should pass)
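for the curious, the submodule approach looks roughly like this (a sketch; the repository URL and names are illustrative). the point is that the test suite is pinned at a commit, so changing it requires an explicit, reviewable pointer update in the application repo:

```shell
# Hypothetical setup: the test suite lives in its own repository,
# with its own (stricter) access controls.
git submodule add https://example.invalid/org/project-tests.git tests
git commit -m "pin test suite"

# Updating the pinned tests is now a visible, reviewable change
# in the parent repo's history, not a quiet in-tree edit:
#   git -C tests pull && git add tests && git commit -m "bump test suite"
```

a chronic offender can still edit files under tests/, but CI checks out the pinned commit, so local edits never reach the pipeline without a pointer bump.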

plus U2F integration, so now i won't hear anymore how "terribly slow" it is to have to whip out a phone to grab a TOTP code...


>merge restriction as a feature

The ability to enforce code review (allow users to approve merge requests but not push directly) has been requested since 2014 [1] with no support from GitLab. However, it looks like there's now a chance it's coming soon. [2]

[1] https://github.com/gitlabhq/gitlabhq/issues/6432 [2] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/4220


if locked code review is coming, i'll be all over it. just let me select multiple approvals, and nested approvals.

it's an immensely useful feature. once again, the more i can define a process in software, the better it is for most non-western, non-modern-management companies.


Are you familiar with the merge request approvers feature in EE https://about.gitlab.com/2015/06/16/feature-highlight-approv... ?


This is nice, but does not solve the problem of giving a user the power to approve merge requests but not accidentally "git push origin master."


With protected branches you can prevent force pushes to master. You would have to do a manual merge and then git push origin master to cause a problem, I think.


I don't generally use feature branches on solo projects. Working directly on master (and pushing origin master at arbitrary moments to back up my work) is a deeply ingrained habit.

A code review tool that's off by default and trivial to bypass in your sleep is a poor code review tool.

This is easy to prevent in Phabricator, which will block pushes to master if they contain changes not also present in an approved unit of code review. I run into that wall a few times a week, and am reminded to move my commits into a branch.


Thanks, good point. We're extending protected branches so all code will have to undergo a code review, see https://gitlab.com/gitlab-org/gitlab-ce/issues/18193


i am, i get people to use it, and it works well. but superuser2 is right. to be honest, i think we're touching on fundamental limitations in the git model that gitlab cannot solve without becoming rational team concert, and catering to a fundamentally different audience.


> plus U2F integration, so now i won't hear anymore how "terribly slow" it is to have to whip out a phone to grab a TOTP code...

I don't know if this applies to you, but 1Password 6 has support for generating TOTP passwords (they have a neato QR scanner built in, or one can always just input the key or key URI by hand)

It's been a revolutionary change in my day-to-day frustration level



Thanks for your kind words. We indeed want to be the one-stop-shop from idea to production. For the complete scope see https://about.gitlab.com/direction/#scope


yeah, i read the scope. but since the border between scheduling, configuration management, monitoring, etc. is extremely blurry, with a high number of interconnections between systems and no kind of standard contract between them, i'm more wary about the introduction of these features.


The idea is to not do container scheduling in GitLab. We're considering adding some monitoring. Standard contracts are not easy but I think it is doable. We already shipped deploy to Kubernetes https://about.gitlab.com/2016/03/22/gitlab-8-6-released/ and we're working on more. For more information about our deployment vision please see about.gitlab.com/direction/cicd


Isn't U2F a Chrome only feature at the moment?

Are you using that seriously already?


Firefox added it ~3 weeks ago under a flag in Nightly, should be in stable a few months from now.


firefox has a plugin that works fine, from what i've heard from others. i still rely on totp by default for generic two-factor auth, since i can provide already-working solutions from web to ssh authentication. however, yes, for web applications some clients went the u2f route, and the users overwhelmingly prefer it. since u2f is gaining traction very quickly in products (case in point: gitlab) and the user experience is so much simpler for a similar level of security, it'll probably become my go-to in the near future.
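part of why totp is so easy to deploy everywhere is that the whole mechanism fits in a few lines of stdlib code. a minimal sketch of RFC 4226 HOTP / RFC 6238 TOTP (the same algorithm those phone apps run), checked against the RFCs' published test vectors:

```python
import hashlib
import hmac
import struct
import time


def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a 64-bit counter, dynamically truncated."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def totp(key: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP keyed on the current 30-second time window."""
    t = time.time() if for_time is None else for_time
    return hotp(key, int(t // step), digits)


# RFC 6238 appendix B vector: SHA-1, seed "12345678901234567890",
# T = 59s, 8 digits -> "94287082"
assert totp(b"12345678901234567890", for_time=59, digits=8) == "94287082"
```

u2f, by contrast, is a challenge-response signature protocol and can't be reimplemented in a dozen lines, which is part of the ease-of-deployment tradeoff mentioned above.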


Thanks! Glad you like the direction!


pyrotherapy is a real thing, and has been used very successfully in the treatment of the later stages of syphilis. of course, it only works because syphilis is really temperature sensitive, which isn't the case for the pathogens concerned here.


ccTLDs are the responsibility of "designated managers" for each sovereign country (see https://tools.ietf.org/html/rfc1591).


cough https://news.ycombinator.com/item?id=10352001 cough

we're in the process of getting all the zone files we can to reduce the amount of DNS requests we have to do, but the real kicker is whois databases, for example AFNIC asks for 10K€ to get access to a copy of the database...


Oh I missed that. You're doing good work there!


Hi HN

a lot has changed since our last Show HN, and i figured it was time to share these changes with you.

for those who weren't here the first time: we try to catch domain squatting using a bunch of techniques we already used when doing it manually, but in a purely automated fashion. your root domain (think 'facebook', 'twitter', 'ycombinator', etc...) goes through blenders that generate variants, which we'll then gather info on.
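to give a rough idea of what one of those blenders does (an illustrative sketch, not our actual generators), here are a few classic variant techniques applied to a root label:

```python
def blend(root, keywords=("cheap", "free", "login")):
    """Generate squat-style variants of a root domain label.

    Illustrative techniques only: character omission, adjacent
    transposition, repetition, and hyphenated keyword variants.
    The keyword list here is a made-up example.
    """
    out = set()
    for i in range(len(root)):                      # omission: "facebok"
        out.add(root[:i] + root[i + 1:])
    for i in range(len(root) - 1):                  # transposition: "afcebook"
        out.add(root[:i] + root[i + 1] + root[i] + root[i + 2:])
    for i in range(len(root)):                      # repetition: "ffacebook"
        out.add(root[:i + 1] + root[i] + root[i + 1:])
    for kw in keywords:                             # keyword: "cheap-facebook"
        out.add(f"{kw}-{root}")
        out.add(f"{root}-{kw}")
    out.discard(root)                               # the original is not a variant
    return out
```

each variant then gets an appropriate TLD appended and goes through whois and dns resolution to see whether anyone has registered it.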

we now have all the basics in place, so i can confidently call it production ready. free accounts are still the same deal: one domain, five TLDs, all present and future generators, whois and dns resolution (plus a few more features still in the oven), and one run per week, which should be enough if your needs are not massive and/or specific. there are also now paid options for people with more intensive needs, either timing-wise (down to one run per day; one run per 4 hours will be a possibility once i'm confident 1: we can handle it, 2: it can actually provide value in the real world) or number-of-tlds-wise.

notifications! yeap, i know it's basic, but we now send you a mail when a run is complete, so you don't have to bother reloading, waiting for that progress bar to reach 100%. a few clients asked us about sms notifications, but i'm not sure about multiple notification channels yet.

so what's next:

we have a bunch of stuff that stayed on the backburner while i was working on making production as autonomous as possible (complete CI/CD stack, built with chef, openstack heat, jenkins, the whole shebang) and the other dev was ironing out the kinks in interacting with horrible protocols like whois (for the sake of everyone's sanity, i really hope rdap (http://about.rdap.org/) gets traction) or misbehaving dns, or just fixing plain old bugs. we're now bringing them back to the front of the workbench.

parking detection.

this one is simple, and everybody will get it: i noticed a large number of parked domains in resolution runs, so they'll now be marked as such.
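the detection itself can start as a dumb marker scan over the landing page (a sketch; the marker list below is illustrative, real parking pages use many templates and would need a much larger, maintained list):

```python
# Illustrative markers commonly seen on parked landing pages.
PARKING_MARKERS = (
    "this domain is for sale",
    "buy this domain",
    "domain parking",
    "sedoparking",
    "parkingcrew",
)


def looks_parked(html: str) -> bool:
    """Crude heuristic: does the fetched page contain a known parking marker?"""
    page = html.lower()
    return any(marker in page for marker in PARKING_MARKERS)
```

in practice this would be one signal among several (nameserver patterns of parking providers being another strong one).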

automated phishing detection.

this has been a major request, so i'm prototyping a computer vision system (ab)using ghost.py and opencv to see if i can get something with a reasonable false positive rate.
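for a sense of the kind of comparison involved: one cheap building block is a perceptual hash of the rendered page, compared against the brand's legitimate page. a stdlib-only average-hash sketch over an 8x8 grayscale thumbnail (in a real pipeline the rendering and downscaling would be done by something like ghost.py and opencv; this is not our actual detector):

```python
def average_hash(pixels):
    """Average hash of an 8x8 grayscale thumbnail (8 rows of 8 ints, 0-255).

    Each bit is 1 if the pixel is brighter than the mean. Visually
    similar screenshots yield hashes with a small Hamming distance.
    """
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Number of differing bits between two 64-bit hashes."""
    return bin(a ^ b).count("1")
```

a variant domain whose screenshot hashes within a few bits of the brand's real page, while being hosted elsewhere, is a strong phishing candidate.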

malware detection.

a smaller request, because it's already well covered by other products. for the moment, paying accounts get access to google safe browsing, and i have access to a bunch of threat exchange APIs ready to enter the quorum. there's a lot of data sharing between those, and i don't want that sharing to generate false positives. i have also been slowly working on PRs for cuckoo sandbox that'll help me launch fleets of sandboxes in various configurations (hopefully i'll have enough variants that clients can more or less choose the one corresponding to their production environment, to try and catch targeted attacks)

keyword prediction based on the root domain.

we have a keyword generator that can generate domain variants, think 'cheap-brand' for 'brand', but if you're like me you probably can't think of a lot of those (i had good success asking marketing people for ideas). once again, AI to the rescue: i'm tracking which keywords had the most 'success' in finding resolving variants, which means that once i'm able to establish lexical domains, i'll be able to offer everyone a 'most likely keywords for this domain' helper to feed the generator.
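the 'success' tracking can start as a simple hit-rate ranking over past runs (a sketch; in our case the observations would come from resolver results, and the lexical-domain grouping mentioned above would sit on top of this):

```python
from collections import Counter


def rank_keywords(observations):
    """Rank keywords by how often their generated variants actually resolved.

    observations: iterable of (keyword, resolved) pairs from past runs.
    Returns keywords sorted by hit rate, best first.
    """
    hits, tries = Counter(), Counter()
    for keyword, resolved in observations:
        tries[keyword] += 1
        hits[keyword] += int(resolved)
    return sorted(tries, key=lambda k: hits[k] / tries[k], reverse=True)
```

feeding the top-ranked keywords back into the generator biases future runs toward the variants squatters actually register.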

an API!

at the very beginning, when dinosaurs roamed the earth and the iphone 5s was the cool product of the year, squatmon was just a very large and very ugly python script i used in various recon engagements. when we decided to slap a shiny web interface on it and share it with others, we didn't take the time to make the API the first class citizen, with the web interface just the reference client implementation. this is a mistake we intend to correct, so that any person with an account, free or otherwise, can integrate any of the functionality they have access to as part of something bigger. (i have written an example postfix milter that's far too slow to be used in production, but it can contribute to the spam score of an email, for example)

if you have any question, want to report bugs, or anything really, don't hesitate to contact me, my email address is in my profile.

edit: i'm terrible and i said one run per month on the free account. it's of course one run per week


yes apparently i broke OCSP. thanks for the tip i'm fixing it now.

edit: apparently the error lies with certificate transparency and our certificate missing from public CT logs. the security of the connection itself is not impacted, but i'm looking into it.


Hi HN.

(one of the) squatmon developer(s) here

this is squatmon, an application we built over the last few months, with the goal of automagically detecting brand and domain squatting.

i'd love to read your comments.

