By default, it blocks third-party scripts/cookies/XHRs/frames (with an additional explicit blacklist). You then manually whitelist on a matrix which types of requests from which domains you want to allow. Your preferences are saved.
It is a bit annoying the first time you visit any new domain, because you need to go through a bootstrapping whitelist process to make it work. After a while I find I do it almost automatically though.
I use it in conjunction with uBlock Origin and Disconnect, and it still catches the vast majority of things. As a nice side effect, I find I keep pretty up-to-date with new SaaS companies coming out!
I don't even waste time or CPU cycles with browser-based blocking applications. Steven Black's maintained hosts files are the best for blocking adware, malware, fake news, gambling, porn and social media outlets.
 - https://github.com/StevenBlack/hosts/
Depending on your threat model, you might need to go the proxy/firewall route.
A hosts file is a weird middle ground - it has to be installed and maintained on every device, many of which (e.g. iPhone/iPad) won't let you do that. It's better to set up a local DNS server, which will serve every local machine; and as I mentioned, doing this at the firewall level is better yet.
It doesn't always work well while on the road, though. For those cases, I have a Docker container running diginc/pi-hole (with some additional hosts-file blocking going on), then I point my laptop's DNS at that and am good to go.
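A rough sketch of that setup (the image name comes from the comment above; the port mappings, password variable and restart policy are assumptions that may differ between image versions):

docker run -d --name pihole \
  -p 53:53/tcp -p 53:53/udp -p 80:80/tcp \
  -e WEBPASSWORD=changeme \
  --restart unless-stopped \
  diginc/pi-hole

Then set the laptop's DNS server to the Docker host's IP.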
1) Processes and machines that bypass the hosts file are also caught.
2) A large hosts file takes time to parse, line by line, slowing every DNS lookup. A DNS server should cache the whole thing better.
Dnsmasq takes entries in a hosts file format.
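For example, a minimal dnsmasq snippet that reuses a downloaded hosts-format blocklist might look like this (the file paths are just placeholders):

# /etc/dnsmasq.d/blocklist.conf
# read an extra hosts-format file in addition to /etc/hosts
addn-hosts=/etc/hosts.block
# or blackhole a whole domain and all of its subdomains in one line
address=/facebook.com/0.0.0.0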
Zone files are more flexible than HOSTS files, but I still use HOSTS as well. I have never had a concern about the speed of using HOSTS. It is certainly faster than DNS.
There is a comment in this thread where someone asserts that HOSTS was never designed to handle "millions of entries".
I would be interested in reading about a user who visits "millions" of websites or otherwise needs to do lookups on "millions" of domains.
I maintain a database of every IP address I am likely to use on a repeated basis, in several formats, with kdb+ as the "master" format. I believe most users will never come close to intentionally accessing "millions" of IP addresses in their entire lifetime (FN1). I could be mistaken, and it would be interesting to learn of a user who can dispel this belief.
FN1. If you think about this, it may cause you to question the necessity of DNS for such users. Or not. Times have changed since the advent of HOSTS. They have also changed since the advent of DNS. For example, using "consumer" hardware, I can fit all the major ICANN TLD zones on external storage and the entire com zone (IMO, by far the most important) in memory. This is many, many more domains than I will ever need to look up. Assuming at best I will not live much longer than 100 years, I could not and will not explore them all or even a significant fraction.
Until DNS changes.
If I move a server from one IP to another, I change DNS, and in $TTL time everyone's pointing at the new server. Apart from you with a hosts file. How does that work if everyone has a hosts file?
If I say "check out this interesting story on blahblah.com", you don't have it in your hosts file, how do you get it?
I maintain a list of every phone number I am likely to use on a repeated basis, but sometimes I need to look up a phone number I don't know (in the old days this was a phone book locally, and directory inquiries further afield; now it's DDG and assuming they have a website - which isn't in my hosts file or DNS cache, and which I've never visited before).
I maintain DNS entries for my home network of a dozen devices -- I host it on my MikroTik, but it's handy to have when I type "ssh laptop" rather than remembering whether it's on .71 or .73. It's one step better than a plain text file, as there's a standards-based way to remotely query it. At work I maintain a DNS server with 2000 entries on my network, which is actually hosts-file powered, but again I use dnsmasq for the DNS server rather than rsyncing that hosts file to 2000 machines.
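A sketch of the dnsmasq side of that (names and addresses invented for illustration):

# /etc/dnsmasq.d/lan.conf - answer local names for every machine on the LAN
host-record=laptop,192.168.1.71
host-record=desktop,192.168.1.73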
In your particular case, I don't know. You have to do what best suits your needs, whatever they are.
Here is how someone else solves the problem of changing IP addresses. For my needs, I actually like this method.
The entire ICANN DNS used to be bootstrapped to a small text file called "root.hints", db.cache, named.root, named.cache, or something else. As far as I know, it still is.
How does one know the IP address from which to retrieve this text file?
Maybe they have it memorized, or written down somewhere, or perhaps it is written into some DNS software default configuration. In all cases, they have this address stored locally.
No remote lookup.
What happens when the administrator of the server that publishes the text file wants to change IP addresses?
This does not happen very often, but it does happen. What do they do? Considering that the entire ICANN DNS was bootstrapped to this one file, and assuming this is truly meant to be a dynamic system, then this is arguably the most important IP address on the internet.
They notify users in advance that the IP address is going to change.
As a www user, of course I would have to do a DNS lookup for blahblah.com. However, I do not do lookups for the server with db.cache, for the .com nameservers, and in most cases not for the nameserver for blahblah.com either, and I do not do lookups using recursive caches. If blahblah.com changes its IP address I do not have to wait for changes to propagate through the system via TTLs. I am querying the authoritative nameserver, RD bit unset. If an IP address changes from the one I have stored, I know immediately when I try to access it. (I like being aware of these changes.) If I were relying on a recursive cache, I would probably not notice that the IP address had changed.
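As a concrete illustration of that kind of lookup (a sketch, not the poster's actual tooling; the nameserver name is invented), querying an authoritative server with the RD bit unset looks like:

# +norecurse clears the RD bit, so the answer comes straight from the
# authoritative nameserver rather than from a recursive cache
dig +norecurse @ns1.blahblah-dns.example blahblah.com A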
IME, IP address changes happen less frequently than people writing about DNS on the web would have one believe. Hence this system works well for me. Most domain names I encounter keep the same IP address for long periods.
Ideally, if blahblah.com is not changing IP addresses frequently or unexpectedly but needs to make a change, she could publish a notice somewhere on her web server informing users she will be moving to a new IP address, just like the server that serves db.cache.
One benefit to the hosts file is that it travels with you everywhere you go. I have my DNS configured at home, but my hosts file covers me when I'm at a coffee shop, on a plane, on a work trip, or on vacation.
I heard that blackholing requests to Microsoft telemetry URLs also has no effect. I wonder if there's any way of finding the unblockable list.
Malware would redirect facebook.com to some scam site probably.
Given how popular FB is, Microsoft decided to "fix" this.
(This is all a hypothetical, I don't actually know this for sure.)
I guess they hardcode the IPs into Windows.
More likely they just bypass looking at the local hosts file for such names, so the request always goes out to your DNS servers.
Therefore blocking these names by redirecting to 127.0.0.1 will work if done at your DNS server (for instance if you run an instance of https://pi-hole.net/ for that).
Unless of course they make the lookup use specific name servers that they run, instead of the local resolvers that your machine is configured to look at, for those names but that is less likely.
nslookup google-analytics.com 22.214.171.124
This is objectively false. A hosts file blocklist cannot:
- Globally blacklist and whitelist select URLs
- Block all sub-domains of a given URL
- Block third-party iframes and scripts (with whitelist)
A hosts file's feature set is not a superset of a browser based solution and the two complement each other.
Mind, I use both hosts and uMatrix. Each has its place.
The hosts file is a real pain to disable/enable - at least quite a bit worse than clicking one button in your browser.
I do something very similar to hosts file (at router level rather than os), but there are a few drawbacks over an extension.
On my jailbroken iphone(s) one of my first steps of security hardening is to replace the hosts file.
* Pi-hole®: A black hole for Internet advertisements – curl -sSL https://install.pi-hole.net | bash || https://pi-hole.net/
My 1blocker rule called "Bye Facebook" is:
-  https://1blocker.com (This costs money)
-  https://developer.apple.com/library/content/documentation/Ex...
> Safari Content Blockers apply globally preventing web visits
Unfortunately, this is not true. They only work in Safari and Safari view controllers.
> Safari Content Blockers also prevent Messages from rendering Facebook content inline
Are you sure about this? As far as I know, Messages uses a web view.
Sorry about the confusion... I'm really doing a lot to keep myself off of Facebook's radar.
-  https://itunes.apple.com/us/app/adblock/id691121579?mt=8
What's your threat model? Mine is third-party tracking cookies, and desktop apps don't share my browser's cookie jar. So while technically I can be tracked by IP from a desktop app, Facebook can't tell if it's me or someone else at the same coffee shop.
In particular, one nice thing about Chrome extensions is that they don't apply to incognito windows. I regularly use HTTPS Everywhere in block-all-HTTP-requests mode + an incognito window on wifi connections I don't trust, because the incognito window will permit plaintext requests, but it doesn't read my cookies or write to my cache, so it's sandboxed from my actual web browsing. I can safely read some random website that doesn't support HTTPS with my only concern being my own eyes reading a compromised page; none of my logged-in sessions are at risk.
> any software dependency library that you install without properly checking if it's got some social media tracking engine built in.
... is this a thing? (I totally believe that it's becoming a thing, I just haven't seen it yet and am morbidly curious.)
Eventually, they will tie your various devices to you.
There's a chapter/section on this (and FB) in Chaos Monkeys.
That book was published 2+ yrs ago. I can only assume the technology is more thorough and sophisticated now.
p.s. see also Dragnet Nation
I use uMatrix only experimentally (I rely on NoScript) but it offers a fascinating flexibility of control if one is in the mood. As well, NoScript is near useless when doing stuff with AWS where uMatrix offers the right flexibility (allow from site Y, but only when fetched from site X).
I had heard of uMatrix but didn't realize it had that functionality, which is pretty cool! Thanks for sharing!
Edit: your browser history (which may contain your profile URI) might be pretty out in the open, too.
My threat model is a developer who includes a standard tracking snippet from a third party but is not going out of their way to reliably violate my privacy at all costs (because they have other features to ship, and the tracking snippet works on most computers). If your threat model includes actively malicious developers, stop running native apps from them at all.
I would dearly love to, if all OSes came with a permission system other than just "run in admin mode/sudo".
You can enable an extension to run in incognito mode in its settings.
IP whitelists could also be aggregated and shared on github similar to the current DNS blacklists.
HOSTS files are static. They were never designed for blocking ads or tracking. And for all we know, every connection does a linear search through the HOSTS file so the larger it gets, the more wasted time, because it was never designed to have millions of entries.
My public broadcaster (http://www.abc.net.au/news/), for instance, is completely reliant on third parties for its "live story" functionality. It loses half its functionality at work, where Twitter is blocked, and uBlock kills the other half. It also kills the live stream when it can't load one of the half-dozen trackers on the page.
I default deny all 3rd party scripts and frames, in addition to the blocklists, and I only sparingly noop relevant domains, the bare minimum to make pages work, on a page-by-page basis.
On top of that, I have Privacy Badger, Cookie AutoDelete and Decentraleyes, and I've turned on first-party isolation.
It's mostly unobtrusive once my most important websites have been properly noop'ed, and it's relatively simple to add temporary exceptions if needed.
That's just putting rules into /etc/hosts ?
edit - answered my own question :) Yes it will.
My ISP won't, but there are ways around that. The biggest problem I've faced is on the modem side of things: finding something I'd trust to be open to the internet, ideally something I can install OpenWrt or similar on, and something I know will work in my market. It's an options minefield.
Not to say /etc/hosts doesn't work, these days I just find I prefer things with better UX.
I also don't pre-emptively load rules into Little Snitch - I have it running in active/interrupt mode, so it prompts me whenever an app tries to make a new connection I haven't signed off on before. Unsurprisingly, not very many apps try to connect to Facebook.
The dynamic filtering matrix should be available after that.
Granted, the defaults are not as strict as uMatrix's, but it's a good middle ground.
I would recommend "Medium mode" as defined in the uBlock wiki.
It runs after any add-ons. It's a useful second step that can catch things add-ons miss.
What do you mean?
And, btw, I use Privacy Badger instead of Disconnect.
The Free Software Foundation got there earlier, publishing https://www.fsf.org/facebook on Dec 20, 2010. FSF & GNU Project founder Richard Stallman has been rightly objecting to Facebook for years in his talks and on his personal website at https://stallman.org/facebook.html.
Long-time former FSF lawyer Eben Moglen rightly called Facebook "a monstrous surveillance engine" and pointed out the ugliness of Facebook's endless surveillance (at length in http://snowdenandthefuture.info/PartIII.html but in other places in the same lecture series as well). See http://snowdenandthefuture.info/ for the entire series of talks.
At least in the software licensing arena, having personally attended a lecture by Stallman, I was left with the impression that he wasn't offering a solution, just a vision of a utopia without any guidance on how to transition to it -- more specifically, how we would make money from open source software when proprietary software is currently the default way to make money.
There are many existing examples, so this is clearly a solved problem already. You charge for support, or for feature requests, and so on. That's how SUSE and RedHat make their money.
The flaw with looking at proprietary software's monetisation is that it usually just boils down to "pay for the binary". This obviously won't work with free software; you need to charge for development rather than access (though you can also use a seat-based model where you only provide support for machines that have valid licenses).
(I work for SUSE.)
And then there's the question of SaaS. OSS exists, but a lot of the high-quality alternatives are paid. I don't think services like Todoist, Pocket, Evernote etc. would exist under the open source model you described.
Nonetheless, it's admirable, and hopefully a net benefit for everyone.
My point was more about Stallman and co calling foul with regard to software freedom, codifying their own ideal, but not giving directions to reach that ideal. This feels like a safe pulpit to sit upon, where their view isn't falsifiable, which is useful when they want to say "I told you so" and lets them eventually take credit for everyone else's efforts in between to make the end goal possible.
This is incorrect; you can download the full ISOs for SLE from the SUSE website, with 30 days' worth of updates. The source code (and the system required to build it) is all publicly available on https://build.opensuse.org/. I believe Red Hat has something similar.
I'm also not sure that an interview from 1994 with the creator of Slackware is a good indicator of the current state of distribution business models. Though even in 1994, both RedHat and SUSE were selling enterprise distributions.
This suggests that the business model benefited from restricting redistribution and modification of the source code, so breaks the assumptions that the business model was purely based on making money from open source, and so doesn't fully support the idea that proprietary software is unnecessary, in the case where we take SUSE as an example of saying it is "already solved".
I remember when the internet was mostly people's personal websites that they didn't make money off of, and frankly, the internet was better then. The best websites that exist now started in that era.
I think you can probably find a suitable subset of the internet and it still feels like the old days, but then you have to be happy with a much smaller community.
And fair point to regard money from software as a potential net negative, and I just don't have an answer that is objective. There is a lot of software that highlights the creativity of people, and I like it, and am happy to pay for it, and also happy to get it for free. Like paying for books or borrowing them from the library.
I think most people would be happier with a smaller community. Facebook encourages a large number of low-quality connections, which are actually worse than not being connected at all: I don't want or need to interact with my friend's racist cousin I met at a party three years ago or my ex-roommate's mom who always wants to sell me homeopathy supplies. These people are actively detracting from your life.
Those are extreme examples, but even people I might get along with suck up your time. If I am not connected emotionally/socially with someone enough to get their phone number and send them a text occasionally, I probably don't need to give them even a few seconds of my time on a regular basis.
> And fair point to regard money from software as a potential net negative, and I just don't have an answer that is objective. There is a lot of software that highlights the creativity of people, and I like it, and am happy to pay for it, and also happy to get it for free. Like paying for books or borrowing them from the library.
I'd be happy to pay for good internet too, but unfortunately there are few businesses willing to do this. Ad sellers won the race to the bottom on price (a strange game--the only winning move is not to play) by simply being "free" to users. This works because of short-term thinking: users don't think ahead to how ads will affect their lives, and content providers don't plan to grow a business slowly the way a for-pay business grows.
Stallman quit his job to write an entire free software operating system and essentially dedicated his entire life to it. What more do you want?! I don't even want to imagine a world without Richard Stallman.
Stallman judges those who write proprietary software, calling this type of software immoral, and yet doesn't offer guidance on how to get from the current situation to a better one. Without realistic and generalisable guidance, it is simply self-righteousness on his part, the same as any extreme idealist's.
If you have anything at all to do with software, you're "getting paid" by using the wealth of free software that makes up GNU/Linux, OS X, etc. Free software constitutes a powerful non-scarcity-based model that is extremely good for productivity. Just because he doesn't kowtow to standard capitalist race-to-the-bottom models doesn't mean he's not "realistic".
And calling something immoral when it is the result of someone's hard work and doesn't impact them if they choose not to use it simply sounds like sour grapes and self-righteousness.
A comment I made two weeks ago that is pertinent to this discussion:
Niche market software, used by a limited number of highly specialized professionals, is somewhat incompatible with the open source economic model. When a piece of software is used by very many users, and there is a strong overlap with coders or companies capable of coding, say an operating system or a web server, open source shines: there is adequate development investment by the power users, in their regular course of using and adapting the software, that can be redistributed to regular users for free in an open, functional package.
At the other end of the spectrum, when the target audience is comprised of a small number of professionals who don't code, for example advanced graphics or music editors or an engineering toolbox, open source struggles to keep up with proprietary because the economic model is less adequate: each professional would gladly pay, say, $200 each to cover the development costs for a fantastic product they could use forever, but there is a prisoner's dilemma in that your personal $200 donation does not make others pay and does not directly improve your experience. Because the userbase is small and non-software-oriented, the occasional contributions from outside are rare, so the project is largely driven by the core authors, who lack the resources to compete with proprietary software that can charge $200 per seat. And once the proprietary software becomes entrenched, there is a strong tendency toward monopolistic behavior (Adobe) because of the large moat and no opportunity to fork, so people will be asked to pay $1000 per seat every year by the market leader simply because it can.
A solution I'm brainstorming could be a hybrid commercial & open source license with a limited, 5 year period where the software, provided with full source, is commercial and not free to copy (for these markets DRM is not necessary, license terms are enough to dissuade most professionals from making or using a rogue compile with license key verification disabled).
After the 5 year period, the software reverts to an open source hybrid, and anyone can fork it as open source, or publish a commercial derivative with the same time-limited protection. The company developing the software gets a chance to recover its initial investment and must continue to invest in it to warrant the price for the latest non-free release, or somebody else might release another free or cheap derivative starting from a 5-year-old release. So the market leader could periodically change, and people would only pay to use the most advanced and innovative branch, ensuring that development investment is paid for and then redistributed to everybody else.
Thanks for such a well thought out response.
I've long been skeptical of the effects of social media though, and I'm taking this job mostly just because doing otherwise seems like a really poor career choice. Plus it seems like Facebook is here to stay, and I can dream of helping to fix the problem instead of just enabling it.
EDIT: Is HN's Facebook hate getting so heated I'm getting down-voted for sending some good vibes to a newgrad about to start her/his first job?
I bet you all took your first job at Doctors Without Borders helping children in Angola. FFS, the Waltons are the scourge of this world, but I don't blame the kids going off to work at Walmart. I bet a lot of you pay taxes in the US too -- those taxes financed the war in Afghanistan, but you didn't move to Morocco, did you?
Give me a break. Let this kid come in with a good attitude, eyes open, loud and proud. Who knows, maybe he'll turn some heads. The guy signed and it is a good career move; what's wrong with cheering the guy up? Disappointed in you, HN.
That's about all though, isn't it? There's a negligible chance you'll actually fix the problem, unless you manage to leak evidence to the media or similar.
I write software for biologists, and I feel I'm much, much happier doing that than I would be working at Facebook.
But seriously - do we know of any single example of an intern coming to a big corp and "saving it" - by that I mean steering it away from dark and deceiving waters and actually bringing it into the light, for the good of society and people in general?
You are when your job requires you to sidestep your morals.
That doesn't bother me. I still go to the pub after work with friends and have a good time.
Media are the information sourcing and feedback loop for societies. The print media went through its crisis of awareness in the early 20th century. See especially Lippmann's Public Opinion.
There are differing levels of how strongly I feel about certain moral values I hold. For example, working for a company that dealt in wholesale killing of others is obviously worse than working for an advertising network. Would I work for doubleclick for $1M a year? Hell yeah. Would I spy on citizens of my country for the same amount? No.
Does that make me not have morals? I don't know.
'Peer' is very flexible. This could be a comparison to people the same age in other careers.
Also, keep in mind that Facebook engineers are constantly surrounded by other Facebook engineers so their SE peers probably do approve. They collectively don't think Facebook is a problem so they implicitly approve of each other.
I mean, let's just admit that you don't have to be a Facebook user and you don't have to sign a Facebook TOS for them to accumulate data about you, so it's not quite as cut-and-dry as you make it out to be.
As far as the "good" and "evil" parts of the stack... fair point. I think most devs are somewhat abstracted away from the collectively malicious vision, since most of the constituent parts are relatively benign on their own -- "let's identify faces in photos!", "let's automatically identify faces in photos", etc. It's product folks, or maybe even higher up than that, who connect the powerful pieces produced by devs to actually make Facebook the monster it is today. I'd guess that even the devs who have impact on that vision don't really have the power to dramatically sway that vision, they've got a bit of technical input at best.
Still, have you ever worked on a product you don't believe in? If you're just cashing in a check, I guess it could work, but if you're as idealistic as me you want to work on something that's doing good in the world. Especially when so many tech companies proclaim their intent to "make the world a better place" or "do cool things that matter."
Not really. Many people just feel they have to use Facebook to connect with the society efficiently. Some people even consider those who don't use Facebook weird.
All of these products are designed to be as addictive as possible (to varying degrees). The whole point of an addiction is that you are there voluntarily. (Not saying Facebook is as bad as the things above, just that they are all designed to addict.)
Or we could have a legal system where everything is forbidden except for those things explicitly allowed.
Slack vs irc
Facebook vs myspace vs geocities
I hope this is sarcasm.
Isn't that the going ethos currently?
Something akin to: Pay me as much as possible, don't ask me to be part of your culture, don't ask me to work more than 40 hours a week, don't ask me to take stock, don't have a mission statement, make sure I'm working on something that is engaging mentally.
It’s wise in the Valley to hold such an opinion but not make it very prominently known. There are a number of people who want your job and will say something more palatable to your employer, and eat your free meal while shipping a feature at 10pm because they buy into the posters on the wall. Many allegations of ageism (but not all) can probably trace back to something like this, in my uninformed gut opinion, because you will almost certainly get replaced by a new grad when the hammer falls. I don’t even see it as personal, but a demonstration of incentives: they can get a loud 40 from you or a quiet 80 from a newly minted BSc. QED.
Just decline invites and push back on over 40. Expressing that attitude at a typical company within this audience basically paints “please lay me off” on your back. When you’re getting into post-senior titling, or you’re really specialized in a tough req to fill, is when that approach becomes more feasible.
As a Facebook user I obviously don't like what they do with the data, but at the same time I think they provide an OK service that is beneficial to many. I wouldn't mind working as a developer there in what I imagine is the overwhelming majority of positions.
while true; do printf "blocked by hosts file" | nc -q 1 -l -p 80; done
For Ubuntu this should work (on versions from Trusty and newer):
sudo touch /etc/NetworkManager/dnsmasq.d/local
Put these lines into the above file and save:
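The lines themselves aren't reproduced here; presumably they are dnsmasq address directives along these lines (my guess at the content):

# blackhole Facebook domains and all of their subdomains
address=/facebook.com/127.0.0.1
address=/fbcdn.net/127.0.0.1
address=/fb.com/127.0.0.1

Then restart NetworkManager so its embedded dnsmasq picks the file up (this assumes NetworkManager is configured to use its dnsmasq plugin).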
"If you want to block facebook you need to block almost a thousand websites!"
Bit sloppy because it doesn't pick up the domain names with dashes. But my point was that if you want to blacklist *.facebook.com you shouldn't try to enumerate every single variant of it, that's not durable.
Adblocking is a non-trivial task, but there are trivial solutions.
1.) Install hosts-gen from http://git.r-36.net/hosts-gen/
% git clone http://git.r-36.net/hosts-gen
% cd hosts-gen
% sudo make install
# Make sure all your custom configuration from your current /etc/hosts is
# preserved in a file in /etc/hosts.d. The files have to begin with a
# number, a minus and then the name.
% sudo hosts-gen
2.) Install the zerohosts script.
# In the above directory.
% sudo cp examples/gethostszero /bin
% sudo chmod 775 /bin/gethostszero
% sudo /bin/gethostszero
% sudo hosts-gen
Experience suggests that most of the time when people say exponentially they mean an exponent greater than 1, but I have been surprised by what people have meant before so I personally wouldn't say that probability is greater than 90%. That's what I meant in more detail.
I don't know what Androids do.
You need to learn a little about what you're doing if you want to go this route, and there is some setup. But basically, you're taking on the role of a corporate IT department, pre-configuring and possibly locking down the phone.
I set up a profile in Configurator a few years ago and am a little afraid of touching it - that application makes iTunes look thoughtfully designed and stable.
Process for enumerating and rejecting Facebook IPs:
* Query the RADb at http://radb.net/query/ , searching for AS32934
* Enumerate IP ranges via http://radb.net/query/?advanced_query=1
* Check the inverse query by origin, using AS32934
* Grep the route and route6 CIDR ranges from the response
* Build a netfilter script with REJECT
This gives these scripts for iptables (updated once in a while):
To enable:
* iptables -I OUTPUT -j no_facebook_out
* iptables -I INPUT -j no_facebook_in
* ip6tables -I OUTPUT -j no_facebook_out
* ip6tables -I INPUT -j no_facebook_in
By design, Instagram and connect-with-Facebook get muted too.
whois -h whois.radb.net '!gAS32934' | tr ' ' '\n' | awk '!/[[:alpha:]]/' > facebook.list
whois -h whois.radb.net '!6AS32934' | tr ' ' '\n' | grep '::' >> facebook.list
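A minimal sketch of turning facebook.list into the no_facebook_in/no_facebook_out chains used above (the chain names come from the parent comment; the loop itself is my own illustration):

# create the chains (ignore errors if they already exist)
iptables  -N no_facebook_out 2>/dev/null
iptables  -N no_facebook_in  2>/dev/null
ip6tables -N no_facebook_out 2>/dev/null
ip6tables -N no_facebook_in  2>/dev/null
# add one REJECT rule per CIDR in facebook.list
while read -r cidr; do
  case "$cidr" in
    *:*) ip6tables -A no_facebook_out -d "$cidr" -j REJECT
         ip6tables -A no_facebook_in  -s "$cidr" -j REJECT ;;
    */*) iptables  -A no_facebook_out -d "$cidr" -j REJECT
         iptables  -A no_facebook_in  -s "$cidr" -j REJECT ;;
  esac
done < facebook.list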
Unsurprisingly, there is recent stuff on https://github.com/jmdugan/blocklists/pulls. If anyone notices it getting updated, could you tell us? email@example.com is best.
I'd likely never look at the bulk of commercial websites if I had to render them the way owners intended them to render.
If you run your own DNS resolver you can use the wildcard trick.
Something like this in an RPZ zone should do it:
facebook.com IN CNAME .
*.facebook.com IN CNAME .
facebook.net IN CNAME .
*.facebook.net IN CNAME .
fbcdn.com IN CNAME .
*.fbcdn.com IN CNAME .
fbcdn.net IN CNAME .
*.fbcdn.net IN CNAME .
fb.com IN CNAME .
*.fb.com IN CNAME .
fb.me IN CNAME .
*.fb.me IN CNAME .
tfbnw.com IN CNAME .
*.tfbnw.com IN CNAME .
*.facebook.com IN CNAME .
(This is also why you don't CNAME your root domain: a CNAME conflicts with any other record type.)
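For completeness, wiring an RPZ zone like the one above into BIND looks roughly like this (a sketch; the zone name and file path are made up, and the zone file itself also needs the usual SOA/NS records):

// named.conf fragment
options {
    response-policy { zone "rpz.local"; };
};
zone "rpz.local" {
    type master;
    file "/etc/bind/db.rpz.local";
};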
A good entry point for reading more about it:
$ man nsswitch.conf
If your /etc/nsswitch.conf file's "hosts" line contains the keyword "files", then it potentially uses /etc/hosts. If "files" is first (the typical default config), it looks there first, before the other sources listed.
This is done under the hood when programs use resolver functions like gethostbyname or getaddrinfo.
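On a glibc system you can see that ordering from the shell (a quick illustration, not from the original comment):

# resolved through the nsswitch.conf order, so an /etc/hosts entry wins if "files" is first
getent hosts facebook.com
# compare with a direct DNS query, which ignores /etc/hosts entirely
dig +short facebook.com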
- The function that actually parses /etc/hosts is name_from_hosts(), implemented here: http://git.musl-libc.org/cgit/musl/tree/src/network/lookup_n...
- That is called by __lookup_name() in the same file,
- which is, in turn, called directly from getaddrinfo() [http://git.musl-libc.org/cgit/musl/tree/src/network/getaddri...], the actual function exposed to you as a libc user.