Using Tor to educate yourself, anonymously and privately, about embarrassing or potentially stigmatizing personal problems is a great use of it. Just remember that you should never enter any identifying information while using it.
Tor is more than fast enough for everyday browsing; heck, I use it to watch YouTube without major problems. I also use it to read the news, find recipes or lyrics (or similarly shady web circles), etc.
If the other side does not need to know who you are and does not have to synchronize that information into a vast tracking/advertising network, why should you willingly submit it?
"It has been said that capitalism is the worst form of Government except for all those other forms that have been tried from time to time" (paraphrasing a speech Churchill gave about democracy).
If you don't use Tor, you're on the list and they've got ready access to your browsing data and metadata.
If you do use Tor, you're only on the list, and their work factor for accessing your data and metadata is far higher.
Plus you're providing more cover for those who have strongly urgent needs for similar levels of protection.
...the best place to hide is in a crowd, and the bigger the crowd the better :)
Then you need to verify the integrity using information they provide on their site.
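For example, the checksum published on the download page can be compared against a locally computed one. (The Tor Project also GPG-signs its releases, which is a stronger check; this is just a quick sketch, and the filename is illustrative.)

```python
import hashlib

def sha256_of(path, chunk_size=65536):
    """Compute the SHA-256 digest of a file, reading in chunks
    so large downloads don't have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the checksum copied from the project's site:
# expected = "..."  # from the download page
# assert sha256_of("tor-browser.tar.xz") == expected
```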
On the other hand, it irks me that people should have to be afraid of wanting privacy. For my own part, I use Tor partly as "civil obstinacy" – if we do not exercise a right, we risk losing it. I feel it's wrong to target someone simply for using Tor / wanting privacy, and I think it should ideally be considered more normal.
Every 3rd party HTTP request can already be used to track you -- being logged into a social account just makes the tracking way easier for them.
The prominent example of a trigger for a third-party request is the 1px tracking GIF. Incognito mode won’t help you in this case.
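What the pixel's server learns from each fetch can be sketched as simple header inspection. (The function name and field names here are illustrative, not any real tracker's API.)

```python
def record_hit(headers):
    """What a third-party pixel server learns from one request.

    `headers` is a dict of HTTP request headers as the tracker's
    server sees them when the 1px GIF is fetched.
    """
    return {
        "page": headers.get("Referer"),       # which page embedded the pixel
        "visitor": headers.get("Cookie"),     # any ID cookie set earlier
        "browser": headers.get("User-Agent"), # fingerprinting input
    }
```

The Referer header is why incognito mode doesn't help: it is sent regardless, and the tracker's cookie only needs to survive within the session to link your page views together.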
Google, despite other problems (below), actually Does The Right Thing and presents an image which I can fetch and verify, though many of the graphics are exceptionally difficult to interpret.
I've documented my own other hassles accessing Google via Tor (G+, Gmail, etc.) in "How to kill your Google account: Access it via Tor":
(The problems were compounded by Google's account recovery and verification procedures, though I ultimately did recover control thanks in no small part to intercession by Google's Yonatan Zunger, for which I remain grateful).
Other options include the /etc/hosts file mentioned above (I've extended my own set with 62,000+ entries from a set of blockfiles used by the uMatrix Chrome extension). There's also Privoxy (though supporting _both_ Tor and non-Tor variants might be useful), and various browser extensions including Ghostery, Privacy Badger, AdBlock+, uMatrix, ScriptSafe/NoScript, etc.
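Merging third-party blocklists into hosts-file entries is easy to script. A minimal sketch, assuming any plain domain-per-line list of the kind uMatrix-style blockfiles use (the function is my own, not part of any tool):

```python
def to_hosts_lines(domains, sink="0.0.0.0"):
    """Turn a list of tracker domains into hosts-file blocking entries.

    Blank lines and comments are dropped, duplicates collapse, and the
    output is sorted so repeated merges stay stable under diff.
    """
    cleaned = {d.strip().lower() for d in domains
               if d.strip() and not d.strip().startswith("#")}
    return ["%s %s" % (sink, d) for d in sorted(cleaned)]
```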
It's getting more than slightly tedious and is eroding trust in the Web generally.
The other area of significant interest is seeing work toward reputation systems which are compatible with Tor use. There are two I'm aware of, FAUST and "Fair Anonymity", though I've seen little discussion or adoption of these anywhere.
"Fair Anonymity for the Tor Network"
Briefly discussed here:
There's a fairly detailed description of the differences here:
It's still possible to browse without JS most of the time. Some pages are crippled by design, so disabling CSS might reveal the content. Others provide an escaped_fragment variant. But a stupid JS antipattern is sometimes used to display perfectly normal content via JS. One big problem is domains like ajax.googleapis.com: they're often used to enhance websites, but Google uses them to track users.
When talking about evil Google, one needs to add YT. A friend of mine once quipped: you watch a stripper if you visit YouPorn, but you strip your privacy if you visit YouTube.
Much as this sort of thing makes me glad I don't need to purchase private health insurance, the article would be a lot more helpful if it distinguished more clearly between what is and isn't legal use of the data as well as between the Experians and Google Analytics of this world.
That said, the original source paper, if anything, probably plays down the potential concerns, contending, for example, that a URI like http://www.ncbi.nlm.nih.gov/pubmed/21722252 contains no symptom-specific information, when any sufficiently motivated actor can write a scraper that links anonymous-looking URIs on healthcare domains to the conditions and symptoms referenced in the page content.
Are you in the USA? Thanks to Obamacare your medical history doesn't matter anymore. I purchase my own insurance and only three things matter:
- the type of coverage (bronze, silver, gold, etc.)
- my policy cost based on age and gender
- my wife's policy cost based on age and gender
- each of my children's policy costs (I don't remember if age or gender matter; I don't think so)
They can charge tobacco users higher premiums in many states (there is a federal limit of 50% more, but states can impose a lower limit, and some do).
I'm far from an expert, but I do think that the majority of legislative efforts, as well as many initiatives from browser makers, are approaching this wrong. Privacy, as much transparency as possible, and optional settings for anything that comes with a trade-off need to be built into the browser, not implemented as a request sent to websites.
Transacting, being logged in, and certainly browsing are not inherently hindered by privacy. It's up to users (or their browser really) to demand it, in the economic sense of demand.
For now, there is no cost to this kind of tracking, so it happens almost by default. Moral or even legislative pressure will not have the same effect as economic pressure. The decision to protect users' privacy or not needs to come with costs.
It's a shame so many people use Chrome. They're effectively giving an ad company that specialises in tracking people the power to control how the web develops.
If I, Mr. Spy Provider, start seeing a single user who has every possible documentable illness, that user's search has been polluted and is worthless.
So, how does one do this? Someone needs to write a search algo that pulls 100 crap medical searches for every good one. All you need to do is query the 1px image on the page. I'm guessing that could be done with 10KB/illness search for privacy pollution.
Should we have to? No. But this is the reality we live in. We can use the tools to keep us from being "found", but we still are querying the server the content is on. Nothing we can do about them selling that log. But we can pollute that log.
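The pollution idea above can be sketched simply. Assuming you have a pool of decoy condition names (e.g. scraped from some public index; the names and ratio below are illustrative), mix the one real query into a shuffled batch of decoys so the server's log can't tell which was genuine:

```python
import random

def polluted_batch(real_query, decoy_pool, ratio=100, rng=None):
    """Mix one real search with `ratio` decoys, shuffled.

    The tracker's log then shows ratio+1 queries per real query,
    making the profile worthless.
    """
    rng = rng or random.Random()
    batch = [real_query] + [rng.choice(decoy_pool) for _ in range(ratio)]
    rng.shuffle(batch)
    return batch
```

Each decoy would then be issued as an ordinary request for the tracking pixel (or the search URL), at roughly the 10KB-per-illness cost estimated above.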
That said, there are browser extensions which run random/arbitrary background Web queries.
Instead, let's use The Pirate Bay. We can build a scrape of WebMD and a few other places. The front page would have every disease WebMD has. And then we upload it to TPB. Highly illegal, but it does solve the problem of tracking our individual illnesses.
And Wikipedia's fully syndicable.
Improve Wikipedia medical content, syndicate.
Problem solved, laws unflouted.
In the US maybe, but I would guess the business practices of most data brokers are already completely illegal in the EU. We have many laws and requirements for keeping and selling data on EU citizens. I would welcome stronger actions against these companies in the EU.
But somehow I fear that enforcing EU laws on US companies is not part of the TTIP trade agreement under negotiation between US and EU.
Google Analytics cookies are first-party (i.e. only available to the domain of the site).
IIRC, just a few years ago, when the EU started investigating Google and asked where user data comes in from, Google wasn't able to answer. I don't think they are able to track it even now.
Google Analytics can be set up in a few minutes by anyone who could set up their own web site in the first place.
Setting up Piwik means understanding this: http://piwik.org/docs/installation-maintenance/
If you run web sites for a living, the latter is no big deal. If your company is a florist and you just learned a bit of basic HTML to write your blog about flower arranging, what's a MySQL?
I tried setting up Piwik for my website. Instead of showing the actual number of visitors, it consistently shows one or two. Tried googling, read the docs, nothing! Gave up. I have no idea why it fails so spectacularly.
Google Analytics will do just fine for now until better understanding my site's visitors becomes a more valuable proposition. Right now it's not worth it.
This kind of might be. Ideally, anonymous is supposed to mean collecting no data at all.
So i started using Disconnect instead...
So it seems we could do with strong ad blocking, but more useful (given that spam email still exists) will be actual enforced laws.
(I may be getting a bit old...)
The only thing that can be done is to make privacy and ad blocking tools universally deployed, and let the fallout happen.
Google AJAX Search API
ScoreCard Research Beacon
Put these in your hosts file:
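The original entries aren't reproduced here; assuming the usual serving domains for the two services named above (my guess, so verify before blocking, since ajax.googleapis.com in particular will break sites that load libraries from it), the format would be:

```
0.0.0.0 ajax.googleapis.com      # assumed domain for Google AJAX Search API
0.0.0.0 b.scorecardresearch.com  # assumed domain for ScoreCard Research Beacon
```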
At least on Windows, the former is much faster to return with an error.
It also avoids potential conflicts if you happen to need to run a server on port 80 for any reason.
Further reading: http://serverfault.com/questions/78048/whats-the-difference-...
On *nix, the behaviour is slightly different and is supposed to ping localhost instead (confirmed on one of my Linux machines):
Perhaps it is left up to the implementation, in which case both Windows and *NIX would be, strictly speaking, correct.
I've puzzled over using 127.0.0.2. What I really want to get is an RST immediately upon sending a SYN.
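A connect attempt to a loopback address with no listener is exactly the case where the kernel answers the SYN with an immediate RST, which Python surfaces as ConnectionRefusedError. A sketch for checking that behaviour (it assumes the port you probe, e.g. port 1 on localhost, really has no listener):

```python
import socket

def is_refused(host, port, timeout=1.0):
    """Return True if a TCP connect gets an immediate refusal (RST)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return False          # something actually answered
    except ConnectionRefusedError:
        return True               # SYN was answered with an RST
    except OSError:
        return False              # timeout, unreachable, etc.
```

A blackhole address, by contrast, would make this hang until the timeout, which is the slow failure mode the hosts-file discussion above is trying to avoid.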
I strongly recommend using a hosts file these days, as well as tools such as uMatrix, privoxy, and where possible, Tor (more discussion below).
And thanks, by the way.
Better: install CyanogenMod.
Edit: I see some of the ones you posted are not on the file.
I do wish the "defaults" were more privacy-oriented, though; for example, !i searches images... on Google Images.
My pattern is if DDG finds nothing of value or brings up too much noise, hop over to Google and sometimes it does better. If I can't find it via either search engine it's probably not out there.
I will say that there's rarely a case where Google is worse than DDG... unless you count the tracking and near-monopoly issues but those are peripheral to the core function.
- Privacy Badger by the EFF: https://www.eff.org/privacybadger
- uBlock Origin (blocks ads as well): https://chrome.google.com/webstore/detail/ublock-origin/cjpa...
- HTTPS Everywhere by the EFF: https://www.eff.org/https-everywhere
I recommend running all three; they each do a different job and cover all the bases. If you are on Firefox, RequestPolicy (https://addons.mozilla.org/en-US/firefox/addon/requestpolicy...) is also useful, but I find Privacy Badger simply does the job better.
Lots of websites pull shit from 20+ third-party domains.
An obnoxious number of sites block or don't render completely (or at all) without allowing shit from all over the web to load.