Obviously you can start a whitelist, but as some other comment said, it's pretty limited... and I don't want the hassle of adding every domain I visit (given the number of blog posts from different domains that show up on HN).
For now, I'll be sticking with AlgoVPN, because I can't imagine how they'll protect the safety of users either (maybe a blacklist... although that'd be hard to do, given the number of "bad" websites out there that would make you a suspect for further surveillance).
Given this relatively poor information, it's somewhat hard to trace. But as I've said: government surveillance is government surveillance. These petty tactics to get around AWS/GCP won't cut it. Pretty sure they'll call up my bank using the card details and get my info real fast.
What about sharing it as an internal non-exit VPN in a nested chain?
It's not "non-exit" in quite the same sense as Tor middle relays: it just connects to the VPN2 server using OpenVPN over standard TCP/IP, rather than some proprietary protocol. But at least it's locked down with pf rules, so that it can only connect to the VPN2 server.
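For a concrete picture, locking a gateway down like that takes only a few pf rules. A sketch, with a placeholder interface name and a placeholder VPN2 address (203.0.113.10:1194), not the actual setup:

```
# pf.conf sketch: default-deny; the only outbound flow allowed is
# OpenVPN to the (placeholder) VPN2 server.
block all
pass out on em0 proto udp to 203.0.113.10 port 1194 keep state
pass on tun0 all   # traffic inside the established tunnel
```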
The diagram shows a nested chain with just two VPNs. But you can add more layers. As I recall, as many as six or so. Latency goes up, and MTU goes down. But throughput doesn't crash as much as you might think. I don't know why. But maybe it's caching.
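The MTU loss is easy to estimate. Assuming roughly 70 bytes of per-layer overhead for OpenVPN over UDP (an assumption; the real figure depends on cipher, HMAC, and framing), a quick sketch:

```python
# Rough MTU estimate for nested OpenVPN tunnels.
# OVERHEAD is an assumed per-layer cost (IP + UDP + OpenVPN framing);
# real numbers vary with cipher, HMAC, and compression settings.
BASE_MTU = 1500
OVERHEAD = 70  # bytes per encapsulation layer (assumption)

def nested_mtu(layers: int) -> int:
    """Usable MTU after `layers` of OpenVPN encapsulation."""
    return BASE_MTU - layers * OVERHEAD

for n in (1, 2, 6):
    print(n, nested_mtu(n))
```

So by the sixth layer you're down around 1080 bytes under these assumptions, which matches the "MTU goes down" experience.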
So basically, you have a NAT chain locally in VirtualBox or whatever, and each NAT router is a client of a remote VPN server.
In order to share it, you'd need to open a port for incoming OpenVPN connections, either locally or forwarded through one or more of the VPN servers. Then you could route the traffic out through another VPN server in the chain.
It feels like the only reason it is being pushed so hard is so BAT holders can make a buck. It might be a great browser, but I will always think of it as OneCoin with some chrome tossed in.
Unfortunately, the ad system that has become infested with malicious tracking and more is also a means by which creators across the Web find support. This is why Brave introduced the Brave Rewards component: so that we can create not only a safer Web, but a sustainable one.
Opting-in to Brave Rewards means you're able to earn tokens for your attention. By default, these tokens are then donated to the sites and properties you visit throughout the month. The more you visit a property, the higher their end-of-month contribution will be.
All of this works without violating your privacy, thanks to on-device matching of ads and machine-learning. The token integration is a minimal component, but with a massive impact on the long-term sustainability of the Web. I'm happy to answer any other questions you may have.
All the best!
I could use cURL to perform all web browsing, but my IP Address + User Agent could still be tracked by the website I visit.
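Right, and that pair alone is a workable identifier: a server just has to hash it. A toy sketch with made-up values:

```python
import hashlib

def fingerprint(ip: str, user_agent: str) -> str:
    """Correlate visits by hashing the (IP, User-Agent) pair."""
    return hashlib.sha256(f"{ip}|{user_agent}".encode()).hexdigest()[:16]

visit1 = fingerprint("198.51.100.7", "curl/8.4.0")
visit2 = fingerprint("198.51.100.7", "curl/8.4.0")
visit3 = fingerprint("198.51.100.7", "Mozilla/5.0")

print(visit1 == visit2)  # same client, same fingerprint
print(visit1 == visit3)  # changing the UA alone breaks the match
```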
With time, what seems to be occurring is a game of cat-and-mouse where trackers develop more powerful heuristics for creating fingerprints.
With that said, I think you're correct in that it'll be a game of cat and mouse, but I'm not sure what the alternative is. Are you implying that there is anything that can be done beyond the traditional cat and mouse? Because I feel that's the same with security, crypto, etc etc.
If we go back to basics, where I can make a network request, and the body includes a useful response (e.g., no need for running JS to populate the DOM, as is the case with SPAs that aren't server-side rendered), we can free ourselves from those more advanced heuristics.
It will likely always be cat-and-mouse, but we can rethink the universe of data available within the browser (that can be reported back via XHR requests), and make that universe much smaller.
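Concretely, for a server-rendered page the useful content is already in the response body, so the client needs nothing beyond an HTML parser, and thus exposes none of the JS-derived fingerprint surface. A stdlib sketch over a canned response body:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text from server-rendered HTML; no JS runtime needed."""
    def __init__(self):
        super().__init__()
        self.chunks = []
    def handle_data(self, data):
        if data.strip():
            self.chunks.append(data.strip())

# A canned server-rendered response body (stand-in for a real fetch).
body = "<html><body><article><h1>Post title</h1><p>Actual content.</p></article></body></html>"
p = TextExtractor()
p.feed(body)
print(p.chunks)  # ['Post title', 'Actual content.']
```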
I already wrote this in another comment. Basic Attention Token is not a secure cryptographic system. The idea of paying tokens for shown ads cannot be cryptographically secure: there is no known way to have a cryptographically strong "proof-of-watch". All the browser does, when a user watches an ad, is communicate with its backend and ask the backend to send a token to an address attached to the user. It's not a cryptographic system that mines coins by showing ads.
It's a useless gimmick that has nothing to do with cryptocurrency. The real coins are valuable because they are cryptographically strong. This thing is centralized, and its mechanism of paying for ad views is not cryptographically strong. The token has some value only because of people's stupidity.
Judging by your choice of words, I assume you're a proponent of using Bitcoin. We did this, originally. Unfortunately, Bitcoin was at that time experiencing serious issues with network congestion and large fees. Our users (who only wish to buy $5 or $10 at a time) would often have to pay nearly as much in fees. That clearly isn't sustainable. Introducing BAT (on the Ethereum blockchain) meant we had a faster, more reliable system. It also meant the creation of the User Growth Pool, a reservoir of 300 million tokens that could be gifted to early users to get this novel apparatus off the ground (and it has been working wonderfully at that).
If there are any questions I can answer for you, I'd be happy to chat further.
1. The power of a blockchain is in its cryptographic strength; without cryptographic strength, a blockchain is worthless. The strength of a system is defined by its weakest link, and the weak link of Brave + BAT is the inability to mathematically prove an ad view. Nor is there a known way to cryptographically mine coins by viewing ads. This means there are no cryptographically secure methods to pay for ads. What you made is a program that displays an ad and asks your server to send coins to the user. Of course, this can be spoofed: hackers can reverse engineer how Brave communicates with your backend and spoof it. There is no cryptographic way to prove that an ad has been shown. Hackers can make the windows with ads invisible, etc., and still receive the reward. And I'm sure they are doing it, but as long as spoofing rates are within your business model, you don't mind, because everyone is making money and you don't want to ruin the party.
2. I'm not a proponent of Bitcoin in particular. I dislike everyone who creates a new coin for a fake reason, for something that doesn't need a new coin, and issues a trillion tokens. I am for progress, and I don't mind when a genuinely innovative new coin appears with a separate blockchain, but I hate when a new coin is created just to issue a trillion tokens, give them away for free, and in this way lend them perceived value. It at least must be mined, with some resources (electricity and hardware) spent to back up its value; a trillion tokens issued out of nothing have no value. I wouldn't care if it were just a silly useless project, of which GitHub is full, but there is an irresistible temptation to create a heap of tokens, keep a little, give the rest away, apply some sleazy marketing, and make people believe that there is some value behind the tokens. A decent project must avoid at all costs creating a new token without a reason that absolutely requires one, and instead use an existing token that has value behind it (resources having been spent on creating that heap of digital money).
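The spoofing claim in (1) is easy to make concrete: without a cryptographic binding between rendering and reporting, a fabricated report is indistinguishable from a real one. A toy sketch (the message format is invented, not Brave's actual protocol):

```python
import json

def view_report(ad_id: str, user_addr: str) -> bytes:
    """What a 'proof of watch' amounts to without cryptography:
    a self-asserted message anyone can construct."""
    return json.dumps({"ad": ad_id, "viewed": True, "payout_to": user_addr},
                      sort_keys=True).encode()

# A report sent after actually rendering the ad...
real = view_report("ad-123", "0xabc")
# ...and one fabricated without ever displaying anything.
forged = view_report("ad-123", "0xabc")
print(real == forged)  # the backend cannot tell them apart
```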
I have always liked blockchain, and I always will. I use Monero a lot. But the current blockchain industry is full of projects that are fake blockchains, centralized blockchains, and especially systems where the blockchain and the product cannot be cryptographically linked, such as reselling electricity through a blockchain, or tracking fruit from a farm to a shop through a blockchain. I just don't understand whether people pretend not to see this because everyone has a share in the growing industry, or they really are so stupid that they don't see the fundamental problem.
The BAT component is off by default in Brave, and only enabled when the user explicitly opts in to the feature. It's a necessary component, as blocking alone is not a solution to the sustainability problem: blocking trackers and their ads means blocking revenue for the content creators and publishers we all know and love. Extra steps have to be taken if we're going to continue to foster and grow the Web we have all come to love.
With Brave you enjoy a baseline experience of privacy and security out of the box. Opting into Brave Rewards means you can earn tokens for your attention without giving up your data. Those tokens are automatically queued up for end-of-month contributions to the sites and properties you visit most. Or you can tip those properties in a one-off manner (like I do every time I land on a Wikipedia page).
I hope this helps a bit. If there is anything further I can address, I'd be happy to chat. Thank you for your time and attention :)
As for the long-term sustainability of the web, Brave, imo, has a better idea on its hands than Google's proposed Privacy Sandbox. For the sake of competition and innovation, I hope there are many more such initiatives.
* BAT is a non-financial utility token, not a currency.
* The Brave Referral Program specifically prohibited participants from making statements that BAT is a currency, a store of value, or an investment.
Examples of real-world non-financial utility tokens are amusement-park ride tickets and beer-garden food-and-drink tickets.
It should also be noted that the law determines what qualifies as a currency, not the issuer. If it looks like a duck, quacks like a duck - you know the rest.
I'm not sure why Brave thinks the public will seriously value these.
Here's the thing: the law looks to intent. If the sole purpose of issuing these things is to evade regulations by coming up with something that's not designated a currency but otherwise operates like one, no amount of jumping up and down and screaming "it's not a currency! don't call it a currency!" is going to make it something other than a currency. Courts aren't dumb, and they don't look kindly on parties who try to game the system.
In your hypothetical, the existence of an option of selling this thing on an exchange for other currencies makes it very similar to any other kind of currency. I think you'd have to eliminate that option to avoid getting too close to the line. But if you don't have that option, I just don't see how it'll have any significant value to the recipient.
With a token such as a laundry/car-wash token, an amusement ticket, or a Disney Dollar, you can't get cash back from the issuer. Once you buy them, they're yours forever, unless you can find a third party to give you money for them. The fact that you can (try to) sell them on eBay to a willing recipient doesn't make them currencies.
So I think it comes down to who controls the exchange. If it's a third party with no connections whatsoever to the issuer, then it's unlikely to be considered a currency. But if the issuer is also operating the exchange, or has a connection to the operator, then I think it's going to look a lot more suspect in the eyes of the law.
Again, this isn't legal advice - consult a licensed attorney in your jurisdiction.
In something like Lokinet, the whole thing is distributed and the people that run the service nodes get rewarded with coins. But normal end users don't have to think about the coin at all.
Inaccurate. Brave wants to overhaul advertising: you can switch it off completely by paying a fee, earn money by not switching it off (and thus watching the ads), tune it, etc.
Having said that-- ads are an unethical, distracting nuisance. Judging from every use case outside of esoteric journals, they will keep escalating their aggression, even to the point of destroying the value of the medium they're attached to. Try listening to a YouTube version of "Tristan und Isolde" with ads turned on. It becomes a broken video file at that point.
Worse-- web site owners have already shown that they lack the expertise needed to assess the ethics of the ad delivery systems they use. I can't tell you how many otherwise ethical open source devs used to have a fake download button, served by ad malware, right above the real download button for their software. Never mind that a previous incarnation of SourceForge just decided to turn evil one day and bundle malware. I don't think GitHub would do the same thing today, but most open source devs have no plan for what to do if it did. So we're no better off today in terms of awareness of these problems.
Plus, adding cryptocurrency tokens to that same confusion in no way makes it easier for those same devs to suss out the ethics.
Edit: just to be clear-- the fake download button came from the ad network domain. The site owners almost certainly just leveraged ads to pay the bills under the logic, "How bad could it possibly be for the UX?" They'll ask the same question of Brave's system, or any system, and have the same lack of expertise with which to understand the given answer.
Brave's model is to try to remove the coercion from advertising. Right now most companies spend immense effort spying on you and then shoving ads at you, fighting every attempt to block those ads or avoid the spying. Brave's model is instead to create a more cooperative system: you view ads if and only if you want to. The motivation they give you is to support the sites you like while also getting a little kickback yourself.
Somewhat analogous business models have failed, repeatedly, in things such as 'socialism restaurants' that tried to operate on a pay-what-you-can scheme since enough people opted out (by paying $0) to make it a losing venture. But I think it's something that will likely succeed here since the purchase price is always $0 - you're paying with attention, not money. Hahah, perhaps one of these socialism restaurants could actually work if they also provided a "free" pay method such as watching an ad!
 - https://github.com/brave/
I'm not trying to cast stones here, but they're still primarily (last time I checked) funded by Google, not users. Your incentives are aligned with the people who pay you.
1. Build and maintain a browser which creates a local profile of its user, a profile which never leaves the user's machine.
2. Sell ads which can be targeted to users with certain characteristics. Distribute the entire catalog to every user's machine. The browser selects a suitable ad from the catalog and displays it to willing users.
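A sketch of that split, with invented tags and a trivial scoring rule, just to show that matching needs nothing but local state and a public catalog:

```python
# On-device ad matching sketch: the interest profile never leaves this process.
profile = {"linux": 5, "privacy": 3, "cooking": 1}  # local interest counts

catalog = [  # distributed to every client in full
    {"id": "ad-a", "tags": ["cars", "travel"]},
    {"id": "ad-b", "tags": ["linux", "privacy"]},
    {"id": "ad-c", "tags": ["cooking"]},
]

def best_ad(profile, catalog):
    """Pick the catalog entry whose tags best overlap the local profile."""
    return max(catalog, key=lambda ad: sum(profile.get(t, 0) for t in ad["tags"]))

print(best_ad(profile, catalog)["id"])  # matched locally: ad-b
```

The server only ever sees that *some* ad slot was filled, never which interests drove the choice.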
Every browser has features that you don't use.
Facebook can't change the privacy rules or the settings of your browser.
The optional rewards program that happens to use a distributed ledger for settlement?
Let's see how that sounds when it's rephrased:
“I refuse to use an airline with a rewards program attached to it”
“I refuse to use a credit card with a rewards program attached to it”
But since nobody says that, you lose your mind when a blockchain-based one is used? One that is just as optional as the programs above? And which you use as an ad hominem to attach non sequiturs to any contribution under the "Brave" brand, such as this ZK VPN system, which doesn't even use the digital currency? Fascinating. Let's revisit this "taboo" next year to see if it is one at all!
Something isn't adding up, to me. If the assumption is that all "good" sites _can_ be enumerated, then wouldn't Tor (or other systems) exit nodes already be capable of blocking CP?
Someone connect the dots for me....
The whitelist approach could work similarly to adblocker lists, where you say "I trust Jim's List Of Friendly Websites". I don't know how good it is for performance, though.
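Mechanically, that could work like adblock list subscriptions: fetch a plain-text list from someone you trust and check hostnames against it locally. A sketch with an invented list:

```python
# "Jim's List Of Friendly Websites" as a plain-text subscription (invented).
raw = """\
# one domain per line, comments allowed
wikipedia.org
example.com
"""

WHITELIST = {ln.strip() for ln in raw.splitlines()
             if ln.strip() and not ln.startswith("#")}

def allowed(hostname: str) -> bool:
    """Allow a host if it, or any parent domain, is on the subscribed list."""
    parts = hostname.split(".")
    return any(".".join(parts[i:]) in WHITELIST for i in range(len(parts)))

print(allowed("en.wikipedia.org"))   # True: parent domain is listed
print(allowed("tracker.evil.test"))  # False
```

Performance-wise a set lookup per label suffix is cheap; the hard part is deciding whom to trust for the list itself.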
Don't want to take that risk.
Blocking *.onion on the other hand wouldn't be necessary from a "legal protection" standpoint: hidden services don't see the original IP of the client.
They can't do this to onion services.
This allows exit nodes to decide what types of content will be routed through their node.
From the paper:
> Note that such a proof is not straightforward. We firstly prove that a ciphertext, C_SNI, is the result of an encryption without disclosing the public key nor the plaintext. This causes the highest overhead in our construction. We use the construction presented in […] for this purpose.

> Then we need to link the public key encrypted in clause two, with the one used in clause one. For this we use a proof that two commitments hide the same secret […].

> Finally the third clause can be openly computed by A given that it received the public key from R.

> Using this, S can convince A that the tunnel created is to a domain that the latter considers valid, without disclosing […]
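The "two commitments hide the same secret" step is a standard sigma protocol over Pedersen commitments. A toy sketch in a tiny safe-prime group (the parameters are illustrative only; they are nowhere near the sizes or the exact construction the paper uses):

```python
import hashlib
import secrets

# Toy group: p = 467 is a safe prime, q = 233 = (p-1)/2; g and h generate
# the order-q subgroup. Real systems use vastly larger parameters.
p, q, g, h = 467, 233, 4, 9

def commit(m, r):
    """Pedersen commitment g^m * h^r mod p."""
    return pow(g, m, p) * pow(h, r, p) % p

def prove_equal(m, r1, r2):
    """Sigma proof that commit(m, r1) and commit(m, r2) hide the same m."""
    C1, C2 = commit(m, r1), commit(m, r2)
    w, s1, s2 = (secrets.randbelow(q) for _ in range(3))
    a1, a2 = commit(w, s1), commit(w, s2)
    # Fiat-Shamir: derive the challenge from the full transcript so far.
    e = int(hashlib.sha256(str((C1, C2, a1, a2)).encode()).hexdigest(), 16) % q
    return a1, a2, (w + e * m) % q, (s1 + e * r1) % q, (s2 + e * r2) % q

def verify(C1, C2, proof):
    a1, a2, z, t1, t2 = proof
    e = int(hashlib.sha256(str((C1, C2, a1, a2)).encode()).hexdigest(), 16) % q
    # Both checks use the same z, which is what links the two commitments.
    return (commit(z, t1) == a1 * pow(C1, e, p) % p and
            commit(z, t2) == a2 * pow(C2, e, p) % p)

C1, C2 = commit(42, 7), commit(42, 99)
print(verify(C1, C2, prove_equal(42, 7, 99)))  # True
```

The shared response `z` is what forces the committed message to be identical in both commitments, without revealing it.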
Tor absolutely spreads US propaganda. The "clear net" does too. The utility of Tor is that it prevents that clear net propaganda from being firewalled.
In the case of Hong Kong, we know who funds the terrorists. They meet with them in person and have been photographed, and the US makes no secret of how much they spend on terrorism in Hong Kong. But I still wonder how guides and instructions are given to the right people. Having read many CIA documents written to terrorist groups, I do wonder how they send these to HK terror leaders in the digital age. Tor is a candidate. Number stations, I kinda doubt.
I suggest looking into the Loki project https://loki.network/
I'm not completely sure how these two efforts compare, but Lokinet is essentially a more privacy protecting version of Tor.
I was thinking of approaching the whitelisting problem by whitelisting users, i.e., I share my node with friends I trust, and that gets propagated through the network depending on trust levels.
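Trust propagation through friends-of-friends could be sketched as a decaying graph walk, where you only share your node with peers above some score (graph and numbers invented):

```python
# Trust propagation sketch: friends-of-friends get decayed trust.
FRIENDS = {               # direct trust assignments, 0..1
    "me":    {"alice": 0.9, "bob": 0.8},
    "alice": {"carol": 0.9},
    "bob":   {"dave": 0.5},
}

def propagated_trust(root="me", decay=0.7):
    """Best trust score reachable from `root`, decaying per extra hop."""
    best = {root: 1.0}
    frontier = [root]
    while frontier:
        node = frontier.pop()
        for friend, t in FRIENDS.get(node, {}).items():
            score = best[node] * t * (decay if node != root else 1.0)
            if score > best.get(friend, 0):
                best[friend] = score
                frontier.append(friend)
    return best

trust = propagated_trust()
print({u: round(s, 2) for u, s in trust.items() if u != "me"})
# e.g. share the node only with peers whose score >= 0.4
```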
Trying to protect users through access control is foolish. It's like running a Tor exit from home.
1) Here users are donating their own bandwidth, and it's not resold to companies the way Hola does with https://luminati.io. At least for now.
2) They whitelist domains, so they could whitelist only example.com, and you know it's not like Tor, where everything goes, or Hola, where someone is web scraping through your IP.
But the very idea of sharing my uplink is anathema. Maybe if everyone curated their own whitelists. But once people rely on whitelists from "trusted" peers, all bets are off.
A safer alternative would have users sharing access to each other's VPN service connections. That would at least somewhat insulate users from malicious/illegal traffic routed through them.
Indeed, I routinely route traffic through nested chains of 3-5 VPN services. A common criticism is the cost of multiple accounts. And I typically have even more accounts at any given time, for variety.
But if a bunch of people pooled access to their VPN services, or to VPNs that they ran privately on anonymously leased VPS, each one could have a much larger variety of VPN paths and exit IPs. And you could multiplex and split traffic through the VPN network, to increase anonymity. Or aggregate links, using MPTCP, to increase throughput. And you could even implement something like Tor's process of switching circuits every 10 minutes.
I bet that I could implement a simple version of that with routing tables and iptables rules. And some shell scripts. Perhaps with network namespaces, for a little more security. Even Docker, maybe.
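The rule-generation part of that is indeed small. A sketch that only prints the `ip` commands for one-table-per-tunnel policy routing (device names, table numbers, and fwmarks are made up; it emits commands rather than running them):

```python
# Emit policy-routing commands for a pool of VPN tunnels.
# Each tunnel gets its own routing table; traffic is steered by fwmark.
VPNS = [  # (tun device, table number, fwmark) -- illustrative values
    ("tun0", 100, 1),
    ("tun1", 101, 2),
    ("tun2", 102, 3),
]

def rules():
    cmds = []
    for dev, table, mark in VPNS:
        cmds.append(f"ip route add default dev {dev} table {table}")
        cmds.append(f"ip rule add fwmark {mark} table {table}")
        # Marking traffic per flow (iptables -j MARK, cgroups, etc.) is left out.
    return cmds

for c in rules():
    print(c)
```

Switching "circuits" then reduces to remarking flows with a different fwmark every few minutes.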
But not just sharing ISP uplinks. That will end in tears.
That's not a bad idea, actually. Who maintains those whitelists and how do they get updated? If you want to make the web somewhat usable for others, is it enough to whitelist "google.com"/"youtube.com" only (for example)?
I run a relay node on my personal server and never had any issues. But 1) I rarely browse the Internet from that IP and 2) it's in OVH so if it were blacklisted, it could be because of that.
As far as I remember, in France you can get the same status as an ISP (I don't remember the name) so you can run an exit node without being held responsible. But you'll have to respect certain rules.
So, a more complete--and somewhat more balanced--description of this is in the actual paper, for which this is just a blog post summary; I would think the paper is way more valuable than this blog post, and maybe should even be the Hacker News post target.
First off, the DHT here is unlikely to scale well to large whitelists; yet, for small whitelists, you will (of course) end up knowing the target domain to high probability--which, even for large whitelists, is going to be possible given just the target IP address almost all of the time anyway: even with a CDN, the set of websites you get overlapped with tends to not be extremely large; and, even when it is, it is almost always with a bunch of niche websites that are unlikely to be on your whitelist--so, the premise that this is all hiding from the exit node who you are connecting to is extremely weak.
Oh: and when it does even sort of work with the CDN (due to having the shared endpoint), the user can usually then use domain fronting to trick the SNI, which would bypass this proof and let you connect to any other website behind that IP address; so, really, the way they are doing whitelists is just wrong: the IP address you are connecting to and the totality of what is behind it is way more important than the SNI. Essentially, while you can do this (prove, in zero knowledge, the SNI of an HTTPS connection), it doesn't seem like it really helps a real-world problem (as the situations where the technique works correlate with situations where you failed to hide anything).
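A toy model makes the point: if the observer knows the destination IP and the whitelist, intersecting the two usually identifies the domain no matter what the SNI proof hides (the DNS map is fabricated):

```python
# What an exit/verifier learns from the destination IP alone,
# given a public whitelist (all names and IPs fabricated).
DNS = {
    "news.example": "192.0.2.10",
    "blog.example": "192.0.2.10",   # co-hosted on the same CDN IP
    "bank.example": "192.0.2.20",
}
WHITELIST = {"news.example", "bank.example"}

def candidates(target_ip: str) -> set:
    """Whitelisted domains consistent with the observed destination IP."""
    return {d for d, ip in DNS.items() if ip == target_ip and d in WHITELIST}

print(candidates("192.0.2.20"))  # {'bank.example'}: SNI hiding gained nothing
print(candidates("192.0.2.10"))  # {'news.example'}: the CDN co-tenant isn't whitelisted
```

Only when several *whitelisted* domains share one IP does the hidden SNI actually hide anything.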
Meanwhile, this paper admits to taking 10-30 seconds per HTTPS connection (not per VPN tunnel!) as the DHT lookups and zero knowledge proofs are both slow operations. Somehow, before that completes, it sounds like you just get to use a different node to send "unauthorized traffic"? Why can't I just sit in that regime forever? I am hoping I just don't understand this part, but they say it multiple times as if it isn't such a big deal, and have a bunch of space dedicated to trying to make it sound like the unauthorized traffic would be a small portion of the total traffic (which doesn't exactly sound comforting).
And finally, domain whitelists don't work in the first place: I can post horrible things that get you in trouble to the comments section of a news site (their best example of a kind of website you might whitelist) quite easily; and, for their example of Facebook, it is actively dangerous: Facebook is an entire Internet unto itself that proactively scans for evil things, and so if you whitelist that you are essentially admitting "I would be willing to let you do anything". I could see a URL-based whitelist potentially having value, but not a domain-based one. We shouldn't be making users feel safer with systems that don't even slightly help :(.
(It is maybe also worth reminding that before the advent of encrypted SNI, this data could easily have been used to filter and whitelist traffic... and yet people working on projects like Tor still don't use it for filtering, as it just isn't enough, as you still don't know what the user is doing. It frankly just feels likely to me that the two goals that people want to simultaneously achieve here--"I don't know exactly what you are doing" and "I do know, to some reasonably high certainty, that you aren't doing something that would harm me"--are simply philosophically incompatible without some form of reputation/trust... which then makes achieving a third goal that people want--"I don't know who you are"--much harder.)
Regardless, back to the paper itself, I would argue that this is a single maybe-novel idea--that you can do a zero knowledge proof over the SNI packet of a TLS 1.3 connection with encrypted SNI--that is, as is common in academic papers, trying to be described in the context of a full-scale solution by surrounding it with the minimally-viable wrapper required to turn it into a product for an under-specified use case and then trying to type quickly past the serious downsides (such as the latency), all without being extremely critical of whether the idea itself is useful.
Me, I'd just say zero.