On the other hand... nobody's really got decentralised spam prevention working, and the only really effective anti-spam systems that I know of rely on hidden data, which implies centralisation.
The only serious ways of dealing with it are pay-per-use, which disproportionately affects certain subsets of the population, or web-of-trust, which nobody's got working for reputation on a grand scale yet.
How well does such a solution scale up? You need to keep the requirements low enough that it runs on mobile CPUs, but if it becomes widespread enough, doesn't it make sense for a bot farm to pick up a Bitcoin mining ASIC to grind out the hashes for them?
So far it works great. Issues might arise in the future, but then I can always tweak the hashing algo or switch the blockchain. Any ASICs created purposely to crunch hashes for my service would be obsolete the moment I tweak it a bit. So it has to be software...
As for mobile/desktop/etc - I would expect each community to have its own main audience, for which the site owner can tune the `complexity` parameter. And in V2 the work will happen in the background while you browse the site, so by the time you post a comment, enough work will already have been done. Hope this makes some sense :)
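The hashcash-style scheme described above can be sketched roughly like this. The `complexity` knob (number of leading zero bits required) and the exact hash construction are assumptions for illustration, not the service's actual algorithm; note how swapping `sha256` for another hash would instantly obsolete any ASIC built for it:

```python
# Minimal hashcash-style proof-of-work sketch (illustrative, not the
# real service's algorithm). `complexity` = required leading zero bits.
import hashlib
import itertools

def proof_of_work(message: str, complexity: int) -> int:
    """Find a nonce so sha256(message:nonce) has `complexity` leading
    zero bits. Cost grows exponentially with complexity."""
    target = 1 << (256 - complexity)
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(message: str, nonce: int, complexity: int) -> bool:
    """Checking a proof costs a single hash, regardless of complexity."""
    digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - complexity))

nonce = proof_of_work("my comment", complexity=12)  # cheap for one post
assert verify("my comment", nonce, complexity=12)   # verification is instant
```

The asymmetry is the point: a legitimate commenter pays the cost once per post, while a spammer pays it for every message, and the site owner can raise `complexity` per community to match its audience's hardware.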
"When you send a message, your client must first compute a Proof of Work (POW). This POW helps mitigate spam on the network. Nodes and other clients will not process your message if it does not show sufficient POW. After the POW is complete, your message is shared to all of your connections which in turn share it with all of their connections."
Rather different, actually. If I decide your inbound SMTP mail is spam, I can block it. Ditto for blog comments, XMPP contact requests, or really most any protocol in use on the public Internet, because of course there will be griefers and spammers and all kinds of bad actors out there, and we need tools to filter and protect against them.
You have the right to say anything you want (though not without consequences). Conversely, I should have the right to literally not see/hear it once I've decided it is causing me harm.
Claiming that your platform/protocol makes this impossible just sounds naive at best and nefarious at worst to me.
There was already an existing bug report in Bugzilla when I first ran into issue 1) years ago, and I added my comments to it.
Issue 2) I haven't reported. As annoying as it is when it does happen, it's not that common, and although it requires some extra steps, it doesn't result in looking unprofessional to clients like 1) does. However, there have been a few instances lately where my staff created new subfolders within a shared IMAP folder and moved e-mails into them, and I was unable to find those e-mails until I remembered this issue. So it's probably worthwhile to start gathering details and file a report to prevent this interruption in workflow.
It's my understanding (I don't have a source, just a recollection of a BS episode on "organic" food) that we would be unable to support our current population with non-GMO crops, let alone support the growth in population we are anticipating.
So, what's the alternative? Let 30% of our population starve because food prices shoot through the roof?
I've been hoping for years people would wake up to the risks of these things.
I presented on the topic at Blackhat Europe a few years back, where I disclosed several certificate validation flaws in Cisco Ironport. I understand there are legitimate reasons for enterprises to want to decrypt and inspect TLS connections, but it's not without its risks and downsides.
Yeah, it's a balancing act, and there's certainly a desire (and probably even a legitimate need) to monitor encrypted comms for malware C&C channels, data exfiltration, etc.
Your view seems to reflect a nuance similar to my own. Administrators need to weigh the risks and benefits as they relate to their own environment, and users should at least be aware that such monitoring is taking place. Beyond that, there are some technical challenges, but I see the bigger issues as political and as aligning expectations with reality.
Those kinds of monolithic network security systems seem to be intrinsically pointless. If a user can run code on the machine, then they can probably get around the network-level security. So any implementation is dependent on AV software preventing circumvention. At that point you might as well install the tracking/filtering software on the local machine.
No. Network level security, if correctly installed, cannot be avoided by just running some code on your local workstation. If you have it installed on the station itself, then it is easier to avoid by just shutting it down. Also network based security can isolate workstations that are suspicious.
And your 'monolithic' is a symptom of an architecture that is either outdated ("not hipster") or just bad. But that does not mean that someone can't build modern, good network-level security. I'd guess Google does not buy that off the shelf.
>> No. Network level security, if correctly installed, cannot be avoided by just running some code on your local workstation.
Don't you have to intercept/reject TLS to make that workable? Otherwise the user (or malware) can upload or download anything and all you see at the network level is a destination IP address. If a user has admin rights (which is common in corporate environments) then they can install software which can mimic a browser using HTTPS.
At the network level it is difficult to identify what program generated a request and which user was running that program. I am very sceptical of the heuristic approaches that try and solve this problem (Palo Alto App-ID for example) that display quite shocking emergent properties.
Surely it is technically preferable to track network requests within the OS and browser where you can actually get at information reliably without any hocus pocus. If a user can avoid it by just "shutting it down" then they can also remove the AV, connect to a proxy and spend the afternoon uploading client lists to a porn site.
Yes, the proxy has to offload the original TLS connection in order to do that. And the network owner must deploy its own certificate to the clients.
The whole X.509 infrastructure is based on trust. You have to trust your certificate store, the certificates, the network and its components, and CAs need to trust those who request certificates. If you have to use a network that uses a proxy, you have to trust it as well. If you don't, then just don't use it, or at least don't do your online banking over that network (or use a VPN, if allowed (sigh)). So a good network security deployment is not only well maintained, but also transparent to its users about what it does. The user must have a choice on whether a network is trustworthy or not.
The problem with SuperFish is that it shipped not only the root certificate, but the private key to sign new certificates on the fly. And the user was not informed about it and not given a choice. This is the problem here.
Most clients I worked for provided me with a separate network for unfiltered internet access (guest networks) in which I used a VPN to a network which I trusted. I was given a choice.
Edit: A thing that often bugs me is when I see a network proxy that does not use TLS for the proxy connections themselves. Unfortunately that is the case in the majority of networks I see. And that affects my trust, so I'd rather avoid accessing certain services when I cannot have my VPN.
That is true. That is why attackers (like the NSA) are happy to infiltrate routers (which change less often and are touched from the outside mostly by administrators) rather than clients (which change frequently). A proxy is a quality target, too. But a proxy is also more visible, and tampering with it is usually easier/faster to detect. Corporations need to TLS-encrypt and/or message-encrypt everything. But that is often not priced into (project) budgets, and it's a hard thing to do (key exchange, managing certificates).
That is possible. But it depends on how TLS clients validate wildcard certificates. Wildcard certificates are considered harmful. And AFAIK, browsers will not accept `*.*` (correct me if I'm wrong). So if I host a MITM proxy, I at least use FQDNs as subjects. That also works better with revocation lists/protocols.
An example for why wildcard certificates are bad is Microsoft. A couple of years ago, they had problems with subdomains which delivered malicious code through hijacked web pages that were hosted on those domains. Microsoft used a wildcard certificate...
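The matching rule the comments above are gesturing at can be sketched as follows. This is a minimal illustration of the common browser behaviour (wildcard only as the whole leftmost label, matching exactly one label), not a complete RFC 6125 implementation:

```python
# Minimal sketch of browser-style wildcard matching (not full RFC 6125):
# '*' may only be the entire leftmost label and matches exactly one label,
# so '*.example.com' covers 'www.example.com' but not 'example.com' or
# 'a.b.example.com', and a bare '*' or '*.*' matches nothing.
def wildcard_match(pattern: str, hostname: str) -> bool:
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if len(p_labels) != len(h_labels):
        return False
    if p_labels[0] == "*":
        # Require at least two literal labels after the wildcard,
        # which rejects '*' and '*.*' outright.
        if len(p_labels) < 3 or "*" in p_labels[1:]:
            return False
        return p_labels[1:] == h_labels[1:]
    return p_labels == h_labels

assert wildcard_match("*.example.com", "www.example.com")
assert not wildcard_match("*.example.com", "example.com")      # no label to consume
assert not wildcard_match("*.example.com", "a.b.example.com")  # matches one label only
assert not wildcard_match("*.*", "example.com")                # rejected outright
```

Real clients apply further restrictions on top of this (e.g. around public suffixes), which is exactly why a `*.example.com` cert covering every subdomain, as in the Microsoft case, is such a wide blast radius.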
I don't see a problem with those solutions that protect networks, if the users know about it. The alternative would be to have no Internet access at all in order to lower risks of loading malicious content.
I see problems with them as well. There's the security risk that the products themselves might have vulnerabilities that expose end users. Secondly, they may cause other problems that are not security problems. For instance, I have experience of a solution where an HTTPS proxy mangles AJAX traffic that goes over HTTPS. This causes very weird problems that are hard to debug.
Here the problem is not that the proxy is trying to insert advertisements into the content. Just changing IP addresses within AJAX content may break functionality in nasty ways: for instance, things work with one browser and not another, or require a particular engine setting in MSIE11, or some such. There is no problem in the service itself, but the service gets the blame, because people don't suspect that a Cisco product in between might be the cause.
Of course there are security implications with central services like an enterprise-grade proxy, and anyone using such a solution must do their best to keep it secure. It is all a question of probability and of cost. I bet most vendors of such solutions will do their best to protect them and their customers. So a network security solution that might have an exploitable hole for a period of time is still better than none.
I've been working my entire career for large companies. I've experienced many solutions, and I cannot remember one technical problem that was caused by network security, other than "InsertYourSocialNetworkOrBinary was denied by SecurityRuleXYZ". At several companies I had to sign a paper that informed me about the security implications and my duties when using the company's Internet/network access.
I have also worked for larger companies, mostly, and within them I have actually experienced many technical problems caused by network security solutions.
HTTPS man-in-the-middle proxying is one particular scourge that causes weird things, with problem reports of the kind where, in a completely legitimate and intended use case, "Chrome works, MSIE does not".