Hacker News | joelthelion's comments

I really wish something like this would take off.

Reddit and friends are great, but it's just too easy to game the system/censor if you control the central server.

reply


On the other hand... nobody's really got decentralised spam prevention working, and the only really effective anti-spam systems that I know of rely on hidden data, which implies centralisation.

The only serious ways of dealing with it are pay-per-use, which disproportionately affects certain subsets of the population, or web-of-trust, which nobody's got working for reputation on a grand scale yet.

reply


I created https://hashcash.io/ to try to avoid spam by forcing bots and users to "pay" with CPU cycles.

reply


How well does such a solution scale up? You need to keep the requirements low enough that it runs on mobile CPUs, but if it becomes widespread enough, doesn't it make sense for a bot farm to pick up a Bitcoin mining ASIC to grind out the hashes for them?

reply


So far it works great. Issues might arise in the future, but then I can always tweak the hashing algo or switch blockchains. Any ASICs created purposely to crunch hashes for my service would be obsolete the moment I tweak it a bit, so it has to be software...

As for mobile/desktop/etc., I would expect each community to have its own main audience, for which the site owner can tune the `complexity` parameter. And in V2, the work will happen in the background while you browse the site, so by the time you post a comment, enough work will already have been done. Hope this makes some sense :)
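For a sense of how such a scheme works, here is a minimal hashcash-style proof of work: the client grinds for a nonce whose hash falls below a difficulty target, and the server verifies it with a single hash. This is an illustrative sketch with made-up function names, not hashcash.io's actual protocol:

```python
import hashlib

def find_nonce(message: bytes, difficulty_bits: int) -> int:
    """Grind nonces until sha256(message || nonce) has enough leading zero bits."""
    target = 1 << (256 - difficulty_bits)  # any digest below this passes
    nonce = 0
    while True:
        digest = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(message: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Cheap server-side check: one hash, compared against the target."""
    digest = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

# The poster pays ~2^16 hashes on average; verification costs one hash.
nonce = find_nonce(b"my comment text", 16)
assert verify(b"my comment text", nonce, 16)
```

The asymmetry is the whole point: posting one comment is cheap, but posting millions is not, and the `complexity` knob mentioned above corresponds to the difficulty parameter here.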

reply


Bitmessage uses proof-of-work to prevent spam.

"When you send a message, your client must first compute a Proof of Work (POW). This POW helps mitigate spam on the network. Nodes and other clients will not process your message if it does not show sufficient POW. After the POW is complete, your message is shared to all of your connections which in turn share it with all of their connections."

https://bitmessage.org/wiki/Proof_of_work

https://bitmessage.org/wiki/FAQ#How_does_Bitmessage_work

reply


So what does a message actually cost in terms of dollars, power, time? How does it prevent low volume spam?

reply


And that may prevent bot spam, although I'm skeptical. But I seriously doubt it will have an effect on abuse (people stalking others, posting harassing Tweets, etc.).

reply


This. The project homepage literally claims (as a good thing) that "...no one can censor you. No one can remove your posts. Your account cannot be blocked."

So not only does this not help with abusive behavior, it states that abuse will live forever and be impossible to block or filter.

/me shudders

reply


Almost like the Internet. So spooky

reply


Rather different, actually. If I decide your inbound SMTP mail is spam, I can block it. Ditto for blog comments, XMPP contact requests, or really most any protocol in use on the public Internet, because of course there will be griefers and spammers and all kinds of bad out there, and we need tools to filter and protect against them.

You have the right to say anything you want (though not without consequences). Conversely, I should have the right to literally not see/hear it once I've decided it is causing me harm.

Claiming that your platform/protocol makes this impossible just sounds naive at best and nefarious at worst to me.

reply


Easy. You just get messages from the people you "follow". So if you want to send a message to someone, you must get them to add you to their "list" first.
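That follow-list policy amounts to an allowlist check at delivery time; a toy sketch (hypothetical, not any real client's code):

```python
# Messages from senders the recipient doesn't follow are never processed,
# so an abuser's proof-of-work buys them nothing.
following = {"alice", "bob"}

def deliver(inbox: list, sender: str, message: str) -> bool:
    """Accept the message only if the recipient follows the sender."""
    if sender not in following:
        return False  # silently dropped; zero cost to the recipient
    inbox.append((sender, message))
    return True

inbox = []
assert deliver(inbox, "alice", "hi") is True
assert deliver(inbox, "mallory", "buy pills") is False
```

The catch, of course, is bootstrapping: some channel still has to exist for strangers to request being added to the list.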

reply


Thunderbird's spam filter works pretty well, doesn't it? It's completely decentralized.

I do agree it's a difficult problem though.
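Thunderbird's junk filter is an adaptive Bayesian classifier trained locally by each user, which is what makes it decentralized. A toy sketch of the idea (not Thunderbird's actual implementation):

```python
from collections import Counter
import math

class NaiveBayesFilter:
    """Each user trains on their own mail; no central server is involved."""

    def __init__(self):
        self.counts = {"spam": Counter(), "ham": Counter()}
        self.totals = {"spam": 0, "ham": 0}

    def train(self, label: str, text: str):
        for word in text.lower().split():
            self.counts[label][word] += 1
            self.totals[label] += 1

    def spam_score(self, text: str) -> float:
        # Log-odds of spam vs ham with add-one smoothing;
        # positive means "looks like spam", negative means "looks like ham".
        score = 0.0
        for word in text.lower().split():
            p_spam = (self.counts["spam"][word] + 1) / (self.totals["spam"] + 2)
            p_ham = (self.counts["ham"][word] + 1) / (self.totals["ham"] + 2)
            score += math.log(p_spam / p_ham)
        return score

f = NaiveBayesFilter()
f.train("spam", "cheap pills buy now")
f.train("ham", "meeting notes attached see agenda")
assert f.spam_score("buy cheap pills") > 0
assert f.spam_score("meeting agenda") < 0
```

Because the word statistics live on each user's machine, there is no central choke point to game, which is exactly the property the parent comments are after.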

reply


Shared/public/open source warning lists are distributed intelligence, letting end-users pick their own feed.

reply


Web of trust.

reply


Have you reported the two problems, or at least asked about them on the Mozilla support website?

reply


There was already an existing bug report in Bugzilla when I first ran into issue 1) years ago, and I added my comments to it.

Issue 2) I haven't. As annoying as it is when it does happen, it's not that common, and although it requires some extra steps, it doesn't result in looking unprofessional to clients like 1) does. However, there have been a few instances lately where my staff have created new subfolders within a shared IMAP folder and moved e-mails into them, and I've been unable to find those emails until I recalled this issue. So it's probably worthwhile to start gathering details and file a report to prevent this interruption in workflow.

reply


The problem is that this 5% hit very often becomes a 1000% hit when you leave the ORM in the hands of people who don't understand it, and they issue a thousand queries when a single one would be enough.
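This is the classic N+1 pattern, easy to demonstrate with raw sqlite3 (no particular ORM implied; the schema is made up for illustration):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE book (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO author VALUES (1, 'Ann'), (2, 'Ben');
    INSERT INTO book VALUES (1, 1, 'A'), (2, 1, 'B'), (3, 2, 'C');
""")

# N+1 pattern: one query for the parents, then one more per parent row.
# This is what a lazy-loading ORM quietly does when you loop over a relation.
queries = 1
pairs_slow = []
for author_id, name in db.execute("SELECT id, name FROM author").fetchall():
    queries += 1
    for (title,) in db.execute(
            "SELECT title FROM book WHERE author_id = ?", (author_id,)):
        pairs_slow.append((name, title))

# Single-query version: one JOIN returns the same rows.
pairs_fast = list(db.execute(
    "SELECT a.name, b.title FROM author a JOIN b.author_id IS NOT NULL "
    if False else
    "SELECT a.name, b.title FROM author a JOIN book b ON b.author_id = a.id"))

assert sorted(pairs_slow) == sorted(pairs_fast)
assert queries == 3  # two authors cost three queries; the JOIN cost one
```

With two authors the difference is 3 queries versus 1; with a thousand parent rows it is 1001 versus 1, which is where the "1000% hit" comes from.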

reply


How can you tell if you're using HTTP2?

reply


In Firefox, HTTP responses using SPDY will have an added X-Firefox-Spdy header you can see in the inspector.

reply


This question is obviously asking about Firefox, but for Chrome users, here's how (starting in version 41, a few weeks from stable): http://ma.ttias.be/view-httpspdyhttp2-protocol-google-chrome

reply


The "HTTP/2 and SPDY indicator" add-on adds a lightning bolt icon to your address bar: blue for HTTP/2, green for SPDY:

https://addons.mozilla.org/en-US/firefox/addon/spdy-indicato...

reply


And yet, is that argument strong enough to start using GMOs on 80%+ of the cultivated surface right away?

( Source: http://www.ers.usda.gov/data-products/adoption-of-geneticall... )

reply


It's my understanding (I don't have a source, just a recollection of a BS episode on "organic" food) that we would be unable to support our current population with non-GMO crops, let alone support the growth in population we are anticipating.

So, what's the alternative? Let 30% of our population starve because food prices shoot through the roof?

reply


The thing is, since software can be so easily copied and distributed, the protection that 20 years of exclusivity gives is much stronger and more restrictive than in other areas.

I think some effort should be made to adapt the protection a patent grants to software, in order to incentivize R&D without completely hindering things such as free software.

reply


I hope the people in IT departments who like to pretend things like Websense or Cisco Web Security are good things are reading this.

reply


I've been hoping for years people would wake up to the risks of these things.

I presented on the topic at Black Hat Europe a few years back, where I disclosed several certificate validation flaws in Cisco IronPort. I understand there are legitimate reasons for enterprises to want to decrypt and inspect TLS connections, but it's not without its risks and downsides.

If you're curious about my past work, see: http://www.secureworks.com/cyber-threat-intelligence/threats... http://media.blackhat.com/bh-eu-12/Jarmoc/bh-eu-12-Jarmoc-SS...

reply


Good set of slides. Companies are more likely to be afraid of the other risk, namely malware using encryption to avoid detection, which is why SSL interception is used.

Security cuts both ways. I think the most important point is that the user should be in control of the traffic, which means knowing whether or not interception is being used.

reply


Yeah, it's a balancing act, and there's certainly a desire (and probably even a legitimate need) to monitor encrypted comms for malware C&C channels, data exfiltration, etc.

Your view seems to reflect a similar nuance as my own. Administrators need to weigh the risks and benefits as it relates to their own environment, and users should at least be aware that such monitoring is taking place. Beyond that, there's some technical challenges, but I see the bigger issues as political and expectation vs. reality alignment.

There's also a video of my talk online, which I'd honestly forgotten about. Maybe someone will find it interesting; https://www.youtube.com/watch?v=7TNdHzwTNdM

reply


Those kinds of monolithic network security systems seem to be intrinsically pointless. If a user can run code on the machine, then they can probably get around the network-level security. So any implementation is dependent on AV software preventing circumvention. At that point you might as well install the tracking/filtering software on the local machine.

reply


No. Network level security, if correctly installed, cannot be avoided by just running some code on your local workstation. If you have it installed on the station itself, then it is easier to avoid by just shutting it down. Also network based security can isolate workstations that are suspicious.

And 'monolithic' is a symptom of an architecture that is either outdated ("not hipster") or just bad. But that does not mean that someone can't build modern, good network-level security. I guess Google does not buy that off the shelf.

reply


>> No. Network level security, if correctly installed, cannot be avoided by just running some code on your local workstation.

Don't you have to intercept/reject TLS to make that workable? Otherwise the user (or malware) can upload or download anything and all you see at the network level is a destination IP address. If a user has admin rights (which is common in corporate environments) then they can install software which can mimic a browser using HTTPS.

At the network level it is difficult to identify what program generated a request and which user was running that program. I am very sceptical of the heuristic approaches that try to solve this problem (Palo Alto App-ID, for example), which display quite shocking emergent properties.

Surely it is technically preferable to track network requests within the OS and browser where you can actually get at information reliably without any hocus pocus. If a user can avoid it by just "shutting it down" then they can also remove the AV, connect to a proxy and spend the afternoon uploading client lists to a porn site.

reply


Yes, the proxy has to offload the original TLS connection in order to do that. And the network owner must deploy its own certificate to the clients.

The whole X.509 infrastructure is based on trust. You have to trust your certificate store, the certificates, the network and its components, and CAs need to trust those who request certificates. If you have to use a network that uses a proxy, you have to trust it as well. If you do not, then just do not use it, or at least don't do your online banking over that network (or use a VPN if allowed (sigh)). So a good network security deployment is not only well maintained, but also transparent to its users about what it does. The user must have a choice on whether a network is trustworthy or not.

The problem with SuperFish is that it shipped not only the root certificate, but the private key to sign new certificates on the fly. And the user was not informed about it and not given a choice. This is the problem here.

Most clients I worked for provided me with a separate network for unfiltered internet access (guest networks) in which I used a VPN to a network which I trusted. I was given a choice.

Edit: A thing that bugs me often is when I see a network proxy that does not use TLS for the proxy connections. Unfortunately that is happening in the majority of networks, I see. And that affects my trust, so I rather avoid accessing certain services when I cannot have my VPN.

reply


I guess that corporations need lots of network level security because they have so much unencrypted sensitive data on their networks which places a lot of implicit trust in that network.

reply


That is true. That is why attackers (like the NSA) would be happy to infiltrate routers (which see fewer changes from the outside, e.g. by administrators) rather than clients (more changes). A proxy is a quality target, too. But a proxy is also more visible, and tampering is usually easier/faster to detect. Corporations need to TLS-encrypt and/or message-encrypt everything. But that is often not priced into (project) budgets, and it is a hard thing to do (key exchange, managing certificates).

reply


Why does Superfish sign new certificates on the fly? Why not just use wildcard certificates?

reply


That is possible. But it depends on how TLS clients approve wildcard certificates. Wildcard certificates are considered harmful, and AFAIK browsers will not accept 'star.star' (correct me if I'm wrong). So if I host a MITM proxy, I at least use FQDNs as subjects. It also works better with revocation lists/protocols.

An example of why wildcard certificates are bad is Microsoft. A couple of years ago, they had problems with subdomains that delivered malicious code through hijacked web pages hosted on those domains. Microsoft used a wildcard certificate...

https://tools.ietf.org/html/rfc6125#section-7.2
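The leftmost-single-label rule from RFC 6125 can be sketched as a hypothetical matcher (simplified; real clients also handle partial-label wildcards and internationalized names):

```python
def hostname_matches(pattern: str, hostname: str) -> bool:
    """Accept '*' only as the complete leftmost label, matching exactly one label."""
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if "*" in p_labels[1:]:
        return False  # wildcard allowed only in the leftmost label
    if p_labels[0] == "*":
        if len(p_labels) < 3:
            return False  # refuse bare '*' and '*.com'-style patterns
        return len(p_labels) == len(h_labels) and p_labels[1:] == h_labels[1:]
    return p_labels == h_labels  # no wildcard: exact (case-insensitive) match

assert hostname_matches("*.example.com", "www.example.com")
assert not hostname_matches("*.example.com", "a.b.example.com")  # one label only
assert not hostname_matches("*", "example.com")
assert not hostname_matches("*.com", "example.com")
```

Under rules like these a single wildcard certificate covers every sibling subdomain at one level, which is exactly why a compromise of any one subdomain (as in the Microsoft example above) endangers all of them.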

reply


I don't see a problem with those solutions that protect networks, if the users know about it. The alternative would be to have no Internet access at all in order to lower risks of loading malicious content.

reply


I see problems with them as well. There's the security risk that the products might have vulnerabilities that expose end users. Secondly, they may cause other problems that are not security problems. For instance, I have experience with a solution where an HTTPS proxy mangles AJAX traffic that goes over HTTPS. This causes very weird problems that are hard to debug.

Here the problem is not that the proxy is trying to insert advertisements into the content. Just changing IP addresses within AJAX content may break functionality in nasty ways, for instance so that things work with one browser and not another, or require a particular engine setting in MSIE11, or some such. There is no problem in the service itself, but the service gets the blame, because people don't think that a Cisco product in between might be the cause.

reply


Of course there are security implications with central services like an enterprise-grade proxy. And anyone using such a solution must do their best to keep it secure. It is all a question of probability and of costs. I bet most vendors of such solutions will do their best to protect them and their customers. So a network security solution that might have an exploitable hole for a period of time is better than none.

I've been working my entire career for large companies. I've experienced many solutions, and I cannot remember one technical problem that was caused by network security, other than "InsertYourSocialNetworkOrBinary was denied by SecurityRuleXYZ". At several companies I had to sign a paper that informed me about the security implications and my duties when using the company's Internet/network access.

reply


I have also worked for larger companies, mostly, and within them I have actually experienced many technical problems caused by network security solutions.

HTTPS man-in-the-middle proxying is one particular scourge that causes weird things - the problem reports being of the kind that in a completely legitimate and intended use case, "Chrome works, MSIE does not".

reply


I believe Vagrant takes care of that for you.

reply


I think the fact that MS has good marketing is a better explanation.

Although I definitely agree that Word and friends are good products.

-----


To be fair, this has exactly zero effect on the NSA hacking your machine, especially if the hard disk firmware is compromised.

-----


Only if the firmware is compromised before you install the OS. It adds a significant hurdle to any malware gaining write access to your hard drive's firmware.

-----
