Libgen Storage Decentralization on IPFS (freeread.org)
281 points by jerheinze 57 days ago | hide | past | favorite | 111 comments

The IPFS mirror really gives a substantial performance boost. In the past, the files had to be fetched from a tired server far away in Russia, and in my experience the speed could be as low as 0.5 Mbps (depending on your ISP and connectivity). Now it can be lightning fast thanks to P2P: 2 Mbps+, more than adequate for downloading a 100 MiB+ file. You don't even have to install IPFS (though you probably should, to help maintain decentralization); books can be downloaded straight over HTTPS from Cloudflare's IPFS reverse proxy (called a "gateway"), cloudflare-ipfs.com.
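As a sketch of what the gateway gives you: a content identifier maps to a plain HTTPS URL, so any ordinary HTTP client works. The CID below is a made-up placeholder, not a real LibGen file.

```python
# Build a path-style gateway URL for an IPFS content identifier (CID).
# Any HTTP client (browser, curl, urllib) can then fetch it over HTTPS.
def gateway_url(cid: str, gateway: str = "cloudflare-ipfs.com") -> str:
    # Public gateways conventionally serve content at /ipfs/<cid>.
    return f"https://{gateway}/ipfs/{cid}"

# "QmExamplePlaceholder" is a made-up CID used only for illustration.
print(gateway_url("QmExamplePlaceholder"))
# https://cloudflare-ipfs.com/ipfs/QmExamplePlaceholder
```

The same function works for any public gateway, e.g. passing "ipfs.io" as the second argument.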

The IPFS network itself cannot be taken down, but I wonder when the publishers will discover IPFS web gateways and start filing DMCA takedown requests against them, and what the gateways are going to do. Will Cloudflare implement a huge copyright blocklist or even terminate its service (or worse, send a list of IP addresses to Prentice Hall or Elsevier)? What about the semi-official ipfs.io gateway? And the smaller independent IPFS gateways (there are currently 10+ on the public web)? Will they be DMCAed out of existence?

In my experience IPFS is only fast when you're using it through something like Cloudflare's IPFS proxy (which is basically a caching proxy). I haven't found IPFS to be actually usable any time I've tried running a node myself.

Is that unusual?

This has been the state of IPFS for years now. The protocol and client haven't been developed to where they can work reliably in a real P2P scenario. IPFS only appears to work because most requests go straight to one of the big gateways which already has the content cached.

Development of IPFS slowed to a crawl when most of Protocol Labs switched to working on Filecoin. It remains to be seen if either Filecoin or IPFS will ever be viable for their intended purpose.

IPFS is the engine of Filecoin, though. Filecoin just incentivizes IPFS nodes to store things. So further IPFS development should still be necessary from the perspective of making Filecoin development/adoption happen.

(I say this, and yet I know it isn't true in practice. But why isn't it true?)

Pump up Filecoin price. Sell Filecoin. Move to Fiji. Don't do any coding while in Fiji

Ha. I was super surprised that my dad (almost 70) mentioned IPFS to me the other day. It turns out the people he knows are running or thinking of running IPFS nodes because of the Filecoin incentive. Apparently, they don't want to miss another Bitcoin opportunity.

Filecoin is also being promoted in China. https://mp.weixin.qq.com/s/T0Qt2CEP7QsA6Zy9Vzvdkw

Yes, this is still the case. For a file which is on exactly one other computer, you have about 45 seconds lookup time before the download can start.

I'm not sure you are right about this. The following link will stream big buck bunny directly from the network with full P2P support. It is not using a gateway node for the video content.


You can run the following in the console to see all your peers

    for await (const peer of await node.swarm.peers()) { console.log(peer.addr.toString()) }
The console currently gets spammed pretty hard because they haven't implemented filtering out plain ws (non-wss) peers when you're connecting over HTTPS. But it will use WebRTC to find peers, and find all wss nodes in the network.

For me at least the vast majority of the data is being downloaded from various servers under bootstrap.libp2p.io or preload.ipfs.io. These are clearly dedicated servers, not true peers.

The contact info for these servers appears to be retrieved via the DNS rather than the DHT. This isn't surprising because browser clients can't participate in the DHT, but the DHT is currently one of the biggest weak points of IPFS. IPFS's DHT is so chatty that it consumes over a megabit of throughput just to maintain an idle node. Lookups are excruciatingly slow and often fail completely.

Sorta. A peer can advertise DNS addresses, and the browser can participate in the DHT as a client. The example above uses webrtc-star for browser-to-browser discovery.

    $ ipfs dht findpeer 12D3KooWJxNHY6zE1KnzFdBJrTMCbhCVNtSLvjp7qR5Wsp49DnUC

Took a good 10 seconds until the video started playing. Anything over a second is too long in terms of UX expectations.

I have to agree. I've had cases where I had to pin data that was on the node already and IPFS just hung the pin for minutes, failed, and then the GC removed the data later on. This is just one of the issues I had, other things include IPFS not being able to discover content on nodes that were already connected to it, IPFS just failing all my queued pins at once for some reason, etc.

I regret ever building a pinning platform on IPFS. The worst thing is that, when I expressed my frustration on Twitter about an IPFS release completely breaking IPNS updates, the founder called me a Karen.


The last time I tried it (maybe 9-12 months ago) it was a real resource hog (including choking the network) and really slow.

Are you opening the required ports (4001, 8080)? I found it was like that too until I opened the ports.

Interesting, I didn't notice that before. I believed my downloads were too obscure to be cached by Cloudflare, but I may be wrong. I'll do a comparison once I've fixed my server at home...

IPFS gateways are similar to Tor exit relays; they just route the traffic to the public internet. Tor exit relays still exist, and haven't been DMCAed out of existence so far.

One could also imagine IPFS over Tor.

Tor exit nodes are forward proxies: they are used by the client side to initiate connections to the public Internet, but do not accept connections from the public Internet. You cannot host an Internet-facing webserver on a Tor exit (an Onion Service exists solely within the Tor network), so DMCA is not an issue; they do not act as a point of content distribution, and there is nothing to take down. The main issue is abusive outgoing traffic, which is seen as unavoidable and outweighed by Tor's greater benefits, so the existence of Tor exits is justified.

IPFS gateways (and Tor-to-Web) are reverse proxies: they are configured on the server side to accept connections from the public Internet and route traffic to IPFS (and Tor), and they distribute content to the public Web. As far as I can see, this is a problem: someone can send DMCA notices asking the gateway operators to take files down from the Web. The only protection I see is DMCA Section 512 (a.k.a. the Safe Harbor provision): the operators are not liable for things solely hosted on IPFS, which is good, but gateways must comply with the takedown notices. So I think it's entirely possible for Cloudflare to put a huge blocklist on its IPFS gateway, and for smaller gateways that don't have the necessary resources to handle the requests to be DMCAed out of existence entirely. In the 2000s, the RIAA launched campaigns against eD2k servers and BT trackers, including the use of honeypot servers to collect information. The RIAA was not ultimately successful, but it created a major short-term disruption (eventually new servers would always appear in a different jurisdiction). A similar campaign against IPFS gateways sounds possible if the publishers decided to launch an aggressive crackdown like the RIAA's. It's only a matter of whether the decision is made.

I'm not saying that IPFS requires the use of IPFS gateways; it doesn't, and in a sense, gateways are counterproductive for the decentralization goal. But they do currently provide a useful service for the Web, and I'm just speculating about whether a major disruption is possible.

Thx segfaultbuserr for the detailed answer. I was wrong and I agree with you that IPFS gateways and Tor exits are two different things.

Well, then we might need something like IPFS-over-Tor, using Tor's anonymity features to make it safer for operators.

IPFS-over-Tor has been trapped in development hell for 5 years.


If IPFS-over-Tor is in development hell, IPFS-over-I2P is in development limbo.


You're almost right. The IPFS gateway nodes also cache the content, i.e. they are storing, hosting, and sharing it to other IPFS nodes.

How can one validate that the cached version is indeed bit-for-bit equal to the one stored within IPFS?

The address of the file is the hash of the file. (Really the hash of the DAG of the file)

So you could validate it if you wanted to. Because the URL _is_ the hash, if you did validate it you could be very sure that it is correct.

If a webserver hosts a file, how can you be sure that file is correct? Sometimes they ship a checksums file next to it. At best this can find corruption, but it would not find malicious files.

You hash it, everything is content addressed.
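The principle can be sketched in a few lines. Note this is a simplified illustration: real IPFS CIDs are multihashes computed over a chunked DAG, not a plain SHA-256 of the whole file, so the hash function here is a stand-in for the idea rather than actual CID verification.

```python
import hashlib

# Simplified content addressing: the "address" of a blob is derived
# from a hash of its bytes. (Real IPFS hashes the DAG of the file,
# so this SHA-256 is only an illustrative stand-in.)
def content_address(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, expected_address: str) -> bool:
    # A gateway (or any mirror) cannot tamper with the bytes without
    # changing the hash, so re-hashing the download validates the copy.
    return content_address(data) == expected_address

blob = b"example book contents"
addr = content_address(blob)
assert verify(blob, addr)
assert not verify(b"tampered contents", addr)
```

This is why a cached copy served by a gateway is as trustworthy as the original: the URL commits to the content.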

Am I understanding it right that DMCA takedown requests apply to a URL?

Maybe IPFS can introduce a feature to generate one-time URLs, e.g. `oneTimeURL = f(realURL);`.

Websites could then generate new URLs, for example every hour or even per client request. Copyright bots would file a DMCA request against a URL which only they have.
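A hypothetical sketch of that oneTimeURL = f(realURL) idea, using an HMAC over the real path plus a time bucket. Nothing like this exists in IPFS today; the path scheme and secret are invented for illustration.

```python
import hashlib
import hmac

# Hypothetical one-time-URL scheme: derive a short-lived token from a
# server-side secret, the real path, and an hourly time bucket.
SECRET = b"server-side secret"  # invented for illustration

def one_time_path(real_path: str, hour_bucket: int) -> str:
    token = hmac.new(SECRET, f"{real_path}:{hour_bucket}".encode(),
                     hashlib.sha256).hexdigest()[:16]
    return f"/dl/{token}"

# The site regenerates links each hour; a takedown notice aimed at
# last hour's token points at a path that no longer resolves.
assert one_time_path("/book.pdf", 100) != one_time_path("/book.pdf", 101)
```

The catch, of course, is that the underlying content is unchanged, so this only obscures the locator, not the material itself.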

I believe this is similar logic to what MegaUpload used. i.e. They would allow users to easily create links to content, and if DMCA'd would invalidate the links but leave the content alone (and the users free to generate arbitrarily many more links to keep sharing copyrighted content).

It kind of worked for MegaUpload. Their founder got rich and a version of the company still exists. However, the founder is also gradually losing a long court battle and may find himself extradited to the US where he will face significant jail time.

In general DMCA takedown requests reference a URL to locate infringing content, but the actual requests ask that the content itself be removed and not just the URL.

They'll probably lobby the governments for more laws.

It's a politico-technological arms race. Governments make laws. People don't agree with them so they create technology that subverts the government. The state must then become even more tyrannical just to maintain the same level of power. We'll end up with either an uncontrollable population or a totalitarian government.

> send a list of IP addresses to Prentice Hall or Elsevier

I don’t know, do you think nobody has noticed LibGen by now? Do you think the reason they’re still running is because of some jurisdiction issue? Even the lawyers would be wrong on that one.

> do you think nobody has noticed LibGen by now? Do you think the reason they’re still running is because of some jurisdiction issue?

I'm not sure about the implication of your words. But, clearly, Elsevier has already tried to take legal action against LibGen previously, and once in a while, LibGen's domain names are still being blocked and routinely replaced, just like Sci-Hub's. I'd still say the reason it's still running is a jurisdiction issue, or let's say a geopolitical issue: it's using Russia to shield itself from US influence.

But an IPFS gateway hosted in the United States by Cloudflare does not enjoy these legal and political benefits.

And what about when browsers (like Brave) start supporting IPFS natively, and gateways are not needed any more?

Browsers still don't support Tor or BitTorrent natively (both better established, though I'm not sure offhand which is older), so I'm not terribly optimistic.

What's missing from this system (and P2P systems in general) is automatic pinning, i.e. the system automatically chooses which files to pin on which user, to ensure every file is available on the net.

It's a bit of a tricky problem, because people may not like unknown, random files on their PC, and probably because such an algorithm is distributed and will need to resist attacks.

But I think the payoff is big: what BitTorrent did for popular files, such an algorithm could do for rare files.

IPFS has something called "Bitswap" which solves some of what you're describing, but is lacking in areas such as long-term file availability. That's where Filecoin sort of fills in the gap.


See the comment above about Freenet, which does something like this; although according to https://freenetproject.org/pages/documentation.html#content it seems to prioritise popular files rather than rare ones.

IIRC eDonkey prioritised rare files, although I don't know if this was a property of the network or just a common choice in clients.

There is a Freenet "KeepAlive" plugin which can be used to selectively pin files which you want to preserve, even if they are completely unpopular.

We built a simple pinning system for a project I'm involved with: each node queries a blockchain-based database for a set of resources to pin locally. Resource publishers pay blockchain fees to advertise their data for pinning.

We also found it necessary to build a kind of "overlay" on top of the p2p network (controlling how each peer selects its peers), because the default algorithm produced too sparse a network: the probability of one node ever being connected to another node that holds content it would like to sync approaches zero. The topology for the overlay network is also fetched from said blockchain.

If the pinning came with encryption where the key is not known to the system doing the pinning, I think most would be OK with it. There's no way to know, or demonstrate that anyone could know, the contents of any given file. Furthermore, the file could be broken into encrypted chunks and distributed redundantly to many users. Kind of like BT turned inside out.
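The chunking-plus-encryption idea can be sketched like this. The XOR-with-SHA-256 "keystream" below is a toy stand-in so the snippet stays stdlib-only; a real system would use a vetted AEAD cipher, not this.

```python
import hashlib

# Toy sketch: split a file into fixed-size chunks and encrypt each one
# with a keystream derived from a key the storing peers never see.
# (Toy cipher for illustration only -- use a real AEAD in practice.)
CHUNK = 4  # tiny chunk size so the example is easy to follow

def keystream(key: bytes, chunk_index: int, length: int) -> bytes:
    # Derive a pseudorandom byte stream per chunk, counter-mode style.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + chunk_index.to_bytes(8, "big")
                              + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_chunks(data: bytes, key: bytes) -> list[bytes]:
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    return [bytes(a ^ b for a, b in zip(c, keystream(key, i, len(c))))
            for i, c in enumerate(chunks)]

def decrypt_chunks(chunks: list[bytes], key: bytes) -> bytes:
    # XOR is its own inverse, so decryption mirrors encryption.
    return b"".join(bytes(a ^ b for a, b in zip(c, keystream(key, i, len(c))))
                    for i, c in enumerate(chunks))

data = b"some rare book"
enc = encrypt_chunks(data, b"owner-key")
# Peers holding individual chunks see only ciphertext; only the key
# holder can reassemble the file.
assert decrypt_chunks(enc, b"owner-key") == data
```

Each encrypted chunk could then be pinned redundantly on many nodes, none of which can read it.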

https://en.wikipedia.org/wiki/Perfect_Dark_(P2P) is this kind of system. You allocate a slice of your disk for the program to use as a global cache, and random encrypted chunks from the network end up downloaded into it.

This is pretty much exactly how Freenet works.

This doesn’t automatically pin; rather, it distributes a pin set across a set of trusted nodes.

It’s good stuff nonetheless.

I wrote this https://github.com/frrad/skyhub the other day to provide a nicer interface on top of the scihub torrents.

A strange choice given that distributing copyrighted materials violates the IPFS Code of Conduct [0]. Why not use BitTorrent whose attitude toward copyright is much more in sync with libgen's?

[0] https://github.com/ipfs/community/blob/master/code-of-conduc...

Protocols don't have attitudes; everything you're using IPFS for, you could (almost) also use BitTorrent for. The protocols don't really care what data you send, and no one can stop you from using them, BT or IPFS.

The CoC you've linked covers the IPFS Community and other Protocol Labs initiatives, it doesn't cover usage of IPFS itself. freeread.org is not bound by that CoC.

It's more than just the protocol, though. TFA links directly to the IPFS Desktop client, which certainly can have an attitude, e.g. if a future update blocks this project. That risk doesn't really exist for any mainstream torrent client.

> TFA links directly to the IPFS Desktop Client

I guess TFA is Teach for America? Doesn't really matter, but anything that is linking to IPFS Desktop could easily link to their own distribution of the client as it's all open source.

And while IPFS Desktop could start blocking content, so could torrent clients; I'm not sure what's different, really. Both are just clients for a protocol anyone can write new clients for, and both run the risk of adding blocklists and having people abandon them, so the same risks exist for both.

For "TFA", see e.g. https://news.ycombinator.com/item?id=19781756

> so could torrent clients, not sure what's different really

This is precisely why the developers' attitudes do matter. The difference is that the BitTorrent community has a well-established permissive attitude toward copyright violations. This is what gives me a high degree of trust that my next distro update won't bring in any new content blocks. I don't have that high degree of trust in the IPFS developers, given the explicit statements made in favor of upholding existing copyright law.

> anything that is linking to IPFS Desktop could easily link to their own distribution of the client as it's all open source

Sure, but getting people to migrate isn't easy. Look at how many people are still downloading OpenOffice vs LibreOffice, for instance: http://www.openoffice.org/stats/downloads.html

The protocol can't, but the network can.


Your link is to a forum hosted by Protocol Labs for the IPFS Community. Of course they are trying to follow their own CoC on their own properties, especially when it comes to copyright infringement. As a US company, they're bound to follow US law. Hardly surprising.

What does that have to do with the network? Again, neither the network nor the protocol currently has anything in place that expresses an "attitude" against copyright infringement.

I remember that we (I'm an ex-Protocol Labs employee) used to have plenty of discussions about adding allow/blocklists to the protocol that people could opt in to, but I don't think that was ever added (yet?). If it had been, I'd understand where you're coming from.

They are specifically asking for help and guidance to break the law. Any public community has to avoid those questions like the plague.

Remember youtube-dl got taken down because its README file referenced copyrighted material.

It doesn't matter if a tool _could_ be used for distributing copyrighted material. The tool (and its communities) cannot promote anything related to that use case at all.

The same is true of BitTorrent. BitTorrent _could_ be used to download copyrighted music and movies, but you won't find a single reference to anything like that use case on their web page, because that is illegal: https://www.bittorrent.com/

In fact the bittorrent terms of service says you may not use the tool for any illegal purposes: https://www.bittorrent.com/legal/terms-of-use/

> given that distributing copyrighted materials violates the IPFS Code of Conduct

Everyone has wishes, but wishes aren’t enforceable. It’s the equivalent of “Stop! Or I’ll say stop again!”

Yeah, and the fact that they've already decided to blacklist sci-hub.


Am I missing something? I don't see any mention of a blacklist in that thread. They did close it because of course, they don't want to be in Limewire's position, where they get held responsible for the tech they built because they supported, even in a minor way, its use in piracy.

Just to clarify what josu means here:

> they've

The IPFS Forum - Run by Protocol Labs

> decided to blacklist

Banned discussions around breaking the law on a public internet property owned by a US company.

None of this concerns the protocol itself, which you are free to use for whatever you want, just like HTTP.

LibGen has already had "torrent per 1000 files" for a long time; some books also have individual torrents, but it's not common. I'm not sure why, but I think the problem is that files on BitTorrent cannot be directly addressed by their hashes; everything has to go through an intermediate object called a "torrent", which does not uniquely identify individual files. You can always select the one file you need from a huge torrent, but it's not as convenient as directly addressing files by hash.

That's fair, BitTorrent v2 does solve this problem, but I can't say I've seen any v2 torrents around (not that I'm specifically looking)

Ah, thanks for mentioning that. If I understand correctly, it's a distributed filesystem, but there is still an authority able to ban users or content?

The answer for both BitTorrent and IPFS is: No.

They can blacklist hashes on the ipfs.io reverse proxy, but they cannot block you from accessing those through a different node like your own.

> distributing copyrighted materials violates the IPFS Code of Conduct

Even if so, what can they (the IPFS devs) even do about it?

I think such materials would be better placed on darknets protecting the privacy of users and seeders, e.g. I2P.

If you take IPFS's model and add inherent anonymity, that's exactly what Freenet is. (To this day I'm confused about why IPFS is popular while Freenet isn't.)

I've not tried Freenet for a few years, but every time I did it was very slow; e.g. browsing basic HTML sites hosted on Freenet was quite painful. IPFS can serve streaming video, and host massive archives like Wikipedia.

Also, Freenet actively seeks out data from the network to store locally, to give plausible deniability to users (there's no way to tell if something is locally cached because the user requested it, or whether the client fetched it automatically). IIRC this can be disabled by limiting oneself to a whitelist of peers (preventing network chatter that might reveal Freenet's presence). IPFS will only cache what's requested, so it's more predictable and useful for building infrastructure (e.g. hosting assets for a normal Web site); and presumably leaner on data requirements too.

IPFS isn't really competing with Freenet IMHO; it's competing with HTTP and BitTorrent.

> I've not tried Freenet for a few years, but every time I did it was very slow

It has become a lot faster recently, try again :) When I look at its statistics during active usage it can easily show 1 MiB/s traffic.

You still can't expect HTML to load with sub-second delays; for large multi-MiB sites it may take a low single-digit number of seconds. But there is an inherent cost to anonymity which probably puts a bound on how fast things can be:

To achieve anonymity, data needs to be redirected across multiple people so the sender and recipient can't determine who each other of them is.

Redirecting stuff across a longer path than necessary slows things down.

> To achieve anonymity, data needs to be redirected across multiple people so the sender and recipient can't determine who each other of them is.


> Redirecting stuff across a longer path than necessary slows things down.

That's skipping past a lot of details. There are still choices there. Freenet traded speed, efficiency, and scalability for a specific security model. There are anonymity systems (e.g. Tor) that are much more efficient but have different trade-offs.

The biggest difference between IPFS and Freenet is that IPFS only stores what you explicitly request on your local node, while Freenet distributes content automatically, so you'll start hosting content without any action on your part when you run a local node.

I'm not sure why IPFS is more popular than Freenet, but the difference mentioned above certainly makes me more likely to run an IPFS node than a Freenet one.

Freenet didn't take off in part because a node hosts all kinds of distributed files whose content you don't know, since they are encrypted; but if someday that encryption algorithm is broken, there is a strong chance many people were (unwittingly, but still) hosting child pornography.

What is the point of Freenet when we have I2P, which allows any protocol on top of it, not just file sharing?

It'd be nice for IPFS to support an I2P proxy. That would be quite an important change and would protect anybody downloading from LibGen over IPFS.

I agree. Actually, in this day and age I think all information sharing would be better placed on darknets. I don't see the net advantage to anybody of being profiled constantly on the clearnet.

Naive question: is there such a thing as Bittorrent over I2P or otherwise anonymized decentralized file sharing?

> Bittorrent over I2P

Yes. I2P is designed with P2P in mind. The default Java I2P client already comes with a bundled BitTorrent client; there is also a BitTorrent client called "Vuze" which is used by some non-anonymous users to help seed a torrent across the clearnet and I2P simultaneously. Because of the nature of I2P, don't expect great speed. No seeders is "usual", 10 KiB/s is "normal", and 50 KiB/s with only 3 or 5 seeders is "good". Overall, it feels as if it's still the early 2000s, but once you get used to it and adopt a "download and forget, come back in 3 days" mentality, it's not actually that bad. The fastest download I've ever seen was the movie Snowden: 20+ seeders and over 200 KiB/s...

I see it as 2 separate problems: data availability and anonymous transfer. IMO, you need 2 separate solutions because they are 2 different problems.

IPFS solved the data availability problem because you can incentivize storage miners to make your data available. This is something new, IMO.

Accessing it anonymously is possible with an extra protocol layer.

For instance, using TOR to download books from libgen should be pretty safe, no?

The linked article does not introduce the project, nor does the project site have an about page. wikipedia to the rescue!



> The InterPlanetary File System (IPFS) is a protocol and peer-to-peer network for storing and sharing data in a distributed file system. IPFS uses content-addressing to uniquely identify each file in a global namespace connecting all computing devices.

"- Recommended at least 16GB RAM and Intel i5 or equivalent processor - Recommended at least 100 mbps, gigabit connection preferred"

And I was going to try this on one of my embedded computers :(

Is book piracy a major concern for the publishing industry? Many avid readers that I know are happier buying the physical book because they like the feel of reading from a book. Have others found this to be true as well? I wonder if younger folks are less susceptible to this feeling.

Many avid readers I know are happy with physical books but also happy to use an e-reader for purely pragmatic reasons (easier to carry, custom font settings).

As for piracy, if the places I see libgen and its kind referenced are any indication, then this isn't about just any readers but about students, professors, etc. who want to access scientific publications.

Most people don't know how to pirate books, but yeah, it is. I got a lot of university textbooks this way.

I'm hesitant to participate in IPFS in the context of libgen.

Torrenting copyrighted materials (especially if you are distributing) can be quite expensive in some jurisdictions, and I do not believe that IPFS is treated differently than regular BitTorrent.

Check out https://gnunet.org/

It works well for file sharing as well. Don't know about copyrighted material though, have only shared my own files.

Does it actually work well? I tried to play with it a while back, and while sharing files appeared to work, actually downloading them from another node didn't go that well...

The thing that doesn't work well yet in my experience is the NAT traversal and finding peers in general. But I tested the same thing with two peers and I could download with no problems. It also seems to get updated quite often, I tested very recently.

> Torrenting copyrighted materials (especially if you are distributing) can be quite expensive in some jurisdictions

Why? Your ISP charges for BitTorrent traffic?

In case you are serious: law firms join the swarm, record IPs, write letters to ISPs, and send you fines. That's very common, for example, in Germany.

Edit: I didn't downvote you if it looks like that :)

> write letters to ISPs and send you fines. That's very common for example in Germany.

And you have to pay the fines? That's nuts.

In the US, you are pretty much as likely to get fined for torrenting as you are to get arrested for jaywalking (although without the racial disparity).

> And you have to pay the fines? That's nuts.

Yes, conveniently there's a second group of lawyers that focuses on getting people out of these fines...for a fee that's not so much lower. Unfortunately it's a bit of a business model in some countries.

In the US the (very) common scenario is to get a couple warnings from your ISP and if you continue torrenting copyrighted material you will be dropped as a customer and banned.

> the (very) common scenario is to get a couple warnings from your ISP and if you continue torrenting copyrighted material you will be dropped as a customer and banned.

This is not "very" common - the ISPs don't really ban you, they just threaten you. I no longer do this, but I extensively torrented over multiple home ISPs and they would send me warnings but they would never actually do anything.

AT&T has banned a friend permanently, so I would be careful about over-broad generalizations. Folks in my household were torrenting and we got our service cut on the second offense, until we went through some rigmarole promising we'd never do it again, at which point they stopped. Fielding these things isn't free for an ISP's lawyers, and at least some really do respond with action.

I'm curious if the action taken depends on broadband competition in the area - do you live in a city?

France was pressured by the US diplomatically to do the same in France. It's called HADOPI.

Source: wikileaks

By the way, there is also a LibGen plugin[1] for Calibre book manager and reader.

[1] https://github.com/MCOfficer/LibGen-calibre

does anyone know why libgen is still running over http? (But sincerely thanks for the site! it saved me on several occasions!)

They don't want the extra weak spot of certificate revocation being abused against them?

Though I'm not sure how big of an issue this is; sci-hub is using https.

Unless they activate HSTS, it shouldn't be a problem. They can always go back to HTTP if they want.

Also, Let's Encrypt doesn't even revoke certificates for malware and phishing sites. [1]

Maybe they're doing this for better caching performance? It makes SSL flood attacks impossible, but is that really worth it?

[1] https://community.letsencrypt.org/t/how-to-report-abuse/4110...

One of the new IPFS-backed mirror sites (https://libgen.fun/) is on HTTPS.

Great, however it currently doesn't have the sci-tech filter, and no comics filter either :-(

scitech is there, it's relabeled "Educational."

comics is not a public dataset -- it's privately run and managed, so there are no mirrors for it. I guess someone could scrape it.


I have heard that certain countries' ISPs break HTTPS; that might be one reason.

It's hard to believe, but it's certainly happened/happening -- plenty of democratic countries are filtering the internet, like Turkey.

Probably because they don't care enough to add https.

Is LibGen safe? I remember testing some of its mirror links on virustotal and saw some red flags, so I decided not to risk downloading.

Some of your concerns are addressed in the wiki on freeread.org: http://freeread.org/reddit-libgen/

VirusTotal includes some very poor quality Chinese AVs that produce positives for nearly every file. PDFs are pretty damn safe to access today -- the days of exploding macro exploits are about a decade behind us.

Yes it's just ebooks.

Even an ordinary .PDF can have executable content. This isn't a baseless concern.

and scientific papers

what red flags did you see?

It was some book, and when submitting the download link, 3 of the antivirus results were red. Perhaps they were false positives.

Anti-virus tools have been flagging keygens and cracks for ages, and also hacking tools that grab passwords from (system) stores, even if the tools have absolutely nothing in their code that talks to a server (I only ever used those for legal purposes, but I can't tell you how often I had to disable an annoying AV...). Just because the AV says something is malware doesn't mean you can trust its claim. From my perspective, they've undermined their trustworthiness in exactly the area where you are most likely to rely on an AV.

I'm not surprised they're now flagging what is basically text and images as malware.
