I'm talking about things like small tools that were shared on e.g. xda-developers before GitHub existed, about fan mods for games, etc. The 'big' ones lived on, but if you now search for a specialized kernel/ROM for your G1/ADP1, you're mostly out of luck.
It's sad that there's basically no way for an organisation like archive.org to archive content from sharehosters, given the unclear (or quite clearly black/gray) legal situation and the lack of cooperation from the sharehosters themselves.
This is something that has always sketched me out about the Android rooting and modding scene. "Download this suspect binary from RapidShare and run it as root" seemed to be a cornerstone of it.
I honestly don't understand why more people don't create GitHub accounts and use those to distribute, or at least use their ISP's free web space. Most of these tools have names, are well known, and are the top hit on Google, but none of them have an actual website you can visit to see whether they've released new versions, made something for other games, etc.
It's all very sketch.
Dwarf Fortress stuff is like that; Minecraft is even worse, you get adf.ly links in the middle :D
DFFD exists for things that aren't GitHub-appropriate, such as tools. No real reason to use anything else unless you're trying to monetise it with adf.ly etc. (which might also count as a reason not to download).
To me it seems very simple: level of effort. Uploading your hack/mod/whatever to rapidshare takes about one minute, or less. On the other hand, if you want to learn about git and github, you have to spend plenty of time on that.
(Plus I suspect that if a lot of people started hosting multi-megabyte binaries on github, their policies would change pretty quickly)
But yes, the simplest site is best for the simplest people. However, we're talking about people who put a lot of time into creating their mod/whatever. They can spend a minute or two more to figure out distribution.
Zero knowledge of git was necessary. Oh, and if you want to edit that README.md? You can do it from inside GitHub too; still zero need to know git.
The (pseudo)anonymous aspect of not having a fixed identity that could easily be linked to something else is both convenient and valuable for privacy, no?
There's a huge amount of "pirated" software/porn shared on many of those platforms for that reason.
The warez/cracks scene was essentially the same thing, and yet if you knew where you were getting things from, it was quite safe. The antipiracy groups have since been spreading plenty of FUD (and some possibly attaching malware to releases, I don't know) and working with the AV/security industry to make you believe otherwise, however.
Just as a warez/cracks group would be called out for it and very publicly shamed if they put malware in their releases, the same would happen in the Android scene. It's true that there are many rather clueless users (known as "leechers" in the vernacular), but there are also many knowledgeable ones and all it takes is one to give sufficient evidence of malice to trigger the "immune reaction".
And there are so many places for things to go wrong. Any one of the following could be malicious, incompetent, or compromised:
* The ROM's maintainers. There are many groups here; for example, many ROMs are based on ParanoidAndroid, which is based on CyanogenMod, which is based on AOSP.
* The device maintainers. Typically each brand/model has its own volunteers to maintain any proprietary blobs or special upgrade process.
* The hackers who provide special binaries that root each device, unlock the bootloader, etc.
* The added packages you typically get separately from the ROM, for example Google Apps.
* The build machine, typically just some random box donated semi-anonymously by someone
* The web hosting (without TLS, of course) provided by some other random person.
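Given a chain like that, the minimum self-defence is to verify every download against a digest published somewhere other than the (non-TLS) mirror itself. A minimal sketch in Python (the function names are mine, not from any real tool, and this only helps if the expected digest comes from a channel you trust more than the mirror):

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Stream the file so a multi-hundred-MB ROM zip never has to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_rom(path, expected_hex):
    """Compare against a digest obtained out-of-band (e.g. the maintainer's
    own post fetched over HTTPS), not from the same mirror as the file."""
    actual = sha256_of_file(path)
    if actual != expected_hex.lower():
        raise ValueError(f"hash mismatch: got {actual}, expected {expected_hex}")
    return True
```

It doesn't help at all, of course, if the maintainer's build box was the compromised link in the first place.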
I love Android. I compile and run my own ROM. But the current scene scares the shit out of me.
How much would it cost to buy off, for example, the entire radio hardware/firmware team at a manufacturer in your own country (meaning pretty much either China or South Korea), and on a governmental scale how reasonable or unreasonable is that number?
The fact that antipiracy and AV groups have an interest in getting you scared does not mean that there's no reason to be scared of running random binaries you found on the net.
And most people do not feed directly off the warez/cracks hubs - they feed off whatever they can find. Which means a lot of opportunity for bad actors.
Isn't this racketeering?
Furthermore, AV programs that classify keygens etc. in the same categories as keyloggers/adware (such as Microsoft Security Essentials) also have the net effect of increasing malware prevalence by training users to ignore AV warnings.
I'm not an Android user myself, but I will nevertheless suggest that optimally these things would be distributed as source on GitHub, with a deterministic build system guaranteed to be able to reproduce binaries in the future, and only secondarily as binaries.
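The verification side of that idea can be sketched simply: with deterministic builds, any number of independent parties can rebuild from the tagged source and publish their digests, and a user accepts a binary only when enough of them agree. A toy quorum check (this workflow is my assumption, not an existing tool):

```python
from collections import Counter

def consensus_digest(digests, quorum):
    """Given hex digests reported by independent builders, return the one
    that at least `quorum` of them agree on, or None if none reaches it."""
    if not digests:
        return None
    digest, votes = Counter(digests).most_common(1)[0]
    return digest if votes >= quorum else None
```

The point of the quorum is that a single compromised build machine can no longer silently substitute its own binary; it would have to compromise most of the independent builders at once.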
Android has "regular old joe sixpack" users by the millions, but has the Android community actually benefited from that? The existing "desktop linux" community has their act together far more than the Android community, despite (or because of?) not having those legions of unskilled users.
Hardware support in official ROMs from phone manufacturers is good, of course, but what is freely available to the Android community is much worse. It turns out joe sixpack doesn't really give any shits about hardware support being open-sourced.
It's nice that Google has given back some stuff, but that's peanuts compared with what the "year of the linux desktop" meme promises.
It would work kind of like an escrow, right? Where the "release/publish clause" could be tied to copyright expiration or other legal events like you say.
The problem I see is, how do you filter/reject the amount of stuff you would have to store without even being public? AFAIK, the IA has issues storing so much of the already-public data, so storing "dark" (non-public/unpublished) stuff would potentially mean lots of cruft and garbage so to speak.
However, it does sound very interesting. I hope we can some day achieve truly permanent data storage systems where we could just dump all of this and not worry much about it again.
Edit: Thinking about it a bit more, how feasible does the following sound:
Anyone interested in helping the IA could buy a sort of Drobo/NAS that is able to store only IA stuff (à la Freenet). Everything is encrypted, of course, and the only way to access the files is when the escrow trigger fires: the private key is released by the IA, and then every owner of an IA-box gains access to that particular part of the archive (as do regular IA users through the web).
It's kind of like an HDD preloaded full of torrents, where differences and new additions can be streamed to your local IA-box as needed. You could even filter what kind of stuff you'd like to help the IA archive. For example, I'm a big fan of movies, so I'd prioritize that category (up to a certain percentage, so that no one category is forgotten).
Does anyone know of anything resembling this? I mean, I could very well leave a low-powered NAS running to help the IA serve their content, store it for later use, etc. And I imagine (hope) a lot of other people would too. It would be a way of donating electricity and storage space to a worthy cause.
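The crypto side of that escrow could be as simple as sealing each archive item under a per-item key and publishing the key only when the trigger fires; every box already holding the sealed blob can then open it locally. A toy illustration (SHA-256 in counter mode as the keystream; a real system would use an authenticated cipher like AES-GCM, this is purely to show the shape):

```python
import hashlib
from itertools import count

def _keystream(key: bytes, length: int) -> bytes:
    """Toy keystream: SHA-256 over key + counter. Illustration only."""
    out = bytearray()
    for counter in count():
        if len(out) >= length:
            break
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
    return bytes(out[:length])

def seal(data: bytes, key: bytes) -> bytes:
    """XOR with the keystream; IA-boxes store only the sealed blob."""
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

# XOR is its own inverse, so releasing the key "opens" every copy at once.
unseal = seal
```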
The matching of Google's prices when they released was a good indicator of how much was profit though.
Ninja edit: That is not super price prohibitive.
Sure there is, you just micro-engrave it onto a nickel disc with a laser... http://blog.longnow.org/02009/05/21/what-13500-pages-micro-e...
Edit: Also paper. Low-acid paper in dry cabinets keeps a long, long time.
The Internet Archive serves torrent files for every object they store, so in theory, anyone could have a NAS that would crawl those torrent files and then join the swarm/seed for all of those objects.
That would be a hell of an open source project. The Internet Archive would then be a metadata repository and seeder of last resort. #shutUpAndTakeMyMoney
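The crawl itself would be mostly plumbing: archive.org exposes a per-item torrent at a conventional path, so a seeder box only needs a stream of item identifiers. A sketch of the URL construction (the pattern below matches IA's published convention as I understand it, but verify before building on it):

```python
def torrent_url(identifier: str) -> str:
    """Build the derived-torrent URL for an archive.org item."""
    return f"https://archive.org/download/{identifier}/{identifier}_archive.torrent"

def torrent_urls(identifiers):
    """Feed these URLs to any BitTorrent client that accepts a watch list."""
    return [torrent_url(i) for i in identifiers]
```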
There are many obscure demos, rehearsals, etc. that disappeared from the internet and most haven't reappeared since. I knew some bloggers who had uploaded probably in the range of 5000-10000 old metal demos, and these guys were careful to not post copyrighted material, but it seemed like if they got even a single strike against their account, everything was deleted.
I hope someone imaged those servers, otherwise a lot of that content might be lost forever.
For the ones that do not know what is meant: http://www.geeksaresexy.net/2008/04/24/rapidshare-captcha-wi...
Monetising would be a challenge, since rapidshare et al mainly made money by selling premium accounts, which are generally only attractive for piracy purposes.
This is what people who were appalled that megaupload would only remove one of the links to a deduplicated file, instead of removing the file itself, failed (or refused) to understand.
However, I've never gathered statistics on it, so this is purely anecdotal.
Rapidshare was the most responsive to copyright complaints out of all the filesharing sites, they took down links within 2 hours of being reported. But instead of nuking cracks to my software on Rapidshare immediately, it meant I let the Rapidshare links stay alive longer, because I knew I could turn them off whenever I wanted. I'd rather people uploaded cracks to RapidShare where I could see how popular / unpopular a link was & had control over when to remove it, than somewhere like MegaUpload that would deliberately take a long time to remove links.
I never saw evidence of piracy helping sales (always hurt sales) and I never used it for promotion, but I was more worried about cracks that came bundled with a virus, or that came bundled with a collection of illegal images. That stuff had to be nuked straight away for the protection of customers (and since much of the time, customers never understood that cracks don't come from the company that makes the software).
I never saw that. Even when I was young, that was obvious, and I never heard anyone else say they believed cracks come from the company making the original software. Where is that coming from? Is it your impression because you get support requests about cracks?
It's worth noting that while I have customers of all ages, many are older / elderly (many in their late 60s and a few in their 80s & 90s). They're not the most tech savvy, they don't understand the cracking scene & some need a lot of time-consuming handholding. Often wonderful & friendly folks, but they don't grok computers the same way the usual Hacker News reader will.
"Hoping to clear up its image the company made tremendous efforts to cooperate with copyright holders and limit copyright infringements. Among other things, the company adopted one of the most restrictive sharing policies while (re)branding itself as a personal cloud storage service.
"The anti-piracy measures seemed to work, but as a result RapidShare’s visitor numbers plunged. The dwindling revenues eventually cost most of RapidShare’s employees their jobs."
RapidShare was clearly only popular in the first place because of piracy. Once they started restricting that they became just another Dropbox/OneDrive/Google Drive competitor.
The problem was their new model completely precluded sharing of any kind. Imagine Drive or Dropbox, but without the ability to give links to your files, and you can see how useless the service became. You don't out-dropbox Dropbox.
As usual, lawyers ruin everything.
By being bought by Google, then having Google spend 100s of millions of dollars defending YouTube in court.
Thing is, sharing is a human trait, a trait required to live in a social group. The internet is about sharing, and either the copyright industry will adapt/disappear or the internet will disappear (probably eaten alive by fecesbookians and politicians).
The most successful cyberlockers do what RapidShare decided not to: pay uploaders, even those who share illegally. They also pay linking sites through referral schemes that are far more resilient legally. They aren't trying to appease anyone who isn't either a customer or a very active uploader. Working with copyright owners beyond the base legal requirements (DMCA et al.) isn't the business plan anymore. Getting into bed with copyright owners was Megaupload's and RapidShare's first mistake. The new plan is to make as much money as possible, then abandon ship the moment the MPAA looks their way.
Filesharers are ok with this. They purchase monthly subscriptions in full knowledge that the service might disappear any day. They aren't looking for a long term relationship anymore. The blind panic resulting from the megaupload raid ended that expectation.
Arguably, Megaupload was mainly used by non-profit uploaders. It gained popularity through affiliates, but afterwards average people and non-profit uploaders used the site to upload files more than the affiliate users did. I believe you had to become a premium member to actually gain access to the rewards feature.
Rapidshare died because not only did they combat copyrighted files, they also blocked the ability to share legit files. As pointed out in the comments elsewhere in this thread, they tried to out-Dropbox Dropbox.
edit: Also, MediaFire, before they went all-cloud, had no affiliate system and was very popular. And look at Mega.co.nz, a very popular site which is used for movies etc.
For me, the line is when you start paying websites that host links to files on your service (referrals), or when you reward known pirates even after, literally, hundreds of valid takedown notices against their files.
"Kindly note that RapidShare will stop the active service on March 31st, 2015. Extensions of STANDARD PLUS and PREMIUM will be possible until February 28th, 2015."
They weren't competent though, so maybe he just didn't want to give that employee access. I asked them what they did at Rapidshare and the only answer I got was "multicore stuff" (sic).
However, it seemed inevitable: RapidShare tried to rebrand as a personal cloud storage provider without providing the features needed to be one, while still cracking down on piracy.
Maybe they should have pulled a Dotcom: shut RapidShare down, announce Rapid, and then stick to the old business model.
This is all hall-of-fame stuff: the rise and fall of empires, giving way to new innovations and technologies.
I came across this thread whilst searching for info on File Hosting and there's certainly a lot of good information here.
I'm writing an app that uploads small documents to public file hosting sites (e.g. via REST-HTTP Post) so that others can download them again, preferably via simple HTTP GET.
Rather than coding to a particular API for each and every Hosting site, are there any public code libraries (preferably C++ or C#) that do this already presenting a common API across a number of different supported hosting sites?
Ideally I'd be looking for an Apache/MIT license code rather than GNU.
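I don't know of a mature, permissively licensed library that wraps many hosts behind one API, so you may end up writing the abstraction yourself. The usual shape is a small interface plus one adapter per site; sketched here in Python for brevity (all class names are hypothetical), but it translates directly to a C# interface or a C++ abstract base class:

```python
from abc import ABC, abstractmethod

class FileHost(ABC):
    """Common interface; implement one subclass per hosting site's REST API."""

    @abstractmethod
    def upload(self, name: str, data: bytes) -> str:
        """Upload the document and return a public download URL."""

class InMemoryHost(FileHost):
    """Stand-in backend for tests; a real adapter would POST over HTTP."""

    def __init__(self):
        self.files = {}

    def upload(self, name: str, data: bytes) -> str:
        self.files[name] = data
        return f"https://example.invalid/files/{name}"

def publish(host: FileHost, name: str, data: bytes) -> str:
    """App code depends only on the interface, never on a concrete site."""
    return host.upload(name, data)
```

The payoff is that when one host dies (as this thread demonstrates they do), you swap in a new adapter without touching the rest of the app.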
I remember once upon a time if you downloaded leaked music or even software, then Rapidshare was the go to site for that kind of thing. Funny how things change.
Nowadays it's mostly torrents.
I think the appearance of Google Drive/Dropbox is the main reason for this. People can easily share data with them.
I will really miss the golden era of filehosting services.
Or maybe they just couldn't compete with Dropbox et al.