Megafail (fail0verflow.com)
738 points by comex on Jan 23, 2013 | 198 comments



The wonderful thing about the ridiculous level of Mega publicity combined with the availability of code is that Mega are getting this sort of excellent feedback.

Early evidence is that they are listening and responding. https://mega.co.nz/#blog_3

Let's see what the response is to this piece.

I for one am not going to judge them too harshly if they keep listening and responding, as the product will just keep getting better. Sure they are blessed with I guess several million customers a few days after launch, but many here have also been through early product issues, albeit with far fewer customers. Like Mega, we learn and move on by fixing the problems.

I guess Mega appreciate the value of this sort of feedback, and the attendant publicity.

Edit: and just after publishing this, a tweet from Dotcom: "We welcome the ongoing #Mega security debate & will offer a cash prize encryption challenge soon. Let's see what you got ;-)"


I agree, the big hype is paying off for them: many people are looking into their code for free just to show how wrong they are, but in the end Mega can fix the problems and then the criticism will no longer hold. After the community has "fixed" their front-end code they should open source the back-end too, so they can get feedback on that as well. After that, the only thing users will have to trust is that Mega won't alter the fixed code.


I guess it is the right time for such a review of how they work, as they are at an early stage of deployment. They will benefit greatly security-wise from it.


For users who truly care about their data, the review should have been done before launch.

But Mega isn't a site for governments or enterprises, it's for day-to-day users and file sharers. So I can't be too upset they decided to half-ass it and let the public fix their code.


When I started the article I thought, "Oh great, another speculative Mega bashing."

Then you actually wrote the proof of concept and proved it to be a very real problem.

Nice work. Also the mouseover on your site title is sexy.


I'm not surprised. These are the same guys who thoroughly ripped apart the PS3 security system and presented it for a more-or-less general developer audience: http://www.youtube.com/watch?v=LuIlbmn-4A4


Well that was an hour productively wasted.


Just like reading a mystery book :)


"You do not understand cryptography" should never be dismissed as speculative bashing.


Looks like they are still serious with it: https://twitter.com/KimDotcom/status/293983892090793984


Reminds me of old school ultrashock.com


Ultrashock.com was awesome!


TL;DR - Mega designed their own cryptosystem. Not surprisingly, it's broken: They used CBC-MAC as a one-way hash function, but CBC-MAC is not a one-way hash function.


Maybe I am being stupendously dumb here, but why didn't they just use MD5? Is it so easy to compute that it's unsuitable here?

[Edit] Huge thanks to tptacek for the thorough explanation. He did make one mistake in that I was being dumber than he thought. I believed that being able to create a static file with matching hashes would be too much hassle to be worth bothering with; other replies have stated that this is not the case, hence the MAC route.

[2nd Edit] Ok, so it looks like I may not have been as dumb as I thought I was, good discussions further down.


MD5 by itself isn't a MAC; it's a (bad) one-way hash function. HMAC-MD5 is a MAC, like CBC-MAC; it computes an authentication tag from a key and a message, and it uses MD5 instead of AES. So I guess HMAC-MD5 is what you mean.

CBC-MAC is an unusual choice, and is known to be particularly hard to get right. Very few new systems are designed with CBC-MAC anymore.

The "industry standard" choice for a MAC in a new system is probably HMAC-SHA2; if you wanted to use a cipher core instead of a hash function for your authentication tag, you'd probably use something like OMAC (or, better, EAX mode, which does both authentication and encryption at the same time).

Long story short, you're not wrong, CBC-MAC was a weird choice.
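For the curious, a minimal sketch of the HMAC option using Python's standard library (the key and message here are placeholders):

    import hashlib
    import hmac

    key = b"a-secret-key"          # unlike a plain hash, a MAC needs a secret key
    msg = b"message to authenticate"

    # HMAC-SHA256 (a SHA-2 family member): the boring, hard-to-misuse choice.
    tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

    # Verify with a constant-time comparison to avoid timing leaks.
    assert hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).hexdigest())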


Why would you use a MAC at all here? There is no secret. The hash comes from the "secure" server. What they needed was a plain old hash, not a (H)MAC. This is just an integrity check.

Also, MD5 isn't broken for second preimage attacks.

People like to repeat the blanket statement that MD5 is trivially broken, but in this case, it isn't. Still, it wouldn't be advisable here, because it's broken in enough other scenarios that it makes people uncomfortable in general now. Even if they did use MD5, it would still take a nation-state or a significant crypto breakthrough before anyone could attack it.


Read downthread; we are saying the same thing.


The answer is now in the article: "If the goal was, as I suspect, to use the same AES core for everything, then a Davies-Meyer construction around AES (which is about the same amount of code as the CBC-MAC is) would’ve worked too, although"

Mega has to include the lengthy JS AES code anyway and did not want to add more files just for MD5 or SHAx. By building a MAC/HMAC on AES they save code size.


As someone else mentioned, MD5 has been known to be broken since 1996, but I'll assume you meant something like SHA-256 or SHA3-256.

I haven't looked closely at the code, but the author of the article speculates that the code was implemented by people who are unfamiliar with cryptography. I think that's a pretty reasonable explanation. Cryptography looks deceptively simple, but it's actually a complex and specialized field that takes years to get good at. Even experts in the field routinely build cryptosystems that are later discovered to be flawed. After all, they're building roadblocks that are designed to prevent everyone---even well-funded experts from the future---from finding a clever way around. Oh, and while they're at it, they have to consider CPU and RAM constraints, too.

Many developers don't treat crypto with the respect that it demands, and, unfortunately for end-users, this is the result.


I wouldn't go using SHA3 just yet, probably best to wait for more research into it.


It's relatively easy to create collisions with MD5, and such attacks have been demonstrated against SSL certificates. Something like SHA-256 would probably be more appropriate.


MD5 is also known to be insecure: it's straightforward to find hash collisions with the MD5 function.


Interestingly, there aren't practical attacks against HMAC-MD5 yet, though there obviously are attacks against more naive MD5 MAC constructions.


Second preimage has no practical attacks though, which is what is happening here.


Really?

Can you generate a JS file (that will actually execute) with the same MD5 as

    alert('Hello World');
MD5 is 7ecf458bad499f6815cbc10ed597dd3a


Practical collision attacks against real systems employing MD5 have been successfully carried out in places where people thought a lack of collision-resistance wasn't a problem. Cryptography software is hard enough to implement already; Let's not make it harder by building brand new systems using weak primitives when there are better alternatives available.


While it might take some time to solve your particular problem, similar problems have been solved previously. Take a look at the hello and erase programs at http://www.mathstat.dal.ca/~selinger/md5collision/ for a nice example of constructed md5 collisions.


I think that's slightly different. It's based on

    if (data == x) then { good_program } else { evil_program }
Generating a collision with a given MD5 hash is much more difficult.


I agree; it would be very difficult to do this: you would have to essentially define some padding area at the bottom of the file for random data (e.g. stored in a string) then generate data to go through all 2^64 possible hashes until you get a hit. I don't know if this is practical or not but it certainly sounds difficult.


Exactly. If you test 1 million scripts per second, it will still take you a quarter of a million years on average to get the collision.


You underappreciate what computers can do these days.

GPU password crackers run at billions/second. "The cluster can try 180 billion combinations per second against the widely used MD5 algorithm" ( http://arstechnica.com/security/2012/12/25-gpu-cluster-crack... ). That's not exactly the same task, but close enough that I'll estimate the actual performance as 100,000× faster than what you estimated.

That takes you down to 2.5 years with 25 AMD Radeon HD6990 graphics cards, which cost $1,000 each. For $100,000, based on your estimate, a dedicated and well-heeled hobbyist can probably find a match in a year.

Wait a few years and that price goes down quite a bit. In a decade it will likely be a semester project at some schools.

Even if I'm off by a factor of 100, GPUs are fast enough that a mid-sized organization would be able to brute force it, should they be motivated.
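A quick back-of-envelope check of these figures, assuming the 2^64 search space posited upthread (expected work is half the space; the results land near the thread's rounded numbers):

    trials = 2**63                        # expected tries for a 2^64 space
    seconds_per_year = 3600 * 24 * 365
    print(trials / 1e6 / seconds_per_year)    # ~292,000 years at 1 million/s
    print(trials / 180e9 / seconds_per_year)  # ~1.6 years at 180 billion/s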


This assumes that those .js files on Mega will stay the same for those 2.5 years (or semester). Much more likely is that those files will be updated somewhat regularly, and any attempt to collide a specific hash will be rendered irrelevant.


This is incorrect and doesn't apply for several reasons. It does not find collisions. It finds a pre-image (which is already known - it's the JavaScript file) for very short message lengths. Very different.


What you're describing is very different. They are searching for the shortest possible text that will result in the same hash. It's possible to generate hashes very fast because the texts are so short.

What we're talking about is not just generating some random text, but an actual JS file (that will execute malicious code that you want) with the same hash.


I don't believe that it's different. All you're looking for is the small bit of extra characters which converts the desired malicious Javascript into the target MD5.

You take your JS payload, add a terminal "#", then generate the hash information for that content. This gives the initial hash state. Now set the brute-force GPUs on a mission to search for the smallest string of non-newline bytes which, when added to that hash state, gives the desired MD5 result.

A problem is that this requires brute-forcing a 2^128 space, not 2^64 as was mentioned earlier. I didn't catch that. 2^64 is brute-forceable. 2^128 isn't.


Another thing you didn't catch is the difference between MD5 and the broken Windows password algorithm.

And the fact that you still have to calculate the MD5 of your WHOLE malicious javascript file, plus the comment containing the random tail you keep modifying.


The link and numbers I used were in regards to MD5. I even quoted "The cluster can try 180 billion combinations per second against the widely used MD5 algorithm."

And no, you don't need to recompute the whole MD5 each time. Here's the Python code which shows that you can capture the hash state at an intermediate point:

    >>> import hashlib
    >>> h1 = hashlib.md5("This is the start")
    >>> h2 = h1.copy(); h2.update(" # Blah!"); h2.hexdigest()
    'a309b70b0bc8de4e7aad1d0ac6e14b16'
    >>> h3 = h1.copy(); h3.update(" # Fnord!"); h3.hexdigest()
    'a0a5979225e0ba46543856242daeab3c'
    >>> hashlib.md5("This is the start # Blah!").hexdigest()
    'a309b70b0bc8de4e7aad1d0ac6e14b16'
    >>> hashlib.md5("This is the start # Fnord!").hexdigest()
    'a0a5979225e0ba46543856242daeab3c'
You may think that perhaps the copies are keeping track of the entire string. However, this is not correct. Indeed, that would make MD5 rather useless, because it would limit processing to available memory. How would one MD5 a multi-GB file with only a small amount of RAM?


I wouldn't say it's easy, but the obvious known example is the forged Microsoft certificates used by the Stuxnet/Flame/Gauss viruses (widely believed to have been developed with nation-state backing/resources).

edit: I don't know what I'm talking about :)


That example does not apply here. A collision attack doesn't help. You need a second preimage attack, of which a practical one doesn't exist for MD5.

The difference is with the collision attack, the attacker controls the inputs and has to find any two valid messages with the same hash - any hash. That's what was done with forged certificates.

In this case, the hash you need to match against is fixed. You have one preimage, which is also fixed of course. You have to find a second preimage that also matches that specific hash. If you can do this at all, let alone "easily", you will significantly advance the field of cryptography.

(To raise the bar even higher, your second preimage has to be valid javascript that executes your malicious action.)


That last point isn't particularly deep; the same techniques used in http://www.win.tue.nl/hashclash/rogue-ca/ can be used for two files which are UTF-16 encoded and contain sections which are commented out.

With this said, that is a collision attack and not a preimage attack and so your first point is crucial.


Fortunately, even the output of /dev/random is usually valid javascript, so that part isn't much of a concern ;)


alert('Hello World'); // add random chars here until hash fits

Sure, it's not trivial, but it's not impossible.


Actually that particular attack is impossible as far as humans currently know.


Are you saying that any given substring of characters will provably rule out a chunk of "hash space" for the entire message? Because that property sounds sort of interesting in itself.


No, I'm not. Also "impossible" was a bad word for me to use. It's impossible in the "not enough time before the sun burns out" sense, not in the mathematical proof sense.

I should have said impractical, but then people sometimes respond by talking about how fast GPUs are advancing, not getting just how far off they really are.

The best known attack to find a first pre-image is 2^123. To put this in perspective, using a slightly modified common analogy to describe how long 2^128 is:

"Imagine a computer that is the size of a grain of sand that can test inputs against a hash. Also imagine that it can test a hash in the amount of time it takes light to cross it. Then consider a cluster of these computers, so many that if you covered the earth with them, they would cover the whole planet to the height of 1 inch. The cluster of computers would find a valid pre-image on average in 1,000 years."

Even then, you would not have a useful preimage to mount an attack. You wouldn't even have ASCII. If you got ASCII, it wouldn't be syntactically correct javascript. If it was, it wouldn't do anything remotely malicious.

You would have to keep doing this until you randomly generated an input that happens to be valid javascript that performs your malicious action.

So, I rounded up to impossible.


Sure, it's not impossible if you throw a few trillion dollars at the problem and wait a few years for the computation to complete.


No. But what if I were the consultant who wrote that file and I picked it specifically because it shared the hash of one of a family of malicious auto-generated files I had already created?

That'd still be hard but much more likely.

This has implications for people who use tools like tripwire. If you didn't create the original file it might be the benign half of a set, if the details of your hashing are known to the attacker.


Because MD5 is considered broken. Use at least SHA1, or even SHA2/SHA3.


So you can steal the encryption keys from the browser. What a surprise.

I don't think anyone at Mega gives a damn though, or the people who are going to use it to upload movies. The only reason there's an encryption key at all is to cover their asses on future lawsuits, not to make Mega amazingly secure.


This is where I think the speculation lies. Let us speculate that you wanted to build a "legitimate" Pirate Bay-type system: what would you have to do in order to pass the 'sniff test' for litigators while providing an otherwise easy-to-use service?

I am not in any way at all suggesting that Mega is doing this, or that they are even trying to do this. I am simply observing that there is an interesting case to be made where you can swear under oath you did your best to make the site secure, and yet people more sophisticated than you broke into it and had their way with you. How much liability does that protect you from? Any? All? I haven't got a clue here, but I find the question an interesting one. Perhaps Eric Goldman will chime in.


Stopping your online digital locker service from being used for mass distribution of copyright-infringing material is easy. Go ahead and host a movie on Dropbox and share a public link on a popular site. See how long it stays valid.

A normal system admin might decide a single 4GB file accounting for tens of terabytes of traffic is unusual and investigate the cause. Odds are pretty decent you'll be able to find the source driving all the traffic.

Kim Dotcom is going for the blind eye approach. Just close your eyes and cover your ears so it's not your fault. He did this with MegaUpload to the tune of hundreds of millions of dollars of profit. I wonder if the new site will reward users who have highly downloaded files like his last?

Can you prove, or disprove, that the cryptography was intentionally broken? Doubtful. Can you prove a website didn't try to prevent the mass sharing of copyrighted material? Yes. Is that illegal? Maybe. Should it be? That's a whole 'nother can of worms.


So you're one of the people who buy into the whole "it's easy to tell what files are legally shared" idea, and it's just not true. This is proven time and time again when publishing companies send DMCA takedown requests for content that another department in their own company published intentionally.

Almost everything is covered by copyright, so it's indeed not too hard to figure out whether something is covered. It's much harder to figure out who owns the copyright on something, and it's almost impossible to figure out whether a given file is authorized by the copyright owner.

Just because a file is popular doesn't mean it's infringing. Can you guarantee this popular file wasn't posted on twitter by the artist for a marketing campaign? Can you guarantee the artist didn't upload it to a torrent site to get attention? No, you can't. You'll just take a flamethrower approach and many legitimate files will be removed in the process.


I took for granted that most infringement policing is done by locating the links in forums, checking whether they really point to copyrighted content, then requesting that the file locker remove the file.

Actually, I can't see how encryption could change this procedure. It seems like a technical solution applied to a social problem.


You can't take for granted that anyone actually checks the file contents. I'm pretty sure the laser printers that got sent takedown notices weren't actually sharing any content.


I didn't mean that every report comes from content inspection, just that this is the most common case and that encryption couldn't stop it: if it's public, someone can inspect it.


Popularity is not proof of illegal activity. It seems implied that only restricted copyrighted information can bring high traffic, but that's not true. One of the highest-traffic torrents I ever had, which got the computer club network cut off from the university for "suspiciously uploading a high amount of data", was in fact a Linux live CD... It was also a highly effective way to test out the new server's hardware :).


Of course it's implied, because >99% of high-traffic files on Mega are going to be copyrighted material. To suggest otherwise would be insane. Is high traffic proof of anything? Of course not. Is it reason to investigate? Absolutely, unless your stated mission is to cover your eyes singing "lalalalalala".

Kim Dotcom has already made hundreds of millions of dollars off copyrighted material and he's trying to do it again. I'd say the fact that anyone supports his antics is bewildering but the power of selfishness knows no bounds.


Investigation takes time, effort and money. That's why the police are a group of people paid to do that work. No one expects the police force to be made up of volunteers. Why, then, do people suddenly expect system administrators to do police work for free?

How common is it for people to go to the local police station of an afternoon and offer to do some work with no expectation of any kind of salary?

System administrators are not people who work for free. They expect to get paid like everyone else. They also expect to be able to decide which work they accept and which they do not (i.e., they are not slaves). Some do investigations for the police without payment because they feel it is a civic duty, but the above comment implies that they must do so. That is wrong. System administrators should not be forced labor for the police.

If there is investigating to do, call the police, pay your taxes (so the police get paid), and let the system work as intended without putting forced labor onto the system administrators of the world. If you dislike a service because it's used by criminals (like, say, Gmail, which is abused by spammers), it's still the police you should call, not the system administrators.


>99% of high traffic volume files on Mega are going to be copyrighted material.

This is a completely baseless assertion with no facts to back it up. (Difficulty: Pointing to a dodgy DoJ case does not count as "facts")

Attempting to shift the burden of proof by saying "to suggest otherwise would be insane" is not appreciated or necessary.

Furthermore, it's a useless assertion too, since pretty much everything is copyrighted. Whether it's authorized is another matter, and not nearly as easy to figure out from a service provider's standpoint. Nor is it their job.


Dropbox doesn't have copyright checking, last I checked. The way they deal with the problem is by bandwidth-limiting public URLs. You would get the same effect with a video you legally publish as with an infringing video.


Are the cloud providers (dropbox in this case) even allowed to look at the contents of your files?

I guess this would be up to the TOS of each site, but the few sites whose terms I've actually read at least pretend that they value your privacy.

Also, remember the shitstorm after 37signals announced that the millionth file ever uploaded to Basecamp was an image of a cat? The official story after all the complaints was that they never opened the file, but they still "knew" because the filename in the logs showed up as cat.jpg.


> Are the cloud providers (dropbox in this case) even allowed to look at the contents of your files?

DMCA safe harbor means the hosting provider should not have editorial control. If you have editorial control, you're open to liability for not exercising that control. Better to go the legally proven route of getting a credit card and an adult signature, to keep all the liability on the user.


What I got from this is that the code is intended to be set up so you don't have to trust the CDN, however due to an error in the way it's coded, it turns out you do have to trust the CDN. Given that CDNs are reasonably trustworthy in the absence of government intervention, I expect that Mega will fix this before there are any exploits.

It's good work by fail0verflow, but as long as Mega addresses it soon, I don't think it's a big deal. In fact, it's a good test case for how seriously they take this kind of problem.


Isn't this missing the entire point? The only purpose of the encryption is to give Mega a strong basis for denying any knowledge of what the content stored on their servers represents.

It isn't like people are going to be using it to store trade secrets and diplomatic cables. The government isn't in the business of attacking a cryptosystem just to go after the average pirate consumer.

These keys are going to be leaked all the time - after all, it's for piracy, so you need to give them out to your friends, etc. The trick here is that the resulting takedown won't include any obligation to take down all the other copies of the same film, since they've all been keyed separately and aren't deduplicated.

Mega is an encrypted document service in the same way the pirate bay is a self publishing channel.


I was wondering if the flaw was intentional.


Kim Dotcom and those in the Mega posse know the encryption they claim is mega secure is in fact a mega lie. We all know it, but does it actually matter? The whole point of the encryption wasn't so you can upload files without fear of them being looked at by staff; it's to prevent Mega from being taken down by the US government again, by letting them claim they truly can't see the contents of the files on their servers.

The encryption is a lie, but at the end of the day that shouldn't detract from the fact that Mega is a decent cloud hosting platform with decent ideas. Compare Mega to offerings from Mediafire, Dropbox, and every other cloud file storage provider, and Mega is quite compelling considering the free 50GB of space you get.


I don't think it is fair to dismiss it as a "mega lie". I am actually surprised at how well thought out the encryption scheme is. The first version might have flaws, but at least they thought about the case where the CDN might be compromised and implemented a quite clever (but flawed) workaround. Most banks don't put this much effort into the security of their website.

Time will show if they fix this flaw, or not.


Maybe not a lie in the strict sense of the word, but looking at their crypto algorithm it's obvious that at present it is a gimmicky feature. The lack of a strong encryption scheme doesn't degrade the service, in my opinion; it's more a feature to protect Mega than its users, contrary to how they are portraying it.

Given the massive amount of backlash and scrutiny from experts and non-experts alike, Kim always seems to win in the end and I am sure they're already planning on hiring a cryptography master to help them write something more serious as we speak.


> at present it is a gimmicky feature

It's also labelled beta.


They have an API where you can write your own client if you don't want to use the JS client. This doesn't solve every security problem, but it does solve this one.

I think that they have really tried to make it secure, but they just aren't competent enough. Browser crypto is always somewhat insecure, especially when they actually store your keys and you have to trust them.


The 50GB you get is precisely the reason why I won't touch them.


I would be curious to know why. Would you mind elaborating?


I don't think it's a sustainable business, that's why; not unless users get fucked in the long run, one way or the other.

Startups these days, and even bigger companies like Google, have gotten into the habit of creating something and giving it away for free or for cheap; then, when it gets popular, they either raise their prices, pull the plug, or sell your data, fucking over the same users that made them popular in the process. Frankly, I'm tired of that.

Right now I'm a Dropbox paying customer and decided against GDrive for instance, even though GDrive is cheaper and even though I was and am already a heavy Google Apps user. I went with Dropbox because they value me enough to provide a Linux client and I simply don't trust Google's pricing anymore, considering their recent history.

So even though I do use free services right now, for new services and products if it ain't open-source, if it isn't backed by a non-profit and if it doesn't have a decent price tag, then I'm not touching it.


What are your thoughts on Amazon S3, since you're basically their customer by proxy?


All these issues can be fixed in a completely transparent way for the end user. And they should be.

That said, I think many of us have been missing the point with this site. The cryptography they provide allows for plausible deniability. No more, no less. They are perfectly able to view the contents of your uploads, and they never denied that. If you want truly private and secure files, then encrypt them yourself prior to uploading them.

The scope of this particular flaw (which, again, can be fixed transparently) is that you don't only need to trust Mega, you need to trust their CDNs as well. Neither of these levels of trust is enough if you truly are concerned about the privacy of your files.

It can only be considered a flaw in the first place because Mega claims to be the only party you need to trust and attempted to provide a means to ensure that. It just so happens that version 0 of this implementation does not work.

Then fail0verflow goes ahead and says that since they have made this mistake they don't understand cryptography and should not be trusted to store your stuff. I'm sorry, but that's bollocks. This system is big enough that absolutely anyone could make a stupid mistake or two. Cryptography is just one element. A red herring, IMO, being completely honest. If you are storing sensitive data, don't let them do all the work: encrypt it yourself. Or just don't upload it to the cloud at all.

Absolutely everybody makes stupid mistakes and a single stupid mistake does not prove anything. I'd ask them to come down from their high horse because breaking a system is a lot less work than creating one. Especially when working under tight deadlines.


Mega said that their system ensures privacy for your files.

It doesn't.

Their product has two selling points.

i) 50 GB

ii) encryption

Without encryption they're going to be under very heavy disruptive scrutiny. This seems to be a product breaking flaw.


They don't say exactly that.

They claim to be able to read your files.

The difference between their claims and the actual system after this flaw is this:

CDNs can also potentially read your files, if they are malicious and actively try (which would be a breach of their contract, I assume).

Doesn't change a thing for me, to be honest. I don't trust Mega and all their employees, or whoever breaks into their facilities, with my bank account logins and passwords in plain text. I do trust them to keep my Kindle-purchased books without me being liable for "broadcasting" them, as they are reasonably protecting them.

Obviously it's a flaw and they should fix it. But it's not a doom and gloom, "their whole system is a farce" kind of flaw.


> They don't say exactly that.

In big red capitals on their front page they say "THE PRIVACY COMPANY".

I couldn't find anything in the 46-point TOS that explicitly says they can read your files.

On their about page they say "Unlike most of our competitors, we use a state of the art browser based encryption technology where you, not us, control the keys." - that doesn't make it sound like they're telling you they can access your files.

On their "The Privacy Company" page they quote from the universal declaration of human rights,and they say "All files stored on MEGA are encrypted. All data transfers from and to MEGA are encrypted. And while most cloud storage providers can and do claim the same, MEGA is different – unlike the industry norm where the cloud storage provider holds the decryption key, with MEGA, you control the encryption, you hold the keys, and you decide who you grant or deny access to your files, without requiring any risky software installs. It’s all happening in your web browser!" - that doesn't sound like they're telling you that they can read your files.

On their help centre page they say "Because the server can't see filenames, filename collisions have to be dealt with on the client side. With standard settings, if you upload a file that you already seem to have in your account (same target path/filename, same size, same last modification time), it is skipped, but nothing prevents you from keeping multiple files or folders with the same name in the same folder." - which doesn't sound like they're saying they can read your files. They also say "Because our end-to-end encryption model inherently precludes any server-side manipulation of your data, which would be required to implement such a feature." - which also doesn't sound like they're saying they can read your data.

But now we find out that not only can they read your files but a CDN could read your files. Since we know that companies must comply with correctly formed legal requests anything that extends the number of companies that can read the files increases the number of companies that can be leaned on by well funded groups.

> Obviously it's a flaw and they should fix it. But it's not a doom and gloom, "their whole system is a farce" kind of flaw.

Their whole system is cloud storage with encryption - without encryption it's just cloud storage and there's a bunch of other people offering that. Those other providers have the advantage of not having been previously raided by paramilitary police from an aggressive government.

Crypto is hard. It's easy to go wrong. I have no idea what kind of research they did before launching, but to have broken crypto a few days after launch is not a good sign.


That's a slogan and it can be interpreted in many ways.

If you buy into slogans as tech feature listings.

They store with encryption. That remains true. The fact that CDNs can potentially break into it doesn't mean that they store without encryption. Let's not exaggerate the scope of this flaw. A flaw that in any case is going to be fixed soon.

Crypto is hard, true. But this system is fixable on the server, so it's not as critical as a hardware system that may need to be recalled.

Don't worry, and remember that in any case THEY WILL STILL BE ABLE TO READ YOUR FILES.

They are possibly not emphasizing this enough. If you have truly sensitive data don't put it there as-is.


Exactly. It's much easier to update the javascript code that does the checks to use sha256 hashes than it is to update the bootloader/bootrom of the Nintendo Wii.


Simple question: the flaws being pointed out are not fundamental flaws, but rather a poor choice of means to accomplish the goal; therefore, I ask, is there anything in the idea itself that is flawed?

TBMS, if the scheme were as follows: the user downloads index.html over 4096-bit SSL (assuming that you trust the CA). index.html contains SHA2 hashes of all the remaining resources to be downloaded, compares those hashes against the hashes of the resources actually downloaded from completely untrusted servers, and thus decides whether to continue. Is there anything fundamentally flawed with this system, or is "javascript cryptography = bad" just an overgeneralization?
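A minimal sketch of that scheme (the URL and digest here are hypothetical; in practice the known-good digests would ship inside the SSL-delivered index.html):

    import hashlib
    import urllib.request

    # Digests pinned by the trusted page; everything fetched below may
    # come from a completely untrusted CDN.
    KNOWN_GOOD = {
        "https://cdn.example.com/crypto.js":
            "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def fetch_verified(url):
        data = urllib.request.urlopen(url).read()
        if hashlib.sha256(data).hexdigest() != KNOWN_GOOD[url]:
            raise ValueError("resource tampered with; refusing to run " + url)
        return data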


Yes, the flaws pointed out are easily fixed. Mega will probably have a fixed version uploaded later today as the fix should be about 15 minutes of work. The good part about javascript cryptography is also the bad part - if any changes need to be made, they can be instantly pushed out to all clients and the clients are upgraded at the next page view.

Of course, this leaves open the main problem people have with javascript cryptography - at any point, Mega can change their javascript to no longer be secure. This means you can't actually trust that your data is secure from Mega - but then, I very much doubt anybody trusted this to start with?

It's the same thing as when I sign into my bank using my browser. My bank could at any time change their website to read in my account number and password, and then email this password to China, along with all my bank statements. I have to trust that my bank will not do this every time I use them. I also have to trust that the policeman I walked past earlier today would not shoot me with his handgun. You have to trust at some point.


But the entire point of client-side encryption is to avoid having to trust a server. If you have to trust the server, then there's no reason to do the client-side encryption at all. It's just overhead at that point.


Agree completely if the goal was to provide encrypted storage. However, as noted in other comments, this isn't actually the goal. The goal is to create a plausible legal defense against legal threats from the RIAA and similar.

If the encryption is done on the server side, Mega can legally be required to make a log of all copyrighted works uploaded to their servers and be liable for distribution. If the file is encrypted before it is uploaded, then Mega cannot physically store a log of copyrighted works. This is pretty major as take-down requests can then only be sent for a single upload. The most that could be done is for a court to publicly order Mega to cease and desist, at which point everyone can move on with no legal liability.

This is a piracy platform, not a genuine secure storage. It's fairly clever.

Sidenote regarding comment below:

I live in Africa, I'm not sure who I trust less out of my bank, the police, or Mega. Probably not Mega!


No, client-side encryption makes sense even if you have to trust the server. When Mega's server gets compromised, the attackers cannot get a user's private keys until he visits Mega's site. Without client-side encryption, attackers could get all the user's keys and thus all their data immediately.


> If you have to trust the server, then there's no reason to do the client-side encryption at all. It's just overhead at that point.

Think of it as bootstrapping. If they released a service you couldn't use from a browser, probably nobody would use it. But if they release one, even if it isn't as secure as it ought to be, and get millions of users, then someone will write a browser plug in that makes it so you don't have to trust the javascript anymore, and anyone who needs better security will use that. At that point the javascript is irrelevant because no one who needs better security is (or should be) relying on it, but first you have to get to that point. What the javascript becomes is a gateway into the service for people who don't need security and just want to take advantage of 50GB free storage, or who interact with people who have a different threat model.

For example, here's a use case: You're a whistleblower. You want to distribute something to the world and you can't allow anyone to know who you are. So you don't use the javascript, you use an open source native client, and you make your upload using TOR or pick your favorite anonymizer. Then Mega has an encrypted copy of what you want to publish, they have no idea what it is. Now all you have to do is post the link and password to a public forum (again using an anonymizer, and possibly from a different country from where you made the upload) so that by the time any third party can even know what it is that you've uploaded, it's available to the world and you're in a safe place. Meanwhile none of the downloaders who don't need protection from anyone has to install any special software because they can use the javascript.


I totally agree with half of your comment, Mega will fix all these glitches and the service will be better (eventually).

I'm not sure about the trust part. Both your bank and the policeman are committed to following strict regulations, and have something to lose if they fail to do so.

There's a trust factor too, but I don't think Mega has the reinforcement of regulation or the consequences if they don't deliver what they promise.


It seems we've established (very thoroughly) that Mega should not be trusted to keep our files secure.

But that's not the point of their crypto; it's just an effort to ensure plausible deniability that happens to be a marketable feature.

If your objective is to stash sensitive files, there are plenty of established options.

It's obvious that Mega was not intended to serve such a purpose, so the criticism strikes me as pointless except for its instructional/entertainment value.


> If your objective is to stash sensitive files, there are plenty of established options.

In case anyone is looking for an option, I recommend Tarsnap. http://www.tarsnap.com

Tarsnap is simply the best. Spideroak is probably okay. But if you have files that absolutely must remain confidential (e.g. NDA'd source code, etc) then Tarsnap is the service that can be trusted to store them correctly.


I was considering using them. Somehow I trust them more because they haven't got a 3MB front page covered in JS animations.


So it's fatally broken. But everyone's ignoring Mega's upside: you get to send your sensitive data and money to a charismatic, egomaniacal scam artist!


Or you encrypt the data yourself first (so that even the browser doesn't see your sensitive data), that way it's a free and probably unreliable (in the long term) 50GB storage facility.

I might just use it as an extra backup repository.


I doubt anyone is seriously thinking of putting their sensitive data into Mega. More than likely, it is going to be a storage medium for music, movies and/or pictures.


What can't "music, movies and/or pictures" be sensitive? People used Megaupload to, for example, exchange video and music for works which hadn't yet been released, and for personal images which would never be released.


I liked this one too. They're emailing hashes of the user's password, full name, and encryption key as the "verify address" URL.

Oh and once set, you can't change any password or delete your account.

http://arstechnica.com/security/2013/01/cracking-tool-milks-...


So, who are these people who don't completely fail at crypto and how do you hire them?


Interesting question, but part of the problem is like this:

Crypto Guy 1: That system would take about three weeks to implement.

Crypto Guy 2: That system would take about three months to implement.

Who do you hire? You have a number of scenarios. Guy 1 could have a set of tools that he trusts, and the experience to know exactly what's needed, while Guy 2 doesn't have a clue and needs a month to get up to speed with crypto code. But the reverse scenario is just as plausible: you can't be sure whether Guy 1 is just a fast coder who hacks together some algorithms from various sources, while Guy 2 knows exactly what is needed from his crypto toolbelt and gives you a bigger estimate from experience.


What do you want to hire them for?

A lot of it comes down to recognizing the right tools for the job, and writing-from-scratch absolutely as few of those tools as necessary. Even very good practical crypto users can make mistakes... although in this case, Mega has used the plastic end of a screwdriver to pound nails.


I mean more in a theoretical sense.

For example, I use code that does crypto every day; probably everyone on the internet does.

Web frameworks, operating systems, browsers, etc. all use crypto, and presumably that stuff is written by some mythical person who can do this stuff properly, judging by the fact that my credit card hasn't been defrauded yet.


Pretty much every crypto library you use was badly broken, multiple times, during its long history. New attacks are still being found, and browsers and libraries are updated as we learn more.

For a small sample, read the wikipedia page for http://en.wikipedia.org/wiki/Secure_Sockets_Layer and take a look at how many times we all got that wrong.

The first rule of crypto is not to write crypto code. Reuse a high level library that already made all the mistakes you're about to. If you absolutely have to break the first rule, you need to budget between 5 and 6 digits for experts to review it against known attacks and best practices.


So, only use stuff written by Google/MS-sized companies or very mature open-source projects?


No, you don't have to go that far. The same libraries used by those companies are available to all projects, and most crypto uses them (or libraries built on top of them). Mistakes such as those made by Mega result from using a crypto library that makes the author do all the work.


So Mega uses SSL (RSA 2048) to deliver code that checks whether additional resources delivered over SSL (RSA 1024) are identical to the original ones. However, they have a flaw in the authentication that checks these resources, such that their security can now be broken by breaking the weaker SSL connection. But given the problems with certificates, there are scenarios where breaking SSL, irrespective of the encryption used, is not a problem (if you can install CAs on the target).

I am actually somewhat surprised that the known problems of Mega are so few. I would have expected that their crypto is more broken than this.


From what I understand, the risk is not so much that somebody breaks the 1024-bit SSL. It is that the Javascript code (presumably including security-critical stuff) is loaded from a third-party CDN network.

By nature, a CDN network is going to comprise many servers in different locations. Therefore, if somebody pwns one of those servers, anybody who downloads the JS code from that particular CDN server is at risk.


I was under the impression that even a 128 bit keyspace is prohibitively hard to crack using modern hardware, and that 256 bit would take more computational resources than we can reasonably expect to have available to us for the rest of time. Isn't 1024 bit a bit overkill?


RSA can be cracked by breaking the public key down into its prime factors to discover the private key. AES is much more expensive per "try" because the algorithm itself includes many more steps to do a decryption, even with the correct key.

Not the most detailed answer I grant you, but I don't understand all of the math involved.

When you see "256 bit" used in the context of SSL, this is referring to the block cypher used in the actual transmission rather than the RSA key size.

So an SSL implementation will probably use a 2048-bit RSA key and a 256-bit AES key. RSA is used to swap the shared secret, which is then used as the key for the main AES transfer.
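Roughly, the hybrid pattern looks like this; a sketch using the Python `cryptography` package (real TLS adds handshakes, certificates, and key derivation on top of this):

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # Client: pick a random 256-bit session key and send it under RSA.
    session_key = AESGCM.generate_key(bit_length=256)
    wrapped = server_key.public_key().encrypt(session_key, oaep)

    # Server: unwrap once, then all bulk data flows over fast symmetric AES.
    unwrapped = server_key.decrypt(wrapped, oaep)
    nonce = os.urandom(12)
    ciphertext = AESGCM(unwrapped).encrypt(nonce, b"bulk application data", None)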


Thanks, I forgot asymmetric keys were easily computed. I'm just thinking random number space brute force against something like SHA2 SSL.


For AES you have to search the entire keyspace, but for RSA (and AFAIK any public-key scheme) you can actually calculate the private key. The security of public-key cryptography therefore relies on one-way functions, which are easy to compute in one direction but very hard to compute in the other. (This is used to generate the public and private keys together, while the inverse operation, breaking the encryption, should be prohibitively hard in practice.) Because of this you cannot really compare key lengths between symmetric and asymmetric encryption schemes.
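A toy illustration of that asymmetry (textbook-sized numbers; real keys use primes hundreds of digits long): the RSA private key can be computed from the public key by factoring n, whereas a symmetric key can only be searched for.

    n, e = 3233, 17                      # toy RSA public key: n = 53 * 61

    # "Breaking" the public key = factoring n (trivial here, believed
    # infeasible at 2048 bits).
    p = next(f for f in range(2, n) if n % f == 0)
    q = n // p
    d = pow(e, -1, (p - 1) * (q - 1))    # recover the private exponent

    m = 42                               # round trip: decrypt(encrypt(m)) == m
    assert pow(pow(m, e, n), d, n) == m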


You're confusing symmetric key sizes with asymmetric key sizes. 1024 bits is a huge symmetric key, but a rather small asymmetric key.


The 128-bit figure applies to symmetric-key encryption (e.g. AES, Blowfish, etc.). For asymmetric-key encryption (e.g. RSA), however, even 2048-bit keys are considered relatively insecure.


So, forgive me for asking a dumb question. What would be a better approach here? I'm assuming that SHA1 doesn't cut it.


SJCL, which Mega appears to be using, includes SHA2. They could have used SHA2 if they needed a secure hash function. SJCL also includes HMAC; they could have used HMAC-SHA2 if they really needed a MAC. CBC-MAC was a bizarre choice.

(I'm trying to address the general questions, like "why would I use or not use CBC-MAC", without getting into the general craziness of Mega's design or the idea of doing crypto in browser Javascript.)


Is browser-based crypto ever feasible?

I'm surprised you're entertaining the idea. (Edit: Sorry. I misread. I was only hoping to learn something new.)

It seems like at a minimum users would need to disable all browser extensions before uploading their content. That doesn't seem realistic, so browser-based crypto seems dubious.


I'm not entertaining the idea! I'm not interested in picking apart Mega, but I am interested in how people choose crypto primitives and what libraries they have available to them.

Don't do crypto in browser Javascript!


What's wrong with the core idea of "more secure server validates less secure server's JS before running it"?


Is the problem with the JS language itself, with certain JS runtimes or with just the idea of running a program inside a web browser to do crypto?


The problem is multifaceted; first, JS crypto is slow in most current implementations (but will get better with Typed Arrays). Second and more importantly, it is difficult to ensure total security on a page. While loading all assets from a single server using SSL helps quite a bit, there's really nothing stopping a malicious extension from getting in the way and stealing your passwords.

Of course that's true of any website in general so the lesson is: be careful what extensions you install.

Third, browser support is still garbage. Safari has major issues, Firefox is lacking support in critical places, and IE, don't bother.

However the future is bright. With Typed Arrays & Web Workers running on all threads, you can expect 10+MB/s crypto in-browser very soon. That's slow by any other standard but potentially good enough for a browser. Hell, I'm pushing about 3.5MB/s on a MBA on securesha.re, which does 128-bit AES.

Just for disclosure, I wrote securesha.re (and succumbed to a few of these problems while writing it).


> However the future is bright

... for attackers with some knowledge of side-channel attacks, which are almost impossible to block from Javascript.


Is the problem with the JS language itself?

I don't know. I do know that other dynamic languages are trusted to perform crypto (e.g. Python). The fact that any dynamic language is trusted leads me to believe JS as a language may be fine.

or with certain JS runtimes?

Yes, each runtime must be analyzed by a crypto expert to uncover e.g. timing attack vulnerabilities, etc.

or with just the idea of running a program inside a web browser to do crypto?

Yes, implementing crypto within a webpage is a bad idea for many reasons. The first is that you must ultimately trust the browser itself, as well as its updater program. Second, you must trust every browser extension, since any extension can arbitrarily modify the page, and hence the javascript. (e.g. Reddit Enhancement Suite does this.) Since every extension can be updated at any time, you can't really trust any extension. Third, the javascript can be compromised in many other ways (e.g. XSS attack, etc). Fourth, crypto itself is extremely difficult to implement correctly, and it's easy to trick yourself into adding flaws while implementing it.

For these and other reasons, browser-based crypto can't ever be trusted. Not yet, and not until the above concerns are no longer relevant.


When Python performs crypto operations, it isn't doing that while running alongside potentially hostile content-controlled code.

The first thing everyone thinks when they read about browser JS being a hostile environment for crypto is, "LANGUAGE WAR!" But this issue has nothing at all to do with languages. If Python were the de-facto standard content-controlled browser programming language, we'd be saying "don't do crypto in browser Python".


The question was "Can the design of any programming language ever affect the security of crypto code implemented correctly within it?"

This question is extremely broad, and it requires envisioning future languages which haven't been invented yet, and looking for ways the design of the language itself might impact the security of crypto code implemented within it.

That said, the question isn't the most important one. It was his third question which you and I both responded to.


More fundamental than the reasons others have already stated is that the JavaScript is loaded from a server anyway. So you trust the server anyway, and hence may as well just do the encryption on the server.


Yes. This is also why assurances that XSS bugs have been fixed aren't comforting. The problem isn't simply that attackers will find ways to corrupt the JS crypto code; the problem is that it is extremely difficult for end-users to verify that the crypto code they're running isn't bugged.


Wouldn't that be true of using any service handling sensitive information e.g. online banking?


Your bank's website is most likely not encrypting your data with javascript.


This is why internet banking sites use 2 factor auth.


Which ones? I've been signed up with Wells Fargo & Capital One for a while now and they do no such thing.


InteractiveBroker.


Etrade


What would be a better approach?

To not encrypt files via a browser using javascript.

One reason this particular flaw happened was due to the need to secure all content loaded insecurely by index.html, for example.


Encrypting files in the browser with javascript is a perfectly valid approach to securely storing content. It's just hard to get right.


I think I just heard tptacek flip his table over from here


Basically, they use CBC-MAC to verify the integrity of their script sources. This is pretty bad because a MAC is not resistant to collisions, and given the original key, it's trivial to generate alternate data that gives the same output. A better approach would be an HMAC, using something like SHA1. Doing so makes it much more difficult to perform this type of attack.
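To make that concrete, a sketch of the forgery with the pycryptodome package (the key and scripts here are made up): since the verifier has to know the CBC-MAC key, an attacker effectively knows it too, and can append a single "fix-up" block to any malicious prefix to hit a chosen tag.

    from Crypto.Cipher import AES  # pip install pycryptodome

    KEY = b"sixteen byte key"      # public in this setting: the verifier needs it

    def cbc_mac(msg):
        # CBC-MAC: CBC-encrypt with a zero IV, keep only the last block.
        return AES.new(KEY, AES.MODE_CBC, iv=b"\x00" * 16).encrypt(msg)[-16:]

    legit = b"// the legitimate script".ljust(48, b" ")
    target_tag = cbc_mac(legit)

    # Forge: any one-block prefix, plus a block that steers the CBC chain
    # so the final output equals the target tag.
    evil_prefix = b"// evil script.."               # exactly 16 bytes
    chain = cbc_mac(evil_prefix)                    # CBC state after the prefix
    ecb = AES.new(KEY, AES.MODE_ECB)
    fixup = bytes(a ^ b for a, b in zip(ecb.decrypt(target_tag), chain))

    assert cbc_mac(evil_prefix + fixup) == target_tag   # same "hash"!

The fix-up block is garbage bytes, but it can be hidden inside a string literal or comment so the forged file still parses and runs, which is essentially what the proof of concept does.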


Wait, what? An HMAC is a MAC -- just one that happens to be constructed from a hash function in a particular way.


This is just nitpicking. I did not say HMACs were not MACs anywhere in my comment.


In a particular way that happens to solve the problem at hand.


I was impressed by the analysis and blown away by the proof of concept. Awesome article. Mega, you listening?


I remain convinced that the best security of all is to encrypt your files yourself, on your own computer, using trusted encryption software (I use bcrypt), with a strong password, before uploading the files to any cloud based storage service, even the ones with excellent trust ratings.
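For anyone wanting the pattern rather than a specific tool, a minimal sketch with the Python `cryptography` package (not the bcrypt utility mentioned above; file names are illustrative):

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # keep this local; never upload it
    with open("secrets.tar", "rb") as f:
        ciphertext = Fernet(key).encrypt(f.read())
    with open("secrets.tar.enc", "wb") as f:
        f.write(ciphertext)              # only this ciphertext goes to the cloud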


For anyone who is confused about him using "bcrypt": besides the common key derivation function, it is also the name of a cross-platform encryption program - http://stackoverflow.com/questions/9035855/bcrypt-is-hasing-...


Isn't bcrypt a hashing algorithm based on an encryption algorithm? Meaning you can't use it for encrypting your files.

Edit: found this, I don't know why they picked that name: http://bcrypt.sourceforge.net/


I believe the "b" in bcrypt stands for Blowfish, which is the underlying encryption algorithm used in this open-source, cross-platform file encryption utility to which I was referring. I hadn't known about the bcrypt used for password hashing.

My original point wasn't to recommend a particular encryption software. My intended point was that you should take responsibility for protecting your files yourself, and rely as little as possible on trusting others, like Kim Dotcom.


I always encrypt any data sent to 'arbitrary hosting' using GnuPG and the recipients' public keys. It has been working well so far. I'm also eagerly waiting for the ECC version to go mainstream. My current signing key is a 1024-bit DSA key, which is known to be weak. File lockers are a nice service; you just need to secure any private data.


Just for the interest of all those worried about RSA 1024-bit keys being unsafe, from my system's (OS X) certificates:

[Thawte Premium Server CA: 1024 bits, valid until 2021]... [Thawte Server CA: 1024 bits, valid until 2021]... [http://www.valicert.com/: 1024 bits, valid until 2019]

And some others I am not including...

CALM down about 1024 bits, please, as of 2013. (Notice that those come included with the OS.)

Edit: this is not an argument from authority (I am perfectly calm about 1024 bits without OS X's assurance). I am only asking those people complaining about 1024-bit keys to remove those certificates from their systems...


This appears to be the first actual vulnerability in Mega.

Of course you still need to be able to MITM a 1024-bit SSL connection to make use of it.


If we take the position that CDNs are not to be trusted, how many other high-profile sites would be rendered vulnerable? I assume quite a few nowadays use CDNs to distribute part of their scripts.

Wouldn't it be quite easy, for example, to steal users' Dropbox passwords if one could modify some of the scripts they use on the front page, which are delivered via CloudFront?


To address the comments about client-side JavaScript encryption being insecure, client-side JavaScript isn't inherently insecure, or rather security is entirely dependent on the threat vectors you are trying to protect against. Some examples of levels of security:

SSL/TLS: Without transport level security (SSL/TLS), and no other encryption, it is absolutely trivial to intercept data. Unsecured WiFi, malicious network operators, hacked middlemen, spoofed DNS; there are many threat vectors. Adding SSL/TLS significantly increases security.

Server-side encryption: Adding server-side encryption on top of SSL/TLS marginally increases security. It actually doesn't protect from much. Disposal of hard drives is a common one people cite, in case HDDs aren't wiped when sent to the junkyard or for RMA. It could also protect against some accidental data leaks if files become readable through unintended means, or if the server is actually hosting the files on a third-party service (e.g., CDN, S3). It doesn't protect from employees of the server host, and it does little to protect from hackers who get full server access, since the decryption key is likely stored in a file on the server.

Client-side JavaScript: JavaScript encryption actually adds a reasonable amount of security. Yes, you have to trust the browser makers, the browser update mechanism, and the company hosting the service (they could always change the JavaScript to steal your password). But it does make it less trivial for a malicious employee to access your data, since they would have to write and deploy new JavaScript. It also makes it less trivial for a hacker who gains server access, since they would have to modify the JavaScript too; modifying the JS is feasible for a targeted attack. It also protects against offline attacks, where a hacker copies data to their own systems to analyze before they get detected.

I think you have to trust the browser makers and their update mechanisms. Browsers always have access to everything: they could be capturing your banking passwords, inserting dummy SSL CAs, anything. You also have to trust the makers of client software, whether it's JavaScript in the browser or C on the command line; very few people are going to check the source code, and most people will accept whatever client updates are provided.

Where client-side JavaScript falls apart is that it is vulnerable to some of the most common security exploits. Namely random browser vulnerabilities, XSS attacks, and malicious or vulnerable browser extensions and plugins. People are far too willing to install or unwittingly install (thanks InstallMonetizer) browser extensions and plug-ins which have too much access to your browser data. But your bank account is also vulnerable to the same issues. So while you shouldn't use client-side JavaScript to protect Top Secret government information, it does provide a decent amount of security, and is probably good enough for most cloud storage solutions.


What if you host the JavaScript separately from the dynamic content (say, on S3)? This would give users some protection against compromised servers. It is akin to having an app supplied by a trusted source (iTunes) and then accessing an API through client-side encryption.


[deleted]


> Either your system provides security, or it provides no security. It's not an "amount" of security. Once you're compromised, you've lost.

It's not that simple. Yes, once you are compromised, that's the end of it. But security is about preventing the compromise in the first place. There are many ways to get compromised, and different ways to mitigate the attacks.

If security was that black and white (security or no security) we would all have to use air-gap networks. There would be no internet, no REST APIs, no trusted SSL CAs, we wouldn't be able to use anything. For convenience we accept a certain amount of security risk. The risk we accept, and the security measures we take, are entirely dependent on the sensitivity of the data and the threats we expect to deal with.

If the NSA and CIA are after you, nothing is secure: they can physically break in and steal your systems, and they have world-class cryptographers and a stockpile of zero-day attacks to build trojans with.

But most of us don't have to worry about the NSA coming after us. We don't even have to worry about highly targeted attacks from criminal gangs. Average people have to worry about semi-automated attacks that scam money out of them, steal their identity, or grab naked pictures. It's likely far easier to break into your corporate network than it is to hack a cloud storage provider and replace their JavaScript code.

I recommend you read some of Bruce Schneier's essays and posts, here is good start: http://www.schneier.com/essay-319.html


I don't think the attack as described would work in practice, because the modified JavaScript file would no longer be valid JavaScript and would just break the site.

I think you'd have to create a file that both passed the hash check and was valid JavaScript, which seems somewhat more difficult.
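
For the curious, here is roughly how an attacker could satisfy both constraints at once: hide the single 16-byte fix-up block behind a trailing line comment, since every other byte of the file is under the attacker's control. A rough Python sketch of the idea (pycryptodome again; this assumes a plain single-pass CBC-MAC over block-aligned data, so treat it as an illustration rather than a drop-in exploit for Mega's chunked variant):

    from Crypto.Cipher import AES  # pycryptodome

    BLOCK = 16

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def cbc_state(key, data):
        # CBC-MAC chaining value after processing block-aligned data (zero IV).
        ecb = AES.new(key, AES.MODE_ECB)
        state = bytes(BLOCK)
        for i in range(0, len(data), BLOCK):
            state = ecb.encrypt(xor(state, data[i:i + BLOCK]))
        return state

    def build_forged_js(key, target_tag, evil_js):
        # Malicious code first, then "//" so the rest of the line is a
        # comment, padded so the fix-up block is the file's last 16 bytes.
        prefix = evil_js + b"//"
        prefix += b" " * (-len(prefix) % BLOCK)
        # Choose the final block so the file CBC-MACs to target_tag:
        #   target_tag = E(state ^ last)  =>  last = D(target_tag) ^ state
        last = xor(AES.new(key, AES.MODE_ECB).decrypt(target_tag),
                   cbc_state(key, prefix))
        if b"\n" in last or b"\r" in last:
            # A newline would end the comment early; tweak and retry.
            raise RuntimeError("retry with slightly different evil_js")
        return prefix + last

A newline byte in the computed block would break the trick, but each attempt succeeds with probability roughly (254/256)^16 ≈ 88%, so an attacker just retries with different filler.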


The real "megafail" here is that browser zoom doesn't actually make the blogs text any bigger.


For CDN scenarios, it would be nice if I could include a secure checksum (SHA-256) for the script in the script tag, and the browser would then verify it before executing the script.
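
A sketch of the check such a mechanism would perform, written in Python for illustration (the URL is a placeholder, and the pinned digest here is just the SHA-256 of the empty string):

    import hashlib
    import urllib.request

    # Digest the site author would publish in the script tag.
    PINNED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

    def fetch_verified(url):
        data = urllib.request.urlopen(url).read()
        if hashlib.sha256(data).hexdigest() != PINNED_SHA256:
            raise ValueError("script failed integrity check; refusing to run it")
        return data

    script = fetch_verified("https://cdn.example.com/app.js")

Unlike Mega's CBC-MAC check, a pinned SHA-256 digest like this can't be forged without finding a second preimage of the hash.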


Interesting article.

I have one request for you (the owner of fail0verflow.com): please enable me to increase the font size on your site. Reading that font size on a 24" monitor is painful.


It's using 15px with a 22px line height, which looks pretty typical and about as readable as any other blog. Even the code blocks are 14px, noticeably larger than the 12px in HN comments.


Which is why I resize the HN comments page. I'd really like to be able to do the same on that site (resizing just enlarges the background images, not the text).


Now here's where we will see Kim Dotcom's true colours:

Will he fix the problem and thank marcan?

Or arrest him at gunpoint, seize his property, and charge him with as many felony counts as possible?


Kim Dotcom is a judge, prosecutor and LEO?


I think there's a joke in there.


So, a bit off-topic, but other than the hype and the encryption discussions, is anyone actually using the service?


Personally I can't wait to use this service. How can they afford the storage though? Is it sustainable?


I really hope one of my banks implements such a poorly implemented check some day!


Megafail: The File System Has Failed

Their new album needs work.


Glad you guys are back!


KDC, what are you waiting for? Hire this guy now!


Why would this guy want to work for KDC?


His money is still green.


And KDC has shown he doesn't spend much on security. Definitely not enough to entice someone who is outright depressed over his current state of security.


Maybe I'm just biased after having wasted my 20s in a cube, but hanging out with KDC & his crew of henchmen doesn't sound that bad of a gig based purely on the novelty of the idea. Like it or not (and based on your posts, I know which one it is), KDC has shown some insight into how to make cash on the ol' internets... I suppose you have an argument with his ethics? Try working on a bond trading desk for 10 years. You'll see KDC as Christ-like pretty soon thereafter.


Reasonable motivations:
- Money
- Protecting the users of Mega


It never ceases to amaze me how Schmitz has been doing the EXACT same thing for at least two decades now. He did exactly this on BBSes: had other people upload to him, offered it up for download, then ratted everyone he knew and worked with out to the police. Rinse and repeat with Megaupload; let's see what kind of deal he goes for in the end. And now there is "Mega"; let's see where this will go.


Simple question: if JavaScript in the browser cannot be used to encrypt data -- as seems to be the consensus here in this thread -- doesn't that severely limit the usefulness of client-side JavaScript? I mean, what can client-side JavaScript then be used for?

Should (could?) SSL have been used to send the data "in the clear but encrypted" (including the password to encrypt/decrypt the data) to Mega's servers, with the data then encrypted server-side?


This would allow Mega to implement a copyright protection scheme, since their servers would get to handle the data in clear text and could perform matching. If the encryption happens client-side, they can't.


So all they have to do is switch to a proper hash function; probably even MD5 would do.

Other than that, their system is pretty clever: they are basically shifting the server load of strong HTTPS for everything onto the client.


Kim Dotcom - piece of shit who makes money on the back of the scene, gets raided for (knowingly) doing so.

Somehow the combination of Kim being a piece of shit (whom no one would want to work for) and the very real possibility of the US Government going after anyone having anything to do with MegaUpload V2 resulted in the new product not being very secure/good. What a surprise.


Google makes money from copyright infringing youtube videos and music.

Does this make anyone who works for Google a "piece of shit"?


YouTube has a legitimate use. Megaupload was designed to host pirated content, and for Kim Dotcom to make money off it. I, along with the rest of the scene, am insulted by this abuse of the content we released.


>Megaupload was designed to be used to host pirated content...

You got any evidence?

>I, along with the rest of the scene, am/are insulted by this abuse...

Who is the "rest of the scene"?


Read the US Government's case. It's pretty clearly laid out. Megaupload employees were even sharing links to user-uploaded pirated content hosted on their servers.

Those who release pirated content in a particular way. All non-p2p warez groups.


There is no judgment in this case yet. As far as I know, people are still considered innocent until proven guilty. And that is without going into all the shit that went on during the Megaupload story.


That's fine and all, but the evidence is pretty clear....


That's for a proper jury or court to judge. At least in democratic legal systems.


Are you insulted by Google's funneling of revenue through Bermuda, so they pay only a tiny fraction of the taxes they otherwise would? I'm more insulted by that than what Dotcom does.

Google is “flying a banner of doing no evil, and then they’re perpetrating evil under our noses,” said Abraham J. Briloff, a professor emeritus of accounting at Baruch College in New York who has examined Google’s tax disclosures.


I am more insulted by Android.


Acting like the entire warez scene exists for the greater good and isn't profit-driven is ridiculous. Yeah, there are honest people in it just for the fun of cracking and such, but there are also plenty of siteops selling leech and taking "donations" of hardware in return for leech credits. How do you think the major private torrent sites get their releases so fast nowadays? Releases used to spread onto public sites much more slowly.

At least Dotcom's service wasn't used /solely/ for piracy... you can attack his intentions all you want but the service itself was not illegal.


The leech credits crap you are talking about died out when torrent sites began getting new pres under 2 minutes after pre.


Irony meter explosion aside, I still pretty regularly run into entirely legitimate broken links to Megaupload from people who were just using it as a host for their own stuff.


> insulted by this abuse of the content we released

You see the irony, right?


I don't think you can blame the US government for Dotcom botching this implementation, though.


Dotcom was not able to hire quality programmers because few (if any) would work for an illegal enterprise.



I don't see any relation between your link and my comment.


Maybe the fact that SOME programmers even work for the Zetas. Unless, that is, you don't consider "copyright piracy" worse than the drug trafficking the Zetas engage in...


I find this doubtful. At worst, you can find persons to "consult" to do the job properly, and they can move on to other projects post-launch.

Perhaps his negative reputation preceded him, but that was earned long before the US government's involvement with Megaupload. Heck, I hadn't seen anyone express sympathy for his operations ~until~ the US crackdown.


His niche isn't worse than business-as-usual here in the US.


I don't see any normal (Dropbox, etc) cloud storage companies being designed to make hosting pirated content easy and lucrative for them. Believe it or not, most people want to make an honest living.


Dotcom is seen as a criminal only because he isn't rich enough to make the rules, unlike the Fortune 100. Two wrongs don't make a right, but neither should a double standard be applied to him.


I don't think many around these parts idolise the Fortune 100 either.


Ugh, text-align:justify.


Really, a downvote for this? Just pointing out the obvious -- that you probably shouldn't use justify for blogs.



