
They encrypt files on the client before transmission.

There was a Swiss cloud backup service that existed until a few years ago... I can't recall the name, but it started with a 'V'. They also encrypted files on the client side before transmission, but each file was encrypted with its own md5sum (or some such) as the key, so identical files from different systems, though encrypted, could still be de-duplicated across their whole system.

Interesting! I can picture how the clients could calculate a hash prior to encryption, and that would let the server know two files have the same contents once decrypted, but how would that let them save on disk space? They still can't see the contents of the file itself even if they know it's the same, so how could they deduplicate the storage? If they drop either one, they're just left with a single encrypted version using only one client's key, which they can't serve up to anyone else.

I assume they had a kind of pool for files, and a system linking files (or should I say "blobs") to each client's directory layout. Kind of like if I have a disk with different subdirectories, I could run a tool (which do exist) to find duplicates, delete all except one copy, and hardlink the rest to that one.
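The hardlink trick described above can be sketched in a few lines of Python (a toy sketch, assuming all files live on one filesystem; `root` is whatever directory you want to scan):

```python
import hashlib
import os

def dedup_hardlink(root: str) -> None:
    """Replace duplicate files under `root` with hardlinks to one kept copy."""
    seen: dict[str, str] = {}  # content hash -> path of first copy seen
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            if digest in seen:
                os.remove(path)               # drop the duplicate...
                os.link(seen[digest], path)   # ...and hardlink to the kept copy
            else:
                seen[digest] = path
```

After running it, duplicate paths all point at the same inode, so the content is stored once.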

As for the cloud storage system, the files were, as mentioned, stored in encrypted form, using a hash of the original file as the key (possibly md5, possibly something else; I can't recall at the moment). The cloud provider didn't know the key, but the client's application did. The same encrypted file could be provided to every client, and every client that had it could decrypt it, because the clients keep the encryption keys (the original hashes, one for every file).

The details of that I don't have anymore, there used to be a document describing the whole thing. I probably got rid of all of that after they stopped the service (which I used for several years, with no issues).
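The scheme described above, using a hash of the plaintext as the encryption key, is generally known as convergent encryption. A toy sketch of the idea (the XOR keystream here is purely illustrative, not a real cipher; an actual system would use something like AES):

```python
import hashlib

def _keystream(key: bytes, length: int) -> bytes:
    """Deterministic keystream: SHA-256 of key || counter (toy, not secure)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def convergent_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    """Key is the hash of the plaintext, so identical files yield
    identical ciphertexts and can be de-duplicated server-side."""
    key = hashlib.sha256(plaintext).digest()
    stream = _keystream(key, len(plaintext))
    ciphertext = bytes(p ^ s for p, s in zip(plaintext, stream))
    return key, ciphertext  # client keeps the key, server stores the ciphertext

def convergent_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    stream = _keystream(key, len(ciphertext))
    return bytes(c ^ s for c, s in zip(ciphertext, stream))
```

Because encryption is deterministic and keyed by content, two clients uploading the same file produce byte-identical ciphertexts, which is what lets the provider deduplicate without ever holding the keys.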


“shoulder surfing” is not the problem. It’s people making videos or live streaming who will risk accidentally exposing password length.


This makes me want to use visidata for my databases.


Funny enough, Saul and I recently hacked on getting visidata's Ibis integration updated, so you can use visidata for poking around databases of any size, really. You might like that; visidata also has non-Ibis support for SQLite, I believe.


Poshmark also is not dead.


This doesn’t ring true to me. Having processes which rely on communication between humans using natural language can of course be either structured or unstructured. Plenty of highly functioning companies existed well before structured data was even a thing.


"Talk to the vendor and see what they say" is an unstructured process relying on unstructured data.

"Ask the vendor this set of 10 compliance questions. We can only buy if they check every box." is a structured process based on structured data.

Both kinds of processes have always existed, long before modern technology. Though only the second kind can be reliably automated.
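The contrast can be sketched directly (names and questions hypothetical): the structured version encodes straight into data plus a mechanical gate, which is exactly what makes it automatable.

```python
# Hypothetical compliance checklist: each question maps to a yes/no answer.
COMPLIANCE_QUESTIONS = [
    "Data encrypted at rest?",
    "SOC 2 report available?",
    "Breach notification within 72h?",
]

def can_buy(vendor_answers: dict[str, bool]) -> bool:
    """Structured process: buy only if the vendor checks every box."""
    return all(vendor_answers.get(q, False) for q in COMPLIANCE_QUESTIONS)
```

"Talk to the vendor and see what they say" has no equivalent encoding, which is why only the checklist version can be reliably automated.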


Structured data doesn't have to be a database. It can be a checklist, a particular working layout, or even just a defined process. Many high-functioning companies spent a lot of time on those kinds of things, which became a competitive advantage.


Technology folks often mistake the structured data needed for their computing systems for structured data needed by the business process itself.


How about because spent nuclear fuel will be hazardous to humans for the next ~20 thousand years? How do you amortize that cost? You can't just assume someone else will deal with it and call that a cost savings. People talk about burying it, but in reality it sits in containment vessels above ground, and the more there is, the higher the cost to deal with it, so the less likely it ever will be dealt with.


Isn't that only applicable to uranium-235 based reactors? Thorium is converted to uranium-233, and when split, the byproducts have half-lives of tens of years, meaning the radioactivity drops to safe levels in "only" a few hundred years.

This is much more manageable.

Anyway, that is to say that nuclear is a spectrum, and I believe the current mainstream tech is the one that won because of its military applications (and therefore funding) back in the Cold War era.


Not from the same guy but here's a quine embedded in the github contribution chart:

https://github.com/mame?tab=overview&from=1970-12-01&to=1970...


This article was 160 pages long when printed in The New Yorker. Modern nuclear warheads are around 30x more powerful than the one dropped on Hiroshima. For comparison's sake, the equivalent resulting story would be 4,800 pages long.


This reminds me of the "drop fix" for the SPARCstation, where people would pick up the box and drop it to reseat the PROMs.


The Amiga had a similar issue. One of the chips (Fat Agnus, IIRC?) didn't quite fit in the socket correctly, and a common fix was to pull out the drive mechanisms and drop the chassis something like a foot onto a carpeted floor.

Somewhat related, one morning I was in the office early and an accounting person came in and asked me for help; her computer wouldn't turn on, and I was the only other one in the office. I went over, poked the power button, and nothing happened. This was a PC clone. She had a picture of her daughter on top of the computer, so I picked it up, gave the computer a good solid whack on the side, set the picture down, poked the power button, and it came to life.

We call this: Percussive Engineering


Apparently you also had to do this with the Apple ///.


One confusing thing to me was the word "server". An "MCP server" is a server to the LLM "client". But the MCP server itself is a client to the thing it's connecting the LLM to. So it's more like an adapter or proxy. Also I was confused because often this server runs on your local system (although it doesn't have to). In my mind I thought if they're calling it a server it must be run in the cloud somewhere but that's often not the case.


MCP is supposed to support both local and remote servers, but in practice most have opted to build local servers, and the tooling basically only supports that. Which is a shame and, in my opinion, a nonsensical choice that basically only has downsides (you need to maintain the local server, your customers need to install it, you have to remain backward-compatible with your local server, etc.).

This just continues to reinforce my feeling that everything around vibe coding and GenAI-first work is extremely shortsighted and poor quality.


Remote server implementations would naturally invite a number of jailbreak data exfiltration exploits, no?


Not more than what local servers do. You don't seem to understand what MCP is. Regardless of whether the MCP "server" is local or remote, it is JUST a wrapper around APIs. It's basically a translation layer to make your APIs adhere to the MCP spec, that's it.

Whether that wrapper's code runs on your laptop or a remote server changes nothing in terms of data exfiltration capabilities. If anything, it would make it more secure to have a remote server since at least you'd have full control over the code that's calling your API.
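The "just a wrapper around APIs" point can be sketched generically (the tool name, message shape, and Jira stub below are all hypothetical; this is not the real MCP SDK, just the shape of the translation layer):

```python
import json
from typing import Any, Callable

# Hypothetical underlying API the wrapper translates to. In a real server
# this would be an HTTP request to a Jira instance; stubbed here so the
# sketch is self-contained.
def fetch_jira_issue(issue_key: str) -> dict[str, Any]:
    return {"key": issue_key, "status": "In Progress"}

# The "server", whether local or remote, is just a registry of named tools
# plus a translation from a tool-call message into an API call.
TOOLS: dict[str, Callable[..., Any]] = {"get_issue": fetch_jira_issue}

def handle_tool_call(message: str) -> str:
    """Translate a tool-call message like {"tool": ..., "args": {...}}
    into the corresponding API call, and wrap the result."""
    request = json.loads(message)
    result = TOOLS[request["tool"]](**request["args"])
    return json.dumps({"result": result})
```

Nothing in `handle_tool_call` cares whether it runs on a laptop or behind a remote endpoint; the data it can reach is determined by the APIs it wraps, not by where the wrapper executes.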


Right, but at least in the case of a local instance, the risk profile is shifted onto the user's own computer. A less than ideal situation for sure, but on the other hand, a user should be able to do just about anything they want with hardware they own.


I'm talking about MCP servers that call 3rd party APIs, like your local MCP server calling the Jira instance of your company, the Google Maps API, etc.

Obviously local MCP servers make sense to interact with applications that you have installed locally, but that's by far not their only use.


Reminds me of the X11 server.

