
Why not just use Partial or Shallow Clones? [0]

[0] https://github.blog/2020-12-21-get-up-to-speed-with-partial-...


git clone --depth=1 is fast because it doesn't grab the full commit history, but OP is fetching a single specific file directly.

It is neat; I'm not aware of a git incantation that fine-grained. I hope it gets built into Git directly.


Ah, thank you for the clarification -- as far as I know that is indeed unique.

It's a thimbleful-sized shallow clone, one could say.


You can do a shallow clone and partial checkout.


Cool! Why you gotta tease me like this without busting out a one liner? :)

Gemini disagrees with you btw:

Unfortunately, directly combining git shallow clone and git partial checkout to grab just one specific file isn't straightforward. Here's why:

Shallow clone: This limits the downloaded commit history to a specific depth, but it still fetches all files involved in those commits. While reducing data, it wouldn't restrict files solely based on your needs.

Partial checkout: This lets you specify which files to include in your working directory, but it requires a full clone initially.

However, you have a few alternative approaches to achieve your goal:

1. Shallow clone + Sparse checkout:

Use git clone --depth=<commit_depth> --single-branch=<branch> to shallow clone the specific branch and limit history.

Create a .git/info/sparse-checkout file containing only the path to the desired file.

Run git read-tree -u to update the index based on the sparse-checkout file.

This method downloads a limited history and only keeps the specified file in your working directory.

2. Partial clone with server support (limited availability):

Check if the server supports partial clones (currently implemented on Github and some Gitlab self-hosted instances).

Use git clone --filter=blob:none <url> for a "blobless" clone that only contains file content, no history or directory structure.

Add the desired file path to the .git/info/sparse-checkout file and run git read-tree -u as before.

This approach minimizes downloaded data but requires server support and won't work everywhere.
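
For what it's worth, the quoted steps roughly map onto the newer git sparse-checkout interface. Here's an untested sketch of the combination; the URL, branch name (main), and file path are placeholders, and it assumes a reasonably recent Git (2.25+ for sparse-checkout, 2.35+ for --no-cone):

    # blobless + shallow clone, without populating the working tree yet
    git clone --filter=blob:none --depth=1 --no-checkout \
        https://github.com/example/repo.git repo
    cd repo

    # restrict the working tree to the single file we care about
    # (--no-cone takes gitignore-style patterns; needs Git 2.35+)
    git sparse-checkout set --no-cone docs/README.md

    # checking out the branch now only fetches the blobs the patterns match
    git checkout main

Whether that beats just hitting a raw file URL still depends on the server supporting --filter, as the quoted answer notes.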


Because then some process on the server has to fetch/checkout the clone each time there's a push?

Whereas reading directly from the repo gets around that.


But then some process on the server has to fetch multiple files from the repository each time there's a request.

It seems this solution avoids doing a bit of work when it is actually necessary (on website update) by instead doing a lot of work over and over.


FTA:

> This sounds like a lot of steps to serve a single file, but there's two key optimizations which can be made. The first is to cache the root tree's hash in memory, which skips two lookups right at the beginning. The root tree's hash will only change when the latest commit of the branch changes, so it's enough to cache it in memory and have a separate background process periodically re-check the latest commit.

> The second optimization is to cache tree objects in-memory using their hash as a key. The object identified by a hash never changes, so this cache is easy to manage, and by caching the tree objects in memory (perhaps with an LRU cache if memory usage is a concern) all round-trips to the remote server can be eliminated, save for the final round-trip for the file itself.

Also, the "background process periodically re-check the latest commit" seems like a bit of overkill if the repo is local; just caching and checking the mtime of `refs/heads/main` should be enough to decide whether the root tree needs re-reading.


It was Defcon 6, if I'm not mistaken, and someone actually didn't go because of it, which is how it became a meme.

Here's at least one source corroborating that[0]:

> "I think it's from around DC6 and is a reference to our only near brush with cancellation at the Monte Carlo for DC4," Def Con spokesperson Darington Forbes wrote me in an email. "I wish I had more to tell you—since it happened seventeen or so years ago my info is murky. Something about a casino mogul preferring we not use the Monte Carlo, threats of legal action."

> @ivydigital DEF CON - cancelled annually for over 20 years

> — Rich Trouton (@rtrouton) July 31, 2015

[0] https://www.vice.com/en/article/ezvez4/def-con-is-cancelled-...


> TOOOL still has some of the best workshops and tutorials on the conference floor and usually has some people who'll talk about breaking open Medecos or Fichets to anyone who'll listen.

While you're over there, look around for the Tamper Evident Village and we'll happily demonstrate and allow you to try removing Tamper Evident Seals of various kinds.


Also very cool stuff. I always see TEV bogged down with tons of people so after 4 cons I still haven't had a chance -- and while I have to miss this year, I'll hopefully swing by next year and check it out!


The bomb threat wasn't even related to Defcon, though.

I heard it from both Dark Tangent and several high-level Goons.


What was it related to? I thought someone reported a suspicious bag in the venue.


It was an actor known to the local police, so it could have happened at any conference, really.


Someone called in “a suspicious backpack with wires”, which is absolutely hilarious at DEF CON.


Hah...that could easily have been MY backpack.

When I'm at DEFCON, I bring a fun little device. It's an ESP8266 that constantly listens for WiFi probes coming from people's mobile devices. It then displays the SSID (the network name) on a scrolling LED text display. I keep it plugged into one of those Anker battery banks. 10,000 mAh will power it for ~16 hours, so it lasts the entire day.


Ah yes, predictive-text-based panics are what we need in our compilers...

The LLM hype has really jumped the shark.


The information provided by the warped part of the screen is its directionality, similar to Renegade Ops' Dynamic Splitting as discussed in the article.


This is news because of an exploit found against NginX, I believe.

That's why HAProxy did testing to see if they were vulnerable.


This is news because of a massive DDoS against AWS/Cloudflare/Google, and isn't related to a particular flaw in NginX.

https://cloud.google.com/blog/products/identity-security/how...


I didn't know his middle initial, so I thought the post was about the computer scientist, to be honest!


Just purchased it in hardcover based on your quick summary.

Edit: I'm not the only one -- copies are going quickly, it seems, and I've already found a new paperback going for $100.


Once an author is deceased, does it become okay-ish to download their content from z-lib?

gulp

To be fair, when I find an amazing book, I typically end up buying a physical copy even when I've already read it. And even if the author has passed.


You should look into something called "Sousveillance".


Also “The Truth of Fact, the Truth of Feeling” by Ted Chiang

It's less about the panopticon of surveillance and more about the ramifications of having perfect memory in your daily personal interactions.

