Hacker News | est's comments

> bootstrap an Artifacts repo from an existing git repository

wow that's cool. I used to hack CF Worker to operate .git using isomorphic-git, it's a PITA.

> ArtifactFS runs a blobless clone of a git repository: it fetches the file tree and refs, but not the file contents. It can do that during sandbox startup, which then allows your agent harness to get to work.

That's insanely useful. Combining that with committing only the files that changed, you'd get a single-blob editing capability against any .git repo.
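A rough sketch of the idea in pure Python (all names here are made up; a real blobless clone uses git's partial-clone protocol, e.g. `git clone --filter=blob:none` — this just models the lazy-fetch behaviour):

```python
class LazyRepo:
    """Models a blobless clone: tree + refs up front, blob contents on demand."""

    def __init__(self, tree, fetch_blob):
        self.tree = tree               # path -> blob id (known at "clone" time)
        self._fetch_blob = fetch_blob  # callback that downloads one blob
        self._blobs = {}               # locally materialized blob contents
        self.fetched = []              # which blob ids we actually pulled

    def read(self, path):
        oid = self.tree[path]
        if oid not in self._blobs:
            self._blobs[oid] = self._fetch_blob(oid)  # fetch on first access
            self.fetched.append(oid)
        return self._blobs[oid]

    def edit(self, path, new_content):
        # Editing one file only requires fetching that one blob first.
        old = self.read(path)
        self._blobs[self.tree[path]] = new_content
        return old

# A fake "remote" mapping blob ids to contents.
remote = {"a1": b"print('hi')\n", "b2": b"# README\n"}
repo = LazyRepo({"main.py": "a1", "README.md": "b2"}, remote.__getitem__)
repo.edit("main.py", b"print('bye')\n")
print(repo.fetched)  # only the edited file's blob was downloaded
```

The point being: the agent can see the whole file tree immediately, but network cost scales with the files it actually touches.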


I really want to know what M, K, XL, XS mean in this context and how to choose.

I searched all the Unsloth docs and there seems to be no explanation at all.


Q4_K is a type of quantization. It means that all weights will be at a minimum of 4 bits, using the K method.

But if you're willing to give more bits to only certain important weights, you get to preserve a lot more quality for not that much more space.

The S/M/L/XL is what tells you how many tensors get to use more bits.

The difference between S and M is generally noticeable (on benchmarks). The difference between M and L/XL is less so, let alone in real use (ymmv).

Here's an example of the contents of Q4_K_S, Q4_K_M and Q4_K_L:

    S
    llama_model_loader: - type  f32:  392 tensors
    llama_model_loader: - type q4_K:  136 tensors
    llama_model_loader: - type q5_0:   43 tensors
    llama_model_loader: - type q5_1:   17 tensors
    llama_model_loader: - type q6_K:   15 tensors
    llama_model_loader: - type q8_0:   55 tensors
    M
    llama_model_loader: - type  f32:  392 tensors
    llama_model_loader: - type q4_K:  106 tensors
    llama_model_loader: - type q5_0:   32 tensors
    llama_model_loader: - type q5_K:   30 tensors
    llama_model_loader: - type q6_K:   15 tensors
    llama_model_loader: - type q8_0:   83 tensors
    L
    llama_model_loader: - type  f32:  392 tensors
    llama_model_loader: - type q4_K:  106 tensors
    llama_model_loader: - type q5_0:   32 tensors
    llama_model_loader: - type q5_K:   30 tensors
    llama_model_loader: - type q6_K:   14 tensors
    llama_model_loader: - type q8_0:   84 tensors
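The listings above can be turned into a back-of-envelope comparison. The bits-per-weight figures below are approximate ggml values, and I'm assuming (unrealistically, but good enough for illustration) that all quantized tensors are the same size; the f32 tensors are tiny norm vectors, so I skip them:

```python
# Approximate bits-per-weight for common ggml quant types (assumed values).
BPW = {"q4_K": 4.5, "q5_0": 5.5, "q5_1": 6.0,
       "q5_K": 5.5, "q6_K": 6.5625, "q8_0": 8.5}

# Tensor counts from the Q4_K_S and Q4_K_M listings above (f32 excluded).
S = {"q4_K": 136, "q5_0": 43, "q5_1": 17, "q6_K": 15, "q8_0": 55}
M = {"q4_K": 106, "q5_0": 32, "q5_K": 30, "q6_K": 15, "q8_0": 83}

def avg_bpw(counts):
    # Naive average: pretends every tensor holds the same number of weights.
    total = sum(counts.values())
    return sum(BPW[t] * n for t, n in counts.items()) / total

print(f"S: {avg_bpw(S):.2f} bpw   M: {avg_bpw(M):.2f} bpw")
```

Real file sizes depend on which tensors (big attention/FFN matrices vs. small ones) get the extra bits, but the direction is right: M spends more bits than S, L/XL a bit more again.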

They are different quantization types; you can read more here: https://huggingface.co/docs/hub/gguf#quantization-types

Just start with q4_k_m and figure out the rest later.

hey, you could do a bit of research yourself and share your results with us!

> providing both sync and async code paths in the same class, often using a naming scheme which prefixes the async versions of the methods with an a

I have a solution for writing a single code path that serves both async and sync

https://news.ycombinator.com/item?id=43982570
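I don't know exactly which trick the linked comment describes, but one common way to keep a single code path is to write the logic once as async and derive the sync version mechanically (names here are illustrative):

```python
import asyncio

async def fetch_user(user_id: int) -> dict:
    # The one and only implementation (pretend this awaits a real client).
    await asyncio.sleep(0)
    return {"id": user_id, "name": "est"}

def fetch_user_sync(user_id: int) -> dict:
    # Sync facade: drive the async path on a private event loop.
    return asyncio.run(fetch_user(user_id))

print(fetch_user_sync(42))  # {'id': 42, 'name': 'est'}
```

The downside is that the sync facade can't be called from inside a running event loop, which is why some libraries generate the sync variants at build time instead.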


I don't really understand what that does.

> main issue seemed to be delay from what it saw with screenshots and api data and changing course.

This is where I think Taalas-style hardware AI may dominate in the future, especially for vehicle/plane autopilot, even if it can't update weights. Determinism is actually a good thing there.


This is a limitation of LLM I/O, which has historically been a bit slow due to the sequential user/assistant chat prompt formats models are still trained on. In principle, though, nothing stops you from feeding and retrieving realtime full-duplex input/output from a transformer architecture. It will just get slower as you scale to billions or even trillions of parameters, to the point where running it in the cloud might offer faster end-to-end actions than running it locally. What I could imagine is a small local model running everyday tasks and a big remote model tuning in for messy situations where a remote human would otherwise have to take over.
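That handoff could be as simple as a confidence-gated router. Everything below is a made-up stub, just to show the shape of the idea:

```python
def route(observation, local_model, remote_model, threshold=0.8):
    """Run the cheap local model; escalate to the big remote one when unsure."""
    action, confidence = local_model(observation)
    if confidence >= threshold:
        return action, "local"
    return remote_model(observation), "remote"

# Stub models: the local one is only confident on familiar inputs.
local = lambda obs: ("brake", 0.95) if obs == "clear road" else ("?", 0.3)
remote = lambda obs: "hand over to human"

print(route("clear road", local, remote))    # handled locally
print(route("debris ahead", local, remote))  # escalated to the remote model
```

In a real system the gate would need a latency budget too, since by the time the remote model answers, the situation may have changed, which is exactly the screenshot-delay problem above.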

I'd say UI is mostly a 2D-tweaking plus state-management job; it doesn't exactly fit a seq2seq style.

Perhaps the title has a typo?

fluorographane -> Fluorographene

Can't find a single page about fluorographane

https://en.wikipedia.org/w/index.php?search=fluorographane&t...

But this

https://en.wikipedia.org/wiki/Fluorographene


Not a typo. Fluorographene is the sp² form (Nair et al. 2010). Fluorographane uses the -ane suffix to denote full sp³ saturation — same convention as graphene → graphane. The sp³ hybridization is what creates the bistable C-F orientation that stores the bit.

TIL thanks!

Fluorographane: Synthesis and Properties (pdf): https://pubs.rsc.org/en/content/getauthorversionpdf/C4CC0884...

hmm, it isn't strictly gravitational projectile motion, is it?

Could this be used to make better ASCII animations?

Looks like it just finds sources in Confluence to check against the bullshit Claude Code says?

I thought it could search for online citations.

