
Yeah, the initial experience with no colors doesn't look great. I can implement this when I have some free time; if you'd like to do it yourself, feel free to open a PR. Thanks!

Sorry, too busy sending PRs to fix bugs I made a few months ago :(

I think the three initial colors are easy to implement and will make your site more beginner-friendly.


14B Qwen was a good choice, but it has become a bit outdated, and the new 4B version seems to have surpassed it in benchmarks somehow.

It's a balancing game: how slow a token generation speed can you tolerate? Would you rather get an answer quickly, or wait a few seconds (or sometimes minutes) for reasoning?

For quick answers, Gemma 3 12B is still good. GPT-OSS 20B is pretty quick when reasoning is set to low; at that setting it usually doesn't think for longer than one sentence. I haven't gotten much use out of Qwen3 4B Thinking (2507), but at least it's fast while reasoning.
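
As a minimal sketch, this is roughly how you'd pin GPT-OSS to low reasoning with llama.cpp's server. The --chat-template-kwargs flag and the reasoning_effort key are from memory, so treat them as assumptions and check llama-server --help on your build:

  # serve GPT-OSS 20B with reasoning effort set to low (assumed flag/key names)
  llama-server -hf ggml-org/gpt-oss-20b-GGUF \
    --jinja --chat-template-kwargs '{"reasoning_effort": "low"}'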


Ollama adding a paid cloud version made me postpone this post for at least a few weeks. I don't object to them making money, but it was hard to recommend a tool for local usage when the first instruction would be to go to settings and enable airplane mode.

Luckily llama.cpp has come a long way and is now at a point where I can easily recommend it as the open source option instead.
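
For anyone who hasn't tried it, the basic workflow is a one-liner; the model repo below is just an example of a GGUF build on Hugging Face, so substitute whatever model you like:

  # downloads the model on first run, then serves an OpenAI-compatible API at http://localhost:8080
  llama-server -hf ggml-org/gemma-3-12b-it-GGUF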


This is probably the command:

  sudo sysctl iogpu.wired_limit_mb=184320
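  # raises the macOS cap on GPU "wired" memory to 180 GiB (184320 MB); resets on reboot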
Source: https://github.com/ggml-org/llama.cpp/discussions/15396


I'm interested in this; my impression was that the newer chips have unified memory and high memory bandwidth. Do you do inference on the CPU or on the external GPU?


I don't; I'm a REALLY light user, and smaller LLMs work pretty well. I used a 40 GB LLM and it was _pokey_, but it worked, and switching between them is pretty easy. This is a 12-core Xeon with 64 GB RAM. My M4 mini is okay with smaller LLMs, and I have a Ryzen 9 with an RTX 3070 Ti that's the best of the bunch, but none of this holds a candle to the setups of people who spend real money to experiment in this field.


Good point, let me add a quick note.


Thank you, it was an integral part of the whole post!


According to the benchmarks, this one improves on the previous version across the board, and on some benchmarks it even beats 30B-A3B. Definitely worth a try; it'll easily fit into memory, and token generation speed will be pleasantly fast.
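
Back-of-envelope on the memory point, assuming a 4-bit quant at roughly half a byte per parameter:

  # rough weight-size estimate for a 4B-parameter model at 4-bit quantization
  python3 -c 'print(f"{4e9 * 0.5 / 2**30:.1f} GiB")'  # ~1.9 GiB, plus KV cache and overhead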


There is a new Qwen3-30B-A3B; you are comparing it to the old one. https://huggingface.co/Qwen/Qwen3-30B-A3B-Thinking-2507


This is the first time I’m hearing about Nebula. How does it compare to Tailscale?


I saw from the comments that they provide smaller binaries, which may have worked on my Raspberry Pi. Maybe I’ll give it a try one day.

https://tailscale.com/kb/1207/small-tailscale

