Hacker News | thcuk's comments

Edit Cargo.toml and add "x11" to the eframe features.

See my post above.


Fails to build with "cargo install localgpt" under Linux Mint.

Git clone the repo and change Cargo.toml by adding:

"""toml
# Desktop GUI
eframe = { version = "0.30", default-features = false, features = [ "default_fonts", "glow", "persistence", "x11" ] }
"""

That is, add "x11" to the features list.

Then cargo build --release succeeds.

I am not a Rust programmer.
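For anyone scripting the edit, the change can be sketched like this (the scratch path and the exact eframe line are assumptions based on the snippet above; adjust the sed pattern to match your checkout):

```shell
# Work on a scratch copy of the dependency line (path is illustrative).
cat > /tmp/Cargo-snippet.toml <<'EOF'
eframe = { version = "0.30", default-features = false, features = [ "default_fonts", "glow", "persistence", ] }
EOF
# Append "x11" to the features list so eframe links against X11 on Linux.
sed -i 's/"persistence",/"persistence", "x11",/' /tmp/Cargo-snippet.toml
grep -o '"x11"' /tmp/Cargo-snippet.toml   # prints "x11"
```

Note that `sed -i` as written is GNU sed (fine on Linux Mint; BSD/macOS sed wants `-i ''`). After making the real edit in the repo's Cargo.toml, `cargo build --release` picks up the new feature.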


git clone https://github.com/localgpt-app/localgpt.git

cd localgpt/

edit Cargo.toml and add "x11" to the eframe features

cargo install --path .

Hey! Is that Kai Lentit guy hiring?


Tested the same model on an Intel N100 mini PC with 16 GB, the hundred-bucks PC.

llama-server -m /Qwen3-30B-A3B-Instruct-2507-GGUF:IQ3_S --jinja -c 4096 --host 0.0.0.0 --port 8033

Got <= 10 t/s, which I think is not so bad!

On an AMD Ryzen 5 5500U with Radeon Graphics, compiled for Vulkan: got 15 t/s. Could swear this morning it was <= 20 t/s.

On an AMD Ryzen 7 H 255 with Radeon 780M Graphics, compiled for Vulkan: got 40 t/s.

On the last one I did a quick comparison with the unsloth version, unsloth/Qwen3-30B-A3B-GGUF:Q4_K_M, and got 25 t/s. Can't really comment on the quality of the output; seems similar.
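Once llama-server is up (as launched above), it exposes an OpenAI-compatible HTTP API, so you can eyeball throughput from any client. A minimal sketch (the prompt and max_tokens are illustrative; the curl line is commented out so the snippet runs without a live server):

```shell
# Build a chat request for llama-server's OpenAI-compatible endpoint.
PAYLOAD='{"messages":[{"role":"user","content":"Say hi"}],"max_tokens":16}'
echo "$PAYLOAD"
# With the server from above running, send it:
# curl -s http://localhost:8033/v1/chat/completions \
#   -H 'Content-Type: application/json' -d "$PAYLOAD"
```

The server's response includes per-request timing, which is where the t/s numbers above come from.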

