
Highly recommend UHK for an initial ergo split setup with a traditional row-staggered layout. I put an Apple trackpad in between the splits. But I'm now considering getting into column-staggered split layouts with an integrated trackball/trackpad like the Sofle:

https://xcmkb.com/products/sofleplus2


It does decently well actually. You can test function-calling using Langroid. There are several example scripts you could try from the repo, e.g.

    uv run examples/basic/tool-extract-short-example.py --model ollama/mistral-small

Sample output: https://gist.github.com/pchalasani/662d7f13dbe690d6e2bfef01c...

Langroid has a ToolMessage mechanism that lets you specify a tool/fn-call using Pydantic, which is then transpiled into system message instructions.
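
For a flavor of what that looks like, here is a minimal sketch of defining and enabling a tool (the class, fields, and agent name below are illustrative, not the contents of the actual example script in the repo):

    import langroid as lr
    from langroid.agent.tool_message import ToolMessage

    # A tool/fn-call is just a Pydantic model subclassing ToolMessage;
    # Langroid transpiles it into system-message instructions for the LLM.
    class CityTemperature(ToolMessage):
        request: str = "city_temperature"  # name the LLM uses to invoke the tool
        purpose: str = "To report the <temperature> in a given <city>."
        city: str
        temperature: float

        def handle(self) -> str:
            # Called when the LLM emits this tool; the return value goes back to the LLM.
            return f"Got it: {self.temperature}F in {self.city}"

    agent = lr.ChatAgent(lr.ChatAgentConfig(name="WeatherBot"))
    agent.enable_message(CityTemperature)

Since the tool spec is delivered via system-message instructions, this works even with local models like the ollama/mistral-small example above.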


This is exactly what we have had working in Langroid for at least a year, so I don't quite get the buzz around this. Langroid's `DocChatAgent` produces granular markdown-style citations, and works with practically any (good enough) LLM. E.g. try running this example script on the DeepSeek R1 paper:

https://github.com/langroid/langroid/blob/main/examples/docq...

    uv run examples/docqa/chat.py https://arxiv.org/pdf/2501.12948

Sample output here: https://gist.github.com/pchalasani/0e2e54cbc3586aba60046b621...

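For programmatic use, the rough shape is something like the following (a sketch from memory; the exact config fields and method names may differ, so treat them as assumptions and check the Langroid docs):

    from langroid.agent.special.doc_chat_agent import (
        DocChatAgent,
        DocChatAgentConfig,
    )

    # Point the agent at a document (here, the DeepSeek R1 paper) and ask a
    # question; answers come back with granular, markdown-style citations.
    config = DocChatAgentConfig(
        doc_paths=["https://arxiv.org/pdf/2501.12948"],  # assumed field name
    )
    agent = DocChatAgent(config)
    answer = agent.llm_response("How was GRPO used during training?")
    print(answer.content)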


Could you please point to the docs about how to transfer iTerm2 themes to Ghostty? I couldn't find it.


It looks like the process is more manual than I thought, sorry. There's something which imports themes from the iTerm2 color schemes website weekly, but from what I can find that isn't a feature which ships with Ghostty itself.

Here's the relevant docs page, which I hope explains why I mistakenly thought that transferring a theme directly from iTerm to Ghostty was possible. You could upload your theme to the website they're being sourced from, and wait a week. But that's clearly not the same thing.

https://ghostty.org/docs/features/theme
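
For reference, the fully manual route is to port the colors by hand into Ghostty's config (e.g. ~/.config/ghostty/config); there's also a `theme` key for the bundled themes. The hex values below are placeholders, not a real iTerm2 scheme:

    # colors ported by hand from an iTerm2 scheme (placeholder values)
    background = 1e1e2e
    foreground = cdd6f4
    palette = 0=#45475a
    palette = 1=#f38ba8
    palette = 2=#a6e3a1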


Curious to check it out, but a quick question: does it have autocomplete (GitHub Copilot-style) in the chat window? IMO one of the biggest missing features in most chat apps is autocomplete. Typing messages in these chat apps quickly becomes tedious, and autocompletion helps a lot with this. I’m regularly shocked that it’s almost year 3 of LLMs (depending on how you count) and none of the big vendors have thought of adding this feature.

Another mind-numbingly obvious feature: hitting Enter should just create a newline, and Cmd-Enter should submit. Or at least make this configurable.

(EDITED for clarity)


I don't think this would be good UX, except maybe once you've already typed ~20 chars or so. If it were that good at predicting from the first keystroke, it would have included the info you're asking for in the previous response. It could also work for short commands like "expand" or "make it concise", but I can also see it being distracting when the prediction is wrong.

> Typing messages in these chat apps quickly becomes tedious and autocompletions help a lot with this.

If you're on a Mac, you can use dictation: focus the text input, double-tap the Control key, and just speak.


In the Zed editor, GitHub Copilot autocomplete is enabled in the chat assistant, and it’s incredibly useful when I’m iterating on code generations.

The autocomplete is so good that even for non-coding interactions I tend to just use the Zed chat assistant panel (which can be configured to use different LLMs via a dropdown).

More generally in multi-turn conversations with an LLM you’re often refining things that were said before, and a context-aware autocomplete is very useful. It should at least be configurable.

The Mac’s default dictation is OK for non-technical things, but for anything code-related it would suck, e.g. if I’m referring to things like MyCustomClass etc.


Enter does continue the chat! And shift-enter for new line.

My Mac now has built-in Copilot-style completions (maybe only since upgrading to Sequoia?). They're not amazing but they're decent.

https://support.apple.com/guide/mac-help/typing-suggestions-...


Sorry, I meant hitting Enter should NOT submit the chat. It should continue taking my input, and when I’m ready to submit I’d like to hit Cmd-Enter.


I agree, but only personally. I would assume most people are on the “Enter to submit” train nowadays.

Most of my messaging happens on Discord or Element/matrix, and sometimes slack, where this is the norm. I don’t even think about Shift+Enter nowadays to do a carriage return.


There are a lot of basic features missing from the flagship LLM services/apps.

Two or so years ago I built a localhost web app that lets me trivially fork convos, edit upstream messages (even bot messages), and generate an audio companion for each bot message so I can listen to it while on the move.

I figured these features would quickly appear in ChatGPT’s interface but nope. Why can’t you fork or star/pin convos?
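
The underlying model is simple, too: if each message keeps a pointer to its parent, forking falls out naturally. A rough sketch of the idea (not the actual app's code):

    from dataclasses import dataclass, field
    from typing import Optional
    import uuid

    @dataclass
    class Message:
        role: str                      # "user" or "assistant"
        content: str
        parent: Optional["Message"] = None
        id: str = field(default_factory=lambda: uuid.uuid4().hex)

        def thread(self) -> list["Message"]:
            """Walk parent pointers to reconstruct the convo up to this message."""
            msgs, node = [], self
            while node is not None:
                msgs.append(node)
                node = node.parent
            return list(reversed(msgs))

        def fork(self, new_content: str) -> "Message":
            """Fork here: same upstream history, different message."""
            return Message(role=self.role, content=new_content, parent=self.parent)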


The only editor I’ve seen that has both these features is Zed.


Another feature that I find shockingly absent from most web-based chat providers is autocomplete, i.e. Copilot-like suggestions to complete what you’re typing. Typing long text into chat boxes quickly becomes tedious, and context-based autocomplete helps a lot; you can experience this within AI IDEs like Zed or Cursor, and in fact I often resort to using those just for this feature.


But you can use it directly via the DeepSeek platform's OpenAI-compatible API. Does OpenRouter offer any advantages?

https://platform.deepseek.com/usage


One key, one base_url config, one billing account. If you like to mess around with many different models it's very convenient.
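
Concretely, with the OpenAI Python client it's just a base_url/key swap; the model names below are examples, so check each provider's model list:

    from openai import OpenAI

    # One key, one base_url: point the standard client at OpenRouter ...
    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key="sk-or-...",  # your OpenRouter key
    )
    # ... or at DeepSeek's own platform instead:
    # client = OpenAI(base_url="https://api.deepseek.com", api_key="sk-...")

    resp = client.chat.completions.create(
        model="deepseek/deepseek-chat",  # on DeepSeek's platform it's just "deepseek-chat"
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(resp.choices[0].message.content)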


Yes, all of that. In addition, you can easily give the same prompt to several models at the same time to see how they respond.


Thanks, I was more curious whether they offer better TPS (tokens per second). Also there’s this concerning skepticism about OpenRouter:

https://www.reddit.com/r/LocalLLaMA/s/uGxhqi1YYh


The big question is whether or not o3 is using any type of “meta-generation” algorithm at inference time, i.e. are there multiple invocations of LLM generation at all, or does it generate an insanely long reasoning trace in a single autoregressive stream that somehow implicitly has search-like behavior? In other words, is the search-like behavior learned entirely in post-training and only implicitly exhibited at inference time, or is it explicitly performed at inference time?

Given the enormous compute costs of o3, my speculation has been that search is explicit, but I’ve seen this post from Nathan Lambert, for example, that speculates (in the context of o1) that it’s possible for search to be entirely “baked into” a single-stream rollout (which would depend on significant long-context innovations):

https://www.interconnects.ai/p/openais-o1-using-search-was-a...

If true this would be extremely interesting.
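
To make the distinction concrete: "explicit search" could be as simple as sampling many independent reasoning rollouts and voting or reranking over their answers, versus one very long single rollout. A toy sketch, purely illustrative (the generate() stub stands in for an LLM call; this is not a claim about what OpenAI actually does):

    import random
    from collections import Counter

    def generate(prompt: str, temperature: float = 1.0) -> str:
        # Stand-in for one long autoregressive reasoning trace ending in an answer.
        return random.choice(["42", "42", "41"])

    def explicit_search(prompt: str, n: int = 64) -> str:
        """Meta-generation: n separate invocations, then a majority vote (self-consistency)."""
        answers = [generate(prompt) for _ in range(n)]
        return Counter(answers).most_common(1)[0][0]

    def single_stream(prompt: str) -> str:
        """The alternative: one invocation whose long trace has search-like behavior baked in."""
        return generate(prompt, temperature=0.7)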


I’m also a JetBrains person and never really “got” VSCode, so Cursor was not a fit for me; the VSCode keyboard shortcuts always felt limiting. So I use Zed, since it can be configured to use JetBrains keyboard shortcuts. And it’s open source and super fast (Rust-based).


I have the same issue: I tried to get into VSCode a few times but each time switched back to JetBrains.

If your main issue is the keybindings, though, there is a VSCode plugin[1] that recreates IntelliJ IDEA bindings, which I found helped smooth the transition during my tryouts.

[1] https://marketplace.visualstudio.com/items?itemName=k--kato....


Thanks! Maybe this will open me up to cursor/windsurf.


You can go a step further and have scripts runnable from anywhere, without cloning any repo or setting up any venv, directly from PyPI: when you package your lib, say `mylib`, define CLI scripts under [project.scripts] in your pyproject.toml,

  [project.scripts]
  mytool = "path.to.script:main"

and publish to PyPI. Then anyone can directly run your script via

  uvx --from mylib mytool
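
(Here `path.to.script:main` just needs to point at an importable, zero-argument function; a minimal hypothetical mylib/cli.py, referenced as "mylib.cli:main":)

    # mylib/cli.py
    import sys

    def main() -> None:
        # whatever your tool does; CLI args are available via sys.argv as usual
        args = sys.argv[1:]
        print(f"Hello from mytool, args: {args}")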

As an example, for Langroid (an open-source agent-oriented LLM lib), I have a separate repo of example scripts `langroid-examples` where I've set up some specific scripts to be runnable this way:

https://github.com/langroid/langroid-examples?tab=readme-ov-...

E.g. to chat with an LLM

  uvx --from langroid-examples chat --model ollama/qwen2.5-coder:32b

or chat with LLM + web-search + RAG

  uvx --from langroid-examples chatsearch --model groq/llama-3.3-70b-versatile


Hm, I think you can just run something with 'uvx <name>' and it'll download and run it, am I misremembering? Maybe it's only when the tool and the lib have the same name, but I think I remember being able to just run 'uvx parachute'.


You're right, that's only when the tool and lib have the same name. In my case, I have several example scripts that I wanted to make runnable via a single examples lib.


That's a really useful tip, thank you!

