Hacker News | rgbrgb's comments

how come? just because it's open source doesn't mean that they run that exact binary on their servers. ngrok does pretty well without open sourcing.

If you have the source and trust is a factor for you, the locus of trust moves, because you can simply self-host and know exactly what you're running.

If you're in TS/JS land, I like to use an open source version of this called graphile-worker [0].

[0]: https://worker.graphile.org


I am using pgboss myself, very decent, very simple. Had some issues with graphile back in the day, can't remember what exactly; it has probably already fixed whatever I was struggling with!

crush looks nice and I like the wacky vibe of charm [0]. anyone know the main differences between it and opencode [1]?

[0]: https://charm.land

[1]: https://opencode.ai


FWIW Carmack did this as CTO of Oculus [0]. Another configuration I've seen is for the CTO to have like 1 direct (VP Eng) who does actual eng managing. You could argue it's a staff engineer role but I've never seen staff engineers actually get much say over org direction/structure or be empowered to break gridlock like this.

[0]: https://www.uploadvr.com/john-carmacks-app-reviews-series/


this is the core problem rn with developing anything that uses an LLM. It’s hard to evaluate how well it works and nearly impossible to evaluate how well it generalizes unless the input is constrained so tightly that you might as well not use the LLM. For this I’d probably write a bunch of test tasks and see how well it performs with and without the skill. But the tough part here is that in certain codebases it might not need the skill. The whole environment is an implicit input for coding agents. In my main codebase right now there are tons of playwright specs that Claude does a great job copying / improving without any special information.

edit with one more thought: In many ways this mirrors building/adopting dev tooling to help your (human) junior engineers, and that still feels like the good metaphor for working with coding agents. It's extremely context dependent and murky to evaluate whether a new tool is effective -- you usually just have to try it out.
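A minimal sketch of the "write a bunch of test tasks" idea. Everything here is illustrative: `run_agent` is a hypothetical stand-in (a real harness would shell out to the coding agent and grade the resulting diff or test run), and the pass probabilities are made up just to show the comparison shape.

```python
import random

def run_agent(task, skill):
    """Hypothetical stand-in for invoking a coding agent on a task.

    Simulated here: having the skill bumps the pass probability. A real
    harness would shell out to the agent and grade the resulting diff/tests.
    """
    pass_rate = 0.8 if skill else 0.5
    return random.random() < pass_rate

def eval_skill(tasks, skill, trials=50):
    """Compare pass rates with and without the skill over repeated trials."""
    total = len(tasks) * trials
    with_skill = sum(run_agent(t, skill) for t in tasks for _ in range(trials))
    without = sum(run_agent(t, None) for t in tasks for _ in range(trials))
    return {"with_skill": with_skill / total, "without_skill": without / total}

random.seed(0)
tasks = ["add a playwright spec for login", "fix the checkout e2e test"]
rates = eval_skill(tasks, "playwright-skill")
print(rates)
```

The murky part the comment describes shows up here too: the numbers only mean something relative to a specific codebase, so the same skill can score very differently across environments.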


Also, if you figure out a good prompt today, you don't know how long it will last, because model updates are outside your control.


> This server integrates with desplega.ai

This is cool! no shade at all to desplega.ai but I would love a version of this that runs locally + does stuff like verifying no tests are flaky. I do this with a few extra steps via claude code + playwright tests. e2e tests are the best way I know for catching UI regressions but they're expensive and annoying to run, so something that looked at a PR and healed / wrote tests in the background as I work on features would be pretty cool.

Why local? Basically I'm just cost sensitive for my own projects and already have this nasty MacBook that only gets like 20% utilization.


One of the things we used is this retry-based algorithm from Meta: https://engineering.fb.com/2020/12/10/developer-tools/probab...

If your challenge is flakiness, this should help initially. Unfortunately, a lot of the work lives in our engine, plus a custom system for handling operations that goes beyond vanilla Playwright, so running it locally would be quite challenging.
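As a rough illustration of the retry idea (my simplification, not the probabilistic model Meta actually describes): rerun each failing test a few times, and treat a test that both passes and fails across reruns as flaky rather than broken. `Flaky` below is a deterministic stand-in for a flaky test.

```python
class Flaky:
    """Deterministic stand-in for a flaky test: alternates fail, pass, ..."""
    def __init__(self):
        self.calls = 0
    def __call__(self):
        self.calls += 1
        return self.calls % 2 == 0

def classify_with_retries(test, retries=5):
    """Run a test once; on failure, rerun it a few times.

    - passes first try             -> "pass"
    - fails every rerun            -> "fail" (likely a real breakage)
    - mixed outcomes across reruns -> "flaky"
    """
    if test():
        return "pass"
    reruns = [test() for _ in range(retries)]
    return "flaky" if any(reruns) else "fail"

print(classify_with_retries(lambda: True))   # pass
print(classify_with_retries(lambda: False))  # fail
print(classify_with_retries(Flaky()))        # flaky
```

The real measure in the linked post is a probability estimated over many runs, not a three-way label, but the retry mechanism is the same basic ingredient.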


> framework-agnostic, drop-in chat solution

Maybe I'm being dumb but is this a generic chat UI for openai models only? Pretty bearish on adoption of this if so. As a pragmatic dev I'd definitely not be keen to bake model lock-in into my UI for functionality as generic as chat.


The openai library works with most other providers by just changing the endpoint url, and this is Apache licensed, so I feel good about using it.


Is there a place to change the endpoint url? It seems we just add the workflow id and it generates a secret that the frontend uses. The Apache license is good, though.


Note that their basic example in the readme starts with `OpenAI(api_key=os.environ["OPENAI_API_KEY"])`

Generally speaking, in most regular usage, you can replace it with an alternative provider like OpenRouter, with `OpenAI(base_url="https://openrouter.ai/api/v1", api_key="<OPENROUTER_API_KEY>")`

But I haven't tested chatkit yet and don't know if it uses special endpoints that are currently only supported by OpenAI. IANAL, but I would assume that since the client is Apache licensed, it wouldn't be an issue for OpenRouter and other AI providers to implement their own versions of those endpoints.

Maybe I'm overly optimistic, but based on what I'm seeing with MCP and other recent developments, the industry is continuously gravitating towards a commoditized/interchangeable future where no provider has a structural moat.


Looks like it supports custom backends


In my experience, the only reliable level-of-effort (LOE) estimate comes from someone who just touched that feature, or who scaffolded it out or built the scrappy prototype in the process of producing the estimate.


Working on a personal recruiter / talent agent for my smartest dev/product/design friends (and theirs) https://www.hedgy.works

Key problems we're solving:

- Everyone wants to be doing meaningful, fun work that feels like their "life's work". Few feel like they are.

- In recruiting, the AI spam problem is real and only getting worse, essentially killing the cold application pipeline. You need a referral.

- Optimizing your career feels like annoying politicking for a lot of the most talented folks who just want to focus on building cool stuff. But, as an employee, if you don't test the market (e.g. take a recruiter conversation) from time to time, your comp can really stagnate.


this is a great landing page. I downloaded.

great onboarding too, using it now.

Very handy, thanks!


Landing page is indeed very refreshing


thank you!!!

