Hacker News | ImJasonH's comments

It's unclear to me from this post or from Red Hat's announcement[0] what makes it an enterprise build, aside from offering a support SLA.

Are there any material differences between this and the free OSS Podman Desktop[1] released 4 years ago?

0: https://www.redhat.com/en/blog/introducing-red-hat-build-pod...

1: https://podman-desktop.io/


Isn't that what most enterprise software is? A number to call and some kind of contract on it?


Thanks for giving it a shot, and for the kind words.

I didn't focus much on the realism of the environment, and spent most of my tokens making the drone "feel" right -- responsive but a little sluggish, physical, controllable, etc.

If I spend more time on it I'd probably work on making the skier a little better, since that's what you end up spending the most time looking at. It's basically a placeholder now, and it shows.

But you're right, making the rest of the peripheral view more realistic would also probably have a big impact.

Maybe I'll set up a workflow to deploy PRs to preview environments and encourage folks to send PRs to work on these things. In the meantime, feel free to fork it and make whatever changes you think would make it more fun!


Both Claude Code and Codex use sandbox-exec with Seatbelt to sandbox execution:

- https://developers.openai.com/codex/security/#os-level-sandb...

- https://code.claude.com/docs/en/sandboxing
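
For anyone curious what that looks like in practice, here's a minimal deny-by-default Seatbelt profile and invocation. The rules below are purely illustrative, not the profiles either tool actually ships:

```
;; claude.sb -- hypothetical sketch: read anywhere, write only under the
;; project directory, no network. Real profiles need more allowances
;; (sysctl reads, mach lookups, etc.) to run most programs.
(version 1)
(deny default)
(allow process-exec)
(allow process-fork)
(allow file-read*)
(allow file-write* (subpath "/Users/me/project"))
```

Then run the tool under it with `sandbox-exec -f claude.sb claude`. Note that `sandbox-exec` is marked deprecated by Apple, though it still works and both tools rely on the underlying Seatbelt mechanism.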


It weirds me out a bit that Claude is able to reach outside the sandbox during a session. According to the docs this happens with user consent, but I would feel better with a more rigid safety net, which is why I've been explicitly invoking claude with sandbox-exec.


Here's a Go mod proxy-proxy that lets you specify a cooldown, so you never get deps newer than N days/weeks/etc.

https://github.com/imjasonh/go-cooldown

It's not running anymore but you get the idea. It should be very easy to deploy anywhere you want.


Govulncheck is one of the Go ecosystem's best features, and that's saying something!

I made a GitHub action that alerts if a PR adds a vulnerable call, which I think pairs nicely with the advice to only actually fix vulnerable calls.

https://github.com/imjasonh/govulncheck-action

You can also just run the stock tool in your GHA, but I liked being able to get annotations and comments in the PR.
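
For reference, running the stock tool in a workflow looks something like this (workflow name and trigger are just placeholder choices):

```yaml
name: vulncheck
on: pull_request
jobs:
  govulncheck:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: stable
      - run: go install golang.org/x/vuln/cmd/govulncheck@latest
      - run: govulncheck ./...
```

This fails the check if any reachable vulnerable call is found, but you don't get the inline PR annotations.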

Incidentally, the repo has dependabot enabled with auto-merge for those PRs, which is IMO the best you can do for JS codebases.


Govulncheck is good, but not without false positives. Sometimes it raises "unfixable" vulnerabilities, and there's still no way to exclude vulnerabilities by CVE number.


I haven't experienced that (that I know of), do you have an example handy?


Checkpoints sounds like an interesting idea, and one I think we'll benefit from if they can make it useful.

I tried a similar(-ish) thing last year at https://github.com/imjasonh/cnotes (a Claude hook to write conversations to git notes) but ended up not getting much out of it. Making it integrated into the experience would have helped; I had a Chrome extension to display it in the GitHub UI, but even then I eventually stopped using it.
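
The underlying mechanism is just plain git notes, which cnotes automates from a hook. A self-contained sketch (repo and note text are made up for illustration):

```shell
# Create a throwaway repo, make a commit, and attach a "conversation"
# note to it -- notes live alongside the commit without changing its hash.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "feat: add thing"
git -c user.email=you@example.com -c user.name=you \
    notes add -m "prompt: asked Claude to add thing"
git notes show HEAD   # prints the attached note
```

Notes are stored under `refs/notes/commits`, so they need an explicit `git push origin refs/notes/*` to leave your machine, which is part of why the workflow never quite felt integrated.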


Ah, you were 7mo ahead of me doing the same thing, and you came to a similar conclusion. The idea holds value, but in practice it isn't felt.

https://github.com/eqtylab/y


What are they actually “checkpointing”? Git already “checkpoints” your code and they can’t time travel your LLM conversation.


I thought the use of AI in the Secret Invasion title sequence was actually really appropriate, even "meta", maybe even a bit ahead of its time.

The seemingly purposeful AI style made it seem unnatural (on purpose), and like a facsimile of an otherwise trustworthy thing (on purpose), which was exactly in line with the idea of the show.

The execution of that show and that idea was pretty bad, but to me one of its few positives was this example of using AI art overtly and leaning into its untrustworthy nature.


Early in one of the conversations Gemini actually proposed a Lisp-like language with S-expressions. I don't remember why it didn't follow that path, but I suspect it would have been happy there.


I hope I didn't give the impression that I thought this language was ready to be put into commercial planes. :)

This was the result of an afternoon/evening exploring a problem space, and I thought it was interesting enough to share.


The comment was sarcastic, hence the "/s" at the end of the first sentence.

Everything else was a thought experiment to show how the idea of LLMs on everything including commercial planes is a very bad idea and would give regulators a hard time.

The point is: just because you can (build and run anything) does not mean you should (put it on commercial planes).


I don't disagree at all. :)

This was mainly an exercise in exploration with some LLMs, and I think I achieved my goal of exploring.

Like I said, if this topic is interesting to you and you'd like to explore another way to push on the problem, I highly recommend it. You may come up with better results than I did by having a better idea what you're looking for as output.


I tried a thread, and I got that both LLMs and humans optimize for the same goal, working programs, and that the key is verifiability. So it recommended Rust or Haskell combined with formal verification and contracts. I think the conclusion of the post holds up: "the things that make an LLM-optimized language useful also happen to make them easier for humans!"

