hu3's comments

I see a lot of game devs in discord experimenting with Electrobun to release desktop games.

I think it's going to eat a piece of the Electron pie for Steam indie games.

Most stay with Bun after seeing how fast and seamless it is to run TypeScript games with instant auto reload:

bun --watch game.ts
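To illustrate the workflow: `bun --watch game.ts` re-runs the file on every save, so a tick-based entry point like the sketch below (the game logic here is hypothetical, just a bouncing position) gives instant feedback while iterating.

```typescript
// Hypothetical minimal game entry point for `bun --watch game.ts`.
// Bun restarts this file on every save, so tweaks show up immediately.
type State = { x: number; vx: number };

function update(state: State, dt: number): State {
  // Move along one axis; bounce at the edges of a 0..100 play area.
  let x = state.x + state.vx * dt;
  let vx = state.vx;
  if (x < 0 || x > 100) {
    vx = -vx;
    x = Math.max(0, Math.min(100, x));
  }
  return { x, vx };
}

// Simulate a few fixed 16 ms ticks instead of a real render loop.
let state: State = { x: 95, vx: 500 };
for (let i = 0; i < 3; i++) {
  state = update(state, 0.016);
  console.log(`tick ${i}: x=${state.x.toFixed(1)} vx=${state.vx}`);
}
```

In a real game you'd swap the fixed-tick loop for `requestAnimationFrame` or a timer and draw to a canvas or window, but the watch-restart cycle is the same.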


I do a lot of web development, and even if we set the great tooling aside for a moment, Bun is still a major improvement (a real leap, I’d say) when it comes to performance.

Are many games built with Electron...? I know there are a few HTML5 games; CrossCode was the first one I recall seeing that really pushed it. Aren't most small games Unity or Godot?

CrossCode is a marvel in that they had to build their own runtime: https://news.ycombinator.com/item?id=41155807

I think the most famous Electron game is Vampire Survivors, but it has since been ported to Unity.


What Discord server are you referring to? Would love to join.

Out of their way?

They have been watching talented folks waste their lives away reverse engineering hardware/software that they possess all the schematics of.

If they really wanted to help, all they had to do was send a single e-mail with a zip file.

The distortion field is unbelievable.


In my experience half the things I ran in Wine ran better in it than Windows.

So depending on what you want to run, Wine has not only caught up but surpassed Windows.


How many USB ports does a MBA M1 have?

Now you have to walk around with a USB hub, for subpar battery life.


The Go team never advertised, designed for, or supported external usage of their backend.

some? It can happen with any language.

Sonnet 4.6 already available in VSCode Copilot Pro+ for me ($39/mo plan) on a 128K context size limit:

https://i.imgur.com/mHvtuz8.png

After some quick tests it seems faster than Sonnet 4.5 and slightly less smart than Opus 4.5/4.6.

But given the small 128k context size, I'm tempted to keep using GPT-5.3-Codex, which has more than double the context size and seems just as smart while costing the same (1x premium request) per prompt.

I have my reservations against OpenAI the company but not enough to sacrifice my productivity.


And the screamingly fast compilation speed is a boon to fast LLM iterations as well.

Even the thousands of integrations?

The strength of openclaw is its massive community adoption.

It can connect to a ton of things.

It's like https://zapier.com meets LLMs.


I'm sure a lot of API plumbing can be copied/adapted wholesale from the (open-source) openclaw repo. LLMs are surprisingly good at this kind of stuff. And yeah, it would require some testing, but I doubt what openclaw has now is itself in a very stable state (from my very limited testing).

You think he implemented those thousands of integrations himself? Or maybe some particular tool was used that can be used again for implementing such things? Particular tool that so many of us use as well?

It was obviously largely by the community. Just take a look at PRs.

The question is, will the community continue with this level of engagement on the project now that it has the OpenAI stamp on it?

I highly doubt it. A fork? Maybe.


Where do you guys get the $1B exit from? I haven't seen numbers yet.

It's AI. Take a sane number, add a 14,000x multiplier to that. And you'll only be one order of magnitude off in our current climate.

you can also take annualized profit run rate times negative 14,000.

probably an order of magnitude too low rather than too high as well :P

It was reportedly acquired for $30M (source: Twitter); the $1B number comes from people on Twitter speculating it could be worth that much in the near future.
