xmorse's comments | Hacker News

Check out Critique too if you're looking for a side-by-side diff TUI.

It uses opentui, the same framework used by opencode.

It can also render diffs to images, PDF, and HTML. Very useful for agents sharing diffs in remote environments like Openclaw or Kimaki.

https://github.com/remorses/critique


Mainly using playwriter.dev to help debug CSS issues on the page. It's an extension you can enable in Chrome to let agents control the browser via CDP.

https://github.com/remorses/playwriter


Interesting, thanks. In your opinion, what are the benefits compared to the native Chrome remote debugging feature plus the chrome-devtools MCP?

This one works as an extension, so you don't need a new browser specifically for agents, which makes it easier to collaborate. Also, if you run the MCP in non-headless mode, it brings the browser to the front on every interaction, like opening a new page. With the extension this doesn't happen.

Another benefit is context usage.

The CLI can also do a lot more than other MCPs because it runs code snippets in a stateful sandbox to control the browser, so it can do virtually anything instead of exposing just a few fixed tools like `scroll` and `click`.
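As a rough illustration of why a single stateful "execute" tool can replace many fixed tools, here is a minimal sketch. It is hypothetical, not playwriter's actual implementation: the `execute` helper and `state` object are assumptions, and a real version would expose the Playwright `page` object rather than a plain record.

```typescript
// Hypothetical sketch: one "execute" tool evaluating agent-written
// snippets in a persistent sandbox, so state survives across calls.
type Sandbox = Record<string, unknown>;

const sandbox: Sandbox = {}; // lives for the whole agent session

function execute(code: string): unknown {
  // Evaluate the snippet with the shared sandbox in scope. A real
  // implementation would also pass in the Playwright `page` object.
  const fn = new Function("state", code);
  return fn(sandbox);
}

// First call stores something in the sandbox...
execute(`state.visited = ["https://example.com"];`);

// ...and a later call builds on it, with no extra tool definitions.
const count = execute(`
  state.visited.push("https://example.com/about");
  return state.visited.length;
`);
console.log(count); // 2
```

Because the model writes ordinary code instead of picking from a fixed tool list, anything the underlying API supports is reachable without growing the tool schema.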


MCP eats lots of context (~20k tokens for Chrome's). The more tokens you use needlessly, the faster your context rots (i.e., worse performance).

I am using WebSocket hibernation, so the connection costs nothing if no events are sent for some time (9 seconds, I think). The cost is completely negligible: I run a similar service for the Framer MCP and it costs basically nothing even with thousands of users every week.

The big advantage of this architecture is that it's very lightweight and starts in less than 200ms. Much faster than the built-in Cloudflare tunnel, and no login required.


Writing this in Mojo would have been so much easier


It's barely gaining adoption though. The lack of buzz is a chicken-and-egg issue for Mojo. I fiddled with it briefly (mainly to get some of my Python scripts working), and it was surprisingly easy. It'll take off one day for sure if Lattner doesn't give up on it early.


Isn't the compiler still closed source? I and many other ML devs have no interest in a closed-source compiler. We have enough proprietary things from NVIDIA.


Yeah, the mojo pitch is so good, but I don't think anyone has an appetite for the potential fuckery that comes with a closed source platform.


Yes, but Lattner has said multiple times it's closed until it matures (he apparently did this with LLVM and Swift too), so it's not unusual. His open-source target is the end of 2026. In all fairness, I have zero doubt that he will deliver.


Given Swift for TensorFlow, let's see how this one goes.


That one did get open-sourced, but nobody ended up wanting to use it.


Why would anyone want to pair a subpar language with a subpar ML framework?


That is the thing: what lessons were learnt from it, and how will Mojo tackle them?


I feel like it's in AMD/Intel/G's interest to pile a load of effort into (an open-source) Mojo.


Mojo is not open source and would not get close to the performance of cuTile.

I'm tired of people shilling things they don't understand.


It's all over this thread (and every single other HN thread about GPU/ML compilers): people quoting random buzzword/clickbait takes.


Use-cases like this are why Mojo isn't used in production, ever. What does Nvidia gain from switching to a proprietary frontend for a compiler backend they're already using? It's a legal headache.

Second-rate libraries like OpenCL had industry buy-in because they were open. They went through standards committees and cooperated with the rest of the industry (even Nvidia) to hear-out everyone's needs. Lattner gave up on appealing to that crowd the moment he told Khronos to pound sand. Nobody should be wondering why Apple or Nvidia won't touch Mojo with a thirty-nine and a half foot pole.


The kernels now written in Mojo were all hand-written in MLIR, like in this repo. They made a full language because that isn't scalable; a sane language is totally worth it. Nvidia will probably end up buying them in a few years.


NVidia is perfectly fine with C++ and Python JIT.

CUDA Tile was designed exactly to give Python parity in writing CUDA kernels, acknowledging Python's relevance while offering a path where researchers don't need to mess with C++.

It was announced at this year's GTC.

NVidia has no reason to use Mojo.


I don't think Nvidia would acquire Mojo when the Triton compiler is open source, optimized for Nvidia hardware, and considered an industry standard.


Nobody is writing MLIR by hand; what are you on about? There are so many MLIR frontends.


How does Mojo with MAX optimize the process?


What about a forty-foot pole? Would it be viable?


I really want Mojo to take off. Maybe in a few years. The lack of a stdlib holds it back more than they think, and since their focus is narrow atm, it's not useful for the vast majority of work.


It would help if they were not so focused on macOS and Linux.

Julia and Python GPU JITs work great on Windows, and many people only get Windows systems as their default at work.


Approximately nobody writing high performance code for AI training is using Windows. Why should they target it?


As a desktop, and sometimes that is the only thing available.

When is the Year of NPUs on Linux?


This targets Blackwell GPUs so I’m not sure what you are talking about


The same: hardware available to Windows users as work devices at several companies, used by researchers who work at said companies:

https://www.pcspecialist.de/kundenspezifische-laptops/nvidia...

Which, as usual, kind of works but not really on GNU/Linux.


I've commissioned a board of MENSA members to devise a workaround for this issue; they've identified two potential solutions.

1) Install Linux

2) Summon Chris Lattner to play you a sad song on the world's smallest violin in honor of the Windows devs that refuse to install WSL.


I'll go with: customers keep using CUDA with Python and Julia and ignore that Chris Lattner's company exists, while Mojo repeats the Swift for TensorFlow story.

What about that outcome?


https://termcast.app

A port of Raycast to the terminal. It lets you run Raycast extensions as TUI apps. Powered by opentui.


https://playwriter.dev

A browser automation Chrome extension and MCP. It consumes less context than the Playwright MCP and is more capable: it uses the Playwright API directly, and the Chrome extension is a CDP protocol proxy over WebSockets.

I use it to automate workflows in development, but also for filing taxes and other boring tasks.


This Chrome extension allows you to control your own browser via MCP.

It bridges the CDP protocol from the MCP to the browser, meaning you can do everything Playwright can.

The MCP works using a single tool: execute. It runs Playwright code to control the browser, so context usage is small compared to the Playwright MCP and the capabilities are more extensive.
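To make the CDP-proxy idea concrete, here is a hedged sketch, not the extension's actual code: since Playwright speaks the Chrome DevTools Protocol as JSON messages, a bridge mostly just relays those messages between the two sides. Both transports are mocked here with EventEmitters instead of a real WebSocket and `chrome.debugger`.

```typescript
// Hypothetical sketch of a CDP relay: forward protocol messages
// verbatim between the Playwright side and the browser side.
import { EventEmitter } from "node:events";

interface CdpMessage { id?: number; method?: string; params?: unknown; result?: unknown; }

function bridge(playwrightSide: EventEmitter, browserSide: EventEmitter) {
  // Everything Playwright sends goes straight to the browser, and back.
  playwrightSide.on("message", (msg: CdpMessage) => browserSide.emit("send", msg));
  browserSide.on("message", (msg: CdpMessage) => playwrightSide.emit("send", msg));
}

// Demo: a mock "browser" answers one CDP command.
const pw = new EventEmitter();
const browser = new EventEmitter();
bridge(pw, browser);

browser.on("send", (msg: CdpMessage) => {
  if (msg.method === "Target.getTargets")
    browser.emit("message", { id: msg.id, result: { targetInfos: [] } });
});

let reply: any;
pw.on("send", (msg: CdpMessage) => { reply = msg; });
pw.emit("message", { id: 1, method: "Target.getTargets" });
console.log(reply); // { id: 1, result: { targetInfos: [] } }
```

Because the relay is protocol-agnostic, every CDP domain Playwright uses passes through unchanged, which is why the extension can cover everything Playwright can do.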


I made something similar that lets you run TUI applications inside other terminals via ghostty-vt, to implement things like tmux in opentui.

It will be used to render ANSI inside opencode

https://github.com/remorses/ghostty-opentui


The advantage of space is that you have infinite scale. Maybe data centers in space do not work at small scale, but you have to think of them at a much larger scale.

Elon Musk considered data centers in space simply because more solar power is available there than on Earth.


Super cool. I just created a similar project that runs Ghostty in the terminal, meaning you can create something like tmux from scratch.

I will use it to add support for colored bash tool output in opencode.

https://github.com/remorses/opentui-ansi-vt.git

