Hacker News | new | past | comments | ask | show | jobs | submit | baristaGeek's comments | login

I built this because I thought it would be cool to show which LLM responds faster (between GPT-4o mini, Claude 3 Haiku, and Gemini 2.5 Flash) and to show some metrics (TTFT, avg tok/s, total time, and nTokens).
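The metrics above fall out of timestamps around a streamed response. A minimal sketch of how they could be computed (the `fake_stream` generator is a hypothetical stand-in for a real streaming LLM client):

```python
import time

def stream_metrics(token_stream):
    """Compute TTFT, average tokens/sec, total time, and token count
    from an iterator that yields tokens as they arrive."""
    start = time.perf_counter()
    ttft = None
    n_tokens = 0
    for _ in token_stream:
        if ttft is None:
            ttft = time.perf_counter() - start  # time to first token
        n_tokens += 1
    total = time.perf_counter() - start
    return {
        "ttft": ttft,
        "avg_tok_s": n_tokens / total if total > 0 else 0.0,
        "total_s": total,
        "n_tokens": n_tokens,
    }

def fake_stream():
    # Simulated stream: a delay before the first token, then steady output.
    time.sleep(0.05)
    for tok in ["Hello", ",", " world"]:
        time.sleep(0.01)
        yield tok

m = stream_metrics(fake_stream())
```

With a real provider SDK you'd wrap the streaming iterator the same way; only the stream source changes.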

Are you open to UTC -5?

This very interesting blog post got me thinking about what English might look like in 2100 or 2200, driven by the changes of the internet and AI. Spelling matters less, so the alphabet gets reduced? Simpler grammar as it's spoken more widely worldwide? Emojis as punctuation?

window unseal nite no log. odd.


Hey! I just wanted to build this to show where gaming is going thanks to this AI Boom (choose your own adventure, but opinionated enough to move numbers, get items, have a game end, etc.)

Don't consider this true educational material! haha


It obviously doesn’t end today but it should be fast.

When Noriega was arrested by the US, the legitimate president started operating normally a few days later.


Trump is threatening, today, the new Next In Line leader of Venezuela.

I'm skeptical it will be over soon.

We in the USA now own Venezuela. It's all our fault going forward.


Philz Coffee is reportedly nearing a $145M PE acquisition. Just another "only in SF" story: where your barista's startup dream comes true, and the morning pour exits before your Series B.


Pretty sure the barista's startup dream wasn't to have the stock they own be cancelled.


But what's more quintessentially SF than realizing your options are worthless?


Exactly! Man… some people don’t get sarcasm


Postgres is a great DB, but it's the wrong tool for a write-heavy, high-concurrency, real-time system with pub-sub needs.

You should split your system into specialized components:

- Kafka for event transport (you're likely already doing this).
- An LSM-tree DB for write-heavy structured data (e.g. Cassandra).
- Keep Postgres for the queries that benefit from relational features in certain parts of your architecture.
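The reason an LSM-tree store absorbs heavy writes is that the hot path is an in-memory append; data is flushed to immutable sorted segments sequentially instead of updating structures in place. A toy illustration of that write path (this is a sketch of the general idea, not Cassandra's actual implementation):

```python
class TinyLSM:
    """Toy LSM-tree write path: writes land in an in-memory memtable,
    which is flushed as an immutable sorted segment when full."""

    def __init__(self, memtable_limit=4):
        self.memtable = {}
        self.segments = []  # flushed, immutable sorted runs
        self.memtable_limit = memtable_limit

    def put(self, key, value):
        self.memtable[key] = value  # O(1) in-memory write
        if len(self.memtable) >= self.memtable_limit:
            self._flush()

    def _flush(self):
        # Sequential append of a sorted segment (cheap on disk).
        self.segments.append(sorted(self.memtable.items()))
        self.memtable = {}

    def get(self, key):
        if key in self.memtable:
            return self.memtable[key]
        # Newest segment first; a real store uses bloom filters
        # and binary search here instead of a linear scan.
        for seg in reversed(self.segments):
            for k, v in seg:
                if k == key:
                    return v
        return None

db = TinyLSM(memtable_limit=2)
for i in range(5):
    db.put(f"k{i}", i)
```

Reads pay for this (they may check several segments), which is exactly the trade-off you want in a write-heavy system.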


IMO they don't have a high-concurrency DB write workload; they just think they do.

Recordings can and should be streamed to an object store. Parallel processes can do transcription on those objects; bonus: when they inevitably have a bug in transcription, retranscribing meetings is easy.

The transcription output can be a single file, also stored in the object store, announced with a single completion notification; or, if they really insist on "near real-time", a message on a queue every N seconds. It's much easier to scale your queue than your DB, e.g. with Kafka partitions.

A handful of consumers can read those messages and insert into the DB. The benefit is a fixed and controllable write load into the database, and your client workload never overloads the DB because you're buffering it with the much more distributed object store (which is far simpler than running another database engine).


Very good article! Succinct, and very informative.


Hi! Have been following LiveKit for a while.

You mention global presence but the job application forms require legal authorization to work in the US. Can you please clarify if you're remote global, or remote US?

Ty


What exactly is the benefit of running in the browser? I get that accessibility to a broader audience is a benefit, but from what I understand these agents run in the terminal because it gives them a more powerful runtime. You don't have the browser's memory limits, you can install native tools such as curl or Docker, the model runs directly on your machine, and the CLI offers more security.


That's a great point. Newrev actually gives you the best of both worlds:

The AI agents still run in terminal-based environments on a backend server (local or cloud), so they have full access to resources and native tools. The browser UI simply provides a much more intuitive, visual interface for interaction, file exploration, and live previews, making these powerful agents accessible and easy to use without deep terminal expertise.


So running this on the browser implies you need to know how to clone a git repo, run a chmod command, etc. How do you plan to productize this so that it actually gets to the hands of non technical people?


For the short term, newrev is designed for developers who want to try out these powerful AI CLI agents but prefer not to live in the terminal. This means they can start using them right away.

Our mid-term goal is to offer a fully managed cloud service, primarily still for developers. However, we're seeing more agents run highly autonomously, which means newrev could eventually empower non-developers too. Think of how many "no-coders" are already using tools like Cursor or Windsurf for their projects today.


Hey I wasn't aware of that! I'll post somewhere else next time.

