Hacker News | ghm2199's comments

Nice, I want to do the same. What process/workflow did you use to migrate all the websites you had given your email address to over to your Proton email? I am guessing it will take several years, but I would like to start the move off Gmail.

I would rank it quite close to the worst tech company to work for. I worked for a while as a contract worker in their Eikon/Elektron product division (their Bloomberg equivalent).

They have "everything": silos, fiefdoms, a systems team that operated as if the DevOps model never happened, a CTO hired around 2015/2016 with no experience in tech whatsoever, and a board that instituted a cost-cutting policy by, among other things, outsourcing every job possible with little to no regard for engineering quality. And this was well before AI was a thing.

This left staff engineers with no bargaining power to hire engineers to write good systems.

If you are considering a job there, I would recommend doing some due diligence.


It seems one needs to rethink email from first principles here. One idea is to use the notion of a "theory of mind" (ToM). E.g., the ToM between me and a sender would be for both of us to know: "I am not as excited as you are about your product launch, so sending it is spam from my PoV."

We could use two negotiating agents: my agent knows what I care about now/today/a week ago and negotiates with an aspiring sender's agent before they send me any messages. E.g., I could set a policy (my ToM) for my agent like "Between 1:00 and 1:15 PM every day I want to read all product announcements I subscribed to for XYZ product type." My agent would then talk to the sender's agent and fetch messages right then.

An alternative policy could be "I have some free time now; create a summary/gist of all announcements for products I might be interested in." The agents would negotiate with the sender to do the same.

Signup emails would be replaced by an agent that "creates" a ToM with the sender, including hard-stop dates. I would tell my agent: "I am interested in this logging service to compare different ones; I will not be interested once ENG-123 is closed," and my agent would tell the sender I am no longer interested when that time comes (i.e., when ENG-123 is closed).

Longer-term policies would just age out any message negotiations once I no longer like/care about those products.
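The policy ideas above (time windows, hard-stop tickets, aging out) can be sketched as a toy negotiation function. This is only an illustration of the shape of the idea; all names (`Policy`, `negotiate`, the topic strings) are hypothetical, not a real protocol.

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class Policy:
    topic: str                          # e.g. "product-announcements/XYZ"
    window: "tuple[time, time] | None"  # deliver only inside this window
    expires_when: "str | None" = None   # e.g. ticket "ENG-123" (hard stop)

def negotiate(my_policies: "list[Policy]", sender_topic: str, now: time,
              closed_tickets: "set[str]") -> str:
    """My agent's side of the negotiation: accept, defer, or decline."""
    for p in my_policies:
        if p.topic != sender_topic:
            continue
        if p.expires_when and p.expires_when in closed_tickets:
            return "decline"   # interest aged out at the hard stop
        if p.window and not (p.window[0] <= now <= p.window[1]):
            return "defer"     # wrong time of day; retry in the window
        return "accept"
    return "decline"           # no policy at all -> never interested
```

A sender's agent calling `negotiate` with no matching policy gets "decline", which is the "never be targeted at all" outcome discussed below.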


Seems to me that a very high percentage of people would set their agent policy to “I’m never interested in spam” and then the spammers would try to circumvent that and we’d be back where we are now except with everyone spending more computation.

If you did that, then you are better off never being targeted by that company's emails/messages at all, and it is in the company's benefit to know that. Right now it's a tedious unsubscribe process that I have to keep doing all the time, and a company that doesn't know just blasts everyone who signed up. It's a ridiculous way to do things.

This would require an inversion of dynamics based on quantification and collective realization of a couple of things:

0. Emails suffer from a "misclassification of intent" problem on a time × attention scale. Imagine time of the day/week/year on one axis and the user's attention to their inbox on the other. An email has to arrive at the right (x, y) point for a user to act on it. But they rarely do.

1. The well-being of a user is proportional to their current state of mind for receiving a message from X, which in turn determines how likely they are to listen to what you have to say.

Both of these suggest a negotiation of messages between two parties, much like when a bartender asks you if you want a refill and you can say yes/no.
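Point 0's time × attention model can be made concrete with a toy scoring function: a message only "lands" when it arrives near the recipient's preferred time and while their attention is high. The function and all its parameters are made up for illustration.

```python
def delivery_fitness(hour: float, attention: float,
                     preferred_hour: float, tolerance: float = 1.0) -> float:
    """Toy score in [0, 1] for how well a message lands at (time, attention).

    hour, preferred_hour: time of day as a float (e.g. 13.25 = 1:15 PM)
    attention: recipient's current attention to their inbox, in [0, 1]
    tolerance: how many hours off the preferred time still counts at all
    """
    time_fit = max(0.0, 1.0 - abs(hour - preferred_hour) / tolerance)
    return time_fit * max(0.0, min(1.0, attention))
```

A score of 1.0 means "right time, full attention"; anything near 0 is the misclassified-intent case, i.e., spam from the recipient's point of view regardless of the sender's.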


for me the problem is as simple as not allowing a third party to classify what i consider spam. i do that on my own. and what i classify as spam has no bearing on anyone else's classification, and vice versa.

most critically however, i would like my email client to track which email address i used to subscribe somewhere, which emails are replies to emails i sent out, and which senders i approve of or are in my contact list (or are addresses i sent email to before). these should override any global classification as spam. subscription emails should be classified as such and not as spam either.


As an indie developer, I often use ChatGPT desktop and Claude desktop for arbitrary tasks, though my main workhorse is a customized coding harness with CC daemons on my NAS. With the apps, I missed having access to my NAS server where my dev environment is. So I wrote a file-system MCP and hosted it behind a reverse proxy on my TrueNAS with Auth0. I wanted access to it from all platforms: ChatGPT mobile and desktop, and the same for CC.

For ChatGPT desktop and Claude desktop, my experience with MCPs connected to my home NAS is pretty poor. The app often times out fetching data (even though the logs show no latency serving the request), and the existing connection often gets invalidated between two chat turns, so ChatGPT just moves on and answers without the file in hand.

I am not using it for writing code; it's mostly read-only access to the FS. Has anyone surmounted these problems for this access pattern and written about how to build MCPs to be reliable?
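Independent of which MCP framework serves it, a read-only FS tool exposed through a reverse proxy at least needs path sandboxing so a client-supplied path can't escape the exported root. A minimal sketch, assuming nothing about the actual setup described above (the root path and function names are hypothetical):

```python
from pathlib import Path

def safe_read(root: Path, relative: str, max_bytes: int = 64_000) -> bytes:
    """Resolve a client-supplied relative path inside `root` and refuse
    escapes (e.g. `../etc/passwd`, or symlinks pointing outside).
    Truncates at max_bytes so one huge file can't stall a chat turn."""
    target = (root / relative).resolve()
    if not target.is_relative_to(root.resolve()):
        raise PermissionError(f"path escapes root: {relative}")
    return target.read_bytes()[:max_bytes]
```

Capping the response size is one plausible mitigation for the timeout symptom: the app-side fetch is likelier to complete within whatever deadline the desktop client imposes.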


One thing I have always wanted is to cancel a remotely executing AI agent that I kicked off, while it streams its part-by-part response (a part could be words, a list of URLs, or whatever you want the FE to display). A good example is a web-researcher agent that searches and fetches web pages remotely and sends them back to the local sub-agent to summarize the results. This is something claude-code in the terminal does not quite provide. Would this be trivial to build in Instant?

Here is how I built it in a web UI: I sent SSE events from server to client streaming web-search progress, and the client could use the `id` from an SSE event to hit an `x` box on the "parent" widget via a simple REST call. The `id` could belong to the parent web search or to specific URLs being fetched. Whatever was yielding the SSE lines would check the DB and cancel the send (assuming it hadn't already sent all the words).


If I understood you correctly:

You kick off an agent. It reports work back to the user. The user can click cancel, and the agent gets terminated.

You are right, this kind of UX comes very naturally with Instant. If an agent writes data to Instant, it will show up right away to the user. If the user clicks an `X` button, it will propagate to the agent.

The basic sync engine would handle a lot of the complexity here. If the data streaming gets more complicated, you may want to use Instant streams. For example, if you want to convey updates character by character, you can use Instant streams as an included service, which does this extremely efficiently.

More about the sync engine: https://www.instantdb.com/product/sync

More about streams: https://www.instantdb.com/docs/streams


For people like me — who are kind of familiar with how React/Jetpack Compose/Flutter-like frameworks work — I recall using React widgets/composables that seamlessly update once they register to receive updates to the underlying data model. The persistence boundary in these apps was the app/device where it was running; the data model was local. You still had to worry about pushing data updates to servers and back out to other devices/apps.

Instant crosses that persistence boundary: your app can propagate updates to anyone who has subscribed to the abstract datastore, which lives on a server somewhere, so you as the engineer don't have to write that code. Right?

But how is this different/better than things like, i wanna say, vercel/nextjs or the like that host similar infra?


I would say NextJS focuses a lot more on server-rendering. If you use the app router, the default path is to render as much as you can on the server.

This can work great, but you lose some benefits: your pages won't work offline, they won't be real-time, and if you make changes, you'll have to wait for the server to acknowledge them.

Instant pushes more of the work to the frontend. You make queries directly in your frontend, and Instant handles all the offline caching, the real-time updates, and the optimistic updates.

You can have the best of both worlds though. We have an experimental SSR package, which to our knowledge is the first to combine _both_ SSR and real-time. The way it works:

1. Next SSRs the page

2. But when it loads, Instant picks it up and makes every query reactive.

More details here: https://www.instantdb.com/docs/next-ssr


I read the TCP patch they submitted for OpenBSD. Maybe I don't understand it well enough, but optimizing the use of a fuzzer to discover vulnerabilities — while releasing such a model is a threat, for sure — sounds like something reducible/generalizable to maze-solving abilities like in ARC, except here the problem's boundaries are well defined.

It's quite hard to believe it took this much inference power ($20K, I believe) to find the TCP and H.264 class of exploits. I feel like the training data / harness-based traces for security might be the innovation here, not the model.


The $20K was the total across all the files scanned, not just the one with the bug.

Here is a thought: this is a fixed 3D environment and you lack training data, or at least an algorithm to train. Why not use RL to learn good trajectories? E.g., build a 3D environment of your home/room in a game engine, generate image data and trajectories to pretrain/train on, then for each run hand-label only the promising trajectories, i.e., where the robot actually cleaned better. That might make it a good RL exercise. You could also place physical flags in the room such that the robot gets rewarded when the camera gets close enough, to automate these trajectory rewards.

I would begin in one room to practice this.
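The proximity-flag reward above can be sketched in a few lines: each placed flag pays out once per episode when the robot's pose passes within some radius of it. A toy 2D version (coordinates, radius, and names are all made up for illustration):

```python
import math

def trajectory_return(traj: "list[tuple[float, float]]",
                      flags: "list[tuple[float, float]]",
                      radius: float = 0.5) -> float:
    """Automated episode return for the flag idea: +1 for each flag the
    robot passes within `radius` of at any point, counting each flag once."""
    hit: "set[int]" = set()
    for x, y in traj:
        for i, (fx, fy) in enumerate(flags):
            if math.hypot(x - fx, y - fy) <= radius:
                hit.add(i)
    return float(len(hit))
```

This makes the reward cheap to compute per run, so hand-labeling can be reserved for the harder judgment of whether the robot actually cleaned better.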


Wow okay there is a lot here, just so that I understand this correctly:

1. Make a replica of my home/room in a game engine or a simulator

2. Generate trajectories with RL where the reward is hand-specified by me

3. Automate trajectory rewards using some proximity flags

Some stupid questions:

1. How do I build a replica of my home? Is there an SfM algorithm I could use to do this just from camera images?

2. Would this still work even if things/furniture move around the house?

3. This data-collection strategy will have a distribution shift compared to real data, so it might struggle with different lighting conditions and such?


To get an idea of the thing, I would find something similar to https://docs.nvidia.com/learning/physical-ai/getting-started...

The caveat is that you may not be able to use their environments, and you may or may not have their kind of robots to train your Roomba. But at least you could get an idea of how RL training is done for robots like yours.


So: my home router, all the IoT devices attached to it from printers to projectors, not to mention custom stacks like Lutron, BLE-based locks, car key fobs.

All of these could technically have zero-day vulnerabilities, and the people/companies who made them don't have the resources to buy $20,000 of tokens to go debug them... Maybe they don't care, but if they do, what if they can't afford such models or can't get access in time?

I would like to know: how can someone like me defend against them?


> I would like to know how can someone like me defend against them?

You could take the Galactica approach - de-network everything you can.


That's the neat part, you can't.

> don't have the resources to buy 20000$ of tokens to go debug them

$20,000 - how many developers do these hardware companies have that they need to spend that much? Claude Team Premium is US$125/mo for a seat and even cheaper if you buy annually...


$20,000 is what the Anthropic report says they spent on scanning OpenBSD [1].

[1] "Across a thousand runs through our scaffold, the total cost was under $20,000 and found several dozen more findings.", https://red.anthropic.com/2026/mythos-preview/


That's for OpenBSD, typical IoT firmware is tiny by comparison: a few init.rc scripts, some cron jobs, a php-cgi web UI, and glue code with hardcoded API keys. The total lines of code are orders of magnitude smaller, so the audit surface and expected cost are too.

Running a "too advanced" harness against a Claude Code subscription gets your organization banned, even if it's a shell wrapper over `claude -p`. You probably can't reproduce this research with a fixed-price subscription.

If you want to get a feel for what brutalist architecture is like up close, go to the Barbican in London if you can.

It's quite surreal: very much in-your-face concrete exposure. Yet to walk through and experience it with your own eyes is a study in contrasts: a giant, comparatively modern greenhouse has a glass roof open to the sky, and yet many floors have no light or windows at all. In the outdoor spaces, like the fountain/canal running through the complex, the concrete sits in the background and lets you focus on everything else: the water, the swans, and the people around.

Juxtapose that with the low-hanging exposed concrete roofs and walls in the closed passages, which can make one feel constrained, claustrophobic, and yearning for light.


The Barbican is not a typical brutalist construction. The term brutalist refers to béton brut, which means raw concrete. I.e. you can see the shape of the wooden slats used as a cast. The concrete in the Barbican was finished by drilling to create a dappled pattern, which obliterated the shape of the slats.

There are also lots of postmodern elements. For example, the columns of the girls' school have pyramids at the top to resemble pencils.

The south bank has more buildings that are a purer expression of brutalism.


If you find yourself in West London, also check out Brunel University, all of the older buildings are pure brutalism

The best thing about the South Bank is that you can look over the river and admire the North Bank.

Sure, but as someone who likes the aesthetic, the Barbican hits all the right spots for me.
