Russia had roughly ten years to prepare. The first talks about building a national payment system started around 2010-2012, and they met notable opposition: "why should we care? MC/Visa are good enough, why spend money on national infrastructure?"
But yeah, it is amazing that in 2022 nobody here even noticed that MC/Visa left. Existing MC/Visa cards haven't stopped working and still work to this day (banks made a rule that cards expiring after 2022 keep working for several more years, so that everybody has time to switch to MIR).
Thanks, completely agree. UX is probably the hardest part here. Prompting should not be a prerequisite for getting value. We have been thinking about making the system more proactive, for example surfacing relevant notes ahead of meetings or highlighting changes that need attention. Would love to hear how you think this should ideally work.
What started as a personal experiment turned into TuneDocs - 80+ episodes of podcast-style audio generated directly from official documentation. Claude Code, MCP, Cursor, Copilot, LangGraph, AI SDK, and more. Each episode covers a specific slice of the docs in ~15 minutes. No fluff, no opinions, just the docs spoken aloud. Free topics available.
I've lived all my life in Finland, even though all through my early adulthood I was planning to move somewhere much warmer. But later (especially now, with children for whom the snow is so exciting) I've come to like the four seasons and the balance they give.
That article was a strange read from my perspective, because here the infrastructure is built for winters as well. I don't remember school ever being canceled due to winter conditions, and traffic is only a mess right after a snowstorm.
I’m currently doing my master’s in cyber engineering, and this actually started as a simple Nim project for one of my courses. The original goal was honestly just: “learn Nim, build something low-level, understand packet structures better.”
That turned into writing a packet manipulation engine in Nim. It handles TCP, UDP, ICMP, ARP, DNS, DHCP, IPv4/IPv6, PCAP parsing, checksums, fragmentation, etc.
Then I got curious and wrapped it with FastAPI so everything could be triggered over HTTP instead of just local scripts.
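A rough sketch of what that wrapper looks like in spirit (the endpoint name and the `packet-engine` CLI invocation here are placeholders, not the project's real API):

```python
# Hypothetical sketch of the HTTP wrapper: a FastAPI endpoint accepts a PCAP
# upload and shells out to the Nim engine (binary name and flags are made up)
# to return a protocol breakdown as JSON.
import json
import subprocess
import tempfile

from fastapi import FastAPI, File, UploadFile

app = FastAPI()

@app.post("/analyze")
async def analyze_pcap(capture: UploadFile = File(...)):
    # Persist the upload so the engine can read it from disk.
    with tempfile.NamedTemporaryFile(suffix=".pcap", delete=False) as tmp:
        tmp.write(await capture.read())
        path = tmp.name
    # "packet-engine summarize --json" is a placeholder for however the
    # Nim CLI is actually invoked.
    result = subprocess.run(
        ["packet-engine", "summarize", "--json", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)
```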
At this point it’s kind of snowballed into something way bigger than the original coursework idea.
I’m not trying to sell anything. It’s open source and free. I’m honestly just trying to figure out if this has real-world value outside of “cool networking project.”
From a DevOps or CI/CD perspective, would something like this actually make sense?
For example (rough sketch after the list):
Upload a PCAP from integration tests
Get flow summaries or protocol breakdown
Detect unexpected outbound traffic
Potentially fail a build if something weird shows up
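Something like this minimal CI step is what I have in mind, assuming the API exposes a hypothetical /analyze endpoint and returns per-flow data (the endpoint path, response shape, and allowlist are all assumptions):

```python
# Hypothetical CI step: upload a PCAP captured during integration tests to the
# analysis API and fail the build if any flow reaches a host outside an
# allowlist. Endpoint path, JSON shape, and ALLOWED_HOSTS are assumptions.
import sys
import requests

API = "http://localhost:8000/analyze"
ALLOWED_HOSTS = {"10.0.0.5", "10.0.0.6"}  # hosts the tests are expected to reach

def main(pcap_path: str) -> int:
    with open(pcap_path, "rb") as f:
        resp = requests.post(API, files={"capture": f})
    resp.raise_for_status()
    flows = resp.json().get("flows", [])  # assumed response shape
    unexpected = [fl for fl in flows if fl.get("dst_ip") not in ALLOWED_HOSTS]
    if unexpected:
        print(f"Unexpected outbound traffic: {unexpected}")
        return 1  # non-zero exit fails the CI job
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1]))
```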
Is that realistic, or am I overthinking it?
Right now I’m at the stage where I’m deciding whether to push it further into something practical or just treat it as a learning-heavy project that got out of hand in a good way.
This is a good article and should be more visible. A lot of people experience this kind of bill shock with cloud services, and I feel like it's a bit of a trap for many nontechnical or semi-technical people because the onboarding is so smooth and deploying is easy.
For the last three or four months, what I've been doing is this: any time I have Claude write a comment on an issue, it also adds the session ID, the file path, and the VM it ran on. That way, whenever something comes up later, we just search through the issues and can retrace the session that produced the work; it's all traceable.

In general, I just work through Gitea issues and sometimes beads. I couldn't stand having all these MD files in my repo because I was drowning in documentation, so having it in issues has been working really nicely, and agents know how to work with issues. I did have it write a Gitea utility, and they are pretty happy using/abusing it. Any time I see them call it in a way that generates errors, I just have them improve the utility, and by this point it pretty much always works. It's been really nice.
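As a rough illustration of the kind of utility I mean (not the actual script; the env vars and comment fields here are just an example), it boils down to posting a comment through Gitea's issue API:

```python
# Post a comment on a Gitea issue recording the agent session ID, file path,
# and VM, so the work can be retraced later. Uses Gitea's standard
# /api/v1/repos/{owner}/{repo}/issues/{index}/comments endpoint.
import os
import requests

GITEA_URL = os.environ["GITEA_URL"]      # e.g. https://gitea.example.com
GITEA_TOKEN = os.environ["GITEA_TOKEN"]

def annotate_issue(owner: str, repo: str, issue: int,
                   session_id: str, file_path: str, vm: str) -> None:
    body = (
        f"session: {session_id}\n"
        f"file: {file_path}\n"
        f"vm: {vm}"
    )
    resp = requests.post(
        f"{GITEA_URL}/api/v1/repos/{owner}/{repo}/issues/{issue}/comments",
        headers={"Authorization": f"token {GITEA_TOKEN}"},
        json={"body": body},
    )
    resp.raise_for_status()
```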
> Tier 1 transit providers doing port filtering is EXTREMELY alarming.
I was admining a small ISP when Blaster and its variants hit. Port filtering 139 and the rest was the easiest way to deal with it, and almost overnight most ISPs blocked it, and we were better for it. There was a time when, if you put a fresh XP install on the Internet, you'd get 5-10 minutes before it got restarted.
I guess if you're really an admin who needs telnet, you can move it to another port and go around it? Surely you'd tunnel to that "old box that needs to stay alive" if that's the use case? Is there anyone seriously running default telnet on port 23 who is really affected by this filtering?
Yes, development was being done in SVN, but it was a huge pain. Continuous communication with the server was required (history lookups took ages, changing a file required a checkout, etc.), and that was just horribly inefficient for distributed teams, even within Europe, and much more so cross-continent.
A DVCS was definitely required. And I would say git won out due to Linus inventing and then backing it, not because of a platform that would serve it.
But this is about SSL certificates. Google may account for, say, half of web traffic, but there are billions of other servers that account for the other half, and Google has absolutely no stake in what web server or ACME client they are running, or much else. Its concern is the client experience and how the browser decides which certificates to trust.
Google already has its own CA, which it uses for its own systems as well as to issue certificates to GCP customers. As far as I know, they don't interact with Let's Encrypt or any other external CA for their own services.
Hard disagree. You read nothing, you assumed everything, and wrote cookie-cutter slop that adds nothing of substance.
There is no shit coin and no rug pull. You can’t own any coin. But this perfectly illustrates why HN, full of smart people, is also full of people who actively fight against solutions to centralization.
These problems existed before crypto. You are defending a dystopian nightmare system that is progressively getting worse. If people like me don't build a decentralized alternative, you'll just keep bitching about problems while having literally zero solutions (except "just give the government MORE power and they'll save us this time through regulations"). Newsflash: governments are the biggest consumers of this centralized tech (like Palantir) and pushers of this dystopian future where they can do anything secretly while you can't do anything without them knowing.
“Hacker ethos” is now filled with a lot of shills for corporate greed.
You don't need to build anything. Just tell the agent to write tickets into .md files in a folder and move them to a closed folder as you go along. I've been using Claude Code with the Max plan nonstop essentially every day since last July, and since then I've come to realize that the newer people are, the more things they think they need to add to CC to get it to work well.
Eventually you'll find a way that works for you that needs none of it, and if any piece of all the extras IS ever helpful, Anthropic adds it themselves within a month or two.
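To be concrete, the whole "system" is just a folder convention, something like this (folder names are only an example, nothing Claude Code requires):

```python
# Tickets live as .md files in tickets/open/ and get moved to tickets/closed/
# when done. The agent writes and moves them; this is the entire mechanism.
from pathlib import Path

OPEN = Path("tickets/open")
CLOSED = Path("tickets/closed")

def close_ticket(name: str) -> None:
    CLOSED.mkdir(parents=True, exist_ok=True)
    (OPEN / name).rename(CLOSED / name)

close_ticket("042-fix-login-redirect.md")  # hypothetical ticket file
```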
> won't it make just doing a "git checkout" start to be really heavy?
not really? doesn't git checkout only retrieve the current branch? the checkpoint data is in another branch.
we can presume that the tooling for this doesn't expect you to manage the checkpoint branch directly. each checkpoint object is associated with a commit sha (in your working branch, master or whatever). the tooling presumably would just make sure you have the checkpoints for the nearby (in history) commit sha's, and system prompt for the agent will help it do its thing.
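a sketch of the mechanism i'm picturing (using git notes as the checkpoint ref here is my guess, not necessarily how the actual tooling stores it):

```python
# Attach checkpoint data to a commit SHA in a separate ref (git notes under
# refs/notes/checkpoints), so a normal checkout of the working branch never
# has to touch it. Speculative sketch, not the product's real storage scheme.
import subprocess

def save_checkpoint(commit_sha: str, context: str) -> None:
    # Store the agent context as a note keyed by the commit it belongs to.
    subprocess.run(
        ["git", "notes", "--ref=checkpoints", "add", "-f", "-m", context, commit_sha],
        check=True,
    )

def load_checkpoint(commit_sha: str) -> str:
    # Retrieve the context for a given commit, e.g. when resuming work there.
    out = subprocess.run(
        ["git", "notes", "--ref=checkpoints", "show", commit_sha],
        capture_output=True, text=True, check=True,
    )
    return out.stdout
```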
i mean all that is trivial. not worth a $60MM investment.
i suspect what is really going on is that the context makes it back to the origin server. this allows _cloud_ agents, independent of your local claude session, to pick up the context. or for developer-to-developer handoff with full context. or to pick up context from a feature branch (as you switch across branches rapidly) later, easily. yes? you'll have to excuse me, i'm not well informed on how LLM coding agents actually work in that way (where the context is kept, how easy it is to pick it back up again). this is just a bit of opining based on why this is worth 20% of $300MM.
if i look at https://chunkhound.github.io it makes me think entire is a version of that. they'll add an MCP server and you won't have to think about it.
finally, because there is a commit sha association for each checkpoint, i would be worried that history rewrites or force pushes MUST use the tooling otherwise you'd end up screwing up the historical context badly.
Snow and ice build up on overhead power lines, which can cause issues. States with tornadoes or hurricanes are more likely to bury lines underground, which avoids this. My location in SE Michigan is all overhead and, while we rarely lose power, I see tons of issues every ice storm that some unlucky few suffer through.
I live very near a hospital and suspect I branch off their higher-SLA lines so that may be a factor.
Warmer places that don't experience cold much absolutely suffer during a cold spell. Texas (with its independent grid) has been absolutely wrecked every time it gets too cold.
Honestly, I keep an eye on them because we are a core market for their product. We run a bunch of K8s clusters and Postgres databases. We currently use cloud, but if someone bought us up and demanded we stop all cloud, we could move to something like this in a fairly short time, with limited changes. I wouldn't have to deal with Dell, plus VMware, plus a SAN company, plus a networking company, and hope they all work together nicely.
I could get a spot in a colo, drop in two fibers assigned to a few subnets, and be up and replicating our databases in a day or two. We have no need for GPUs right now, but we do often need to resize DBs to add CPU, RAM, etc. Honestly, it would pay off in a year to 18 months, depending on the rumored prices and colo costs.