Hacker News | fmajid's comments

I did an MS in Telecoms Engineering. Our telephony teacher, Claude Rigault, drummed into us that when people can't make emergency calls, people die; hence the importance of reliability.

Level 3, the biggest Internet backbone transit provider, was spun off from the Peter Kiewit construction firm, and had as its original asset a coal mine in Wyoming.

The US telco Sprint was spun off from the Southern Pacific railroad: train lines are convenient places to lay fibre trunks.

SPRINT == Southern Pacific Railroad Internal NeTwork.

It would have been interesting to see what would have happened if it hadn't spun off: suddenly you'd have a huge Fortune 500 telecom whose side business was running a railroad.

Would that have kept them afloat as an operation? I've spent the last few decades here watching as their red-and-grey engines disappeared in a sea of yellow.


Didn't the telco part come from a merger with one of the Baby Bells that emerged when Bell was broken up?

No, they were an independent telco before the 1984 breakup. https://en.wikipedia.org/wiki/Regional_Bell_Operating_Compan...

Presumably they are worried about a reprise of what happened to Meng Wanzhou, daughter of the founder of Huawei, who was put under house arrest in Canada, severely damaging Canadian diplomatic and commercial relations with China, only for the US, which initiated the proceedings, to say "never mind" three years later.

I'm pretty sure the Canadians are not going to take a bullet for the US again. And the Chinese would not take the risk for their AI researchers, who, like those at DeepSeek, have proven they can be world-class.


Because Myhrvold is the single most notorious patent troll, bar none.

Never sous vide your heroes.

NOLF2 was hilarious. The samurai sword fight in a trailer park while a hurricane blows through it! The fight against French mimes toting machine guns ("Ah, ze pain is unbearable")!

Yes, as a user I definitely shudder at Electron-based apps or anything built on the JVM.

I know plenty of non-technical users who still dislike Java, because its usage was quite visible in the past (you had to install the JVM) and lots of desktop apps made with it were horrible.

My father, for example, because it was the tech chosen by his previous bank.

Electron is way sneakier, so people just complain about Teams or something like that.


Nowadays users aren't expected to install the VM. Desktop apps now bundle a VM with jlink and install like any other application. I've even seen the trimmed VM size get down to 30-50 MB.

Unfortunately you still get developers who don't set the correct memory settings, and then it ends up eating 25% of a user's available RAM.


I strongly agree with this sentiment. And I realize we might not be representative of the typical user, but nonetheless, I think these things definitely matter for some subset of users.

Apparently Go 1.24's internal implementation of maps was changed to use Swiss Tables (reimplemented in Go, of course, not the Abseil C++ implementation).

https://www.bytesizego.com/blog/go-124-swiss-table-maps

I'd be interested to see how the new Tiny Pointers algorithm would perform against Swiss Tables, especially when the load factor is high.

https://www.quantamagazine.org/undergraduate-upends-a-40-yea...


Rust also switched its standard maps over to Swiss Tables a while ago. Unfortunately, despite the original implementation being in C++, I don't think it's possible for C++ standard libraries to do the same with the STL maps, due to constraints imposed by the spec. They'd have to create a new STL type for them.

Yes: in 2011, C++ standardized the hash table design you'd have been shown in a 1980s (or maybe 1990s, if you were unlucky in where you went to school) Data Structures class. They called this type std::unordered_map.

This was a Closed Addressing (aka Separate Chaining) table. The idea is simple enough that it has appeared as an Advent of Code problem, which is why this design is much older than the modern, efficient Open Addressing tables. With std::unordered_map you are guaranteed that the address of an item in the table won't change while it's in the table; this works because items aren't actually stored directly in the table. The standard also provides an API to work with the chains of related items and to fiddle with how full the table is.

This API makes complete sense for a Closed Addressing table, but both the API functions and the pointer stability guarantee don't make any sense for Open Addressing.

If you only need the pointer stability, you can pay just for that: Abseil offers a compatible type with that property. But if you need all the chaining APIs (and in principle you might), it doesn't have those; too bad.


Tiny Pointers doesn't currently have practical uses; it's a complexity result.

> The team’s results may not lead to any immediate applications, but that’s not all that matters, Conway said. “It’s important to understand these kinds of data structures better. You don’t know when a result like this will unlock something that lets you do better in practice.”


Yeah, naive application of big-O complexity models can lead you far astray on real-world problems. If you can do F in n*k1 time while I need n*n*k2 time, you may think your algorithm for F is better (O(n) versus O(n squared)), but if k1 is 1 second while k2 is 80 nanoseconds, then my "worse" algorithm is actually faster until n is so huge that the runtimes are about half a year.

It's a largely academic question. HDDs simply don't have the performance required for modern use, and NAND flash SSDs are not archival if left unpowered, so it's SSDs for all online storage and HDDs for backups.

You do have to take precautions: avoid QLC SSDs and SMR hard drives.


HDDs are the backbone of my homelab since storage capacity is my top priority. With performance already constrained by gigabit Ethernet and WiFi, high-speed drives aren’t essential. HDDs can easily stream 8K video with bandwidth to spare while also handling tasks like running Elasticsearch without issue. In my opinion, HDDs are vastly underrated.


I run a hybrid setup which has worked well for me: HDDs in the NAS for high-capacity, decent-speed persistent storage with ZFS for redundancy, low-capacity SSDs in the VM/container hosts for speed and reliability.


Same; I run my containers and VMs off of 1 TB of internal SSD storage within a Proxmox mini PC (with an additional 512 GB internal SSD for booting Proxmox). Booting VMs off of SSD is super quick, so it's the best of both worlds, really.


Yes, those workloads are mostly sequential I/O, which HDDs can still handle. Most of my usage is heavily parallel random I/O, like software development and compiles.

You also have the option of using ZFS with SSDs as an L2ARC read cache and a SLOG device for the ZIL, to get potentially the best of both worlds, as long as your disk access patterns yield a decent cache hit rate.


I do something similar for my primary storage pool appliance, which has 28 TB available. It has 32 GB of system RAM, so I push as much into the ARC cache as possible without the whole thing toppling over; roughly 85%. I only need it as an NFS endpoint. It's pretty zippy for frequently accessed files.


I need big drives for backup. Clearly, there's even more reasons to use HDDs now.


Even in this case, you need to be careful with how you use HDDs. I say this only because you mentioned size. If you're using big drives in a RAID setup, you'll want to consider how long it takes to replace a failed drive. With large drives, it can take quite a long time to rebuild an array after a failure, simply because copying 12+ TB of data, even to a hot spare, takes time.

Yes, there are ways to mitigate this, particularly with ZFS dRAID, but it's still a concern that's specific to large HDDs. For raw storage, HDDs aren't going anywhere anytime soon, but there are still some barriers to efficient use of very large drives.


They used to advertise the fact that their CEO's wife was a nurse, and that she lobbied them to prioritize safety.



It’s not that simple, even in a company that was as notoriously mismanaged as Twitter. When he came in with his trusted lieutenants from SpaceX and Tesla, they had some sort of game plan to data mine HR databases, corporate email and so on. If I were to guess, Palantir must be involved somehow. Probably the same playbook they are applying at DOGE right now.

In comparison, the “RTO to make people quit so we don’t have to pay severance” model of JPM, Amazon and others is a blunt instrument that is as likely to cause top performers to flee.

