
Needs a [2016]

The site was throwing a 500 Internal Server Error when I visited. Here's an Internet Archive snapshot: https://web.archive.org/web/20170704191048/https://attila.ki...


HN posts have limits on title length. In such cases, it’s not unusual to post the original title as a comment.

HN seems to remove certain adjectives (and, I guess, adverbs) from titles, presumably to minimize sensationalizing. I was surprised by it recently when a title I submitted had "Massive" auto-removed even though the length limit was respected. It made the title more humdrum, but I can understand the rationale.

For more on resistance dimmers in theater, check out this video from Jonathan Bastow: https://youtu.be/lTHNYWw6yt0?si=ygt0HpSd_aPcm3tq


You can still reach out to them, and see what they say.


That's a bit of an apples-and-oranges comparison. Cloud services normally have different design goals.

HPC workloads are often focused on highly parallel jobs, with high-speed and (especially) low-latency communication between nodes. Fun fact: in the NVIDIA DGX SuperPOD Reference Architecture[1], each DGX H100 system (which has eight H100 GPUs) has four InfiniBand NDR OSFP ports dedicated to GPU traffic. IIRC, each OSFP port operates at 200 Gbps (two lanes of 100 Gbps), allowing each GPU to effectively have its own IB port for GPU-to-GPU traffic.

(NVIDIA's not the only group doing that, BTW: Stanford's Sherlock 4.0 HPC environment[2], in their GPU-heavy servers, also uses multiple NDR ports per system.)

Solutions like that are not something you'll find at your typical cloud provider.

Early cloud-based HPC-focused solutions centered on workload locality, not just within a particular zone but within a particular part of a zone, with things like AWS Placement Groups[3]. More-modern Ethernet-based providers will give you guides like [4], telling you how to supplement placement groups with directly-accessible high-bandwidth network adapters, and in particular support for RDMA or RoCE (RDMA over Converged Ethernet), which aims to provide IB-like functionality over Ethernet.

IMO, the closest analog you'll find in the cloud to environments like Frontier is going to be the IB-based cloud environments from Azure HPC ('general' cloud) [5] and specialty-cloud folks like Lambda Labs [6].

[1]: https://docs.nvidia.com/dgx-superpod/reference-architecture-...

[2]: https://news.sherlock.stanford.edu/publications/sherlock-4-0...

[3]: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placemen...

[4]: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/efa.html

[5]: https://azure.microsoft.com/en-us/solutions/high-performance...

[6]: https://lambdalabs.com/nvidia/dgx-systems


This is one of two pieces of SQLite DB metadata that I really like. I've used `user_version` in the exact same way: I increment it by one every time I change the schema of the database. It's a four-byte signed integer (stored in the DB header in big-endian format), so it supports a large number of possible schema changes.
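
For illustration, here's a minimal sketch of that pattern (the table names and migration statements are made up for the example):

  import sqlite3

  # Hypothetical migration steps, one per schema version:
  # entry 0 takes the DB from user_version 0 to 1, and so on.
  MIGRATIONS = [
      "CREATE TABLE widgets (id INTEGER PRIMARY KEY, name TEXT)",
      "ALTER TABLE widgets ADD COLUMN created_at TEXT",
  ]

  def upgrade(conn: sqlite3.Connection) -> None:
      (version,) = conn.execute("PRAGMA user_version").fetchone()
      for statement in MIGRATIONS[version:]:
          conn.execute(statement)
          version += 1
          # PRAGMA values can't be bound as parameters, so format the int directly.
          conn.execute(f"PRAGMA user_version = {version:d}")
      conn.commit()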

Folks should also be aware of the `application_id`, another four-byte signed integer accessed via pragma. This is meant to be set once, at the time you create the database. It's a way to ensure that the SQLite DB you are accessing is something that your code created.

For example, I use a line of Python like this to convert some human-readable characters (like 'KARL') into an application ID:

  >>> import struct
  >>> struct.unpack('=i', struct.pack('cccc', b'K', b'A', b'R', b'L'))[0]
  1280459083
So, I'd run `PRAGMA application_id = 1280459083;` when I create the DB. And on first access afterward, I'd run `PRAGMA application_id;`, look at the first (and only) row of output, and check for the expected number.
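
A minimal sketch of that first-access check (the exception type and message here are just placeholders):

  import sqlite3

  EXPECTED_APP_ID = 1280459083  # 'KARL', as computed above

  def open_db(path: str) -> sqlite3.Connection:
      conn = sqlite3.connect(path)
      (app_id,) = conn.execute("PRAGMA application_id").fetchone()
      if app_id != EXPECTED_APP_ID:
          conn.close()
          raise ValueError(f"{path} is not one of our databases")
      return conn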

Finally, one thing to note: because they're part of the database file's header, they are not affected by transactions. So, if you're using `user_version` to track schema changes, a schema upgrade becomes a "stop the world" sort of activity, since you need to verify the change actually took effect.



There are three passing points on the line, where trains can pass each other without interrupting opposite-direction traffic. From North to South, they are…

• Between 22nd Street & South San Francisco, with Bayshore accessible via the 'local' tracks only.

• Between Redwood City & Menlo Park.

• Between Sunnyvale & Santa Clara, with Lawrence accessible via the 'local' tracks only.

Today, the northernmost and southernmost passing points are used by the Limited or Baby Bullet services to overtake the Local services. The service being passed has the middle station (Bayshore or Lawrence) as a scheduled stop, and remains in the station until it observes either a clear signal or the faster service passing.

The passing point between Redwood City & Menlo Park is mostly used when a service (any service) is running slow, to allow on-time services to overtake. This is especially important when the slow-running service is a Local.

Finally, there are crossovers placed frequently along the line, often every 2-3 stations. These are used most often when maintenance is being performed on one section of track, or if a train is disabled. They are not used during normal operations.


FWIW, they don't sound worse to me, just different.

