Hacker News new | past | comments | ask | show | jobs | submit | ardel95's comments login

1604. One could say we are overdue. I’m not sure about dust or other obstacles blocking it, but based on brightness alone a supernova in our galaxy should be visible with the naked eye.
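Rough back-of-the-envelope with the distance modulus, assuming a typical Type Ia absolute magnitude of about -19.3 and ~8 kpc to the galactic center (both numbers are my own illustrative picks, and dust extinction is ignored):

```python
import math

def apparent_magnitude(abs_mag, distance_pc):
    """Distance modulus: m = M + 5 * log10(d / 10 pc)."""
    return abs_mag + 5 * math.log10(distance_pc / 10)

# Assumed: Type Ia absolute magnitude ~ -19.3, distance ~ 8000 pc.
m = apparent_magnitude(-19.3, 8000)
print(f"apparent magnitude ~ {m:.1f}")  # far brighter than the ~ +6 naked-eye limit
```

Even brighter than Venus by this estimate, so extinction really is the only thing that could hide it.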


1604? Sort of; SN 1987A was visible to the naked eye at 3rd magnitude. It was in the Large Magellanic Cloud, which is almost in our galaxy but not quite. 170k light years. https://en.wikipedia.org/wiki/SN_1987A


It would be. Which is why any pair of orbiting bodies will eventually collide.

It’s just that for black holes this effect is insignificant (a merger would take much longer than the age of the Universe) until they get close to each other, much closer than 1 parsec.
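For a sense of scale, here's the Peters (1964) inspiral timescale for a circular binary; the masses and separation below are my own illustrative picks, not anything from the thread:

```python
# Peters (1964) gravitational-wave inspiral time for a circular binary:
#   t = 5 c^5 a^4 / (256 G^3 m1 m2 (m1 + m2))
G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s
M_SUN = 1.989e30    # kg
PARSEC = 3.086e16   # m
YEAR = 3.156e7      # s

def merger_time_years(m1_kg, m2_kg, a_m):
    return 5 * c**5 * a_m**4 / (256 * G**3 * m1_kg * m2_kg * (m1_kg + m2_kg)) / YEAR

# Two 10^8 solar-mass black holes separated by 1 parsec (illustrative numbers):
t = merger_time_years(1e8 * M_SUN, 1e8 * M_SUN, PARSEC)
print(f"{t:.1e} years")  # ~10^14 years, vastly longer than the ~1.4e10-year age of the Universe
```

Note the a^4 dependence: shrink the separation by 10x and the merger time drops by 10,000x, which is why the "final parsec" is where all the action is.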


The potential to detect Supermassive Black Hole mergers is one of the reasons I'm really excited about the LISA project [1], and hope it actually gets funded and doesn't delay too much.

[1] https://en.wikipedia.org/wiki/Laser_Interferometer_Space_Ant...


In quantum mechanics, bosons are (often massless) force carrying particles like photons or gluons. Fermions are the massive matter particles, such as electrons or quarks.

So, while I’ve never heard this saying before, I assume its meaning is that massless particles like photons are best for carrying information around (rather than the electrons we use in circuits today), while the electrons are best for carrying state, like in a switch.

Note that in networking we have already made that transition by using fiber optics, rather than electric wire, to transfer information over longer distances.


And radio! We use lots of radio as well.

I wonder what the longest information carrying electric wire is? They used to cross oceans, but not anymore. There are loads of DSL twisted pairs and co-ax cables in the "last mile" that maybe go 5km max. In rural areas maybe up to 50km with repeaters?

Is there something in between? Some old buried copper trunk cable between two university campuses or something like that?


Depending on your definition of wire, probably the earth. The first telegraph systems used the earth (or sea water, I can't remember) as the return path.


So that xkcd “daylight savings time” meme was an actual movie?


The article didn’t mention this, but don’t u128s get mapped to SSE2 registers on most modern x86_64 processors, and not regular 64-bit ones?


If you have the ability to spin up a new machine when the old one fails, and deploy your app onto it in one minute, it’s not a big leap to also run your app on two machines and avoid that downtime altogether.


Running two instances of a stateful application in parallel forces you to confront nasty, hard problems like the CAP theorem. If your requirements allow, it's much easier to have an active-standby architecture than active-active.


Totally. But most applications are not stateful.


Most applications as a whole are absolutely stateful. Individual components of them might not be (app servers are stateless with the DB/Redis containing all state), but the whole app from an external client's perspective is stateful.

If we're talking about reliability/outage recovery, we're considering the application as one single unit visible from the external client's perspective - so everything including the DB (or equivalent stateful component) must be redundant.

Sadly this is also where a lot of cloud-native tooling and best practices fall short. There are endless ways to run stateless workloads redundantly, but stateful/CAP-bound workloads seem to be ignored/handwaved away.

I've seen my fair share of stacks that do the right thing for the easy/stateless parts (redundancy, infinite horizontal scalability), but everyone kinda ignores the elephant in the room: the CAP-bound primary datastore that everything else depends on. It isn't horizontally scalable, its failover/replication behavior is misunderstood and untested, and teams only get away with it because modern HW is reliable enough that outage/failover windows are rare, so the misunderstood/unexpected/undefined behavior during those windows flies under the radar.


That’s a pretty pedantic interpretation of the word application. In the context of the software owned by most teams, which they may decide to run on one host vs. multiple, most applications are absolutely stateless. Most applications outsource state to another system, like a relational database, a managed NoSQL store, or an object store.

And so no, most teams don’t need to worry about the hard problems you bring up.


Is it really an application if it’s not stateful? Maybe you’re managing the state client-side, which makes it easier, but I wouldn’t call a plain website an application. Or am I missing something?


At the smallest level, even every byte of an in-flight HTTP request is still state. State, and for that matter "uptime" really depend on what the application/service ultimately does and what the agreement/SLA with the end-customer is.

The correct high-availability solution should take business requirements into account and there is no silver bullet. Running everything on a $5 VPS is no silver bullet, but neither is your typical "cloud-native" "best practice" stack that everyone keeps cargo-culting which often leads to unnecessary cost while leaving many hard questions (such as replicating CAP-bound stateful databases) unanswered.


One of the biggest misses with IP fragmentation was not requiring each fragment to carry the higher protocol header. Or at least do that for UDP.

That decision alone would’ve made fragments so much simpler on network devices and appliances, and much less likely for them to get dropped.


That would be a layering violation. IP routers don't necessarily know about higher protocols.


You could implement it as a generic 'application metadata' field in the IP header. From the perspective of IP, it's just one more length-prefixed field in the header. Routers may interpret it in conjunction with the value of the protocol field; otherwise they are simply required to leave it unchanged in the header (including in all fragments).

For packets that don't want to use it, this is just 1 byte of overhead to set the size to 0.
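A rough sketch of what packing and skipping such a field could look like; the field name, the 1-byte length prefix, and both helpers are hypothetical, not anything in the actual IP spec:

```python
import struct

def pack_app_metadata(metadata: bytes) -> bytes:
    """Hypothetical length-prefixed 'application metadata' field:
    one length byte followed by that many bytes of opaque data."""
    if len(metadata) > 255:
        raise ValueError("limited to 255 bytes by the 1-byte length prefix")
    return struct.pack("!B", len(metadata)) + metadata

def parse_app_metadata(buf: bytes):
    """Return (metadata, rest). A router that doesn't understand the field
    only needs the length byte to skip over (and copy) it."""
    (length,) = struct.unpack_from("!B", buf)
    return buf[1:1 + length], buf[1 + length:]

# Packets that opt out pay just the single zero-length byte:
assert pack_app_metadata(b"") == b"\x00"
```

The key property is that a router needs zero protocol knowledge to preserve the field across fragments; only middleboxes that care would ever look inside it.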


You could design a network protocol that fragments by capturing a variable number of bytes from the next header, and ICMP already does something like that.

(None of this would fix the real problem with fragmentation, which is that you can't efficiently segment out a large frame without having some kind of reliability layer).


If I were revisiting it, I'd probably get rid of the layering and pick a fixed number of flow types with distinct headers and state machines. The layers were a reasonable choice given the understanding of the time, but in hindsight I think you can make a strong case they're cut in the wrong places.


It's just a dumb mistake. All it takes is a "next layer header length" field. It would have been very simple.

You don't even really need that, and as proof: ICMP, which was designed as part of IP, actually does do this. Routers are already required to copy and include the header of the packet that triggered an ICMP error.
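A sketch of what that buys you in practice: since an RFC 792 error message embeds the original IP header plus at least the first 8 bytes of its payload, a receiver can dig the original UDP/TCP ports back out. The helper and the toy packet below are mine:

```python
import struct

def udp_ports_from_icmp_error(icmp_payload: bytes):
    """RFC 792 error messages carry the original IP header plus the first
    8 bytes of its payload -- enough to recover the UDP/TCP port numbers."""
    ihl = (icmp_payload[0] & 0x0F) * 4            # IP header length in bytes
    src_port, dst_port = struct.unpack_from("!HH", icmp_payload, ihl)
    return src_port, dst_port

# Toy embedded datagram: minimal 20-byte IPv4 header + start of a UDP header
embedded = bytes([0x45]) + bytes(19) + struct.pack("!HHHH", 5353, 53, 8, 0)
assert udp_ports_from_icmp_error(embedded) == (5353, 53)
```

This is exactly how hosts match an ICMP "port unreachable" back to the socket that sent the offending packet.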


The IP layer doesn't have to know what is in those upper layers to include 50 or 100 bytes of it in a little chunk.


If you always chop off 100 bytes and prepend 100 bytes, then it's even more massively inefficient than the problem it solves. The router would at least need every protocol to start with a header-length value. Otherwise, if you just take the first 100 bytes and stick them in front of each fragment and the header was only 57 bytes, you've suddenly got 43 bytes of garbage in the next layer's payload when you reassemble.

Keep in mind, most routers don't even bother supporting existing fragmentation because it's costly to implement in high-speed hardware. So while you could theoretically have that dynamic next-protocol header-length field, it would only complicate something hardware makers already think is too complicated to be worth it. Making things unappealingly complex is one of the common results of layering violations.


There are no strict rules about layers; most routers can and do read info in TCP/UDP headers.


And that's how we got forever stuck with those 2 and now have to build every new protocol on top of UDP.


Actually, that's not a bad thing. UDP is small enough to have nearly no overhead, but complex enough to let firewalls do their job. Six of the eight bytes in its header would probably be in the header of any transport layer protocol anyways (only the checksum might be unnecessary).

Wikipedia lists over 100 assigned IP protocol numbers [1], and while it would break existing firewalls, adding a new protocol would certainly require less work than the transition from IPv4 to IPv6. But UDP is already simple enough that there's very little benefit in not just building on that.

[1] https://en.wikipedia.org/wiki/List_of_IP_protocol_numbers
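For reference, the entire RFC 768 header really is just four 16-bit fields; a quick sketch (the helper name is mine):

```python
import struct

def udp_header(src_port, dst_port, payload_len, checksum=0):
    """The whole UDP header (RFC 768): source port, destination port,
    length (header + payload, in bytes), checksum -- 8 bytes total."""
    return struct.pack("!HHHH", src_port, dst_port, 8 + payload_len, checksum)

hdr = udp_header(40000, 443, 100)
assert len(hdr) == 8
src, dst, length, csum = struct.unpack("!HHHH", hdr)
```

The two port fields are the six bytes firewalls and NATs actually care about; length and checksum are nearly free.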


No it isn't. That fault lies with NAT and idiots who only open HTTP on their firewalls.


They can read higher layers, but they (currently) don't have to in order to implement IP correctly.


> most routers can and do read info in tcp/udp headers.

Do most routers really do that, or just the ones which are also trying to act as a firewall?


For example, IP routers often peek at UDP/TCP port numbers to calculate ECMP flow hashing. This is technically naughty but it's read-only and it's only an optimization that isn't required for correct forwarding.
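A toy sketch of the idea; real routers do this in hardware with much faster hash functions, and everything below is illustrative:

```python
import hashlib

def ecmp_next_hop(src_ip, dst_ip, proto, src_port, dst_port, num_links):
    """Toy 5-tuple flow hash: hash the flow identity, take it modulo the
    number of equal-cost links, so one flow always rides the same link."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % num_links

# All packets of one flow pick the same link; different flows spread out.
a = ecmp_next_hop("10.0.0.1", "10.0.0.2", 17, 5000, 53, 4)
assert a == ecmp_next_hop("10.0.0.1", "10.0.0.2", 17, 5000, 53, 4)
```

Keeping a flow on one link avoids packet reordering, which is why the hash includes the ports (peeked from the next layer) and not just the IP addresses.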


Yes. I doubt you can find one that is not capable.


Almost every modern router in a multipath network peeks at the next layer to implement flow hashing correctly.


That would effectively kill software patents. Which is a fine outcome.


The absolute performance isn't very important for the stock price. What matters way more is the performance vis-a-vis market expectations. So in this case, the market was expecting something better (or there was some other guidance in the report that spooked investors).

