Fantastic write-up. It's mind-blowing how much complexity there is to keep flights going day in and day out.
My guess is all airline NOCs operate 24/7, as flights happen around the clock. Also, planes typically don't have much downtime, since sitting idle loses money, so everything has to be a continuous operation.
It's cool looking at the pictures of the dashboards. It's nutty to think how much has to be tracked when doing airplane maintenance.
there's a lull from 1am - 5am as pilots are sleeping and airports are dormant. It kicks back up again at 5am for the early bird flights. Aircraft parked at the gates will be powered down for the night only to be brought back to life a few hours later. Southwest isn't a major international airline so you won't find them flying 24/7 like Delta or Lufthansa.
Not necessarily; some airports are closed at night to keep nearby residents happy. If you're operating a hub-and-spoke airline out of such an airport, there's not much activity going on at night. There's still some; your longhaul flights are still there, so the NOC is likely still open, but there's far less activity than you'd have during the day.
Correct me if I'm wrong, but I thought the primary appeal of LoRa was range? And isn't frequency the primary factor in making long-range radio go through things? 2.4 GHz is the same frequency as consumer Wi-Fi, so it would propagate about the same, right?
It doesn't seem like this would be that useful, except that the protocol is LoRa, so you can get higher bandwidth between two devices if they happen to be close enough together.
LoRa would go much farther than Wi-Fi at 2.4 GHz. LoRa uses Chirp Spread Spectrum (CSS) modulation while Wi-Fi uses OFDM (Orthogonal Frequency-Division Multiplexing); the former is designed for extreme range, the latter for bandwidth. At 2.4 GHz you could probably get LoRa connections up to 6 miles with the right antenna height.
6 miles seems a very optimistic estimate: 2.4 GHz propagation is heavily reduced by obstacles like buildings and trees, and at that frequency atmospheric water (fog, rain, humidity) has a big impact. You also need to consider that 2.4 GHz is a very polluted band, so the noise floor is significantly higher than in the 865/915 MHz bands.
Moreover, at 2.4 GHz the Fresnel zone is smaller and the risk of multipath fading is higher.
LoRa uses a link budget that reaches below the noise floor. It allows some pretty crazy performance, at the expense of massive speed losses: roughly 203 kbps for LoRa vs 1,376,000 kbps for Wi-Fi lol (max PHY speeds, YMMV).
Wi-Fi sensitivity is about -90 dBm, while LoRa sensitivity is around -150 dBm, so that's about a million times more sensitive. You need about a million times more signal strength to use low-bandwidth Wi-Fi (still impossibly fast by LoRa standards) than to use low-bandwidth LoRa.
Those are radio specifications; real links need about 10 dB more margin to get any kind of reliability, but the comparison stands.
> 2.4Ghz propagation is very reduced by obstacles like buildings
I never did much 2.4 GHz stuff, because that was what rich people did, or people mad enough to modify microwave-oven magnetrons. I was always under the impression that free-space loss at 2.4 GHz was terrible, but it turns out it's "only" ~9 dB more than at 865 MHz.
Worth mentioning that 2.4 GHz has a lot more attenuation due to clutter than 900 MHz. Your problem is usually buildings and non-line-of-sight transmission paths; when the signal has to pass through and bounce off things, your link budget takes a big hit.
There is the idea of the path loss exponent. In free space it's 2.0; at 900 MHz with clutter it's about 2.5, at 2.4 GHz about 3, and at 5.8 GHz about 3.5.
Another downside of higher spreading factors is that the data rate drops, which results in longer packets. Longer packets mean more energy per packet and a higher chance that someone else will blow your packet out of the water.
You've been able to buy 900 and 2.4GHz transceivers for the last 20 years.
When I worked in the Trimble Navigation radio group, 2.4 GHz was tried, but its real-world range sucked compared to the ~900 MHz and ~450 MHz bands of existing solutions. It's simply a limitation of physics that lower frequencies propagate farther (at lower bandwidth) than higher frequencies.
Even 900 MHz sucks vs 433. The lower the frequency, the better it penetrates matter at the same amplitude.
Lower than 430 MHz you start to run into severe bandwidth issues, though. And you're not allowed to transmit LoRa/DSSS on 430 MHz in the US without a license, hence the 900 MHz.
At 2.4 GHz the real-world usage is limited; you might as well use Wi-Fi. The only advantage is short-range bandwidth while keeping LoRa compatibility.
Free-space path loss for 915 MHz at 10 km is ~111.7 dB, while for 2.4 GHz it's ~120 dB.
That's a difference of roughly 8 dB, which is significant. It could mean the difference between a copy and just plain static, though LoRa is supposed to be copyable down to -140 dBm.
The max TX power is around 150 mW (21.76 dBm), so at 10 km the RSSI is 21.76 - 120 = -98.24 dBm, which is well above the -140 dBm limit.
This calculation assumes there's no loss from vegetation, humidity, or other barriers.
"Going through things" isn't always necessary / is avoidable in some deployments. And 2.4GHz signals can propagate an okay distance between nodes if there aren't things to go through. (Globalstar's emergency SOS satellite constellation uses the n53 band, which is right above the 2.4GHz "wi-fi" band, and it propagates between handsets and LEO through 1400km of air just fine.)
So you could probably pull off a 2.4GHz mesh outdoors in rural areas? It'd be feasible in the same places a microwave-laser hilltop-to-hilltop link would, but instead of "fast but point-to-point" it's "slow but meshed" (and with much larger tolerance for slop — you don't need to put everything on fixed masts so they have perfect line-of-sight, you can just stick them on the tops of trees or whatever and if they wave in the wind it still works.)
Mind you, the authors' motivating use-case for the hardware seems to be their project (https://github.com/datapartyjs/MeshTNC) to (AFAICT) bridge LoRa (or some specific LoRa L2 protocol — Meshtastic, probably?) to packet radio, i.e. digital packet-switched signalling over amateur (HAM) radio bands.
In that context, the tradeoff of high throughput for low propagation makes sense. Insofar as you're working with LoRa, and want to build and experiment with a bunch of site-local devices that mesh between themselves and interoperate with LoRa data-link protocols, you'd likely be speaking something like LoRa over 2.4GHz (LoRa itself doesn't spec a way to do that, but you could make it happen within the closed ecosystem of your own home/office.)
And in that context, you could use a MeshTNC device as something like a "LoRaLAN" router. It'd be something you'd keep somewhere central in your house (like a wi-fi router), plugged into power + an antenna (internal to your house, like a wi-fi router) and plugged into a packet-radio transceiver with its own even-bigger antenna, outside your house. (Like a wi-fi router being plugged into a gateway modem on its upstream WAN port.)
This MeshTNC device would then pick up signals from:
- regular LoRaWAN IoT devices and Meshtastic handsets in your building
- more custom devices in your building†, that you've built yourself, that use another MeshTNC module; where these other devices do their part of the meshing only on the 2.4GHz band, which means they don't need big fiddly external antennas like LoRa devices do, but can be quite compact
- and possibly, a separate bidirectional LoRa repeater (made from any existing "high-gain" LoRa module, i.e. the kind used in mains-powered LoRaWAN base stations) — which brings in LoRa mesh traffic from outside your building, and picks up and carries away "destined for elsewhere in this area" LoRa mesh traffic that your "LoRaLAN" device has emitted (either due to forwarding it from your 2.4GHz-only mesh handsets/devices, or due to forwarding it after receiving it from packet radio.)
Though keep in mind you only need that complexity for the 2.4GHz-only mesh devices, since there isn't an existing mesh to forward those packets. But this whole setup is still also a regular LoRa mesh, and so you can still use regular LoRa (e.g. meshtastic) handsets, and put out packets that make their way through your regional mesh, back to the packet-radio bridge in your building; and from there to who-knows-where.
† To be clear, the 2.4GHz mesh handsets would only work reliably inside your building (if the 2.4GHz antenna is inside your building); but knowing HAMs, half the point would be seeing how far away you could get from your house/office and have your 2.4GHz mesh handsets keep working. (You'd probably want to have a second MeshTNC "base station" with a building-external antenna to try that. Pleasantly, that doesn't complicate the topology; it's all still just mesh, so you can just drop that in.)
How does the app read the variable if it can't be read after you input it? Or do they mean you can't view it after providing the variable value to the UI?
You could have a meaningful wall between administrative/deployment interface backends and the customer server backends - only the latter get access to services that have the private keys to decrypt the at-rest storage of secure variables, and this may be fully isolated to different control planes. So it becomes write-but-not-read.
But that's just a bare-minimum defense-in-depth. The fact that an attacker was able to access the insecure variables, and likely the names of secure variables, is still horrifying.
I agree / hope that’s what they meant. It seems disingenuous, though, to describe it as unreadable, since obviously something has to read it to bake it into the deploy. And given their apparent lack of effective security boundaries in one area, why should we assume that they’ve got the deploy system adequately locked down?
It’s not like I had a ton of trust in them before, but now they’ve lost almost all credibility.
The key is that both were randomly assigned to users - you’d never know if you’d open a thread and be a moderator. If you posted in the thread you couldn’t moderate.
And with about the same frequency you'd be assigned to meta-moderate, basically being asked whether a moderator's "vote" was a good one or not (you didn't have to fully agree that you'd do the same, just that it wasn't bad).
Someone who scored low in meta-moderation would get fewer or no moderation chances.
How is this mode not a standard part of their disaster recovery plan? Especially in SF and the Bay Area, they need to assume an earthquake is going to take out a lot of infrastructure. Did they not take this into account?
> While we successfully traversed more than 7,000 dark signals on Saturday, the outage created a concentrated spike in these requests. This created a backlog that, in some cases, led to response delays contributing to congestion on already-overwhelmed streets.
> We established these confirmation protocols out of an abundance of caution during our early deployment, and we are now refining them to match our current scale. While this strategy was effective during smaller outages, we are now implementing fleet-wide updates that provide the Driver with specific power outage context, allowing it to navigate more decisively.
Sounds like it was and you’re not correctly understanding the complexity of running this at scale.
Sounds like their disaster recovery plan was insufficient, intensified traffic jams in already congested areas because of "backlog", and is now being fixed to support the current scale.
The fact this backlog created issues indicates that it's perhaps Waymo that doesn't understand the complexity of running at that scale, because their systems got overwhelmed.
What about San Francisco allowing a power outage of this magnitude and not being able to restore power for multiple days?
This kind of attitude, to me, indicates a lack of experience building complex systems and responding to unexpected events. If they had done the opposite and been overly aggressive in letting Waymos manage themselves at dark traffic lights, would you be first in line to criticize them when some accident happened?
All things being considered, I’m much happier knowing Waymo is taking a conservative approach if the downside means extra momentary street congestion during a major power outage; that’s much rarer than being cavalier with fully autonomous behavior.
They probably do, they just don't give a shit. It's still the "move fast and break things" mindset. Internalize profits but externalize failures to be carried by the public. Will there be legal consequences for Waymo (i.e. fines?) for this? Probably not...
They're one-of-one still. Having ridden in a Waymo many times, there's very little "move fast and break things" leaking in the experience.
They can simulate power outages as much as they want (testing), but the production break had some surprises. This is a technical forum... most of us have been there: bad things happened, plans weren't sufficient. We can measure their response by how they handle production insufficiencies in the next event.
Also, culturally speaking, "they suck" isn't really a working response to an RCA.
Waymo cars have been proven safer than human drivers in California. At the same time, 40k people die each year in the US in car accidents caused by human drivers.
I'm very happy they're moving fast so hopefully fewer people die in the future
"Move fast and break things" is a Facebook slogan. Applying it to Google or Waymo just doesn’t fit. If anything, Waymo is moving too slow. 100 people are going to die in seven days from drunk drivers and New Years in the US.
The most effective way of decreasing traffic deaths is safer driving laws, as the recent example of Helsinki has shown. That and better public transportation infrastructure. If you think that a giant, private, for-profit company cares about people's lives, you are in for a ride.
> The most effective way of decreasing traffic deaths is safer driving laws
This is almost hilariously false. "Oh yeah, those words on paper? Well, they actually physically stopped me from running the red light and plowing into 4 pedestrians!"
> If you think that a giant, private, for-profit company cares about people's lives, you are in for a ride.
I honestly wonder how leftists manage to delude themselves so heavily? I'm sure a bunch of politicians really have my best interests at heart. Lol
> This is almost hilariously false. "Oh yeah, those words on paper? Well, they actually physically stopped me from running the red light and plowing into 4 pedestrians!"
It's very clearly proven that hitting a pedestrian at 50 km/h is dramatically more dangerous than hitting them at 30 km/h. It's very clearly proven that physically separated bike lanes prevent deaths. It's very clearly proven that other measures like speed bumps, one-way streets, and smart traffic routing prevent deaths.
And I am not even going to respond to your idiotic "leftist" statement.
It's very clearly proven that murder is dangerous, yet people still commit it. You still have not explained how laws stop things from happening, as if by magic.
> And I am not even going to respond to your idiotic "leftist" statement.
This says more about you than it does me. Taking the most cynical view possible, at least a for profit company has a profit motive to keep me alive unlike a bureaucrat. A bureaucrat doesn't lose their salary if traffic deaths go up. In fact, if a problem gets worse, they often receive more funding to fix it. If a government road is dangerous, you cannot easily fire the government and switch to a competitor's road.
The success you mentioned in Helsinki wasn't a triumph of law; it was a triumph of engineering. The question is not whether we want safety, but which system—a state monopoly with no financial penalty for failure, or a private entity that faces financial ruin if it kills its customers—is more likely to engender it.
If the onboard software has detected an unusual situation it doesn't understand, moving may be a bad idea. Possible problems requiring a management decision include flooding, fires, earthquakes, riots, street parties, power outages, building collapses... Handling all that onboard is tough. For different situations, a nearby "safe place" to stop varies. The control center doesn't do remote driving, says Waymo. They provide hints, probably along the lines of "back out, turn around, and get out of this area", or "clear the intersection, then stop and unload your passenger".
Waymo didn't give much info. For example, is loss of contact with the control center a stop condition? After some number of seconds, probably. A car contacting the control center for assistance and not getting an answer is probably a stop condition.
Apparently here they overloaded the control center. That's an indication that this really is automated. There's not one person per car back at HQ; probably far fewer than that. That's good for scaling.
Fundamentally, is there anything you can't write in Rust and must write in C? With AI, languages should mostly be transposable, even though right now they are not.
Intrusive linked lists have performance and robustness benefits for kernel programming, chiefly that they don't require any dynamic memory allocation and can play nice with scenarios where not all memory might be "paged in", and the paging system itself needs data structures to track memory status. Linked lists for this type of use also have consistently low latency, which can matter a lot in some scenarios such as network packet processing. I.e., loading a struct into L1 cache also loads its member pointers to the next struct, saving an additional lookup in some external data structure.
Great video on how the public is getting screwed on energy deals.
Basically, large tech companies have the deep pockets to push up prices at electricity auctions. But why bid in public when you can do those deals in private? That's the first problem: all of that needs to be out in the open.
What really irks me is that the market is so manipulated that we can't do anything about it. Think about NEM 3.0 vs 2.0. Putting data centers in their own rate class does make sense as the first step.
>Basically large tech companies have the deep pockets to push up prices at electricity auctions. But why bid in public when you can do those deals in private.
Public utilities can't do the same? Moreover if the implication is that large tech companies are somehow getting great prices at the expense of residential users, what does that mean for the electric generators on the other end of this transaction? Why are they leaving money on the table by selling to large tech companies for cheap?
These companies are regulated and can only charge for the costs they incur plus a flat profit on top of that of 10% or so.
The datacenters allow them to justify building a lot more capacity to serve them. That increases costs, which means the 10% added for profit is now a bigger number, and they can give bigger returns to their shareholders. But those profits are extracted from existing customers, who now see higher bills to cover the costs of expanding capacity to serve the datacenters.
The whole capped profit creates the distortions you illustrate.
The effect has a name: the Averch–Johnson effect, after Harvey Averch and Leland Johnson's paper "Behavior of the Firm Under Regulatory Constraint".
A number of years back I got bored during covid and decided to reverse engineer as much of the Wyze Cam V2 camera I could and make some custom firmware for it. Right now that lives at https://github.com/openmiko/openmiko
That said it's really hard to make long term supportable open source camera software/firmware. And when picking cameras it is even harder because the market as it stands now does not let you have it all. You need to pick what facets you really care about.
Also keep in mind that even the above code isn't open source all the way: I still had to load the driver binaries, and I'm not sure that source will ever be released. The kernel is also old as heck.
What I do feel good about though is saving these old cameras from the dumpster if Wyze ever stops supporting them. The firmware works for simple cases: just load it up and you can start curl'ing frames. I used it in scripts to put together timelapse videos with ffmpeg. No need to screw around with authentication, phones apps, email, etc.
I would love to find a "zero to hello world, from scratch" type tutorial for putting custom firmware on a camera not supported by one of the existing projects (or a similar writeup detailing how one of these projects got started in the first place).
Hey, Openmiko is a nice project. With your depth of knowledge, I'd love to see you contributing to Thingino as well. While we still depend on binary blobs from the manufacturer SDK, there's work on alternatives to replace what's replaceable with an open stack. Join the team, have fun.