Do you really think Putin would hesitate to arrest him if he did that?
Snowden knows he is being watched closely. I suppose that is itself a reason to take what he says with a grain of salt, but I certainly don't take his silence on the Ukraine war as evidence of assent.
It's just not like the Snowden of the past to endorse apps with bad privacy defaults and unencrypted group chats like Telegram. I'd have understood if he'd said the same had the CEO of Signal been arrested, but I can't understand it for Telegram, an app that's mostly not used in an e2e-encrypted way.
Telegram is also an app that is widely used by Russian troops to organize, and for disseminating propaganda and misinformation. It's just not characteristic of Snowden to endorse apps that could potentially be honeypots or backdoored, and to hold such apps up as important to free speech.
I've read somewhere that Seymour Cray used to write his entire operating system in absolute octal. ("Absolute" means no relocation; all memory accesses and jumps must be hand-targeted to the correct address, as they would have to be with no assembler involved.)
Really interesting to me that none of the commentators I've seen in the press have even hinted that maybe an OS that requires frequent security patches shouldn't be used for infrastructure in the first place. For just one example, I've seen photos of BSODs on airport monitors that show flight lists -- why aren't those built on Linux or even OpenBSD?
Security is not a feature that can be layered on. It has to be built in. We now have an entire industry dedicated to trying to layer security onto Windows -- but it still doesn't work.
The vendor who makes the software has always written for Windows (or in reality, wrote for either DOS or OS/2 then transitioned to NT4). History, momentum, familiarity, cost, and ease of support all are factors (among others, I'm sure).
Security is a process, not a product.
And yes, distros require frequent updates, though more to your point, you can limit the scope of installed software. I'm sure airport displays don't need MPEG2, VP1, and similar codecs, for instance.
It's also important to remember that there is a lot of 'garageware' out there with these specialized systems. Want SAML/OIDC support? We only support LDAP over cleartext, or Active Directory at best. Want the latest and greatest version of Apache Tomcat? Sorry, the vendor doesn't know how to troubleshoot either, so they only "support" a three-year-old vulnerable version.
Ran into that more than a few times.
Given the hypothesis of what caused the BSOD with CrowdStrike (a NULL pointer), using a safe language would have been appropriate -- it's fairly easy in this case to lay the blame with CS.
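To make that concrete: in C, a lookup that returns NULL compiles fine and the dereference crashes at run time, while a safe language puts the possibility of absence into the type, so the compiler forces the caller to handle it. A minimal Rust sketch of the idea (hypothetical names, nothing to do with CS's actual driver):

    struct Entry {
        threshold: u32,
    }

    // A lookup that can fail returns Option, not a nullable pointer.
    fn find_entry(table: &[Entry], idx: usize) -> Option<&Entry> {
        table.get(idx) // None instead of a pointer past the end
    }

    fn main() {
        let table: Vec<Entry> = Vec::new(); // empty, like a malformed data file

        // The compiler won't let us touch `.threshold` until we say
        // what happens when the entry is absent.
        match find_entry(&table, 7) {
            Some(e) => println!("threshold = {}", e.threshold),
            None => eprintln!("entry missing; skipping"), // the would-be crash path, handled
        }
    }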
Microsoft supplies the shotgun. It's the vendor's responsibility to point it away from themselves.
> I'm sure airport displays don't need MPEG2, VP1 and so on codecs, for instance.
They don't, until the day the airport managers are approached by an advertising company waving the wads of cash the airport could be 'earning' if only they let "AdCo" display, in the top 1/4 of each screen, a video advertising loop. At which point, those displays need the codecs for "AdCo's" video ads.
Boy do I sure hate you for saying that. I mean at some point you are right. That is the future. But god am I mad at you for reminding me this is the world we live in.
Absolutely (sigh)! But with a deployment of devices like that, the operator has a solid central management system from which they could push software as-needed.
> The vendor who makes the software has always written for Windows (or in reality, wrote for either DOS or OS/2 then transitioned to NT4). History, momentum, familiarity, cost, and ease of support all are factors (among others, I'm sure)...
That's starting the argument with "weight loss is about overall diet process, not individual choices" and then hopping to "ice cream for dinner is good 'cause it's convenient and I like it".
The statement "Security is a process, not a product." means you avoid shitty choices everywhere, not you make whatever choices are convenient, try to patch the holes with a ... product ... and also add an extra process to deal with the failures of that product.
The statement "Security is a process, not a product" refers to no _product_ can be a security strategy. _Processes_ are part of security. The security landscape keeps evolving and what was appropriate even 5 years ago may not be appropriate today. You have to evolve your strategy and countermeasures over time as part of your _processes_.
The statement "Security is a process, not a product" refers to no _product_ can be a security strategy.
That's the negative part. The positive part is that security considerations have to run through an entire organization because every part of the organization is an "attack surface".
The whole concept of CrowdStrike is that it's there to prevent individual users from doing bad things. But that leaves the problem of CrowdStrike doing bad things. The aim of security as process is avoiding the "whack-a-mole" situation that this kind of thinking produces.
They want to hear that they can pay $X to this service provider, and tick all of the cover-your-ass boxes in the security checklist; where $X is the cheapest option that fits the bill.
> an OS that requires frequent security patches
> Security is not a feature that can be layered on. It has to be built in
This is a common misunderstanding: an OS that receives frequent security updates is a very good thing. It means attention is being paid to issues being raised, and risks are being mitigated. Security is not a 'checkbox'; it's more of a never-ending process, because the environment is always in a state of flux.
So to flip it, if an OS is not receiving updates, or not being updated frequently, that's not great.
What you want is updates that don't destabilize an OS, and behind that is a huge history and layers of decisions at each 'shop' that runs these machines.
Security is meant to be in layers and needs to be built in.
> but it still doesn't work.
It does work -- witness how long the 'scene' has been silent -- but what we as humans notice is the incident where it didn't.
This sort of thinking is one of the main problems with the industry, in my opinion.
We've got a bunch of computers that mostly don't make mistakes at the hardware layer. On top of that, we can write any programs we want. Even though the halting problem exists, and is true for arbitrary programs, we know how to prove all sorts of useful security properties over restricted sets of programs.
Any software security pitch that starts with "when the software starts acting outside of its spec, we have the system ..." is nonsense. In practice, "acting outside its spec" is functionally equivalent to "suffers a security breach".
Ideally, you'd use an operating system that has frequent updates that expand functionality, that is regularly audited for security problems, and that only rarely needs to ship a security patch. OpenBSD comes to mind.
If software has frequent security updates over a long period of time, that implies that the authors of the system will continue to repeat the mistakes that led to the vulnerabilities in the first place.
I think that’s an oversimplification. If you have a Windows system handy, look for a file named “errata.inf” [0]. It’s a giant configuration file that is full of tweaks to make dodgy hardware work reliably.
Hardware, software and firmware are all prone to mistakes, errors and corner cases that are surprising. Security issues generally live in the intersection of systems with different metaphors. Hardware is not immune from issues, and software can help reduce that impedance mismatch.
> and that only rarely needs to ship a security patch. OpenBSD comes to mind.
How is that accomplished? Are OpenBSD programmers somehow vastly more competent, that they make security mistakes only 0.1% as often as other OSes?
I find that hard to believe. People are people.
> If software has frequent security updates over a long period of time, that implies that the authors of the system will continue to repeat the mistakes that led to the vulnerabilities in the first place.
Why would that be the case? Authors come and go, systems live on.
Security updates arise from a combination of auditing/testing and competence. 100 times as many security updates can arise simply because one OS is being used and battle-tested 100x more than another.
Nobody's smart enough to write code that "only rarely needs to ship a security patch". Not at the scope of an entire OS with thousands of people contributing to it.
OpenBSD still has security updates. Software packages commonly installed on OpenBSD-based systems often issue security updates. OpenBSD has a much smaller footprint than Windows and still has security updates.
You realize that you are personally insulting 100k people you've never met by judging their individual skills and abilities despite knowing nothing about them?
It makes it very hard to put any credence into your opinion when you are so judgemental with no information.
> Are OpenBSD programmers somehow vastly more competent
It's not about competence, it is about priorities.
OpenBSD obsesses about security, so that's what drives the decision-making.
All public companies are driven by profit above all, with the product being just a mechanism to get more profit. As a direct consequence, quality (and security, which is part of quality) is not the top priority. Security is only relevant to the extent its absence reduces profits (which very rarely happens).
Remote update is a nice way of saying remote code execution. It is really really hard to ensure that only the entity that you want to update your system, can update your system, when facing a state-funded adversary. Sometimes that state adversary might even work in concert with your OS vendor.
"If your adversary is the Mossad, YOU'RE GONNA DIE AND THERE'S NOTHING THAT YOU CAN DO ABOUT IT." [1]
Not patching is insane -- you'll let script kiddies in. Patching might not stop the next Stuxnet author, but you'll slow them down _and_ have fewer script kiddies.
A lot of people seem to be focusing on how the band-aid of automatic security updates can be ugly without considering the hemorrhaging that it's actually stemming. Nobody's stepping up with a realistic solution to the problem, which means we're stuck with the band-aids.
Is that really so hard? Isn’t the problem mostly solved by signing your update and verifying the update at the client? As long as you can keep the private key secret, that should be enough, right? Or are we assuming you can’t keep a single key private from your adversary?
You could get a SolarWinds-type situation where the adversary has the signing keys and the ability to publish to the website.
You might also find that the vendor ships a library (like xz's liblzma) as part of their invisible or hidden supply chain, that is able to be compromised.
You might find that one of the people working at the company makes a change to the code to enable remote access by the adversary in a targeted collaboration/attack.
The problem isn't the signing key (although I could delve into the lengths you'd need to go to in order to keep that secret under these threat models) - the problem is what they sign. A signed final release binary or series of packages isn't going to address the software source code itself having something added, or its dependencies being compromised.
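For what it's worth, the verification mechanics really are the easy part; the client-side check is a few lines. A sketch assuming the ed25519-dalek crate's v2 API:

    use ed25519_dalek::{Signature, Verifier, VerifyingKey};

    // Returns true only if `payload` was signed by the holder of the key.
    // Note what this does NOT prove: that the payload was built from
    // uncompromised source code and dependencies.
    fn update_is_authentic(pk: &[u8; 32], payload: &[u8], sig: &[u8; 64]) -> bool {
        let Ok(key) = VerifyingKey::from_bytes(pk) else {
            return false;
        };
        let sig = Signature::from_bytes(sig);
        key.verify(payload, &sig).is_ok()
    }

Everything upstream of the signing step -- the build system, the dependencies, the insider with commit access -- sits inside the trust boundary that this check cannot see.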
Except for the first point, these things aren't exclusive to remote updates, though. I thought we were talking about the challenges of remote updates compared to other methods (like replacing the system or manually updating it with installation media). Supply-chain and insider attacks would affect those, too.
Frequent security updates are a good thing; frequent security auto-updates are not, at least when it comes to situations like this. Technology that runs 24-hour services such as airports and train stations should not be updated automatically just like that, because all software updates have high potential to break or even brick something. Automation is convenient and does save money which would otherwise have to be paid for the additional labor of manual updates, but in cases like this, it should be understood that it's better not to break the airport, and to roll out updates manually, in stages.
Airport staff need to be able to support them. Not HN types.
Most people know how to use a Windows computer.
Most IT desktop support knows how to use and manage Windows. Even building facilities folks can help support them.
Microsoft makes it easy to manage a fleet of computers. They also provide first party (along with thousands of 3rd parties) training and certifications for it.
Most people don't know how to tell what's going wrong with a Windows computer
A Windows computer that relies on cloud services, as an increasing and often nonsensical subset of its functionality now does, can often only be fixed by Microsoft directly
Microsoft intervenes directly and spends billions of dollars annually on anticompetitive tactics to ensure that other options are not considered by businesses
And with this monopoly, it has shielded itself from having to compete on even crucial dimensions like reliability, maintainability, or security
I know of a very small airport where what is displayed over the HDMI port is essentially Firefox at fullscreen with power saving disabled so the screen does not blank. Some of them are Intel NUCs, some are Raspberry Pis with an HSM in a box. These devices basically "boot to Firefox" with relevant credentials read off an internal TPM/HSM.
Those among airport staff who do not know how to use a computer at all can get them working by just plugging them in.
> Most people know how to use a windows computer.
They know enough to open a browser.
> Most IT desktop support knows how to use and manage windows.
They know how to cope with Windows, at best.
> Finding someone who knows a BSD is not easy.
BSD is everywhere, and in far more places than Windows, like almost every car sold after 2014. But you never ever see BSD because it's already working, with nothing for the end customer to do.
You consider signage infra? Same with conference rooms. Most of the places I have worked have facilities type people working on it. Tier 3 is usually a direct phone call away for them
You would send an engineer into an airport to reboot a sign?
At some airports, staff does maintain infrastructure.
At others, airline staff is responsible for it. And just like airport staff, a tech who can deal with Firefox on Windows is cheaper than someone who can troubleshoot the same in Linux or a more custom system.
For many CTOs/CISOs it is more important to have a good target to shift responsibility to when things go awry than to have a reliable/secure system. A Big Brand is a good target; an open-source project like OpenBSD is not. I doubt any CTO will be fired for choosing Windows+CrowdStrike (instead of Linux/BSD) despite many millions in losses.
"Nobody ever gets fired for buying IBM" is as true as ever at least in the corporate world.
> I doubt any CTO will be fired for choosing Windows+CrowdStrike (instead of Linux/BSD)
I was personally involved in a meeting where my firm's leadership advised a client who did fire their CTO and a bunch of other people, for ultimately putting what they thought were smart career moves over their actual responsibilities.
Unfortunately, as you just pointed out, the CEO, other execs, and board are often just as incompetent as the CTO/CISO who have such a shit-brained mindset.
Or don't use an OS at all. We need to think about minimizing the use of software in critical infrastructure. If that means less efficiency because you have to be near something to maintain it then so be it. That would be good for jobs anyway.
Even unikernel applications have an OS compiled into the application. It's necessary to initialize the hardware it's running on, including the CPU and GPU and storage.
I suppose you could build it as a UEFI module that relies on the UEFI firmware to initialize the hardware, but then you get a text-only interface. But then the UEFI is the OS.
But this outage was not an OS problem. It was an application bug that used invalid pointers. If it was a unikernel it still would have crashed.
1. How does the software obtain new data at run time?
2. How do you make sure that thing doesn't pose a security hole when a vulnerability gets discovered? (assuming this never happens is unrealistic)
Vulnerabilities in what though? If you make an application so simple that it can only fetch data through an API and display it, there's simply not much more that it can do. And a simple application is easy to audit. So it would be ideal if we could bundle this (akin to compiling) and deploy on bare metal.
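As a sketch of how little such an app needs (the URL is a made-up internal endpoint; assumes the reqwest crate with its blocking feature):

    use std::{thread::sleep, time::Duration};

    fn main() {
        // The entire attack surface: one HTTP GET and a text render.
        let url = "https://fids.example.internal/departures.txt";
        loop {
            match reqwest::blocking::get(url).and_then(|r| r.text()) {
                Ok(body) => println!("{body}"),          // a real display would render this
                Err(e) => eprintln!("fetch failed: {e}"), // keep showing the last good data
            }
            sleep(Duration::from_secs(30));
        }
    }

Something of that size is auditable in an afternoon; the same can't be said for a general-purpose OS image.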
The answer to both questions is robust organizational infrastructure. To be frank, I think a minimal Linux system as a baseline OS serves most use cases better than a bare-metal application, but many applications have self-contained update systems and can connect to networks. Self-repairable infrastructure is a necessity, both in terms of tooling and staffing, for any organization for which an outage or a breach could be catastrophic, and the rise of centralized, cloud-reliant infrastructure in these contexts should be seen as a massive and unacceptable risk for those organizations to take on. Organizations being subject to unpatched vulnerabilities and unable to manage their systems competently are direct results of replacing internal competency and purpose-built systems with general-purpose systems maintained and controlled by unaccountable, distant tech monopolies.
> the rise of centralized, cloud-reliant infrastructure in these contexts should be seen as a massive and unacceptable risk for those organizations to take on
I agree with you but I also want to play the devil's advocate: using software like CrowdStrike is not what I would call being "cloud-reliant". It's simply using highly-privileged software that appears to have the ability to update itself. And that is likely far more common than cloud-reliant setups.
Yea, and use of highly privileged software with the ability to update itself that the organization has no oversight of should be the most suspect. Software is used by nearly every organization for drastically different needs, and I think there will never be adequate security or reliability for any of them if software providers continue to consolidate, generalize, and retain ever more control of their offerings. Personally, I think the solution is local-first software, either open-source or grown within the organizations using them, which necessitates having that capability within orgs. The whole "buy all our infrastructure from some shady vendor" model is a recipe for disaster.
To pick on your airport example a bit… all of the times I’ve gotten to enjoy a busted in-seat entertainment system, I’ve found myself staring at a stuck Linux boot process. This goes well beyond the OS.
Those sorts of things just need to boot to a web browser in full screen with some watchdog software in the background, launching from a read-only disk (or network image). Get a problem? Just unplug it and plug it back in. Make it PoE-based so you can easily do that automatically, and stick them on a couple of distros (maybe even half on BSD, half on Linux; half using Chrome, half on Firefox).
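The watchdog doesn't need to be clever, either. A minimal sketch in Rust (the URL is a placeholder; --kiosk is a real Firefox flag; a production version would also detect hangs, not just exits):

    use std::process::{Child, Command};
    use std::thread::sleep;
    use std::time::Duration;

    fn spawn_browser() -> std::io::Result<Child> {
        Command::new("firefox")
            .args(["--kiosk", "https://departures.example.internal/"])
            .spawn()
    }

    fn main() -> std::io::Result<()> {
        let mut browser = spawn_browser()?;
        loop {
            // try_wait() reports Some(status) once the child has exited.
            if browser.try_wait()?.is_some() {
                eprintln!("browser exited; relaunching");
                browser = spawn_browser()?;
            }
            sleep(Duration::from_secs(5));
        }
    }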
A web browser is an unbelievably complex piece of software. So complex that there are now only two. And also so complex that there are weekly updates because there's so many security holes.
There are more than two, and the vast majority of the time people don't need anywhere near the complexity that modern browsers have shoved into them. A lean browser that supported only a bare minimum of features would go a long way to reducing attack surface. As it is now, I already find myself disabling more and more functionality from my browsers (service workers, WebRTC, JS, SVG, webgl, PDF readers, prefetch, mathml, etc)
Yeah, options exist but it's not a very diverse ecosystem in practice.
I'm excited and optimistic about ladybird for that reason. We need more options.
We've seen this week that the world does not want options. It wants a single point of failure in all infrastructure so that nobody is blamed for making the wrong choice.
I'm sure we've all heard the phrase "We're a Windows shop" in some variation.
I understand the reasons for it, and why large, billion-dollar companies try to create some sort of efficiency by centralising on one "vendor", but then this happens.
I don't know how to fix the problem of following "Industry Trends" when every layer above me in the organisation is telling me not to spend the time (money) to investigate alternative software choices which don't fit into their nice box.
Yes, I'm well aware. I wasn't trying to conflate a CrowdStrike problem with a Microsoft problem. Having said that, in this particular incident, the problems were specifically limited to Windows OS.
I read the T&C of this CrowdStroke garbage and they have the usual blurb about not using it in critical industry. Maybe we just charge & arrest the people that put it there and this checkbox-software mess stops real quick.
From the reporting so far, no one has died as a result of the CrowdStrike botch. For my money, that sounds like it's not being used in 'critical industry'.
There were several 911 service outages in the news yesterday, so I would definitely agree those fall into that category. I haven't seen how many hospitals were deeply affected; I know there were several reports of facilities deferring any elective procedures.
I almost had to defer a procedure for one of my cats because my vet’s systems were all down. This meant they couldn’t process payments, schedule appointments, use their X-ray machine, or dispense prescriptions. (Thankfully, they had the ingenuity to get their diagnostic equipment online through other means, and our prescriptions had already been dispensed so we didn’t have to reschedule.)
I would imagine it's the same story at human hospitals that ran afoul of this, too. I wouldn't expect life-critical systems to go offline, but there are many other more mundane systems that also need to function.
>Really interesting to me that none of the commentators I've seen in the press have even hinted that maybe an OS that requires frequent security patches shouldn't be used for infrastructure in the first place.
Nobody's commenting on that because it's the wrong thing to focus on.
1) This fuckup was on CrowdStrike's Falcon tool (basically a rootkit) bricking Windows due to a bad kernel driver they pushed out without proper hygiene, not on Windows's security patches being bad.
2) Linux also needs to get patches all the time to be secure (remember XZ?). It's not just magically secure by default because of the chubby penguin; it's only as secure as its most vulnerable component, and XZ proved it has a lot of components. I'd be scared if a long period went by and I saw no security patches being pushed to my OS. Modern software is complex and vulnerabilities are everywhere. No OS is ever so bug-free and bulletproof that it can be trusted to be secure without regular patches. Other than TempleOS, of course.
The lesson is: whichever OS you use, don't surrender your security to a single third-party vendor who you now have to trust with the keys to your kingdom, as that becomes your single point of failure. Or if you do, be sure you can sue them for the damages.
Because it suits their anti-Windows agenda (M$ and so on), while ignoring that CrowdStrike also botched Linux distributions -- and no one noticed, because those weren't being used at this scale.
1) While CrowdStrike can be run on Linux it is less of a risk to use Linux without it than Windows. I don't think most Linux/BSD boxes would benefit from it. It could be useful for a Linux with remotely accessible software of questionable quality (or a desktop working with untrusted files) but this should not be the case for any critical system.
2) There is a difference between auto-updates (common in Windows world) and updates triggered manually only when it is necessary (and after testing in non-prod environment). Also while Linux is far from being bug-free, remotely exploitable vulnerabilities are rare.
>2) There is a difference between auto-updates (common in Windows world) and updates triggered manually only when it is necessary (and after testing in non-prod environment).
Again, those auto-updates that caused this issue were developed and pushed by CrowdStrike, not by Windows. That tool does the same auto-updates on Linux too. On the Windows side you can have sysadmins delay Windows updates until they get tested in non-production instances, but again, this update was not pushed by Windows, so sysadmins couldn't do anything about it.
> I don't think most Linux/BSD boxes would benefit from it.
EDR isn't antivirus. It logs and detects more than it prevents, and you need that on Linux as much as Windows. You can do incident response without it if you are shipping your logs somewhere, in the sense that you can do anything without any tool, but it's certainly a lot easier with.
Possibly you need it less than on Windows since it's easier (for now) to do kernel stuff with eBPF, but then somebody has to do the kernel stuff.
Speaking as a professional red teamer, no OS has a ton of RCE, but applications do, Linux applications no less than Windows ones. Applications aside I'd rather be up against Windows in the real world because of Active Directory and SMB and users that click stuff, but Linux running a usual array of Linux server stuff is OK too.
Every year, multiple times per year, there are reports of Microsoft Windows systems having either mass downtime or exploitation. It's kind of amazing that critical systems would rely on something that causes so much frustration on a regular basis. I've been running systems under Linux and Unix for decades and never had any downtime, so it's nice to know that Linux is pretty solid and always has been. The worst that's ever happened has been a process that might go down during an upgrade, but never the whole system.
Linux is vulnerable too (but not as vulnerable as Windows, of course); it's just not targeted by hackers because its market share is so small. That wouldn't be the case if, say, half of all users ran Linux.
And it sees plenty of attacks too. But here Windows wasn't under attack, nor was a Windows vulnerability exploited; CS just fucked up, and companies were stupid enough to put all their trust in CS.
I've never managed Linux IT departments -- how do the management tools compare to what Microsoft offers, such as tooling for managing thousands of computers across hundreds of offices?
Layering is absolutely possible, but more at the network layer than the individual computer layer.
Minimal software and OS running on Linux as a layer between any Windows/whatever and internet connectivity. Minimize and control the exact information that gets to the less hardened, less trustworthy, more complicated computers.
I'm sorry, but even Linux requires frequent security updates due to its large ecosystem of dependencies. Updating them, just like Windows, is more or less required by every cybersecurity standard.
On the other hand OpenBSD doesn't require very frequent patching assuming a default install which comes with batteries included. For a web server there's just one relevant patch since April for 7.5: https://www.openbsd.org/errata75.html
I agree that all dependencies should be treated as attack surface. For that reason, systems whose dependencies can be tightly controlled are inherently more secure than ones whose dependencies can't. The monolithic and opaque nature of Windows and other proprietary software makes this kind of risk minimization harder.
> why aren't those built on Linux or even OpenBSD?
Because in the non-Silicon-Valley world of software, if you pick Linux and it has issues, fingers will get pointed at you. If you pick Windows and it has issues, fingers will get pointed at Microsoft.
This sort of emergent behavior is a feature, not a bug.
Operating systems that don't require frequent security patches aren't profitable.
Anyway, this is the step of late-phase capitalism that comes after enshittification. Ghost in the Shell 2045 calls it "sustainable war". I'd link to an article, but they're all full of spoilers in the first paragraph.
It probably suffices to say that the series refers to it as capitalism in its most elegant form: It is an economic device that can continue to function without any external inputs, and it has some sort of self-regulatory property that means the collateral damage it causes is just below the threshold where society collapses.
In the case of CrowdStrike, the body count is low enough, and plausible deniability is high enough, that the government can get away with not jailing anyone.
Instead, the event will increase the money spent on security theater, and probably lead to a new regulatory framework that mandates yet another layer of buggy security crapware (which CrowdStrike apparently is).
In turn, that'll lower the margins of anyone that uses computers in the US by something like 0.1%, and that wealth will be transferred into the industry segment responsible for the debacle in the first place. Ideally, the next layer of garbage will have a bigger blast radius, allowing the computer security complex to siphon additional margins.
I don't think CS type endpoint protection is appropriate for a lot of cases where it's used. However:
Consider the reasons people need this endlessly updated layer of garbage, as you put it. The constant evolution of 0-days and ransomware.
I'm a developer, and also a sysadmin. Do you think I love keeping servers up to the latest versions of every package where a security notice shows up, and then patching whatever that breaks in my code? I get paid for it, but I hate it. However, the need to do that is not a result of "late-stage capitalism" or "enshittification" providing me with convenient cover to charge customers for useless updates. It's a necessary response to constantly evolving security threats that percolate through kernels, languages, package managers, until they hit my software and I either update or risk running vulnerable code on my customers' servers.
You're making my point. You're stuck in a local maximum where you're paid a lot of money to repeatedly build stuff on sand. You say you hate it but you have to do it.
That's not strictly true, but it's true in an economic sense:
You could just move your servers to OpenBSD, and choose to write software that runs on top of its default installation. There have been no remotely exploitable zero days in that stack for what, two decades now? You could spend the time you currently use screwing with patches to architect the software that you're writing so that it's also secure, and so that you could sustainably provide more value to whoever is paying you with less effort.
Of course, the result would never obtain FIPS, PCI, or SOC-2 compliance, so they wouldn't be able to sell it to the military, process credit cards, or transitively sell it to anyone that's paid for SOC-2 compliance.
Therefore, they can either have something that's stable and doesn't involve a raft of zero days, or they can have something that's legally allowed to be deployed in places that need those things. Crucially, they cannot have both at the same time.
Over time, an increasing fraction of our jobs will be doing nothing of value. It'll make sense to outsource those tasks, and the work will mostly go to companies that lobby for more regulatory capture.
Those companies probably aren't colluding as part of some grand conspiracy.
It's also in their best interest to force people to use their stuff. Therefore, as long as everyone acts rationally (and "amateurs" don't screw it up -- which is a theme in the show), the system is sustainable.
> I've seen photos of BSODs on airport monitors that show flight lists
The kiosk display terminal is not something I care about that much.
> We now have an entire industry dedicated to trying to layer security onto Windows
Too bad we have no such layering in our networks, our internet connections, or in our authentication systems.
Thinking about it another way: there's actually no specific system in place to ensure your pilot does not show up drunk. We don't give them breathalyzers before the flight. We absolutely could do this, even without significant disruption to current operations.
We have no need to actually do this because we've layered so many other systems on top of your pilot that they all serve as redundant checks on their state of mind and current capabilities to safely conduct the flight. These checks are broader and tend to identify a wider range of issues anyways.
This type of thinking is entirely missing at the computer network and human usability layer.
Walter's answer is good, but here's another. When the two cars collide, it's possible to imagine that they are exact mirror images of each other and hit exactly head-on, so that a large sheet of paper hung vertically at exactly the collision point would not be torn. Of course we know that wouldn't literally happen in the real world, but it is possible. This thought experiment demonstrates that the collision is equivalent to hitting a brick wall at 50mph, not 100.
Alternatively, imagine one car is parked (in neutral, with its brake off) and the other car hits it at 100. The center of mass of the two cars is moving at 50 both before and after the collision (conservation of momentum); after it, the cars will be moving at that average speed. The impact will again be equivalent to the original scenario.
The original claim probably results from a conflation of these two scenarios.
ETA: So what student drivers should be told is that hitting another car head-on is like hitting a brick wall at the same speed. For this to be exactly true, the momenta (mass * velocity) of the two vehicles have to be equal and opposite, but to communicate the general idea, I don't think we have to go into that.
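For the skeptical, the bookkeeping is short. Taking the perfectly inelastic case with equal masses m and speed v each (so the wreck stops dead):

    p_{\text{tot}} = mv + m(-v) = 0 \quad\Rightarrow\quad \text{both cars stop}

    E_{\text{per car}} = \tfrac{1}{2}mv^2 \quad\text{(exactly what one car dissipates hitting a rigid wall at } v\text{)}

    E_{\text{wall at } 2v} = \tfrac{1}{2}m(2v)^2 = 4\cdot\tfrac{1}{2}mv^2 \quad\text{(the mistaken claim: four times the energy)}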
Please help me understand where my intuition (or maybe my assumptions/simplifications) is wrong.
Assume two perfectly elastic cars. When they collide at 50mph, each car will bounce backwards at 50mph (due to conservation of momentum), representing a change in velocity of 100mph — identical to the brick wall case.
I feel that introducing deformation or other energy dissipation to the equation kind of takes it out of the “high school physics” realm, right? What else am I missing?
Edit: ah I see, the car will bounce off the brick wall at 100mph as well, resulting in a 200mph change in velocity. I guess you could explain it then that the effect of the impact is felt entirely in one car in the brick wall case, and it’s spread out over two cars in the head-on case?
> When they collide at 50mph, each car will bounce backwards at 50mph
This is the incorrect part. They would both go to zero velocity/momentum.
Momentum is a vector quantity, so has a direction and magnitude. Two identical cars with the same speed going opposite directions would have the same magnitude of momentum, but opposite sign. After colliding, their sum would be zero.
If you watch billiards you would see kinda the same thing going on.
Edit: completely messed this up. Other comments are more correct
In a perfectly elastic collision, both kinetic energy and momentum are conserved. In a perfectly inelastic collision, kinetic energy is not conserved (because it is converted to heat), but momentum is always conserved.
So let's say you have 2 objects of the same mass traveling toward each other at the same speed. In a perfectly elastic collision, the objects will "bounce" off each other, going back in the opposite directions. In that case momentum is conserved (as you note, it's a vector quantity, so before and after, the total momentum of the system is 0), but so is kinetic energy, because you still have 2 masses traveling at the same speeds (think about if you have a Newton's cradle and pull both end balls up and drop them at the same time - they'll both bounce back).
In a perfectly inelastic collision, both masses will essentially crush and come to a complete stop where they collide. Again, momentum is conserved (it's still 0 before and after the collision), but kinetic energy is not conserved, because it's all converted to heat in the 2 objects.
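Writing that out for two equal masses m approaching at speed v each:

    \text{Before (both cases):}\quad p = mv - mv = 0, \qquad KE = \tfrac{1}{2}mv^2 + \tfrac{1}{2}mv^2 = mv^2

    \text{Perfectly elastic:}\quad v_1' = -v,\; v_2' = +v \;\Rightarrow\; p' = 0,\; KE' = mv^2

    \text{Perfectly inelastic:}\quad v_1' = v_2' = 0 \;\Rightarrow\; p' = 0,\; KE' = 0 \;\text{(all of } mv^2 \text{ goes to heat and deformation)}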
> I feel that introducing deformation or other energy dissipation to the equation kind of takes it out of the “high school physics” realm, right?
I don't think so. In my high school physics class I learned about both fully elastic and fully inelastic collisions. The math basically works out the same in the fully inelastic collision case, which I did here, https://news.ycombinator.com/item?id=40628932
The elasticity doesn't matter for the equivalence of the two scenarios (head-on collision at 50 vs. brick wall at 50). We've assumed, albeit implicitly, that in the head-on case, the cars have equal and opposite momentum. Whether the collision is perfectly elastic, perfectly inelastic, or somewhere in between, a car will experience the same forces in the two scenarios (assuming, of course, that the elasticity is equal in the two cases).
The bounce in mechanics is defined by the coefficient of restitution, which is how elastic something is. Technically it's related to the energy lost in the collision.
Perfectly elastic objects bounce apart with mirrored velocities. No energy is lost.
Perfectly inelastic objects just stop. All of the energy is dissipated through noise, heat, and deformation.
Momentum is conserved in both.
For drivers ed, cars are almost perfectly inelastic.
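For completeness, the standard definition:

    e = \frac{v_2' - v_1'}{v_1 - v_2} = \frac{\text{relative speed of separation}}{\text{relative speed of approach}}, \qquad e = 1 \text{ (perfectly elastic)}, \quad e = 0 \text{ (perfectly inelastic)}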
Turn on Lockdown mode. I have had that turned on for my iPhone since it was first available and it is not really that much of an inconvenience. I think the only time I notice is when someone tries to share a photo album with me on family trips.
Penicillin never kills viruses; only bacteria. This is true of all antibiotics. Antivirals are a separate category (and don't tend to work as well AFAIK).
This is the point I would emphasize. A kernel module that parses configuration files must defend itself against a failed parse.
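A sketch of what "defend itself" looks like, in ordinary user-space Rust standing in for kernel code (the key=value format is made up): malformed input degrades to safe defaults instead of crashing the caller.

    use std::collections::HashMap;

    fn parse_config(raw: &[u8]) -> HashMap<String, String> {
        let mut cfg = HashMap::new();
        // Reject non-UTF-8 input outright instead of assuming validity.
        let Ok(text) = std::str::from_utf8(raw) else {
            return cfg; // empty map = safe defaults
        };
        for line in text.lines() {
            // Skip anything that doesn't match the expected key=value shape.
            let Some((key, value)) = line.split_once('=') else {
                continue;
            };
            cfg.insert(key.trim().to_string(), value.trim().to_string());
        }
        cfg
    }

    fn main() {
        // Garbage input (think: a zeroed or truncated channel file)
        // yields defaults, not a crash.
        assert!(parse_config(b"\xff\xfe garbage").is_empty());

        let cfg = parse_config(b"threshold=42\nbogus line\n");
        assert_eq!(cfg.get("threshold").map(String::as_str), Some("42"));
    }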