Some of the early lawyerly uses of AI have been no bueno.[0] Yet the legal profession's need to produce knowledge from huge amounts of text aligns so obviously with what LLMs do...
I think most everyone would agree that using early LLMs (and personally I'd still consider current LLMs to be early) in legal contexts is ill-advised, at best. Circle back to this question in 5 years and I think the response will be very different.
Wouldn't want to be the guy who pushed this particular commit. It's ironic that the company that is supposed to prevent this sort of thing causes the biggest worldwide outage ever. Crowdstrike is finished. Let's hope this will result in at least a small increase in desktop Linux market share.
When the world calls for blood against your organization, it's a test of the organization's character: will they throw a scapegoat under the bus (even if there is a directly responsible person) or will they defend their staff, accept fault, and demonstrably improve process?
More importantly, blame the companies that enabled auto-updates straight from a vendor into production rather than having a validation process. This sort of issue can happen with any vendor, and penalising this one won't help the next time it happens.
It’s both. If you’re an engineer and you push out shitty code that takes down 911 systems and ambulances, you f’ed up. Push back against processes that cause harm, or have the potential to cause harm. You are ultimately responsible for your actions. No one else. The excuse of “I was just following orders” has been dead and buried since WW2.
Yeah, ideally management should know better. But management aren’t usually engineers. Even when they are, they don’t deal with the code on a day to day basis. They usually know much less about the actual processes and risks than the engineers on the ground.
If one of the people I manage is not up to the task, the fault is mine. I hired them. I should set up a system of hard-gained trust and automation to avoid, or at least minimize, their fuckups. When fuckups happen, they are my fuckups. Critical systems don't survive on trust alone, obviously. If I don't set up the teams and the systems properly, my bosses will also take the blame for having put me in that position.
I'm not advocating for lower layers to avoid responsibilities. But if a head needs to roll, you should look above. That said, people are hardened by fuckups, so there are usually better solutions than rolling heads.
Right. In one sense, what we're talking about is different ideas of how companies / teams work. There's a wonderful book called "Reinventing Organizations" by Laloux that I recommend to basically everyone. In it, Laloux lays out a series of different organisational structures which have been invented and used throughout the ages. The book goes from early tribes where the big man tells everyone what to do (e.g. mobsters), to rigid hierarchies with fixed roles (the church, schools), to modern corporations with a flexible hierarchy, and on to some organisational structures beyond that.
The question of "who is ultimately responsible" changes based on how we see the organisation. In organisations where the chief decides everything, it's up to the chief to decide whether to place blame on someone. In a modern corporation, people at the bottom of the hierarchy are shielded from the consequences of their actions by the corporation. But there's also a weird form of infantilisation that goes along with that: we don't actually trust people on the ground to take responsibility for the work they do. All responsibility goes up the management hierarchy, along with control, power and pay. It's sort of assumed that people who haven't been promoted are too incompetent to make important choices.
I don't think that's the final form of how high-functioning teams should work. It's noble that you're willing to put your head on the chopping block, but I think it's also really important to give maximal agency to your employees. And that includes making people feel responsible and empowered to fix problems when they see them. You get more out of people by treating them like adults, not children. And they learn more, which in the long run is usually better for everyone.
I agree that if a company has a bad process, employees shouldn't be fired over it. But I also think if you're an employee in a company with a bad process, you should fight to make the process better. Never let yourself be complicit in a mistake like this.
> It’s both. If you’re an engineer and you push out shitty code that takes down 911 systems and ambulances, you f’ed up.
This is wrong. If a company is developing that kind of software, it is the company's responsibility to provide a certain level of QA before releasing it. And no, it's not that "engineers are pushing out shitty code", it's that the shitty company allows shitty code to be deployed to customers' machines.
Many major companies have post-mortem reviews for this kind of thing. Most of the big failures we see are a mix of people being rushed, detection processes failing, and miscommunication or misunderstanding of the effects of a small change.
One analogy is rounding: a single rounding makes no difference to a transaction, but multiple systems rounding in the same direction can have a large-scale impact. And it's not always money being rounded; it can be error handling. System A stops at the error, system B carries on, and it turns out they're no longer in sync.
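A toy illustration of the rounding half of that analogy, using Python's `decimal` module (the amounts and transaction count here are made up):

```python
from decimal import Decimal, ROUND_HALF_UP

# A half-cent rounded up is invisible on one transaction, but if every
# system rounds the same direction it compounds across millions of them.
amount = Decimal("10.005")
n = 1_000_000

rounded = amount.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)  # 10.01
drift_per_txn = rounded - amount          # 0.005 per transaction
total_drift = drift_per_txn * n
print(total_drift)                        # 5000.000
```

The same shape applies to the error-handling case: a per-event divergence that is individually negligible still leaves two systems far apart after enough events.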
Which guy is it? The person who pressed the button? The manager who gave that person more than one task that day? The people who didn't sufficiently test the detection process? The people who wrote the specs without sufficient understanding of the full impact? The person who decided to layoff the people who knew the impact three months ago?
Unlikely, just as Solarwinds wasn't finished when they distributed malware that got government agencies hacked. You underestimate the slow turning radius of giant company purchasing departments.
Enterprise Linuxes also employ Crowdstrike or similar "security" products as a mandatory part of their IT deployments. Often (always?) this is because companies want certification for their security processes, in order to sell to governments or large corporations that require it.
Why the fuck didn't MSFT just do canary rollouts? No update should be rolled out to a billion devices at once until it's baked on a million devices for a bit, and that only after baking on 10,000 devices for a bit.
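A minimal sketch of the staged-rollout idea described above; the ring sizes and the health check are made up for illustration and don't reflect any vendor's actual process:

```python
# Ring-based (canary) rollout: promote an update to the next, larger ring
# only if the current ring stays healthy after baking for a while.
RINGS = [10_000, 1_000_000, 1_000_000_000]   # devices per stage (illustrative)

def rollout(update, healthy_after_bake) -> int:
    """Return how many devices received the update before any halt."""
    deployed = 0
    for ring_size in RINGS:
        deployed += ring_size                 # ship to this ring
        if not healthy_after_bake(update, ring_size):
            return deployed                   # halt: blast radius = rings so far
    return deployed

# An update that crashes as soon as it hits real hardware gets caught
# in the first ring instead of taking down a billion machines.
blast_radius = rollout("bad-channel-update", lambda u, n: False)
print(blast_radius)   # 10000
```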
Crowdstrike only broke Windows this time. But look around: they shipped a bad update on Linux earlier this year too (although that one only broke some of the Linux installs).
But what would it take? Another crash or two? Boeing is like one of those big supertankers that take forever to change course - even if the iceberg was a kilometre away, there's nothing they could do but ram into it.
This is the only viable future for space travel. In orbit assembled space ships with nuclear thermal propulsion. Travel time to Mars with conventional chemical propulsion takes just way too long.
I can really see this happening with molten salt reactors finally getting traction. China has already built a demonstrator and is now building a full-scale MSR, and now the US is finally building a demonstrator as well.
No, molten salt reactors are not the right technology for nuclear propulsion. The idea of a nuclear thermal rocket engine is to heat up very light molecules to very high temperatures, and so achieve higher exhaust velocities than chemical rockets. Plug a higher exhaust velocity into the rocket equation and you need less fuel mass for the same cargo mass. In practice, the best nuclear thermal rockets achieve a lower temperature than chemical rockets, but they can dedicate that heat to hydrogen (H2) alone, rather than to the combustion products of chemical rockets (such as H2O or CO2), so overall the exhaust velocity can be approximately twice as high.
Still, temperature is quite important, you want the core of the reactor to run as hot as possible. You are limited by the fact that you don't want the core to disintegrate. The NERVA project [1] achieved temperatures in excess of 2200 K.
Molten salt reactors are designed to reach about 1000 K. That gives up most of the benefit of using a nuclear reactor. You would still beat chemical rockets, but only by 25%, not by a factor of 2. Why would you do that? If you build on the NERVA project and use TRISO fuel (which was not available at the time) you can end up with a specific impulse of more than 1000 s, which is 2.2 times higher than what the best chemical rockets can deliver, and 2.85 times higher than SpaceX Starship.
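The payoff can be sketched with the Tsiolkovsky rocket equation; the 6 km/s delta-v and 100 t dry mass below are illustrative assumptions, not mission figures:

```python
import math

# Tsiolkovsky: delta_v = ve * ln(m0 / mf), with ve = Isp * g0.
# Propellant needed for a fixed delta-v shrinks fast as Isp rises.
G0 = 9.80665          # m/s^2, standard gravity
DELTA_V = 6_000.0     # m/s, assumed mission delta-v (illustrative)
DRY_MASS = 100_000.0  # kg, assumed ship + cargo (illustrative)

def propellant_mass(isp_s: float) -> float:
    ve = isp_s * G0                      # exhaust velocity, m/s
    mass_ratio = math.exp(DELTA_V / ve)  # m0 / mf
    return DRY_MASS * (mass_ratio - 1)   # propellant, kg

for label, isp in [("chemical (hydrolox)", 450), ("NTR + TRISO", 1000)]:
    print(f"{label:20s} Isp={isp:4d} s  propellant={propellant_mass(isp)/1000:,.0f} t")
```

With these assumptions the Isp-1000 engine needs roughly 84 t of propellant against roughly 290 t for the chemical stage, which is the "less fuel mass for the same cargo mass" point in concrete numbers.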
There are non-nuclear alternatives, particularly inward of the asteroid belt.
PV in space can be made very thin. The absorption length for photons in CdTe, for example, is just 0.1 microns. Without having to be mechanically robust against wind and rain, great gossamer PV arrays could have very high power/mass ratios. These could drive plasma engines with high Isp.
None of that has anything to do with reducing travel times to Mars unless your entire payload is on the order of a couple of pounds.
That's like replying to someone saying it takes too long to drive from New York to Seattle, by saying that we could build an efficient 1000 mile per gallon car, that travels at .01 miles per hour. How efficient the vehicle is isn't the slightest bit useful to solve their complaint.
A high thrust to weight ratio when the weight is a couple of pounds isn't useful. What's useful is having a huge amount of thrust that's large enough to shove multiple tons of mass at high accelerations.
How large would such a construction need to be to accelerate 100 tons at 1g? Maybe someone could do the math for us. I assume it's on the order of dozens/hundreds of miles long per dimension and would be completely infeasible compared to just using an engine with high thrust to begin with.
[Edit]
Here's some rough math.
From wiki, assume a typical ion engine can produce 150mN of thrust from 4,000 W of power input.
Using a space station solar panel as an example of solar collection in space, each space station solar panel is 420 square meters in size and produces 31,000 W of power.
One space station solar panel would then provide (31,000 W / 4,000 W) * 150 mN = 1,162.5 mN, or about 1.16 N of thrust.
Accelerating 100 tons at 1g takes 996,402 Newtons of force.
To generate that much thrust, you would then need 996,402 N / 1.16 N ≈ 857,000 space station solar panels' worth of power.
As one space station solar panel is 420 square meters, that requires 857,000 * 420 ≈ 360,000,000 square meters of solar panels.
Assuming square construction, each side would need to be about 19,000 meters, or roughly 12 miles, long.
I assure you, just using high thrust engines makes far more sense than building a PV-based ship scaled up so far that its array is a dozen miles on each edge. At least for any time soon ..
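The rough math can be re-run in Python with the same ISS-panel and ion-engine figures assumed above; watch the unit conversion, since 1,162.5 mN is 1.1625 N (not 0.0011625 N), and that sets the final scale:

```python
import math

# Thrust available from one ISS-style panel driving ion engines.
panel_power_w = 31_000            # W per panel (figure used above)
engine_thrust_n = 0.150           # N per 4,000 W of input (figure used above)
engine_power_w = 4_000
thrust_per_panel = (panel_power_w / engine_power_w) * engine_thrust_n  # ~1.16 N

# Force to hold 100 (long) tons at 1 g.
force_needed = 101_605 * 9.80665  # ~996,400 N

panels = force_needed / thrust_per_panel       # number of panels needed
area_m2 = panels * 420                         # 420 m^2 per panel
side_miles = math.sqrt(area_m2) / 1609.34      # side of a square array

print(f"{panels:,.0f} panels, square array ~{side_miles:.0f} miles per side")
```

So a square array on the order of a dozen miles per side: smaller than a naive mN-as-N slip would suggest, but still wildly impractical next to a high-thrust engine.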
Why would you need to accelerate anything at 1 g? That's a ridiculously high acceleration for getting to Mars. What matters more is the total delta-V, and if it can deliver it in time short compared to the transit time to Mars.
High Isp solar electric systems would not exploit the Oberth effect (likely they would start in high Earth orbit) so they don't have a high acceleration need from that.
If you want to accelerate to 15 km/s in 1 week, that's 2.5 milligees.
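That figure is easy to check:

```python
# 15 km/s of delta-v spread evenly over one week of thrusting.
dv = 15_000.0                     # m/s
week = 7 * 86_400                 # seconds in a week
a = dv / week                     # ~0.025 m/s^2
milligees = a / 9.80665 * 1000
print(f"{milligees:.1f} milligees")   # ~2.5
```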
Accelerating/decelerating at 1G the entire journey would be the perfect scenario. Not only would that be the shortest travel time, it would also maintain gravity inside the ship the whole time. If this is not the ultimate goal being worked towards, then we may as well just give up now. Nuclear is where it's at: it has the best power-to-weight ratio of any power generation known to man.
It's about as realistic as propelling the vehicle with unicorn farts. In particular, the kinds of nuclear propulsion being discussed in this thread could not do it. Solid core nuclear thermal rockets using hydrogen have an Isp of about 1000, so they could accelerate a vehicle at 1 gee for less than an hour.
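A rough check of the "less than an hour" claim, assuming a very generous mass ratio of 20 (i.e. 95% of the vehicle is propellant; the mass ratio is an assumption for illustration):

```python
import math

# Total delta-v budget is ve * ln(mass ratio). Holding a constant 1 g
# of acceleration spends that budget in dv / g seconds.
G0 = 9.80665
ve = 1000 * G0                    # exhaust velocity for Isp = 1000 s, m/s
mass_ratio = 20                   # assumed: 95% of initial mass is propellant
dv = ve * math.log(mass_ratio)    # ~29.4 km/s total budget
burn_minutes = dv / G0 / 60       # time at a constant 1 g
print(f"{burn_minutes:.0f} minutes")   # ~50
```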
The power/weight ratio of nuclear rockets actually sucks, compared to chemical rockets. Conveying heat through a solid/fluid interface is awkward and slow compared to just making it in situ by combustion.
Indeed. Probably a combination of nuclear-thermal and nuclear-electric (ion drive, I think it's otherwise called). The nuclear-thermal would provide the initial boost, then the ion drive could accelerate continuously for half the way and decelerate for the other half. That would get a ship to Mars in a fraction of the time.
I live in one of those upscale glass tower estates in central London. There is fancy valet underground parking for some 500 cars. There are 2 slow chargers in total, and they don't even work 50% of the time. Owning an EV is a fucking joke for most people and I don't see that changing any time soon. Rebuilding the entire world's power grid is gonna take forever.
His death, while tragic, is unfortunately not surprising. Apparently he had massive addiction issues for a long time. In 2019 he was in a coma from a ruptured colon caused by opioid abuse, and doctors gave him a 2% chance of making it, so he was living on borrowed time.
I'd say that if a company doesn't want to invest time in interviewing a candidate in person and instead uses automated tests, then the candidate has every right to respond in kind by using AI to beat those automated tests.