Did you miss the part about contraband? You quoted it, after all.
Firing on neutral shipping is not the same as intercepting it and inspecting it for war materiel or other contraband. Preventing shipping from reaching or leaving Kuwaiti ports is not the same as inspecting it for war materiel or other contraband.
Iran has been requiring shipping to submit to inspection and tolls via an adjusted route through the strait. And they can certainly deem oil contraband if they are allowed to do food and medicine, as quoted.
Ships that don’t stop get fired upon. That’s what happens in a blockade.
Kuwait is a US ally and hosts American military bases. Stopping shipping to there is very clearly legitimate.
> And they can certainly deem oil contraband if they are allowed to do food and medicine, as quoted.
Wikipedia is defining what the term blockade means, not what constitutes a legal blockade.
Medicine is not allowed to be blockaded. Food is not allowed to be blockaded if there is a shortage.
Relevant parts of the San Remo Manual:
> 102. The declaration or establishment of a blockade is prohibited if:
> (a) it has the sole purpose of starving the civilian population or denying it other objects essential for its survival; or
> (b) the damage to the civilian population is, or may be expected to be, excessive in relation to the concrete and direct military advantage anticipated from the blockade.
> 103. If the civilian population of the blockaded territory is inadequately provided with food and other objects essential for its survival, the blockading party must provide for free passage of such foodstuffs and other essential supplies, subject to...
> 104. The blockading belligerent shall allow the passage of medical supplies for the civilian population or for the wounded and sick members of armed forces,...
You're correct that blocking oil would be allowed (barring situations where the civilian population needs it to survive, which doesn't apply here) if this were a legal blockade. However, Iran is not complying with the other rules around blockades, which is the issue.
That only applies to Iranian traffic. It would in fact be an act of war for the U.S. to blockade maritime traffic of countries it's not already at war with.
Cuban tankers have hardly left the island’s shores for months. Oil-rich allies have halted shipments or declined to come to the rescue. The U.S. military has seized ships that have supported Cuba. And in recent days, vessels roaming the Caribbean Sea in search of fuel for Cuba have come up empty or been intercepted by the U.S. authorities.
Last week, a tanker linked to Cuba burned fuel for five days to get to the port in Curaçao but then left without cargo, according to ship-tracking data. Three days later, the U.S. Coast Guard intercepted a tanker full of Colombian fuel oil en route to Cuba that had gotten within 70 miles of the island, the data showed.
While President Trump has pledged to halt any oil headed to Cuba, the Trump administration has stopped short of calling its policy a blockade.
But it is functioning as one.
Sure, economic sanctions have been in place for a long time, but the US has started seizing full ships.
Yeah, my overall spending at McDonald's declined significantly after the 2022 bout with inflation. It's not just that prices went up (that was inflation; they mostly all went up), but that they leaned into trying to appeal to people who would already have been spending lots of money.
It would have been one thing just to make the food taste better, but they went the opposite direction and made it take forever to prepare and serve. For me the whole point of McDonald's was to get in, eat something consistently decent, and get out quickly. So they actually made things worse, because I already had plenty of other spots to get "nice" food if that's what I was in the mood for.
I'm not going to say bring back the heat lamps per se but there was a lot of value to people like me in having a restaurant that delivered on the original promise of "fast" food...
Yeah, the first EV I bought was an otherwise boring Hyundai Kona. Great car and great EV but you could easily mistake it for the gas version if you weren't paying attention.
And surprisingly to me it is even pretty damn efficient despite being originally designed as a gasoline-powered vehicle.
You're assuming that the customer growing their own fruit could do it at lower overall cost. Logistics are fairly inexpensive all things considered; if they really represent 90% of the total cost of fruit, it says a lot for how far agribusiness has driven down the cost of the other 10%.
I think for some types of produce, a home garden is an easy win when it comes to cost. Sure, there are things that are very difficult (labor intensive, water intensive, etc.) to grow, so avoid those. But tomatoes, cucumbers, lettuce, beans, potatoes, and peas are pretty easy to grow, and seed stock can be purchased cheaply. I haven't done this as an adult because I am so excessively lazy (but it's on my to-do list for this year, finally), but we had a vegetable garden when we were kids, and between my mom, my sister, and me, it was very manageable. We ended up growing more than we could use and gave some away to neighbors.
Digital computers were named after the humans whose jobs they automated out of existence.
They were invented to reduce cost of computation, not to eliminate the probability of error per se. Ask a Windows 11 user, they'll tell you computers still make errors.
The whole field of reproducible builds is only a field because compilers have also historically had trouble producing binary artifacts with guaranteed provenance and binary compatibility, even when built from the same source code.
If I assign a bug fix ticket to a human developer on my team, I won't be able to precisely replicate how they go about solving the bug but for many bugs I can at least be assured that the bug will get solved, and that I understand the basic approach the assigned dev would use to troubleshoot and resolve the ticket.
This is an organizational abstraction but it's an abstraction just the same, leaky as it is.
> The whole field of reproducible builds is only a field because compilers also have had trouble
No, this is not comparable. The reason reproducible builds are tricky is not because compilers are inherently prone to randomness; it's because binaries often bake in things like timestamps and the exact pathnames of the system used to produce the build. People need to stop comparing LLMs to compilers, it's an embarrassingly poor analogy.
> The reason reproducible builds are tricky is not because compilers are inherently prone to randomness
And neither are LLMs. Having their output employ randomness by default is a choice, not a requirement, just as embedding timestamps into builds is a choice that can be unwound if you want the build to be reproducible.
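To make that concrete, here's a toy sketch (the "build" function and metadata format are made up for illustration, not any real build tool): embedding a timestamp makes two builds of identical source differ, and dropping it makes them bit-for-bit reproducible.

```python
import hashlib
import time

def build_artifact(source: str, embed_timestamp: bool) -> bytes:
    """Toy 'build': bundles the source with optional build metadata."""
    metadata = f"built-at: {time.time()}\n" if embed_timestamp else ""
    return (metadata + source).encode()

source = "int main(void) { return 0; }"

# With an embedded timestamp, two builds of identical source will
# (almost certainly) hash differently:
a = hashlib.sha256(build_artifact(source, embed_timestamp=True)).hexdigest()
b = hashlib.sha256(build_artifact(source, embed_timestamp=True)).hexdigest()

# Unwind that choice and the build becomes bit-for-bit reproducible:
c = hashlib.sha256(build_artifact(source, embed_timestamp=False)).hexdigest()
d = hashlib.sha256(build_artifact(source, embed_timestamp=False)).hexdigest()
assert c == d
```

That's the whole idea behind conventions like SOURCE_DATE_EPOCH in real reproducible-build tooling: pin or remove the nondeterministic inputs and the output becomes stable.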
> People need to stop comparing LLMs to compilers, it's an embarrassingly poor analogy.
They are certainly different things, but if you are going to criticize LLMs it would be better if you understood them.
> Having their output employ randomness by default is a choice, not a requirement
This is not really meaningfully true. E.g. batching, heterogeneous inference HW, and even differences in model versions can make a difference in what result you get, and these are hard to solve.
But again, these are all things that are also true of build systems.
GCC 16.1 vs. 15.2 will get you differences. GNU ld vs. gold vs. mold vs. lld will get you differences. Whether or not you employ LTO or other whole-program optimization gets you differences.
Have you never debugged a race condition that worked on your machine but didn't work in prod, based only on how things ended up compiled in the final binary?
I'm not saying these are identical but there's a lot more similarity than you all seem to understand. And we've made compilers work well enough that a lot of you believe that they give very routine, very deterministic outputs as part of broader build systems even though nothing could be further from the truth, even today.
It is random if you select it to be (temperature != 0, etc.).
It is not random if you don't use random sampling in its output generation.
If the whole thing were actually stochastic, then prompt caching would be impossible, because having a cache of what the previous tokens transformed into to speed up future generation would be invalidated by the missing random state.
Look at llama.cpp, you can see what samplers are adjustable and if you use samplers that employ randomness you can see what settings disable the random sampling. Or you can employ randomness but fix the seed to get reproducible results.
An LLM is a set of structured matrix multiplies and function applications. The only potentially non-deterministic step is selecting the next token from the final output and that can be done deterministically.
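A minimal sketch of that last step, with made-up logits standing in for a real model's forward-pass output (everything here is illustrative, not any particular inference library's API):

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(logits, temperature=1.0, rng=random):
    """Turn logits into a token index. The stochastic part lives here,
    in the sampler, not in the matrix multiplies that produced the logits."""
    if temperature == 0:
        # Greedy decoding: plain argmax, fully deterministic.
        return max(range(len(logits)), key=lambda i: logits[i])
    probs = softmax([l / temperature for l in logits])
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

logits = [1.2, 3.4, 0.5, 2.8]  # pretend model output over a 4-token vocab

# Greedy: the same logits always yield the same token.
assert all(sample_token(logits, temperature=0) == 1 for _ in range(100))

# Even random sampling is reproducible if you fix the seed:
rng_a, rng_b = random.Random(42), random.Random(42)
a = [sample_token(logits, rng=rng_a) for _ in range(5)]
b = [sample_token(logits, rng=rng_b) for _ in range(5)]
assert a == b
```

This is the same knob llama.cpp exposes: pick greedy (or a fixed seed) and repeated runs over the same logits give identical output.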
Even if it were reproducible, realistically most people are using some service like Claude that makes no guarantee that the model or hardware didn't change. Which is fine, it doesn't need reproducibility.
This is interesting though, I didn't know PyTorch had a debug mode for reproducibility.
Even with this debug mode, a different batch size can give different results for the same input - e.g. your tensor multiplies might use different blocking, hence different associativity.
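A tiny pure-Python illustration of why reduction order matters: floating-point addition is not associative, so a different blocking of the same sum (which is effectively what a different batch size can cause) changes the result even with identical inputs.

```python
# Floating-point addition is not associative. Summing the same four
# numbers in two different orders gives two different answers.
xs = [1e16, 1.0, -1e16, 1.0]

# Left-to-right: 1e16 + 1.0 rounds back to 1e16, so one 1.0 is lost.
left_to_right = ((xs[0] + xs[1]) + xs[2]) + xs[3]   # -> 1.0

# Pairwise blocking: the big terms cancel first, so both 1.0s survive.
pairwise = (xs[0] + xs[2]) + (xs[1] + xs[3])        # -> 2.0

assert left_to_right == 1.0
assert pairwise == 2.0
assert left_to_right != pairwise
```

Scale that up to large tensor contractions split differently across blocks or devices and you get exactly the kind of run-to-run drift described above.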
I posted that to show that at a bare minimum, there is some pretty extreme nondeterminism (though probably mild in effect) in even the most pedestrian GPU inference, unless you go to the extreme of using the debug mode and taking the potential performance hit.
This is not my claim, you're veering wildly off course here. I'm merely responding to the common, tiresome and, to be frank, stupid analogy of LLMs to compilers.
It's an abstraction for you, not the rest of that developer's team, who have to reproduce the same solution even after said developer has "won the lottery", so-to-speak.
inb4: "Don't worry, just use GPT to make the docs"
but even if it didn't it still provided a binary that is mathematically proven (assuming no compiler bugs, which if found are fully fixable, unlike LLMs) to correspond to the code you wrote.
> Like, if waterfall of a project can be done in 2 weeks, is it agile now?
Sure. The thing is, the waterfall guys would tell you it's impossible to do it in 2 weeks because you need to have written down everything first. "Thousands of pages" was the term they used.
Agile guys would point you to the Agile manifesto which would lead you to "working code over documentation" and "people over process".
A 2 week period to go from initial spec to product in a user's hands to capture feedback and make changes from there is much closer to agile than to waterfall. In fact it's more or less exactly some older versions of Scrum (which didn't permit deviating from the planned sprint user stories midway through the sprint, instead changes influenced the subsequent sprint).
The DoD's 2167 standard from the late '80s mentions the following documentation that should be produced as part of the development process (section 6.2 and Appendix D):
This is a particular artifact of the government system process. These are contracted pieces of work that Company A would deliver, Company B would administer, and Company C would be contracted out for additional work. Further, all specifications were created ahead of time because changes would cost extra. (Anyone who has done government contracting can talk to the shenanigans involved with it - I have not lived in this world for a long time.)
That said, we still do ad-hoc versions of many of these. For example, a system/segment specification today is an OpenAPI document between microservices. Most larger SaaS companies have the equivalent of a Software Configuration Management plan - Who can change terraform or a GHA, what are the standards that they conform to (linter, peer review standards).
> This is a particular artifact of the government system process.
Yes, a government process meant to implement the waterfall approach.
If you look at Dr. Royce's paper which originated the concept, he was very explicit that it required upwards of thousands of pages of documentation to be written up front, if you were doing it "right".
By the time the required documentation had all been written, there should be essentially nothing left to do but to actually type out the punch cards as specified and turn them into a system of compiled programs.
Now, this appealed to government because it put documentation in place that was felt to be more viable for contracting processes, but ever since Dr. Brooks chaired a 1987 Defense Science Board study on the issues already facing the DoD trying to implement waterfall methods, they've been trying to restructure their software acquisition methods to pursue better outcomes rather than more concretely defined outputs.
Of course it's still a tremendous challenge for them even now, and it remains common to see defense acquisition projects that will say "Agile" to the right people even as they prescribe a full waterfall-style 'system engineering V' approach behind the scenes.
The ad-hoc responses that the commercial space often involves are usually more appropriate, believe it or not. They get process added when process is helpful, but not before it is helpful.
(and my link to the Royce paper isn't working anymore - I need to fix that!) - I am planning on a followup that takes the last 3 years of change into account.
Yes, that's why his paper essentially said "you're going to have to build two." One to figure out the mistakes you can't predict ahead of time, and the second for the real deal. Do your best to get through the first one as fast as you can, but still deliberate enough that there won't be any bugs left behind for the second one.
But a third or subsequent iteration was definitely a failure in his mind, and even building two (or one-and-a-half, depending on your framing) was simply a concession to the reality that actual implementation would run into unpredictable issues, for much the same reason computer science had already learned the halting problem was undecidable.
I have a book with his paper and to the extent he speaks of iteration as desirable, it is only iteration between succeeding steps of the overall 'waterfall'. E.g. in an ideal world you iterate between system requirements and their decomposition into software requirements (updating the system reqs as necessary to ensure the software reqs you're writing are accounted for). Likewise for system requirements to software analysis, and so on.
As you point out, he mentions that this concept is “risky and invites failure”, and goes on to allow for re-refinement and re-implementation of the software requirements and program design phases based on experience from the testing phase. But he goes on to emphasize: “However, I believe the illustrated approach [waterfall with reimplementation post-test] to be fundamentally sound”.
The rest of his paper then goes into the detail of these phases, and he specifically notes early on that there is a natural question, of how much documentation is enough? And he gives a very clear answer: “My own view is ‘quite a lot’; certainly more than most programmers, analysts or program designers are willing to do if left to their own devices.”
It's not an accident that the DoD software acquisition requirements based on waterfall as mentioned by the other comments were so numerous or onerous. As Dr. Royce puts it:
- “The first rule of managing software development is ruthless enforcement of documentation requirements”
- When asked to review software projects the first thing he does is review the documentation. If the documentation is seriously lacking his recommendation is to replace the whole project management and shift 100% of work to fixing documentation.
- “Management of software is simply impossible without a very high degree of documentation”
- If procuring a $5M hardware device he'd expect a 30 page spec to suffice. If procuring a $5M software system, he'd “... estimate a 1,500 page specification is about right.”
I wasn't pulling "thousands of pages" from thin air. It's right in his paper and he's extremely clear about this. It's not an off-hand remark, he goes on to justify why he thinks that mass of documentation is required.
I want to emphasize that he's writing from the problems he was facing in his era. Computer systems necessarily were room-sized installations, interactive computer time was incredibly expensive, but paper was cheap. There was no Internet to speak of to share powerful and efficient open-source libraries. There was no "continuous deployment" or "continuous integration".
The system had to work well pretty quickly after the subsystems were built, installed, integrated and tested or this newfangled computer system that cost millions in 1960s dollars to run per month would be nothing more than a money sink while the nerds tried to troubleshoot.
Nowadays we don't develop under those kinds of strictures, and we've put tremendous investments into allowing real useful systems to be developed using the simpler processes that even back then were much easier to develop around, when they could be used (Dr. Royce's paper even leads off by describing the 'nice' process as he explains why you can't use it as system size grows). The voluminous test documentation he proposes is something we pretty much do write today, but we call it test suites, and we grow them along with the program rather than writing them all months before coding.
I think there's a lot to be said for what a modern-day waterfall process might look like with the technologies and iteration speeds available to us now, the only problem is that I think it will still resemble agile more than it would resemble the process Dr. Royce described.
Indeed, I came across this not as a contractor but in my university textbook :) I wanted to collect the document list that forms the "thousands of pages" mentioned above in the waterfall model.
Yeah, and that's helpful too, because we typically talk about caricatures of both agile and waterfall, and I think people truly don't realize that waterfall isn't simply "think about what you do before you do it", nor is agile "code first; think later".
If people truly understood what waterfall is and how it's supposed to be carried out, they'd be less apt to recommend it. Nothing prevents a team from employing planning in an agile effort, but doing this doesn't turn it into a waterfall project and you shouldn't describe it as such.
If anything, teams that refuse to use agile (thinking it inherently means meetings, story points, and not looking beyond 14 days) often end up choosing something even simpler, like cooking up a simple design doc of 4-6 pages before implementing it.
But that's still not waterfall, it's just another of the infinite renditions of agile methods that are out there, just without the consultancies issuing formal training certs.
At one point or another in my career (gov contracting) I had to write or co-write or review every one of these. And without fail, within 6-12 months they would be stale/inaccurate/obsolete/… The truth is, even on projects where sufficient time is allocated to write these, there is never (literally) time allocated to keep them up to date.
LGPL, yes. However, the rest is false. GPL would have made a bit more of their fork open source, but Apple would otherwise have had no problem forking it and not allowing contributors. KHTML developers often complained in those early years that Apple's fork was in theory open source, but it quickly became so different that it wasn't possible to figure out which changes were worth porting and which weren't.
I suspect a lawyer could look at the state of WebKit and Chrome these days and conclude there is so little original code remaining that it can be re-licensed to closed source (see a lawyer for what this complex bit of law means) - worst case they only have to rewrite a small amount.