I don't think preemptive vs. cooperative is what matters here. What Rust's abstraction allows is for a function to act as a mini-executor itself, polling multiple other futures directly instead of delegating that to the runtime. That lets it harbor subtle bugs, like ceasing to poll a future without cancelling it, which is, yeah, dangerous if one of those futures can block other futures from running (another way you could come at this is to say that maybe holding locks across async points should be avoided).
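To make the "mini-executor" point concrete, here's a std-only sketch (no async runtime; the no-op waker, the `Steps` future, and the function names are all mine, purely for illustration). The function polls two futures itself, and once the first finishes it simply stops polling the second — which is abandoned, not cancelled:

```rust
use std::future::Future;
use std::pin::{pin, Pin};
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Hand-rolled no-op waker so no runtime is needed at all.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// Completes after `left` further polls; records how often it was polled.
struct Steps {
    left: u32,
    polled: u32,
}

impl Future for Steps {
    type Output = u32;
    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
        self.polled += 1;
        if self.left == 0 {
            Poll::Ready(self.polled)
        } else {
            self.left -= 1;
            Poll::Pending
        }
    }
}

// This function is its own mini-executor: it polls both futures directly
// instead of handing them to the runtime. Returns (polls it took `fast`
// to finish, how often `slow` was ever polled).
fn run() -> (u32, u32) {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fast = pin!(Steps { left: 1, polled: 0 });
    let mut slow = pin!(Steps { left: 10, polled: 0 });
    loop {
        if let Poll::Ready(n) = fast.as_mut().poll(&mut cx) {
            // `fast` is done, so we return -- and simply stop polling
            // `slow`. `slow` is NOT cancelled (it isn't dropped); it just
            // never runs again, and anything it holds stays held.
            return (n, slow.polled);
        }
        let _ = slow.as_mut().poll(&mut cx);
    }
}

fn main() {
    // `slow` was polled exactly once, then silently abandoned mid-flight.
    assert_eq!(run(), (2, 1));
}
```

Nothing in the type system distinguishes "abandoned but alive" from "still scheduled", which is exactly why this failure mode is easy to write and hard to spot.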
> holding locks across async points should be avoided
Wait, what would be the point of using locks then? It seems to me there's no point taking a lock if you're going to release it without hitting any await points, because nothing can interfere in between anyway. Or do you mean cases where you have both cooperative and preemptive concurrency in the same program?
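The hazard case is a lock taken before an `.await` and released after it. Here's a std-only sketch of that (no runtime; the no-op waker, `YieldOnce`, and the function names are mine for illustration): while the task is suspended at the await, the guard is stored inside the future's state, so any other task the same executor would poll cannot take the lock:

```rust
use std::future::Future;
use std::pin::{pin, Pin};
use std::sync::Mutex;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Hand-rolled no-op waker so no runtime is needed at all.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// Stand-in for e.g. a pending I/O operation: suspends exactly once.
struct YieldOnce(bool);

impl Future for YieldOnce {
    type Output = ();
    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<()> {
        if self.0 {
            Poll::Ready(())
        } else {
            self.0 = true;
            Poll::Pending
        }
    }
}

static SHARED: Mutex<i32> = Mutex::new(0);

async fn holds_lock_across_await() {
    let _guard = SHARED.lock().unwrap(); // lock taken...
    YieldOnce(false).await; // ...and still held while suspended here
}

// Returns (was the lock unavailable while the task was suspended,
//          was it available again after the task finished).
fn run() -> (bool, bool) {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut task = pin!(holds_lock_across_await());
    // First poll: the task takes the lock, then suspends at the await.
    assert!(task.as_mut().poll(&mut cx).is_pending());
    // Any other task polled now cannot take the lock.
    let blocked_while_suspended = SHARED.try_lock().is_err();
    // Second poll: the task resumes, finishes, and drops the guard.
    assert!(task.as_mut().poll(&mut cx).is_ready());
    (blocked_while_suspended, SHARED.try_lock().is_ok())
}

fn main() {
    assert_eq!(run(), (true, true));
}
```

So the lock is doing real work here (it protects across the suspension), but if the task holding it is abandoned or the blocked task is the one that would wake it, you get exactly the kind of stall the parent describes.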
They're going to want to know about panels being installed, and panels will be installed regardless of whether the regulation says they can be, so the regulation will tend towards allowing installation while at least encouraging reporting (regulation that forbids installation will mainly just discourage reporting).
That does seem like an orthogonal question to me. That we are wealthier and better off now doesn't really say much about the raw capabilities of people now vs. then, given that technology obviously plays a truly gigantic role in the wealth of modern times (and compounds on itself: many technologies make developing new technology easier).
I hear zero-trust is a trendy buzzword at the moment, so let's apply the basic idea here: a hard shell with a soft, chewy center is not a security posture that works in practice. You need to harden at every level. RMM uber-admin credentials are the ultimate soft center: compromise those and you can kill the entire IT infrastructure. The only alternative is to distribute access: have multiple smaller IT teams that administer small parts of the system, with more 'central' roles providing services but not having full control of most machines. It's not a fun option, but it might also work a lot better if each team can actually adjust policies for the environment they're working in, as opposed to trying to maintain one completely unified policy for an entire multi-thousand-employee company. And for critical systems, I would seriously question the wisdom of having a remote 'wipe and reformat' button at all.
At a bare minimum, your backup systems should have a completely disjoint set of credentials from your main systems, stored and controlled differently, ideally by a separate team if you have the resources.
(And the arguing becomes a problem when IT stops considering its job to be solving problems for users within some constraints and starts considering its job to be enforcing those constraints. This also mixes badly with incompetence, which tends to turn everything into a tedious tick-box exercise that neither improves security nor solves users' problems. It's not a good time to have an IT department that can't resist any new security checkbox a vendor offers, yet can't figure out how to work any of its fancy tools to make life even the slightest bit smoother for its users.)
This is my experience as well. The average quality in ROS is rock-bottom, so while it contains all of the things you might want from a robotics framework, you pay dearly for them. I would also say that while the concept of distributed network nodes is a convenient one for robotics development, stringing your control loops through such a structure is a recipe for disaster and should be avoided in your system design as much as possible (and one of the problems with ROS as a framework is that it heavily encourages this).
Papers in any journal (even or especially Nature, depending on your prior) should be treated with significantly more skepticism than statements in reputable textbooks (which also should not be taken as complete gospel). A paper is a 'hey, we did a thing once, here's what we think it means' from a source that is very strongly motivated to do or find something novel or interesting. Even if you trust that there is no fraud, papers are not something to approach uncritically.
The US government does spend a pretty large amount of its budget paying some of the debt back. More now that there have been multiple brinkmanship games of threatening a default for political points.
You also have to understand that the foundation of money is debt, in the sense that if we paid back all the debt, money wouldn't continue to exist. The sum total of debt exceeds the total quantity of money.
You're asking for exactly the same cake. You want the GPL to pass through this process, but not the proprietary licenses that the original GNU tools were washing away.
(the paradox of copyleft is that it does tend to push free software advocates in a direction of copyright maximalism)
I believe 3 and 4 are what that argument is based on: (3) on the basis that the output does not _reproduce_ the input, and (4) on similar grounds, that output which isn't at all the same as the input data doesn't affect the market for the original image. (I think the latter is the more debatable one, but in general the existing cases have struggled at the early stages because the plaintiffs have not been able to point to output that is actually a copy of their part of the input, and this does matter.)