Spring Boot, Django, Rails, Laravel, ASP, ... it's not unreasonable to expect Rust to have an equivalent to something that most popular languages have.
I think this particular case would be difficult to refactor even in an IDE like PyCharm, which afaik is the best at refactoring Python (might be outdated).
PyCharm understands pytest fixtures and if this is really just about a single fixture called "database", it takes 3 seconds to do this refactoring by just renaming it.
I believe the idea is that those identifiers are semantically related: the fixture decorator inspects the formal parameter names so that it can pass the appropriate arguments to each test when the tests are run. A sufficiently smart IDE and/or language server would thus know that these identifiers are related, and performing a rename on one instance would rename all of the others.
And maybe you were being facetious, but an IDE is an “Integrated Development Environment”.
In PyCharm: move the cursor to any occurrence or definition of the "database" fixture, press the "Rename" hotkey (Shift+F6), delete the old name, type the new name, and press Enter to confirm.
A single fixture, yes. If there are many fixtures of the same name in different test modules, it wouldn't work, but that's not how I understood the problem in the blog post, which says
>rename every instance of a pytest fixture from database -> db
Every instance of a fixture, not every instance of all fixtures of the same name.
I don’t think that this will handle what the author wants, though. If each function takes db as an argument and uses it internally, those don’t count as references that you can change globally, right?
Location: France, Switzerland
Remote: Yes
Willing to relocate: no
Technologies: Java, Kotlin, Rust, Python; enough front-end and database skills for regular features; APIs, testing, Continuous Integration, GitHub Actions, GitLab; Content writing, Public Speaking
I've been in IT for more than 20 years, 17 of them in development (developer, team lead, software architect, then solution architect) and 6 of them in developer advocacy.
My journey has been deeply rooted in Java, Spring, and "DevOps", where I’ve focused on improving software architecture, testing, and continuous delivery practices. I’m passionate about sharing my knowledge and experiences, on top of writing and designing software. My goal is to help teams and organizations bridge the gap between development and operations, enabling them to adopt modern technologies and practices with greater confidence.
Feel free to ping me at nicolas at frankel dot ch.
You mean pleas from some members of the community not to do so and many more pleas from many more members of the community to the contrary. It is impossible to do "the community's bidding" when different parts of the community demand contradictory things. So we try to cater to the majority while giving the minority sufficient time to adapt.
SecurityManager is a case in point. You and a few others claimed that removing it would have a large harmful impact. We proceeded in our usual cautious manner: before doing anything irreversible, we first tested the more widely believed contrary hypothesis by putting a warning in place in JDK 17. A lot of people have adopted JDK 17 or later over the last few years, and those of them who are using SecurityManager have seen the warning about its planned removal. As most people believed, the warning did not uncover some hidden widespread use of the feature (and your campaign didn't manage to find widespread support for your position).
It is perfectly alright to be dissatisfied when a decision doesn't go your way, but it's not alright to present it as if the decision went against "numerous pleas" without mentioning that, however numerous (which, in this case, was fewer than ten people), those pleas came from a small minority. Had we gone the other way, far more people would have been dissatisfied.
Never seen anyone ask for you to get rid of security manager. Fix and improve it, sure. But no, it had to go and we did not get a suitable replacement. RIP.
Of course you did -- many, many, many times -- you just didn't know that that's what you were seeing.
People rarely ask directly for an old feature that they're not using to be removed. Rather, they ask for new features that the old feature interferes with. SecurityManager imposed a significant tax on nearly every new feature, as it interacts with just about everything, slowing down much of the work on the JDK. SecurityManager was (actually, still is, as it has not been removed yet) simultaneously one of the most expensive features and one of the least used features in the JDK. Everyone who asked for significant new features effectively asked for SM to be removed, as they could not be delivered -- certainly not as expeditiously -- with it in place.
In fact, as early as JDK 8, new features (such as streams) had to work around SM (simply documenting that it could not be effectively used in combination with the new feature; few people noticed because few people used SM). With time, it became ridiculous to keep saying that SM is ineffective in combination with more and more new features (e.g. virtual threads, and the same would have been the case for Leyden [1]), while it still kept exacting a tax on most uses; it had to be removed if we were to give people the features they were asking for. The fact that more robust, effective, and relevant security mechanisms are now not only available but have grown far more popular in Java applications than SM ever was only meant that whatever arguments there may have been to keep it, and so delay and significantly complicate new features, were weak.
[1]: I shudder to think how Leyden could be done while keeping SM in some functional state.
> The fact that more robust, effective, and relevant security mechanisms are now not only available but have grown far more popular in Java applications
Like what? Or are you referring to isolating the entire JVM? That's pointless for plugin systems that don't want to deal with the ridiculous amount of overhead that'd entail.
> SecurityManager was (actually, still is as it has not been removed yet) simultaneously one of the most expensive features and least used features in the JDK
Well, yeah. It was hard to implement; I'd know, because I was integrating it into a plugin framework I was building at the time. But we eventually abandoned that idea when we heard the announcement that SM was being deprecated with no replacement.
Would've been great to be able to support sandboxed plugins for the game we were working on.
Like all the security mechanisms that are used by virtually all security-sensitive Java applications, from cgroups/containers and firewalls to encryption protocols and serialization filters (they're not using SM).
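As one concrete example: a process-wide serialization filter is a couple of lines of plain JDK API. A minimal sketch (the allowed package is made up):

```java
import java.io.ObjectInputFilter;

public class FilterSetup {
    public static void main(String[] args) {
        // JEP 290 filter: allow an explicit set of classes to be deserialized
        // ("com.example.dto" is a placeholder) and reject everything else ("!*").
        ObjectInputFilter filter = ObjectInputFilter.Config.createFilter("com.example.dto.*;!*");
        ObjectInputFilter.Config.setSerialFilter(filter);
    }
}
```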
> That's pointless for plugin systems that don't want to deal with the ridiculous amount of overhead that'd entail.
You cannot offer secure server-side plugins (i.e. on shared machines) with a mechanism like SM, and even client-side plugins now use different, simpler, mechanisms.
> Would've been great to be able to support sandboxed plugins for the game we were working on.
It's not that hard to offer some in-process client-side sandboxing using modules and some instrumenting class loaders without SM. It may not be very robust, but SM wasn't as robust as some imagined it was, either.
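For a sense of what that looks like, here's a minimal sketch of such a (weak) filtering class loader; the deny list is made up, and this is illustrative, not a security boundary:

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.util.Set;

// Refuses to resolve classes from packages the host doesn't want to expose
// to plugins. Easy to build; it raises the bar, nothing more.
class PluginClassLoader extends URLClassLoader {
    private static final Set<String> DENIED = Set.of("java.io.", "java.net.", "java.nio.file.");

    PluginClassLoader(URL[] pluginJars, ClassLoader parent) {
        super(pluginJars, parent);
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        for (String prefix : DENIED) {
            if (name.startsWith(prefix)) {
                throw new ClassNotFoundException("API not exposed to plugins: " + name);
            }
        }
        return super.loadClass(name, resolve);
    }
}
```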
There aren't in-process isolation mechanisms robust enough for server-side use -- where DoS attacks are a common threat -- and even on the client it would be both safer and simpler to sandbox the entire process or make no security claims (many client-side programs with plugins opt for the latter).
> Like all the security mechanisms that are used by virtually all security-sensitive Java applications, from cgroups/containers and firewalls to encryption protocols and serialization filters (they're not using SM).
Not reasonable to implement across all platforms users may choose to run a game on. This discussion is about a game client, not the server. A replacement solution would have to work everywhere Java runs, and should not impact the user's system in any noticeable way.
> but SM wasn't as robust as some imagined it was, either.
The docs, at the time, implied SecurityManager was the way to go to run untrusted code, similar to Applets.
Since there is no reasonable alternative and the JDK team has seemingly given up on this feature, we've instead opted to require all plugins to be source-available, manually vet the code, build and distribute via our CI, and only allow the client to load plugins signed by our CI.
> This discussion is about a game client, not the server.
Well, I didn't know that's what this discussion is about, but sure, we can talk about that. :)
> A replacement solution would have to work everywhere Java runs, and should not impact the user's system in any noticeable way.
Except that 1. there's little demand for such a system in the JDK (and, as I've said, you can do something reasonable on your own with modules, class loaders, and a tiny bit of instrumentation), and 2. I don't think anyone offers such a general mechanism, especially as different programs have different robustness requirements, some of which cannot be achieved in-process (e.g. web browsers, the prime example of programs running untrusted code, these days use process isolation).
> The docs, at the time, implied SecurityManager was the way to go to run untrusted code, similar to Applets.
Yes, that was the best way; whether or not the best was good enough is another matter.
> Since there is no reasonable alternative and the JDK team has seemingly given up on this feature we've instead opted to require all plugins to be source-available, manually vet the code, and build & distribute via our CI & only allow the client to load plugins signed by our CI.
I would say that even SM may not have given you what you need, but yeah, with or without it, some other approaches are required for better fortification. I would recommend additionally looking into some sort of basic sandboxing -- which will control which APIs are exposed to the plugin -- using modules and instrumentation (that can inspect/restrict some operations that are exported by, say, the java.base module).
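If it helps, a rough sketch of the module-layer half of that suggestion (module and class names are made up); this controls which host modules the plugin can even see, though it's not a hard security boundary on its own:

```java
import java.lang.module.Configuration;
import java.lang.module.ModuleFinder;
import java.nio.file.Path;
import java.util.Set;

public class PluginLoader {
    // Load a plugin as its own module layer, so it only reads java.base plus
    // whatever its module-info explicitly requires -- nothing else of the host.
    public static Class<?> loadPluginMain(Path pluginDir) throws ClassNotFoundException {
        ModuleFinder finder = ModuleFinder.of(pluginDir);
        ModuleLayer boot = ModuleLayer.boot();
        Configuration cf = boot.configuration()
                .resolve(finder, ModuleFinder.of(), Set.of("com.example.plugin"));
        ModuleLayer layer = boot.defineModulesWithOneLoader(cf, ClassLoader.getSystemClassLoader());
        return layer.findLoader("com.example.plugin").loadClass("com.example.plugin.Main");
    }
}
```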
The success of VSCode plugins, microservices, containers, and out-of-process VSTs has proven that, on modern hardware, people favour stability and improved security over in-process plugins.
.NET also dropped their version of SecurityManager when they did the Core rewrite.
All of those plugins have shit latency. This model is not suitable for games at all. The plugins need to be able to render their own graphics, which happens at 60~120 fps. Also, have you ever tried running ~200 JVMs on the same machine?
> Also have you ever tried running ~200 JVMs on the same machine?
This is one of my pet peeves with the "garbage collection" model of memory management: it does not play well with other processes in the same machine, especially when these other processes are also using garbage collection.
With manual memory management (and also with reference counting), whenever an object is no longer being used, its memory will be immediately released (that is, the memory use of a process is always at the minimum it needs, modulo some memory allocator overhead). With garbage collection, it will be left around as garbage, and its memory will only be released once the process decides that there's too much garbage; but that decision does not take into account that other processes (and even the kernel for its page cache) might have a better use for that memory.
This works fine when there's a single process using most of the memory on the machine, and its garbage collection limits have been tuned to leave enough for the kernel to use for its caches (I have seen in practice what happens when you give too much memory to the JVM, leaving too little for the kernel caches); but once you have more than a couple processes using garbage collection, they'll start to fight over the memory, unless you carefully tune their garbage collection limits.
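For concreteness, "carefully tune" today mostly means capping each JVM and picking a collector that uncommits unused heap back to the OS; a sketch with real HotSpot flags but illustrative sizes:

```
# Cap the heap and let ZGC return memory unused for 30s back to the OS.
java -Xms256m -Xmx2g -XX:+UseZGC -XX:ZUncommitDelay=30 -jar service.jar
```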
It would be really great if there were some kernel API which allowed for multiple processes (and the kernel caches) to coordinate their garbage collection cycles, so that multiple garbage collectors (and in-process caches) would cooperate instead of fighting each other for memory, but AFAIK such API does not exist (the closest I know of is MADV_FREE, which is good for caches, but does not help with garbage collection).
Contrary to common belief, if memory strain is an issue with GC, it is even worse with algorithms that cannot cope with fragmentation or have to keep going down into the OS for memory management.
Optimizations to avoid fragmentation, lock contention, or stop-the-world domino effects in reference-counting algorithms eventually end up being a poor implementation of a proper GC.
Finally, just because a language has a GC doesn't mean it doesn't also offer language features for manual memory management and reference counting if one feels like it.
While Java failed to build on the learnings from Eiffel, Oberon, and Modula-3, others did, like D, Nim, and C#.
I'm building this JEP for automatic heap sizing right now to address this when using ZGC: https://openjdk.org/jeps/8329758
I did in fact run exactly 200 JVMs, running a heterogeneous set of applications, and it ran totally fine. By totally fine I mean that the machine got rather starved of CPU and the programs ran slowly due to having 12x more JVMs than cores, but they could all share the memory equally without blowing up anyway. I think it's looking rather promising.
> With manual memory management (and also with reference counting), whenever an object is no longer being used, its memory will be immediately released
Well, this is a fundamental space vs. time tradeoff: reclaiming memory takes time, usually on the very same thread that would be doing useful work we care about. This is especially prominent with reference counting, which is the slowest of them all.
Allocators can make reclamation cheap/free, but not every usage pattern fits nicely, and in other cases you are fighting fragmentation.
> Well, this is a fundamental space vs. time tradeoff: reclaiming memory takes time, usually on the very same thread that would be doing useful work we care about.
Precisely. Which is fine if you don't have to share that space with anyone else; the example which started this sub-thread ("running ~200 JVMs on the same machine") is one in which that tradeoff goes badly.
But it wouldn't be as much of an issue if the JVMs could coordinate between themselves (and with other processes on the same machine), so that whenever one JVM (or other things like the kernel itself) felt too much memory pressure, the other JVMs could clean some garbage and release it back to the common pool of the operating system.
It might even be a problem without garbage collection. Linux might be a big culprit here with its tendency to overcommit. Some signal would be welcome that says “try to free some memory” - I believe OSX has something like that.
> Which games are you shipping in Java that depend on Security Manager's existence?
None, because it didn't pan out like I described above. There's no sense continuing to develop something using a technology that is due to be removed. The project was abandoned.
This was for a third-party Old School RuneScape client which supports client side Java plugins. The current approach is to manually vet each plugin and its update before it is made available for users to install.
> Most games are deployed as services nowadays, they have a full network between their rendering and game logic.
Networked games do not communicate at 60~120 fps; that's just not how it works. Writing efficient netcode with client-side prediction is important for a reason.
> Yes, that is what Cloud Services do all the time across the globe in Kubernetes.
Yeah, on the servers they pay several grand a month for. Not on end user craptops which is where games obviously run.
You might want to take a look at GraalVM’s Isolates. They are lightweight security boundaries that can even limit things like memory usage and CPU time, for Java, JS, and Python, among other languages, all within a normal JVM process.
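For flavor, here's a rough sketch using the GraalVM polyglot API's resource limits (a close cousin of the isolates feature; the statement budget is made up, and real limits need tuning):

```java
import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.ResourceLimits;
import org.graalvm.polyglot.Value;

public class GuestSandbox {
    public static void main(String[] args) {
        // Abort guest code once it has executed a budget of statements.
        ResourceLimits limits = ResourceLimits.newBuilder()
                .statementLimit(1_000_000, null) // null = apply to all sources
                .build();
        try (Context ctx = Context.newBuilder("js").resourceLimits(limits).build()) {
            Value result = ctx.eval("js", "6 * 7");
            System.out.println(result.asInt()); // prints 42
        }
    }
}
```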
Author here. Yes, it's all about Conway's Law. Unfortunately, it's still a law that governs all companies. Fun fact: I stopped working on projects because I mostly had to find technical solutions to organizational issues.
PS: I wouldn't call it "political" (which has a negative connotation) but "organizational" (a statement of fact)
This is the purest end goal, though. If you've got a large monolith, or you directly consume a shared backend along with a large number of other teams, how do you get there? BFF is just one attempt to decouple with another layer of abstraction. Eventually you can strangle the problem on the backend.
CDC on that backend would be one way. You could stream the data out to the various teams, who could then materialise a view of it within their application and craft an API over that view.
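A minimal sketch of the consuming side, assuming Debezium-style change events on a Kafka topic (topic name and value handling are made up; a real version would deserialize the change-event schema):

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ConcurrentHashMap;

public class OrderViewMaterializer {
    private final Map<String, String> view = new ConcurrentHashMap<>();

    public void run() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "team-a-order-view");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("backend.public.orders")); // hypothetical CDC topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> r : records) {
                    if (r.value() == null) view.remove(r.key()); // tombstone: row deleted upstream
                    else view.put(r.key(), r.value());           // upsert: latest row state
                }
            }
        }
    }
}
```

The team's own API then reads from this local view instead of calling the shared backend directly.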
Funny to see this here. More than twenty years ago, I did my Master's thesis in architecture using VRML to display a building I had designed and the surrounding district.
I especially loved the Level-Of-Detail object, which allows you to provide different versions of an object depending on its distance from the viewpoint.
I still have the files somewhere, but last time I checked, I found no software to read them.
>As a simple example, imagine a small team of, say, 6 developers:
>
>The team has created and now supports a mobile app, an Alexa skill, three separate web apps, fifteen microservices, and a partridge in a pear tree.
If a team of 6 developers created 15 microservices, I'm afraid that they understand neither the pros nor the cons of microservices, and they should be fired instantly.
To be fair, I did the same thing 2 years ago, and I'm sure I'm not the only one. Drank too much microservice koolaid and ended up with 12 services. Consolidated it down to 7 once the initial micro high wore off.
How does "number of microservices" correlate to "complexity of your infrastructure"?
I'd also note that the OP had no concept of time in it - it didn't include a timeframe, so we aren't capped by developer man/work-hours.
If we are talking instantaneous support, there is no telling how this correlates - one complex microservice may be too complex for many devs, or many microservices may be simple enough to be supported by just one.
If we are talking about either development hours or code-familiarity, then I'm not sure how we are capped - are we going to limit the number of LoC a dev can write, the repos they commit to, etc., as well?
Six full time developers who wrote 15 microservices that each do one thing well should not be fired. The burden of microservices lies in the underlying infrastructure required to host them. So, assuming we have a competent infrastructure engineer, devops engineer, security engineer, site-reliability engineer, and a handful of other hats being done well... your six developers can write and maintain far more than 15 microservices assuming that each service has a clearly defined single-objective scope.
Because infrastructure complexity != software development complexity.