The solution is to do more with less. Devs are taking advantage of a wealth of abstraction and being buried under it. But they actually don't need it.
You do not need some 3rd party library to write a command line interface for your python code. You do not need three different projects just to maintain your list of installed packages or virtual environment. And you don't need a package to wrap around loading environment variables.
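For instance, loading environment variables needs nothing beyond the standard library. A minimal sketch (the variable names here are made up):

import os

# Read configuration straight from the environment; no dotenv-style wrapper required.
database_url = os.environ["DATABASE_URL"]        # required: raises KeyError if unset
debug = os.environ.get("DEBUG", "0") == "1"      # optional, with a default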
Just learn how to do these things in a simpler way, without the extra deps. Will there be some more boilerplate code? Yes. But it will be your code, and it will be so simple that you will learn to write it very well. In addition: make your code loosely coupled and composable, and overload existing code with additional functions when feasible, rather than starting a new project from scratch. Press your company to let you contribute back to OSS, as this lets somebody else do more of the maintenance, and has the side effect of reducing the total number of software projects. (It immediately saves you money and time. Kind of stupid that more of us don't do this already.)
Simplifying is hard. But when you get good at it, you find you're much more buoyant.
I come from a Python background so libraries were always just a pip install away.
In the past year, however, I've been working on a large C++ codebase (couple million lines) and the result has been a considerably greater amount of "roll your own."
This has filtered back into my Python. If it's not in the standard library, I don't install it unless I really need it.
My boss always tells me that you can probably write it faster than a library, something I never used to believe, until I tried it. I needed to check which of two version strings, in XX.XX.XX.XXXX format, was bigger. I tried the most recommended version-number library, then I tried writing my own solution, the simplest one I could think of:
def version_compare(v1: str, v2: str) -> int:
    """
    Compares two versions.

    Parameters
    ----------
    v1: str
    v2: str

    Returns
    -------
    int
        1 if v1 is greater, -1 if v2 is greater, 0 if equal.
    """
    for el1, el2 in zip(v1.split('.'), v2.split('.')):
        if el1 != el2:
            return 1 if int(el1) > int(el2) else -1
    return 0
My code was faster by like 20x. Libraries are bloated and you probably only need a small subset of the functionality, so "write your own code" has become my mantra.
Another benefit, that IMO is as big if not bigger, is that you can easily make changes to this code, whereas changes on the library side may be hidden behind configuration, outright impossible, or, god forbid, require a wrapper anyway. In the above example, if you wanted this instead to return the version number string that is larger, that's an easy change, and one whose intent is obvious from the PR.
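For illustration, that change might look something like this (a sketch based on the snippet above, not the commenter's actual diff):

def version_max(v1: str, v2: str) -> str:
    # Same element-wise comparison as above, but return the larger version string itself.
    for el1, el2 in zip(v1.split('.'), v2.split('.')):
        if el1 != el2:
            return v1 if int(el1) > int(el2) else v2
    return v1  # equal versions: either one will do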
As a long time C++ developer, the modern trend of library package managers has always seemed insane to me. In C and C++ integrating with a dependency is no trifling matter. Beyond just simply making your executables bigger, you're also making the build process more complex, you're locking yourself to an interface, and you probably need to keep special considerations in mind; for example, some libraries need you to call a function when you load them. Therefore, people don't make trivial (i.e. things anyone can do in a few minutes) libraries, and you don't add dependencies unless you really need to.
When I started playing with C/C++ the first thing I complained about was how difficult package management was. Conan is not as simple as cargo or pip. It's often simpler to find a header only library and plop it into your repo, but you still have to modify build configurations to include it.
This is the first time I thought to myself "Maybe that's actually a good thing."
When you're prototyping, a package manager can be really convenient, because it lets you try things out really quickly without investing a lot of time. But once you have most things in place you really just want something that will remain stable for a long time.
Another, slightly more general approach turns the string into a tuple of numbers and then uses tuple comparison. It gets "1.2.3" > "1.2" right, but it also yields "1.2.0" > "1.2", which may or may not be what you want:
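(The original snippet isn't reproduced here; a rough reconstruction of that approach might look like this:)

def version_compare(v1: str, v2: str) -> int:
    # Turn "1.2.3" into (1, 2, 3); tuples then compare element-wise and numerically.
    v1 = tuple(int(part) for part in v1.split('.'))
    v2 = tuple(int(part) for part in v2.split('.'))
    # Standard Python 3 idiom for a three-way comparison: returns 1, -1, or 0.
    return (v1 > v2) - (v1 < v2)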
(In Python 2 days, the last line could have been `return cmp(v1, v2)`, but sadly cmp() and .__cmp__() were removed in Python 3.)
And both of these implementations demonstrate an advantage of using someone else’s library: it has probably had more care put into it. And implementing these sorts of things yourself often leads you to compromise on functionality—though at the same time, using a misfitting library also leads to compromises. It can always go both ways.
My experience is that it’s frightfully common for people to unintentionally oversimplify their implementation, or to forget that their implementation is too specific and improperly use it in a more general context. For example, to think they are only dealing with /^[0-9][0-9][.][0-9][0-9][.][0-9][0-9][.][0-9][0-9][0-9][0-9]$/ (and if the syntax really is that simple, then simple str comparison would be enough, as another comment suggested), but then discover somewhere down the line that perhaps some of the two-digit components can actually grow to three digits, or perhaps some suffix is added, or an additional component; or perhaps just use it for a different type of version string somewhere else in the program. Perhaps years later. (Mind you, if it violates your format expectations it may be better to immediately raise ValueError rather than using potentially-different semantics as you’d get if you used, say, packaging.version.parse().) As it stands, the provided implementation didn’t mention its limitations in its documentation. That is bad and makes it much more likely to be used improperly. It should have said something like “compares version numbers in 'XX.XX.XX.XXXX' format”.
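For instance, one way to make that expectation explicit (an illustrative sketch, not the commenter's code) is to validate the format up front and raise ValueError on anything else:

import re

def parse_version(v: str) -> tuple:
    # Fail loudly on anything outside the documented 'XX.XX.XX.XXXX' format,
    # rather than silently comparing with different semantics.
    if not re.fullmatch(r"\d{2}\.\d{2}\.\d{2}\.\d{4}", v):
        raise ValueError(f"unexpected version format: {v!r}")
    return tuple(int(part) for part in v.split('.'))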
(I’m not speaking for BYO or library philosophies, merely describing considerations and caveats of both.)
If the goal is to always have the best thought-out and least-fragile solution for any given code operation, then the trade-offs are that the library solution possibly won't fit quite as well as it might, and it will almost certainly be slower.
For our purposes, we never have strings that are other than the form XX.XX.XX.XXXX so there was no reason to generalize. Which improves the speed, makes it easier to read, and provides all the more reason not to use a library.
Meanwhile I've had to make changes to hand-rolled version comparison code in multiple systems because the original developers didn't account for the second segment of the version being greater than 9.
I'm absolutely not saying that you should have used a library, but pointing out the other side of the problem. If they had used an existing library to handle the versions, it wouldn't have had this issue. And test cases wouldn't help here because if they didn't think to code for multiple digits in the second spot, they likely wouldn't have thought to test for it.
There's enough benefits and drawbacks to library vs roll-your-own that I don't rigidly stick to one way or the other.
Now think about how (some) of these libraries came to be:
Someone like you wrote something like the above for their needs. Then they needed it again in another program. Then in another. They figured, why not put this into a library I can use across all my projects. Maybe this was internal only. Maybe at some point they thought: why not push this out there to github and upload it to a public repo and let other people use it.
Then someone (maybe the same person) needs not just XX.XX.XX.XXXX with actual numbers. For whatever reason their standard is vXX.XX.XX.XXXX[-RCX] or other variations that aren't just a simple number format.
Of course, if you never ever have a need for that kind of version number (or whatever other "problem area" we apply this principle to) and only ever write one piece of software, you're fine with roll-your-own. But this is (one way) how libraries get "bloated". The fact that they are libraries (or frameworks, if we think a little wider) means they need to take care of "all the problems" and not just one very specific one. Especially if it's open source and not an in-house-only library, which attracts lots of people with lots of different variations on the same problem. It's very hard to hold a very opinionated stance of "this is what you get, change your version number standards, I'm not changing my library". Of course you can do that, but your library will probably end up in obscurity. The "can deal with anything you throw at us" library is probably going to be more popular, and if there's no Linus-type guy doing code reviews before accepting contributions, you end up with sub-optimal code very quickly.
I've done the opposite before. I came to a code base that had various variations on sending email notifications for batch jobs. Like literally 20 jobs with 20 variations (well, maybe it was 15 different ones with different types of bugs and quirks). I extracted them into an in-house library for sending a standardized version of these emails. In the end it had accumulated quite a few features for adjusting the email template, attaching various formats of output, automatically zipping it up, etc. I'm pretty sure you might call it "bloated". In reality it made things better (consistent) for the users receiving those emails, easier for us to understand errors, and faster to code new jobs: just call one standard function that everyone knew how to use and that all the bugs had been ironed out of over time. Import library, call function. Instead of deciding which other project to copy the email-sending code from, finding out where in the mess of code it was, throwing it away again because "Oh yeah right, that project's version of it can't deal with this type of attachment", etc.
It's a numpy-style docstring. PyCharm can be configured to auto-generate them. I guess I just add the types to be thorough.
Ideally I want to generate API documentation via a tool. A long time ago I wrote a script to parse the AST and generate markdown from the docstrings and function information, but it was mediocre, and I haven't wanted to use Sphinx because it's too heavy and doesn't seem to produce ideal markdown output.
As described it probably is, but using string comparison would make what is already fairly fragile (they must have the same number of components) even more fragile and sensitive to change (they must now always have dots and digits in the same place, and the failure mode is now completely invisible). More stuff to be aware of. In practice I find precious few actually fixed-width formats like this, and more than a few pieces of software have struggled to go from version 9 to 10 or 99 to 100 because of bad assumptions. Or even date formats, two digit year was fine until 2000 when you either wrapped around to zero again, or went up to 100 (and some date APIs have modelled years as the number of years since 1900 in this way).
> You do not need some 3rd party library to write a command line interface for your python code. You do not need three different projects just to maintain your list of installed packages or virtual environment. And you don't need a package to wrap around loading environment variables.
Couldn't agree more.
I'm going through the experience now of switching from Python to Lua and this experience of trying to do more with less is really eye-opening. I find that lots of things where Python has trained me to expect them to be one-liners are now idioms of half a dozen lines of code. ...but as I use them, I find myself introducing a lot of variation around each half-dozen-loc-idiom. Then the thought occurs to me: Should I make this into a utility function? And the answer is usually no: Because it would have to support all this variation and create a complex interface. Having to understand the interface would introduce as much complexity for the person trying to read/maintain my code as having to tackle half-dozen-loc-idioms where there could be one-liners.
Languages that default to simplicity and make-your-own-batteries force that thought process on developers, making them better developers in the process, able to produce more maintainable code.
Languages that default to batteries-included try to spare developers the work of having to think about such things, and the end result is way more messy.
My favorite aspect of lua is that it gives you all the tools to implement anything and do it simply. Python, C#, and many languages push you to find the “right” way to do things, and spend time searching through documentation to find the shortcomings of each option and which method may be optimal for your situation. It’s like walking into a hardware store with a specialized tool for everything—you spend your time worrying if you’re really using the right thing. Lua feels like a garage workshop with all the basic tools you need to quickly implement something on your own—master a few and only go hunting for something new when you’ve got no other choice.
That is an odd take on Python particularly, which has been widely lauded as a good language for exploratory coding - easy and quick to put something together and iterate on it. You seem to be claiming the opposite here.
> That is an odd take on Python particularly, which has been widely lauded as a good language for exploratory coding - easy and quick to put something together and iterate on it.
I think the point your parent was trying to make is that, at least in Python, there is a lot of emphasis put on doing things the Python way, e.g. list comprehensions over loops.
For myself, the python experience is... mediocre.
Ruby feels so much better. Between the embracing of functional programming and the pry gem, it feels much nicer to use. Sadly, it's not the strong horse in the race.
Lot of emphasis by whom? Yes, certain constructs are subtly encouraged (by documentation, community blogs, etc.), but there is no python police to enforce the One True Way(tm). You can still write your quick&dirty prototype using not-quite-pythonic-python if you prefer, allowing you to move and iterate just as fast as in Lua or Ruby.
At least that's been my experience with Python (after reluctantly getting used to the indentation as part of syntax :) ). Of course, different languages will appeal to different people, so I do not doubt your claim that Ruby suits you better. But the post I was replying to above was putting Python in the same bag as C/C++, which to me seems very inaccurate.
You just described why I preferred Assembly early in my career, and C/C++ for the majority. Making your own batteries enables exponential opportunities to improve and optimize a project, simply because one knows all the parts, they are not externally sourced "batteries-included" features.
I've also fled to Lua. LuaJIT is "finished". Love2D has a few dependencies, but they are big and slow-moving. Love does releases only every couple of years. I occasionally grab some code from the internet, but I vendor it in, and now it's my code.
Is the natural conclusion of this argument that we should be writing assembly because that has the fewest batteries included? Or is there a counterveiling force?
But most people don't pull in 100s of random libraries, just the few most popular and needed ones. The strawman where people download tens of libraries just to avoid writing one-liners mostly does not exist, outside of JavaScript land.
Having to ask your colleague is an extra cost - for common tasks (like the example of parsing of command line arguments used in the parent comment) you should be using the exact same code as your colleagues unless there is a good reason to make an exception (which there rarely is), so you should be able to read and maintain others' code just as yours because it's written in the same way as you would, making the same style of calls to the exact same functions. And furthermore, the third party libraries (at least for Python) are so standard that a new hire would have used the exact same library if they ever had to do the same task - for most things there's going to be just a few reasonable ways (and often just a single idiomatic one) to do that thing in the language, which sometimes used a built-in library and sometimes uses a popular third-party library.
If your company has a dozen tools which need the same functionality, "rolling your own" for each codebase (or worse, copy/pasting it without being able to update them from a single place) is obviously not a good solution - the choice is between making and using an internally maintained library, or using an open source one.
Then you mercilessly flog the people in your organization who write code like this. When I was coming up, the greybeards were bastards, but they taught me the right way to go about things.
I've rarely had a hard time understanding code that was written for a purpose rather than importing a library. Except in rare cases (like dates, big APIs) a library is often unnecessary cruft. Should I go pull a fat logger when all I need to do is send data over TCP to rsyslog? As long as I consider all of my use cases, and properly defend against the cases I won't use, I can write faster and often more understandable code. An example of a great, but often unnecessary, library is Pandas. I've seen Pandas imported for trivial CSV work, simple data manipulation, etc. just because it was "familiar" rather than the right tool. Pandas is so, so, so heavy that if you're not using it properly for data-science-oriented work you are probably wasting your time... people seem to not understand this distinction.
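For the rsyslog case, a bare-bones sender really is only a few lines of standard-library code. A sketch (the host, port, and <134> priority are assumptions, not anyone's production code):

import socket

def send_to_rsyslog(message: str, host: str = "localhost", port: int = 514) -> None:
    # Newline-framed syslog line over plain TCP; <134> = facility local0, severity info.
    with socket.create_connection((host, port)) as sock:
        sock.sendall(f"<134>{message}\n".encode("utf-8"))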
I've had people cut juniors/data scientists/etc loose on a "be-all-end-all" framework and ended up with the worst code I've ever seen. Edges so sharp just looking at them will cut you. I've seen code that imports a library for something as trivial as line counting! I've even been on the response team for a security incident involving several internal packages that imported the wrong thing from the package managers! I've had to re-write important library functions for a package that was imported and then deprecated, because the author just went off to do something else. Libraries are great, but they are not a fire-and-forget tool. When you use them, you must use them sparingly, correctly, and securely. Importantly, you need to pray a library will be maintained over the life of your software.
"Documentation and community support" is worth absolutely, positively, nothing if the usage of the library enabled the writing of poor code. Hire better developers, and create strong coding standards at your company. Let software engineers do what they do best and choose the right place to include a library after evaluating all options. Experience level matters. Late seniors and staff+ understand the difference. New seniors, TLs, and of course braindead management love to bring in the universe on every project.
Your job as a software engineer is to solve business problems set before you.
If your job is to send data over TCP to rsyslog, then you’re doing your job.
If sending data over TCP to rsyslog isn’t your primary task, then doing that is a distraction, at best. Maybe the rsyslog protocol is simple enough that writing your own function is sufficient. Maybe it isn’t.
I’ve been doing this for 28 years professionally and will often choose a library (and I _do_ typically look at the code and compare code diffs across versions on upgrade) over writing my own, because:
1. My job is not to write a task runner—so I use Sidekiq or Celery or Oban.
2. I’m not confounded by the illusion of NIH being lesser.
I will always write my own code if it’s core to what I’m doing. If it’s a sideshow, I will absolutely outsource it to a library or a compiler or whatever. Anything else in the context of the job you’re hired to do is irresponsible.
Code is only hard to figure out if you don't understand the business goals. Third-party libraries/frameworks will almost never explain the business to you.
However, there is a practice known as testing that is supposed to document the business goals so that your colleagues aren't left in the dark. Hopefully you haven't skimped there, regardless of where the functional pieces came from.
> Code is only hard to figure out if you don't understand the business goals.
Ah right, so when an architecture astronaut picks up the GoF Design Patterns book and creates a rube goldberg architecture with an interface for every. single. class., it's not going to be understandable because folks didn't understand the business goals.
Agreed. It's not terribly hard to think like the person who wrote it, even if they went rube-goldbergesque, if you understand where they were coming from. If the business details have been lost, it becomes harder.
I think who you’re replying to is being sarcastic (a clue is using a pejorative - “architecture astronaut”). FWIW, I agree with their sentiment and disagree with yours.
Yes, exactly. Even architecture astronauts are well intentioned, and if you understand the conditions under which they worked then it is quite easy to follow what they've done. It only becomes difficult when that context is lost.
So hopefully they've encoded that context in tests. Worst case, if you never learned how to read code, you can throw the implementation away and the intention will remain validatable. But if you don't have the business end of the code documented... Good luck!
> Yes, exactly. Even architecture astronauts are well intentioned, and if you understand the conditions under which they worked then it is quite easy to follow what they've done. It only becomes difficult when that context is lost.
I posit you lack either the experience or the imagination if you think "business context" is the only factor in the development of an unmaintainable mess.
Usually, when an architecture astronaut gets involved, the business details are immediately lost in the overwhelming slop of what has been built.
And usually, the people having to deal with this crap are ones who have never dealt with the system in the first place, and the business people and the technical people are both gone from the company…
An interface for every single class doesn't really come from GoF, it's the D in SOLID -- this idea that classes must never depend on each other but on interfaces representing other classes.
On a tangent...a pervasive problem in programming language syntax is the confusion afforded by Class Access Modifiers (public, protected, private, et al), aka CAMs.
In some languages, these are foremost used as mixin selectors (inheritance, traits, or other composition). They are commonly overloaded to describe the object's interface. I am not the only one to have seen objects inherit from other objects, with the derived class consisting of new CAMs to provide the proper interface (in lieu of some dedicated Interface class, like in Java). Then there's a third concern about testability and how to reach methods like private and protected.
If language syntax supported these different concerns with different mechanisms (notably Python and Go have made some headway for enabling testability), including a formal definition of an interface outside of CAM-mixin selection, it would make developers' lives a lot easier and quell many of the conflicting opinions about implementation dogma.
I try to convince them to keep stuff simple at my work, but we still go creating a shitshow of epic proportions. If our app was done the traditional way it could probably run on a Raspberry Pi. Instead they are pushing for microservices, one database per service has been mentioned. A SPA for the frontend. All for what amounts to a standard CRUD app that sends some jobs to the cloud.
Back in my day, we coded in assembly and looked up opcodes in paper books. And we liked it! At least we weren't still on punch cards.
Eg, I use the python docopt library because it makes life easier and more straightforward once you understand it. Which, to be fair, not everyone wants to put in the time to learn. But I like it more than having to code up an argument parser from scratch using string matching, and then having to do that every single time... I don't need it, but I also don't need anything more than nasm, anything more than that is luxury! Kids these days...
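For a rough illustration of the docopt style, a made-up mini tool (not anyone's real code) can look like this; the whole CLI is declared in the module docstring and parsed by one call:

"""Greet someone.

Usage:
  greet.py <name> [--shout]

Options:
  --shout  Print the greeting in upper case.
"""
from docopt import docopt  # third-party dependency

args = docopt(__doc__)
greeting = f"Hello, {args['<name>']}!"
print(greeting.upper() if args['--shout'] else greeting)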
Most people use abstractions so they don't have to.
There's a difference between someone using an ORM who knows SQL and someone using an ORM so they don't have to learn.
The fact is, most people use these abstractions so they don't have to understand it.
---
Back in my day ... it was accepted that you needed to understand at least 1 level of abstraction below where you're working. Nowadays, that attitude is only found in older developers.
> The fact is, most people use these abstractions so they don't have to understand it.
How do people just accept that there's magic going on? That would drive me crazy. I don't expect people to know all the details, but there should be some level of understanding at least one layer down.
>overload existing code with additional functions when feasible, rather than starting a new project from scratch.
Why? Are you making a jab at small packages? I'll just rant on that and say I think small packages are great. They are extremely purpose-built and lovely to use. They do often become transitive dependencies, leading to the "issue" in this blog post, but I'm in the yolo-upgrade camp.
The churn required to keep up with frequent updates to many dependencies can work for some projects, but I would argue only for projects with continuous funding. If a project is only funded every couple of years and has many dependencies, odds are, the next time it is funded, there will be an inordinate amount of work to patch/update the code. For such projects, fewer or no dependencies may be better.
this one hits home for me because I do work in academia where funding goes kaput frequently and I do have to agree on some level. Some of the most stable and popular tools in my field have few to zero dependencies and are written in c... bioinformaticians will know what I mean.
Just a quick shout-out to Python Poetry (https://python-poetry.org/) which makes package, version, and virtual environment management extremely simple!
I love it so much that I can't imagine having to work on a python project without Poetry now, lol.
Does anyone here actually use Python? Because "not [using] some 3rd party library to write a command line interface for your python code" should mean: Use the argparse module that's in the Standard Library. Not coming up with something homegrown.
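For reference, the standard-library route is only a handful of lines (a sketch with illustrative option names):

import argparse

# Ships with Python; no third-party CLI framework needed.
parser = argparse.ArgumentParser(description="Example tool")
parser.add_argument("path", help="input file to process")
parser.add_argument("--verbose", action="store_true", help="print extra detail")
args = parser.parse_args()

if args.verbose:
    print(f"processing {args.path}")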
I was wondering the same thing; there is very little reason to go outside of the standard library in Python. Then you have the 3rd-party ones that have solidified as basically standard as well.
You shouldn’t be downloading a package from Joe Schmoe.
Imagine some point in history where the wheel hasn't yet been invented. But someone figured out that an octagon works kind of reasonably well as something that you might want to put under your chariot, and the whole world of engineering is under a culture where "reinventing the octagon" is some kind of cardinal sin. Where would we be then?
Also: Your reasoning is logically a fallacy. From "people making mistake X tend to do Y" it does not follow that "doing Y is a mistake".
But now imagine the world of tomorrow, where despite the convenient availability of a range of wheels well optimized for specific roles you get told "that just adds dependencies: any artisan builds their own from scratch" and so you and your customers get to learn the unique quirks and paper-cuts of each wheel of every chariot you sell thereafter.
It's usually not so much "a range of wheels optimized for specific roles" and more "one size fits all."
You want a wheel that fits your cart. You have a lathe. Why make do with a wheel someone else made, standardized, and sells to fit a wide range of carts when you can make the perfect wheel for _your_ cart?
Most of the functionality in libraries uses standard algorithms anyway. I doubt anyone thinks it's a good idea to write your own cryptography or markdown processor, but why do you need a library to left pad a string with zeroes?
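For instance, left-padding with zeroes is already covered by a built-in string method and by format specifiers (illustrative one-liners):

print("42".zfill(5))   # 00042
print(f"{42:05d}")     # 00042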
if statements and flags take time to process. You only have one way you need to do something. Do you want to take that much more compute just checking flags that will never change so you can get the library function to do what you always want it to do?
I merely pointed out that the "don't reinvent the wheel" knee-jerk reflex that some people are exhibiting in a broad range of situations is an absurd absolute. Negating something and then stating it as an absurd absolute does not prove nor contradict the original proposition.
Sometimes it's more economical to reinvent some wheel than not to, sometimes it's not.
Sometimes reinventing a wheel leads to a net-improvement in the state of the art of engineering, sometimes it doesn't.
That sort of relativism makes for boring reading, but it also happens to be the truth.
That's why the better approach is: learn how to build wheels and how they work, then use off-the-shelf wheels - unless you know that you need some peculiarities that are not found on the market; only then modify those wheels or build your own.
I'm saying: When you know how to build wheels, you may well want to build your own. If I have old bread and five minutes of spare time and a pan, I'll happily make my own croutons. There might be many reasons why this might be the right course of action in a given situation. Maybe it's more economical to make more of the resources I already have rather than expend more by buying them at the store. Maybe I like keeping alive the heritage and knowledge of crouton making and will teach my kids how to make croutons one day. Maybe I like not being subjected to worldwide crouton shortages when they happen. ...I just don't get the universal law of "if there are croutons available at the store, you must never ever make your own", and I don't think it is one.
Goldman Sachs thinks that risk management software is so core to their business that it makes sense for them to do it in-house while their competitors used off-the-shelf software. One weekend in 2008, Lehman Brothers collapsed. Figuring out your risks after such an event was something that nobody had programmed into their software. But Goldman Sachs was able to get their people to work through that weekend and know what their risks were when markets reopened on that Monday. In the year that followed, they had one of their best business years and massively outperformed their competitors.
"You think reinventing the wheel sometimes makes sense? Haha, well I guess you also think that Google and Amazon should be manufacturing their own computers then..." ...this is something that somebody might have said sarcastically 20 years ago. Well guess what. They are.
> I guess you also think that Google and Amazon should be manufacturing their own computers then..." ...this is something that somebody might have said sarcastically 20 years ago. Well guess what. They are.
So, if you don't have the resources of Google or Amazon it may make sense to get your wheels from somebody else, right? It's called specialisation of work.
You won't really know how to make wheels without building your own and putting it to actual use nor know when making your own wheels or using existing wheels makes more sense without the experience of both using your own homemade wheels and off the shelf wheels.
After all many wheels on wheelhub nowadays exist because someone got frustrated from their experience with using existing wheels.
Sure, but there are many ways to make wheels, are your wheels 1.8m or 2m apart, what material, etc.? You may pass it off as a lark but stuff like this matters - to city builders, chariot makers, train track designers, and so on.
You can go full NIH and reinvent the stone and finish about twelve years post end of universe or you can standardize and say ok, lets reuse some code.
And if everyone reuses same code you can get into problems - common vulnerabilities.
There's a fine line between NIH (not invented here) syndrom and overusing third party libraries. The classic example being the is-even[0] npm library, which seems like a joke, but has ~200k weekly downloads. It just seems that many devs too quickly reach for a third party library before taking a few minutes to think if it would make more sense or not to write the code themselves.
If you are really struggling, I'd suggest trying out the latest .NET before abandoning all hope.
I was recently able to build a 3d software rasterizer and streaming web service using .NET6 and I did not have to consume a single 3rd party dependency.
Even things like SIMD-optimized projection matrix calculations can be found in the box (System.Numerics).
There is obviously the Microsoft aspect to this equation, but I feel like you should at least try the porridge before throwing it away. The empire has a hell of a Death Star these days.
Did you need to build a GUI? It feels like Microsoft invents some new Windows UI library every 5 years or so and deprecates the previous one. The last I used .NET, it was WPF but I hear WinUI is the hot thing now.
The hot new thing is Blazor, the WebAssembly framework. They are pushing it everywhere, even inside Web widgets on MAUI.
Then you have MAUI, which is basically the rebranding of Xamarin Forms, but it does require a bit of rewriting, and apparently there are some bugs still. And on the Mac they have taken the shortcut of building on top of Catalyst to reuse iOS Xamarin.
Then you have the WinDev team pushing WinUI 3.0, still behind UWP in capabilities and even further behind WPF.
Currently on .NET your safest bets are Windows Forms, WPF, the 3rd party projects like Avalonia and Uno, or if the use case is possible, Web.
I would wait for MAUI to mature a bit more, consider WinUI an implementation detail of MAUI on Windows, and ignore it for everything else.
I've been writing Windows Forms applications for 21 years. It's still officially supported and maintained on the latest .NET. I don't think it will ever die. Microsoft invents new libraries but they deprecate nothing.
GUI frameworks are subject to more churn than, I think, any other programming domain.
On the desktop, you have WPF, WinUI, Qt, GTK, Swing, JavaFX, Electron, and a dozen more. On the frontend, you have React, Angular, Svelte, Ember, Vue, and probably a thousand more, plus roughly as many CSS toolkits as there are atoms in the universe.
And honestly? None of them are that great. Most of them are perfectly functional, but at some point you hit a limit of their abstractions, and start fighting against the framework, rather than fighting alongside it.
This is probably true of any domain if you go deep enough, but UI development seems significantly more fraught with danger and churn than the backend.
Yes, I was wondering where they were rasterizing to. Although of course the most long-lived way is to just P/Invoke straight into user32.dll to bring up a window.
(The last time I wrote a 3D rasterizer was in 1996, and I had to type my dependencies in from paper into Borland Turbo C)
You remember that demo where someone made an EXE from C# and made it run on Windows 3.1?
I wish there was some streamlined way (config of VS Code?) of doing that, not as extreme, but building regular Win32 apps without the .NET runtime, but with the C# language. Like how you can still use modern C++ but target just Win32.
That would be I think, an awesome compromise of power. C# is a really great language all on its own, without the .NET standard library.
I'd forgotten about that, do you have a link handy? It might be doable in MSBuild. I might actually have a use case for this as well, after battling problems of self-contained apps.
The "AOT" compiler is basically "run the JIT upfront".
A bitmap that is then encoded and sent to the client as JPEG or x264 data across the websocket. The hard part is making the pixels get to the human eyeballs. I handwave that away with the browser.
It seems there is one of those posts on HN every 6 weeks.
And yet there are no solutions besides yolo-live-with-it.
I thought it sucks too, I'm in the same boat, but I considered the opposite for a while: what if we didn't have all the code reuse and boatloads of libraries at our fingertips?
That would be a TON of code added to our applications, and overall chock-full of bugs and missing optimizations. These libraries, when popular, are battle-tested, cover much more ground than we do today (but might need tomorrow), and have been looked at when slow.
This ecosystem relies mostly on everyone acting like adults; Libraries to respect semver and not making large, breaking API changes on minor versions; and an ecosystem that is "safe", where bad actors can't pretend to be someone else.
Ideally there would be an open "group" of people running penetration tests and security overview of most packages where the collective could donate against their work. Large donations would allow specifying which packages you'd want them to review, and fixes would flow money to the maintainers...
But I'm dreaming. We're all building on a gigantic pile of hacks written for someone's portfolio to get their next job, which won't be in the same language and the thing will become abandoned.
"And yet there is no solutions besides yolo-live-with-it."
There is a solution: Stop accounting for dependencies as zero cost. Start accounting for their costs: their need to be updated and understood, their possibly significant overcomplexity for your needs, plus the fact that all these costs are transitive into the dependencies they pull in.
This is not a magic solution. It doesn't mean you just think these magic thoughts and all your problems go away. But if you actually do this, it will solve your problems. It will turn the wide wonderful world of existing libraries back into a benefit instead of so often a net loss.
I still use a lot of libraries, but I've been noticing lately I use more "fundamental" libraries. I use a YAML/JSON parser for my config, but I don't worry so much about all the fancy wrappers around config that try to make it super nice to have environment variables or maybe a config file or maybe map it to the command line or maybe get it from the network... I just use a config file parser and commit to having that. My dependency trees are not completely pristine exactly, but they end up with a lot of these sorts of things in them but not so much all-singing, all-dancing "frameworks" or text templating libraries that are so amazingly helpful that they have entire embedded scripting languages or other such things.
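That style can stay very small. A sketch of the "just a config file" approach (the file name and keys are made up), using the standard-library json module:

import json
from pathlib import Path

# Commit to one plain config file and parse it with the standard library.
config = json.loads(Path("config.json").read_text())
database_url = config["database_url"]
log_level = config.get("log_level", "INFO")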
It’s all about balance. Use libraries where it makes sense (e.g., if I’m reading HDF5 files, I’m going to use the HDF5 library, not roll my own), but don’t forget that you (ostensibly) know how to program on your own and like implementing some things yourself (rather than just lashing together other people’s code). Using a language with a decent standard library helps.
I think they mean that it's code they're not having to write/test/maintain themselves, because presumably if you pick a good enough library the quality of the code/features is better than what one individual or team could do
While there are issues with this approach like the article outlines, personally I believe it to be better than the alternative of countless developers reimplementing the same feature sets over and over, because honestly that just seems like a waste of human time and talent
To further drive the point, almost everyone seems to be conveniently missing a great benefit of libraries: they tend to be greatly battle-tested, most of the successful ones.
The moment people start realising their own (and most likely crappy) implementation to solve problem Z and discover they didn't consider scenario X or Y or W, and haven't even tested for those, most of them will hopefully understand what "balance" and "trade-off" actually mean.
You never actually need 90% of the functionality a library gives, though... With some notable exceptions (cryptography). Most times I've excised a library I've only needed to implement a small part of what it did.
That's true, and there may be valid reasons to remove a library and reimplement that 10% yourself such as for performance, stability or educational reasons. However if the library is performing as expected, does not using 90% of it make it any less valuable? If the problem has been solved in a satisfactory way and remaking it doesn't bring benefit or solve a problem then it seems wasteful to spend the human time to do so
This is at heart an economics problem: how to provide a public good (software security) in the presence of free riders. The way society handles this kind of risk is through insurance, but there are a number of things that need to happen to enable a sustainable economic basis to fund the necessary dependency-vetting to happen.
Edit: I wrote up my thoughts on the way forward here:
Say you create a new React app. The fact that you do is an indicator that the core tech (web tech in this case) does not satisfy your needs out of the box. So you grab a framework from "user land", one of many.
React though is not self-contained. To even get it to build anything, it depends on various other projects that each have deep dependencies. Webpack, linters, a choice of CSS framework, etc.
At this point, your app itself has zero functional dependencies. You haven't even produced a hello world; all of this stuff is needed just to get it to do anything at all.
As you'll then start to build actual functionality, you'll notice that React doesn't have facilities or an opinion on the most basic of stuff. Loading data, managing forms, standardized UI, the essentials are not there. So you go for robust/popular libraries for each of those gaps, which in turn have their own dependency trees.
This way even the simplest of apps require thousands of dependencies. It's not due to developers too lazy to write "leftpad". It's a tech stack issue where nothing is standardized or included out of the box.
We're not drowning. The boat was not made to be waterproof and yet we think it is.
Any software you build, like everything else, needs maintenance. It's a fallacy to think that just because the math is correct (boolean algebra), your code will tick along fine indefinitely.
> You’ve probably seen run-of-the-mill web applications with hundreds of direct dependencies
This is one of the reasons I like working on legacy web apps. Other than maybe jquery, they typically don’t have any dependencies. It is so easy to follow the code logic because it is all right there and nothing is hidden 10 layers deep in dependency hell.
Who’s drowning? I feel better than ever. This is a great problem compared to the old way of stuff being out of date and it being a giant pain to use libraries and stuff.
Also (knock wood) I haven’t experienced any dependency related outages and have been able to run Java, ruby, Python, and javascript stuff for many years without problems. When I need reproducibility I pin to specific versions. And I vet what packages I use and don’t just randomly grab whatever is in the stackoverflow copy and paste.
I lightheartedly object to C# being mentioned in the same breath as the rest. I've been writing Windows Forms applications for 21 years. Most C# applications have fewer than 10 dependencies, and they tend not to have their own transitive dependencies. It's bliss. We just really don't have the same level of problems in this ecosystem.
I have to disagree; C# can be just as bad. I work on a medium-sized (~500k loc) C# codebase, and we get tangled up in dependency chains frequently. It's a web app with a number of integrations and API calls for things like Google/Facebook/Bing ads that each use some library from them, and other stuff like cloud storage calls and FTP file deliveries. They all tend to bring in more dependencies, like Newtonsoft Json parsing, Excel reader/writer classes, HTTPS and certificate management libraries, log4net logging. I spend a non-trivial amount of time sorting out version conflicts among these, even with the Nuget package manager doing most of the work.
What happens is version conflicts. The dependencies often want to bring in conflicting versions of the grand-dependencies, the lower-level libraries like Newtonsoft.Json and log4net and so on. There's a <bindingRedirect> syntax in web.config or app.config that allows telling something to look for a different version. This usually works, and Nuget gets it right automatically most of the time, but occasionally there's a real conflict where a version isn't compatible, and a few times a year I spend a day dealing with that.
Yes, having originally been a C# developer and now having diversified and gained experience in other tech, I realised that we are spoiled by one of the most robust standard libraries and suites of first-party libraries; nothing else I've ever used has stood up to it.
It depends. I've encountered C# apps with very few NuGet packages.
And then there are the ones that pull in all sorts of logging libraries and abstractions, 3rd party DI frameworks, 3rd party ORMs, Newtonsoft JSON, and more!
Fortunately, the trend is moving back to more centralized, MSFT maintained packages and libraries (EF Core, ASP DI, System.Json, etc.), which makes future maintenance less of a headache.
The main issue I have with 3rd party packages (particularly with obscure ones) is vetting them out. Arguably, the same is true for pulling in MSFT libraries, but when serious security issues happen, you can expect MSFT to (eventually) resolve it. I don't really get that guarantee from 3rd party packages, which might have only 1-2 unpaid maintainers if you are lucky. If you are unlucky, the code might be abandoned and completely unreadable.
I try to limit this risk by only using well-known, well-maintained packages, e.g. Serilog, and keeping package dependencies as minimal as possible.
Isn't this the consequence of a large, free, open, diverse system? I remember the constant complaints about there being "too many JavaScript frameworks" a few years ago, and how there are always new ones.
I'm less concerned with the moral or practical considerations here, but can this really be a bad thing? Or, perhaps more to the point, isn't it a sign of a healthy ecosystem?
It seems that as the world gets more interconnected, this problem of "too much" grows; it's an interesting one to solve... And I'm not saying we shouldn't, but I guess I'm curious about the philosophical question of complaining about complexity in a decentralized public ecosystem. Is there a solution for the ecosystem as a whole? Or will curated subgroups of the ecosystem be birthed that attempt to create a walled garden? It doesn't seem possible to create a shared walled garden without stifling innovation, becoming more closed, all the things that I think most open source communities don't want to do.
In the "old" days ("old" being .. what, ten years ago?) packages were released as libraries, which bundled large amounts of functionality into a single atomic version. You could depend on OpenSSL for all your crypto code, GLib for argument parsing or atomics or UUIDs, libiconv for character encoding. There was some expectation (usually met) that the developers of these libraries would have some sort of basic testing strategy, and that they would try to maintain API compatibility over long periods of time.
A large program might have fewer than a dozen external dependencies, and they released every couple of months at most, so keeping up to date was easy. And most of the time you didn't even need to update unless someone discovered a security vulnerability, which was not that common despite all the libraries being written in C or C++.
The approach of NPM (etc) is fundamentally different in that it's considered normal for a project to have hundreds or thousands of dependencies. To manage the update cadence, the users of these package repositories try to invent new schemes for version numbers -- or write quirky gif-laden blog posts about how they burn the monthly budget of a small country on testing in CI.
But the core problem is that they've locked themselves into a stupid and unsustainable paradigm. You can't do the sort of software development that involves writing code if you've decided that your job is to haphazardly glue together other people's code, especially when you're willing to depend on packages written by anything and anyone.
--
I honestly think a lot of the problem of over-dependency comes from the new package managers that make it too easy to depend on stuff recursively. If NPM required you to type out `npm add-project-dependency left-pad/1.0.0 --checksum sha256:a1b2c3...` then that would have been a strong forcing function against a huge number of tiny packages, because at some point your fingers get tired.
For example, in my personal projects I use Bazel as a build system, which makes me a bit more aware of dependency hell because each dependency needs to be registered.
There are projects like QEMU that are straightforward to build in Bazel because their dependency tree is bounded, but some of the stuff coming out of JavaScript land is just unavailable to me because I don't have the patience to chase down hundreds of packages to run a "hello world".
I've found I avoid certain parts of the Rust ecosystem because they're JS-ish, but there's other parts that have a more C/C++-ish style and those packages work fine for me. Similarly for Python. Go has largely avoided the issue, though I don't have an intuition as to why (different culture?).
A problem with Rust is that the compilation unit is the crate, which pushes developers into splitting something that is one library into multiple crates to get incremental compilation, and it's hard for the user to tell the difference (you can synchronize their version numbers, sure, but the user has to put each crate into their list of dependencies individually).
It's not even the "old" days: what you described is still the situation today for C and C++ - as I'm sure you are aware. You have to know about all your dependencies, and curate them all manually. This despite all the misguided "package manager for C projects" attempts that pop up here on HN every now and then.
Well, C was made with UNIX in mind. To that end, UNIX/Linux is both the package manager and dev environment for C. Further C package managers are just redundant.
The thing I hate the most is when some JavaScript library uses some obscure build system (or any build system really) but on GitHub they don't publish the built version. What could have been a simple download now turns into a several-hour exercise in trying to understand and make work yet another build system, downloading intermediate bullshit from random websites. Just publish the damn library, thanks.
A binary infected with a malicious payload is more likely to be detected by antivirus, or by manual checking of the signature/checksum if the user cares to.
An infected build system? In the case of Linux distributions, there are maintainers and packagers responsible for their source and binaries. In the case of JavaScript, does anyone care?
For simple, one-off projects, I don't concern myself much with these aspects. But for big projects, choosing your framework and dependencies is always a big concern and should be done carefully. Investigating every dependency is a must, not just for the security aspect.
And when you do it, you will convince yourself that less is more, and your team will re-create many things just to reduce the dependencies.
I'm going through this right now. At dayjob we have a lot of internal dependencies and libraries that are created sufficiently far from me that I treat them as third party and when things go wrong (which they do a lot) it's a nightmare to debug and work through. Easily half a day gone.
So for my side project I'm going mostly bespoke and using very few 3rd-party libraries. The result of that is there's sooooooo much boilerplate to write, and it's very boring and hard to stay motivated and productive to get through it. What's keeping me going is that once the boilerplate is done - it's mostly the foundational level of data queries + endpoints - then I shouldn't really have to touch that stuff again.
My conclusion is that there's no great answer.
Using lots of libraries is like riding a horse. It'll start moving right away but you don't have full control and you gotta find the right way to get it to do what you want, and may have to fight it if it really doesn't want to. Vs building most things yourself is like building a car. You don't go anywhere for a long time as you're building it, but once you've built it you can move very quickly and if anything goes wrong you can easily pop the hood and fix/change things up.
I think for long lived projects it's better to build the car, but also be disciplined in writing very good code + documentation. The initial build out sucks hard but that pain will get amortized over a very long future and ultimately be worth it.
And for absolutely required dependencies go with paid services. Specifically paid services where the company's primary focus _is_ providing that service. They are financially and existentially motivated to give good service, and are usually good about keeping backwards compatibility while usually staying on top of security/modernization upgrades without requiring work on your own end. A managed database is a good example of this kind of paid dependency.
Another downside of the deluge of dependencies that I don't see discussed often enough is bloat. If you import something that imports something else that imports something else ad nauseam, you're importing a lot of stuff you're not actually using and won't ever actually need. Tree shaking was supposed to help with that, but it barely moves the needle.
The author says that "Reusing external code is a huge advantage, and when everyone does it, it’s also a competitive necessity", but when I've taken it upon myself to "reinvent" something that there's already a moderately decent solution for, I've found that it really wasn't that much extra effort and the end result was insanely more flexible in the ways that I needed it to be than the supposedly off-the-shelf alternative.
Not that simple. You also have to stop using some popular frameworks/libraries even in CSS. For example, Tailwind is the hotness these days. I like it but in production, they recommend installing using npm and not CDN. Yes it is for performance and size reasons but still.
How about building on solid ground for once? Datalisp.is the thing I keep posting here and it keeps getting overlooked but a solid foundation means longer lasting structures, just look at the 3-4-5 triangle; it still has a right angle.
This path was paved when NPM and Node.js decided that lots of small dependencies are a good thing. All of this damage and JavaScript's reputation as a maintenance nightmare is a result of that terrible, terrible decision.
I wanted to start a new react project and someone who I was going to collaborate with said they wanted to try storybook. Ok, follow the tutorial. It immediately spits out all these warnings
So that failed IMO. There should be zero warnings! A pile of warnings has no meaning because no one will notice when it goes from 18 to 19 warnings.
I ended up aborting that and trying other things. It's been 10hrs now and I still have nothing working. Every path has led to some kind of outdated article and/or dependency issue.
Funnily enough this becomes much less of a problem once you use a real type system instead of having to write manual tests. Both because whether your code compiles becomes a pretty good (not perfect, just as your test suite is not perfect) indication of whether this dependency upgrade is breaking for you or not, and because those ecosystems tend to have a lot less need for new versions to fix things or change behaviour in the first place.
The problem is that, since the introduction of dependencies has become so easy, people have become dependency hogs.
Every time you introduce a dependency, you need to weigh the benefits against the cost. But to the person introducing a dependency, at that moment when they make that decision, the cost is just "pip install x". So they think the cost is negligible, but, of course, the true cost doesn't start to show until much later, when it's too late to undo the decision.
One slightly masochistic thing I've done for myself is that I don't allow myself to ever do "pip install x". I force myself to:
(1) download a source distribution for the package, or a wheel if it's pure;
(2) cut the internet connection;
(3) do "pip install --no-deps" or "setup.py", building from source from the local package;
(4) see if it works, or if the absence of dependencies causes actual breakage;
(5) if there's breakage, go to 1, repeating the process with the missing dependency.
When I've spent more time on this process than I think the package is worth, given the benefit it brings to my project, I stop the process and try to live without the dependency.
Interestingly: Forcing yourself to build everything from source also tells you a lot about the state of health of a package you're about to introduce as a dependency. Like: Does it depend on something that's just a wrapper around a perl module that hasn't been touched in 15 years; that kind of thing.
I disagree. I work with Scala and I can see myself in that post 100%.
If things compile and your tests pass you might be OK (sure), but the part where "those ecosystems tend to have a lot less need for new versions to fix things or change behaviour in the first place" is not there, unfortunately. If you are using Scala Steward, you'll have a good number of PRs almost every week.