Do people really expose their own or their employer's source code to random third-party convenience services?
I do understand the convenience factor here; I just think it's dodgy to encourage developers to be so flippant with privileged access.
I'm not wholly against this sort of stuff, and I'm sure we've used similar links in the past for CI and coverage, but this seems like the bottom of the slippery slope, where we're handing out access to our stuff for something so frivolous. This is the same sort of mechanism that got everybody and their dog's copies of Windows XP infected with trojans in the early 2000s. "Sure, I'll install that toolbar, just let me see Britney naked".
This could be a local, auditable script that fetched a static list of projects seeking funding.
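To sketch what I mean (hypothetical URL and file layout, Python just for illustration), the whole thing could be as small as:

    import json
    import urllib.request

    # Hypothetical location of a static, versioned list of
    # {"name": ..., "funding_url": ...} entries.
    FUNDING_LIST_URL = "https://example.org/projects-seeking-funding.json"

    def fetch_funding_list(url=FUNDING_LIST_URL):
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    def local_dependencies(path="requirements.txt"):
        # Read dependency names from a plain pip requirements file.
        with open(path) as f:
            return {line.split("==")[0].strip().lower()
                    for line in f
                    if line.strip() and not line.lstrip().startswith("#")}

    if __name__ == "__main__":
        deps = local_dependencies()
        for project in fetch_funding_list():
            if project["name"].lower() in deps:
                print(project["name"], "->", project["funding_url"])

Everything it fetches and everything it reports is inspectable, and no part of your codebase leaves your machine.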
Large software companies have no idea what dependencies, libraries, or even languages are in their stacks. Companies use multiple versions of the same library, replicated across a bunch of different repos. Different teams in the same company end up re-implementing solutions over and over, sometimes in a whole other framework than the team next to them, because nobody knows what anyone else is using. It makes compliance a nightmare.
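You can surface the first symptom with embarrassingly little code. A rough sketch (hypothetical checkout root, pip-style pins only) that flags the same library pinned at different versions across repos:

    import collections
    import pathlib
    import re

    REPOS_ROOT = pathlib.Path("/srv/repos")  # hypothetical: one checkout per repo

    # package name -> {(repo, pinned version), ...}
    versions = collections.defaultdict(set)

    for req_file in REPOS_ROOT.glob("*/requirements.txt"):
        repo = req_file.parent.name
        for line in req_file.read_text().splitlines():
            m = re.match(r"\s*([A-Za-z0-9_.-]+)==([^\s#]+)", line)
            if m:
                versions[m.group(1).lower()].add((repo, m.group(2)))

    for pkg, pins in sorted(versions.items()):
        if len({v for _, v in pins}) > 1:  # pinned differently somewhere
            print(pkg, sorted(pins))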
It's wild out there.
We have an open source project just for this: https://github.com/fossas/fossa-cli. It currently supports 20+ build systems and languages, and pairs with our web service for license and vulnerability discovery.
Would love your feedback.
A few years back I wrote a cross-platform make replacement because an existing recursive make solution wasn't getting dependencies right: it constructed the wrong DAG (http://lcgapp.cern.ch/project/architecture/recursive_make.pd...). The recursive solution had been used in the first place because it was fast, but it happened to be incorrect (interesting how stuff like Pipenv has really long lock times because it prioritizes correctness). So I'm directly aware of some of the difficulties in this space. Before I did some projects with build systems and packaging, there were a lot of things I didn't know I didn't know.
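To show the failure mode in miniature (toy file names, Python's graphlib standing in for make), partitioning the graph per directory silently drops the cross-directory edge:

    # Requires Python 3.9+ for graphlib.
    from graphlib import TopologicalSorter

    # Toy project: app/main.o depends on a header in lib/ (a
    # cross-directory edge). Each target maps to its prerequisites.
    full_graph = {
        "lib/util.o": {"lib/util.c", "lib/util.h"},
        "app/main.o": {"app/main.c", "lib/util.h"},  # the cross edge
        "app/prog":   {"app/main.o", "lib/util.o"},
    }
    print(list(TopologicalSorter(full_graph).static_order()))

    # A recursive build that runs one sub-build per directory only ever
    # sees a per-directory slice of the graph, so the app/ build never
    # learns that app/main.o is stale when lib/util.h changes:
    app_slice = {t: {d for d in deps if d.startswith("app/")}
                 for t, deps in full_graph.items() if t.startswith("app/")}
    print(list(TopologicalSorter(app_slice).static_order()))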
If they're not running that business but have enough access to the code to use this service, they're probably a developer, who again should be able to find out about the dependencies.
Or perhaps they're in a different position, and only want to know about this for curiosity's sake - in which case, I guess this could be useful. But still, you should be able to ask about this in-house.
And as for the "layers" argument - if there are too many layers of dependencies to keep track of, something is very, very wrong with the technology you are using. (And yes, I do consider modern web tech completely insane.)
So you were basically being glib about the salaries, since you know a lot of us are in that position? What do you suggest we do?
When I list the dependency tree of our project at work I get ~5500 unique packages (many are different versions of the same ones). Does the fact that I don't know them by heart mean that I'm being paid too much?
But seriously, I haven't said that you have to be able to recite the dependencies when woken up at night, just that you should have existing internal methods of keeping track of them and auditing them, rather than relying on some comes-one-day-disappears-the-next web service.
On a technical level we do the best we can to get to know all the package dependencies and the various other things we have to be aware of, such as security and license issues (we are aware that not everyone does this). But honestly, even when you decide to take this seriously, it is still hard. Because of transitive dependencies and long chains of dependencies in the libraries you use, it is hard to have high confidence that you know everything you depend on. Essentially this requires tooling of some form or another to have any chance.

With our in-house code we know what our direct dependencies are, and we can usually track down most of what we need fairly quickly because we control our build environments. However, when we consult for clients who have large codebases we have never seen before, it takes a while to track everything down. Sometimes, if people have customized build systems, this can be really hard to do. We have our own static analysis tools to help with this too.
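To give a feel for even the minimal version of that tooling, here's a rough standard-library-only sketch (not our actual tools) that chases declared Python requirements recursively:

    import re
    from importlib.metadata import PackageNotFoundError, distribution

    def transitive_deps(name, seen=None):
        # Walk declared requirements recursively; returns the set of
        # (lowercased) package names reachable from `name`.
        seen = set() if seen is None else seen
        key = name.lower()
        if key in seen:
            return seen
        seen.add(key)
        try:
            dist = distribution(name)
        except PackageNotFoundError:
            return seen  # declared but not installed (extras, markers, ...)
        for req in dist.requires or []:
            # Crude: chop the name off the front of the requirement string.
            dep = re.split(r"[ ;<>=!~\[]", req, maxsplit=1)[0]
            transitive_deps(dep, seen)
        return seen

    print(sorted(transitive_deps("pip")))

Even this toy version has to hedge around extras and environment markers, which is exactly the confidence problem I'm describing.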
Even without all the online package management services it's still a hard problem; consider the case of the humble makefile (http://lcgapp.cern.ch/project/architecture/recursive_make.pd...). Add in dependencies on remote computers and it gets harder. Take, for example, Python with its huge number of different ways of installing a package: we have some tooling to check things, but it's probably not 100% accurate because of the various ways in which Python packaging is broken, exacerbated by the various ways in which people have worked around these shortcomings in the past. Pipenv has helped with the lock files, but not everything is using those.

The power of good tools for package analysis is clear, and we use whatever we can. I hope you will see that this is actually a hard problem in a business sense, as it costs a business substantial amounts of time to create tooling for these things, and the customer is likely unable to assess the benefits directly. We have an obligation and a desire to bring value to our customers, which means we will sometimes have to prioritize 100% coverage of package information below other objectives if the client demands it (for example, fixing mission-critical bugs may be a higher priority). In an ideal world we would love to know every aspect of the stack we run on, but as time goes on, the increasing complexity of the systems we use makes this harder.
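For what it's worth, the lock-file case really is the easy one. Pipfile.lock is plain JSON, so an inventory of the pinned world is a few lines (assuming a Pipfile.lock in the current directory); the catch, as above, is that not everything uses lock files:

    import json

    with open("Pipfile.lock") as f:
        lock = json.load(f)

    # Pipenv splits pins into runtime ("default") and dev ("develop").
    for section in ("default", "develop"):
        for name, info in sorted(lock.get(section, {}).items()):
            print(section, name, info.get("version", "?"))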
The problem seems to be that gratis open-source software adds nearly zero friction to a company building out its tech, so any alternative has to compete against that near-zero friction. I just don't see each company negotiating separate prices with 100,000 package maintainers to use all of their software on a custom Linux distro just for one of their internal servers or whatever. That's a tremendous amount of friction for each company to bear.
If that friction could be eliminated, while keeping a requirement to pay for use of the software, then I think a non-gratis ecosystem could dwarf the gratis software world within two or three years from its launch.
I can spin up a quick test server using Windows Server, running an IIS web server and Microsoft SQL Server, with my software stack written in C#, programmed in Visual Studio. It covers pretty much everything I could need, and it's ordering à la carte instead of negotiations.
Obviously it costs more and unless I spin up a cloud server I can't "just spin up a server" without making sure I have enough licences etc. But AWS/GCP/Azure solve that mostly.
Or a model based on Patreon?
Regarding funding open-source software: Companies I've worked for have all been OK with purchasing licenses for software that saves development time. They've also been careful to abide by software license terms. I'm surprised that more open-source libraries/frameworks don't require the purchase of a commercial license in order to use them commercially.
I didn't create an account or sign in, but it created a public profile on your domain using my name without my consent.
Is there any way to remove that?
You might want to anonymize that URL for accounts that don't sign up, just for clarity.
Well, that's not true. Maybe you could indicate which package systems you actually are able to analyze?
(It's fine if it doesn't, but it would be nice to know up front.)