I don't know if it was really community management or its architecture, but the incentive to monetize the site ever more aggressively is probably the most important factor.
Ads and bundling malware (here: unwanted software) with downloads were never going to be received well. The malware even had a negative influence on the projects themselves. One example I know of is FileZilla, which many never used or trusted again.
I don't know if you can compare SourceForge with GitHub, because the former was also used by many to get ready-to-use binaries, whereas GitHub is more exclusive to developers and peers.
SourceForge was originally developer-focused, so I wouldn't be so quick to assume the two aren't comparable. GitHub Releases provides download links, GitHub Actions can be used to compile assets, and GitHub Pages can host the website, so a lot of developers do use GitHub as a one-stop shop. Also worth remembering that most end users don't really care who hosts the EXE they want to download, whether that's GitHub or elsewhere.
The only things that will stop GitHub going the way of SourceForge are:
1. It's owned by Microsoft. They'd be damaging their own reputation twofold: once because of the malware on GitHub, and a second time because of their OS, Windows, running like crap after that malware is installed.
2. There's a general movement away from [edit: manually] downloading binaries (as there should be!) -- even on Windows, package managers are becoming more commonly used.
> Which Windows package managers build from source? I thought they were all binary package managers.
I think the GP meant downloading random binaries through a web browser, as most people do on Windows. Package managers with hash verification are a million times safer than that.
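For anyone unfamiliar with what that verification buys you: the package manager ships a known-good digest in its (signed) repository metadata and refuses to install anything whose contents don't match. A minimal sketch in Python; the file name and digest below are made up:

    import hashlib

    # Digest published in the repo's signed metadata (made-up value here).
    EXPECTED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

    def verify(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            # Hash the file in 1 MB chunks so large installers
            # don't have to fit in memory.
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        if h.hexdigest() != EXPECTED_SHA256:
            raise ValueError("checksum mismatch - refusing to install")

    verify("vlc-installer.exe")

A browser download gives you none of that; you run whatever bytes the server (or an ad network's fake "DOWNLOAD" button) happened to hand you.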
Hash verification does nothing here, as SSL already protects ordinary downloads from MITM attacks.
Actually, binary package managers are arguably strictly less safe than getting the binaries directly from upstream, because you've just introduced a (literal) "man in the middle" who might have their own agenda and their own need to monetize. Like, for example ... SourceForge did. In a different context these MITM attacks can go wrong even when they're genuinely trying to be helpful, as we may remember from when Debian accidentally patched out important code from a core encryption library and caused SSH keys to be generated from a tiny, predictable set.
Generally for security you want to reduce the number of middlemen who can tamper with things, unless those middlemen are explicitly adding some sort of security value by pre-tasting the apps. So, Apple app reviewers: yes. Linux distro maintainers: sorta sometimes. Homebrew, winget etc: no. It's all automated.
On Linux/Mac, I can "sudo apt install vlc", "sudo yum install vlc", or "brew install vlc". I know I'm getting a valid, good software delivery. It just works. No malware. No sketchy websites. No malicious packages.
Did you just get Wine from upstream? The answer is no, and this has wasted significant amounts of Wine developers' time in the past. I used to be one, and spent way too many free evenings and weekends debugging subtle crashes reported by users that turned out to be packaging bugs introduced by men-in-the-middle.
Lots of other Wine devs had the same experience, so at that time we had a policy that, if someone reported a bug, the first instruction was to uninstall their distro package and install the binary packages produced by the Wine project itself. Sometimes users were upset by this advice, but it resolved a significant number of "bugs" that were being added by the distributors.
> The whole point of the earlier comments is that it is preferable to use a package manager instead of using a browser to download binaries.
But there are package managers for Windows, aren't there?
Microsoft even has its own app store, which you need to use to install applications such as Windows Terminal.
People often install software on Linux and macOS by downloading installers using a browser. In fact, that's how non-bleeding edge versions of Xcode are installed.
As a practical approach on Windows, for well-known software, I go to Wikipedia, look up the project there, and then get the link. Obviously that's not guaranteed to be correct, but for well-known projects bad links usually get corrected relatively fast, so it's probably fine.
For lesser-known projects I just cross my fingers, but lesser-known projects are typically less likely to have malware posing as them.
I think the idea is that a lot of the major package repositories tend to curate what's published (rather than let anyone arbitrarily publish whatever they'd like). If I trust the maintainers of the Debian or Fedora or Arch repos or whatever, then the fact that they serve a package means I would trust it more than downloading from some random website I don't know.
Checking a hash is safer only if the person who commits the "known good hash" to the package repository actually verifies that goodness. If the program's developer sells out and puts malware into their official binary releases, and the package maintainer doesn't notice, then checking hashes doesn't protect you from the malicious release.
> If the program's developer sells out and puts malware into their official binary releases, and the package maintainer doesn't notice, then checking hashes doesn't protect you from the malicious release.
Nothing short of a full code review will save you from a malicious supply-chain attack. This is currently exploding with NPM as well, so building from source doesn't save you here either. See the node-ipc debacle from a few weeks ago [0], and also my own post from last week about a malware-infected NPM module [1].
The reality is that yes, the days of being able to trust anything downloaded from a stranger on the internet are pretty much over now. But known-hash package management is still a good first layer of security versus downloading random binaries from the web.
Nothing short of full code review will ensure that malicious code from a dependency doesn’t get into your program at all, but the capabilities of that malicious code once it does get into your program can certainly be limited.
I don't disagree that JavaScript in general runs with a lot more permissions than should probably be given, but permissions are a very hard thing to manage. Say you decide to install a node package that retries and automatically backs off requests you want to send to your server, since you're working in a mobile environment with sketchy internet reliability: it is very hard to craft a set of permissions that lets the package retry requests to the addresses you'd like without also letting it send whatever system metrics it collects to an ad server. Additionally, these permissions will be more comprehensible to review, but they still need to be reviewed, and reviewed by a party you trust.
That's a tall order for the open source community when it comes to a volume of packages like npm or composer.
You can logically look at lpad as a package and say "we're padding strings, we don't need to do anything with network traffic or disk reads", but building an automated system to make that evaluation and saving that evaluation in a tamper-proof manner is going to be hard. And what happens if lpad starts trying to do a better job, operating correctly in left-to-right character sets and in languages where alignment is dictated by hyphens instead of blank space?
Code isn't simple, and while the above statement may seem hyperbolic, I think that's just because most of us accept the risk: it's highly unlikely that malicious supply-chain attacks will suddenly sink our businesses, so we accept a reasonable level of security in exchange for what feels like a modest risk. But actually, fully, 100% preventing that risk? You'd need to check every usage scenario on every set of hardware and go through that code with a fine-toothed comb.
> it is very hard to craft a set of permissions that lets the package retry requests to the addresses you'd like without also letting it send whatever system metrics it collects to an ad server
If you're expecting to have to craft ACLs program-wide for that package, then sure, that's going to be hard. That's more coarse-grained than what I'm talking about here though.
Imagine, if you will, a programming environment where modules you load don't have any authority to access the network, filesystem, etc unless you pass them in. They're just not in scope. In such a language, the default way to grant HTTP access might look like this in pseudo-Python:
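Something like this, say (the `init` and `fetch` names are just stand-ins for whatever the library actually exposes):

    # Pseudo-Python. Assume the runtime hands the program's entry point
    # an `http` capability; libraries can't import it on their own.
    def main(http):
        import myMaliciousLibrary

        # No ambient authority: the library can only reach the network
        # through the object we explicitly pass in.
        lib = myMaliciousLibrary.init(http=http)
        lib.fetch("https://good-site.example.com/data")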
Now, this library has the problem you mentioned: it can make arbitrary outgoing HTTP requests, meaning it can send any metrics it collects to an ad server. It can't access the filesystem, because the filesystem isn't in scope either, so what it can collect is limited, but it still has more access than it needs. So here's one way you might fix that (obviously highly simplified/incomplete in its implementation):
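Same made-up API as above; `ProxyHTTP` wraps the real capability and pins the origin:

    def main(http):
        import myMaliciousLibrary

        class ProxyHTTP:
            # Looks like the real http capability, but pins every
            # request to the origin it was initialized with.
            def __init__(self, http, origin):
                self._http = http
                self._origin = origin

            def request(self, origin, path, **kwargs):
                # Ignore the origin the library asks for; use our own.
                return self._http.request(self._origin, path, **kwargs)

        pinned = ProxyHTTP(http, "https://good-site.example.com")
        lib = myMaliciousLibrary.init(http=pinned)
        lib.fetch("https://good-site.example.com/data")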
Now, this version of myMaliciousLibrary gets an object that looks like the http library it would normally use; it just happens to ignore the origin parameter that the library passes to it and uses the one it was initialized with instead. Now, that library can make requests to good-site.example.com, but if it tries to make a request to adserver.evil.com, it'll go to good-site.example.com instead. (You might also imagine a version of ProxyHTTP that just throws an error if the request's origin isn't the same one it was initialized with, or isn't in a list it was initialized with.)
Most of today's programming languages aren't strict enough to enable this of course; JavaScript isn't there yet for example, and most other languages aren't even close to being able to do this. But that's mostly because they don't provide strict enough encapsulation, not because they're lacking some complex security enforcement mechanism.
> I think that's just because most of us accept the risk: it's highly unlikely that malicious supply-chain attacks will suddenly sink our businesses, so we accept a reasonable level of security in exchange for what feels like a modest risk
I just think people are accepting way more risk than they realize, given the number of dependencies in modern applications and the amount of access each of those dependencies have. It feels crazy that any one of those dependencies in any application could exfiltrate massive amounts of sensitive data from any machine it happens to run on.
I think there'd be a lot of reliance on all these fundamental components working correctly (and on prohibiting other components, i.e. "we're using this curl library at a basic level and exposing a safe interface through this protected library, but somehow blocking calls from code outside the library"), which might have serious repercussions for code reuse.
There are ways you can instruct an OS to block calls to certain library functions/memory ranges (i.e. remove and grant elevated privileges), but I think such a system would need to be designed from the ground up. If you can still `require(lib/reallyOldCurl)`, then your protections might allow linters to mark packages as unsafe, but I think it'd be extremely difficult to block access without an extremely fundamental design intent.
I appreciate your reply quite a bit though, as I was envisioning the restrictions being defined at a universal package level (aka lpad is given no disk access) rather than being delegated to the consuming developer.
My point is that I don't believe there is a "general movement away from downloading binaries" evidenced by the rise of package managers for Windows. That most Linux package managers also use binary packages only strengthens my belief that there's no such trend.
ramesh31 has already addressed that point, though. He's right that I meant the user manually downloading binaries by searching the web for a download source, vs. having the package manager handle the download and install for you.
Maybe I could have been more explicit; I took for granted that the context would be pretty obvious to most people. Sorry about that.
A package that isn't provided by a repo with a reputation to protect isn't much different from a bare executable, but signed packages, provided by repos where access to the root key store and to signing is limited, are quite different.
"Which windows package managers build from source? I thought they were all binary package managers."
Gentoo's package manager mostly builds from source (though there are some binary packages available for Gentoo -- usually of huge binaries like Firefox, which take forever to build and which you have to update pretty frequently).
Easy: there's no such thing as absolutes. Just because one thing is gaining momentum doesn't automatically mean the other thing is no longer required.
Yesterday, I downloaded from SourceForge to build, but then stopped and did apt-get install instead. I assume the binary I installed was built from the same source I downloaded. There are no absolutes.
"But the incentives to monetize the site ever more aggressively is probably the most important factor."
That itself was forced by the dot-com fallout that killed the parent company's hardware business and made SF a more critical leg of revenue. It all went downhill from there, especially when new leadership came in.
Disclosure: I worked at VA Linux/SF for 4 years (1998-2002), was on the SFDN "launch" team, and am still friends with a few of the original SF core guys. We all view what happened to SF as a tragedy.
The worst was ads on the download page that were allowed to say "DOWNLOAD" in large green letters. You ended up downloading JUST the malware and not even the thing you were interested in.
I wonder what the criteria are for reported malicious ads to actually end up being removed and de-listed by Google.
"Murder for hire" (or any legally unambiguous high-profile crime enabler) -> Guaranteed removal. GReply: "Look how much we care"
"Run the dslreports malware infected" -> No meaningful action is taken by Google, they want 'dat DSLReports.exe $$$! GReply: <Crickets>
"Fuck yo and pwn your network" -> ? GReply: <Crickets>
The juicier question is:
How many straight-up virus vendors have proliferated through Google ads and Chrome 0-days? That's not going to be publicly available data. Why not? It seems like, as a service operator, the right thing is informing the clients whom $BIGCO helped get infected with virii or malware. If notification upon discovery were mandated, at least some of the pool of desolation that undeniably exists (directly thanks to Google AdHoles) would be cleaned up. Is Google too proud to admit this happens? Protecting users, especially the most defenseless in the case of ads, is consistently not a real priority for BigG.
Companies are full of people, and being human we all make mistakes or overlook things for too long. That would be okay if good-faith efforts truly existed.
I'd pay to see the data on how many paid google ads ran for 'flashplayer' 'flash' and similar.
After watching people I know search for it (years ago) and click the first result (which was never the actual Flash Player), they must have infected tens of thousands.
Also terms like 'social security admin' and 'new social sec card'. I watched my mom do the same thing: g-search, click first result... start filling in info, second screen more personal info, third screen start to question why... fourth screen: wait, this ain't right... sigh.
> One example I know of is FileZilla, which many never used or trusted again.
SourceForge probably killed a lot of projects with adware, but FileZilla ruined its own reputation and continues to stealthily distribute adware/malware.
The freeware Paint.NET project did something similar. For years their download page had deceptive Download Now links which were actually spam/advertising/malware.
It looks like they no longer do this, for what that's worth, but I still don't recommend it to people. A pity as the application itself is very good.
The Minecraft modding community is full of this shit. And it's ten-year-olds who don't know any better who have to learn about this stuff the hard way.
I regularly wonder how much it would really cost to host those assets somewhere else that didn’t do that. Or GitHub! Why don’t the devs just move their mods to GitHub!?!?
Mekanism, for a while, could only be downloaded from a website that made actually downloading it nigh impossible. I had to disable UBO, and then all the redirects and click hijacking still made it almost impossible.
It really seemed like the mod author, Aidan C. Brady, had sold out.
I didn’t have the displeasure of needing it for a few years, but when I checked recently it was thankfully available on CurseForge, which, although it could be better, could also be a lot worse; at least it’s functional with UBO.
Not using GitHub is pretty common with game mods in general, not just Minecraft.
I put it down to the fact that the majority of mods are not developed by professional developers, and learning git on top of learning enough coding to make a mod is just too much to ask of these people.
Really, they don't need or want version control, CI pipelines, CLI tools, or any of the things normally used by professionals. All they need/want is a good, user-friendly webpage to share their mods, and maybe a forum or some other way for users to ask questions, make suggestions, and report bugs in a non-formalized way.
Are you sure they're bundling malware just to pay for hosting? It could be the other way round: they realise GitHub might ban them for pulling that kind of thing, so they host elsewhere.
Are the mods generally FOSS? That would enable cleaned-up forks, although in the case of FileZilla this doesn't seem to have happened, which I find somewhat surprising.
No, they’re bundling malware because the sites that host mods are a cesspool, reminiscent of the ’90s warez scene - the downloads are all “type the third word from this page, the second word from this page, then click this sponsored link, then wait in a ‘queue’ where we show you video ads - and then you can download your mod at 20kbps”
I’m surprised they don’t split them up into 1.44MB rar archives “for your convenience”.
In my experience very few, if any, mod communities take a considered approach toward licensing. Whether you're allowed to base a mod on an existing mod is left to guesswork. They're quite unlike the FOSS world in that regard. moddb doesn't even have a section for licence information, unlike, say, GitHub.
Modding communities often also take a 'permissive' approach to the copyrights of the base game, but fortunately the publishers are often wise enough not to bother taking action against such mods. A mod like [0] gets to exist despite the fact that, ultimately, it's infringing on the base game's copyrights.
Ad.fly! I just also responded to the parent about trouble downloading a mod, and you’ve reminded me it was this site.
I’ve dealt with shitty, ad-infested download websites, but this one took it to a whole new level. I had to disable UBO, and even then getting the download was nigh impossible.
I agree it’s not worth the trouble; just avoid it. The website is a whole new level of user-hostile.
As for CurseForge, I agree it’s not great, but at least it works.
I don’t like third-party launchers or mod-management applications, so for our little server I just make a .zip available with a readme with pictures.
What’s useful about CurseForge is that, with just a mod identifier, you can construct links using an Excel formula, with the URL setting download filters for our client version automatically. So I use an Excel sheet to help me keep track of the mods and make it easy to check for updates.
Yup the "that's not what I thought I was downloading" aspect made SourceForge untrustworthy and I was just done with SourceForge and any links to SourceForge at that point.
Yeah, I can't believe the authors didn't mention the malware fiasco as the primary contributor of SourceForge's demise. Before it's downfall it had several useful open source and widely used libraries. Post malware it's use was banned by many employers.
I think a huge difference between GitHub and SourceForge is the user base, and the fact that GitHub ends up being used by a lot of private companies wanting to host their private repositories (and being willing to pay money to do so).
To my knowledge (I was very young during the period when SourceForge was still good), SourceForge never really attracted private customers by offering value-adds to CVS/SVN, so it didn't have a reliable non-donation revenue model. GitHub has accumulated a cash cow in the form of a lot of companies using it for issue tracking and as a nice UI into their repository state. This doesn't make them secure forever; they have some very real competition, and the fact that they're maintaining an ecosystem means they have a lot of costs beyond just hosting. But I think they're in a place where, if they started to inject obnoxious ads, they believe their revenue would suffer even in the short term.
"To my knowledge (I was very young during the period when sourceforge was still good) sourceforge never really attracted private customers by offering value adds to CVS/SVN so it didn't have a reliable non-donation revenue model."
Correct. At the time it was a much tougher hill to climb; the business climate was by and large not ready for it (quite resistant, actually), hence for a while an on-prem version of the software was developed, sold, and supported.
The Information had an excellent article a few years back about the conflict between the Enterprise sales people and those responsible for the community. The latter won and it was the right business decision. Thinking of GitHub as yet another enterprise software development tool rather than the premier social network for developers shows a breathtaking lack of vision.
Sample size of 1, but I'd say 25% of the time I'm downloading a binary release from GitHub, as I don't want to spend the time setting up a specific environment just to compile xyz. I just want the tool/app, as a consumer. The other 75% is code I want to incorporate into some project I'm working on, as a developer.
> I don't know if you can compare SourceForge with GitHub, because the former was also used by many to get ready-to-use binaries, whereas GitHub is more exclusive to developers and peers.
Some projects use the gist page for their web page, and some use the release downloads as the download binary they link to. Sometimes, just as was often the case with SourceForge, the main project repo page is the main page for the project. If you think GitHub is more exclusive to developers and peers, I think that's just your own bias in how you use it coming through, as many non-developer people I mention it to know what it is if it comes up in conversation.
I see them as extremely similar, to the point that on the relevant point of how it's used by the public I'm having a hard time coming up with differences.
Hm, I may be wrong here (it's been over a decade...) but I remember SourceForge being more about the code (in a simplified, VCS-style distribution directory, which may have, but often did not have, binaries), and Freshmeat being all about installable packaged software.
No matter how many years pass, downloading something from SourceForge feels like downloading a sketchy patched executable from an Eastern European forum, or a game mod from some community-managed website.
As someone whose internet experience is firmly in the Anglosphere, when I see Cyrillic on a website I think "I'm downloading malware". Keep in mind, I'm not going to Cyrillic sites for news, recipes, or anything like that; it's usually related to software. And malware, at least back in the day, was endemic on those sites.
I just took it to mean it's sketchy downloading and running a binary from a forum where you don't speak the language. You have no idea if the vibe on the forum is friendly or hostile, if the first response to the post you're downloading from is someone saying "Hey, thanks for the great work as always!" - or someone coordinating an identity theft scam - or both!
I think because of the abundance of expertise in low-level machine language there, many cracked deDRM'd binaries or patched executables for abandonware games come out of Eastern Europe, hence the association.
This was also before usable quality automatic translation was built into browsers. Babelfish would lose even more in translation than modern Google Translate.
General discussion forums, yes, at least 3. And tens of various niche forums. It's not about how many; it's about how much activity they generate relative to the population size of the country.
The words just before it were "a sketchy patched executable". People are bound to infer there is something wrong with it, regardless of whatever your intentions might have been.
I don't understand why the Eastern Europeans keep publishing strange software on strange forums. Why don't they just use GitHub???
When I wanted to change some hidden settings on my car, I found many strange software packages on these strange forums. But nothing on GitHub; these forum people live in their own strange walled-off garden.
They had a sneaky download button to download some adware vs the download link to the actual software you wanted in the first place. Easy to confuse the two.
That's the "ill-conceived DevShare" mentioned. From the paper:
> While DevShare was an opt-in service, some projects complained that SourceForge bundled third-party adware in their downloads without their consent. Ads were added to project download pages with fake download buttons to trick users into clicking on the ad. Often, clicking on these ads resulted in the download of adware.
> Management focused on ROI; SourceForge was expensive to run and did not have a plan to bring in revenue. This led to the introduction of DevShare, but since management did not understand the open source ethos and the development team was not included in management decisions, DevShare was a major failure. It prioritized ROI over trust and bundled adware with project downloads. Many projects started leaving SourceForge, citing DevShare as a main reason.
Yes, as someone who lived through that period I can confirm: I used SourceForge regularly until the point where it was proven to my satisfaction that the malware being added to downloads was not a fluke or a hack, but an intentional choice by management.
Then, I instantly flipped to having a very strong commitment to never again trust SourceForge. You've probably heard about how a reputation is built over years but can be destroyed in an instant... that's exactly what happened here.
Part of the reason is that, from the point of view of the person downloading (binary executable) software, the most important feature the site provided was my trust that they would give me exactly the binary I was asking for. In a similar vein, if I discovered that a bank had been intentionally lying about account balances and adjusting them in the bank's favor to make a profit, I would immediately and permanently sever my relationship with that bank.
That was a bummer. There was a point in time when ending up on SourceForge felt like ending up on one of those shady download pages. GitHub by comparison was a delight.
When they made the download button send you to an interstitial page with ads and a countdown timer before starting the download, that felt eerily like those shady download sites.
"Sneaky" is an understatement; the download link to adware was more prominent than the download link for the product in the headline (which product was also often afflicted with adware).
I found the article rather thin; I was hoping for some explanation of who made those disastrous decisions, and why.
It's easy to criticize them on this issue, but it clearly demonstrates that they were struggling to monetize the site. If people paid for the software they use like they pay for their coffee, companies wouldn't have to resort to things like this.
Lack of investment in the actual technology, poor vision for what the product could be... they focused on extracting short-term value at the expense of long-term value. A key example is the interstitial page with the countdown timer that forces you to sit there and watch an ad whenever you want to download anything. Anytime you see something like that, run. Those kinds of decisions are made at the board level to generate revenue, generally because the investors don't have enough money to actually keep the company running and working on higher-value, longer-term bets. If not made at the board level, it's symptomatic of a dysfunctional and incompetent product organization that doesn't really care about the product and views what they're doing as a job they'll be gone from in a few months or a year anyway.
SourceForge also suffered because it was project-oriented rather than code-oriented. On GitHub everything revolves around the repo (no repo, no project).
The proportion of projects on SourceForge that didn't have a single commit was staggering, and it made it really hard to discover legitimate projects.
There certainly are plenty of immature and barely started projects on GitHub, but SourceForge had devolved into a wasteland of people posting ideas in hopes that developers would flock to their aid.
Aside from the malware issue, which the new owners seem to have addressed, SourceForge also had a confusing and unpleasant-to-use UI, especially for downloads. Optics matter - people make decisions based on their feelings about a site.
(They seem to have improved that too, although the site is still quite cluttered, even with an adblocker.)
Does anybody else remember uploading binaries to an FTP site and then refreshing a page that lists the last 10 uploads SourceForge has received, ticking boxes to "claim" the files into your project? So bizarre.
I read the paper but came away feeling that their attempts to find something quantitative failed. The graphs didn't particularly correlate with the periods of decline and downfall, and my guess is that if you showed the researchers the graphs before telling them it was SourceForge, they would have struggled to identify whether and when the company was failing.
It seems to me that the most obvious explanation for the decline of SourceForge is that it gave (gives?) too much away free of charge. In this respect, it seems to me that GitHub, Bitbucket, and hosted GitLab are no better.
This is making me stop and think about whether I should use GitHub for my primary open-source project. Will GitHub become the next SourceForge? Is it already happening?
I respect SourceHut's decision to charge all users (at least once it leaves alpha), but GitHub's network effect is strong, and as much as I respect Drew for pushing toward a free software future, I need Windows and Mac CI. Maybe I should set up something self-hosted; I can afford it, at least for now.
GitHub is essentially a "freemium" business; a lot of businesses use it and pay for it. This is quite different from SourceForge which never really had such a model, and AFAIK relied 100% on ads. At its peak SourceForge had about $23m in revenue, and GitHub has ~$300m currently. While both serve the same purpose, they're very different as a business. GitLab and BitBucket have similar business models (with some differences).
I can't predict the future, but I see no reason why GitHub should "become the next SourceForge". The SourceForge exodus to GitHub was already well underway (and, at the time, also to Google Code, BitBucket, and Launchpad) because they offered vastly superior products to SourceForge, which was always a mess. People seem to have a lot of rose-coloured glasses about SourceForge, but its UI was clunky (even for the time), slow, and chaotic, with all the ease of use of a tax form; stuff often didn't work, etc.
The whole adware fiasco was the death rattle of an already declining business that couldn't build a vaguely decent product.
GitHub is the leading social network for an incredibly valuable demographic, developers. Just the social graph is worth far more to Microsoft and LinkedIn than the purchase price or any revenue it may produce. The potential of GitHub Actions for Azure is too, which is why it was so short-sighted for Google or Amazon not to bid.
Maybe. GitHub and others have deeper pockets and provide more services (CI/CD) and so they have more ways to monetize (build cycles, private repos, org sizes, and so on).
>Forges are online collaborative platforms to support the development of distributed open source software. While once mighty keepers of open source vitality, software forges are rapidly becoming less and less relevant.
By the definition proposed, is not GitHub a "forge", and "relevant" enough?
Is there a natural tendency for open source software repositories to be run by private companies? Perhaps the cost of development and maintenance makes it difficult for non-profit organizations to provide comparable services?
I don't think the cost of development is an issue, but the cost of hosting certainly is. Open source organizations have a lot of resources (programmers, reputation, etc) but it is rare for them to have much money.
WordPress, in a moment of weakness in 2005, started pushing shady SEO advertising on WP.org. I self-hosted my blog, so I wasn't directly affected, but I noticed before the story broke in the tech press.
When Sourceforge was top-dog and very useful to the open source community, it was a time when individuals weren’t willing to pay for things online at all, much less for lucrative monthly subscriptions.
SourceForge became almost useless a long time before GitHub came on the scene, though. AFAIR they were taken over by a commercially focused company, which then added more advertising and inserted shovelware/malware into downloads.
Again, my recollection (as a non-programmer) is that Subversion repos were popular at this time -- the source code linked from launchpad.net was nearly all SVN -- and the likes of Tucows for MS Windows binaries and other non-SF sites like Freshmeat became useful again.
Then Linus came out with `git` and the ecosystem shifted.
TL;DR: to my recollection there was a gap between SF's decline and GitHub's rise, so GitHub didn't cause SF's decline?