Microsoft's dpkg packages themselves do some fairly weird/annoying stuff in their maintainer scripts.
e.g., azuredatastudio contains code that manages a /usr/local/bin/code symlink...
... and it contains code that converts a PGP key in armored form to binary form and dumps it in /etc/apt/trusted.gpg.d, even if the sysadmin already took the time to verify the PGP key and put it into their own .asc file in the same directory (ok, I guess at least they aren't force-appending their key to the old/deprecated/monolithic /etc/apt/trusted.gpg file like many others)...
... oh and in doing so they dump out microsoft.gpg to the current directory, whatever that may be... they should at least be using mktemp!
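A minimal sketch of how that dearmoring step could be done safely (this is not Microsoft's actual maintainer script; the paths and the function name are illustrative assumptions):

```shell
#!/bin/sh
# Hedged sketch: stage the dearmored key in a mktemp directory instead of
# dumping microsoft.gpg into whatever the current directory happens to be,
# then install it with sane permissions.
install_dearmored_key() {
    src_asc="$1"    # armored key, e.g. one the sysadmin already verified
    dest_gpg="$2"   # e.g. /etc/apt/trusted.gpg.d/microsoft.gpg
    tmpdir="$(mktemp -d)" || return 1
    # Convert armored -> binary inside the temp dir, never in $PWD.
    gpg --batch --dearmor < "$src_asc" > "$tmpdir/key.gpg" || {
        rm -rf "$tmpdir"
        return 1
    }
    install -m 0644 "$tmpdir/key.gpg" "$dest_gpg"
    rm -rf "$tmpdir"
}
```

Nothing fancy: a temp dir, a trap-free cleanup on both paths, and `install` for the final copy so the mode is right regardless of umask.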
... and they are doing other things that they should be relying on debhelper to do for them, e.g. installing shared-mime-info-spec files manually rather than with dh_installmime; installing desktop-entry-spec files manually rather than relying on the triggers that already handle installation of such files...
As for teams, it has its own oddball way of monkeying with /etc/apt, and one other weirdness: it explicitly changes /usr/share/teams/chrome-sandbox to be setuid. If that file is supposed to be setuid then ship it as such... shipping it non-setuid and then modifying it in a maintainer script sets off alarm bells and breaks dpkg-statoverride...
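If the sandbox really must have its bit set from a maintainer script, the polite version would at least go through dpkg-statoverride so the admin's own overrides win. A hedged sketch, not the actual teams postinst; the STATOVERRIDE_ADMINDIR knob is my addition so the function can be exercised against a scratch dpkg database instead of /var/lib/dpkg:

```shell
#!/bin/sh
# Hedged sketch of an admin-respecting postinst step: record the setuid
# permission with dpkg-statoverride rather than a bare chmod, and do
# nothing if an override (the admin's or a previous run's) already exists.
add_setuid_override() {
    path="$1"
    admindir="${STATOVERRIDE_ADMINDIR:-/var/lib/dpkg}"
    if dpkg-statoverride --admindir "$admindir" --list "$path" >/dev/null 2>&1
    then
        return 0   # an override already exists; leave it alone
    fi
    dpkg-statoverride --admindir "$admindir" --add root root 4755 "$path"
}
```

In a real postinst you would also pass --update so the mode is applied to the file on disk, which is exactly the case a bare chmod silently clobbers.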
> This is why I decided to bite the bullet and go flatpak for the likes of teams, slack and other proprietary software.
I always recommend this now if it is an option. Even if Microsoft does nothing wrong, it is way too easy for me to make an error and somehow enable Microsoft antivirus stuff on my Fedora machine; I still don't know how I managed to do that.
Flatpak is best iff the vendor is producing the packages. All too often I've found myself stuck on an outdated version of an orphaned, unofficial package...
Still, I wonder how many of these violations you'd find if you audited all the non-proprietary packages. Especially in smaller projects. I don't have any examples ready, since I too try to avoid anything but established software, but I'm certain I've hit issues like those mentioned above in the past, some even persisting after the package was removed.
>explicitly changes /usr/share/teams/chrome-sandbox to be setuid
These kinds of hacks are pretty consistent with my few experiences with Microsoft's (and other vendors') closed software on Linux. It's all best avoided in favor of community-maintained solutions.
The packages.microsoft.com repositories are regularly broken, which results in perennial build failures if you use GitHub Actions, whose runner images include them by default.
GitHub's response to "your default configuration is broken" was basically "welp, NOMFP":
(It's a quote from the UK political satire series "The Thick Of It", in which the character Malcolm Tucker remarks, "NOMFP. N-O-M-F-P. Not My Fucking Problem. I quite like that. Did you like that? I'll use that quite a lot today.")
Not only Ubuntu. And not for the first time [0], [1], [3] (Stack Overflow). Would be surprised, however, if the servers are in space instead of under water [2].
Just to clarify the title, "Space" in this context is the 'Biggest Key on your Keyboard' kind, not the 'On Disk' kind.
It looks like a parsing issue where something in MS's automated process is replacing various parts of filenames with spaces. Looks like / and _ at a glance, but there could be more.
"nowhere in the Github issue does anyone mention disk space"
What is mentioned is ambiguous. So while disk space isn't specifically mentioned, neither are spaces in the context of filenames.
Here's what people are reading in that issue:
"Update: our infra team is still working to resolve this issue. They ran into some space issues but this issue should be resolved quickly. Unfortunately, I do not have an ETA yet."
No one has commented yet on the suggested cause? They ran out of space? Really? Azure Monitoring, Azure Sentinel - surely they have the simplest metric: used vs. remaining space.
Or did they hit a limit in the blob/file storage. Still: no monitoring?
You may have noticed I have so many questions on Microsoft Ops.
Reminds me of an F1 story. A car was able to finish the race but stopped before reaching the pit lane. The team communicated it as being a fuel pick up problem. The underlying cause was that there was no fuel left to be picked up.
Wanted to update my Teams on Ubuntu 20 this morning (after coming back to the office after months of working from home) and had the same issue. Removed the package and installed it from snap. Works like a charm so far.
What SLA are Ubuntu mirrors held to? This doesn't seem great, but won't a quick run of mirrorselect remove the offending repos and let you get on with life?
For your home setup: sure, it should be resolved soon, not too big of an issue.
What's much more annoying is that this breaks CI/Docker builds. Right now you can't deploy new container images that depend on a MS repository. I hope it's resolved soon.
We already had the same problems in the 90s and solved them with caching. It's unthinkable that a whole build can be broken by a problem with an external dependency: if you assume everything will be 100% available, you basically guarantee that a problem will happen.
I wrote "hashcache" [0] [1] for this for my Linux distribution (which is the build from source variety) to avoid issues with upstream going away, or worse changing the contents of a file.
Many different caches can be used, in case one is down as well.
As the other comment says: cache it yourself. Not only will you save download time and avoid sudden version changes; your build environment really should not depend on pulling things outside your control every single time.
The modern argument would be that you're more likely to have problems with your own cache than someone else, and you should just use caching as a service.
It's a lot easier to remove yourself from the critical path if your cache has problems than it is to add the caching after the fact when a third party is having problems, though.
You plan for failure. What happens if your cache fails? You fall back to Microsoft. But what happens if Microsoft fails and you don't have a cache? Well you build a cache in advance and default to that.
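That "cache first, fall back to upstream" logic can be sketched as a tiny proxy auto-detect helper; apt can invoke such a script via its Acquire::http::Proxy-Auto-Detect option and treats the literal string DIRECT as "no proxy, go straight to the source". The cache URL here is an assumption, not a real service:

```shell
#!/bin/sh
# Hedged sketch: prefer a local caching proxy (apt-cacher-ng on an assumed
# internal host), fall back to the upstream repo if the cache is down.
pick_apt_proxy() {
    cache_url="${1:-http://apt-cache.internal:3142}"  # assumption
    # If the cache answers within 2s, route apt through it...
    if curl -fsS --max-time 2 -o /dev/null "$cache_url" 2>/dev/null; then
        echo "$cache_url"
    else
        # ...otherwise tell apt to go direct (DIRECT is apt's keyword).
        echo "DIRECT"
    fi
}
```

Wiring it up would then be one line in an apt.conf.d snippet, e.g. `Acquire::http::Proxy-Auto-Detect "/usr/local/bin/pick-apt-proxy";` (path hypothetical).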
Caches can also improve build times, give security tools something stable to scan, and so on; the benefits go beyond just redundancy.
We do local caching, and it’s caused more outages, downtime, headaches and lost productivity than I can count. Although the hardware requirement is minimal, all that other stuff makes it expensive. So run a risk calculation for your own use case.
The modern argument is most likely made by people who sell caching as a service and have an interest in you blowing $500/month on a glorified FTP server or a script to set up Artifactory.
Well, we cached the final Docker images ;-) Unfortunately, if we need to add another dependency to our build image, we're done. (Well, we could pull the old one, add the dependency, and re-push that image.)
The “mirrors” in this case are Microsoft-run servers specifically for Microsoft-made packages for Ubuntu, rather than the actual Ubuntu mirrors that carry all Ubuntu packages.
Microsoft repos were silently added to Raspberry Pi OS earlier this year, ostensibly "in case anyone wants to use VSCode" and definitely not just as a super shitty way of adding telemetry to every Raspberry Pi OS install without asking the user if they were cool with this.
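If you want to check whether something similar landed on your own machine, a small (hypothetical) helper like this lists any apt source files mentioning packages.microsoft.com; the directory parameter is only there so it can be pointed at a scratch tree instead of the real /etc/apt:

```shell
#!/bin/sh
# Hedged sketch: report which apt source files reference Microsoft's repo,
# so you can see whether one was added without your say-so.
find_ms_sources() {
    apt_root="${1:-/etc/apt}"
    # -r: recurse into sources.list.d etc.; -l: print matching filenames only
    grep -rl 'packages\.microsoft\.com' "$apt_root" 2>/dev/null || true
}
```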