Sometimes the excuse is that CPU and memory are cheap, as if resources weren't under constant pressure from the hundreds of processes that all pretend they're the only one on the machine.
Many also don't know how to draw simple, low-overhead abstraction boundaries. Everybody wants to write the most generic code possible, planning for things that will never happen, or at least not in the way they think. You can't call most people out on this, because the one piece of code they're working on is always the exception.
And most developers don't know how to optimize as they write code. They often don't understand the problem, or lack the mechanical sympathy to see which code deserves care and should be made fast from the start. They probably don't understand the cache hierarchy well enough to know why that linked hash table is probably the wrong data structure. Hash means fast to them, and to them this is all premature optimization, even when the correct solution is probably no more difficult than what they're doing. Calling something premature optimization is often just a way to shut down discussion of how their code could be improved.
The field is too in love with horribly inefficient frameworks. Writing network code and protocols is now considered too low-level for people. Many are too caught up in what is tech-cool. They're the holistic medicine practitioners of software and they don't even know it.
But unless they work in embedded software, this is rarely a priority. In fact, it tends to be actively discouraged: in the good shops, reliability, robustness, and security are the goal; in the bad shops, it's bullet-point features and meeting unrealistic deadlines.
To get an idea of how poorly optimization is regarded, look no further than the infamous Heartbleed exploit in OpenSSL. One of the contributing factors to that bug was that the team used a custom memory allocator for performance reasons. After the bug was discovered, the allocator was blamed more than anything else that led to the bug. And for what it's worth, it was a real performance improvement, at least on some platforms.
If reliability is the top priority for your project, it just means you factor that into your optimizations. It doesn't mean you ignore efficiency and write any old shit, as seems to be the custom today.
Your OpenSSL example seems to suggest the Heartbleed exploit was a result of developers daring to write performant code, and not that they simply wrote buggy or insecure code, performant or not.
reliability * efficiency = developer skill
So in short, if you aim for a certain level of reliability and employ cheap/unskilled developers, efficiency will suffer. And as long as companies don't really care about performance, developers won't gain the experience necessary to write efficient code with just-in-time optimizations.
 - where "skilled" probably mostly means some knowledge and experience in writing fast code; it's probably not the experience you'll get when all the coding you've ever done is for the web.
You can certainly write efficient code that's reliable, robust, and secure, given the time and resources to do so. It usually comes down to money: no matter how efficient or high-minded you are, crafting the optimal solution takes more time and resources, i.e. more money. If you can sell users and clients on the extra costs, great. Otherwise you need to figure out what you can do, with the time and resources you have, that will sufficiently satisfy your customers.
And yeah, the problem is that with the gobs of resources available on modern computers, writing efficient code matters less: most users aren't going to care whether their music player is using 1% of the CPU or 10%.
Of course I say that with the annoying exception I've experienced where the custom MP3 player in GTA V requires a relatively huge amount of resources and will easily reduce performance by 10-20FPS even on an overclocked i5.
For a given amount of resources invested, it is. From a business perspective this includes not just the time a developer spends on a given piece of code, but how good a developer one is willing to pay to work on it. Perhaps those making the business decisions overestimate the cost of writing more efficient code and underestimate the cost of not writing it, but there is a cost, and it will be taken into account.
Of course you can, but often it takes more time (or more expensive developers) than inefficient code, and the people paying the bill don't necessarily benefit from the efficiency.
Sometimes people knock it out of the park and do a good job. Bootstrap was seen that way for a while. (I've not heard much of it lately, so I assume its popularity has waned?) By and large, though, we all have different requirements for development. And none of them are shared by the customers/users of our software.
Example: everybody wants their web pages to be smaller, but nobody wants to part with the massive JS frameworks everyone is using, even if their site is mostly text. Instead, endless amounts of blogging and engineering time are dumped into minification, compiler tech, and other esoterica to solve a problem created by developers. This is incidental complexity, and it lays waste to software systems.
In short, the zeitgeist selects what it wants to hear.
Often better code takes only marginally longer or even less time since you aren't having to deal with all the extra complexity that tends to be a part of "modern" development.
The complexity is of our own making. We spend too much time on this extra-cool and extensible way to use JSON and XML, in case we ever want to run on a quantum computer in 50 years, or to avoid a simple recompilation when we need to change a constant that will likely never need to change, when we should have been spending that time on more important things.
Do you know of organizations where an internal drive for excellence is cultivated? Where technical people communicate directly across hierarchy levels and horizontally? Maybe even outside the company?
: people who are relatively close to the customer and not the implementation. Those people tend not to understand how efficient things are, because they can't estimate how long something should take from first principles - they have no benchmark for it.
This makes them much happier software users.
Does anyone know if there's a generalized name for this problem? It's the same one where your teacher gives you "only two hours of homework tonight" but forgets you have four other classes all doing the same thing.
Tragedy of the Commons seems to imply some harm is happening to the actors themselves; the coordination problem with computing resources here is a little different, because most of the time, application vendors don't feel any impact of the problem they contribute to, which makes them even less incentivized to coordinate.
Hmm, I can see how they're related, but it doesn't seem to be quite what I'm talking about. I'm more referring to the overuse of resources by individual elements assuming or acting as if they're the only thing using it, like in the above examples of a teacher acting like their class is your only one, or a thread consuming resources like no others are active.
Parkinson's law more seems to be the result of the effect that I'm talking about (among others, like procrastination).
> The field is too in love with horribly inefficient frameworks
Dude, I can't emphasize this enough. One of the best things I've read in the past week.
I'm probably guiltier of this than I realize, especially since I loooooove me some abstraction.
From there, you would be well served by working through Hennessy and Patterson's Computer Architecture and Computer Organization and Design.
In parallel, you could work through a book on Operating Systems, maybe even playing with the code for a toy system (e.g., http://pages.cs.wisc.edu/~remzi/OSTEP/ or https://pdos.csail.mit.edu/6.828/2016/xv6.html).
An alternative or perhaps parallel approach would be to play the assembly games put out by Zachtronics.
Secondly, abstraction doesn't necessarily track with efficiency/inefficiency.
(Nonetheless, thanks for your suggestion!)
Though, instead of reading, you really should just get over your complex and try writing some efficient code.
It's not a complex. I'm always very surprised by this attitude. Theory serves practice.
Case in point: the OP mentions that an understanding of cache architecture enables you to reason about which data structures exhibit good cache locality. Discovering that hash tables have poor cache locality by trial and error seems like a waste of time compared to gaining the theoretical insight up front.
I think you may have hit a linguistic snag here; there's lots of "theory" in the classical CS sense dealing with performance from an O(n) point of view, such as Knuth's work. But for producing fast results on actual hardware you end up having to take into account lots of ugly details of the platform. Learning about cache behaviour doesn't really fit into CS so it's not the first thing people think of when you ask for theory.
"Knowledge transmitted by practitioners through written and oral culture outside of the academy": what's the word for this?
Oral culture is especially prone to this -- at least a blog post or a physical book has a date on it. You have no idea when your co-worker's suggested optimization technique was developed.
I think the word (or phrase, rather) is "tribal knowledge".
* from the book
secrets of the cult
That said, if anyone knows some kind of collected guide to writing efficient software, I'd be happy to learn about it, 'cause I haven't seen anything like this published.
There are various analyses of Doom and Quake source around which look at the techniques used there.
That said, spend a week with C and valgrind/cachegrind. There's a lot of theoretical stuff that is hard to get at (or is for me) without a little exposure to how the system works. A couple of hours here and there will extend your mental model to include the various layers of cache. That'll make the more esoteric stuff more accessible.
I think it requires an open mind to want to learn these things and put effort into them, instead of blowing things off as premature, or as something the next maintainer of the code will come along and fix.
No, that would be standard for computer engineering, not CS.
Try to have a conversation about hardware with many practicing software developers who were CS majors and they'll look at you as if you've grown a second head.
I find that this often extends beyond performance too. Many websites/apps waste lots of screen space because they were clearly designed for, and tested at, 1080p.
This manifests in assuming everyone can afford the abstraction boundaries chosen. Or that everyone needs the same tradeoffs. Or, ultimately, that everyone would come to the same conclusion.
I think of it as the desire of most people to boil software down to formulaic choices. Imagine if you came up with something as elegant as "F = ma" for software.
I feel this is somewhat related to the rise in functional programming. It is not that either of those things are bad goals to chase. But they are not ends to themselves.
While the frameworks themselves are made for a particular purpose, they are often misused. One great example is Electron. I found an Electron clone of KeePass. Now why would anybody want that?
And you find people justifying huge, bloated Electron apps that take a minimum of 50 MB for a simple Hello World. And then they say memory is cheap.
I say memory is cheap for one app at that size, not for running all apps at that size. If BusyBox can fit replacements for the GNU coreutils in less than a megabyte, why can't they make more efficient apps?
>...frameworks... are often misused.
You don't think about performance, as a developer, usually, until it becomes a problem, and there are so many layers and frameworks between an MP3 player and the CPU that this kind of thing should be expected.
It is much more difficult to keep a real-time system working in real-time than it is to keep something performing as well as a desktop computer user expects it to perform.
Doesn't MP3 decoding have dedicated hardware support, making the job much easier?
You can get dedicated MP3 low-power decoder chips. You might have one in a fancy PC soundcard (SBLive?), but I don't think it's included in baseline AC97.
(I remember having a 486DX that could decode MP3s with the Fraunhofer player at about 95% CPU, but not Winamp, which was slightly too slow to keep up)
I don't know how CPU-efficient it really is though, and that seems to be Ted's main concern.
My only annoyance with it is that (AFAIK) there still isn't a 64-bit version of it, though that problem is so common on Windows it's hardly worth mentioning.
I miss Audion, from the folks at Panic (makers of Coda & Transmit):
It had a similar spirit to Winamp, so much that AOL/Nullsoft looked into acquiring them. As did Apple before iTunes existed, as documented in The True Story Of Audion:
It's ancient software pretty much, written to be efficient on computers several generations back; should be fine as long as it hasn't had any major rewrites since then.
foobar is pretty efficient.
Here it handles MP3 decoding and spectrogram visualization with only a few percent of a CPU.
Nowadays I listen to music off my phone, and on that platform BlackPlayer is as close to perfect as a music player gets.
I'd recommend trying it out, I think you'll be surprised by how nice it is to use.
AIMP 3 takes 30MB of RAM even with 20+ GB playlist, on a machine with two sound cards, playing music 10-12h a day, and I've never seen it going even above 2% of CPU.
That being said, the author's sarcastic tone is fully justified. Everybody picks a favorite language and defends it as if it were their parents under fire. I'm all for Elixir and I love it, but I've already had to write Golang several times because I needed more raw speed, to give one example.
Don't be fanboys, fellow programmers. You're paid to do a job, not to be cool. Too many forget that.
That it contains more than 20 GB of song data? If so, I fail to see how that relates to the memory needed to talk about the songs.
If it's instead 20 GB of actual song metadata, then I'm both impressed by and a bit scared of your music collection. :) And also impressed by the software's ability to deal.
If you find a DB that can cope with 20GB without sweating much, give me a shout! ;) You might become rich if you make one, too!
Oh, this one will be easy! But I'm warning you now, you'll need 24GB of RAM to run it ;-)
Also, many years ago I tested all the freely available audio players for features and light use of resources, and nothing beats AIMP.
I have years of rating data and play counts in the AIMP library system. I wish I could move it to Android somehow.
Anyway, that's been my quest to find a lightweight audio player.
I am open to new concepts but once I get the concept and I like it, I am rarely looking for it implemented in several different languages.
Although Pony has new concepts Elixir doesn't have ;)
(Personally, I enjoy watching lecture videos of new languages just for the concepts they introduce me to, even when I never program in them - I can recommend the CurryOn! youtube channel)
It's just that I want to stabilize my skills and sell myself in new ways recently; thus I am more focused on perfecting what I can do and becoming an expert and a pro.
Lectures introducing new concepts are something extremely valuable!
Android can show how many mW an app / internal program consumes, so I wonder if Google is collecting this data in order to classify apps based on it.
Everything should be pretty self-explanatory. The "holoscript" part gets executed by my configuration management tool, but it basically means that the two LoadModule lines are appended to the stock /etc/pulse/default.pa
Also, if you have a firewall, open port 6600 to LAN if desired.
PulseAudio, on the other hand, throws a fit. For reasons unknown to me, PulseAudio basically doesn't support running as a system-wide instance, so things get pretty messy if you want sound to come from multiple users (i.e. your user and your mpd user). If you only ever use one user on your system, I'd recommend just running mpd under that user - I believe that's what I did to get it working on my computer. Of course, if you're not using PulseAudio then this isn't an issue in the first place.
This is the approach I took, and I think it works very well.
The only problem with that approach is it means that mpd is tied to your X session. I quit or restart X every once in a while and it's nice to have my music keep going during that time. But obviously that's not a use-case everyone cares about since most people don't have a reason to leave X.
* http://www.tedunangst.com/flak/post/mplayer-ktracing (https://news.ycombinator.com/item?id=13704163 https://news.ycombinator.com/item?id=13624174)
* http://www.tedunangst.com/flak/post/rough-idling (https://news.ycombinator.com/item?id=10254828)
* http://www.tedunangst.com/flak/post/browser-ktrace-browsing (https://news.ycombinator.com/item?id=11830969)
* http://www.tedunangst.com/flak/post/accidentally-nonblocking (https://news.ycombinator.com/item?id=11847529)
* http://www.tedunangst.com/flak/post/firefox-vs-rthreads (https://news.ycombinator.com/item?id=11470042)
Also you can have a hardware decoded MP3 path, and a software decoded OGG path, for example. They can coexist.
Most real hardware wasn't designed to be on the market long enough for any of this to come to pass, though, and you're probably 100% correct.
I am a Windows user myself, but I feel "ncmpcpp" is
a good alternative on linux (apparently that's an Ncurses Music Player written in CPP). Simple, console list interface with good features.
It's simple but works perfectly; I don't understand why music players nowadays are such clunky monstrosities when the music should do all the talking.
Same here. I never understood why people were initially so enthusiastic about iTunes or the other post-Winamp players that tried to hide your audio files behind a convoluted GUI and abstracted the files on disk into a crappy, opaque database.
For me a directory tree with well-named directories and files is still the least-worst solution and also has been, over time, the most dependable one. (On Android I use Music Folder Player due to this. iPhone I don't know, I don't have one.)
This. My Music is very meticulously organized on the file-system level. Every single folder (sans Soundtracks and Videogame music) is organized like so:
My Music -> Artist -> Year - Name -> Track # - Title
My Music -> Black Sabbath -> 1971 - Paranoid -> 01 - War Pigs.mp3
That's it. That's all I need. I've had it this way since 1998, and it still works, across myriad computers, file systems, and operating systems. One of the reasons I stayed with Winamp so long is that I could simply right-click on a folder and click "Play in Winamp", and I was all set.
All of these other apps that try to organize my music for me inevitably fail, because the tags are rarely complete or consistent. Trying to backup all my music to Google Play Music has shown that, time and time again.
These days I'm happy that I was finally able to get that functionality back with Audacious.
I frequently like to create playlists based on the ID3 genre tag, so a folder-based approach doesn't really work. Folders also complicate things when you're trying to maintain separate music collections -- like a folder for ripped music vs storebought.
I still use iTunes. The filing system on-disk is identical to what I was doing anyway.
iTunes Media > Music > Artist > Album > nn Song Name
(Indeed, I point Plex at the same folder and am slowly migrating over to using it rather than the built-in sharing for network playback.)
It's all that extra stuff that can only be done by storing data structures elsewhere: playlists, random by genre, etc, that requires the convoluted GUI and opaque database.
A pure hierarchy can't handle compilations or playlists very well. A database, a real one, ought to be the solution - my best player experience was with Amarok 1.x which let you use embedded sqlite or connect to mysql/postgresql.
I don't feel like I want a whole lot from a player, but I do want to queue a track or two while leaving the player on shuffle, to play individual tracks out of a cue/flac, integration with some service that can tell me when a band I like is playing near me, and ideally a recommendations service too. So I've generally ended up with the heavyweight players.
Even before iTunes was released I used MusicMatch Jukebox. Keeping ID3 tags and file/directory names and syncing things manually with files was always just such a PITA. And how do you even deal with smart playlists or even playlists in general in an efficient way with pure files?
And to the OP's point - it looks like iTunes raises my CPU's power usage from 0.3W at idle to around 0.7W, and that was with the UI visible - keeping my whole machine with display on still at around 5W, or easily powered by a USB phone charger.
It also has a fun feature: a playback item can be an "action", such as stopping playback, or even running arbitrary Lisp code. :)
Thanks, installing now. I've had music organised in folders since the mid-90s and it's not correctly tagged. Google Play Music messes up most of it, looking forward to items finally playing in the correct order.
- Spotify is primarily for streaming Spotify songs. It shouldn't be inefficient, of course, but you're looking in the wrong place if you just want to play your audio files.
- Groove: Windows built-in programs are rarely great, why expect otherwise from Groove?
- Come on, dude. You can't just write something off because the homepage has a trendy design. And Shoutcast is ancient technology! I'm pretty sure I used Winamp with Shoutcast on my Pentium 4 and it worked fine.
- Foobar2000 has been around forever! It was fast and efficient ten years ago, and that hasn't changed.
This guy should have done a little more research.
For example, I've seen it make calls to
* Google Tag Manager
* CDNs - presumably for image content
I've blocked most of them.
Specifically, I love being able to find music by year, and while it's not the simplest UI, its playlist builder does what I need. Wish I could run it on my phone.
I do suspect however that my use cases are unusual in that I don't stream music, and have a large library replicated on my PC, Surface, and my phone - and I've spent years completing and cleaning up the meta data, which Windows Media Player makes really good use of.
Be sure that there's quite a lot of us out there still but we don't like to tout it, lest the cool kids attack. :)
It also produces objectively better sound reproduction, and can handle FLAC which the original firmware could not.
Pandora - https://github.com/PromyLOPh/pianobar
Spotify - https://github.com/plietar/librespot
Any others for other services that still work?
As you said, Winamp + classic skin, with all the "features" turned off like Winamp Agent and those other stupid things they added after the AOL acquisition, is perfectly fine. The fact that OP didn't even try out (or discover) foobar2000 tells a lot.
I think most people are being sensible when they opt not to run unpatched software from 15 years ago.
And, yeah, I grew up with Winamp and MP3s (or even .MOD/.XM). But the world moves on.
The biggest roadblock at this point is the way Spotify has been handling the deprecation of libspotify. They haven't made a replacement public yet, and the public web endpoints don't support playback like libspotify does.
Did you even read the article?
Nuklear, the GUI part, I know to be extremely inefficient. But since it isn't called that much (a refresh every 10 ms, IIRC), it uses an insignificant amount of CPU time.
dr_flac and LAME are efficient enough, especially considering the work they do.
Note that LAME can do fixed-point, while dr_flac uses floats. This is very important when talking about power efficiency, as floats use much more power.
Sending the PCM data to the sound card (using ALSA) takes an insignificant amount of CPU time - a memcpy to fill the buffer, about 172 KiB per second (44100 * 2 * 2 / 1024; 2 channels of 2-byte samples at 44.1 kHz). Though ALSA (speex) resampling from 44100 Hz to 48000 Hz does take a significant amount of CPU time (IMO it's best to resample while decoding).
On "modern" Linux there is also PulseAudio, which takes that resampling overhead from the program onto itself. Note that PA (I don't know if it still does) uses floats to resample, making it very inefficient.
Floats on modern x86/amd64 CPUs are fast, as fast as integers (although float operations have latency, so they turn out slower). More important is that floats use much more power to compute. Another thing with power efficiency: modern CPU cores go to sleep to save power, so one should not wake them up every millisecond to check on something.
As for the article;
>After downloading hundreds upon hundreds of kilobytes of zip file, fire up a shell, run mpg123, and wowzers. It plays MP3s smoothly and efficiently. A remarkable feat of engineering, especially considering it’s all written in plain old C without leveraging the synergies of dozens of frameworks. One wonders how they managed.
I want to say "no shit".. so... No shit. For multiple reasons.
>Time for an old standby? What’s Winamp up to these days? Oh, great, looks like more internet radio nonsense. And the website has the modern flat square design I’ve recognized as a harbinger of impending disaster. Sigh.
So Winamp doesn't even get a chance because the website's look signals an "impending disaster"?
I've tried Tomahawk, WMP, VLC, and others. But they all have some stupid social connectivity, or rating/library organizing thing (so does Winamp, but it's unobtrusive), or just takes too many resources.
All my music is already organized (Ampache); I just want to play an M3U. VLC was a close second, but the UI is too clunky for me (though I'm guessing skins and some customization could change that).
Finding efficient software for mobile phones is worth the effort:
As a C programmer it seems obvious that it's efficient exactly because it doesn't use dozens of frameworks.
I wonder if the author was being ironic saying that...
I'll check out this Russian thing too though.
Apple, this was the one product that meaningfully changed the game for me and many others. We miss it.
I definitely agree with Ted's conclusion, though. We need an Energy Star (TM) rating for software.
Can anyone here recommend a good player for Windows?
The default UI is difficult, but it's ridiculously customizable. You can find tons of people's customization profiles for it to save effort.
My other music player is Youtube :)
It can display the covers of albums with excellent resolution and clarity.
The compatibility with music formats is excellent, supporting most of the formats designed for music players. In fact, I can play most music from 1948 to 2017 on it without any change in configuration.
With optional hardware it can support 4-channel surround audio as well, although this isn't a user-friendly setup.
Not everything is great -- my music player isn't portable at all. It is big and weighs more than 10 kg. It also doesn't support remote operation: you need to be at the user console to change tracks, etc.
My music player is called a "Lenco L75".
Not to mention the required power dongle can be a real limitation, even if the weight is not.
As for degradation, in theory there is degradation every time the record gets played. In practice tests done in the 60s show that records can be played over 1000 times without significant deterioration of audio quality.
However, the record surface is very susceptible to damage. Play a record with a worn stylus, too heavy a tracking force, or a poor-quality cartridge - in short, on a bad turntable - and it will wear down rather quickly, and sound quality will degrade with it.
Also, for Winamp, the only version that matters is v5.623 (dated Dec 9 2011)
MPC-HC ships a built-in copy of ffmpeg/libav via LAV Filters, what extra codecs did you need?
I use foobar2000 on Windows and MOC on Linux and I really do not miss either of them when I'm on the other platform.
MPD comes close to MOC but it's too complex to use for me, and some clients don't actually list all of the files in my library.
I might just go back to Clementine, if it works. That's what I use on Windows.
I just got bit by the new Firefox dependency. Installing PulseAudio left me without sound in VLC (even though I'm using the PulseAudio output plugin) and Firefox can't play videos anymore.
I haven't tried apulse -- I solved the problem by not using Firefox anymore -- but I hear it gives good results.
Thing is... I know what you're saying, and I don't doubt it's probably just some trivial issue that I can solve with a two-hour trip to Google and my PA config files. However, not running PA works fine, and has been working fine for like 12 years on every computer I've owned. It's literally no effort at all. I could spend some time getting sound to work again, or do absolutely nothing at all and have it working fine, just as it's worked for the last six years on the machine in question.
The days of endlessly troubleshooting ALSA settings and OSS wrappers aren't really gone if we're troubleshooting PA settings and ALSA wrappers instead.
Sounds like a better option, IMO.
Read file > play
Internet > download file > decrypt > read file > play > remove files once the cache limit is reached
The internet/downloading takes a lot of power.
The filesystem is also in use far more.
The encryption takes a lot of cycles as well. The reason it's there is so the app won't be glorified music-piracy software a la Napster/Kazaa/Limewire.
At one point Apple removed indie apps from their store that cached YouTube music to the file system for offline listening. A nice feature, but music is copyrighted and has a rich history of lawsuits. You can't just stream music, because streaming in this context is the same as downloading.
All battery-munching things that add up, but which are essential due to the nature of the data.
Furthermore, Spotify needs to run on iOS, Android, Windows, Linux, and macOS. That requires far more intricate code than a player for one OS; optimizing for each platform individually while keeping the code shared is no easy task.
I agree a lot needs to be better, but there is quite some bias here, and some overlooked factors.
Read file from flash. Decode. Play.
Modern wifi is surprisingly efficient and shouldn't make much of a dent on a laptop-like device.
There doesn't need to be much of a difference. Decryption is no good excuse to bump it up into the multi-GHz range -- even at high quality, compressed music comes in at no more than a few dozen kilobytes per second. That is nothing.
Unless the application is pre-caching entire albums, encryption & decoding aren't a good excuse.
Portability has nothing to do with it either. mpg123 runs on lots of systems. "I must run on Linux so I use 3.5GHz on Windows!" said no application. Ever.
Modern wifi is surprisingly efficient
There are dozens of indie programs, freely made, that can easily and efficiently cache music, per my Apple store reference.
You're saying the multi-million-dollar Spotify company can't find one dev to make it good?
I mean, perhaps it's true as well, but it feels so unlikely.
It likely just isn't a concern for them. Just as making page loads lightweight isn't a concern for most press-like organizations that prefer to serve megabytes of script, pictures and ads across dozens of requests.
Bloat and inefficiency is the way it goes in this industry, and these companies are too big to care if tedu doesn't like the way the cycles are wasted. Solo developers may have different priorities.
I remember the anecdotes about working for Steve Jobs: devs sitting through an entire weekend just to make the transition when switching between menus precise to the millisecond. That one is slightly hyperbolic, but the point is standards.
It's no coincidence the ipod is the most famous music player in history ever.
Devs might see the issues with the code, but as you say, a lot of people in high places give zero shits.
I've had this happen myself at many companies I've worked with. I'd find a way to be more efficient, and you get false promises of it being addressed in the future, or your advice gets ignored or dismissed as a waste of time.
At worst you become the know-it-all who "can't let things go". Fun times...
While there are many inexperienced devs, a dev's work is only as good as the standards of their "captain".
I may be becoming too cynical or slightly old but I don't care if the businessmen are smart enough to recognize a good investment in quality. I am not interested in educating bean counters. I simply don't ask them and I leave them no choice. I invest in a future where I won't be cursing myself or them. They never know it and we're all better off.
If it were the pure audio source, then nothing would stop me from copying it and putting it in my own library, possibly shared with anyone through my own NAS, avoiding ads and future subscriptions.
I could also just stream the cached folder into my own app, and while losing the ability to pick songs in real time it effectively becomes a radio given the main client keeps running, which I would do on a random cheap pi or server.
Try out librespot if you fancy a comparison to what the Spotify app audio backend could be like (obviously there is no clunky gui here).
Someone else mentioned VLC's GUI sucking, and that is so true and such a pity, both for the playback and conversion/transcoding abilities it has. I don't mean graphical skins, mind you, it just needs (a lot) more love.