I feel like we've reached the worst age of computing, where our platforms are controlled by power-hungry megacorporations and our software is over-engineered garbage.
The same company that develops our browsers and our web standards is also actively destroying the internet with AI scrapers. Hobbyists lost the internet to companies and all software got worse for it.
Our most popular desktop operating system doesn't even have an easy way to package and update software for it.
Yes, this is where it's at for me. LLMs are cool and I can see them as progress, but I really dislike that they're controlled by huge corporations and cost a significant amount of money to use.
Use local OSS models then? They aren't as good, and you need beefy hardware (either Apple silicon or Nvidia GPUs). But they are totally workable, and you avoid your dislikes directly.
$3000 is not that much for hardware (like a refurbished MBP Max with a decent amount of RAM), and you'd be surprised how much more useful a thing that is slightly worse than the expensive thing is when you don't have anxiety about token usage.
Ok, from that perspective we are still a few years from when a college student in Portugal can run local OSS models on their own hardware...but we aren't a few decades away from that, at least.
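For what it's worth, running one locally is already just a few lines of code once you have the hardware. A minimal sketch using llama-cpp-python, assuming you've downloaded a GGUF model yourself (the model path and parameters below are placeholders, not recommendations):

```python
# Minimal local-inference sketch with llama-cpp-python
# (pip install llama-cpp-python). The model path is a placeholder;
# bring your own GGUF file.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/some-7b-model.Q4_K_M.gguf",  # placeholder
    n_ctx=4096,       # context window size
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize why local models matter."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

No metered tokens, no account, and it keeps working when a provider changes their pricing.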
> they're controlled by huge corporations and cost a significant amount of money to use.
Is there anything you use that isn't? Like the laptop you work on, or the software you use to browse the internet and read email... I've heard comments like yours before and I'm not sure I understand them, given everything else: why does this matter for LLMs and not for the phone you use, etc.?
Unfortunately we live in a "vote with your wallet" paradigm where some of the most mentally unhealthy participants have wallets that are many orders of magnitude bigger than the wallet of the average participant.
Honestly I think it's under-engineered garbage. Proper engineering is putting in the effort to come up with simpler solutions. The complex solutions appear because we push out the first thing that "works" without time to refine it.
Dystopian cyberpunk was always part of the fantasy. Yes, scale has enabled terrible things.
There are more alternatives than ever though. People are still making C64 games today, cheap chips are everywhere. Documentation is abundant... When you layer in AI, it takes away labor costs, meaning that you don't need to make economically viable things; you can make fun things.
I have at least a dozen projects going now that I would have never had time or energy for. Any itch, no matter how geeky and idiosyncratic, is getting scratched by AI.
They're possible, but they're not exactly relevant, and you couldn't do something like that on newer hardware. It's like playing a guitar from a museum because the world just forgot how to make guitars. Pretty dystopian.
I’m not the OP, but my answer is that there’s a big difference between building products and building businesses.
I’ve been programming since 1998 when I was in elementary school. I have the technical skills to write almost anything I want, from productivity applications to operating systems and compilers. The vast availability of free, open source software tools helps a lot, and despite this year’s RAM and SSD prices, hardware is far more capable today at comparatively lower prices than a decade ago and especially when I started programming in 1998. My desktop computer is more capable than Google’s original cluster from 1998.
However, building businesses that can compete against Big Tech is an entirely different matter. Competing against Big Tech means fighting moats, network effects, and intellectual property laws. I can build an awesome mobile app, but when it's time for me to distribute it, I have to deal with app stores unless I build for a niche platform.
Yes, I agree that it’s never been easier to build competing products due to the tools we have today. However, Big Tech is even bigger today than it was in the past.
Yes. I have seen the better product lose out to network effects far too many times to believe that a real mass market competitor can happen nowadays.
Look at how even the POSIX ecosystem - once a vibrant cluster of a dozen different commercial and open source operating systems built around a shared open standard - has more or less collapsed into an ironclad monopoly because LXC became a killer app in every sense of the term. It's even starting to encroach on the last standing non-POSIX operating system, Windows, which now needs the ability to run Linux in a tightly integrated virtual machine to be viable for many commercial uses.
Oracle Solaris and IBM AIX are still going. Outside of enterprises that are die-hard Sun/Oracle or IBM shops, I haven't seen a job requiring either in decades. I used to work with both and don't miss them in the least.
You don't need billions of dollars to write an app. You need billions of dollars to create an independent platform that doesn't give the incumbent a veto over your app if you're trying to compete with them. And that's the problem.
Oh super cool. What's interesting is that to my eye it looks horrific at first glance but the longer you look the more you realize how advanced this whole thing is.
Ideally the model would be run locally in the browser, so the author isn't paying whatever they're paying. But the web standards to do complicated stuff locally aren't there yet and probably will never be.
That's not a practical answer but it's my two cents.
I wish I could give him two cents without having to try. HTTP status 402 with micropayments or something needs to become a thing. The platforms do it (subs, tips, donations, rewards, etc.), so why can't the web?
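Nothing in HTTP stops you from building the server side of this today; what's missing is a standard payment scheme and client support. A toy sketch of what a 402 flow could look like, where the X-Payment-* headers are made up for illustration (no such standard exists, which is exactly the complaint):

```python
# Toy HTTP 402 paywall sketch, stdlib only. The X-Payment-* headers
# are hypothetical: status 402 is reserved in the spec but has no
# standardized payment semantics today.
from http.server import BaseHTTPRequestHandler, HTTPServer

PRICE_CENTS = 2  # literally two cents

class PaywalledHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        token = self.headers.get("X-Payment-Token")  # hypothetical header
        if token is None:
            # No payment attached: ask for one.
            self.send_response(402, "Payment Required")
            self.send_header("X-Payment-Price", f"{PRICE_CENTS} cents")  # hypothetical
            self.end_headers()
            self.wfile.write(b"Attach a payment token to read this.\n")
            return
        # A real server would verify the token with a payment provider here.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"The two-cent article body goes here.\n")

if __name__ == "__main__":
    HTTPServer(("localhost", 8402), PaywalledHandler).serve_forever()
```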
Maybe, but WASM still has its limitations and pains. If you compile with Emscripten you're still using thousands of lines of generated JavaScript to glue the WASM and JavaScript together.
> ESA projects are a bit demo-like and limited in scope
I am kind of confused by that statement; what more would you expect from the Copernicus Programme? Isn't it a technical improvement over NASA's LANDSAT programme?
I don't mean "demo-like" in terms of poor technology. I meant that this technology doesn't yield products or services at global scale to the extent it happens in the US. Google Maps successfully uses both LANDSAT and Sentinel imagery. This is the wider problem of European failure to build companies/systems on top of technology.
> 1) very stable due to rolling-release producing small changes
Having very frequent updates to bleeding-edge software versions, often requiring manual intervention, is not "stable". An Arch upgrade may, without warning, replace your config files and update software to versions incompatible with the previous ones.
That's fine if you're continuously maintaining the system, maybe even fun. But it's not stable. Other distributions are perfectly capable of updating themselves without ever requiring human intervention.
> 2) the skill barrier to getting a full system is “basic literacy, to read the wiki”
It also requires you to be comfortable with the Linux command line and to have plenty of time. My mom has basic literacy; she can't install ArchLinux.
ArchLinux is great but it's not a beginner-friendly operating system in the same way that Fedora/LinuxMint/OpenSUSE/Pop!_OS/Ubuntu/elementary OS are.
> Having very frequent updates to bleeding-edge software versions, often requiring manual intervention, is not "stable". An Arch upgrade may, without warning, replace your config files and update software to versions incompatible with the previous ones.
There were 12 manual interventions in the last year if you used all the affected software (I doubt many people are running dovecot and zabbix), so probably more like 3 for most users: https://archlinux.org/
That’s not too dissimilar from what you’d get running stable releases of Ubuntu or Windows. And of course plenty of Windows software will auto-upgrade itself in potentially undesired ways; Windows users just don’t blame the OS for that.
I don't just mean the types of manual intervention mentioned in the news. ArchLinux ships bleeding-edge software to users with very few downstream changes. ArchLinux also replaces config files when upgrading. This is inherently different behavior from stable release distributions like Ubuntu.
ArchLinux is not an operating system where you can do an unattended upgrade and forget about it. That's not "bad" or "good", that's just a design choice.
Arch replaces _unmodified_ config files when upgrading. It's not uncommon behaviour for software to update untouched defaults to the new defaults.
If you have a modified config file, it puts the new default one in a .pacnew file for you to compare, which seems strictly better than just deleting the new default one.
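And the merge workflow is mechanical enough to script. A small sketch that finds .pacnew files under /etc and shows how they differ from the configs in place, roughly what the pacdiff helper does before you merge by hand (stdlib only; needs read access to the files in question):

```python
# Find *.pacnew files under /etc and diff them against the live
# config, as a starting point for merging by hand.
import difflib
from pathlib import Path

for pacnew in Path("/etc").rglob("*.pacnew"):
    current = pacnew.with_suffix("")  # /etc/foo.conf.pacnew -> /etc/foo.conf
    if not current.exists():
        continue
    diff = difflib.unified_diff(
        current.read_text().splitlines(keepends=True),
        pacnew.read_text().splitlines(keepends=True),
        fromfile=str(current),
        tofile=str(pacnew),
    )
    print("".join(diff) or f"{current}: no differences\n")
```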
Huh you're right, I must've confused myself by removing/installing instead of upgrading recently.
Anyway, I think the discussion boils down to semantics. ArchLinux is not "unstable" in the sense that it is prone to breaking. But it also delivers none of the stability promises that stable release distros, or rolling-release distros with snapshotting and testing like OpenSUSE Tumbleweed, deliver. To call ArchLinux stable would make every distribution stable, and the word would lose all meaning.
Most distributions promise that an upgrade always results in a working system, instead moving the manual maintenance to major release upgrades.
> without warning, replace your config files and update software to versions incompatible with the previous ones.
This is just nonsense; pacman doesn't do this. If you've modified a config file, it will create a .pacnew version instead of replacing it. Otherwise you'll get the default config synced with the version of the software you've installed, which is desirable.
It's pretty rare to modify any config files outside of ~/.config these days anyway. What few modifications I have at the system level are for things like mkinitcpio, locale, etc and they never change.
> idk why Arch doesn't invest in whats standard in every other major distro
Simplicity, among other reasons. Installers force the user's hand and need maintenance. Having no installer but rather a detailed installation guide offers unlimited freedom to users. Installation isn't difficult either: you just pacstrap a root filesystem and configure the bootloader, mounts and locale.
ArchLinux does now have an installer called archinstall, but it's described more as a library than a tool. It allows you to automate the installation using profiles.
Just to paint an example, if I am installing Arch I like to have:
* A user configured through systemd-homed with luks encryption
* The limine bootloader
* snapperd from OpenSUSE with pacman hooks
* systemd-networkd and systemd-resolved
* sway with my custom ruby based bar
* A root filesystem in a btrfs subvolume, often shared across multiple disks in raid0
If you follow the installation guide, it will just tell you to consider your networking/bootloader/encryption options, and that works fine. But trying to create an installer which supports all these bleeding-edge features is futile.
Also, if you want 'Arch with sensible defaults', CachyOS is basically that. People think of it as a 'gaming distro' but that's not an accurate characterisation. I use it as a daily driver on my personal machine mostly for non-gaming work and it's an excellent distro.
There is the TUI installer now though, not like it used to be, where the commands were typed in following the wiki. Not that there was anything wrong with the 'manual' mode; it gave you insight into the basic building blocks/configurations right from the start.
It's been a very long time since I moved to Arch, but I swear that something like 12 years ago it did have some form of menu-driven installer.
Nowadays, there are so many ways to partition the drive (LVM, LUKS, either one on top of the other; ZFS with native encryption or through dm-crypt), having the EFI boot a unified kernel image directly or fiddling with some bootloader (among a plethora of options)...
One of the principal reasons why I love Arch is being able to have a say in some of these base matters, and I would hate to have to fight the installer to attain my goals. I remember when Ubuntu supported root on ZFS but the installer didn't; it was rather involved to get the install going. All it takes with Arch is to spend a few minutes reading the wiki and you're off to the races. The actual installation part is trivial.
But then again, if you have no idea what you want to do, staring at the freshly-booted install disk prompt can be daunting. Bonus points for it requiring internet for installation. I would have to look up the correct incantation to get the wifi connected on a newer PC with no wired ethernet, and I've been using the thing for a very long time.