I've worked in "co-op adjacent" styles all my life. We'd typically have 3 or 4 folk as owners, but with additional employees.
The key restriction I see is the difficulty of large-group decision making. Three or four seems to work well, but beyond that it seems to flounder.
Every decision comes with upsides and downsides. So risks have to be taken, but in large groups the most common option is "status quo". You see this play out all the time in large companies with big boards and deep management. They end up stagnating because there's always the "safe" option, which means trailing behind.
Of course it can work. But it's not common.
In a true cooperative everyone gets a say. And most people are risk averse. Ultimately it tends to collapse under its own inaction as those who are accepting of more risk go elsewhere.
Electing top management is also a weak approach because it becomes very political and seldom elevates merit.
People have different goals: some want long-term job security, some want short-term rewards, and so on. The more people you add, the harder doing anything becomes.
In other words a cooperative succeeds when it has a very clear set of goals and ambitions and only accepts people who are tightly aligned with those. It's a very difficult structure to be successful at.
If that's true, it would be very sad indeed. Technical excellence is a very low bar to clear. It's so easy even AI can do that part.
When I was young, and learning my technical skills, I was naturally focused on improving those skills. At that age I defined myself by what I did, and so my self-worth was tied to my skills. And while the skills were not hard to acquire, not many people did, and they were well paid. All of which made me value them even more.
As I've grown older, though, I discovered my best parts had nothing to do with tech skills. My best parts (work-wise) were in translating those skills into a viable business, hiring the right people, and focusing my attention where it's needed (and getting out of the way where it's not). My best parts at work are my human relationships with colleagues, customers, prospects and so on.
Outside of work my technical skills mean nothing. My family and friends couldn't care less. They barely know I have skills at all, and have no idea if I'm any good or not. In that space compassion, loyalty, reliability, kindness, generosity, helpfulness, positivity, contentment and so on are far (far) more important.
I hope at my funeral people remember those things. Whether I could set up email or drive an AI will (hopefully) not even be in the top 10.
I really love your post, but I do think (and I come from an artistic background) that some skills have their own beauty, like a work of art. Love for creativity, and for what we create, has a meaning of its own. Certainly worthy of an epitaph.
It’s why overuse of AI is a bad call, imo. You skip a part of the journey. As Guy Kawasaki says, “make something meaningful”. If we are all AIs talking to each other, everything becomes meaningless; we will become a simulation of surrogates.
That said, human compassion, relating to others and everything you mentioned trumps everything else.
Sure thing, but at the same time, there's creativity and then there's work; I could creatively write things in C or assembly for the art of it, but that isn't what my employer pays me to do. I could do my job in notepad or `ed` and type every character myself, but that's inefficient.
Same goes for art (which is often what it's compared to), some part of art is creative, but the vast majority of art that people get paid salaries for is "just work"; designing a website, doing graphics work for a video game or TV production, that kinda thing.
tl;dr, AI won't replace artisans but it's a tool that can help increase productivity / reduce costs. Emphasis on can, because it's a lot more complex than "same output in less time".
Harvester is just KubeVirt with some UI atop it, the same as Red Hat Virt. Works fine if you’re hosting datacenters or whatever; I haven’t seen it be suitable in smaller manufacturing environments.
Over 60% are SUSE?! Sorry, but I’m with everyone else…
I remember that, from the start, SUSE was more popular in Europe, but no way would that be the case in the US. If anything, I’d be willing to put my money on >60% of Linux installs being RHEL/CentOS rather than SUSE.
You may be reading the number wrong. The quote stated that 60% of the companies use SUSE to power some of their workloads. So if most of those companies use SUSE to host SAP, some have a few teams using Rancher, and some (more so in Europe) run SLES, you still get to these numbers even if most of them use Red Hat for most of their workloads.
Why would they lie? Hacker News simply has this bizarre blind spot about what Fortune 500 companies do and what kinds of computers actually run Linux. One of their biggest customers is Chick-fil-A, using k3s for their point-of-sale network. I'm sure approximately zero employees interacting with the system realize that, but it's still there.
Also, from my own experience, SUSE used to have nearly all of the US geointelligence processing because of the HPC connection mentioned elsewhere with CrayOS, but that went away when DNI forced everyone onto the CIA's private AWS service, which only had RHEL AMIs available. The national labs and more niche intelligence processing that can't run in the kinds of machines AWS provides still make heavy use of it.
Imagine I had a pill today that absolutely cured cancer. It would take years of clinical trials, testing on animals and humans, exploration of side effects, working out which cancers it fixes, and so on. And that's before we talk about production at scale.
The AI idea is the really fast part of the cycle, but it's a tiny part of the process.
While the post seems simple, it's arguably complex, as the comments here point out.
Simple solutions are good enough some of the time, perhaps even most of the time, but often fall down with edge cases. But edge cases add up, and dealing with them is complicated.
For example, calculating pay for hourly paid workers is a "simple" problem. Subtract the start time from the end time and multiply by the rate. That covers 90% of the workforce.
But the other 10% take much more work: the team that rotates an on-call worker (which earns an allowance), who then gets a call-out (first hour is free, the rest at double time, etc.).
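To make that concrete, here's a minimal sketch in Python. The allowance amount and the double-time multiplier are invented for illustration, not taken from any real pay agreement:

```python
from datetime import datetime

def simple_pay(start: str, end: str, rate: float) -> float:
    """The 90% case: hours worked times hourly rate."""
    hours = (datetime.fromisoformat(end)
             - datetime.fromisoformat(start)).total_seconds() / 3600
    return hours * rate

def on_call_pay(call_hours: float, rate: float, allowance: float = 50.0) -> float:
    """One flavor of the other 10%: a flat on-call allowance, with the
    first hour of a call-out free and the remainder at double time.
    All the numbers here are hypothetical."""
    billable = max(call_hours - 1.0, 0.0)     # first hour is free
    return allowance + billable * rate * 2.0  # the rest at double time

# e.g. simple_pay("2024-01-01T09:00", "2024-01-01T17:00", 20.0) -> 160.0
```

Notice how the second function already needs policy decisions (how big is the allowance? does the free hour apply per call or per shift?) that the first never raised.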
So it is with software. Adding 2 numbers is trivial. But what about overflows? What about underflows? What if one number is infinity? What if it's i?
The simple solution is "just add, ignore edge cases". The complex solution handles the edge cases. Which is better in the long run?
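A hedged sketch of what "just add two floats" becomes once the edge cases are made explicit. The choice to raise rather than silently propagate NaN or infinity is mine, not a universal rule:

```python
import math

def careful_add(a: float, b: float) -> float:
    """Add two floats, surfacing the edge cases instead of ignoring them."""
    if math.isnan(a) or math.isnan(b):
        raise ValueError("refusing to add NaN")
    total = a + b
    # Overflow: two finite floats can still sum to infinity in IEEE 754,
    # e.g. 1.7e308 + 1.7e308.
    if math.isinf(total) and not (math.isinf(a) or math.isinf(b)):
        raise OverflowError("sum overflowed to infinity")
    return total
```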
I think it's important to understand the domain to know if those edge cases are likely to happen. If no one on payroll is ever on call, then no need to design for that. Solution works as intended. If it turns out we need a more robust calculator later, then we can design for that. But adding that complexity before the domain requires it seems unnecessary to me.
But also, just because there is complexity in the domain doesn't mean there needs to be complexity in the software. An elegant, simple solution could be implemented for calculating payroll or adding infinity. That's the hard part though.
On the one hand the legislation seems unimplementable for many OS makers, not just FOSS ones.
(The issue of "primary owner of the device" being the most problematic.)
Equally the concept of an "app store" is different for different OSes. iOS and Android are clear. Mac and Windows are mostly "download and run from website" (although both want to pivot to app stores, with varying degrees of success).
Then we need to wonder if yum and apt are stores, given that they aren't actually owned by "linux".
In truth though it kinda doesn't matter. It's trivial to add an "age" field to account creation. It's trivial for users to enter any date they like. So it's easy for OS makers to comply, and just as easy for users to lie.
Presumably if the law could have mandated age checks it would have, so I'm not even sure this is a slippery slope. Most minors don't have photo ID. Most desktop hardware doesn't have a camera (at the time of account creation).
This feels like performative law-making. Vague language. Unenforceable user participation.
> Then we need to wonder if yum and apt are stores
IMO this is quite simple - as they provide software, they are "stores" too. Although I think most would associate a store with, e.g., the MS Store, the Apple App Store and so forth.
The word "store" is weird though. Would it not be easier to use different words? Anyone providing software for download; and perhaps add a size threshold to stop pestering small business or solo users. This really seems to target Linux here.
I’m not defending this law, just discussing the wording.
First, either this law, or another already on the books, or established case law, defines what an app store is. Sovereign citizens get hung up on legal wordplay because they mistake legal jargon for English. It’s not, any more than I move a small furry mammal (mouse) to click religious imagery (icons) on my desktop (not a desktop).
But second, if you really want to wordsmith it, “store” can mean “place where you keep stuff”, not only “place to buy things from”, as in short for storage. Where do you save work documents? A file store. That’s not where you buy docs, but where you keep them. A crafty DA could probably say, lacking a definition otherwise, that an app store is where you store apps, and buying them is incidental. And they’d probably win over you and me arguing otherwise, because they can speak legal to the judge and we can’t.
As the user interface through which users download (among other things) apps... it absolutely is an "app store". It's not where the binaries are hosted, but you don't see anyone claiming the App Store iOS app isn't an app store because the apps are ackshyually on Apple's CDN servers, do you?
Ok. Take something like GNOME Software or KDE Discover, where you can add different sources. You can point to both your distro's repos AND Flathub. Is the store the app? Is it Flathub? There are different sources depending on which distro you use, and users can tweak them.
The Apple App Store is a disingenuous example because it's a proprietary app hard-coded to use Apple sources; you can't tweak the sources. Apt or yum are no more app stores than curl or git.
The app is a store, and Flathub is also a store if it allows downloading and installing the packages directly.
Apple's App Store is a perfect example because there is no difference between stores with private sources and stores with open sources for the purposes of whether or not it is a store.
I feel like there's no market for this. Or at the very least, it's an incredibly niche market. Hear me out :)
Firstly, this is a pretty expensive approach to cooking, so you're already in a narrow space (people with money). There are a lot of products in this space already, from restaurants to (high quality) frozen meals etc.
(Aside - meals for 4 implies kids, and people with kids tend to have financial boundaries.)
Second, you want health food. So again, more niche inside your niche. My instinct (since I've done no actual market research) is that people who are into health food aren't great candidates for subscriptions. They're into provenance, minimizing waste, and so on.
Thirdly, you're asking people to cook the meal. (Which I applaud, cooking meals is a good thing.) But cooking is easy. Once you've learned 10 dishes it's easy to learn another one. And you tweak to suit your taste. At which point your "service" is little more than grocery delivery.
Prepping food actually takes minimal time (in the real world). To a novice, getting an onion peeled or chopped might seem a time saver, but in reality it takes seconds to do manually. Honestly, the time spent ordering and so on would dwarf any savings here.
Lastly, there's little incentive for people to remain on this service long-term. Maybe it's useful in the learning phase, but it's a quick cut as skills develop. So: a short-term, high-turnover customer base.
Obviously it could be done. In a rich neighborhood you might even get enough customers to keep a delivery guy busy. But outside of SF I don't see it catching on.
But don't worry about what I think; I've been wrong many times. All I suggest is lots of market research first (perhaps outside California).
It's the other way around. It won't work in expensive places like SF or NYC, because the people who move there are happy to pat paprika on a chicken and toss it into an air fryer. Or live on rice and beans.
Here there's some cultural pressure on people to cook for their families. Especially Asian moms. Someone feeding their kids McDonald's and frozen meals is going to be judged, but what's a parent who works 996 supposed to do? Though back when I was in uni, we'd take turns preparing dinner; I can see it working for students as well.
Cooking is also almost always cheaper than buying off the shelf. Yet we buy the frozen spinach, despite it being 3x the cost of fresh spinach, because we don't want to wash the sand out of the spinach and then wash the sand out of the sink.
My problem is we have a bunch of stuff in the fridge. Prep is less about time, more about energy. I'm hungry, I don't want to peel carrots. We buy large batches of fish over the weekend and clean them up and prepare them, but they're in the sink 4 days a week and 2 of those days we just end up eating outside.
I think everyone wants healthy food as a core meal - something other than pasta with butter; the health ministry suggests dishes of 1/4 carbs, 1/4 protein and 1/2 veggies. But what choices do you have here? Nobody wants to retrofit their usual dishes to have fewer carbs and more veggies, lol. Cutting down on protein also cuts cost. Tim Ferriss says to eat the same thing every other day, but most people don't do this, especially not kids. There are plenty of healthy options: grilled veggies taste great, and so do things like minestrone. Burgers can pass the health-food bar too.
Of course they absolutely can license their software any way they like. That is their prerogative. Personally I write software for a living and it's all licensed with a commercial license.
By definition then I am not an Open Source Programmer. (I work on the odd Open Source project, but that's not the same thing.)
So any programmer can license things any way they like. If their license is Open Source compatible then they're free to call themselves an Open Source programmer.
Your suggestion is equivalent to asking why an amateur golfer can receive prize money in a professional tournament. They very much can, but then they're no longer an amateur.
>> The prompt starts at the first field and <RETURN> (not <TAB> !) moves to the next.
This is hilarious to me, because times have certainly changed.
When we first started shipping Windows software the big complaint from users was the use of Tab to switch fields, while Return triggered the default button (usually Save or Close).
The change, for users used to DOS, was painful - not least when capturing numbers, as the numeric keypad has Enter, not Tab.
Software developers either stood firm, convincing customers to learn Tab, or caved and aliased the Enter key to the Tab key. Even today I still find that option here and there in software that's been around a while...
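For anyone curious what "caving" looks like, here's a minimal sketch of the technique in Python/Tk - not what any vendor actually shipped, just the general idea of rebinding both Enter keys to move focus instead of firing the default button:

```python
import tkinter as tk

def enter_as_tab(event):
    event.widget.tk_focusNext().focus_set()  # move focus, like Tab would
    return "break"                           # swallow the default action

root = tk.Tk()
for label in ("Qty", "Price", "Total"):
    tk.Label(root, text=label).pack()
    entry = tk.Entry(root)
    entry.pack()
    # Bind the main Return key and the keypad Enter key separately,
    # since (as noted below) they are distinct keys.
    entry.bind("<Return>", enter_as_tab)
    entry.bind("<KP_Enter>", enter_as_tab)

root.mainloop()
```

Returning "break" from the handler is what suppresses the default behavior; without it, Enter would still trigger the form's default button.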
Author here, and thanks for reading. I'm glad to hear stories from a developer POV about those days. It's interesting uncovering subtle interface changes as I investigate various applications. It makes sense to me to not use Return for fields, especially when fields could gradually accommodate longer and longer blocks of text. Being able to naturally type multiple paragraphs, say for a "Notes" field in a database, would make sense.
Yes, it makes sense when viewed like that, and was probably a necessary change.
DOS chose Enter, though, because in those days most data capture was numbers. Lots and lots of numbers. Data capturers could track the left hand down the column (keeping their place on the paper) and type with the right. Enter is right there on the keypad, so only one hand was needed.
Switching to Tab means 2 hands needed on the keyboard, so difficult to keep track on the paper.
Typically also, on DOS screens there was very little multi-line entry. Addresses were multiple entry fields, and so on. Tab was pretty much not used (outside of word processing).
If I went back now, to design the standard keyboard, I'd add dedicated "Next" and "Previous" buttons on the numeric keypad. No need for Enter there.
> not least when capturing numbers as the numeric key pad has Enter not Tab.
And “Enter” isn’t “Return”.
I don’t know how the PC and PC software did it, but the Mac, when it got a numeric keypad, discriminated between return (on the alphanumeric keyboard) and enter (on the keypad), and software did discriminate between the two.
We had a contractor write a replacement for some green-screen software that we ran for years. The replacement was of course a web interface, written in PHP, and nicely themed and all that was great in 2005.
We kept running into all kinds of weird issues when importing data back into the legacy system. Of course, after we started looking into it, I narrowed all the issues down to the same two users.
I don't remember exactly what it was, but users would hit a certain key on the keyboard at the end of every field, before they used the mouse to click on the next field and enter more data. This resulted in an undesired character at the end of every field!
I realized exactly what was going on as I watched a person fill out the form and submit it.
"Why are you doing that!"
"Doing what?"
"Hitting the space bar (or whatever key it was) every time you fill out a field!"
Of course, in the old system you had to hit that key to save each field as you entered it.
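The server-side fix, assuming it really was just a stray trailing keystroke, could be as small as this (a hypothetical sanitizer; the actual offending character isn't recorded above):

```python
def clean_field(value: str) -> str:
    """Strip stray trailing keystrokes before importing into the legacy system.
    Which characters to strip is an assumption; adjust to the actual culprit."""
    return value.rstrip(" \t\r\n")
```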