It seems like what Europe really needs for this is a viable mobile OS. It's been true for a while that Linux + LibreOffice can handle most government workers' needs on the desktop, but that only helps while they're at their desks. Are there any viable alternatives to iOS and Android that are totally free of "dépendances extra-européennes" (extra-European dependencies)? What's the plan?
The Finns, as always, continue to develop mobile phones. Jolla is back from the dead and supposedly starts shipping sometime in 2026 with a new iteration of the hardware and the OS; time will tell whether it'll have any impact.
It might not be 100% Europe-made from the get-go, but good ideas and executions often start with small steps and iterate, rather than being groundbreaking out of the gate.
> I'm not convinced that replacing one proprietary OS with another is the solution.
Someone correct me if I'm wrong, as I'm not super familiar with Jolla's/Sailfish's architecture, but isn't most of the OS actually FOSS, with a thin proprietary compatibility layer, and that's about it? It was some months ago that I last read about it, so I could be misremembering, but it seems like a good first step at the very least.
> Consumers don't care if the OS is proprietary, as long as it works
I agree entirely (and they don't even care whether there's a trustworthy party behind it; just look at how many people happily use Google).
And this is exactly the mentality that's gotten us where we are. Consumers don't care about these things, and then they end up locked into vendor ecosystems like the one the OP is describing here.
Linux on Mobile has been progressing steadily in recent years, and is in a state suitable for very early adopters and tech enthusiasts. Definitely not for the general population IMHO.
FWIW, it's not just the EU that needs this urgently: most of humanity sorely needs a trustworthy mobile OS that's not designed against their interests.
Linux on the desktop has been progressing for many, many years... and a lot of stuff still doesn't work out of the box.
I've recently had some fun at the intersection of "moving windows between screens" vs. "UI scaling" vs. "the ambient system is Wayland but the snap uses X11 internally".
Multiple displays with different scales have worked fine since at least 2017 (which is when I started using sway, and precisely for this reason).
OTOH, I know that recent versions of GNOME struggle with this. Just last year I saw plenty of situations where moving windows across displays triggered all kinds of quirks. This is a GNOME-specific issue, and like most of its issues, it doesn't affect other compositors.
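For anyone curious, per-output scaling in sway is just a couple of config lines. (The output names below are examples; yours will differ, and `swaymsg -t get_outputs` lists them.)

```
# ~/.config/sway/config
# Laptop panel at 2x (HiDPI), external monitor at 1x.
output eDP-1 scale 2
output DP-1 scale 1
```

Windows dragged between the two outputs get rescaled by the compositor; it's Wayland clients' job to render at the right scale, which is why well-behaved native apps work and some X11-in-a-snap setups don't.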
A big hurdle to this is hardware vendors locking bootloaders and making it impossible (or impractical) to write or use existing drivers.
Manufacturers maintain long-running forks of Android (often on very old Linux kernels) with their drivers hidden in their fork's source.
I'm a firm believer in the right to repair software, and the fact that it's illegal to reverse engineer binary blob drivers (or proprietary software at all) is a shame (not that you could even untangle a driver from a binary blob of a Linux fork). I'd go so far as to say drivers should be open source, and if they aren't, manufacturers should provide documentation sufficient for the community to write them.
>the fact that it's illegal to reverse engineer binary blob drivers (or proprietary software at all) is a shame
Where? I don't think it's illegal in the US at least. The only things I'm aware of that may have legal issues are related to radios, specifically modem/baseband stuff, and maybe WLAN cards.
Android Open Source is good enough. The tough part is the device-specific drivers that never make it upstream and are eventually abandoned by the vendor, making upgrades past specific kernel versions very troublesome.
Why not? GrapheneOS and others show that it is possible to make viable operating systems on top of AOSP, which also have their own useful extensions.
It seems like a waste not to use an existing, well-developed, hardened, open source base, that at the same time provides great compatibility with most existing apps.
Since it is open source, it would always be possible to fork if AOSP goes off the rails.
I think the primary issue is that it is currently hard to get embargoed security patches, unless you have some partnership with an OEM.
At the same time it is an open source product and can therefore be forked. Being controlled by Google presents not nearly such an issue as Microsoft products or the Apple ecosystem.
I often venerate antiques and ancient things by thinking about how they were made. You can look at a 1000-year-old castle and think: This incredible thing was built with mules and craftsmen. Or look at a gorgeous, still-ticking 100-year-old watch and think: This was hand-assembled by an artist. Soon I'll look at something like the pre-2023 Linux kernel or Firefox and think: This was written entirely by people.
At least with physical works (for now, anyway), the methods the artisans employ leave tell-tale signs attesting to the manner of construction, so that someone at least has the choice of going the "hand made" route, and others, even lay people without special tooling, can tell that it indeed was hand made.
Fully AI generated code has similar artifacts. You can spot them pretty easily after a bit. Of course it doesn't really matter for the business goals, as long as it works correctly. Just like 99% of people don't care if their clothing was machine made vs. handmade. It's going to be a tiny minority that care about handmade software.
The modal person just trying to get their job done wasn't a software artisan; they were cutting and pasting from Stack Overflow, using textbook code verbatim, and using free and open-source code in ways that would likely violate the letter and spirit of the license.
If you were using technology or concepts that weren't either foundational or ossified, you found yourself doing development through blog posts. Now, you can at least have a stochastic parrot that has read the entire code and documentation and can talk to it.
Judges, particularly appellate judges, spend a lot of their time reading briefs. So, as you can see, some of them have strong opinions about brief typography. (Judges, as a group, have strong opinions about lots of things).
"The briefs, opinions of the district courts, essential parts of the appendices, and other required reading add up to about 1,000 pages per argument session. Reading that much is a chore; remembering it is even harder."
That is a lot of reading. Depending on how long an 'argument session' is, retaining the detail must be a challenge.
Indeed. I think what I'm imagining is something like Typst for courts and lawyers.
Imagine if, nationwide, we lawyers could draft in plain text and never (or rarely) have to worry about court-specific typesetting rules or wrestling with Word!
I do a lot of long-form writing. I switched from typing first drafts to writing them by hand and saw improvements in my creativity, organization, and general quality of writing, and I also found that I finished my work faster. Then I switched from ballpoints to a fountain pen and saw what all the fuss was about. People who write a lot appreciate how smooth fountain pens are, how well-designed fountain pens have great ergonomics that reduce hand fatigue, and how the ability to pick your ink lets you fine-tune your writing experience even more. Yes, they can be impractical, and I can absolutely see how disposable ballpoints and rollerballs eventually won the market competition (in the US at least), but modern fountain pens, for some use cases, deserve a try.
I have a few fountain pens; some were not cheap. The one I use the most is my Lamy Safari. Taking an expensive fountain pen outside of the house, where I may lose it, kinda creeps me out. That said, the Lamy is a solid, not-that-expensive fountain pen (~$30).
Another Lamy Safari fan here. It’s my preferred pen amongst the others in my collection. Though, I just discovered the Lamy CP1, which feels like a sleeker version of the Safari. I’m giving the CP1 an honest go right now.
You will not be disappointed! I've used a black Lamy CP1 as my "daily driver" for the last, I don't know, 35 years? I got it in school as a present and it is lying here, next to me, right now. The varnish has rubbed off at the sharper edges and the metal shows. I changed the nib about 20 years ago, at the end of my academic career. Somewhere in the 2010s the small ring in the cap broke, so it no longer stayed closed, and I had to replace it. I started out with black ink (uh, edgy) but changed to blue later. I love Pelikan's 4001 Royal Blue; it is so smooth.
I also have a few other fountain pens, among them a Montblanc M146. It is crap for writing and I only bring it to put it on the table in certain meetings -- the same ones where I actually wear my Speedmaster ;)
I also think the Safari is just a very cool looking pen. I haven't really gotten into writing with fountain pens much as of yet, but I saw a Safari in a store and bought it based on looks alone.
The smoothness is night and day for me; seconds after picking up a ballpoint pen I want to reach for my FP, because with ballpoints you have to apply pressure, and even then the flow doesn’t match a fountain pen.
This is, arguably, totally fine, because these are still valid programs that run (and run quickly). BUT, it makes the benchmark programs poor choices to compare the verbosity of languages. So statements like "For a language famed for its terseness, Haskell it turns out, isn’t as terse as expected" can't be supported when comparing benchmark programs that were written to maximize speed, rather than written to minimize developer time.
Fortunately, the Benchmark Game does publish all of its programs, including the ones that don't "win" the speed race, and it's possible to find nice, concise, idiomatic Haskell programs in there.
Yes, that is what I mean. By the way, on the benchmark game website, is it still possible to sort benchmark results not by speed, but by gzip'd source code size?
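If it helps, that metric is easy to reproduce locally. A rough sketch (the site gzips a minified copy of the source and counts the compressed bytes; `fib.hs` below is just a stand-in file I'm creating for the example, not one of the actual benchmark programs):

```shell
# Write a tiny stand-in Haskell source file.
printf 'main :: IO ()\nmain = print (sum [1..100])\n' > fib.hs

# The Benchmark Game's "code size" is essentially this:
# gzip the (minified) source and count the compressed bytes.
gzip -9 -c fib.hs | wc -c
```

Comparing that number across the submitted programs is what the site's size column shows, which is why comment-stripped, whitespace-squeezed submissions score better than idiomatic ones.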
In 1983 “learning how computers work” was a huge goal for many buyers. Parents bought computers so that kids could learn to use them, which often meant learning to program. (BTW, thank you, mom and dad).
Bar passage rates are falling because law schools have become increasingly theoretical. Many law professors have never had actual clients who pay them for legal work, or have worked as lawyers in the real world for less than five years. These profs are producing interesting academic scholarship, but they are increasingly isolated from the legal profession. Law school classes, perhaps as a result, tend not to teach what bar exams test: the elements of crimes and torts, how many witnesses are required for a will to be valid, the elements of a valid contract, and so on. These are exactly the things that Californians need their lawyers to know. If too many people are failing the bar exam, the problem isn't the exam, but the prep.
Lol, no, the bar exam is theory too. Average LSAT scores for applicants have dropped because smarter people caught on and realized law school is one of the biggest scams out there today.
As noted in the article, the exams are largely identical; only the required score is different. Having once attended a beautiful law lecture on the similarities of American Indian law and the law of the high seas, I can state with some certainty that the Ivy League law schools aren't really a hotbed of teaching the daily grind of local court.
No, that's not right. The bar exam tests what you learned during BARBRI.
Law school is for learning how to do good legal research, analysis, and writing, with some exposure to a selection of substantive areas of law thrown in for good measure. At the top schools, there's also a strong focus on understanding the law through historical, philosophical, and policy-oriented contexts. Memorizing "black letter law" is not an objective. In fact, at most final exams, you can bring in a stack of study guides and reference books to look up what you don't know.
No, but a surprising number of Computer Science degrees are mislabeled. Plenty of CS programs don't require that their students take even a single class on computer science.
Probably because it's pedantic. At most schools, if you want to major in computer stuff, you major in some sort of computer science major. In some universities, it's in the engineering school. In others, it's associated with other science programs, math in particular. There are various historic reasons for this.
People who then go on to build software systems are hopefully applying at least some engineering principles. Software arguably often doesn't follow engineering principles as well-established as those of some other types of engineering. But arguing there's some great separation between computer science and engineering really is pedantic.
Is it pedantic? I think Computer Science should be taught by Computer Scientists, which is a different set of people than Software Engineers. I say this AS a Software Engineer.
Every branch of engineering and science has more theoretical and more practical aspects. Look at physics. The people who do the theory and those who design the detectors have pretty different jobs.
And universities, at least the more elite ones, always tend toward the theoretical side in all fields. Which is mostly a good thing so long as it doesn't over-rotate. Whatever language you learn is probably going to be yesterday's news in 10 years. (Unless it's COBOL :-))
[And I say this as an engineer with degrees in non-SW fields.]
It is pedantic because at some schools the degree program for people whose intention is to write software in exchange for money is "computer science" while at other schools it's "software engineering" and at yet other schools it's almost certain to be something else. Which means that arguing about which is the theoretically "correct" term has no relevance, since the names are used so interchangeably by the schools themselves.
> At most schools, if you want to major in computer stuff, you major in some sort of computer science major.
That was definitely true in the 90s, but I don't think it's been the case for years now. Most schools have degrees like Information Systems, where most of the people end up who are just generally interested in computers and/or find CS too difficult. Hell, I work with a guy right now whose degree (from a state university) is in "computing", whatever that means.
In my opinion, it's just low-hanging fruit some people like to hit. It's pretty clear what the GP comment meant. Comments like that make HN worse (IMO) than the pun threads on the Juicero article earlier today (https://news.ycombinator.com/item?id=14771084).
What I appreciate about books like this is that they assume the reader is already familiar with programming, and so the book cuts to the chase and explains why this language is unique and how it is different from the languages the reader probably already knows. The author was smart enough to know that very few students of F# will have picked F# as their very first introduction to programming, so this makes a lot of sense. I get frustrated with programming language tutorials that take the opposite approach and expend multiple screenfuls carefully explaining what a variable is.
F* yeah. Lots of tutorials and books take the "do as I say, write everything I ask you to, I know more than you, noob" approach, especially those made by the author of "learn * the hard way".
Not for this stuff. The EFF has no lobbyists in Washington, DC; in fact, it doesn't even have an office there. What they do, they do well: litigating in court. But don't give them money to influence Congress.