Little-endian is generally better for compilers, to the extent that it matters. For example, with SIMD, it makes most sense to number the lanes in the same direction as addresses in memory, so that lane 0 is at offset 0 when doing a load or store. And if you want to reinterpret the bits of a SIMD value as a different type, little-endian is the only sane way to do it. And so on.
I've become fond of the idea of little-endian becoming known as a universal "CPU byte order", to complement "network byte order". Each order makes the most sense for its domain.
I take a pragmatic approach. I'm cool with the BSD license for most user-level application code, because if someone makes a binary-only application, I usually have alternatives to choose from. But in kernel space, a binary-only driver or kernel means that open source of any kind is locked out of the hardware, because it's hard to write drivers without specs. In that space, the GPL is one of our few tools for getting device makers to give us what we need.
Has that really worked, though? It seems like, in the case of things like video cards, the GPL means that we have the full-performance, 3d accelerated, proprietary and closed driver, which is the one you're going to use unless you have a moral stance against it, and the objectively inferior (slower, at least) Libre version of the same thing to talk to the same hardware.
What if a working computer is your primary concern, not the political status of the code?
If you care about more than the short term, such "political status of the code" should matter to you. The purpose of the GPL is not (or should not be) to let you say "my code is holier than yours", nor necessarily to have the best working implementation.
No, its main purpose is guaranteeing that all users can understand how the device works, even if they don't belong to the company that builds it. Other FLOSS licenses don't guarantee that in the same way that the GPL does, as they allow modifications to be kept secret. We could say that the GPL is "knowledge-friendly".
In the case of video cards, the alternative would be having only the proprietary closed driver and no open source version. It's incredibly hard to learn how such closed systems work by reverse-engineering them; an open source driver, even if limited and less polished, amounts to a working specification of the device.
Then you are able to use the proprietary driver so long as you agree to its licensing terms.
I suppose you're arguing that an OSS driver licensed under more permissive terms would encourage GPU manufacturers to contribute more back to the original driver. Maybe it would and maybe it wouldn't. It certainly wouldn't stop them from keeping their changes to themselves.
Somehow I think that if they wanted parts of their drivers to be OSS they would have licensed them that way in the first place.
"Share" does not imply they are sharing freely with everyone.
It does mean that if you crack open any product built around Allwinner (or other similar SoCs), you'll find PCBs that are all close variations of each other, and if you look at the code accompanying them, you'll find plenty of "sharing" going on, even if they're not sharing the source with us.
Normal division by zero gives you Infinity. To get NaN, you have to do something as numerically confounding as divide zero by zero, which isn't any infinity, because the numerator is zero, and which isn't zero or any finite number, because the denominator is zero.
While most people don't hack kernel code themselves, there is still value in promoting open source operating systems. A future in which Windows becomes popular on mobile/embedded devices is likely a future with more binary blobs, more OS-locked hardware, and fewer opportunities for those people who do want to hack on kernel code.
> A future in which Windows becomes popular on mobile/embedded devices is likely a future with more binary blobs, more OS-locked hardware, and fewer opportunities for those people who do want to hack on kernel code.
I just can't agree with that. F/OSS fan that I am, I still think the same opportunities will exist whether Windows runs on a particular piece of hardware or not. After all, GNU/Linux and BSD have flourished despite Windows running on the same hardware for the past 30 years. How is this any different? I see it as opening up new channels for learning, not closing them off.
To look at it another way, this is similar to Microsoft being late to the party on modern smartphones. The existence of Windows Phone 7 and 8 has done absolutely nothing to slow down the explosion of popularity in Android and iOS phones.
I don't foresee a huge percentage of Raspberry Pi owners jumping ship to Windows 10; maybe dabbling with it but that's it. As far as we know, GNU/Linux will continue to offer a superior learning experience.
I didn't see anything in the parent's comment about GNU/Linux or FreeBSD on desktop PCs, nor was I talking about it on desktop PCs. The Raspberry Pi has always been meant as a learning device, a maker's toolkit, a base for building something other than a boring desktop PC. The fact that it now has the power to be a desktop PC doesn't mean that's all it will ever be.
One definition (the one I was using) of "flourish" is "to develop rapidly and successfully". I stand by that description; GNU/Linux and the BSDs have seen steady improvement and rapid development surges over the past few decades. If you're claiming otherwise, I'd say look at the server statistics across the Internet. Look at mobile phone operating systems. Look anywhere but the desktop, and you'll see Free and Open Source solutions far outstripping any offering from Microsoft.
I say, welcome to the party Microsoft. Again, one more channel for learning should be welcomed, not shunned because of old prejudices.
What? These are not "old prejudices", Windows is still proprietary software and that's still harmful to user freedom. This is not just "one more channel for learning", it's a dying proprietary platform trying to piggyback on the success of a popular open-source platform.
Given that Microsoft is steadily becoming more open-source friendly and I'm seeing them being ridiculed for it across the web, I'd say a lot of old prejudice is at play.
Besides, Google's version of Android is also proprietary software that is not only harmful to user freedom, it's also more and more privacy-hostile every day. Yet Google is still held up as the pinnacle of open-source-friendly companies in some circles. Old prejudices, again.
I distinguish between proprietary apps and proprietary kernel. The open source world can build its own apps. Mostly. However, it can't build its own kernel if the hardware is undocumented and/or only supported by OS-specific binary blobs. Drivers are key to a lot of things.
MS is a lot more open-source friendly, and smarter, but it's pretty clear that this is a strategy to get more people tied into ecosystems which favor proprietary Windows. MS is behind, so they're playing catch-up.
I don't disagree, and I'll add that the Raspberry Pi is still encumbered by a binary blob that is required to boot, no matter the actual OS that is loaded after. I for one would absolutely love to see a 100% fully documented, Free and Open Source development board, from the board layout, to the CPU/GPU design, the boot code, and all kernel and userland software.
Unfortunately there is no real market interest for such a device, so I'll take what I can get.
While I don't run Windows in my house (except in a VM for work), I make a living teaching a product that does run on Windows, and I'm not an Apple (or Microsoft) apologist.
And I humbly submit that Windows is far from a dying platform. Less popular, sure, the numbers bear that out, but popularity isn't a measure of longevity. If that were the case, OS X would be dead, along with most Unix-like OSs.
Yes, in some circumstances that's true. For example, I believe a GNU codec library was recently released under a permissive license, because there are plenty of proprietary video codec libraries, and for a free format to win, a permissively-licensed library is useful.
But Stallman put the GCC under the GPLv3.
So I don't think your comments re: papacy are terribly on-point.
The argument is that self-driving cars would reuse existing roads, fuel, service, and manufacturing infrastructures that have been developed for existing cars. And, adoption can be incremental. Start with cars which drive themselves highway only, and require a capable driver to be at the wheel at all times. That may be soon. Gradually add more use cases at the speed of engineering, regulation, and social acceptance.
Some of the hard things to change in the world are hard because they require many people to change their behavior in coordination. Cars to self-driving cars doesn't. Email to Email with innovative clients doesn't. Introducing a new communication protocol does (though of course it's not impossible). Of course, this is only one aspect of a complex world.
Let's consider just one aspect of this that I see as a show-stopper out of the gate: networks.
Driverless cars, even in a narrowly adopted context like a single highway, will need to be networked. Not only will they need to be networked while they're on that highway; for cars in close proximity, it needs to be a network with 99.999999999% uptime. At a minimum, the cars will need a transponder with nearly zero bandwidth (like planes); more likely, this connection will need to send more data than just position. GPS isn't nearly accurate enough unless cars can be spaced hundreds of feet apart.
Do you know of any networking technologies that you would trust with your life?
Compare driving on a highway to being in a plane. In a plane, there are usually minutes to respond following a worst-case equipment failure or total loss of communication. In a car, there are seconds or less.
Nobody suggests driverless cars work open-loop from GPS coordinates; no current driverless car works that way. It's generally a combination of GPS, local sensors including sonar, radar, and cameras, and car-to-car networks. The UofIowa Driving Simulator has done studies of trains of cars linked by short-range networks, where they coordinate braking and acceleration to achieve inter-car distances of a few feet.
Sure, in principle; my point is that even the best networks fail way too much for this to be practical. In the context of this particular thread, I'm saying that implementing driverless cars even in a narrow context is a non-trivial infrastructure upgrade, whatever form it takes. These networks will need to have reliabilities and uptimes on par with medical or nuclear safety systems.
We don't need any networks. Self driving cars will always need to be able to share the road with normal cars. That mechanism will support the case of sharing the road with other self driving cars too. Networks can add fancy features, but they can happen on their own time, incrementally.
A SIGABRT (or similar) approach is similarly tempting: no checking for errors, no complicated control paths. If Unix systems let processes register (and unregister) files to be automatically deleted on abnormal exit, it'd be pretty convenient.
The generalized form of this would take arbitrary cleanup actions, not just file deletions, and would look a lot like atexit(). In fact, one could install a SIGABRT handler that would (a) deregister itself and (b) call exit() to run the atexit handlers.
Of course, if the reason for the abort was memory scrambling that destroyed the registry of cleanup actions, things get messy, which is why it needs to deregister the signal handler.
I'm not keen on having to hijack the signal handler, nor of the global variables needed for the atexit (or equivalent) handler registry.
All things considered, I'd rather language + compiler support for unwinding with user settable cleanup handlers, a/k/a real exceptions.