Some of academia looks at completely novel approaches rather than increments, but the main issue is that after the budget has gone, or the PhD has been issued, the project is left dead in the water: a new project, a new budget, and a new paper to publish are required. There's no such thing as a "10 year project" in academia.
The landscape is very different from what it was in the 70s. They were working on a clean slate, before half of the human population had computer terminals in front of them. Their battle was to get the terminal in front of people, but I'd argue that the biggest accelerator to adoption has been social media, not the way GUIs behave or programming languages function. The web is now the legacy software we're stuck with if we want any impact at this scale.
Say, for example, you had a completely new idea for a general-purpose operating system which simplified things greatly, but was unlike Unix, and it had no web browser. Now what? People aren't interested - they want their web browser.
Rust is hardly groundbreaking. It's a mix of imperative/functional styles that are well known, combined with pointer ownership, which had been researched and toyed with in various forms for years before. It's a small improvement on C and C++. Making a practical implementation is praiseworthy, but I'd say this hardly counts as an invention in open source, and it's particularly unrepresentative of the FOSS community anyway because it's backed by a big company with a big research pot.
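For anyone unfamiliar with the ownership model being referred to, here's a minimal sketch in Rust (the names are mine, invented for illustration): each value has exactly one owner, assignment moves ownership, and borrowing lets a function read a value without taking it.

```rust
// Minimal sketch of Rust's ownership rules: each value has a single
// owner, and moving it invalidates the old binding at compile time.
fn measure(s: &str) -> usize {
    // Takes a shared borrow, so the caller keeps ownership.
    s.len()
}

fn main() {
    let s = String::from("hello");
    let t = s; // ownership moves from `s` to `t`
    // println!("{}", s); // compile error: use of moved value `s`

    let len = measure(&t); // borrowing leaves `t` usable afterwards
    println!("{} has length {}", t, len);
}
```

The point of the original comment stands either way: affine types and region-based memory management existed in the literature long before Rust; its contribution is packaging them into a practical compiler.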
The kind of game changers Kay is talking about are not that easily approachable. You won't be able to take your existing knowledge of language X and suddenly see how it mostly applies to Y too. If that were the case then you haven't really changed the paradigm, only given a glimpse of how it could look from the existing one.
And the reason you don't see many of these kinds of innovations in the FOSS world (although they definitely exist), is because they don't gain traction. If something is clearly new and takes significant effort to learn, very few people are going to take the time to investigate it. Meanwhile, solutions which fit well into the existing paradigm are easily accessible by masses of developers, and they flourish. This is probably one of the main reasons that real invention is rare: people are after fame, and adoption rates aren't going to go up quickly if you challenge existing conventions.
Half* of the UK's roundabouts are in Skelmersdale, and they provide little function because there are so few cars on the road in the area. They probably increase emissions, because you've got to drive that much further to get where you're going (the largest of these roundabouts is half a mile in circumference) - and with nothing but grass, woodland, and strange "art" sculptures in the middle, they're a huge waste of land.
National parks are places of outstanding beauty. I can assure you Skelmersdale is not.
By waste of land, I mean the place was built using several times more land than was needed for its relatively low population, but some clever fool with government money decided to build a bunch of small ghettos separated by large roundabouts.
Sounds miserable. I was imagining a bucolic wonderland of natural beauty, with birds nesting and ancient oaks over shady glens.
My local town is in the process of sticking 'roundabouts' everywhere. The mayor visited someplace and liked them; he's pushing them on us now. Sometimes it's just a dumb island in the middle of an intersection; you have to crank around it at microscopic speed, which benefits no one.
The worst one has two lanes halfway around and one lane the rest of the way. From the south you can take the right lane and exit north without turning (much). But nobody understands the point; everybody stops and creeps around. So they put up a map(!) so you can figure it out. And 13 arrows and warning signs. All in the interest of 'efficiency'.
Can't see where this answers "Can Android gain from KDBus?" or any hint at why we'd even try, when "things we haven't looked at: security and performance" would probably be the biggest motivators. Is there something I'm missing, or is this just another case of hopping on the bandwagon?
We've also reinvented a car to go with our reinvented wheel, but we've made it so the car is useless unless you use our wheels, and by virtue of having no other function, the wheel is also pretty useless unless you use it with our car.
But don't fret. Our car will provide you with its own desktop built into the dashboard. Our car can communicate with other cars, but only those of the same model, and even if our car doesn't have support for something you need yet, don't worry, as we'll build it directly into your car soon!
Consider if you took two copyrighted pictures and combined them in some way in Photoshop. We can lay claim to the combined work, but we may not have the original authors' consent to distribute their work.
Now consider if you trained the NNet with the same two images, such that it was highly overtrained and basically produced a combined replica of the inputs. This is essentially the same as doing it manually in Photoshop. That a computer did it does not take ownership away from the creators of the two images.
An NNet isn't trained with two images though, but millions. Do we abandon copyright because of scale? Should the NNet operator not be required to keep the entire training set so that copyright can be traced? Do we invent a whole new industry for determining the probability that a particular image was used to train an NNet (and how much it affected the output), so that its owner can claim royalties on anything the NNet produces?
The question isn't about Google versus the operator; it's about whether or not we're going to continue investing in the madness of copyright for machines designed to mimic human brains, and, if so, when it will apply to ourselves - for we can't archive our own training set.
This is an interesting question. I wonder if, due to the number of items in the training set and the minimal impact of each individual creative work, it would be considered fair use.
If your training set significantly consists of images from someone else's training set in the same domain, you might have a conflict. But for arbitrary images, it may be analogous to search engines indexing (and learning from) copyrighted material, which is generally protected.
It's a UI innovation that is found in OS X, but Apple did not invent it.
I meant my list to be more like: here are some big innovations in window-based GUI computing, which can be found in OS X. Not that these are all exclusive to OS X.
People say nothing has changed in desktop GUIs, but I remember the dark times before tabs and window managers: constant dragging, resizing, and minimizing windows just to keep track of things. Same with desktop search - constant careful folder curation just to keep files from getting lost and forgotten.