2) any decent software book that teaches you to separate the presentation layer and platform layer from the rest of the program (that way, you have much less to port between platforms, and maintaining it is not as costly as people fear)
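To make that concrete, here's a minimal sketch of what that separation can look like (Python, with invented names like PlatformUI and run_app; not taken from any particular book):

    # Minimal sketch: the core logic depends only on an abstract UI/platform
    # interface, so only the concrete implementations need porting.
    from abc import ABC, abstractmethod


    class PlatformUI(ABC):
        """Everything platform-specific lives behind this interface."""

        @abstractmethod
        def show(self, text: str) -> None: ...

        @abstractmethod
        def read_input(self) -> str: ...


    class ConsoleUI(PlatformUI):
        """One concrete platform layer; a GTK/Win32/Cocoa one would be another."""

        def show(self, text: str) -> None:
            print(text)

        def read_input(self) -> str:
            return input("> ")


    def run_app(ui: PlatformUI) -> None:
        """The portable core: no platform calls, only the interface."""
        ui.show("Type something, I'll echo it back.")
        ui.show(f"You said: {ui.read_input()}")


    if __name__ == "__main__":
        run_app(ConsoleUI())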
EDIT: my purpose in this line of questioning is to assert that if you are trying to persuade someone not to do a thing, you will be more effective if you can give them a straightforward alternative.
I likewise think that the folks trying to get people to stop writing python2 should pick a release of python3 to become an LTS release in the same way that in 2014 python2.7 was effectively declared an LTS release with support until 2020.
EDIT to parent's edit:
I'm not really trying to persuade anyone; not anymore. I understand the incentives that push people toward wasteful solutions. This won't stop until we hit a resource limit. All I'm saying is that there's a lot more we can do with current hardware once Moore's law is definitely dead. That mostly untapped potential depends on people not doing extremely wasteful things just to shave off a little development time.
Skia, for example, is not a GUI library.
CEF is not a GUI library either; it's a WebView.
And there should be at least a column indicating whether the library supports typical desktop applications and/or full-screen multimedia uses (such as games, or a movie player with an OSD, custom graphical design and elements, and textures).
And the GUI debate is really about "where are the abstraction libraries that compile to native apps"; after all, anything else is just for prototyping a "native app". And usually people just answer that Qt is nice enough, yet everyone uses Electron :/
Please, people, for the love of those whom your customer may hire after you, take the time to understand signals and slots before porting that godawful mess of garbage you got from your customer's previous contractor to Qt.
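For anyone wondering what that means in practice, here is a tiny sketch using PyQt5 (the mechanism is the same in C++ Qt): a signal declared on a QObject, emitted by the sender, and connected to whatever should react. The Counter class is just an invented example:

    # Minimal signals-and-slots sketch using PyQt5 (pip install pyqt5).
    import sys
    from PyQt5.QtCore import QCoreApplication, QObject, pyqtSignal

    app = QCoreApplication(sys.argv)  # no event loop needed for direct connections


    class Counter(QObject):
        # Declare a signal that carries an int payload.
        value_changed = pyqtSignal(int)

        def __init__(self):
            super().__init__()
            self._value = 0

        def increment(self):
            self._value += 1
            # Emitting the signal notifies every connected slot.
            self.value_changed.emit(self._value)


    def on_value_changed(value):
        # Any callable can act as a slot.
        print(f"counter is now {value}")


    counter = Counter()
    counter.value_changed.connect(on_value_changed)
    counter.increment()  # prints "counter is now 1"
    counter.increment()  # prints "counter is now 2"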
What is the go-to tutorial or book that people knowledgeable in Qt recommend for learning it efficiently?
The natural reaction is to push harder, not to step back and work out the reason for the performance issue, real or perceived.
I want to run far away from the whole computer mainstream today because my brain is suffocating.
Given that this is doable in HTML/CSS/JS on a Nexus 5 (an almost 4-year-old smartphone), it just doesn't make economic or UX sense to optimise the UI beyond the 60 fps target.
If not, then the snark is unwarranted.
Does the fact that a glorified IRC client eats a big chunk of system resources matter to said program's developers? Probably not, it would seem. Does it matter to me as a user of such a program? Yes, it does very much. The problem is that right now there isn't a good way for the market to signal this back to the developers so that they would care.
Switch to a competitor's app?
Also, when people talk about Electron sucking, the one example that keeps coming up is Slack, which makes me wonder to what extent the core problem is Electron and to what extent Slack is just a badly written app.
https://josephg.com/blog/electron-is-flash-for-the-desktop/ (and associated discussion over at https://news.ycombinator.com/item?id=14087381).
As for switching to a competitor's app, you can't do that if it's a networked app using a proprietary protocol. The SaaS world has killed interoperability.
We've gone through several of those cycles and they're not in any fundamental way tied to Moore's law.
Let's check back in ten years, see how things are going.
I'd have written the same thing 10 years ago and you would have written the same thing back then too.
So, another 10 years then?
What will happen at the chip level is more cores, not faster cores, and possibly larger caches.
He spoke of the number of transistors per area in a plane doubling every year. He didn't specify silicon. He didn't specify photolithography. He also said "at least for the next decade" in 1965.
In 1975 he revised it to every two years. In 2015 Gordon Moore himself said "I see Moore's law dying here in the next decade or so."
So let's let poor Gordon off the hook. He's being attributed things he never actually said.
Tunnel effects are real and very hard to reduce, even at lower temperatures. The band gap can't get much smaller, and supply voltages are about as low as we know how to get away with.
There are solutions in terms of exotic materials with even more exotic fabrication methods.
I linked to a nice video the other day, see if that interests you:
That's the state of the art as of 2012; not much has changed since then, though there has been some incremental improvement and optimization, as well as larger dies for more cores.
Yes, but we've been aware of that one for decades, and it just isn't going to happen for anything other than maybe (and that's a small maybe) memory. Removing the heat is hard enough with 3D cooling infrastructure and 2D chips. Removing heat from 3D chips can't be done well enough to keep the interior of the chip below permissible values, unless you clock the whole thing down to the point where there are no gains.
> There are others that are being researched.
Yes, but nothing that looks even close to ready for the mainstream.
> Then there are the unknown unknowns...
They've been 'unknown' for a long time now.
Really, as far as I can see, this is more hope than reason.
Also, magnesium is 20-25 times more common than manganese. Just as well: production of magnesium metal is pretty small because it's so difficult to work with.
The regression mentioned in the press release was only used to predict one property (the Curie temperature) of the materials based on experimental data for similar materials.
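For a rough idea of what that kind of regression amounts to (this is not the paper's actual model, and the descriptors and numbers below are made up), it's essentially "fit on measured materials, predict the property for a new candidate":

    # Toy sketch: fit a regression on known materials' descriptors and
    # predict the Curie temperature of a new candidate. Numbers are made up.
    import numpy as np

    # Hypothetical descriptors for known materials (rows) and their
    # measured Curie temperatures in kelvin.
    X_known = np.array([
        [0.30, 2.1],
        [0.45, 1.8],
        [0.60, 2.4],
        [0.75, 2.0],
    ])
    t_curie_known = np.array([300.0, 410.0, 520.0, 610.0])

    # Ordinary least squares with an intercept term.
    A = np.hstack([X_known, np.ones((len(X_known), 1))])
    coeffs, *_ = np.linalg.lstsq(A, t_curie_known, rcond=None)

    # Predict for a new, hypothetical candidate material.
    x_new = np.array([0.55, 2.2, 1.0])
    print(f"predicted Curie temperature: {x_new @ coeffs:.0f} K")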
It's still a really impressive piece of work, just nothing to do with AI.
The paper is open access, so you can read the full text here: http://advances.sciencemag.org/content/3/4/e1602241.full
They spot facts in byte streams that we don't see. We can then contextualize the info into another part of the domain, or drive it deeper.
... via my fortune clone @ https://github.com/globalcitizen/taoup
From what I understand, we already have; that's why you usually see it called "machine learning" in research and academia - and "artificial intelligence" everywhere else.
That isn't to say there's no crossover in both directions, but usually, if you are being serious about the subject, you call it ML; if you're trying to hype it or build interest, you call it AI.
Note that this only goes back so far; prior to the early 2000s or so, the terms were used more or less interchangeably. At one point, the term "machine intelligence" was used, then died out, but I've seen it used again recently.
Ask a convolutional neural network what "cat" means, and the best you can get is a probability distribution over grids of pixels. It's not intelligence, just an encoding of facts provided by an actual intelligence.
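To make that concrete, this is roughly the shape of the "answer" you get back; the labels and logits below are invented, only the softmax step is real:

    # What a classifier "knows" about "cat": a softmax over class scores,
    # computed from a grid of pixels. Logits and labels here are invented.
    import numpy as np

    labels = ["cat", "dog", "toaster"]
    logits = np.array([3.1, 1.2, -0.5])  # pretend network outputs

    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    for label, p in zip(labels, probs):
        print(f"{label}: {p:.2%}")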
Once they stop tweaking Watson (for example) for every task, I'll declare it an AI.