I've been reading through the Vulkan spec. It's quite different from OpenGL, which has been extended since its introduction in 1992 with more and more features to support new graphics capabilities as they evolved.
I've seen Vulkan called the successor to OpenGL, but reading the spec it seems more like the end game for raster graphics card programming. OpenGL 4.0 was released in 2010, and since then changes have been incremental. We more or less have figured out how to do raster graphics (ray tracing may be a different story), so it made sense to invest tens (hundreds?) of millions of dollars to develop the Vulkan spec, and then many millions more to implement it.
What other technologies are there where we are more or less at the end game? I know Qt5 Widgets is considered feature complete for desktop apps.
Photoshop pretty much got it right a couple decades or so ago, and they've just been porting it, smearing on new lipstick, and figuring out how to make more money with it ever since.
I would argue that this is true of most of Microsoft Office as well. When did they really add a new feature to PowerPoint that you had to have?
And it's no surprise both Adobe and Microsoft have pushed people towards a subscription model for this software: nobody in their right mind would pay for upgrades otherwise. Arguably you need a new Office every ten years or so to ensure you have security updates, given the amount of foreign content you process with it, but Adobe? Psh.
>When did they really add a new feature to PowerPoint that you had to have
Funny enough, the screen recording functionality added to PowerPoint a few updates ago is as far as I can tell the best simple screen recorder available for Windows 10 and the closest thing to native screen recording outside the game bar. Not sure why that hasn't made it into the snipping tool yet.
The feature set of Microsoft Office, yes. But I think Google Docs took some reasonable steps backwards in features in exchange for a big leap forward in collaboration. (Or a few steps towards but not nearly far enough to where Douglas Engelbart was in 1968.)
>The live demonstration featured the introduction of a complete computer hardware and software system called the oN-Line System or, more commonly, NLS. The 90-minute presentation essentially demonstrated almost all the fundamental elements of modern personal computing: windows, hypertext, graphics, efficient navigation and command input, video conferencing, the computer mouse, word processing, dynamic file linking, revision control, and a collaborative real-time editor (collaborative work). Engelbart's presentation was the first to publicly demonstrate all of these elements in a single system. The demonstration was highly influential and spawned similar projects at Xerox PARC in the early 1970s. The underlying technologies influenced both the Apple Macintosh and Microsoft Windows graphical user interface operating systems in the 1980s and 1990s.
>Engelbart's vision, from the beginning, was collaborative. His vision was people working together in a shared intellectual space. His entire system was designed around that intent.
>From that perspective, separate pointers weren't a feature so much as a symptom. It was the only design that could have made any sense. It just fell out. The collaborators both have to point at information on the screen, in the same way that they would both point at information on a chalkboard. Obviously they need their own pointers.
>Likewise, for every aspect of Engelbart's system. The entire system was designed around a clear intent.
>Our screen sharing, on the other hand, is a bolted-on hack that doesn't alter the single-user design of our present computers. Our computers are fundamentally designed with a single-user assumption through-and-through, and simply mirroring a display remotely doesn't magically transform them into collaborative environments.
>If you attempt to make sense of Engelbart's design by drawing correspondences to our present-day systems, you will miss the point, because our present-day systems do not embody Engelbart's intent. Engelbart hated our present-day systems.
And it's in the direction of multi-user collaboration that X-Windows falls woefully short. Just to take the first step, it would have to support separate multi-user cursors and multiple keyboards and other input devices, which is antithetical to its single-minded "input focus" pointer-event-driven model. Most X toolkits and applications will break or behave erratically when faced with multiple streams of input events from different users.
For the multi-player X11/TCL/Tk version of SimCity, I had to fix bugs in TCL/Tk to support multiple users, add another layer of abstraction to support multi-user tracking, and emulate the multi-user features like separate cursors in "software".
Although the feature wasn't widely used at the time, TCL/Tk supported opening connections to multiple X11 servers at once. But since it was using global variables for tracking pop-up menus and widget tracking state, it never expected two menus to be popped up at once or two people dragging a slider or scrolling a window at once, so it would glitch and crash whenever that happened. All the tracking code (and some of the colormap related code) assumed there was only one X11 server connected.
So I had to rewrite all the menu and dialog tracking code to explicitly and carefully handle the case of multiple users interacting at once, and refactor the window creation and event handling code so everything's name was parameterized by the user's screen id (that's how you fake data structures in TCL and make pointers back and forth between windows, by using clever naming schemes for global variables and strings), and implement separate multi-user cursors in "software" by drawing them over the map.
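The parameterization trick described above, keying every piece of tracking state by the user's screen id, can be sketched roughly as follows. This is Python rather than Tcl, purely to illustrate the pattern: Tcl's flat global-variable namespace is emulated with a dict, and all the names here are hypothetical, not from the actual SimCity port.

```python
# Sketch of per-user tracking state: emulate Tcl's flat global-variable
# namespace with a dict, and parameterize each widget-tracking variable
# by the user's screen id, so two users dragging at once don't clobber
# each other's state. All names are hypothetical.

globals_ns = {}

def start_drag(screen_id, slider, value):
    # Roughly equivalent to Tcl's: set drag_value($screen_id,$slider) $value
    globals_ns[f"drag_value,{screen_id},{slider}"] = value

def drag_to(screen_id, slider, value):
    globals_ns[f"drag_value,{screen_id},{slider}"] = value

def current_drag(screen_id, slider):
    return globals_ns[f"drag_value,{screen_id},{slider}"]

# Two users manipulating the same slider from different X11 screens:
start_drag("screen0", "funds", 100)
start_drag("screen1", "funds", 500)
drag_to("screen0", "funds", 150)

print(current_drag("screen0", "funds"))  # 150
print(current_drag("screen1", "funds"))  # 500, unaffected by the other user
```

With a single unparameterized `drag_value` global (as the original Tcl/Tk tracking code effectively had), the second user's drag would silently overwrite the first user's state, which is exactly the class of glitch described above.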
At least 15 years ago you could drag a marker that hides above the vertical scrollbar to create multiple views of the same document. I didn't know it carried into other windows, so that might be newer.
PowerPoint now has a great feature where it will do speech to text and supply real time subtitles below your presentation. It’s pretty good too. Seems to ignore swear words though (yes we tested that first).
Have you used Photoshop lately? There are many new, modern features for selection, content aware erasing, scaling, filling, HDR graphics, 3D, text, computational layers, etc. Get a 30-day trial and try it!
Computational layers of lipstick. And rip-offs of stuff that's been around for decades, that Adobe didn't invent (like tabbed windows, which Adobe patented and sued Macromedia over, in spite of all the prior art).
Around 1990, Glenn Reid wrote a delightful original "Font Appreciation" app for NeXT called TouchType, which decades later only recently somehow found its way into Illustrator. Adobe even CALLED it the "Touch Type Tool", but didn't give him any credit or royalty. The only difference in Adobe's version of TouchType is that there's a space between "Touch" and "Type" (which TouchType made really easy to do), and that it came decades later!
The next talk was given by Glenn Reid, who previously worked at both
NeXT and Adobe. He demonstrated the use of his TouchType application,
which should prove to be an enormous boon to people with serious
typesetting needs.
TouchType is unlike any other text-manipulation program to date. It
takes the traditional "draw program" metaphor used by programs like
TopDraw and Adobe Illustrator and extends it to encompass selective
editing of individual characters of a text object. To TouchType, text
objects are not grouped as sequences of characters, but as
individually movable letters. For instance, the "a" in "BaNG" can be
moved independently of the rest of the word, yet TouchType still
remembers that the "a" is associated with the other three letters.
Perhaps the best feature of this program is the ability to do very
accurate and precise kerning (the ability to place characters closer
together to create a more natural effect). TouchType supports
intelligent automatic kerning and very intuitive, manual kerning done
with a horizontal slider or by direct character manipulation. It also
incorporates useful features such as sliders to change font sizes,
character leading, and character widths, and an option which returns
characters to a single base line.
TouchType, only six weeks in development, should be available in early
August, with a tentative price of $249. BaNG members were given the
opportunity to purchase the software for $150.
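The pair kerning described in that review can be sketched in a few lines. This is a generic illustration of the technique, not TouchType's actual algorithm; the advance widths and kerning values below are made up.

```python
# Minimal sketch of pair kerning: each glyph's x position is the previous
# position plus the previous glyph's advance width, minus any kerning
# adjustment defined for the (previous, current) pair. All values here
# are hypothetical.

def layout(text, advances, kerning):
    """Return the x position of each glyph in `text`."""
    positions = []
    x = 0.0
    for i, ch in enumerate(text):
        if i > 0:
            # Tighten the gap for known pairs, e.g. ("A", "V").
            x -= kerning.get((text[i - 1], ch), 0.0)
        positions.append(x)
        x += advances[ch]
    return positions

advances = {"A": 10.0, "V": 10.0, "a": 8.0}
kerning = {("A", "V"): 2.0}

print(layout("AV", advances, kerning))  # [0.0, 8.0]: the V is pulled 2 units left
```

"Direct character manipulation" in TouchType then amounts to letting the user override these computed positions per glyph while the program remembers which word each glyph belongs to.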
Qt Widgets is an amazing library, but honestly it is both more and less featureful than it needs to be in various cases. The rich text document stuff still holds up OK for basic cases, but I think the text rendering story could be a bit better. Last time I was doing low level text stuff in Qt, performance was not super impressive, and some of the APIs left a bit to be desired.
Well said. The notion that Qt widgets are "finished" was just an aspiration not to spend much more money on it I think; they sort of rowed back on this when it became apparent that Qt Quick isn't always appropriate, but by then it had sort of spread around as a "Qt fact" amongst people who didn't actually use it.
The number of rough edges, missing bits and outright bugs mean that it's certainly not "finished"... just like all software really.
They helped spread the rumor themselves, since plenty of new features are QML-only, especially when targeting non-desktop devices.
To this day, if you want a common file dialog that works properly across all Qt deployment targets, you need to use QML, as the Widgets version is not adaptive and will display a tiny desktop-style file dialog on a small LCD, for example.
My preferred metaphor is to the RISC revolution. Just like RISC decoupled CPU design from programming language design (in the sense that hardware was often designed to make assembly coding easy), Vulkan has decoupled shader language design from driver writing. OpenGL was designed under the assumption that the general game-dev public would be writing shaders for direct consumption by the driver/hardware combo; Vulkan, on the other hand, seems to be designed to be a) written by a narrow group of game engine developers, and b) generated by compilers from higher-level shader languages.
(NB: I Am Not An Expert and these are my Uneducated Impressions.)
Vulkan was primarily designed to allow issuing batches (queues) of pre-validated instruction sets from multiple threads. Lessons learned from OpenGL ES 2.0 showed only a subset of techniques is needed, hence the API is smaller. Shaders are precompiled. Smaller, simpler driver.
I think/hope the "endgame" for 3D APIs is that they disappear completely into compilers. Vulkan still has too many compromises and design warts to support different GPU architectures from high- to low-end and is already more complex than GL ever was after 25 years of development (just look at the huge extension list that already exists for Vulkan).
I don't need a "CPU API" to run code on the CPU in my machine, so why do I need to go through an API to run code on the GPU (hint: it's mostly about GPU makers protecting their IP)?
The irony is that the Raspberry Pi is basically a GPU chip with some small CPU cores tacked on. So yes, you actually need the GPU API to run code on the CPU.