edit/mass reply: I've been coding web apps with the rest of you guys for 20 years. The web isn't the problem, the tooling just isn't there yet. The solution space is large and we're still in the 'throw things at the wall' stage. It will eventually be figured out and web dev will be nice and stable just like the backend and database layers.
Clearly, there are many things wrong with that, but the fact that one could whip up several screens and essentially ship them in a few hours makes it a worthwhile exercise to think about what has been gained, and what lost. Most enterprise apps that I encounter seem to be massively overcomplicated for what they actually do.
I think the big issue is every app I use/develop in the enterprise needs single sign on and fine grained access control.
Those modern requirements create a minimum layer of complexity that really slows things down.
But yeah, the whole enterprise programming world really missed an opportunity with JavaFX. For all the shit they get, being able to deploy desktop apps in a virtual environment is really powerful.
FWIW, before there was Delphi there was an entire language (and consulting industry) built around this: PowerBuilder. Its primary component was a "DataWindow" which was a fancy presentation object around basic CRUD SQL. Good times were had by many.
Local dev, autorefreshes the Vuejs frontend code (yarn serve) and backend autorefreshes with `air` if Go-based or Intellij debug hot-reload if Jvm-based. I can whip up a db schema, backend, and frontend in a day with a good amount of layout and styling. It just took repetition and templating a few things.
Which is to say it doesn't take a lot of tooling to be productive--as long as it's focused.
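As a sketch of the hot-reload half of that setup: `air` watches the source tree and rebuilds on change, driven by a small config file. The build command and extension list below are assumptions about a typical project layout, not the commenter's actual setup (`air` ships similar defaults):

```toml
# .air.toml -- minimal hot-reload config sketch for `air`
root = "."
tmp_dir = "tmp"

[build]
  # Rebuild the backend binary on every source change (assumed layout).
  cmd = "go build -o ./tmp/main ."
  bin = "./tmp/main"
  include_ext = ["go", "tmpl", "html"]
  exclude_dir = ["tmp", "vendor", "node_modules"]
```

Running `air` in the backend directory alongside `yarn serve` in the frontend gives both halves of the auto-refresh loop.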
Many Delphi apps back in those days were very lazily slapped together, with thin layers of code (or none) on top of whatever queries got the job done. There are performance issues, security issues, correctness issues (in terms of: does the app allow you to do some things that should not be allowed). There are also meta-level issues that came into play, as anything that did not fit into the out of the box patterns was so outrageously more expensive to develop that companies opted away from it, and in many cases UX suffered.
On the other hand, we have now overreacted in the opposite direction, with many business apps having empty layers of code that do nothing except pass the exact same message on to the next layer, because, well, "that's how it's done in the real world", or something. Tons of boilerplate code are created that never add any real value. Beyond this specific example, there are plenty of YAGNI violations.
I prefer something of a balance between the two.
The issues you state (security, correctness, and performance): none of these are solved by having more software. It might seem safer if you have more layers of shit between the user and the valuable data, but it isn't. The mental idea that because I now use 'model and view' I'm going to be quicker, more correct, or more secure is, I think, demonstrably untrue.
Circling back to what I was trying to say with the first comment - I suspect the rapid development promise and culture back in the day led to a lot of these apps being developed. The tooling was just an enabler.
And also from an end user perspective, not everyone has a powerful dev machine.
I'm heavily reluctant to use web-based interfaces to remote services, and I think that's healthy and fine. As a developer, of course it's less complicated not to be negotiating network communication in order to create an application!
Now, consider that there are far more new people coming into programming each year than the prior year (or even if that's not strictly true year over year, there are far more software engineers with 1-10 years of experience than with 10-20 years, possibly even than with 10+ years).
In a market such as that, ease of use is paramount and the killer feature that drives almost all your usage. Catering to amateurs is the path to increased usage, mind-share, and market-share where applicable. With that in mind, is it really any wonder that amateurs are catered to so much that there's actually a regression in tools that cater to professionals?
My guess is that if you look into subgroups which are not friendly to new users for one reason or another (most commonly because of complexity and skill), yet still have retained users for some reason, you'll find high quality professional tools. I think C/C++ probably fits this, as well as systems programming, and kernel development. I'm not part of any of those subgroups, but my guess is they have not only retained the quality of their supplemental tools, but increased them.
However, it is still largely ignored as good practice, in a world where teachers still use IDEs like Turbo C++ as a teaching tool.
Similarly, Smalltalk and Common Lisp are super fast for developing things. Maybe they relied too much on their superior environments, which other languages cannot easily replicate, and so the more generic but less productive technology is what lasts?
I am not sure what headache you are talking about. My products are usually a single exe with dependencies statically linked, acting in a dual role: setup and the end software itself. Installation goes like this:
1) customer clicks on link
2) setup.exe is downloaded and run.
3) setup copies itself to a proper location and renames itself to yourwonderfulsoftware.exe.
4) When running it may communicate to servers to get whatever data/files/licenses it needs if any.
5) It also checks for a new version, and if there is one, it can self-update when you click that "update" button.
6) Before the update starts, the old version and data are always backed up.
Yes, it does lack the instant-gratification experience that comes with good web applications. But then again, if the software I want to make looks like a good candidate for the web, I will implement it as a web app.
In the hopefully not-too-distant future, WebAssembly may make the difference minimal, but we will have to wait and see how it goes.
They recently changed their licensing AGAIN: https://www.qt.io/blog/qt-offering-changes-2020
If you rely on having a stable Qt, why not pay the company that develops the software? You and I, the developers that use Qt without paying for it, still get the latest and greatest Qt, with all its source code, for free, under the same LGPL license that Qt has always been under.
There is also a legal framework to make sure Qt will remain open source forever: the KDE Free Qt Foundation.
The fun fact this brings up is that the KDE Free Qt Foundation agreement also means that if Qt stops supporting open source Qt, the framework code becomes licensed under the BSD license.
It is literally in Qt's best interests to keep the open source community by its side.
This does not detract from my point that the license of Qt is not changing.
> As a result, they are thinking about restricting ALL Qt releases to paid license holders for the first 12 months.
> They are aware that this would mean the end of contributions via Open Governance in practice.
Because it's too expensive for me to afford. More than an order of magnitude too high for me. As side projects I write closed source software, so they do pull in a bit of money. But this is typically only a few hundred to a few thousand per year per app. I think they target corporations rather than individuals or small shops.
I've used it a lot, as well as other popular toolkits (Qt, GTK, etc.) and I find that wx is, at the very least, the least bad option and overall programming in it hasn't been a pain, regardless of which binding I used.
I guess my only complaint would be that it's a bit harder to do something way outside the norm when compared to Qt, but at that point it might be better to use native SDKs or straight up OpenGL or something for your GUI.
There are some rough spots here and there but all in all I am really baffled why you don't see more use of it. I use it in C++ for all my GUIs.
This also makes customizing the widgets much harder (if possible at all). There are also some self-drawn widget libraries available for wxWidgets.
GTK bindings via PyGObject also work on macOS, if Qt's licensing doesn't suit you.
Only downside is that it is Windows only.
Looking back at VB and other WinForms RAD tools, it's easy to do that stuff, and there are HTML WYSIWYG tools, but that double-click code-behind logic doesn't scale: software these days is distributed and has more complex requirements and expectations. Once you bolt MVC or MVVM or whatever onto one of those GUI toolkits, you get very close to modern JS framework complexity.
We don't even have standard ui primitives like we did in the past. Every major website is expected to have a team of world class designers and reinvent the wheel.
It doesn't need to be this way. But it's the way we have chosen. It has advantages, but I'd imagine the economic cost is enormous.
People writing apps with 100s of global variables was the norm.
If you've worked with WPF, you'll see Angular 2 is very similar in spirit; the complexity is on par. I don't really see the difference between native development and JS once you start dealing with the same level of complexity.
Sure, there's always a tension of "simple" versus "limited". But simple was fun.
Until I knew more later and it became a disaster.
That aside, things have changed rapidly in the last five years in the C#/.NET world first with .NET Native then with .NET Core and CoreRT. Exciting times, really.
So far they are demoing single file release, which is basically packing everything into the same exe, but you get a JIT + MSIL instead.
Ironically, this comes at the same time that C# the language has become much more usable without GC, or with minimal GC, thanks to the work that went into implementing Span. I think that was more a matter of necessity, to support advanced async features for web usage, although I found it also made P/Invoke a joy and eliminated virtually all my need for marshaling in a few codebases... and it would have eliminated all the performance issues that led the OS team to abandon C#.

It does seem that the ASP/Blazor team is driving the show and calling the shots after the UWP failure in terms of adoption, and I'm not seeing much that would indicate it's not the case, even with Project Reunion.

I've been testing WinUI 3, MSIX, and WebView2 and have been disappointed at the lack of a story for putting all the parts together. It seems like side-loading packages with sparse package projects is intended to replace "native" UWP packages ("regular" AppX packages require .NET Native unless side-loaded, and I can't get apps pulling in WinUI/STJ/Buffers/etc. code to compile to .NET Native without an undeclared dependency on System.Private.CoreLib and without serious hacks to enable RTTI, which makes me think they're not meant to be used that way any longer). But as always, MS isn't very forthcoming about the future of UWP components more than a single step at a time, with all bets clearly hedged.
Mono is now part of the dotnet family so I suspect they are going to use that for AOT in C# forwards. Starting with Blazor.
Project Reunion seems to be the start of a long roadmap to bring UWP tech into Win32, as if Windows 8 had introduced UWP as a Win32 evolution without the additional application model.
So since developers didn't migrate en masse to UWP: bring UWP APIs (COM v2) to all Windows developers, and ten years from now pretend that Windows 8.x (UAP)/W10 UWP never happened.
I used to believe in UWP despite all its issues, still think C++/CX was the closest that Microsoft ever came to having their own C++ Builder (C++/WinRT tooling is still too clunky), and .NET Native was what .NET should have been since day one.
Oh well, let's see how all of this will turn out.
It is quite different than a general purpose AOT compiler for .NET, as it also takes into account HPC# code patterns.
Too bad there are just a few quality interviews with him: https://www.artima.com/intv/anders.html
For me it is the case.
Back then, Delphi was really the greatest, smartest, and easiest RAD IDE.
I even think that today's dev tools are in a bad state and nothing matches what we once had back then!
Especially for creating 'responsive' GUI apps easily.
Then, as shitty companies always do, they completely changed the tool and the language with the new, bad .NET-like one. Despite it being worse, they claimed that this was better than what we had before and forced it on users.
But it was no longer the Object Pascal that we liked!
Sometimes you just have to make do with what your customers want. And besides, C# is IMHO the best computer language ever invented.
Anders has a couple of interviews where he mentions that he resisted invitations from ex-Borland colleagues that moved to Microsoft.
It was only when Borland stopped being what it used to be that he decided it was time to move on.
What happened was that they decided indies weren't their target demographic any longer; they wanted to be enterprise and focus on product and process lifecycle management (PPLM) tooling instead.
So Delphi and C++ Builder got priced accordingly and they renamed themselves to Inprise.
After 20 years they are trying to get back the indies with community editions, but the damage is already done, and 100% of their customers are enterprise shops.
So unless you are working for a Fortune 500, you will hardly come in touch with Delphi / C++ Builder nowadays.
It was a big mistake not embracing AOT from day one. C++ wouldn't have kept its king position in the MS ecosystem if the original .NET had been like .NET Native since the beginning and had kept Delphi-like features for low-level coding (some of which have been added since C# 7.x).
JIT support could still be an additional option as well, just like on languages like Eiffel.
Instead we got NGEN (with basic optimizations), .NET Native (out of Midori, but which seems on the death march with Project Reunion), MDIL/Bartok (out of Singularity, used only in Windows 8.x), and all remaining efforts are from third parties (Xamarin pre-acquisition), Unity, CosmOS.
And no one really knows if CoreRT will ever be integrated into the main product.
Case in point: when Windows RT jailbreak came out, my .NET Framework AnyCPU apps just worked there. Now when I package, I very often have to list the target architectures in advance.
When I hear Java announcing JIT in Java and catching up with C# on syntax sugar, I feel that C# + .NET might start to lag behind in innovation.
Then there was Axum, Cω, Phoenix Compiler (LLVM-like in .NET), Singularity, Midori, Roslyn, MDIL, .NET Native.
GraalVM goes back to Maxine VM and the Jikes RVM, so JIT in Java is also quite old.
What all these projects need is money and the political willingness to keep driving them forward, and here is probably the main issue with some .NET research projects: since the beginning, the Windows development teams (which kind of own the C and C++ story) haven't been that willing to have too much .NET on their turf.
First, it is not friendly to the web; we have some special web requirements, so we decided to spawn a Node process to handle them.
Second, the tooling is so much better in Visual Studio. And the compiler is very smart, with sophisticated semantic and syntactic analysis.
Last but not least, it really lacks third-party libraries, so people always need to implement things themselves.
C# may not be the best option for every application, but it is general enough to support almost every type of application.
He never got around to it.
And this isn't theoretical--Altium actually did a full code rewrite in order to get off of Delphi because of this.
Satire and (2016)
And the title included "C# coders". I am not a C# coder, but all C# GUI coders I know had praised Delphi for its design and convenience for GUI...
That was an additional custom controls library that you could enable, you could choose to use the standard L&F as well.
It was also available for the C++ products from Borland and it traces back to their first Windows 3.x compilers.
My first experience with it was in Turbo Pascal for Windows 1.5 (the last TP before Delphi was born).
Also, similar libraries existed for MFC or plain C Win32, sold by companies like ComponentOne.
Also the hello world apps for Delphi had these icons everywhere so it was "just the way" you built applications using it.
A standard button is a TButton: http://docwiki.embarcadero.com/Libraries/Sydney/en/Vcl.StdCt...
A button with a glyph is a TBitBtn (bit meaning bitmap): http://docwiki.embarcadero.com/Libraries/Sydney/en/Vcl.Butto...
Made it easy to spot a Delphi app.
The author also misses the point that C and C++ are systems programming languages (for developing operating systems, device drivers, and low-level stuff such as compilers), while Pascal is an application programming language. C and C++ were pressed into service for application development because many nerds thought it cool to have the fastest benchmark result, ignoring the fact that these languages are unsafe to use on a day-to-day basis. That's why C# and Java were invented.
Pascal is a totally fine systems language. AEGIS (a from-scratch Unix-like OS) was written in Pascal and by all accounts was a great option at the time.
And what makes C and C++ unsafe also exists in Pascal.
These days you can probably mix assembly within your Pascal code or access pointers directly with some custom language extensions, but these weren't part of the original Pascal specification.
I'm not sure where you got this idea that Pascal is memory safe.
Borland enabled adding Assembler to almost every language for which they made a compiler.
We need an AOT compiled general purpose application language like C# with managed memory access. If people want to fool around with low-level stuff let them use Rust, assembler or C/C++.
Pascal OTOH is 100% safe no matter what you do because you can't access pointers directly or do pointer math or out-of-bounds array access. The pointers available in Pascal are "managed" pointers which prevent you from accessing the underlying machine hardware.
This all incurs a little overhead, but you won't find any insecure Pascal programs due to memory mismanagement.
Maybe in your old classic Pascal. Not true for the modern Delphi/FreePascal incarnations of it. You can do pretty much anything, including pointer math, as well as safe versions of it.
I think people get hung up on the theoretical capabilities of "pure" Pascal but the forms that people actually used had all sorts of extensions. Back in the day, whether you were using Pascal or C or whatever, it probably wasn't very standards-compliant anyway.
I am just expressing my opinion. You know, taking a distraction from mundane work to talk about technical things. I am not trying, nor do I need, to prove how I do my development/design and what tools I use, and frankly I do not give a flying hoot what others might think about it. I run my own business after all.
As to your particular point about the language being unsafe because it allows typecasting a pointer to an integer: allowing unsafe features, in my view, does not make a language unsafe as long as it provides a safe way of doing things as well. It is called flexibility in my book.
Security-wise: could I have done better? Sure. Anything could be done better, but you've probably heard of the law of diminishing returns. Does the fact that I use a language that has unsafe features automatically make my software unsafe if I do not use said features? Big fat NO. Even if I do use such a feature (rarely, but sometimes I do, for the sake of efficiency), it does not really change the main point.
As a senior engineer at a security-conscious firm, who used to freelance by writing exploits for code written by developers with your attitude, I want to make sure we don't use your software.
Are you seriously afraid to promote your business because it might be attached to your comments? That should tell you something.
And if you are the major security guy you claim to be, you have probably already checked all the software you allow to be used for security breaches. So either you found the problem and got rid of said software, or you're just full of it.
And if your business recommendations to your company are based on "attitude" picked up from an internet chat, or shall I say your personal likes, why don't you submit this conversation to your employer?
And finally the last thing I feel like doing is promoting my business/services to people like yourself.
Do you know what a 0-day is?
> And if your business recommendations to your company are based on "attitude" picked up from an internet chat, or shall I say your personal likes
My recommendations to my employer are based on many things, including overall security posture. The world's best engineers ship memory-unsafety bugs in C and C++, including modern C++. Your statements betray an overall lack of respect for the problem space, which in my experience leads to an increase in issues, even above the general steady state.
> why don't you submit this conversation to your employer.
I plan to as soon as there is actionable information.
> And finally the last thing I feel like doing is promoting my business/services to people like yourself.
You're the one who's been regularly deciding to comment on my posts trying to convince yourself that everything you're doing is fine, and you can't do better. "People like [my]self" are just telling you that you can do better, and only when you decide to go out of your way to start shit.
I'd prefer if you stopped commenting on my posts.
I am not Microsoft/Google/etc. and do not give a flying hoot about their language preferences. I run my own business and my clients are happy. As already said, not a single complaint.
Good luck hiring a dedicated expert or team willing to break it. I am not Google, and the cost of trying to break my software where it matters far exceeds any potential benefit.
So sure, I am not a prime choice as a security target, which makes it even better.
But these comprise only 30% of all security errors as the research shows. Most are due to memory management issues.
C# appeared after Sun sued Microsoft for trying to embrace/extend Java.
They don't deserve the trash talk.