It's hard being an old programmer, as some technologies you used in the past, which were more productive than certain technologies today, are no longer popular. Creating client-side apps in the 90s was arguably easier than creating web apps today with the soup of HTML/CSS/JS frameworks that change every month.
edit/mass reply: I've been coding web apps with the rest of you guys for 20 years. The web isn't the problem, the tooling just isn't there yet. The solution space is large and we're still in the 'throw things at the wall' stage. It will eventually be figured out and web dev will be nice and stable just like the backend and database layers.
Delphi certainly was supremely productive for the kinds of one-off, bespoke enterprise apps that I was involved with, and it is valid to question why that was the case. I don't think the tooling was necessarily all that superior; the industry trend simply leaned heavily toward "slap a grid on top of that SELECT * FROM Orders".
Clearly, there are many things wrong with that, but the fact that one could whip up several screens and essentially ship them in a few hours makes it a worthwhile thought exercise to think about what has been gained, and what lost. Most enterprise apps that I encounter seem to be massively overcomplicated for what they actually do.
> Most enterprise apps that I encounter seem to be massively overcomplicated for what they actually do.
I think the big issue is every app I use/develop in the enterprise needs single sign on and fine grained access control.
Those modern requirements create a minimum layer of complexity that really slows things down.
But yeah, the whole enterprise programming world really missed an opportunity with JavaFX. For all the shit it gets, being able to deploy desktop apps in a virtual environment is really powerful.
> "slap a grid on top of that SELECT * FROM Orders"
FWIW, before there was Delphi there was an entire language (and consulting industry) built around this: PowerBuilder. Its primary component was a "DataWindow" which was a fancy presentation object around basic CRUD SQL. Good times were had by many.
Working on a number of quarantine projects, I have come close to this now. Currently I make a single fullstack repo for the app, with backend and frontend code committed together and deployed separately. The client goes to Netlify; the backend goes to a hosted VM that can autoupdate from GitHub and apply schema migrations.
For local dev, the Vue.js frontend auto-refreshes (yarn serve) and the backend auto-refreshes with `air` if Go-based, or IntelliJ debug hot-reload if JVM-based. I can whip up a db schema, backend, and frontend in a day with a good amount of layout and styling. It just took repetition and templating a few things.
Which is to say it doesn't take a lot of tooling to be productive--as long as it's focused.
What the hell else do business apps do? I'm genuinely interested to know what you think is the clear issue with showing users data from database tables.
I'm not sure I 100% understand your question, and it's also hard to respond without more context.
Many Delphi apps back in those days were very lazily slapped together, with thin layers of code (or none) on top of whatever queries got the job done. There are performance issues, security issues, correctness issues (in terms of: does the app allow you to do some things that should not be allowed). There are also meta-level issues that came into play, as anything that did not fit into the out of the box patterns was so outrageously more expensive to develop that companies opted away from it, and in many cases UX suffered.
On the other hand, we have now overreacted in the opposite direction, with many business apps having empty layers of code that do nothing except pass the exact same message on to the next layer, because, well, "that's how it's done in the real world", or something. Tons of boilerplate code gets created that never ends up adding any real value. Beyond this specific example, there are plenty of other YAGNI violations.
I guess I don't buy the idea that anything has changed. If a straight query shows you the data you need, why would you put anything but a thin layer of software over the top? More software is worse, not better. Apps are still lazily created by people downloading frameworks and plugging them into whatever shit they happen to be creating at the time.
The issues you state: security, correctness, and performance. None of these are solved by having more software. It might seem safer if you have more layers of shit between the user and the valuable data, but it isn't. The mental idea that "now I use model and view, I'm going to be quicker, more correct, or more secure" is, I think, demonstrably untrue.
This could be argued infinitely, I think. I do appreciate your points; yes, simply adding code - by default or by itself - does not resolve those things I mentioned. But it's important to view my comments within the context of what we were talking about, which was the de facto development style that grew around Delphi, especially in the early days (before their libraries got smarter).

In the case of my "SELECT *" example, this was quite often literally the case - queries were not optimised in the least regarding the columns and rows being retrieved (mostly due to lack of awareness, or laziness). In terms of correctness, you can bet that business rules were usually sprinkled around the UI in a happenstance kind of way, rather than being collected in a relatively well-defined, easy-to-verify and unit-testable business object, say.

Again, while I do agree in theory that "more code" does not necessarily solve anything, these are the kinds of practicalities I am arguing about, not a pure-theory level. The tooling was great for getting things done instantly, but it encouraged a laziness that caused a counter-revolution which went too far in the opposite direction.
My work until recently was on one of the Delphi apps you're talking about - not in terms of unoptimised queries, but bad separation of business logic and UI. The product has passed between 4 different companies and is over 20 years old now; I just don't see how you prevent these issues. I do see what you're saying, but I think the real issue is humans, which hasn't changed.
Agreed, and I also don't have a real answer. At least, not a technological one. The closest we can get is a bunch of developers who care about a product, ask themselves lots of "why do we do it this way" and "could this be done better", and try to come up with some solid answers. Of course, with products that have survived 10+ years of maintenance, it is highly unlikely that enough time will be afforded to any team.
Circling back to what I was trying to say with the first comment - I suspect the rapid development promise and culture back in the day led to a lot of these apps being developed. The tooling was just an enabler.
Not to mention how native client-side apps consume far less memory and CPU than their web/electron counterparts. I miss those days when I would have a bunch of native apps constantly running and minimized. These days it's considered normal for a music player to consume 500MB of memory and for chat apps to consume multiple GBs of RAM.
But if you spent the same amount of money on your dev system, adjusted for inflation, then of course it could probably handle everything you could throw at it.
It's ridiculous that we've relied on hardware to save us while we do more and more every day. We clap our hands at a bit of text and some pictures on a screen taking hundreds or thousands of megabytes of memory. This incredible inefficiency means that computers cost more than they should, that fewer people have them than could, and that we spend more resources than necessary to make them and more electricity than we need to run them. Why don't programmers generally understand how incredibly wasteful we are with the resources we've been given?
It only works if a handful of apps do it; if everyone starts doing it, you end up with a scenario where there isn't enough RAM for every application.
And also from an end user perspective, not everyone has a powerful dev machine.
You've even drunk some of the koolaid by calling them "client side apps". They're simply applications, or if you must, native applications! ;)
I'm heavily reluctant to use web-based interfaces to remote services, and I think that's healthy and fine. As a developer, of course it's less complicated not to be negotiating network communication in order to create an application!
You've even drunk some of the koolaid served up by the personal computer industry by calling them applications: real applications clearly run on mainframes and reach the end user via terminal display units!
I can't help but feel cynical that a fair bit of modern complexity comes from the sheer number of features that simply didn't exist back then. The variety of screens, interfaces, and OSes, and accessibility, aren't free. Those are why we use all these cumbersome frameworks, after all.
There's always a trade-off in technologies between things that are easier to learn, and things that are easier to use once you've learned them. When there's a steady state of people cycling through the system, it's easy to cater to specific segments of the population with different types of tools. Those that don't need hand holding and want the extra bells and whistles to get exactly what they want out of something will likely find their needs met in such a situation.
Now, consider that there are far more new people coming into programming each year than the prior year (or, even if that's not strictly true year-over-year, there are far more software engineers with 1-10 years of experience than with 10-20 years, possibly even more than with 10+ years).
In a market such as that, ease of use is paramount and the killer feature that drives almost all your usage. Catering to amateurs is the path to increased usage, mind-share, and market-share where applicable. With that in mind, is it really any wonder that amateurs are catered to so much that there's actually a regression in tools that cater to professionals?
My guess is that if you look into subgroups which are not friendly to new users for one reason or another (most commonly because of complexity and required skill), yet have still retained users for some reason, you'll find high-quality professional tools. I think C/C++ probably fits this, as well as systems programming and kernel development. I'm not part of any of those subgroups, but my guess is they have not only retained the quality of their supplemental tools, but increased it.
If you code in C++ with tooling like QtCreator, CLion, VC++, or C++ Builder and everything enabled (static analysis), it can be quite productive and even quite secure.
However, such good practices are still largely ignored, in a world where teachers still use IDEs like Turbo C++ as teaching tools.
It's strange that we eventually reached a place where things became less productive...
Similarly, Smalltalk and Common Lisp are super fast for developing things. Maybe they relied too much on their superior environments, which other languages could not easily replicate, so the more generic but less productive technologies were the ones that lasted?
The advantage of web apps is in deployment. Developing a stand-alone app in Delphi or Visual Basic was highly productive, but distribution, installation and updating was a headache. And it would only run on one platform.
" but distribution, installation and updating was a headache"
I am not sure what headache you are talking about. My products are usually a single exe with dependencies statically linked, acting in a dual role: installer and the end software itself. Installation goes like this:
1) customer clicks on link
2) setup.exe is downloaded and run.
3) setup copies itself to a proper location and renames itself to yourwonderfulsoftware.exe.
4) When running, it may communicate with servers to get whatever data/files/licenses it needs, if any.
5) It also checks for a new version, and if there is one it can self-update when you click that "update" button.
6) Before the update starts, the old version and data are always backed up.
Yes, it does lack that instant-gratification experience that comes with good web applications. But then again, if the software I want to make looks like a good candidate for the Web, I will implement it as a web app.
In the hopefully not-too-distant future, WebAssembly may make the difference minimal, but we will have to wait and see how it goes.
but you can still use those tools on your own projects right? I've been learning and using XQuery and XSLT 3.1 and I really enjoy them! Also, common lisp :-) But yeah, it will be harder to use tech like that on my day job, wish I could.
It isn't a day on HN unless someone complains about web development while pining for the days when they were building command-line calculators in COBOL on their mainframes while sipping espresso with Turing and Dijkstra.
I honestly hope so. Sticks are passable enough for addressing today's CRUD apps but I'll readily take scalpels and plasma torches as they become available.
Nope. Web development could (and should) be much cleaner, but it doesn't get there by tirelessly adding to the list of curmudgeonly comments over and over. As a web developer who made zero choices that put HTML, CSS, and JavaScript in the position they are in today, and who is wholeheartedly open to the chances that things like wasm could bring, it is honestly more tiring to read yet another derisive comment about JavaScript fatigue than it is to actually deal with the churn in the ecosystem.
They have not changed any licensing. They have changed the way they are releasing their product. The only thing really that is different here is the fact that they are pushing less code as open source for their stable LTS releases.
If you rely on having a stable Qt, why not pay the company that develops the software? You and I, the developers who use Qt without paying for it, still get the latest and greatest Qt, with all its source code, for free, under the same LGPL license that Qt has always been under.
Actually, Qt was not always LGPL: it was first under the QPL, then the GPL, and now the LGPL. So it is quite an improvement, from a license forbidding commercial use to a far more permissive one.
There is also a legal framework to make sure Qt will remain open source forever: the KDE Free Qt Foundation.
Historically you are correct, yes. I should have had an asterisk and explanation next to 'always'.
The fun fact that this also brings up is that the KDE Free Qt Foundation also means that if Qt does not keep supporting open source Qt, then the framework code becomes licensed under the BSD.
It is literally in Qt's best interest to keep the open source community by its side.
This does not detract from my point that the license of Qt is not changing.
> As a result, they are thinking about restricting ALL Qt releases to paid license holders for the first 12 months.
> They are aware that this would mean the end of contributions via Open Governance in practice.
> If you rely on having a stable Qt, why not pay the company that develops the software?
Because it's too expensive for me to afford - more than an order of magnitude too high. As side projects I write closed source software, so they do pull in a bit of money, but this is typically only a few hundred to a few thousand per year per app. I think they target corporations rather than individuals or small shops.
I did a couple of small projects in Python using wxWidgets recently, and I was surprised at how easy it was (as well as how consistent the results were between Windows and Mac OS).
I'm overall surprised at how unpopular/unknown wxWidgets is when the discussion turns to cross platform UI toolkits.
I've used it a lot, as well as other popular toolkits (Qt, GTK, etc.) and I find that wx is, at the very least, the least bad option and overall programming in it hasn't been a pain, regardless of which binding I used.
I guess my only complaint would be that it's a bit harder to do something way outside the norm when compared to Qt, but at that point it might be better to use native SDKs or straight up OpenGL or something for your GUI.
The wxPython demo (which spawns many apps showing the use of many widgets, including complex ones) is very good too, and is a non-trivial wxPython app itself. It was separately downloadable from wxPython itself, last I used it. All the apps come with source (of course), good for learning from and adapting to your own app's needs.
Creating hello world and drawing trivial forms is just as simple with HTML/CSS/JS, you don't need a SPA MVC framework or component libraries or state management framework.
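To illustrate, a trivial form can be generated with nothing more than a template string; this is just a sketch, and the field names are invented:

```javascript
// Render a trivial form from a field description using plain template
// strings: no framework, no component library, no build step.
function renderForm(fields) {
  const inputs = fields
    .map(f => `<label>${f.label} <input name="${f.name}" type="${f.type}"></label>`)
    .join('\n');
  return `<form>\n${inputs}\n<button type="submit">Save</button>\n</form>`;
}

// In a browser you could drop this straight into the page:
// document.body.innerHTML = renderForm([{ name: 'qty', label: 'Qty', type: 'number' }]);
```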
Looking back to VB and other WinForms-style RAD tools, it's easy to do that stuff, and there are HTML WYSIWYG tools too, but that double-click code-behind logic doesn't scale - software these days is distributed and has more complex requirements and expectations. Once you bolt MVC or MVVM or whatever onto one of those GUI toolkits, you get very close to modern JS framework complexity.
I laugh because you'd think the distributed part would be the toughest one. But nope, it's pretty easy. In fact the client/server model was pretty well understood even in the 90s (or well before that even).
In my opinion what has exploded the complexity is the proliferation of environments. The execution environment of our software provides very few guarantees on what is available (no standard library) or even what language is supported (many JavaScript features and versions with varying support). That, combined with the explosion of devices including input modes, screen sizes, and resolutions has just made it extraordinarily difficult.
We don't even have standard ui primitives like we did in the past. Every major website is expected to have a team of world class designers and reinvent the wheel.
It doesn't need to be this way. But it's the way we have chosen. It has advantages, but I'd imagine the economic cost is enormous.
I can't agree more with your "we don't have standard UI primitives" and "every major website is expected to have a team of world class designers" comments. I have been migrating a site from an old to a new shop system and the amount of work for simply displaying product data, options and choosing something to buy is insane. I really believe we've lost something from the RAD era.
Lol - the 90s is the era that gave birth to PHP; I'd wager very few things were "well understood".
People writing apps with 100s of global variables was the norm.
If you've worked with WPF, you'll see Angular 2 is very similar in spirit - the complexity is on par. I don't really see the difference between native development and JS once you start dealing with the same level of complexity.
Client/server models existed before the web and HTML/JavaScript.
WPF is backed by the .NET framework. While the UI framework itself might not be easier than say "React", you have a stable language (C#) with a huge, stable, standard library for all kinds of things from Date manipulation to File IO. There are so many advantages to that ecosystem over JavaScript currently.
For example, I've been using numeraljs for number formatting in JavaScript. This is now unmaintained and contains a bug where, if the number is below 1e-6, it shows up as NaN. So now I have to go source a new package for number formatting. This is so simple and core that it's amazing I am searching for packages for it.
There is now Intl.NumberFormat but it has quirks also. But this is the crap you have to deal with on a daily basis with JavaScript.
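For what it's worth, one of those quirks is that Intl.NumberFormat caps output at three fraction digits by default, so the small numbers that tripped up numeraljs silently format as "0" unless you opt in to more precision; a quick sketch:

```javascript
// Intl.NumberFormat is built in, but its defaults bite on small numbers:
// maximumFractionDigits defaults to 3, so 1e-7 rounds to "0".
const defaults = new Intl.NumberFormat('en-US');

// Raising maximumFractionDigits keeps the digits.
const precise = new Intl.NumberFormat('en-US', { maximumFractionDigits: 8 });

console.log(defaults.format(1e-7)); // "0"
console.log(precise.format(1e-7));  // "0.0000001"
```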
Single-machine local apps and apps using direct database connections over a trusted network were the norm - these were much easier to write since they didn't have to deal with client/server separation of logic.
You won't hear me arguing that JavaScript is a great environment to work with - but people argue that JS frameworks are overcomplicating things - when in reality desktop frameworks arrive at the same design decisions.
There's a free edition for small business and personal use and a perpetual "enterprise" license for $1,895. Not so bad for a commercial product. But yes, no free software.
It’s a really good crash course on the shared origins of Delphi and C#, plus a brief reminder of all the modern features offered by Delphi that have flown under the radar, but the anger and emotion are rather immature and really detract from the argument. Those words could have been better spent selling its strengths further.
That aside, things have changed rapidly in the last five years in the C#/.NET world first with .NET Native then with .NET Core and CoreRT. Exciting times, really.
Unfortunately .NET Native seems to be on the way out with Project Reunion, and it is not clear what AOT support will look like after the .NET 5 release.
So far they are demoing single file release, which is basically packing everything into the same exe, but you get a JIT + MSIL instead.
I agree. .NET Native was always for the full framework only; I think CoreRT was supposed to be the future there but it seems to be put by the wayside now as with .NET Core 3 the official recommendation for AOT was Mono with no mention of CoreRT in the roadmap. I would love to hear from Miguel what the plans are there.
Ironically this comes at the same time that C# the language has become much more usable without GC, or with minimal GC, thanks to the work that went into implementing Span. But I think that was more a matter of necessity to support advanced async features for web usage (although I found it also made P/Invoke a joy and eliminated virtually all my need for marshaling in a few codebases... and would have eliminated all the performance issues that led the OS team to abandon C#).

It does seem that the ASP/Blazor team is driving the show and calling the shots after the UWP failure in terms of adoption, and I’m not seeing much that would indicate otherwise, even with Project Reunion. I’ve been testing WinUI 3, MSIX, and WebView2 and have been disappointed at the lack of a story for putting all the parts together.

It seems like side-loading packages with sparse package projects is intended to replace “native” UWP packages (“regular” AppX packages require .NET Native unless side-loaded, and I can’t get apps pulling in WinUI/STJ/Buffers/etc. code to compile to .NET Native without an undeclared dependency on System.Private.CoreLib and without serious hacks to enable RTTI, which makes me think they’re not meant to be used that way any longer). But as always, MS isn’t very forthcoming about the future of UWP components more than a single step at a time, with all bets clearly hedged.
In C# 9 there will be Source Generators, an easy way to generate code in a build task before compilation. That will remove the need for Reflection in many areas. Work is also underway to identify and remove all "linker" dangerous code. Once the SDK is linker safe and has a lot less reflection, AOT becomes a lot easier.
Mono is now part of the dotnet family so I suspect they are going to use that for AOT in C# forwards. Starting with Blazor.
The low-level stuff seems to have come from three fronts: high-performance web servers (trying to top the TechEmpower results), improving overall performance for Unity/MonoGame/Xenko, and, given their position regarding safer languages, reducing the need to jump into C++/CLI for performance.
Project Reunion seems to be the start of long roadmap to bring UWP tech into Win32, as if Windows 8 had introduced UWP as Win32 evolution without the additional application model.
So, since developers didn't migrate en masse to UWP, bring UWP APIs (COM v2) to all Windows developers, and 10 years from now pretend that Windows 8.x (UAP)/W10 UWP never happened.
I used to believe in UWP despite all its issues, still think C++/CX was the closest that Microsoft ever came to having their own C++ Builder (C++/WinRT tooling is still too clunky), and .NET Native was what .NET should have been since day one.
Sometimes channeling a bit of anger as an engineer can be a pretty good motivator to do things The Right Way(TM). That is to say, I have some sympathy with engineers being angry at technology - obviously it's not ideal as a communication tool, like here...
I love this post. My first gig was slinging Delphi for a telecoms company, my second was C#, and I've been involved with TypeScript since it was announced. I'm a complete fan of Anders Hejlsberg's work, to the extent that I gave genuine consideration to naming one of my sons "Anders". My wife was not onboard.
Right! I remember now buying the book just to read the chapter about Anders. Was not disappointed. (for the record other "masterminds" were also interesting).
> Some people seem to think that Delphi ended with version 7 back in the 90s
For me it is the case.
Back then, Delphi was really the greatest, smartest and easiest RAD IDE.
I even think that today's dev tools are in a bad state and nothing matches what we once had back then!
Especially for creating 'responsive' GUI apps easily.
Then, as shitty companies always do, they completely changed the tool and the language with the new, bad, .NET-like one. Despite it being worse, they claimed it was better than what we had before and forced it on users.
But it was no longer the Object Pascal that we liked!
We are all slaves to the whims of companies. Sure, I could've continued using Delphi beyond the late '90s, but that would've meant I wouldn't have been able to hold a job as a developer in the '00s when .NET became all the rage.
Sometimes you just have to make do with what your customers want. And besides, C# is IMHO the best computer language ever invented.
As a Delphi programmer who works next to a C# programmer, we agree that neither of us will ever use the other's shit language; we also agree that all the other languages are worse :-)
Delphi is one of those languages which was a good product from a failing company - Borland. Just as Modula 3 was a good product from a failing company - DEC.
I remember having read the story about Borland as follows: upper management decided software products were a thing of the past and the company had to switch to services. They did that and were out of business in 6 months.
Borland is still in business, although with another name, Embarcadero.
What happened was that they decided indies were no longer their target demographic, and that they wanted to go enterprise and focus on product and process lifecycle management (PPLM) tooling instead.
So Delphi and C++ Builder got priced accordingly and they renamed themselves to Inprise.
After 20 years they are trying to get back the indies with community editions, but the damage is already done, and 100% of their customers are enterprise shops.
So unless you are working for a Fortune 500, you will hardly come in touch with Delphi / C++ Builder nowadays.
It was a big mistake not embracing AOT from day one. C++ wouldn't have kept its king position in the MS ecosystem if the original .NET had been like .NET Native from the beginning and had kept Delphi-like features for low-level coding (some of which have been added since C# 7.x).
JIT support could still be an additional option as well, just like on languages like Eiffel.
Instead we got NGEN (with basic optimizations), .NET Native (out of Midori, but which seems on the death march with Project Reunion), MDIL/Bartok (out of Singularity, used only in Windows 8.x), and all remaining efforts are from third parties (Xamarin pre-acquisition), Unity, CosmOS.
And no one really knows if CoreRT will ever be integrated into the main product.
I've been doing C# for over 10 years, and so far the AOT trend just gets in my way. I would prefer .NET Core on all platforms to just function similarly to .NET Framework on Windows: shared runtime(s) with major releases for breaking changes, and point releases for in place improvements.
Case in point: when Windows RT jailbreak came out, my .NET Framework AnyCPU apps just worked there. Now when I package, I very often have to list the target architectures in advance.
When I hear Java announcing JIT in Java and catching up with C# on syntax sugar, I feel that C# + .NET might start to lag behind in innovation.
.NET has had lot of innovation, AOT was there since the beginning via NGEN, although there was never a big investment in its optimizing capabilities.
Then there was Axum, Cω, the Phoenix Compiler (an LLVM-like in .NET), Singularity, Midori, Roslyn, MDIL, .NET Native.
GraalVM goes back to MaximeVM and JikesRVM, so JIT in Java is also quite old.
What all these projects need is the money and political willingness to keep driving them forward, and here is probably the main issue with some .NET research projects: since the beginning, the Windows development teams (which kind of own the C and C++ story) haven't been that willing to have too much .NET on their turf.
I have a colleague who still writes desktop applications in Delphi. I sometimes work with him and observe how he writes Delphi, which is quite impressive, and the performance is remarkable.
However, it is really bad at some points.
First, it is not friendly to the web; we had some special web requirements and decided to spawn a Node process to handle them.
Second, the tooling is so much better in Visual Studio, and the compiler is so much smarter, with sophisticated syntactic analysis.
Last but not least, it really lacks 3rd-party libraries, so people always need to implement things themselves.
C# may not be the best option for every application, but it is general enough to support almost every type of application.
Unfortunately, C# and the rest of the .NET ecosystem (and, for that matter, Java and the rest of the JVM) are not fully 64-bit capable to this day, and will probably not be fully 64-bit capable for the next decade, if ever.
What do I mean by this? .NET and JVM native arrays use 32-bit indexes, and most standard container classes also use 32-bit indexes, so you cannot have more than 2^31 elements.
Who needs that many elements in the array?
No one who develops in these environments... because they can't have them, so they move to something that can.
I think you can, but it does involve a configuration switch [0]. The index accessor for arrays has always been allowed to be of type long, the runtime was the one causing the trouble. As for List<T>, it's still problematic since all the methods still accept an int.
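If I remember right, the configuration switch being referred to is the `gcAllowVeryLargeObjects` runtime setting in app.config on the full .NET Framework (4.5+); worth double-checking against the docs for your runtime version:

```xml
<configuration>
  <runtime>
    <!-- Allow objects larger than 2 GB on 64-bit platforms.
         Array element counts are still capped per dimension. -->
    <gcAllowVeryLargeObjects enabled="true" />
  </runtime>
</configuration>
```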
You can have a 2GB array of 2GB arrays if you’d like. But having a single contiguous huge block is only one scenario for 64-bit, not the only one. I doubt it’s the most common one either.
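The array-of-arrays layout is easy to sketch: split the logical index into a chunk part and an offset part. JavaScript is used here purely for illustration (its numbers are doubles, so indexes past 2^31 are fine); the .NET jagged-array version is the same idea with a long index split across two int-indexed dimensions. The class and chunk size are invented for the example:

```javascript
const CHUNK_SIZE = 65536; // elements per chunk (small, for demo purposes)

// A logical array bigger than any single allocation, backed by chunks.
class ChunkedArray {
  constructor() { this.chunks = new Map(); }
  set(i, value) {
    const c = Math.floor(i / CHUNK_SIZE); // chunk number
    let chunk = this.chunks.get(c);
    if (!chunk) { chunk = new Float64Array(CHUNK_SIZE); this.chunks.set(c, chunk); }
    chunk[i % CHUNK_SIZE] = value;        // offset within the chunk
  }
  get(i) {
    const chunk = this.chunks.get(Math.floor(i / CHUNK_SIZE));
    return chunk ? chunk[i % CHUNK_SIZE] : 0;
  }
}
```

Note the use of Math.floor rather than bit shifts: JS bitwise operators truncate to 32 bits, which would reintroduce the very limit being worked around.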
I am not sure what you mean by "marginally". Delphi and FreePascal/Lazarus have no trouble producing 64-bit executables. I have an ongoing commercial product in Delphi and it is 64-bit only.
The one thing that bugged me about Delphi apps on Windows, was that they used their own custom buttons on the forms (with little graphics of crosses and ticks etc.). I am not a Delphi programmer, but I always wondered why this was.
Delphi made it easy as a developer to add those icons, and they had a nice standard set named for most actions you would want a user to do in a line-of-business application UI.
Also the hello world apps for Delphi had these icons everywhere so it was "just the way" you built applications using it.
This blog is not an article but a rant full of half-truths and outright lies. The author doesn't realize that most C# developers have no knowledge of the history of both Pascal and C (I'm old enough to know) because they were born decades after these languages came into being.
The author also misses the point that C and C++ are systems programming languages (for developing operating systems, device drivers and low-level stuff such as compilers) and Pascal is an application programming language. C and C++ were pressed into service for application development because many nerds thought it cool to have the fastest benchmark result and ignored the fact that these languages are unsafe to use on a day-to-day basis. That's why C# and Java were invented.
Pascal was originally an application programming language, not a systems programming language.
These days you can probably mix assembly into your Pascal code or access pointers directly via custom language extensions, but these weren't part of the original Pascal specification.
The only takeaway for me is that most developers these days don't understand the difference between systems programming languages and general purpose application languages either.
We need an AOT-compiled general-purpose application language like C# with managed memory access. If people want to fool around with low-level stuff, let them use Rust, assembler, or C/C++.
Original Pascal was designed for bare-metal systems programming on a CDC 6000 series machine. Don't get persnickety just because other people aren't abiding by your ahistorical definitions.
Modern C++ is actually pretty safe, and Pascal is reasonably safe as well if you know what you're doing. All this safety talk is over-hyped anyway; just learn how to program properly. I don't really remember the last time I had a problem with "safety" in either Delphi or C++. My commercial apps (C++, Delphi, and C for firmware) run for years without a single crash report.
C++ is safe if you know how to use it properly, but this does incur overhead, and many people aren't willing to incur any overhead because they want the fastest raw speed available. So they usually mess around with C-like constructs in their C++ programs.
Pascal, OTOH, is 100% safe no matter what you do, because you can't access pointers directly, do pointer math, or make out-of-bounds array accesses. The pointers available in Pascal are "managed" pointers, which prevent you from touching the underlying machine hardware.
This all incurs a little overhead, but you won't find any insecure Pascal programs due to memory mismanagement.
>Pascal OTOH is 100% safe no matter what you do because you can't access pointers directly or do pointer math or out-of-bounds array access.
Maybe in your old classic pascal. Not true for modern Delphi/FreePascal incarnations of it. You can do pretty much anything including pointer math, including safe version of it.
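A quick sketch of what this looks like in FreePascal/Delphi syntax (the `{$POINTERMATH ON}` directive enables arithmetic and indexing on typed pointers; `Inc`/`Dec` on pointers predate it):

```pascal
{$mode delphi}
{$pointermath on}   // enable arithmetic and indexing on typed pointers
program PtrMathDemo;
var
  a: array[0..3] of Integer;
  p: PInteger;
begin
  a[0] := 10; a[1] := 20; a[2] := 30; a[3] := 40;
  p := @a[0];
  Inc(p);             // advances by SizeOf(Integer); p now points at a[1]
  WriteLn(p^);        // 20
  WriteLn((p + 2)^);  // 40: pointer + offset, scaled by element size
  WriteLn(p[1]);      // 30: array-style indexing on a typed pointer
end.
```

Nothing stops you from stepping `p` past the end of `a`, which is exactly the "unsafe when you ask for it" flexibility being discussed.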
I started out writing programs for the Mac in Pascal, and believe you me, it was possible to crash the machine.
I think people get hung up on the theoretical capabilities of "pure" Pascal but the forms that people actually used had all sorts of extensions. Back in the day, whether you were using Pascal or C or whatever, it probably wasn't very standards-compliant anyway.
I could not reply to your message below, so I'm moving it here:
>You're the one that has three times now commented on one of my posts trying to prove to yourself that everything you're doing is fine and there's nothing you could be doing better from a security perspective.
I am just expressing my opinion. You know, taking a break from mundane work to talk about technical things. I am not trying, nor do I need, to prove how I do my development/design and what tools I use, and frankly I do not give a flying hoot what others might think about it. I run my own business, after all.
As to your particular point about a language being unsafe because it allows typecasting a pointer to an integer: allowing unsafe features, in my view, does not make a language unsafe as long as it provides a safe way of doing things as well. That's called flexibility in my book.
Security-wise: could I have done better? Sure. Anything could be done better, but you've probably heard of the law of diminishing returns. Does the fact that I use a language that has unsafe features automatically make my software unsafe if I do not use said features? A big fat NO. Even when I do use such a feature (rarely, but sometimes I do for the sake of efficiency), it does not really change the main point.
As a senior engineer at a security-conscious firm, who used to freelance writing exploits for code written by developers with your attitude, I want to make sure we don't use your software.
I think you should go get help if you do not understand why people on newsgroups often prefer to stay anonymous.
And if you are, as you claim, a major security guy, you have probably already checked all the software you allow for security breaches. So either you found the problem and got rid of said software, or you're just full of it.
And if your business recommendations to your company are based on "attitude" picked up from internet chat, or shall I say your personal likes, why don't you submit this conversation to your employer?
And finally the last thing I feel like doing is promoting my business/services to people like yourself.
> And if you are, as you claim, a major security guy, you have probably already checked all the software you allow for security breaches. So either you found the problem and got rid of said software, or you're just full of it.
Do you know what a 0-day is?
> And if your business recommendations to your company are based on "attitude" picked up from internet chat, or shall I say your personal likes
My recommendations to my employer are based on many things, including overall security posture. The world's best engineers ship memory-unsafety bugs in C and C++, including modern C++. Your statements betray an overall lack of respect for the problem space, which in my experience leads to an increase in issues, even above the general steady state.
> why don't you submit this conversation to your employer?
I plan to as soon as there is actionable information.
> And finally the last thing I feel like doing is promoting my business/services to people like yourself.
You're the one who keeps deciding to comment on my posts, trying to convince yourself that everything you're doing is fine and you can't do better. "People like [my]self" are just telling you that you can do better, and only after you go out of your way to start shit.
You're the one that has three times now commented on one of my posts trying to prove to yourself that everything you're doing is fine and there's nothing you could be doing better from a security perspective.
Security researchers can also confirm that any other language / software / system has holes.
I am not Microsoft/Google/etc. and do not give a flying hoot about their language preferences. I run my own business and my clients are happy. As already said, not a single complaint.
My servers use proprietary protocols, and one of the first things my protocol handlers do is check the validity of the input. You will not find a generic "read until EOL/whatever" in my code. The low-level logic knows exactly what to expect at each point and how to validate it. All over an encrypted connection.
Good luck hiring a dedicated expert or team willing to break it. I am not Google, and the cost of trying to break my software where it matters far exceeds any potential benefit.
So sure, I am not a prime choice as a security target, which makes it even better.
While unsafe memory access can have consequences for stability, it has much broader implications than that. A system can keep running reliably while being heavily exploited.
I am really tired of this FUD. Any system can be like this. You can have a perfectly memory-safe language and still be compromised. If you have enough dosh, you hire experts and have them try to poke holes in your systems, or analyze whether they have been compromised already. Until then, all this security talk is not worth much.
But those would be logic errors, not memory management errors. If someone is stupid enough to pass user input directly to a shell or database then yes, they will get compromised.
But those comprise only about 30% of all security errors, as research shows. Most are due to memory management issues.
I don't see how it's FUD. Yes, any system can have security problems, but some are more likely to than others. "Just try harder" is no solution. With equal effort, better (suitable) tools still give better results.
Let's just agree to disagree. I think your definition of "better" is vastly different from mine. I also think the mess most people use as a current web development stack, written in "safe languages", poses way more danger security-wise than my proprietary client/servers ever will.