I have fond memories of Aspect-Oriented Programming, which I feel should've been in this list.
For those not in the know, AOP is basically the Intercal COME FROM statement (itself a pun on goto, ie it's a jump but in the opposite direction) (think that through a bit). Yes, that's a totally nuts idea, but it turns out that with a bit of effort you can formulate it in such a way that you can do conference talks about it and people nod and think "o wow nice I gotta tell the team".
More concretely, AOP is usually a bit of compiler infrastructure that lets you inject code into other people's functions/methods etc and make arbitrary changes. It's nice for eg inserting logging code at the beginning of each method, with the method name and argument. You'd be able to eg define a query somewhere and say "for all methods with annotation X, inject code Y at the beginning of the method".
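Roughly, in Spring/AspectJ terms, that "query" looks something like this (a minimal sketch; the @Audited annotation and package are made up for the example):

    import java.util.Arrays;

    import org.aspectj.lang.JoinPoint;
    import org.aspectj.lang.annotation.Aspect;
    import org.aspectj.lang.annotation.Before;

    @Aspect
    public class MethodEntryLogger {

        // "For all methods with annotation X, inject code Y at the beginning":
        // X here is a hypothetical com.example.Audited annotation, Y is a log line.
        // (How the aspect gets woven in -- AspectJ or Spring proxies -- is omitted.)
        @Before("@annotation(com.example.Audited)")
        public void logEntry(JoinPoint jp) {
            System.out.println("entering " + jp.getSignature()
                    + " args=" + Arrays.toString(jp.getArgs()));
        }
    }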
But naturally, teams that adopted it always had this one team member who thought they knew better and started using it for injecting all kinds of behavior-altering code into methods defined far, far away. When reading such a method's source, there was absolutely no way to tell that code was going to get injected. If the AOP crowd was well behaved there'd be an annotation/attribute/etc that would trigger the behavior, but that's not a requirement. I've seen people inject code that sanitized data, which was very surprising when the function didn't actually take user data: suddenly, while debugging, a string's content had changed and there was no indication whatsoever of what had caused it.
This is very similar to my biggest frustration with PubSub, or heavy event driven systems in general.
The concept is so appealing, “Hey I’ll just fire this message off and whoever needs to know about it will do what they need”.
It’s great to have that ability to focus on just the component or module you’re working on, but the lack of visibility on exactly what is about to happen, and what has happened, has caused many late nights of debugging and frustration.
It’s just as you described: “there was absolutely no way to tell that code was going to get injected”.
I’m guilty of contributing to this problem as well. Once you become familiar with the event system you’re working in, windows, DOM, whatever, you can make some pretty good assumptions about how other components are implemented. When you’re faced with a bug in code you don’t have access to, or a behaviour you don’t want, often the solution you’re left with is asking “okay, so what combination of events and timing do I need to force this component into the state I need?”
For example I remember doing this with some third party grid controls. We really really wanted the TAB key to insert a new row at the end. We had to orchestrate exactly the right mix of events and method calls in the right order, with a few BeginInvokes to get the job done.
I wish I had a solution to all of this and other frustrations. It’s all trade offs and a balancing act.
This is why robotic process automation (RPA) is such big business: legacy systems without APIs that need engineered state and race-condition detection, all through a black box.
> When reading such a method's source, there was absolutely no way to tell that code was going to get injected.
This clearly makes it pretty awkward to handle. Yet considering that the compiler manages to figure out where code needs to be injected, one can ask: why does the programmer not get to see this information?
This is not a fundamentally unsolvable problem. It is a tools problem. It's easy to imagine a programming environment where you can tell immediately which code gets injected where. Even in a dynamic system you should be able to see, on examining a function, what its current contents including injected code are.
Some programming language features require proper tools support to be useful. This requires languages that want to introduce such features to be opinionated on the matter of what tools you can use.
In the end, it is deterministic, as you say. But it's also quite a bit more involved than you imagine. It's a big pile of interceptors (and their proper ordering), bootstrap classes, autoconfiguration, component scans, layered property sources and some more. Lots of it is not immediately visible and you have to start reading Spring source code. And it gets worse if the project is set up in a way that makes it hard to deal with. For example, not organizing packages by feature. Relying on component scans at the default package is also not exactly fun.
When developing a Spring project, you indeed start to quickly appreciate the introspection feature that advanced IDEs like IntelliJ IDEA provide. Inspecting the application container at runtime is also possible, but it's a clear sign that your app got waay too complicated and has to be refactored.
I'm sure there are existing systems for which this is quite involved. These are design choices, though. If the compiler can figure it all out and find everything, then so can an IDE, and make everything visible where it needs to be visible.
The issue is that it can become very complicated very quickly, even when the IDE is able to figure it all out and present it in a useful way. Even with the best tooling, nonlocal control flow can't help but increase the WTF per minute[0].
If it does in fact become "very complicated" then maybe you are trying to achieve something that is very complicated.
Good tooling takes the surprise out of nonlocal control flow, and can project it into local control flow on demand.
To take it back to skrebbel's complaint, "methods defined far, far away", "absolutely no way to tell", "very surprising", "no indication whatsoever" -- all things that good tooling fixes because it can show you what gets injected where and why, when you need it (and show you a minimally intrusive reminder when you don't).
Game modding in general tends to strongly resemble AOP. How you do it depends on what the game itself provides you, but it tends to boil down to hooking various parts of the game which may or may not have been intended as extension points and either replacing the default code or adding your own logic.
Game modding also demonstrates the big strengths and weaknesses of AOP. New versions of games tend to break some or all mods until the mods are updated, even if the game developer is trying to support the modding scene and avoid breaking things. When literally every function in your code base is a potential extension point, it becomes impossible to change anything without breaking something else, and you can get insane, impossible-to-debug errors.
Aspect-oriented programming is alive and well and living in the Spring Framework.
While it can be misused (by making unclear what aspects are being applied, as you mention) it can be very useful and avoid a lot of boilerplate code and repeated logic.
Wow, I forgot all about AOP, even though I was, briefly, very excited about it.
On a project where "logging" was budgeted as a short, separate task that could be delegated to a junior engineer at the end, I used Spring's AOP to inject performance and usage logs across entire chunks of the program. It worked great and took way less time than scheduled.
I was convinced there'd be other uses for AOP, and kept an eye out for other opportunities for months. Given that I completely forgot this was a thing until now, I think you can guess how often that happened.
I've used AOP a few times, it's never caused problems.
Other than logging and profiling I've used it in an ejb3 app calling old plsql business logic - we had problems with some plsql code handling exceptions and transactions in a non-standard way (basically - committing or rolling back in pl/sql code) which messed up our ejb3 container-managed transactions. We had a template for pl/sql code that should be followed - every function should start with a savepoint, never commit, and only roll back to that savepoint in case of error. But there was a lot of plsql code and some of it didn't follow that rule.
So I added an aspect in java that wrapped around any plsql call, created a savepoint before it, and checked after the call if it was still there - if not it would throw a special exception. That allowed us to quickly find all the places that weren't handling transactions properly in PL/SQL and fix them.
We also used AOP for integration testing with our java business logic and plsql - instead of preparing and using up data during tests we simply wrapped transactions around testing methods and rolled back everything afterwards.
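Spring's test support has that trick built in these days; a minimal sketch assuming Spring Boot and JUnit 5 rather than the aspect-based setup described above (class and method names invented):

    import org.junit.jupiter.api.Test;
    import org.springframework.boot.test.context.SpringBootTest;
    import org.springframework.transaction.annotation.Transactional;

    @SpringBootTest
    @Transactional // Spring's test framework rolls the transaction back after each test
    class LegacyPlsqlIT {

        @Test
        void exercisesPlsqlWithoutLeavingRows() {
            // call the PL/SQL-backed service here; whatever it inserts or updates
            // is rolled back automatically when the test method returns
        }
    }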
I don't think AOP is bad, even if it's truly COME FROM with a different branding.
> used Spring's AOP to inject performance and usage logs across entire chunks of the program. It worked great and took way less time than scheduled.
There was the guy who used AOP to inject logging and perf, and set it to "instrument every method".
_Every method_.
It was Ok in test, but when the production load rose to the daily peak, the volume of logging data was in itself sufficient to saturate a network and requests got dropped. Including health checks, which caused machines to be taken out of the load balancer, increasing load on the survivors. Autoscaling didn't help, new machines came up with the same issue.
The failure cascade brought down the entire service.
Luckily for my case, some senior engineer was convinced that it was a good idea to make an "abstraction layer" consisting of pass-throughs for every API and external service, and group all the stuff in that layer in a single package. So I did instrument every method, but only in that BS package.
Emacs has an AOP facility as well. It's called advising there. But the manual strongly recommends using it only as a method of last resort, as performance will eventually degrade. Also, debugging in the presence of these features is not exactly fun.
I absolutely abused this functionality when I first started out as a junior and now I cringe thinking about the poor developers that are stuck with that codebase.
Haha you reminded me of AOP nightmares. It's great used for instrumentation. A big part of why Java has great monitoring tools is runtime code modification with AOP.
But as you said, devs find all kinds of mind melting creative ways of using it.
In the early days of computing, programs did not have a call stack and code was not usually re-entrant. No recursion. So, finding a place to store state was tough. This is still seen in some embedded code.
It's all Von Neumann's fault. The program is stored in main memory, and he makes a big point that this means the program can modify itself. So early programming languages, operating systems, and CPUs tended to have support for that. Von Neumann didn't think of index registers. Array access involved code modification.
It took a while for index registers to become a standard computer feature. The first one was in the Manchester Mark I, but they were, for too long, a "high end" feature. The IBM 650, IBM 1401, and the Intel 8080 lacked them.[1] Building in an extra adder cost money.
The PDP-8 has a fun method of returning from subroutines: every subroutine has a spare word allocated at the beginning of it, and it gets the return address written to it when the subroutine is called. Returning from the subroutine is therefore a matter of jumping to the written address. This lets you call a chain of subroutines without a stack, but of course recursion is impossible.
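A toy model of that convention, just to make the mechanism concrete (plain Java standing in for PDP-8 memory and jumps, not real PDP-8 code):

    public class Pdp8CallSketch {
        // Toy "memory"; memory[SUB] is the spare word at the start of the subroutine.
        static final int SUB = 100;
        static final int[] memory = new int[4096];

        // JMS-style call: store the return address in the subroutine's first word,
        // then continue executing at SUB + 1.
        static int jms(int returnAddress) {
            memory[SUB] = returnAddress;
            return SUB + 1;
        }

        // Return: an indirect jump through that stored word.
        static int ret() {
            return memory[SUB];
        }

        public static void main(String[] args) {
            int pc = jms(200); // caller says "come back to address 200 when done"
            System.out.println("body starts at " + pc + ", will return to " + ret());

            // A nested call to the same subroutine overwrites the stored word,
            // so the first return address is lost -- hence no recursion.
            jms(300);
            System.out.println("stored return address is now " + ret());
        }
    }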
There was an interesting controversy surrounding the Algol-60 report. It seems many members of the committee wanted to be able to create implementations of the language that used only static allocations for activation records, but in the "Amsterdam plot," they sneaked into the report the possibility for recursive procedure definitions which require dynamically allocated activation records (not all machines had efficient stacks!). I don't remember where I first learned about this, it might have been this https://vanemden.wordpress.com/2014/06/18/how-recursion-got-...
UNIVAC 1100 series machines had a Store Location Jump instruction which did that. The return point was stored into the first word of the subroutine. Fortunately, you didn't have to use that.
Stacks were, for a long time, controversial. What if you ran out of memory? Some rare compilers, such as Modula 1 for the PDP-11, computed the stack size for each task. If you wanted to recurse, you had to give a recursion limit. Async, the early years.
Stacks work well now because we now have lots of memory and address space. Early machines lacked that luxury.
I used a compiler 30 years ago for an embedded target that didn't support recursion.
There was a tradeoff. By nixing recursion the compiler could perform variable folding optimizations that would be impossible with recursion. That's because without recursion the call graph is acyclic. The result is variables are held in registers and a tiny scratchpad instead of being pushed on a stack.
> There was a tradeoff. By nixing recursion the compiler could perform variable folding optimizations that would be impossible with recursion.
I don't think this is a real tradeoff. Your optimization clearly depends on whole-program compilation anyway, so programs that do not involve non-tail recursion can still be optimized as such.
To add to the history lesson, Herr Doktor Professor Dijkstra's infamous "Go To Statement Considered Harmful" should perhaps have been titled "Call Stack Considered Very Useful".
It's worth reading with this understanding, and of course, it is a must to follow through with Dr. Knuth's equally seminal "Structured Programming with go to Statements", in which Dijkstra is briefly quoted to qualify and clarify his position on the matter.
A bit pedantic, but to be clear `set -e` is the call to change the behavior from "on error resume next" to "on error exit". The `-u -o pipefail` are both other common error settings, but unrelated to that behavior.
Pipefail is actually relevant here; because the difference between `-e` and `-o pipefail` is whether “on error resume next” is supported for `cmd;cmd` versus `cmd|cmd`.
I recall reading that the primary purpose of the COBOL ALTER statement was actually for punch cards — you could fix a function by printing out the new function + alter statement, add it to the top of the card stack, and feed it through again — without having to reprint your program, or find the relevant cards to replace
DoEvents is a great tool if your project is small and you use it consciously. Just make sure it won't invoke any I/O or heavy logic. This is easy when your app is small. And DoEvents lets you KISS and keep things short. Not a single time in my life has it caused a problem, because I would always switch to BackgroundWorkers as soon as the app grew more complex.
On Error Resume Next wasn't anything really terrible either. You just enable it before a risky call so your program won't crash, do the call, disable it and validate the result of the call before using it. This way you just don't have to clutter your code with endless try-catch-finally clauses.
> You just enable it before a risky call so your program won't crash, do the call, disable it and validate the result of the call before using it. This way you just don't have to clutter your code with endless try-catch-finally clauses.
How is that in any way better than try/catch? It would be just as much code, but without the syntax ensuring you disable it.
I'm guessing you haven't seen a try/catch block in VB6 / VBA. It's not like a C code block; it's just another goto statement. What GP describes is much better because it's localized: control flow follows line after line. Try/catch blocks in VB6 scramble your control flow. And you still have lots of things you need to remember to do with that approach.
Perhaps you are right. But at least it's less nesting, and I'm not sure special syntax to ensure you disable it is necessary - finding all occurrences of "On Error Resume Next" and the matching "On Error GoTo 0" statements is trivial. Each way feels tidier than the other in its own way.
I always thought the classic synchronous message loop in Win32 was actually rather elegant if you took the time to understand the mechanics surrounding it. DoEvents is a bit crude but it serves a purpose in the context of VB6. Can't disagree about the error handling though, that's the absolute worst part of the language.
We all stand on the shoulders of those who came before us.
Yes, they are bad ideas. But back when those old COBOL people were on the job, it is what they had in their tool kit.
It is easy to make light of decades old ideas from a modern lamp. But wait, oh contemporary programmer, your day awaits to be mocked by someone who thinks they have the tiger by the tail.
That is not much used directly by human C programmers. It's more for using C as a target language for another compiler, or in some cases as a fast dispatch in an interpreter. GNU Forth (gforth) uses it to dispatch between Forth words, for example.
It's kind of unfair to compare this to ALTER. Most of the time you're not using `goto *something;`, and when you're not, you know exactly where it's going to go. With COBOL's ALTER, it's as if every single goto were a computed goto, with the value of all of them being stored in global variables.
This is definitely a young coder because those features made some sense in the era they were created. In some cases they were life savers.
On error resume next:
This can be scoped. So in reality it was used in small subroutines where errors needed to be discarded. I’m not suggesting it’s a particularly great idiom, but this comes from an era before try / catch blocks.
DoEvents:
This was vital back when your language runtime was event driven but also single threaded. It was a way of allowing the application not to lock up (and thus stop the user force-closing it) when there was a computationally heavy workload. These days you’d stick that workload in a new thread or use async/await, but you’d have all the same problems of ensuring that you’d disabled UI elements first, as the article describes with DoEvents. And actually DoEvents is rather more clever, because you’re telling a single threaded application that was typically running on a single core/CPU system exactly where the safe points are to prioritise other processes.
These days we have green threads, async/await and other concepts for lazy multitasking. But back when DoEvents was created there was no such thing, and using real OS threads was very difficult (in fact it was officially unsupported in the languages that had DoEvents, though there were some unofficial hacks), so it was a real life saver in some scenarios.
> On error resume next: This can be scoped. So in reality it was used in small subroutines where errors needed to be discarded.
Your “in reality” sounds a lot nicer than mine. In my experience “On Error Resume Next” was slapped right at the top of every single lovecraftian horror that was a VB6 file, like a prayer to gods who had clearly abandoned humanity.
Engineering inventory management? On error resume next. Departmental budget management? On error resume next. Telecom billing rectification? On error resume next.
Was there some sane way to use it? Sure, probably there was a team, somewhere, somehow, using this language and making safe, sane, and well-reasoned decisions on how to best take advantage of its features while avoiding its sharpest edges.
But the first thing I learned in this industry is that if you try to teach the average VB programmer to fish, they’ll have burnt down the village and strangled themselves in the line by lunchtime.
> It was a way of allowing the application not to lock up
Not just the application. With cooperative multitasking (eg Windows 3.1), the whole computer would be unresponsive if you didn't give the event loop a chance to run.
> On Error Resume Next takes an approach that’s the worst idea in many error situations
Does this mean that this feature should not exist? Or that it may be misused?
It seems just a kind of rough Java 'try-finally'. If you know that an access to disk, e.g., may fail, 'resume on next' just guarantees that the rest of the code is executed.
This would not be so wise in a language like C++ where you may have corrupted the stack. But for BASIC it had its uses.
Yeah, the legions of VB programmers who just wrote On Error Resume Next at the beginning of the file and never looked back weren’t keen on doing that, either.
Having done some Visual Basic with that a long time ago, the problem with it is that it's not fine-grained and there's no easy way to know if something failed. For example, if you have a large procedure with a division by zero somewhere in it, the code will just continue and you will never know, nor have a way to detect it.
But often, if an error happens, the rest of the code will also be errors (or just be incorrect), so continuing to the next statement just masks the error, it doesn't do anything to handle it.
I don't remember what happens in case of a file access failure: is there a way to detect that there was an error? (Other comment below suggests yes, but it sounds rather brittle to have to check the Err object after every statement) But if so, why not just check for errors without 'on error resume next'.
At least with try-finally, you get to choose the scope of it, so you isolate the part where an error might occur that you wish to resume after.
The only sane use of On Error Resume Next is surrounding one or two specific lines of code where you don't want to let the ordinary exception mechanism be triggered. For larger blocks of code it's completely insane.
Its problem is not that it exists, but that it’s a separate statement. In most languages, if you write try, you have to write catch or finally for your program to even start running.
If you write on error resume next, you should revert that soon, but the language doesn’t prevent you from forgetting to do that. Neither, AFAIK, do typical editors. A good editor, IMO, would indent code inside a “on error resume next”, but I haven’t seen such editors.
It does help that its scope ends at function/procedure return (I think), but of course, that means seemingly benign refactoring can change program behavior.
It isn’t too bad when you write new code, but once you start maintaining a larger program, it can bite you.
Ehhhh, I mean it's conceivable that there were some reasonable use cases for it, but in reality it was just a massive foot gun. I remember trying to debug some old VB6 code in the late 90s that used On Error Resume Next all over the place, and it would get into some unbelievably funky states, it was a miracle that it worked even some of the time. The truly scary thing about it was that even some system level errors would just get swallowed up and you had no idea that Really Bad Things(TM) were happening under the hood.
The equivalent to try/catch in VB was On Error Goto <ErrorHandlerLabel> which when used properly was more or less fine.
A much nicer version of this is the Common Lisp condition system. Roughly the way it works is that in addition to being able to raise exceptions, a procedure can ask a condition handler how to proceed. Basically, the condition handlers are dynamically scoped like a try/catch, and the conditions can be finer-grained than a catch-all Error (though to be somewhat fair to Visual Basic, you could also "On Error GoTo handler_label").
Here's how it might look in Python:
    try:
        f = open("foo.txt")
        s = f.read()
    condition FileNotExists as c:
        handle UseStream(io.StringIO(""))
where when open fails to find a file, Python will look for the first FileNotExists condition handler, pass it a FileNotExists condition object, and then the handler can choose to handle the condition.
On the open side, it could be something like this:
    def open(filename):
        restartable:
            ... low-level stuff to open the file ...
            if failed:
                signal FileNotExists(filename)
            ...
        restart UseStream as r:
            return r.stream
It makes sense for a batch of simple commands that could be run in parallel.
A very simple startup or autoexec script for example, where it might be useful for the rest of the tasks to be done even if one of them fails. It assumes no dependencies between the tasks.
On Error Resume Next is similar to NaN. In some situations the code can be much simpler if, after a lengthy calculation, one just tests for an error result with no error checks in between. In modern languages that evolved into various forms of Maybe/Optional types with various syntax sugar like ?. or Haskell do notation.
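A minimal sketch of that Maybe/Optional style in Java (using Optional rather than ?. sugar; the config lookup is invented): the absent value flows through the chain like NaN through arithmetic, and there is a single check at the end.

    import java.util.Map;
    import java.util.Optional;

    public class NanStyleErrors {
        public static void main(String[] args) {
            Map<String, String> config = Map.of("host", "example.org"); // no "port" entry

            // Each step may "fail" (produce nothing); the absence propagates through
            // the chain like NaN through arithmetic, with no checks in between.
            Optional<Integer> port = Optional.ofNullable(config.get("port"))
                    .map(String::trim)
                    .map(Integer::valueOf);

            System.out.println(port.orElse(8080)); // the single check, at the end
        }
    }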
It's disingenuous to suggest that "on error resume next" evolved into, or even influenced, Maybe/Optional types.
The "on error" construct existed in QuickBasic, but it took a goto line number or label as its operand. "ON ERROR RESUME n" would give you a relative line, allowing you to jump n lines down. Since Visual Basic didn't care about line numbers, "on error" took a goto label instead. Visual Basic 3.0 introduced the "on error resume next" construct in 1993 so that the QuickBasic usage could be imitated.
Meanwhile, Haskell already had Maybe/Option types and monadic notation in 1992.
And the argument against your points usually makes no sense! Sure "performance". But these "features" should at the very least be on-by-default, with an opt-out for people who want to be able to shoot themselves in the foot.
Unisys keeps selling Burroughs as ClearPath MCP, not only because of legacy mainframes; rather, it is a platform originally written in 1961 with system programming languages (ESPOL/NEWP) that take security seriously (the first set of languages with unsafe code blocks and intrinsics instead of Assembly), so there are still customers around that are willing to pay for that extra security.
And then there is the whole experience from ALGOL compilers in production:
"Many years later we asked our customers whether they wished us to provide an option to switch off these checks in the interests of efficiency on production runs. Unanimously, they urged us not to--they already knew how frequently subscript errors occur on production runs where failure to detect them could be disastrous. I note with fear and horror that even in 1980, language designers and users have not learned this lesson. In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law."
-- C.A.R. Hoare in his Turing Award speech in 1981.
Circa 1998 ish I had to take a "Software Engineering" course in college. We had to use Visual Basic for our silly application, and only one of the 6 people in the group had any experience with VB at all.
The project grading rules dictated that each program crash (which typically was some modal dialog with a system error and an OK button) during final demo was an entire letter-grade reduction (!!).
Enter On Error Resume Next. No crashes, just keep going! Software doesn't exactly work properly/to specs? Whatever, that "costs less" points than crashing =)
> DoEvents() is a once-popular but very dangerous kludge for Windows applications that don’t want to deal with multithreading.
> The problem is that you never know exactly what DoEvents() is going to do. Other parts of your application may receive Windows messages (say, if the user clicks somewhere else), and they can start running their code in the space that you create when you call DoEvents(). This sounds bad, but it’s actually a lot of fun (if you like late-night debugging), because the result is just a little bit different on every computer!
Oh, but if instead you introduce threads, you won't have any such problems, right?
Maybe you're supposed to change the state of your program to "calculation running" so that when UI events are processed in the middle of DoEvents, this state is observed.
If the calculation ran in a thread, you'd probably track that; the UI would know that the calculation is running and certain actions would be disabled or handled differently.
I don't really know what would have been a better solution at the time. In the early '70s, the native word size of computers varied in increments that feel unfamiliar today: DEC's line-up already contained 16-bit and 18-bit computers, and vendors were looking at 32-bit and 36-bit.
C's solution was to define int as at least 16 bits long (but assumed to be the native word size), and char as at least 8 bits long (though it might be stored in a 9-bit half-word on an 18-bit machine, or even a full word on another type of machine).
Actually, for a good reason, it depends! The standard specifies how to get useful information. There are also exact-width, minimum-width, fastest minimum-width, greatest-width, ... integer types.
On Error Resume Next was supposed to be used in conjunction with the global Err object that contained the most recent error. You would check Err.Number after each error-prone operation for an error and handle it.
With JS off, both regular and incognito mode didn't work. Turns out Medium now locks you out of articles if you don't enable JS. So much for HTML being a format for hypertext, rather than a delivery vector for arbitrary code execution.
Many 1970/80s 8-bit basics allowed GOTO X rather than GOTO 10000.
In Sinclair Basic, though, replacing the constant with the variable was potentially a significant efficiency win, as the 10000 wasn't just stored as five bytes but rather as five bytes plus six further bytes containing a floating point or integer representation of 10000.
Using GOTO X thus saved ten bytes in this case - less the assignment to X. If there were lots of GOTOs (or more likely GOSUBs) to a particular point in the code this was worth doing. On a machine with less than 10k of free memory (16k Spectrum) this was a worthwhile saving and so was used quite a bit in practice.
> The only possible reason you might use On Error Resume Next is if you have no error handling code at all
The author (and pretty much everyone else who never used Visual Basic) misunderstands how it's used. The next line is expected to check the Err object to see if there was an error. It was never expected to be used for just ignoring errors; it's a way to switch from "exception style" to "error code style" error handling.
It's true that it's a footgun because you can forget to check, but people misunderstand the intended usage.
> The author (and pretty much everyone else who never used Visual Basic) misunderstands how it's used. The next line is expected to check the Err object to see if there was an error. It was never expected to be used for just ignoring errors; it's a way to switch from "exception style" to "error code style" error handling.
I think this comes down to how you think about languages and features. Is a feature good if its intended use is that it be used carefully in limited circumstances and with careful error checking?
Or is it a terrible feature if in practice, the largely low-skill userbase slapped it everywhere and anywhere they could with absolute abandon, because it made the confusing error messages go away?
I think the author (and I) just fall in the latter camp. It doesn’t really matter that you weren’t “supposed” to use on error resume next except in limited scopes, and you were “supposed” to rigorously check the Err global afterwards.
In practice, it was abused non-stop as a crutch by people who largely didn’t know what they were doing, and it made touching a VB6 codebase hell on earth.
I’m not looking at the language in the abstract, but from the experience of having been handed maintainership of a few LOB VB6 apps early in my career. On error resume next’s consequences, how people actually used and abused it, are what matter, not whatever hypothetical nice way to use it existed in the docs and the maybe 1 in 50 VB programmers who approached the language with any knowledge or care.
It was a godawful feature that vanishingly few VB programmers ever used correctly, and it made working with VB projects an absolute nightmare.
Sure, that was a longwinded way of agreeing with me that it's a footgun. I have similar experience maintaining VB6 apps. This is still how errors are handled in C and Go to this day, and it's error-prone in both languages. Exceptions have proven themselves again and again to be less prone to mistakes than manual checking of error codes.
I really don't think the author understands this, though; the quote I provided was pretty unequivocal. The author makes it pretty clear that he doesn't think there's any other way of using the feature. If he does, he made absolutely no mention of it. It's pretty misleading.
Actually, it is still in VB.NET for compatibility reasons. "on error" is fundamentally incompatible with structured error handling, going so far that you can't even have try..catch or even using..end using in a sub or function that also has "on error".
Why is "On Error Resume Next" terrible, but not-nullable types are praised? With nullable types f.doStuff() will crash if f is null. With non-nullable types f?.doStuff() won't crash if f is null, the doStuff call will be silently dropped. Isn't that just like On Error Resume Next?
Dynamic scope can be great, if used wisely. Of course, by default, you don't want dynamic scope for your variables. But to be able, as an exception, to declare dynamically scoped variables is exactly how you want to handle global values. Common Lisp handles this nicely.
Common Lisp doesn't handle this at all in the way you describe. It does not have global lexical variables which you can opt-in to dynamic scope. It has only dynamically scoped globals. (Implementations may provide global lexicals as an extension.)
- Indentation-based syntax isn't a programming feature, it's a lexical/parsing style religion. In my religion, it looks much cleaner than extra punctuation, superfluous keywords, and the visual noise of curly braces. What's worse is special column flags.
C F90 sucks
C F95 still sucks
C F18 probably still sucks
Since people are mentioning current examples, putting message queues everywhere is a terrible modern pattern.
Looks great on the surface. Ride through transient failures. Easier load balancing. Even out load spikes. But underneath, you're adding bufferbloat to your system.
I've worked on systems where every service communicates with event queues. Debugging these things is hell. Problems with eventual consistency everywhere. Duplicate requests chilling in the queues where you can't see or get rid of them. Ordering issues. Bad messages/events clogging up queues. Upgrade nightmares whenever endpoints change.
Adding a message queue between your services instead of doing things synchronously is a great way to sabotage a project.