Sure, a graphical layer would be nice; I would love to be able to draw arrows and other elements in some sort of overlay, the way DrRacket does. And yes, better integration with an actual web browser would be splendid. I would love that.
> And yet, most programmers have an irrational fear of built-in metaprogramming because it "makes hard to understand code". But seemingly nobody is afraid of bad metaprogramming, like code generation. Which I would argue is more difficult to understand.
The difference here is that “bad metaprogramming” happens when the people writing thorny libraries and frameworks clamor loudly enough for metaprogramming that the language implementers shoehorn in a subpar implementation. Those people know how important it is to programmatically generate and modify code, because they had problems that simply couldn’t be solved without it. The rest of the language’s users never see it (very few programmers in most non-Lisp ecosystems are curious about the code of the libraries they use), and so don’t care whether or not it exists.
On the other hand, a language with pervasive metaprogramming will have cases where a task could be done without that feature but is much easier with it. That means every programmer who likes to minimize what they need to learn will, sooner or later, be exposed to code that uses metaprogramming.
Most of these engineers will not have the prior experience to understand that “we need to learn metaprogramming either way, because it’s essential to many important coding tasks and so has to be in the language; having it also appear in code that doesn’t strictly require it adds no extra learning or comprehension burden”. So their reaction is simply “why do I have to learn how to work around this extra thing?! Get rid of it, now!!!”.
That’s not a matter of the language, and I don’t mean that in the “No True Scotsman” way either.
Lisp definitely allows you to write code that is very easy to come back to and understand. I’ve written code like that, and I’ve seen code like that in, e.g., certain sections of the SBCL internals, among many other projects.
However, there’s a reason we don’t write all our code in Python or Assembly. The fundamental semantic restrictions of the one and the bare-bones functionality of the other are both bought at the price of horrible scaling as a technical project’s complexity and scope grow.
If you write a project with complex stages or components in those kinds of languages (or mimic their styles in Lisp, for that matter), you face a choice: either split those components into parts that don’t make sense separately, or write extremely long and convoluted programs to accomplish conceptually simple functions. Either way, the code base ends up significantly more complicated than, e.g., an equivalent Lisp codebase.
Similarly, if you use the Python or Assembly coding paradigms for large projects, you quickly get lost in a sea of individually simple-seeming code segments, unable to grok the connections between them simply because of how many things must be kept track of.
In contrast, for Lisp languages:
A) The default style used by Lisp programmers is highly functional: it leans on higher-order functions, on macros that solve their problem domains without exposing the code doing the solving (objects also fill this general role, by the way, which is apropos given that the Common Lisp Object System is itself defined purely with macros), and on clearly delineating the code and systems that have mutating effects.
While this style is admittedly harder to grok than Python/Assembly in simple cases, its reading complexity barely grows as code scale and complexity increase. Complexity gets naturally encapsulated and automated by functions, macros, and objects, whereas most other languages offer for complexity management either a restricted subset of those tools or simply the developer’s own head. (Algebraic type systems are a very cool exception, but a pervasive object system like CLOS paired with macros lets you build a type system, like Coalton, with no loss of features or ergonomics.) There’s a small macro sketch below, after point B, to make this concrete.
B) Interactive development and ergonomic DSL creation are objective wins for complex or large programs. The former significantly reduces iteration time in both cases. The latter lets you write, once, the code that addresses the complexity and scale of the domain, and then never deal with that complexity again; or at least not with the full complexity, even if you don’t understand the domain well enough to design every piece of irrelevant complexity out of your DSL.
Sure, hot-reloading in a few other languages gives us most of the former, although CL conditions (sketched below), stacked debugging REPL layers, access to live object contents and stack-frame inputs (something only some debuggers address, and only in a restricted way), and some other features still lack equivalents. And other languages generally use objects alone to address most, though not all, instances of the latter, rather than Lisp’s mix of macros, objects, and higher-order functions. (We’re ignoring the text-templating macros of languages like C++, since everyone agrees they’re more trouble than they’re worth.)
But Lisp still offers by far the most featureful and ergonomic version of both facilities.
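To make point A concrete, here’s a minimal sketch of the kind of macro I mean. The BEGIN-TRANSACTION / COMMIT-TRANSACTION / ROLLBACK-TRANSACTION functions are hypothetical stand-ins, not any real library’s API; the point is that the plumbing lives in exactly one place and call sites never see it:

    ;; DB is assumed to be a plain variable (it's evaluated more than once).
    (defmacro with-transaction ((db) &body body)
      (let ((ok (gensym "OK")))             ; avoid capturing the caller's names
        `(let ((,ok nil))
           (begin-transaction ,db)
           (unwind-protect
                (multiple-value-prog1 (progn ,@body)
                  (commit-transaction ,db)
                  (setf ,ok t))
             ;; Cleanup always runs; roll back only if we never committed.
             (unless ,ok (rollback-transaction ,db))))))

    ;; Call sites read declaratively, with no visible error handling:
    (with-transaction (db)
      (insert-row db row))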
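And since I brought up conditions in point B: here’s a minimal sketch, in standard Common Lisp (only the function names are invented), of why plain hot-reloading isn’t an equivalent. A restart lets the caller, or a human sitting in the debugger REPL, repair a failure in place instead of unwinding the stack:

    (define-condition parse-failure (error)
      ((input :initarg :input :reader parse-failure-input)))

    (defun parse-int (string)
      ;; Signal an error, but offer a USE-VALUE restart alongside it.
      (or (parse-integer string :junk-allowed t)
          (restart-case (error 'parse-failure :input string)
            (use-value (v)
              :report "Supply an integer to use instead."
              v))))

    ;; A caller installs a recovery policy without touching PARSE-INT,
    ;; and without the stack between them being unwound:
    (defun parse-int-or-zero (string)
      (handler-bind ((parse-failure
                       (lambda (c)
                         (declare (ignore c))
                         (invoke-restart 'use-value 0))))
        (parse-int string)))

With no handler installed, the same failure drops you into the debugger with “Supply an integer to use instead.” offered as a menu option, which is the interactive-development half of the story.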
Something was bugging me about this article the first time I read it, so I made a mind map.
Turns out, not once does the author actually give a reason why Python was better than Lisp for Reddit. The only references to that connection at all are claims that some “X” (each drawn from the set of reasons Lisp programmers routinely refute) wasn’t the reason!
It reads more like a PR justification (“oh, there’s not much difference really; we even asked unspecified engineers, who agreed with the change, and our code is working better and better!”) than an actual explanation.
As I understand his claims, the “less code that ran faster” came after switching from existing web frameworks to web.py, a from-scratch web framework he wrote himself to ‘do the right thing, simply’ (as all the best frameworks do).
The decision to write web.py instead of web.lisp was not elaborated on, however.
As mentioned above, the closest he comes to discussing that decision is the claim that ‘Python uses objects to make frameworks somewhat like the syntactic constructs Lisp allows you to make’, i.e. ‘you can do the same things as in Lisp, just less easily, so that’s not an advantage of Lisp’. Which is both incoherent as a refutation of the cited reason to keep Lisp, and not a reason for the change to Python.
That's actually really direct. It appears as though the programmers doing the rewrite had more familiarity with Python, so they had an easier time writing it.
I come from the opposite side and find Python's syntax and style stifling.
Instead of an ordinary Blub like Python, they should have doubled down, gone all in, and used PHP. For web work in those days, it probably would have been even faster... even easier to read and maintain.
Not a Firefox user, but this feels like a culture gap. I use multiple applications where configurability through user code is part of the SLA, and dropping support for it would be as utterly unacceptable as, say, dropping the toolbar. Considering the contrast Mozilla is trying to draw with Chrome's more restrictive, less user-conscious nature, I see no reason they wouldn't share that same philosophy.
In the past they’ve not hesitated to reduce customizability when doing so was perceived to bring some benefit to ease of development.
Aside from that, I just think it’s a good idea to push for the implementation of basic functionality, like hiding bars that don’t play nicely with popular vertical-tab extensions. I shouldn’t need to resort to userCSS mods to do something that simple.
Scientists are people too. These efforts have always struck me as a desperate attempt to cling to conventionally intuitive views of what humanity and sapience actually mean, in the face of physical laws that don't seem to treat "personhood" as anything special. (The one arguable exception is the "observer interaction" of quantum physics, which some claim involves an impact on the universe being observed, rather than just determining what parts of reality the sensor/observer can perceive.)
EDIT: Though to be fair, I wouldn't be surprised if neurons just so happen to use quantum decoherence in their computations. I just don't think it has any relevance to the macro-level phenomenon we call "consciousness".
I think the difference here is whether we're considering the plausibility that there won't be any security violations, versus their overall frequency and severity. Centralization significantly increases the chance that all the systems involved will be safe; that's what makes it so useful for individual organizations, where centralizing operations doesn't attract significantly more bad actors to try breaking their security than decentralizing would.
But if we have centralization on the scale of a society, then anyone interested in any of the groups using that centralized source of secure data storage/transfer will be drawn to look for the flaws in that source. And there are always flaws, either technical, legal (as with the government spying mentioned elsewhere in the comments), or otherwise. And once any group manages to infiltrate that one source, they get access to everything dependent on it.
Sure, decentralized security is harder to get off the ground, meaning we start with a high violation rate that decreases over time (and security-conscious users can supplement that by taking their own steps to protect their data).
But centralized security at sufficiently large scales essentially guarantees a breach impacting everyone within its domain; and the kind of trust that would be required to sustain such centralization also anti-correlates with users independently adding additional layers of security to their systems.
This seems like a much greater risk than just accepting that users who are "impervious to education" will be vulnerable to certain social-side exploits, while everyone else will be reasonably safe.
I've actually seen this pattern in the practical use of logic programming libraries: Common Lisp's Screamer [1], for instance, or Clojure's less mature core.logic [2].
Though to be fair, both of these libraries are in Lisp languages, which inherently support the design pattern of slipping into and out of DSLs.
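For a flavor of that slipping in and out, a Screamer sketch from memory (so treat the details loosely): inside ALL-VALUES you write ordinary-looking Lisp, and FAIL backtracks through the nondeterministic choice points:

    ;; Assumes Screamer is loaded, e.g. (ql:quickload "screamer").
    (in-package :screamer-user)

    (defun pythagorean-triples (n)
      (all-values                          ; collect every solution
        (let ((a (an-integer-between 1 n))
              (b (an-integer-between 1 n))
              (c (an-integer-between 1 n)))
          (unless (= (+ (* a a) (* b b)) (* c c))
            (fail))                        ; backtrack to the next choice
          (list a b c))))

    ;; (pythagorean-triples 20) => ((3 4 5) (4 3 5) (5 12 13) ...)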
miniKanren/microKanren (which is what core.logic is based on) actually works really well as a library in most languages. Nothing about it requires macros to be easy to use, just higher-order functions; see the sketch below.
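Here's what I mean: a bare-bones microKanren core in plain Common Lisp, following the Hemann & Friedman paper but simplified to eager lists instead of their lazy streams (so it only handles finite search spaces). No macros anywhere, and the names are mine rather than core.logic's API:

    (defun var (n) (vector n))            ; logic variables are 1-element vectors
    (defun var-p (x) (vectorp x))

    (defun walk (u s)                     ; resolve U through substitution S
      (let ((pr (and (var-p u) (assoc u s :test #'equalp))))
        (if pr (walk (cdr pr) s) u)))

    (defun unify (u v s)                  ; extend S so U and V match, else NIL
      (let ((u (walk u s)) (v (walk v s)))
        (cond ((and (var-p u) (var-p v) (equalp u v)) s)
              ((var-p u) (cons (cons u v) s))
              ((var-p v) (cons (cons v u) s))
              ((and (consp u) (consp v))
               (let ((s (unify (car u) (car v) s)))
                 (and s (unify (cdr u) (cdr v) s))))
              (t (and (eql u v) s)))))

    ;; A goal is a function: state -> list of states.
    (defun == (u v)
      (lambda (s/c)
        (let ((s (unify u v (car s/c))))
          (if s (list (cons s (cdr s/c))) '()))))

    (defun call/fresh (f)                 ; pass a fresh variable to F
      (lambda (s/c)
        (funcall (funcall f (var (cdr s/c)))
                 (cons (car s/c) (1+ (cdr s/c))))))

    (defun disj (g1 g2)                   ; succeed if either goal does
      (lambda (s/c) (append (funcall g1 s/c) (funcall g2 s/c))))

    (defun conj (g1 g2)                   ; succeed if both goals do
      (lambda (s/c) (mapcan g2 (funcall g1 s/c))))

    ;; Running a goal against the empty state (substitution '(), counter 0):
    ;; (funcall (call/fresh (lambda (q) (disj (== q 5) (== q 6))))
    ;;          (cons '() 0))
    ;; => two answer states, one binding Q to 5 and one binding Q to 6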
But agreed that Emacs is more user-extensible than all the other options presently out there, purely from how easy text-oriented extensions are.