
You seem to have given this some thought.

Honest question: What is your current estimate of how much the OOP side is impacted differently by Composition vs. Inheritance, especially when contrasted with FP?


Honestly, for all the OOP talk about composition, it's rarely done well, and as far as I know no major OOP language supports that paradigm as a first-class feature, so it's a bit tough to say; a really good take on it at the language level could shift my opinion. Java 21 seems promising in this regard with switch + record, but the legacy Java layer seems to drag it down.

Currently I'm not entirely convinced composition is a silver bullet that will save OOP, and from what I've seen it is also somewhat antithetical to OOP: you are constructing a bunch of "has-a" relationships and expecting them to stand in for behaviour, which in OOP is generally expressed by inheritance / "is-a". So in "saving" OOP, composition will likely kill it.

Instead I think we'll see more (especially newer) languages move towards FP by introducing composition (like Rust traits / Go interfaces). I think that's because composition maps quite well to the FP concepts of composite data types (structs / tuples) and of function composition. At the same time I think we'll see a lot of in-between languages (again Rust / Go) that take some stuff from procedural + OOP and mix in a bit of functional programming (especially in data pipelining areas), similar to (but in the opposite direction of) how Java 8 introduced the streams monad.
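A minimal sketch of the "has-a" point above, in Python (the Engine/Car names are made up purely for illustration):

```python
class Engine:
    def start(self):
        return "engine started"

class Car:
    """Car has-a Engine (composition) rather than is-a Vehicle (inheritance)."""
    def __init__(self, engine):
        self.engine = engine  # behaviour is delegated to the part, not inherited

    def start(self):
        # Explicit delegation: the "has-a" relationship carries the behaviour.
        return self.engine.start()
```

The delegation is explicit, which is exactly why it reads more like FP-style composition than like an inheritance hierarchy.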


Someone else beat me to it. Will answer anyway.

Had a very similar experience to yours with one important exception. The visitor pattern is relevant for traversing ASTs in single dispatch languages. Most notably, this includes Java and C++.

The alternative concept is multiple dispatch and it is way more anti-fragile.

This is where the next step in college was, literally, to learn the concept of cargo-culting. The concept of multiple dispatch is considerably more useful than single dispatch.

Many years later the realization hit that OOP, singletons, DB mocking and monads are the hard parts [1,2]. Then you can even skip multiple dispatch in favor of low latency. C++ wants this, but its ecosystem has no idea (as in zero) how to get there [3,4,5,6] (-Ofast -fno-fast-math). Then the "Patterns" (capital P) melt away.

On a semi-related note, it seems worth pondering whether strict typing requirements and/or the lack of a garbage collector are what make macro programming and AST work so hard in non-Lisp languages. I think I read somewhere that some people consider Rust macros hard; it sounded like they are harder than they should be, which suggests the language designers piled on incidental complexity. Macros are difficult enough as it is. And worth it: even Python pilfered the "with" syntax.

[1] http://somethingdoneright.net/2015/07/30/when-object-orienta...

[2] https://lobste.rs/s/vbivyq/synchronous_core_asynchronous_she...

[3] https://moyix.blogspot.com/2022/09/someones-been-messing-wit...

[4] https://news.ycombinator.com/item?id=32738206

[5] https://matklad.github.io/2023/11/15/push-ifs-up-and-fors-do... Note that Fortran and Functional Analysis got this right ages ago. Excuses are now few and far between. All of us are late to the party.

[6] https://news.ycombinator.com/item?id=38282950

Edit: typo


The visitor pattern is relevant under multiple dispatch too; most of its boilerplate simply goes away, so that only the visitation remains: traverse the AST (or whatever structure) and invoke the given method with the node and visitor as arguments.

For lists in Common Lisp, the ancient mapcar function can do this:

  (mapcar (lambda (node) (visit node visitor)) node-list)
We still have to write method specializations for the different combinations of node and visitor types we need, for the generic function visit.

  (defmethod visit ((left if-statement) (right print-visitor))
    ;; ... logic for printing an if statement
    )


It's just less fragmented than the single dispatch version, because we don't have to perform two single dispatches to obtain one.

What do you mean by "multiple dispatch"? What's an example of a multiple dispatch program?

Multiple dispatch means that a method is selected based on the run-time type of more than one argument, rather than just "the object" (leftmost argument).

This is beneficial to the Visitor Pattern, because the pattern needs to traverse a structure with polymorphic nodes, and invoke logic that is based on the type of each node, and on the type of the visiting object.

The familiar single-dispatch Visitor Pattern introduces an emulation of double dispatch via two single dispatches. First an "accept" method is invoked on each node of the traversed structure, taking the visitor as the argument. This accept method is a stub which calls the "visit" method on the visitor, passing the node as an argument. The static type of the node is known at that point, so the visitor can statically dispatch different overloads for different nodes. I possibly have some of this backwards, but it doesn't matter; it works the same with the naming reversed. Under multiple dispatch, we can just call a single generic function visit which is dispatched on the node and visitor in one step. If it is a printing visitor, it prints the node, and so on.
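That one-step dispatch can be sketched with a hand-rolled registry in Python (the node and visitor class names are invented for illustration; this simplified version looks up exact types only, with no inheritance walking, unlike real multiple-dispatch systems):

```python
# Registry mapping (node type, visitor type) -> handler function.
_visit_table = {}

def register(node_type, visitor_type):
    """Decorator registering a handler for one (node, visitor) type pair."""
    def deco(fn):
        _visit_table[(node_type, visitor_type)] = fn
        return fn
    return deco

def visit(node, visitor):
    # One dispatch on the runtime types of BOTH arguments,
    # replacing the accept/visit double-dispatch dance.
    return _visit_table[(type(node), type(visitor))](node, visitor)

class IfStatement: pass
class PrintVisitor: pass

@register(IfStatement, PrintVisitor)
def _print_if(node, visitor):
    return "printing an if statement"
```

Calling visit(IfStatement(), PrintVisitor()) selects the handler in a single table lookup; no accept stubs are needed on the nodes.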


https://en.wikipedia.org/wiki/Multiple_dispatch

It means dynamic dispatch based on the types of multiple arguments, not just a single argument.


Here's an article with examples in Python and Julia:

https://medium.com/@AlexanderObregon/how-pythons-multiple-di...


TL;DR Have a few years of experience with using a projector for programming. Can recommend. Am typing this comment while using my projector.

Long version: If you are into convex tinkering, then try ordering a USD 200.00 LED projector off of Amazon and see how you like it before investing in a more expensive one. The LED projectors have almost instant cooldown times, unlike BenQ projectors with light bulbs. The only drawback seems to be that the cheap LED projectors lack mounting holes, contrary to the official documentation both on Amazon and in the manual booklet that ships with the projector. The price is below USD 200.00 for a reason. Both my LED projectors are still working. Never really felt the need to upgrade.

Didn't really need a projector screen. My first setup used the white back wall of a cabinet. The second setup simply projects onto off-white wallpaper. Note that this doesn't bother me, but your mileage may vary. Also note that I use GNU screen/rlwrap/SBCL/vim in macOS's Terminal.app. At 80x24 characters on 1080p at a 60Hz refresh rate, it is not a MacBook Pro M1 screen experience. My crude setup may be insufficient for graphical artistic work.

If you want to save your back from sitting down a lot, then you can point the projector at the ceiling. This permits switching between standing, sitting and looking up. Having even more variety may imply headstands. This is where the missing mounting holes are a bit of a buzzkill. But simple solutions exist if you turn the projector's box into a stand with a box cutter. Pro tip: keep the air vents clear.

My setup was so popular with family that I passed my first projector and stand on to them. That is why I got a second setup. Both are still operational and highly appreciated. Haven't productized it yet. If someone is interested in that, then please feel free to reach out. Don't mind going into business.

Please, feel free to ask questions in case they arise.


Thanks a lot! May I ask if you noticed any difference regarding eye strain? I definitely notice it when I'm working with my laptop versus when I'm working with an external monitor.

And how far away do you usually project your screen? Is it OK in daylight (or do you need to close the curtains)?


Counter-example.

Used literate programming for a real-life freelance project a year ago, and am using it for both my private projects and current freelance projects.

The manual can be broken down into four parts or volumes depending on size:

  I.   User's Guide
  II.  User's Reference Manual
  III. Maintainer's Guide
  IV.  Maintainer's Reference Manual

Don't get me wrong: updating the screenshots in the User's Guide is annoying and a pain. Over 60% of my time goes into writing soft documentation for the users. This seems to be the best way to have hard evidence of the gathered requirements. It shows what the user actually faces. The user stories write themselves. This is the place to put assumptions and data-flow requirements that the human being in front of the tool has to know. The customer last year loved it.

Testing becomes easy. Just follow the pictures in the User's Guide. That's your checklist. In last year's project we immediately found differences in using the application on iOS and Android, just because the customer was on iOS and could play around with everything immediately [0]. The first working prototype was done in a week. Iteration was fast in a tight loop with the customer. Basically, live coding.

Source code, entity-relationship diagrams made with Dia, data-flow diagrams and even some math typesetting go into the Maintainer's Guide. All those hairy assumptions forced upon you by third-party libraries, external APIs, and external ABIs go in big fat red boxes. Ongoing maintenance checklists, build steps, and deployment issues also live here.

LaTeX makes it look beautiful. It is, honestly, a joy to read and nice to look at. Just like with research papers, decent pictures make it better.

The reference guides are basically lookup-tables. There can even be an index and a glossary.

The PDFs can even be stored in git(1). I know, I know: don't store binaries in git. But it archives the project well [1a,1b]. You already have the screenshots. You can pull old versions out. All the hidden footguns can be called out in the text.

  "It is better to solve the right problem the wrong
   way than the wrong problem the right way."
      -- Richard Hamming as quoted by Doug McIlroy in
         Eric Raymond's The Art of Unix Programming. [2]




[0] [James Hague: Documenting the Undocumentable](https://prog21.dadgum.com/161.html)

[1a] [Archive Your Old Projects](https://arne.me/articles/archive-your-old-projects)

[1b] [Archive Your Old Projects (HN)](https://news.ycombinator.com/item?id=38239358)

[2] [Solving the right problem](https://www.catb.org/~esr/writings/taoup/html/ch01s04.html)


Do you have a link to your repo where you've done this?


Concerning stephen's item (2): the stricter set of rules was laid out by Richard C. Waters in "Optimization of Series Expressions: Part I: User's Manual for the Series Macro Package", page 46 (document page 48). See reference Waters(1989a).

The paper's terminology differs a bit from contemporary (2023) usage.

`map()` is called `map-fn`.

`reduce()` a.k.a. `fold` seems to be `collect-fn`, although `collecting-fn` also seems interesting.

Sorting, uniqueness and permutation seem to be covered by `producing`.

Just think of McIlroy's famous pipeline in response to Donald Knuth's trie implementation[mcilroy-source]:

  tr -cs A-Za-z '\n' |
  tr A-Z a-z |
  sort |
  uniq -c |
  sort -rn |
  sed ${1}q
As far as pipeline or stream processing diagrams are concerned, the diagram on page 13 (document page 15) of Waters(1989a) may also be worth a closer look.

What the SERIES compiler does is pipeline the loops. Think of a UNIX shell pipeline; think of streaming results. Waters calls this pre-order processing. This also seems to be where Rich Hickey got the term "transducer" from. In short, it means dropping unnecessary intermediate list or array allocations.
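In Python terms, the effect is like replacing list-building steps with generators, so each element streams through the whole pipeline without intermediate lists being allocated (a rough analogy to illustrate the idea, not what SERIES actually emits):

```python
def scan(items):
    # Source stage: yield elements one at a time.
    yield from items

def map_fn(fn, s):
    # Transform stage: no intermediate list is allocated here.
    for x in s:
        yield fn(x)

def collect(s):
    # Sink stage: only the final result is materialized.
    return list(s)

result = collect(map_fn(lambda x: x * 2, scan([1, 2, 3])))
```

Each value flows through all stages before the next one is produced, which is the same pre-order/streaming behaviour the SERIES compiler achieves by fusing the loops at compile time.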

Shameless self-plug: I eliminated unnecessary allocations in my JavaScript code by adding support for SERIES to the PARENSCRIPT Common Lisp to JavaScript compiler. The trick was (1) to define (series-expand ...) on series expressions so that they can be passed into (parenscript:ps ...), and (2) the PARENSCRIPT compiler was missing (tagbody ... (go ...) ...) support, which is surprisingly tricky to implement. See dapperdrake(2023). Apologies for the less-than-perfect blog post; I got busy actually using this tool. Suddenly stream processing is easy and maintainable.

When you add a Hylang-style threading macro (-> ...), you get UNIX-style pipelines without unnecessary allocations. It looks similar to this:

  (-> (it :let*-symbol series::let)
    (scan-file in-path-name #'read)
    (map-fn T #'some-function it)
    (collect 'list it))
Sadly, the SERIES compiler available on Quicklisp right now is a bit arcane to use. It might have been more user friendly had it been integrated into the 1995 ANSI Common Lisp standard, so that it had access to compiler internals. The trick seems to be to use macros instead of (series::defun ...) and to use (series::let ...) instead of (cl:let ...). Note that the two crucial symbols 'defun and 'let are not exported by SERIES, so merely using the package is insufficient, and pipelining fails without a decent warning.

Am chewing on the SERIES source code; it is available on SourceForge [series-source]. If anybody is interested in porting it, then please reach out. It seems to be of similar importance to the relooper algorithm [relooper-reference]. Waters(1989b), page 27 (document page 29), even demonstrates an implementation for Pascal. So it is possible.

References:

dapperdrake(2023): Faster Loops in JavaScript https://dapperdrake.neocities.org/faster-loops-javascript

Waters(1989a) document page 48, paper page 46 https://dspace.mit.edu/bitstream/handle/1721.1/6035/AIM-1082...

Waters(1989b) document page 29, paper page 27 https://dspace.mit.edu/bitstream/handle/1721.1/6031/AIM-1083...

[relooper-reference] http://troubles.md/why-do-we-need-the-relooper-algorithm-aga...

[series-source] https://series.sourceforge.net/

[mcilroy-source] https://franklinchen.com/blog/2011/12/08/revisiting-knuth-an...


The paper about Series explicitly bemoans the lack of compiler integration, explaining why the hacks are the way they are: why Series has its own implementations of let and so on.


Apologies for being slightly unclear.

For getting both function composition and the avoidance of unnecessary intermediate allocations, the naive approach to using the SERIES package is insufficient. And the error messages it returns along the way are unhelpful.

Evaluating (defpackage foo (:use :cl :series)) fails to import (series::defun ...), (series::let ...) and (series::let* ...). So, when you think you are following the rules of the paper, you are invisibly not following them, and you get the corresponding warnings about pipelining being impossible. That is somewhat confusing.

After reading the source code, it turns out the answer is to call (series::install :shadow T :macro T :implicit-map nil). How is (series::install ...) supposed to be discoverable via (describe (find-package :series)) in SBCL if it is an internal symbol of package SERIES? Usability here is less than discoverable. Listing all exported package symbols of SERIES obviously also fails to reveal it.

Furthermore, the source code and naming in "s-code.lisp" suggest that (series::process-top ...) may be useful for expanding series expressions to their pipelined (read: optimized/streaming/lazy) implementations. This is desirable for passing the optimized version on to PARENSCRIPT or other compilers. Here is the catch: it fails when the series expression is supposed to return multiple values. One of the points of using SERIES is that Waters and his fellow researchers already took care of handling multiple return values. (If the Lisp implementation is smart enough, this seems to mean that these values are kept in registers during processing.) After some tinkering, there is a solution that also handles multiple return values:

  (defun series-expand (series-expression)
    "(series::process-top ...) has problems with multiple return values."
    (let (series::*renames*
          series::*env*)
      (series::codify
       (series::mergify
        (series::graphify
         series-expression)))))
Will submit pull requests once I am comfortable enough with the source code.

Yes, the SERIES papers Waters(1989a,b) bemoan the lack of deep integration with Common Lisp compilers. And yes, that could have been resolved by making SERIES part of ANSI Common Lisp, as LOOP was. It could theoretically also have been resolved by explicit compiler and environment interfaces in ANSI Common Lisp. That is not the world we live in today. Nevertheless, package SERIES solved all of the hard technical problems. Once people know about the documentation failings, SERIES is a powerful hammer for combining streaming/lazy evaluation with function composition, as well as with other compilers like PARENSCRIPT.


I think one issue is that Series has been hacked on since that paper, which has not been updated.

Anyway, I guess series::let is an unexported symbol? I.e. not series:let? There is a good reason for that.

If series:let were exported, then you would get a clash condition from (:use :cl :series). The CL package system detects and flags ambiguous situations in which different symbols would become visible under the same name. You would need a shadowing import for all the clashing symbols.

It's probably a bad idea for any package to export symbols that have the same names as CL symbols. Other people say that :use itself (other than for the CL package) is a bad idea. Either way, if you have clashing symbols, whether exported or not, you are going to be importing them individually if you also use CL, which is often the case.

