
The Minimally-Nice Open Source Software Maintainer - aturon
http://brson.github.io/2017/04/05/minimally-nice-maintainer
======
otterley
Contributors volunteer skilled labor that can help a project be even better.
The worst thing a maintainer can do is dissuade them from helping.

A good rule of thumb is to assume positive intent from contributors. People
who are requesting changes are trying to solve problems; rarely are they doing
it for personal aggrandizement or to fulfill some philosophical mission.

One thing that really upsets me when I propose changes -- especially when
they're accompanied by code -- is getting a thumbs-down from (or, even worse,
an issue closed by) a maintainer, without any constructive feedback as to how
to resolve their concerns. I can work with someone who can inform me about
their concerns or the weaknesses of my proposal, and who says, "yes, but...,"
but I can't work with someone who just says "no."

In my own projects, I have a rule of thumb: I never close an issue without the
consent of the submitter. I try to ensure that I've either convinced them
there's a better way to do what they're trying to achieve, resolved their
problem as best I can, or explained that I simply don't have the resources to
give them a proper resolution.

~~~
pc86
> _rarely are they doing it for personal aggrandizement or to fulfill some
> philosophical mission._

Sometimes. Spending hours tweaking a CoC or changing pronouns in documentation
is not solving any problems and results in unnecessary back-and-forth, flame
wars, HN posts, etc.

------
no_protocol
One thing I wish GitHub made easier is responding to a pull request with a
patch rather than encouraging reviewers to type up a bunch of suggestions that
the original submitter will have to turn into a patch themselves.

If you have a better idea how to accomplish part of a suggested change, you
can communicate that more clearly by making a patch and leaving a comment
explaining why. GitHub should encourage this and allow the submitter some
interface to easily incorporate that patch into his PR.

This is especially useful in situations where the only changes are in
documentation or writing rather than code. Dealing with a dozen responses of
"s/topy/typo/" should be as simple as clicking a few buttons to accept all the
corrections. It would be less work for both sides.

~~~
steveklabnik
Have you seen [https://github.com/blog/2247-improving-collaboration-with-forks](https://github.com/blog/2247-improving-collaboration-with-forks)?

~~~
no_protocol
Yes, that is very nice, but it doesn't help if I am not a maintainer of the
upstream project.

Since I haven't used it, I'm not sure, but it seems like it would also help
avoid the case of "Can you squash this before I accept it?" leading to
pointless delays and extra work.

~~~
steveklabnik
Sorry, I thought you were speaking as a maintainer. You mean reviewing someone
else's PR on a project you're not a maintainer of? Yeah that's "send a PR to
their branch" right now.

> Does it help avoid the case of "Can you squash this before I accept it?"
> leading to pointless delays and extra work?

That's [https://github.com/blog/2243-rebase-and-merge-pull-requests](https://github.com/blog/2243-rebase-and-merge-pull-requests); it lets
you choose squash and/or rebase as you merge.
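
If a maintainer does ask for a manual squash before merging, the local fix is
quick. A sketch (the commit count, filenames, and message are made up for the
demo; it runs in a throwaway scratch repo):

```shell
# Demo in a scratch repo: four commits, then squash the last three into one.
tmpdir=$(mktemp -d) && cd "$tmpdir"
git init -q
git config user.email demo@example.com
git config user.name "Demo"
for i in 1 2 3 4; do echo "$i" > "file$i"; git add "file$i"; git commit -qm "commit $i"; done

# The squash itself: keep the combined changes staged, rewrite them as one commit.
git reset --soft HEAD~3
git commit -qm "one squashed commit"

git rev-list --count HEAD   # prints 2: the first commit plus the squashed one
```

On a real PR branch you'd follow the new commit with `git push
--force-with-lease` to update the pull request.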

------
readams
The worst maintainers are the ones who refuse to say no when they mean no or
want to say no. I've seen maintainers waste weeks or months of a contributor's
time on work that they never intend to accept because they're trying to avoid
confrontation up front.

~~~
kibwen
I'd say that's a distinct skill that requires a different approach. Big
projects have this problem less, because they have so many conflicting
stakeholders and they get so much more practice rejecting random patches. But
for little projects with one or a few maintainers it's easy to look at every
random contribution and be flattered that someone would bother to take the
time to try and give back, which at the same time predisposes you to feeling
like rejecting them would be some sort of betrayal. It's important to be
magnanimous here when possible, but at the same time when they inevitably ask
"why wasn't this accepted?" sometimes there simply isn't a more satisfying
answer than "this doesn't fit with our unspoken philosophy of what the project
should be".

To use an example of a project I've contributed to, Dungeon Crawl Stone Soup
has an explicit wiki page for "patches that we will not accept, so don't even
bother asking":
[https://crawl.develz.org/wiki/doku.php?id=dcss:planning:wont_do](https://crawl.develz.org/wiki/doku.php?id=dcss:planning:wont_do)

~~~
Karrot_Kream
Working with the DCSS maintainers is great, though. I recently contributed
some code to add FreeBSD support to DCSS, and it was a pleasure working with
them!

------
cjCamel
The Compliment Sandwich* is known to be a poor technique for providing
criticism. It doesn't help the person you're criticising (people see straight
through it), it negates the compliment, and it primes people to brace for
criticism the next time you pay them one.

Worse still, it distracts from the message you are trying to convey.

There is a way of being critical without making a person feel bad, by being
factual and not making it personal, and by remembering that you are trying to
help that person in some way.

*Btw, we Brits call it a shit sandwich :-D

~~~
pc86
It's absolutely a shit sandwich, the sandwich is named for whatever's in the
middle! :)

------
shoyer
One of my favorite tricks as maintainer for a moderately popular project is
asking every user to give themselves credit in the release notes, e.g.,

- Fixed a terrible bug that caused lots of crashes. By John Smith.

Contributors feel great seeing themselves get credit. New users see all the
different contributors listed, which gives them confidence that the project is
well maintained. And finally, contributors are much less likely to forget
to update the release notes!

------
Cpoll
> But before you fly off the chain, prove your undeniable superiority, and
> prove that they are wrong, let me suggest instead that you do something
> crazy, that you do the opposite: that you prove they are right.

I'm a bit uneasy about this. If you spend your time agreeing with a suggestion
that your initial reaction is to reject, you probably won't have the patience
to go back and question it after the fact. You may also 'brainwash' yourself
into thinking it's a good idea.

I agree that you shouldn't knee-jerk into rejecting an idea, but I believe
thinking critically is important. Take the opposite side, but do it civilly.
Come up with counter-examples, but phrase them constructively: "have you
considered this problem, do you think it's an issue, can you think of a
solution?"

~~~
Veen
> but I believe thinking critically is important

I think you have to differentiate between thinking and communicating. Thinking
comes first, unhindered by concerns for social niceties and hurt feelings.

Then, you communicate by expressing what you've concluded in a way that does
account for the fact that you're conveying a message to a human being, with
all the baggage that entails.

~~~
mwfunk
I agree, at least in general -- I think of critical thinking and communication
as overlapping but very distinct skills.

I believe that if an idea hasn't been articulated yet (put into words, whether
in writing or speech or thought), it hasn't been fully formed.

There's a saying that you don't truly understand something until you have to
explain it to somebody else, and that's because it's not until you have to
explain it to somebody else that you are forced to articulate the entire thing
by serializing it (ugh) into words. Putting things into words nails down
meaning in a way that can make logical problems or false assumptions much more
obvious.

So, IMO communication is a two-pronged challenge. One challenge is
articulating something that may have previously existed in a partially-formed
state in the back of our heads. The other challenge is adapting that
articulation to be as lossless and efficient as possible when directed at a
specific audience. Communication is only successful if the signal is both
successfully transmitted and successfully received. When we fail, it's easy to
blame the audience, but IMO more often than not it's our own failure to read
the room and we just don't like admitting that to ourselves. It's human nature
to get defensive and use excuses like, "they were focused on nitpicking my
words and not listening to my ideas!" That excuse in particular has become a
huge red flag for me.

------
kazinator
All those things are basically like a software job, except that you aren't
paid.

The skills to handle it are the same.

In commercial development, you ideally have some layers between you and the
customers. That depends on how large of an outfit you are and so on.

How it works at Reasonably Big Co. is that all those requests from the end
users don't go to you directly but to some customer support people (who are
actually technical: they develop solutions and fix issues by actual coding; do
not think "Microsoft customer support" here). You, in turn, support these
people: but your manager should be coordinating that. If you are asked
anything that would require significant development, you redirect to your
manager; other than that, you help as much as you can without derailing your
official work schedule.

Secondly, if customers have some major request like for a major feature,
support may defer them to "product management" to have that work put on the
road-map for the next release or whatever.

Product management doesn't bug you directly to develop anything. They
negotiate with your managers to hammer out what is in scope, and from there a
schedule is devised and so on. You end up with some tasks out of that
schedule.

Similar principles can be applied to your OSS project: you (and possibly the
handful of other contributors) just have to wear all these hats yourselves.

Here is the thing: those aforementioned support people independently produce
solutions to problems. They check them into their own repos and they release
that stuff to customers. Then later, customers want all that stuff carried
forward in the software. So there is a tension: there are all these wild and
woolly patches whipped up by support, of varying quality, and they (or
equivalent solutions in their place) have to be somehow integrated. The
support people tend to say yes to customers to make them happy which isn't
always best for the product.

That is quite similar to when, in OSS, you are receiving complete patches from
highly technical users who can code.

------
bhntr3
Best blog I've read in a long time! <3

> By consistently exhibiting a few simple behaviors, one can at least look
> like a kind and decent person. Maybe someday we all actually will be.

The best thing about this is that we will! Mindset follows behavior.

P.S. Am I doing it right? I'm actually terrible at this. But I wish I weren't.

------
ori_b
I think I'd get annoyed working with people who respond like this. Except for
the part about fast response times -- I like that.

I don't like guessing people's intent. I don't like it when people are evasive
about saying no, wasting both my time and theirs.

------
carapace
Small sans-serif body text means you hate your readers. Making it grey means
you _really_ hate them.

~~~
Veen
And to add insult to injury, for some reason Reader Mode in Safari doesn't
appear to work on this site either (MacOS).

------
vvanders
This is great stuff and applies to more than just OSS. Working with vendors?
Other parts of a large organization? A lot of the time the dynamics (or at
least the lessons here) can be applied.

Kudos to the author; I'm sure that was no small amount of work to put
together.

------
pm24601
Most open-source code is not managed very well (why should it be?).

I always fire off an issue with an offer to patch. If no response, then no
patch submitted even if I have a patch.

I don't submit the patch because code moves on. A few commits later by someone
else and the patch can no longer be cleanly applied. So I don't do the work on
the patch if it will not be accepted.

Also, I don't do nitpicky code cleanup. For example, if someone wants 2-space
indentation and I used 4... they can accept the patch and use their IDE with
their formatting rules to fix such things.

------
carapace
One huge problem is that September never ended...
([https://en.wikipedia.org/wiki/Eternal_September](https://en.wikipedia.org/wiki/Eternal_September))

What I mean is, there are lots of people who are not as smart as they think
they are, and they won't STFU and RTFM, etc...

Send not to know at whom the Torvalds curses, he curses at thee.

Frankly, I think the author's original position was correct: Emphasize credit
for authors of RFCs and you'll get fools authoring RFCs for the credit.
Sheesh!

~~~
joe_the_user
Maybe I could put it another way. September 1993 ended, just not the way some
people would have wanted it to.

The net has become a medium for everyone, not just a small enough group that
this group could become acclimatized to a dominant discourse style (whatever
the virtues might be of the Linus Torvalds "I will call you a fucking moron
when you fail" school of discussion for a rarefied group). And given that, it
is now necessary to have the standards of discourse for the net be the
standards of discussion for society as a whole.

And sure, maybe being nice can produce a series of problems of its own. But
it's now necessary to solve those problems while being nice.

~~~
carapace
"Being nice is something stupid people do to hedge their bets." ~Rick,
fictional smart a-hole.

> Maybe I could put it another way. September 1993 ended, just not the way
> some people would have wanted it to.

You could put it that way, if you wanted to be a patronizing fool. Try to
understand that _you_ are evidently one of the people I'm complaining about.
You're not disagreeing with me; you're calling me a fool in a passive
aggressive format.

> The net has become a medium for everyone...

I think you mean _Facebook_. Most humans haven't Clue One when it comes to
computers, let alone networked computers. Even _wordpress_ users are digital
peasants.

> ...just a small enough group that this group could become acclimatized to a
> dominant discourse style (whatever the virtues might be of the Linus
> Torvalds "I will call you a fucking moron when you fail" school of
> discussion for a rarefied group). And given that, it is now necessary to
> have the standards of discourse for the net be the standards of discussion
> for society as a whole.

But I'm not talking about the "normals" and their precious discussion, I'm
talking about the art, craft, and-- yes --science of computer programming, the
thing I've dedicated my life to, and that suffers from a huge influx of fools
who think that they know what they're doing when they just don't. It pisses me
off.

Getting software _right_ is important. I estimate that about 9 out of 10
people getting paid to write software today shouldn't be. You do realize that
today, in 2017, _most_ of the needful software has _already been written_
don't you? Think about it.

So, my premises are: Most new software is unnecessary. Most people writing
software are not qualified.

If we're talking about software development (not "the standards of discussion
for society as a whole"), whether FOSS or closed, "being nice" is waaaaay down
the list of priorities for what needs to happen globally in the software
industry (IMHO).

We should establish strict standards and ensure that pro coders meet them.
(like y'know _engineers_ do)

Software is machinery. There's not a lot of scope for touchy-feely in
software: you can usually make arguments with math and numbers, and _these do
not care about people's feelings_.

Certainly I'm not arguing in favor of unbridled assholism, but if someone
doesn't have the emotional maturity to deal with, e.g. Linus Torvalds being
cranky and calling names (over email!) then that's probably a fine reason for
that person to go do something else and quit wasting his time. (I've read some
of his ranty responses and generally the folks he's popping off at are being
thick and stubborn about it. I've got no sympathy at all for that sort of
thing.)

~~~
nickpsecurity
"Getting software right is important. I estimate that about 9 out of 10 people
getting paid to write software today shouldn't be. You do realize that today,
in 2017, most of the needful software has already been written don't you?
Think about it."

I'm curious if you've read the Richard Gabriel essays. Certainly the two of us
have seen something like engineering of software. Anyone looking has seen
high-quality software. Yet, most software isn't written with that goal in
mind. It's about taking markets, politics, squeezing more money out of
existing customers, scratching an itch, etc. It shouldn't surprise you that
almost no high-quality software comes out of such abysmal demand for it. After
a while, it shouldn't even bother you since it's pointless to worry about what
most will do to quality for reasons aside from quality.

The best we can do is convince people who might give a shit, companies that
might differentiate on better things, governments that might regulate to a
baseline of methods that work, and so on. Plus advocate voting with our
wallets for "The Correct Thing." Best advice I can give you after way too much
time doing the opposite. ;)

~~~
carapace
I haven't read them but I'm familiar with the area of discussion.

My argument here rests on the idea that we are _late_ in the game of software
development. Most of the software we need has been written already. What we
are doing now is a thundering herd attack on the global mind-space of
algorithms+data. (How many chat apps? How many serialization formats? Etc.)

I am kind of off in the corner. For example, I don't see Rust lang as cool and
innovative, to me it looks like a tarpit.

There is a better way.™ ;-)

Over in the "The Power of Prolog" thread
[https://news.ycombinator.com/item?id=14045987](https://news.ycombinator.com/item?id=14045987)
there are posted three solutions to the Zebra Puzzle: one in Ruby, all
bloody-minded; one in Python, even longer and more bloody-minded. Then there's
the one in Prolog I wrote after reading the docs for half the day instead of
working. (D'oh! Hi boss.) The Prolog version is less than a page of code and
half of that is direct translations of the puzzle hints to CLP constraints
(the other half just sets up the variables and such.)
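
For readers who haven't seen that constraint style: here's a minimal Python
imitation of it (a made-up three-house mini-puzzle, not the actual Zebra
solution or my Prolog code), where each hint is stated directly as a
constraint and naive search does the rest:

```python
from itertools import permutations

# Hypothetical mini-puzzle: three houses, assign colors and pets so that
#   1. the red house is immediately left of the green house
#   2. the dog lives in the red house
#   3. the cat does not live in house 0
def solve():
    for colors in permutations(["red", "green", "blue"]):
        for pets in permutations(["dog", "cat", "fish"]):
            if (colors.index("red") + 1 == colors.index("green")
                    and pets[colors.index("red")] == "dog"
                    and pets.index("cat") != 0):
                yield colors, pets

for colors, pets in solve():
    print(colors, pets)
```

Prolog with CLP(FD) does the same declaratively, and propagates constraints
instead of brute-forcing every permutation, but the "hints translate directly
into code" property is the point.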

The other two aren't bad because they weren't written in Prolog. They're bad
because they both would be better off implementing the resolution algorithm
(i.e. mini-kanren) and then using that to solve the puzzle. It still would
have been shorter and easier and faster.

Now, _why didn't they know that?_ That's the essence of my concern.

The prolog solution is much shorter, returns the solution instantly, and I'll
wager it took me much less time to type it in once I had figured out how to
translate the puzzle into CLP(FD) style. Am I the 10x programmer? No. I think
what I (and others) do is learnable. I've maintained for years that anyone who
can solve a sudoku puzzle has the intelligence to learn to program. In fact,
I've just realized that anyone who solves Sudoku puzzles _already knows the
resolution algorithm_: that's what their mind is doing to figure out the
puzzle.

So most people _can_ be taught Prolog. The machines are vast and fast enough
now. Why is everybody so keen on cracking out code, to make a buck or scratch
whatever itch, but _not_ on doing it _better_ using tools and techniques that
are hella old!? Is humanity really so perverse? ;-)

(P.S. I'm still working on replying to your kind and excellent email.)

~~~
nickpsecurity
"I haven't read them but I'm familiar with the area of discussion." "What we
are doing now is a thundering herd attack on the global mind-space of
algorithms+data."

Nice way to put it. Ok, you really need to read those references since most
programming is supposed to suck if it's done by humans. It used to bother me
when I was younger but I accept it as inevitable. Much like evolution itself
producing tons of waste on its way to overall dominance of most of the search
space. Once you get basic principles driving it, you must then work within
that to squeeze as much of that good engineering as possible into those
constraints. It's the only way to have high impact or shift segments of the herd
in better directions. Hell, it's a little like CLP for people & development
processes. ;)

I'm including the original essay, a great commentary tie-in with historical
proof, and one by ex-high-assurance guy Steve Lipner at Microsoft:

[http://www.dreamsongs.com/RiseOfWorseIsBetter.html](http://www.dreamsongs.com/RiseOfWorseIsBetter.html)

[http://yosefk.com/blog/what-worse-is-better-vs-the-right-thing-is-really-about.html](http://yosefk.com/blog/what-worse-is-better-vs-the-right-thing-is-really-about.html)

[https://blogs.microsoft.com/microsoftsecure/2007/08/23/the-ethics-of-perfection/](https://blogs.microsoft.com/microsoftsecure/2007/08/23/the-ethics-of-perfection/)

"The Prolog version is less than a page of code and half of that is direct
translations of the puzzle hints to CLP constraints (the other half just sets
up the variables and such.)"

It was beautifully concise compared to the others. Of course, they were trying
to re-invent the parts like execution strategy that are hidden in Prolog. Your
point seems to be that nobody taught them this or they didn't learn enough.
That comes from a combo of the education system training people for the
workforce and the popularity of imperative languages for FOSS. The environment
is the real problem. Changing that leads us right back to the above essays. Two
good examples are OCaml and Clojure. One gives them escapes back to imperative
on their way to
gradually learning functional. It's done better in uptake than most FP. The
other made some changes to LISP while dropping it on top of one of
evolutionary-strongest ecosystems. Got uptake no other LISP had up to that
point. A subset of its users also started learning other LISP's.

"They're bad because they both would be better off implementing the resolution
algorithm (i.e. mini-kanren) and then using that to solve the puzzle. It still
would have been shorter and easier and faster."

"The machines are vast and fast enough now. Why is everybody..."

It would've been. Now we get to the point where you find that you _may be_
caught in the same problem you're accusing them of. I can't remember Prolog or
most programming now due to my head injury. Yet, I remember quite a bit about
the market & what people did with it back when what you suggested was tried on
a large scale by smart people. That was the AI boom where they coded in LISP,
Prolog, Poplog, OPS5, etc. I read all that, built expert systems with some of
it, and tried to stretch it in new ways. I confirmed myself that it was very
difficult to apply it to all kinds of problems where other models allow
concise expression of the problem or dramatically higher efficiency. We
collectively needed that development pace or efficiency to be competitive.
Japan even threw piles of money and brains at Prolog-specific hardware in its
Fifth Generation project to bootstrap the goal you're talking about. All of
that failed miserably. The AI Winter finished most of those companies off with
a few pivoting and surviving.

So, in case you're wondering what we learned, it was that you don't want to
write all your code in Prolog. Not even, in some cases, for those doing
logical search. The default at the bottom of the stack should _not_ be Prolog.
You want to use the
most powerful language you can that supports DSL's & FFI's. You then embed
things like Prolog in it, to use when ideally suited to the problem. Anything
that's easier to handle a different way is done differently. LISP and REBOL
were main proponents of this approach with Allegro CL still bragging about
their Prolog implementation. "sklogic" added Standard ML to his LISP for
coding safer, easier-to-analyze modules on top of DSL's for parsing & Prolog.
Haskell has recently joined the fray where a number of DSL's are letting one
mix benefits of Haskell and low-level languages like C. Galois Ivory language
& Bluespec come to mind. If a tool such as SWI Prolog is used, the typical
case should be prototyping & verifying Prolog source that's then embedded in a
better tool like those. There are times, like the Zebra puzzle, NLP, etc.,
where constraints allow it to be used standalone. Also, there's the
possibility of doing things in reverse, extending Prolog with foreign
primitives instead. More space to explore in R&D.

Point is that was the hard-learned lesson of decades of failures to do
everything in Prolog. It just didn't work. It was ideal for some things but
slow, hard-to-write, and hard-to-maintain for other things. Same was true for
many models. Hence, a unified tool can express and integrate each of those
models to let the builder use the best tool for each job. Alternatively, data
formats,
calling conventions, or protocols standardized to integrate separate programs
using separate models. The high-assurance field recently did something similar
for verification in the DeepSpec program that led to the CertiKOS OS. A bunch
of DSL's are used. Prior efforts tried to build & verify it all in one tool
like FOL or HOL
but work was miserable.

"I've maintained for years that anyone who can solve a sudoku puzzle has the
intelligence to learn to program."

I've never thought about it. Especially in light of Prolog. Very interesting.
Now you got me wanting to drop Prolog on some Sudoku fan sites to see what
happens. Have to have syntactic sugar, libraries for common things, and great
tutorial so the start is painless. I'll hold off for now but keep it in my
mental periphery in case I see someone messing with sudoku.

"P.S. I'm still working on replying to your kind and excellent email."

Cool. :) Also reminds me I still need to take black and yellow highlighters to
that book. Probably take it to work with me to mess with on lunch break.

~~~
carapace

        It used to bother me when I was younger but I accept it as inevitable. Much like evolution itself producing tons of waste on its way to overall dominance of most of the search space. Once you get basic principles driving it, you must then work within that to squeeze as much of that good engineering as possible into those constraints. It's only way to have high impact or shift segments of the herd in better directions. Hell, it's a little like CLP for people & development processes. ;)
    

Reflecting on that calms me down a little. Evolution has no purpose, so it
cannot be inefficient. I think my problem may well be in unrealistic
expectations. ;-)

    
    
        I'm including original essay, a great commentary tie-in in historical proof, and one by ex-high assurance guy Steve Lipner at Microsoft:
    
        http://www.dreamsongs.com/RiseOfWorseIsBetter.html
    
        http://yosefk.com/blog/what-worse-is-better-vs-the-right-thi...
    
        https://blogs.microsoft.com/microsoftsecure/2007/08/23/the-e...
    

I'll read them ASAP. I'm changing jobs at the moment so I'll either have
little to no time or too much.

    
    
        ...re-invent the parts like execution strategy that are hidden in Prolog. Your point seems to be that nobody taught them this or they didn't learn enough. That comes from combo of education system training people for workforce and popularity of imperative languages for FOSS. The environment is real problem. Changing that leads us right back to above essays. Two, good examples are Ocaml and Clojure. One gives them escapes back to imperative on their way to gradually learning functional. It's done better in uptake than most FP. The other made some changes to LISP while dropping it on top of one of evolutionary-strongest ecosystems. Got uptake no other LISP had up to that point. A subset of its users also started learning other LISP's.
    

Part of it is education, part of it is environment, and part of it is just the
state of the industry: what counts as "professional" education and behavior is
kinda grotesque compared to most other groups of people who call themselves
"engineers". I'm working with a guy who has never heard of Alan Kay. What's
worse is that _he doesn't care_. He's not ashamed of his ignorance. Yet he has
zero qualms about pulling down six figures as a hotshot developer.

When I finally learned LISP I got mad. I didn't even learn it: I read the TOC
of pg's book. That was all it took. My experience and brain cells were so
primed that I "got" LISP just from that table of contents. And for about
twenty minutes I was _really pissed off_ at all of my fellow computer geeks
_en masse_. How much time and energy, how much sweat, blood, and tears shed? _We
could have just been using LISP the whole time!_

I really _really_ think it's time we collectively turn our attention from
chasing our brain-tails, and focus on the real issues: How to map human
intention to automation in the most efficient manner? If we can _just get out
of our own way_ I think this physical "human condition" is mostly licked. All
our problems are psychological now.

But, uh, I rant...

    
    
        "They're bad because they both would be better off implementing the resolution algorithm (i.e. mini-kanren) and then using that to solve the puzzle. It still would have been shorter and easier and faster."
    
        " The machines are vast and fast enough now. Why is everybody..."
    
        It would've been. Now we get to the point where you find that you may be caught in the same problem you're accusing them of. I can't remember Prolog or most programming now due to my head injury. Yet, I remember quite a bit about the market & what people did with it back when what you suggested was tried on a large scale by smart people. That was the AI boom where they coded in LISP, Prolog, Poplog, OP5, etc. I read all that, built expert systems with some of it, and tried to stretch it in new ways. I confirmed myself that it was very difficult to apply it to all kinds of problems where other models allow concise expression of problem or dramatically, higher efficiency. We collectively needed that development pace or efficiency to be competitive. The Japs even threw piles of money and brains at Prolog-specific hardware in their Fifth Generation project to bootstrap the goal you're talking about. All of that failed miserably. The AI Winter finished most of those companies off with a few pivoting and surviving.
    
        So, in case you're wondering what we learned, it was that you don't want to write all your code in Prolog. Even those doing logical search in some cases. The default on bottom of the stack should not be Prolog. You want to use the most powerful language you can that supports DSL's & FFI's. You then embed things like Prolog in it to use when ideally suited for problem. Anything that's easier to handle a different way is done differently. LISP and REBOL were main proponents of this approach with Allegro CL still bragging about their Prolog implementation. "sklogic" added Standard ML to his LISP for coding safer, easier-to-analyze modules on top of DSL's for parsing & Prolog. Haskell has recently joined the fray where a number of DSL's are letting one mix benefits of Haskell and low-level languages like C. Galois Ivory language & Bluespec come to mind. If a tool such as SWI Prolog is used, the typical case should be prototyping & verifying Prolog source that's then embedded in a better tool like those. There's times like Zebra puzzle, NLP, etc where constaints allow it used standalone. Also, possibility of doing things in reverse extending Prolog with foreign primitives instead. More space to explore in R&D.
    
        Point is that was the hard-learned lesson of decades of failures to do everything in Prolog. It just didn't work. It was ideal for some things but slow, hard-to-write, and hard-to-maintain for other things. Same was true for many models. Hence, a unified tool can express and integrate each of those models to let builder use best tool for each job. Alternatively, data formats, calling conventions, or protocols standardized to integrate separate programs using separate models. High-assurance recently did something similar for verification in DeepSpec program that led to CertiKOS OS. A bunch of DSL's are used. Prior efforts tried to build & verify it all in one tool like FOL or HOL but work was miserable.
    

Yeah, I get it, I do. I'm old enough to know about things like "Fifth
Generation" computers and the AI Winter, and I agree with the pragmatic
issues. I still have what I guess amounts to faith that there is a better way.
Personally, I think I'm onto something with a system based on George Spencer-
Brown's "Laws of Form" and Manfred von Thun's "Joy" notation, implementing
something similar to Hamilton's HOS but without the deficiencies. I have no
idea if I'm a crackpot or not here, but I think I see a glimmer.

At the very least, we today have the benefit of hindsight, if we'll avail
ourselves, eh?

    
    
        " I've maintained for years that anyone who can solve a sudoku puzzle has the intelligence to learn to program."
    
        I've never thought about it. Especially in light of Prolog. Very interesting. Now you got me wanting to drop Prolog on some Sudoku fan sites to see what happens. Have to have syntactic sugar, libraries for common things, and great tutorial so the start is painless. I'll hold off for now but keep it in mental peripheral if I see someone messing with sudoku.
    

To me, the essential piece was learning the Logical Unification algorithm by
walking through mrocklin's port of Kanren to Python. Once I understood how
resolution worked, the next time I was figuring out some puzzle (in
programming, as it happens), I was startled and pleased to be able to
recognize the
resolution/unification process _as my mind performed it_ to solve the puzzle.
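
For anyone curious what that unification step looks like, here's a toy Python
sketch in the spirit of a kanren port (my own simplification, not mrocklin's
actual API; variables here are just strings starting with "?"):

```python
def walk(term, subst):
    """Follow variable bindings until we reach a non-variable or an unbound variable."""
    while isinstance(term, str) and term.startswith("?") and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst=None):
    """Return a substitution (dict) that makes a and b equal, or None if impossible."""
    if subst is None:
        subst = {}
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if isinstance(a, str) and a.startswith("?"):
        return {**subst, a: b}   # bind variable a to b
    if isinstance(b, str) and b.startswith("?"):
        return {**subst, b: a}   # bind variable b to a
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):   # unify compound terms element by element
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

# Unify (father, ?x, john) with (father, bill, ?y)
print(unify(("father", "?x", "john"), ("father", "bill", "?y")))
# → {'?x': 'bill', '?y': 'john'}
```

Resolution, as in Prolog, is essentially this unification step applied
repeatedly while searching through clauses.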

I'm not sure that folks could go directly from Sudoku to Prolog, although I
would wager that anyone good at Sudoku _would_ be able to learn Prolog under
some conducive conditions. In my experience the limiting factor is interest.
I've told one friend of mine several times now that "the only difference
between you and a 'real programmer' at this point is number of lines of code
written!" but he has other priorities or something...

----

edit: the quoting came out weird, but I'm going to leave it. (And now I know
how to do that quoting.)

------
crispyambulance
This is just basic manners like your mother taught you.

It is kind of sad that so much "social" advice on the internet consists of
stuff people should have learned by the time they turned 12 years of age.

In any case, I am glad the OP is making an effort in favor of gentility.

~~~
hyperpape
I think it's basic manners that when someone writes a long piece about a
subject they're struggling with, you shouldn't just respond that it's "stuff
people should have learned by the time they turned 12 years of age".

It's basic manners, but we all struggle with basic manners sometimes,
especially on the Internet.

~~~
crispyambulance
You're right about that. It was my initial response and half-baked.

I should have elaborated but now it is too late.

