Granted, many of the problems with Ethereum Solidity contracts have more to do with its heavy use of implicit behavior (in a misguided attempt to hide the complexity of contracts) than with Solidity being imperative per se.
Here's a quick plug for Mercury, a statically typed dialect of Prolog with an optimizing native code compiler. Supposedly it's 5 to 10 times faster than commercial Prolog compilers or available interpreters.
One concern with logic programming is the cost of computation: on Ethereum every transaction has a gas cost associated with it, so you can't run computations that exceed the gas available in a block.
Turner's ideas of Total Functional Programming might have applications in the smart contract space as well: since you disallow general recursion but allow structural recursion, you can likely precalculate or accurately bound gas costs ahead of time.
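To make that concrete, here's a minimal Python sketch (with an invented toy cost model, nothing like real EVM gas accounting) of why structural recursion makes a gas bound computable up front:

```python
# Toy illustration (invented cost model, not real EVM gas accounting):
# a structurally recursive function consumes one list element per step,
# so its step count -- and hence a gas bound -- follows from input size alone.

def sum_balances(balances):
    # Structural recursion: each call peels one element off the input,
    # so the recursion depth is exactly len(balances).
    if not balances:
        return 0
    return balances[0] + sum_balances(balances[1:])

GAS_PER_STEP = 3  # hypothetical per-step cost

def gas_bound(balances):
    # An upper bound on execution cost, computed without running anything.
    return GAS_PER_STEP * len(balances)
```

With general recursion no such bound exists in general; disallowing it is what makes the precalculation possible.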
As for being statically typed, I completely agree. Solidity's poor design choices contributed to millions of USD in losses (e.g. the DAO hack) because developers were not able to easily reason about the implicit behavior or the concurrency model.
Right, but the client executes the contract, keeping a trace of what needs to be computed by the verifier. The verifier doesn't actually execute the full contract, just verifies that the trace was faithfully executed. If we have
let R = (A() || B() || C() || D()) && ! E();
Effectively, because declarative languages don't dictate order, the client is free to reorder the contract execution to be optimal for this particular execution without altering semantics. Declarative semantics are, by definition, independent of execution order. This makes efficient compilation and execution more difficult, but makes verification faster (if the verifier is provided with an execution trace).
Now, you could potentially do similar optimizations with Solidity contracts, with a suitably modified EVM definition, but if the execution order is up to the runtime/compiler instead of dictated by the source code, then you've by definition changed the language to be declarative.
I believe the DAO bug was a reentrancy bug, not a concurrency bug. The code was not written to be reentrant because the developer didn't realize recursion could be triggered via implicit behavior. Many reentrancy bugs are concurrency bugs, but I really think that's not the case with the DAO bug. I saw one proposed fix (not sure if it's the one that got finally accepted) that used a reentrancy flag to prevent the problem and called the flag a "mutex", but it wasn't actually a mutex, adding to the confusion about the root cause.
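A toy model of that kind of reentrancy bug, sketched in Python rather than Solidity (all names invented): the contract pays out before updating its books, and the recipient's callback re-enters withdraw while the stale balance is still recorded.

```python
# Toy reentrancy sketch (not Solidity; names invented for illustration).
class Bank:
    def __init__(self):
        self.balances = {"attacker": 100}  # what the attacker is owed
        self.vault = 300                   # total funds actually held

    def withdraw(self, who, on_receive):
        amount = self.balances[who]
        if amount > 0 and self.vault >= amount:
            self.vault -= amount    # funds are sent first...
            on_receive()            # ...and the callback can re-enter here
            self.balances[who] = 0  # ...so the books are updated too late

bank = Bank()
reentries = []

def evil_callback():
    # Re-enter withdraw() while balances["attacker"] still reads 100.
    if len(reentries) < 2:
        reentries.append(1)
        bank.withdraw("attacker", evil_callback)

bank.withdraw("attacker", evil_callback)
# The attacker, owed 100, has drained all 300 from the vault.
```

Note that this is entirely single-threaded: no concurrency is involved, just an unexpected call back into the contract.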
I really don't think execution of a single contract transaction is concurrent, and due to (eventual) serializability of blockchain transactions, the blockchain acts as if concurrent execution within a contract across miners doesn't exist. If you have concurrent calls to a single contract, at most one of them will succeed, and those that fail will not affect the blockchain state.
In general, the Ethereum community seems to refer to serialized execution of contract methods in an order unexpected by the contract author as "concurrency", but I have seen no evidence that the effects on the contract state as reflected in the blockchain are not always serializable. In other words, it acts much like concurrent SQL queries under a serializable isolation level: if two concurrent executions modify the same data, one of them will fail instead of you getting an interleaving of the two write sequences.
It's possible that I misunderstand the EVM, but it seems insane to design a system to allow multiple threads to execute within a single contract at a time in the presence of shared mutable state, at least without an RDBMS-like isolation system in place.
Edit: https://forum.ethereum.org/discussion/1317/reentrant-contrac... supports my understanding.
They've built a language for distributed ledger platforms based on Haskell with defined state transitions based loosely on traditional contract law. For exactly the reasons you've mentioned, this makes modeling the participants, rights, and obligations of a smart contract use case incredibly efficient.
Whether smart contracts are useful or not remains to be seen. There seems to be a lot of potential in the finance and supply chain worlds.
Thinking about DAML makes me wonder how impactful something like Cobol was in reality. It definitely found use and even long-term value add, but was it transformational? I don't know.
I'm not even sure what the technology comparison should be for DLT without DAML. There are only so many use cases or niche areas where it's valuable.
Hand-tuned imperative C enjoys a performance advantage over functional and declarative alternatives. So I’d imagine that imperative smart contracts are inherently easier to optimize than Prolog-style contracts.
In the simple and common cases, there ends up being no difference between execution and verification. However, if you have a contract that has
let F = (A() || B() || C() || D()) && ! E();
let R = F && (G() || H());
With a declarative contract, the cost of suboptimal ordering is borne by the client and not by the verifier.
With EVM, there's no separation of contract execution and verification. The verifier/miner needs to execute the contract at the request of the client. If you modify the EVM and contracts to keep track of what's provably side-effect free and allow the client to specify reordering of those terms, then you've by definition created a non-imperative language or sublanguage. In that case, it's much safer and easier to design the system from the ground up to have semantics that are invariant under evaluation order (that is, declarative semantics).
Most of the cost of declarative program optimization vs. imperative program optimization (deciding an optimal order) is borne on the client side. Due to the structure of the contracts and the traces, it's trivial to prove that portions of the contract don't need to be executed in order to verify the transaction.
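Here's a rough sketch of that idea in Python (the trace format and all names are invented; this is not any real system): the client picks an evaluation order for R = (A() || B() || C() || D()) && !E(), records which terms it actually evaluated, and the verifier replays only the traced terms.

```python
# Invented sketch of client-side ordering plus trace-based verification.

def client_execute(terms, order):
    # 'terms' maps names to zero-argument functions; the client chooses
    # the order in which to try the disjuncts and short-circuits early.
    trace = []
    disjunction = False
    for name in order:
        value = terms[name]()
        trace.append((name, value))
        if value:              # || short-circuits on the first true term
            disjunction = True
            break
    e = terms["E"]()
    trace.append(("E", e))
    return disjunction and not e, trace

def verify(terms, trace, claimed_result):
    # The verifier re-evaluates only the terms the client touched,
    # then checks the claimed result against the traced values.
    for name, claimed in trace:
        if terms[name]() != claimed:
            return False
    disjunction = any(value for name, value in trace if name != "E")
    return (disjunction and not dict(trace)["E"]) == claimed_result

terms = {"A": lambda: False, "B": lambda: True, "C": lambda: False,
         "D": lambda: False, "E": lambda: False}
# The client happens to know B is cheap and true, so it tries B first
# and the trace contains two terms instead of five.
result, trace = client_execute(terms, ["B", "A", "C", "D"])
```

A real system would also need the verifier to check that the omissions are sound (e.g. that an all-false claim traces every disjunct), but the cost asymmetry is the point: the verifier's work is bounded by the trace, not by the contract.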
- my time is better spent getting deeper into some more popular language that I already know to some extent
- unused skills deteriorate with time so by adding a new language that you don't need professionally right now, you also need to add the future time and effort of practicing and maintaining that skill
- I could be wrong but I feel that it's a red flag to have too many of the more esoteric languages on your resume.
I do enjoy tinkering with a new language but very often it just feels like a distraction.
Prolog is interesting because it is really niche, and “learning it” as in solving a few easy problems will definitely not take much time but will add another way to think about a problem to your toolkit.
Back when I worked at Google, one of my colleagues (Jeremy) did some Bayesian inference on "1% experiment" results, where he needed to see which observations matched a complex hypothesis and complex rules for the context.
So, Jeremy found it easiest to think about the problem in terms of Prolog, and wrote some Python objects whose constructors defined these rules. He also whipped up a memoizing backtracking engine that effectively lazily generated a DFA over experiment observations.
He talked and thought in terms of Prolog, because that's the mental tool that he used, but the programming tool was Python.
Interesting guy, that Jeremy. He used to do cacao price forecasting for Mars (Mars Bar/M&Ms). He also enjoyed craft beer and carving pumpkins with chainsaws.
We also worked in the same (NYC) office with Fergus Henderson, who in grad school wrote Mercury, a statically typed dialect of Prolog with an optimizing native code compiler.
My huge problem now is I'm in Sydney, Australia. 4-5 years ago there was reasonably strong enthusiasm for Scala and functional programming in general. Now, I've been looking to change jobs for the past year and absolutely nothing has popped up.
It seems the only roles available are traditional Java with Spring Boot, JavaScript, Python and C#.
Even with DevOps stuff, which I have a bunch of experience in, everyone wants Ansible, when after discovering NixOS/Nix it completely removes the need for Ansible, giving declarative, deterministic infrastructure/deployments without fragile line upon line of YAML.
Of course, all the functional programming I've learnt is usable and can be applied every day in other languages. Functional programming isn't new; it predates all the shiny new popular languages. The problem is that having used better languages which treat these principles as first class, it isn't the same as trying to hack something together in a language never designed to support them.
It seems I've now learnt myself out of a job. If I want a job I need to take a step backwards and use tools that now frustrate me and seem clunky / broken. It's at the point where I'm starting to look at a career change outside of software to start enjoying work again.
I think businesses have weighed the power/capability of Scala against the relative lack of easily hireable programmers.
If a business makes itself dependent on one (or a few) key Scala programmers, it is more prone to being held hostage (i.e., the power dynamic shifts towards the Scala programmer). It will have to pay more salary or face the risk of the programmer leaving (which affects business continuity). The Scala stack doesn't automatically give a more competitive edge against the competition, however (though I would say it does, but only if the entire org buys into it).
Therefore, it makes more business sense, especially for a middle manager responsible for hiring, to hire a Java programmer (who is basically a dime a dozen at this point). The business will face no risk of the programmer holding it hostage (because they are easily replaceable). So that's why you see businesses stop adopting Scala.
I'd never touch it again.
I think in the case of Scala many companies became invested in Spark and Scala came along for the ride. Then as time went on, it turned out you could do advanced FP in Scala, despite its warts, with principled libraries like Cats and the Typelevel stack (see https://typelevel.org/cats/). Companies could then use Scala as a "better Java" or squint-and-it's-almost-Haskell. Or like you said they could just ditch Scala and use something less powerful but easier to learn and work with.
I bet in a couple of years, as the Java VM platform keeps improving, it will only matter to those stuck in Android.
This is a movie with a script I have seen multiple times.
I'm a Scala programmer due to Spark. I love it.
I've worked on a few big data projects for large companies. I have a huge dislike for them, as most seem to be about how we can fuck over a customer to better the business, or do shady stuff with the data, rather than anything meaningful.
That, and it's mostly just ETL: aggregating various data sources, cleaning, putting the data in some central location, then running queries on it. Plus the "if the only tool you have is a hammer, you will start treating all your problems like a nail" problem: tiny data sets going through NiFi / Hadoop / etc. when `cat ... | awk ...` would be more than sufficient.
We wrote a framework to heavily simplify writing these ETL flows. No developer of the ETL flows uses Scala unless something custom is required. I wish we could open-source it.
Sounds similar to the AI hype these days.
I think there are professional opportunities doing FP in Haskell/functional Scala, though they are few and far between and in certain industries such as fintech or other areas where correctness matters more. Your best bet is to look for a remote position and keep your eyes peeled (at least, that's my strategy!).
a) hiring is way too difficult at the best of times, and it's compounded when you filter the pool down that small (admittedly the filter is not bad by some measures).
b) so few at the company have any interest in using languages outside of the more mainstream ones; by contrast, it was no effort at all to get interest in Golang.
Other less exotic languages can still be worth taking a look at to see what they do differently. Ada, for instance, is a plain old imperative/OOP language at heart, but it's different enough in the details that it could be worth looking at if you already know C/C++.
Look at the alternatives. If there's a practical alternative to the language/material that has reasonably widespread use and can be counted on to be present in 5-20 years, go for the alternative. If you have a need for something that's been put through its paces for 40 years and is battle tested with well-supported compilers, consider the "dead language".
And you can always leave languages off your resume.
By the standards of “dead” frequently employed to point out that, say, Haskell is dead, R, Wolfram Language, and Fortran would be considered “dead” as well.
Edit: Actually, my last statement may only be true in hindsight, given that any experienced programmer will know, or know of, languages like C and Python that aren't mindbending, so one's viewpoint gets slowly skewed. I think that's why beginners sometimes find things like Prolog or other "strange" languages interesting or even easy: they present a paradigm that is more like how humans think about certain tasks. Whether it's Prolog and logic, Elixir/Erlang and messaging, F#/OCaml/SML with pattern matching, etc., these paradigms are actually quite natural to humans who haven't been tainted by paradigms like that of C, which takes the approach of "let's think like a machine".
Programming Paradigms for Dummies (https://info.ucl.ac.be/~pvr/VanRoyChapter.pdf) is something I try to get everyone I know who's interested to read and really think hard about.
The textbook is the only real textbook I've bought after university because it's so... foundational.
I like to refer to programming paradigms as the building blocks of design patterns -- how do you derive design patterns and best practices? By trying to bring programming paradigms into your design! Our practice of immutability can be viewed as a means to make data flow more deterministically.
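A tiny illustration of that last point (plain Python, invented example): with shared mutable state the same call gives different results depending on history, while the immutable version's output depends only on its inputs.

```python
def tag_mutable(item, seen=[]):
    # Shared mutable default: the result depends on every previous call.
    seen.append(item)
    return len(seen)

def tag_immutable(item, seen=()):
    # Immutable tuple: the same inputs always give the same output.
    seen = seen + (item,)
    return len(seen)
```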
Regarding opportunity cost of time investment, also take into account that, even though learning a lower-level language will first save you time in comparison to Prolog, most if not all tasks you solve with lower-level languages will take longer than solving them with Prolog would take.
So, investment in Prolog is like an investment with compound interest: It first takes time, and then you save more and more time, freeing you for other activities.
When I first started at Google, the 1440 Broadway office in Times Square was pretty crowded and they stuck me in where there was a desk, and I ended up sitting next to a guy (let's call him <strike>Mel</strike>Jeremy, because that's his name) who did "1% experiment" analysis. One day, I asked Jeremy what he was working on, and he said he was writing some Prolog to check some hypothesis, and he invited me over to have a look.
Really, he was writing a complex rule matching engine (behind the scenes lazily constructing a DFA that operated over experiment observations instead of characters) in Python, to be able to sort sets of observations and contexts into those that fit a hypothesis and those that didn't, as part of some Bayesian inference.
But, it was easiest to think of the experiment observations as a facts database and his complex hypotheses and complex contexts as Prolog rules. So, in this case, Prolog was a mental tool, not a programming tool, and he ended up implementing a tiny, tiny subset of a Prolog engine that used Python object constructors instead of Prolog syntax.
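For flavor, here's a much-simplified sketch (invented names, nowhere near the memoizing DFA engine described) of what "Prolog as a mental tool in Python" can look like: facts as tuples, rules as generator functions, and unification reduced to pattern matching with '?'-prefixed variables.

```python
# Invented miniature of the facts-and-rules style (not the real engine).
facts = {
    ("in_experiment", "obs1", "exp_a"),
    ("in_experiment", "obs2", "exp_b"),
    ("clicked", "obs1"),
}

def fact(*pattern):
    # Yield a variable binding for each stored fact matching the pattern;
    # strings starting with '?' act as logic variables.
    for f in facts:
        if len(f) != len(pattern):
            continue
        binding, ok = {}, True
        for p, v in zip(pattern, f):
            if p.startswith("?"):
                if binding.setdefault(p, v) != v:
                    ok = False
                    break
            elif p != v:
                ok = False
                break
        if ok:
            yield binding

def converted(obs):
    # "Rule": converted(O) :- in_experiment(O, exp_a), clicked(O).
    for b in fact("in_experiment", obs, "exp_a"):
        for _ in fact("clicked", b[obs]):
            yield b

hits = [b["?o"] for b in converted("?o")]
```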
Here are some that helped me out immensely (I used to love Haskell but now I love it adequately):
New technologies can be a psychological trap.
If you’re trying something that feels novel, that has different primitives and constructs and workflows, you might learn new ways to think about using your more comfortable tools. You might even come to have different standards and expectations of how your peers use those tools.
I’ve become a much better developer from using somewhat esoteric languages and bringing insights back from that. I learned how to reason about and organize stateful code by doing FP, I learned how to write better interfaces in cross-paradigm languages by just reading a lot about Haskell, and I learned a ton about how to handle branching logic and databases by a foray into Prolog/logic programming.
Which by the way isn’t dying, and has been breathing life into more common use languages. That’s the benefit of learning new languages. What they do differently can help guide the mainstream too.
Here’s some ideas that have been percolating into ecosystems that didn’t have them:
- static types
- composable functions
- pattern matching
- minimal contracts/dependency control inversion
- provability/static validation
- static runtime optimization of dynamic patterns
These are all because people spend time with tech that’s uncommon. Maybe you don’t need it on your resume. But having it in your life experience is going to make you better at your dull day job.
Sorry for ranting and unfairly picking your post for replying, but I'm a little bit disappointed to see the clueless and roundabout reaction of HNers here who, rather than sticking to the topic of Prolog, see stories like this as an invitation to advertise their fringe language, their not-quite-Prolog language, or their personal pet peeve.
I focus on why a language looks interesting and worthwhile to learn - i.e. what new ideas are in it that are worth stealing.
Over and over, I've found that grasping the new ideas in one language almost always permits you to use it in another more "mundane" language. I've for example used C++ functionally, brought Objective-C and Erlang ideas into C++ and so on when I was dominantly programming in C++. During that period, I also made a scheme-dialect interpreter in C because for some of the things we wanted to do C/C++ were too hard boiled.
I've also run a short fun "course" at my company called "no spoon" where the participants build a stack-based language in JS and implement many defining features of many other programming languages... including all the above.
So what I'm trying to say is that you don't need to lament that you can't use a language that you learnt in your day job. New ideas in languages are always worth learning no matter what language you work with on a day-to-day basis.
I've been trying to maintain a professional career with parallel academic activities, i.e., I did my master's and doctorate while working.
For both my dissertation and thesis I worked in domains and problems that lend themselves to logical formulations - and most of the implementations I did were in Prolog. I also had minor professional experience with Prolog. Focusing here just on the learning-the-language parts and disregarding all other activities:
* I _absolutely_ could have used that time to learn a more marketable language or framework. So that was a tradeoff, but -
* I most likely would have put only a fraction of those hundreds of hours into "marketable training". I'd have played games for most of them. Or just browsed history channels on youtube, etc.
* I did not feel like any practical skills deteriorated at all. In fact -
* I feel like learning Prolog in-depth was a valuable "exercise in learning". It might have helped me learn hard stuff for my job by making me a better independent student
* I hardly ever interview, but I usually let Prolog be at most a footnote in my resume. I don't expect it to be useful in that way (but who knows?)
My takeaway would be: for marketable purposes, learning Prolog is only a substitute for meta-training. Like, instead of reading another (not your first!) book about design patterns or async control or (...), you could learn Prolog.
But 85% of the benefit of learning it is the intellectual stimulation.
On one hand, it often impresses or makes you more desirable in the eyes of the programmers who do your initial phone screen or whiteboard interview. They like seeing that you're curious and passionate about the field.
If you explore languages that they haven't explored yet, then they might be excited about the chance to learn something from you. If you've explored the same trendy language they're already exploring, then their ego might feel validated by seeing others on the same bandwagon.
Either way, they're probably frustrated by various examples of not having buy-in from management or architects to do various things they'd like with the existing codebase. So they might see you as the right kind of mindset, someone hungry who will be an ally in pushing for improvements.
On the OTHER hand, it will often worry managers and architects, who have enough stress on their plates dealing with the business stakeholders, and don't want to hire another kid who will throw a temper tantrum when they're not allowed to rewrite the entire platform in Rust.
So if you DO decide to put various "personal interest" languages on your resume, then make it a point during phone or F2F interviews to highlight your professional maturity. How you understand the need to balance risk. In other words, that you explore languages on your personal time to make you a better developer, not because you really have an expectation of using Brainfuck on your employer's projects.
Even with that said, by the time you're 5+ years in the industry, I'd be a little wary of putting too much on your resume that isn't genuine professional experience. I include Lisp and F# on my resume just for the occasional conversation-starter... but if I really had a list of 12 hobby languages, then I would probably try to mention that in the interview, rather than listing them all on my printed resume/CV.
In this view, you don't have to spend much effort to keep up the skill over time, since you only knew it at a shallow level to begin with.
If you're worried about too many esoteric languages on your resume, don't put them on your resume. You are under no obligation to state everything you know on there...
Is it a distraction? Maybe. Should you make it a serious project? Perhaps not. But there's nothing wrong with stopping and smelling the roses (and the rare languages) every now and then.
Learning a new language gives you more ways to understand languages you already know, so it has some value even if you don't use it professionally.
Unused skills deteriorate, but it's often easier to learn things the second time.
I wouldn't put too many languages on a resume. It's always relevant experience, relevant education, relevant knowledge. That means you can omit things if you don't think they're using BrainFuck in production, or you don't think they'd appreciate the classy name. Maybe make a point of putting your favorite esoteric language on there just in case, but leave the others out.
Being able to pick up languages as needed is also a skill. At my last job, I was hired to do PHP work, but I learned Erlang on the job, as well as AppleScript (ugh), got a lot less bad at C, wrote some Perl and shell, and even had to write some Python from time to time. And of course, I had to debug other people's Java, but didn't write any. I haven't seen a lot of positions where they explicitly hire for that, but it's sometimes what's needed.
Sometimes it's also fun to just see something different. There is value in "Play". You might find you actually really love the new thing, or you might find the new thing just totally reaffirmed your love for your current tools. Either is great! But finding out is probably more valuable.
For what it's worth, to your question directly, I found using Prolog to be a pretty profound experience with respect to how I thought about programming. I haven't touched it in ~12 years, but while I was using it I thought it was incredible. One of the "funnest" learning experiences of my career was building sudoku solvers in Prolog.
I think you might if you use it properly. I definitely don't think you'll earn less money - it doesn't take enough time for that.
There are already so many modern/useful things I want to learn that it's hard to justify learning something because it's 'beautiful' or purely intellectually stimulating.
You might be surprised to find out that "because it's 'beautiful' or purely intellectually stimulating" is the essence of the 'Hacker' ethos for which this site is named.
The Y Combinator itself is something that fits this description quite perfectly. You will never use the y combinator in practice, and it likely won't help you to get a job ever, but it's a beautiful, wonderful thing to learn about and play with.
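Since it came up: the applicative-order Y combinator really is just a couple of lines you can play with, e.g. in Python:

```python
# The Z combinator (applicative-order Y): anonymous recursion, with no
# function ever referring to itself by name.
Y = lambda f: (lambda x: f(lambda v: x(x)(v)))(
              lambda x: f(lambda v: x(x)(v)))

# Factorial, defined without self-reference:
fact = Y(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
```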
I'll be the first person to criticize PG on a range of topics, but to his credit he has been a great proponent of this ethos, and it is no coincidence that this site is named after what would now be considered esoterica by many of the commenters in this thread.
It's predictable that the VC ethos of SV would undo that of the Hacker spirit that built it, but it's still sad to see.
Is it? I thought hacking was more (or at least also) focused on exploration and making things work.
A Y combinator doesn't really seem to fall into "'beautiful' or purely intellectually stimulating" either. It seems like something that has practical value, as it enables recursion.
I also don't think all these things are mutually exclusive. I find consensus algorithms to be quite fascinating and they also seem to have massive practical value, which is why I want to understand them more deeply and perhaps implement one.
Is it worth becoming an expert? No. Is it worth spending a day or a week playing with? Most definitely. I have a few niche things I've picked up like that. Learning a little bit of Lisp was similar, although I'd like to learn more of that some day.
- does the language have some name recognition and positive reputation?
- is it still in use (even if only in very niche areas)?
- do you have a chance to use it to solve a problem (for work or for fun) to which the language is well suited?
If the answer to all three is yes then I would have no hesitation.
For context, I've been able to satisfy all the criteria when learning both Prolog and Erlang. They were both profoundly educational experiences, but I think part of the impact of that experience came from using them to solve problems I had, and to which those languages were well suited.
There are many languages that will teach me important concepts, but time to devote to learning them is limited. I need a filtering/prioritization process. The trick is to just be aware that these other languages exist, what domains they are good for, and be ready to learn them when I have the right problem.
But sometimes you learn for fun, or out of sheer curiosity.
Interesting question. My advice would be as follows:
1. Learn one practical language really well, well enough that you can immediately write correct code, with no IDE or documentation, both for algo-style problems (HackerRank etc.) and for the things that typically come up in your day job or hobby projects (e.g. if you do data science and use pandas a lot, be sure you can do all common operations from memory; if you do web development, you should be able to churn out a simple REST handler in your framework of choice without thought). You don't need to rote-memorize the language's entire API surface, of course, but you should be able to produce all the frequently needed stuff without thinking.
2. Only now that you've covered your bases and can do "realistic" tasks effectively, look at learning some more exotic languages. Otherwise you risk flitting from one thing to the next, wasting a lot of time setting up a third-rate dev environment for a weird language, hunting for some semi-functional library that helps you achieve X, and never working on anything meaty enough to really learn much.
But once you are productive and cover a range of tasks with mainstream languages, learning either a maybe-up-and-coming or an already-half-dead-but-interesting language can definitely pay off. Because if you make good choices about which languages to learn, you can either broaden your horizons in ways that will pay off even if you don't ever use the language for "real work" or, if you're lucky, you might have picked an up-and-coming language and be one of the small pool of people with any amount of experience with it, which will give you a strong competitive advantage. Don't allow your main language skills to grow dull (unless you are switching to a new bread-and-butter language), but there is no problem with learning X and then forgetting a lot of the day-to-day stuff about it, if you got some lasting enlightenment out of it.
Also: don't put esoteric languages on your CV if you just have toyed with them, only list things you have used professionally or done some non-trivial hobby project with (unless, maybe, you are looking for a job in "obscure language X").
One of the annoying things with people telling you to learn X because it will help you to grow intellectually is that they often don't really provide concrete examples, so let me provide a few suggestions:
1. Lisp: templating trees by bashing together strings (or CPP style tokens) is braindamaged. Having a compiler at runtime is powerful and useful. A real REPL (not Python, ruby, scala, ...) is powerful and useful. Most DSLs suck and would have been better replaced with a simple sexp format.
2. Smalltalk: You can have a syntax that's basically as simple as Lisp but more readable. Everything is live, inspectable and modifiable, and you can persist the whole state of the world trivially. This has dangers, too.
3. Python: syntax can, and maybe should, look like pseudo-code. By getting the pragmatics mostly right (e.g. slices, convenient dicts, convenient and immutable strings, mutation returns None, good error messages and repr, basic pseudo-Repl), you can make a very productive language without particular hardcore engineering skills or understanding of CS theory (or history). On the other hand these will also become real limitations at some point.
4. APL/J/K: there are more reductions, inner products and outer products that are useful beyond those involving addition. Array rank and broadcasting. The joys of right associativity. Function power (including infinite) and under. The effects (good and bad) of extreme conciseness.
5. C/C++/Zig/Rust: understanding ownership, stack, heap, pointers. Low-level thinking.
6. Clojure: cons cells as in lisp or scheme suck. So do traditional FP linked lists. Reducers/transducers, schema-language leveraged for property-based tests. One way to think about concurrency with immutability.
7. Erlang: you can have some very nice properties without a horrendously complex implementation, if you make the right engineering trade-offs. E.g. Erlang is pretty much the only thing that gives you preemptive multi-tasking with very high granularity. Pseudo-ropes are an interesting way to do strings. Supervision trees are a powerful concept and way to think about failure and failure handling in concurrent code. The value of deep production introspectability and in particular low-overhead tracing.
8. SQL: Data is king. The power of a truly high level language (even if flawed). Where the abstraction breaks down, badly (e.g. locking, or the DB switching execution plans under you). "Null" done right (true ternary logic, even if confusing, is a lot better than the NaN crap in IEEE 754 or null in mainstream languages). Why you still want to avoid Nulls most of the time. ORMs are for losers.
9. Ocaml: a proper type system can make writing certain types of code much less error prone and is possible without Haskell-level ivory-tower-wankery. There is value in having a somewhat simple minded but predictable compiler.
10. Zig: you can do almost everything C++ can do at a tiny fraction of the conceptual overhead.
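To make the Python point above concrete, the "pragmatics mostly right" claim is visible in a few lines:

```python
xs = [0, 1, 2, 3, 4]
tail = xs[1:3]          # slices: convenient, copying, half-open
rev = xs[::-1]          # ...and composable into idioms like reversal

counts = {}
for w in ["a", "b", "a"]:
    counts[w] = counts.get(w, 0) + 1   # convenient dicts

result = xs.sort()      # mutation returns None, by design
```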
There are technical reasons why the pursuit of term reordering is problematic, namely 'cut', which is a kind of out-of-bounds alteration of the program's execution. Previous attempts to get rid of 'cut' apparently ran into other problems. IMO the root of the issue is that Prolog has no way to declare a "closed" predicate and then optimize execution based on the presence of such declarations. I think cut papers over (heh) the lack of "closed predicates".
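A toy example (mine, not the parent's) of how cut welds meaning to clause order and so blocks reordering:

```prolog
% max/3 written with a cut. The second clause omits the X < Y test
% and is only correct because Prolog tries the first clause first;
% an implementation that reordered these clauses would change the
% program's meaning.
max(X, Y, X) :- X >= Y, !.
max(_, Y, Y).
```

Swap the two clauses and ?- max(5, 3, M). wrongly answers M = 3 first.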
I don't think that's a commonly agreed-upon "biggest potential" of Prolog. Prolog is a general-purpose language with the syntax and the semantics of the first-order predicate calculus, albeit limited to Horn clauses and with many other restrictions that are necessary to make for a language that can be used in the real world, for real programming tasks.
I cannot easily think of an example where changing the order of clauses (not "goals") in a predicate definition prevents the program from "running backwards". For example, take the typical append/3 predicate:
?- append([a, b], [c, d], Xs).
Xs = [a, b, c, d].

?- append([a, b], Xs, [a, b, c, d]).
Xs = [c, d].

?- append(Xs, [c, d], [a, b, c, d]).
Xs = [a, b] ;
false.
>> IMO the root of the issue is that prolog has no way to declare a "closed" predicate, and then optimize execution based on their presence. I think cut papers-over (heh) the lack of "closed predicates".
I, like tom_mellor, am also not sure what you mean by "closed predicates". Could you clarify?
It's bad enough to use "directions" for what are properly called "modes", but it's positively counterproductive to say that anything in Prolog runs "backwards". Prolog always runs top-to-bottom, left-to-right, and this is precisely why it's easy to write nonterminating definitions. As a community, we are doing ourselves a disservice by claiming that anything can magically run "backwards".
As for your question, the canonical example for clause ordering being a problem is where the incorrect order prevents generation of answers because a recursive clause precedes a non-recursive one.
xs([x | Xs]) :- xs(Xs).
xs([]).

?- xs([x, x, x]).
true .

?- xs(Xs).
ERROR: Out of local stack

Swap the clause order and the same generative query enumerates answers instead of diverging:

ys([]).
ys([y | Ys]) :- ys(Ys).

?- ys(Ys).
Ys = [] ;
Ys = [y] ;
Ys = [y, y] ;
Ys = [y, y, y] .
Anyway my bad, I guess you can think of many cases if you try a little. I apologise. I was not in my programming mind today.
What I got out of the experience:
1. Intellectual confidence - it's not often that we get to learn not just a new way of doing something but a new paradigm. I am 38, and while I try to challenge myself in all areas of life, how often is something a true mind-bender? The fact that I was able to wrap my head around (some of) the Prolog way of doing things proves to me that I "still got it" and is good exercise for keeping it that way.
2. One specific thing about Prolog is heavy reliance on recursion. I probably "get" recursion better than an average CS grad but this took me to the next level - I will be able to apply it in other languages.
3. There's something about Prolog that makes programming feel like this: a ton of thinking up front, but once you do it, the implementation is bug-free (versus lots of fiddling around in other languages). I guess that's probably true of non-imperative languages in general, but it was a nice change of pace. I am not a full-time programmer anymore (product manager) so for me coding is mainly fun/intellectual and doing it in a way that removes the fiddling is great.
4. Prolog feels very retro, I can't really explain it but it doesn't try to be cool - it's the opposite of say the latest JS framework that is polished to the 9s. Prolog feels more like the cockpit of a fighter jet - sparse and powerful, but you have to know how to use the tools.
5. The community is cool. I found the IRC channel and folks were very generous with their time, though to be fair I asked questions like: "I implemented X like this, is that canonical?" vs "how do I do X?"
6. I am glad to have Prolog in my toolkit. I can imagine problems down the road where logic programming is the best tool and I know I can reach for it now.
Bonus: a few posters in this thread asked about the ROI of learning a retro language given that they could be learning something else. I think it's totally up to you, but looking back on the list above, almost none of the benefits I enjoyed are that I now know Prolog. So I guess you have to pick and choose your challenges: I wouldn't get the same value from learning Fortran 77 and I wouldn't expect anyone else to either. Prolog is different enough from whatever else you're doing that it can grow you beyond learning some minute detail of React Hooks (which are great, btw!)
If you've never tried a declarative programming language, you should give Datalog a try.
In general it's very nice to be able to prototype queries/inference rules quickly and then tweak clause ordering, etc for performance later if needed.
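A tiny illustration of that workflow, using the standard graph-reachability rules (my example, not the parent's):

```prolog
% Facts: a small directed graph.
edge(a, b).
edge(b, c).
edge(c, d).

% Rules: reachability. In Datalog the set of derived path/2 facts
% is independent of clause and goal order; only evaluation cost
% can change, which is exactly what you tune later if needed.
path(X, Y) :- edge(X, Y).
path(X, Y) :- edge(X, Z), path(Z, Y).
```

?- path(a, D). then enumerates b, c and d.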
I have documented them here: https://github.com/simongray/clojure-graph-resources#datalog
I'm a hardcore Clojure advocate (it's a fantastic programming language with a great ecosystem of libraries), but there's no doubt that the amount of resources available for learning the API of core.logic is less than what's available for Prolog. If the objective is to learn logic programming from the available resources, Prolog is probably the better option.
I wonder if it's used for their alarming logic.
If you need easily verifiable rules for your rules engine, it's hard to go wrong with Prolog.
SQL is a language that could have been a Prolog. In fact, querying in Prolog in general is absolutely beautiful.
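For instance, with a toy table (names invented for illustration), a WHERE clause becomes just a conjunction of goals:

```prolog
% employee(Name, Dept, Salary).
employee(alice, sales, 50000).
employee(bob,   tech,  60000).
employee(carol, sales, 55000).

% SELECT name FROM employee WHERE dept = 'sales' AND salary > 52000
% ?- employee(Name, sales, Salary), Salary > 52000.
% Name = carol, Salary = 55000.
```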
Prolog is homoiconic like lisp; extending the language with new features is trivial as it requires no new syntax.
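For example, using the standard op/3 directive you can add an infix operator and immediately both assert and query it, with no change to the parser:

```prolog
% Declare 'likes' as an infix operator at priority 700.
:- op(700, xfx, likes).

% This clause is read as likes(alice, prolog).
alice likes prolog.

% ?- alice likes What.
% What = prolog.
```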
Now, ASP is not a full-fledged language but should instead be viewed as a workable pure-logic notation for expressing some kinds of problems. The main feature of ASP that I like is the way it handles negation: it is handled really well. It's not easy to learn, but it is well worth the effort.
Firstly, by being so declarative it was very easy to end up with O(a^n) algorithms by mistake, and secondly I found, at least in what I was doing, that the ordering of declarations would change program behaviour, which made it imperative in a very non-obvious way.
I also found it hard to debug and diagnose the code or find decent resources online for it.
Perhaps I didn't do a good job with it or was missing details on exactly how best to use it but I found it was almost too declarative to the point of the rubber never hitting the road in a controllable way.
they give abstractions that are not constant multipliers in terms of speed or memory usage compared to a low level solution (like garbage collection and dynamic typing).
Rust gives just the opposite type of abstraction over C++: memory safety without adding any unpredictability in execution (and of course updates in the language design).
While relational algebra (SQL) is equivalent to logic algebra, it's easier to reason about how it executes.
Haskell I can't comment on, though I can totally see that happening there, there might be better means of managing complexity and giving hints to actual implementation.
With Prolog, the means of expression seems capable only of posing the 'question', rather than indicating that it should not be implemented in the most naive possible way.
As interesting and mind-bending a language as prolog is I just couldn't get past that and it has made me think it's just not a practical language to use for anything real. But I am happy to be proven wrong!
And then the whole following section on unfulfilled potential.. Is there any reason to believe the paradigm will somehow come into its own in the future? The way this question was addressed by the article was way too wishy-washy for my taste.
A major attraction of Prolog is that one can remove almost arbitrary parts of any pure program and still infer useful conclusions that hold for the original program. For example, one can remove a clause and thereby make the program more specific. One can remove a goal and thus make the program more general.
This is possible due to the way the language is designed, the interplay of syntax and semantics enables this. The program fragments serve as explanations that can be shown to programmers to pinpoint mistakes in the code. Thus, when narrowing down mistakes, it becomes possible to focus on smaller portions of the code that are therefore easier to understand.
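A minimal example of both directions, using Peano-style numbers:

```prolog
even(0).
even(s(s(N))) :- even(N).

% Removing the clause even(0). makes the program strictly more
% specific: no query about evenness succeeds any more.
%
% Removing the goal even(N) from the second clause makes it more
% general: even(s(s(_))) would then succeed for any argument,
% including non-numbers such as even(s(s(foo))).
```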
Woa. I'm pretty sure that the subsumption order and the implication order are not common knowledge among Prolog programmers. Have you been reading Plotkin by any chance? :)
Prolog is _larger_ than LogicT. It is the additional syntax and runtime system in a custom Prolog implementation that needs to be defended, not the lack of it!
But perhaps someone who has used languages in this area more broadly than I can comment: Which is a better bang for your buck?
- "Functional logic languages" e.g. Curry, Mercury
- Clojure + core.logic
- scheme + minikanren
Since you'll eventually need some business logic, I don't see much point in Prolog either nowadays. Prolog-like code is a tiny part of your app, no need for a dedicated syntax. Do-notation is sufficient.
Unfortunately no. There was a great push to develop the theory of logic programming, based on resolution theorem proving, back in the '70s and '80s. This effort culminated in the design of Prolog, a general-purpose logic programming language with a resolution theorem-prover. Shortly after, the AI winter of the '80s hit and research in logic programming was severely disrupted. Research and progress continued but at a much slower pace and the earlier heights have never been reached again since. Consequently, we don't have a "better Prolog" because there are very few people who can even imagine what such a "better Prolog" would look like, let alone design and implement it. Alas, we have run out of Robinsons, Kowalskis, Colmerauers, and so on.
Another reason of course is that Prolog is a very fine-tuned language that gets many things right because its design makes very sensible trade-offs. For example, the much-maligned depth-first search and the consequent clause ordering, absence of an occurs check, negation-as-failure, extra-logical features such as the dynamic program database, etc, are all pragmatic choices that balance the need for a general-purpose language that works, with aspirations of theoretical purity. That is to say, there is nothing fundamentally broken about Prolog; it's actually a very good logic programming language and it's just not that easy to design a radically better logic programming language, especially one based on resolution.
Regarding minikanren, my understanding is that it's not meant as a "better Prolog"; rather it's meant as a small, embedded Prolog (and here by "Prolog" I mean a logic programming language based on resolution, in the same sense that Scheme is "a Lisp"). It's meant for programmers of imperative languages who would like to integrate logic programming functionality in their applications. Minikanrens (there are many) can essentially be imported as libraries that allow logic programming in a syntax closer to the embedding language. The alternative would be either to switch entirely to Prolog (which of course is a huge decision), or to write a buggy, slow version of Prolog in the programmer's preferred imperative language.
Minikanrens have various differences with Prolog, such as absence of side-effects and a dynamic database, a different search strategy, occurs check, etc. but these are not unmitigated improvements. They are different trade-offs than the ones chosen by Prolog.
Note however that resolution-based languages like Prolog are not the only logic programming languages. For two prominent examples of alternative approaches, see Datalog and Answer Set Programming. Both sacrifice Turing-completeness but offer, e.g., decidability, the ability to benefit from the power of modern SAT solvers, classical negation, etc.
Again, there are trade-offs. The thing to understand is that theorem proving (proving the truth or falsehood of statements in a formal language) is damn hard, even with computers, and there are always going to be trade-offs when the purpose is to have a real-world application.
Other books I have but haven't gone through yet are Adventure in Prolog and The Art of Prolog.
The only thing I'm wondering about are skipping any important developments made in the last 25 years, but I guess I can always jump into the more up-to-date online resources by then.
AFAIK the biggest item that’s missing is constraint logic programming. Markus Triska discusses CLP in his Power of Prolog book.
Not having to deal with "how do I read a csv file in prolog?" and such, frees time for richer ideas, like prolog search is a poor fit for most problems, but prolog is a nice language for writing problem-appropriate search.
Basically saying replace SQL with Datalog.
That is just what I think I'd like to do. I don't know what would be the most practical platforms to do that, which mainstream language + which Datalog database. Any recommendations?
Prolog and datalog start out as rather different beasts, though prolog tabling gets them closer. It's an area of current research.
A datalog implementation can serve as DSL, learning tool, and/or efficient solver of large problems. For the first two, most languages have a little datalog implementation or three.
For datalog engines, perhaps Z3 and its fixpoint engine from python?? Or maybe Soufflé from python?? But I really have no idea what's plausible. I too would be interested.
That's something with a big diverse active market. Datalog engines, not so much. So I'm unclear that datalog is a place to go for that? But I also pay no attention to non-open-source offerings.
If I understand constraint propagation and backtracking, why not just use those concepts when it fits the problem rather than an entirely new language meant specifically for that narrow range of problems?
Another amusing property of Prolog is that a typical Prolog program can be run forward... and backward, i.e. it is possible to «ask» a question and receive an «answer», or if the «answer» is known, the program can arrive at a set of all possible «questions». Hence heavy Prolog use to build expert systems and alike.
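length/2 is a small concrete example of the same program answering in both directions:

```prolog
% «Forward»: given a list, compute its length.
% ?- length([a, b, c], N).
% N = 3.

% «Backward»: given the «answer» 3, recover the most general
% «question», i.e. a template list of that length.
% ?- length(Xs, 3).
% Xs = [_A, _B, _C].
```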
Back in the days, we used to giggle that if the 42 answer was fed into a Prolog problem solver to arrive at a full and exhaustive set of questions about the meaning of life, the universe and stuff, that would instantly create a singularity at the point in time and space where the solver was run and henceforth result in a collapse of the universe unto itself.
I have worked on and with Mercury for about 10 years. I had very little logic programming experience before this, while others tend to learn Prolog first.
I like Mercury, but not for the logic programming features; in practice they aren't used very often, since there aren't many parts of a program that require backtracking. So on reflection, I don't think it's something I'd include in a new language; it's not worth the implementation and maintenance costs. Not for most programs most of the time, anyway.
On learning logic programming: It might sound like I'm discouraging anyone interested in this, far from it. Logic programming is a great paradigm to learn, another conceptual tool in your toolbox etc.
Yet, once it all clicked, I loved it, and it opened up a new way of thinking for me, which was different from both imperative and, dare I say, functional paradigms.
I also learned that, while fairly specialised, it has very real industrial applications (e.g. there's a super important critical infrastructure component in Microsoft built entirely on prolog if I remember correctly). Though, admittedly, I think most prolog finds uses as 'pluggable components to larger non-prolog projects' these days.
Also, another interesting direction in Prolog is its fuzzy / probabilistic derivatives. Very cool stuff.
It was, in Windows NT. It was used for network configuration, and it was only partly programmed in Prolog:
Microsoft's Windows NT operating system uses an embedded Prolog interpreter to configure its local and wide-area network systems. Interdependent software and hardware components are abstracted into a simplified object-oriented framework using declarative information provided by each component's installation script. This information, along with the Prolog algorithm, is consulted into the embedded interpreter, which is then queried to construct the most usable configuration. The algorithm which solves the plumbing problem is described, including its positive and negative constraints, Prolog database and efficiency considerations. The results of the query, stored into NT's configuration database, inform each component of its load order and binding targets. A description of the C++ wrapper class is given and portation considerations are discussed. The Small Prolog interpreter is briefly described.
I'm not sure how "super important" that was to be honest. My impression has always been that the programmer in question simply wanted to use Prolog in his day job (a feeling I can very much sympathise with).
If you are seriously considering logic programming, please do not stop with prolog, but also take a look at ASP.
For anyone with a different experience than mine - what is your reason to stick to prolog instead of use ASP?
Download SWI prolog, and get an editor that allows interactive programming with it. Emacs is of course the best for any obscure hacker language, but there may be plug-ins for other editors.
Predicates are like functions but instead of returning you simply leave some parameter as a variable, and the “return value” is prolog finding what the value of that parameter can be.
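Concretely, in SWI-Prolog the same built-in works as a function or as its inverse depending on which argument you leave unbound:

```prolog
% "Forward": compute the successor of 4.
% ?- succ(4, N).
% N = 5.

% "Backward": which number has successor 5?
% ?- succ(X, 5).
% X = 4.
```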
Making web apps with the built in libraries and library(persistence) is a lot of fun - you can make a real webapp with a relational database with zero dependencies but prolog. Programs are small. Highly recommend for anyone who loves programming.
Only thing I wish for is no effects but instead it being pure like Elm, and a type system rather than runtime type errors.
It's nice that the community has kept going with it as an intellectual idea. I'm wondering what contributions it has made to the wider programming community. I guess things like the Warren Abstract Machine may have helped inform efficient methods of interpreting other languages, and the parallel Prolog investigations probably also helped other languages.
It's pretty clear to me that functional languages from the time (e.g. Miranda), which live on in Haskell, have had a massive impact on modern programming languages and on approaches to resolving complex threading problems. I've a feeling the Prolog innovations may be quite wide too, just maybe less obvious.
Unfortunately, the comments are in my native language, but the assignment was to search for a box in an infinite space and bring it back to (0, 0).
It was fun: https://github.com/mateioprea/Searching-In-An-Infinite-Space...
For example, WRITE statements can be lazily parallel too: they would only start producing text when the answer is definite. Output no longer happens in any particular sequence, so you must account for that and use appropriate headings.
"The elegant solution is not efficient.
The efficient solution is not elegant"
Elegance is not optional.
There is no tension between writing a beautiful program and writing an efficient program. If your code is ugly, the chances are that you either don’t understand your problem or you don’t understand your programming language, and in neither case does your code stand much chance of being efficient. In order to ensure that your program is efficient, you need to know what it is doing, and if your code is ugly, you will find it hard to analyse.
- Because the author likes it.
- Because it's different.
- Because very few other people like it.
Is this really an effective pitch to students who are looking for an efficient way to use their limited resources?
I recall going through the Adventure in Prolog one (free online resource) a while ago and enjoying it, helped direct a lot of exploration into the Prolog language and its capabilities for me. I have Clocksin & Mellish's Programming in Prolog (2003, 5th ed.) and liked it as a good introduction (or more expansion, I had dabbled in it previously and "knew" the language but not the full depth of it). I've read good things about The Art of Prolog by Sterling and Shapiro, but have not read it.
For example, can someone add good error messages to this? It didn't really seem practical. I'm sure I am missing something, but there also seemed to be a lot of deficiencies.
In fact I think I learned the opposite lesson. I have to dig up the HN post, but I think the point was "Prolog is NOT logic". It's not programming and it's not math.
(Someone said the same thing about Project Euler and so forth, and I really liked that criticism. https://lobste.rs/s/bqnhbo/book_review_elements_programming )
Related thread but I think there was a pithy blog post too: https://news.ycombinator.com/item?id=18373401 (Prolog Under the Hood)
Yeah this is the quote and a bunch of the HN comments backed it up to a degree:
Although Prolog's original intent was to allow programmers to specify programs in a syntax close to logic, that is not how Prolog works. In fact, a conceptual understanding of logic is not that useful for understanding Prolog.
I have programmed in many languages, and I at least have a decent understanding of math. In fact I just wrote about the difference between programming and math with regards to parsing here:
But I had a bad experience with Prolog. Even if you understand programming and math, you don't understand Prolog.
I'm not a fan of the computational complexity problem either; that makes it unsuitable for production use.
Please think harder about why you're offering this class and consider offering something else instead that will provide more value to the adults trying to compete in this flat world.